[ovirt-users] NFS Share file to mount in ovIrt

2024-04-17 Thread De Lee
Hi 

I was trying to add a new NFS share path on the oVirt node, but unfortunately 
it doesn't work for me. I tried to mount the NFS share via the oVirt manager, but it 
fails with the following error message:

Error while executing action Add Storage Connection: Permission settings on the 
specified path do not allow access to the storage.
Verify permission settings on the specified storage path.

I tried the troubleshooting document, but it doesn't help:

[root@ov02 contrib]# python3 nfs-check.py 10.34.95.5:/TESTNAS01
Current hostname: ov02 - IP addr 10.34.95.42
Trying to /bin/mount -t nfs 10.34.95.5:/TESTNAS01...
return = 32 error b'mount.nfs: access denied by server while mounting 
10.34.95.5:/TESTNAS01'
0 drwxr-xr-x. 3 vdsm kvm 17 Apr  9 06:12 exports

[root@ov01 /]# /usr/bin/sudo -n /bin/mount -t nfs -o 
soft,nosharecache,timeo=600,retrans=6,nfsvers=3 10.34.95.5:/TESTNAS01 
/exports/nfs/
mount.nfs: access denied by server while mounting 10.34.95.5:/TESTNAS01
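
"access denied by server" is an export-level rejection (the server does not allow this 
client on that export), not a file-permission problem on the directory. A hedged sketch of 
the server-side checks, with Linux exports syntax shown only for illustration; the subnet 
and the assumption that the NAS uses standard exports options are mine, not from the post:

  # from the oVirt node: is the export visible, and does it list this client?
  showmount -e 10.34.95.5

  # on the NFS server (Linux /etc/exports shown; a NAS UI will expose this differently):
  /TESTNAS01  10.34.95.0/24(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
  exportfs -ra     # reload the export table
  exportfs -v      # confirm the options and the allowed clients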
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GBKBMFJYXYW3TCR3YSUCMJNSYSDLHQWU/


[ovirt-users] NFS Synology NAS (DSM 7)

2021-08-27 Thread Maton, Brett
Hi List,

  I can't get oVirt 4.4.8.5-1.el8 (running on oVirt Node hosts) to connect
to an NFS share on a Synology NAS.

  I gave up trying to get the hosted engine deployed and put that on an
iscsi volume instead...

  The directory being exported from the NAS is owned by vdsm / kvm (36:36).
Perms I've tried:
  0750
  0755
  0777

Tried auto / v3 / v4_0

  As others have mentioned regarding NFS, if I connect manually from the
host with

  mount nas.mydomain.com:/volume1/ov_nas

  It connects and works just fine.

  If I try to add the share as a domain in oVirt I get

Operation Cancelled
Error while executing action Add Storage Connection: Permission settings on
the specified path do not allow access to the storage.
Verify permission settings on the specified storage path.

When tailing /var/log/messages on the oVirt host, I see this message appear
(I changed the domain name for this post so the dots might be transcoded in
reality):

Aug 27 17:36:07 ov001 systemd[1]:
rhev-data\x2dcenter-mnt-nas.mydomain.com:_volume1_ov__nas.mount:
Succeeded.

  The NAS is running the 'new' DSM 7, /etc/exports looks like this:

/volume1/ov_nas  x.x.x.x(rw,async,no_root_squash,anonuid=36,anongid=36)

(reloaded with exportfs -ra)
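
For what it's worth, the engine-side error comes from vdsm checking the mount as uid/gid 
36:36, so a manual mount succeeding as root doesn't prove much. A minimal sketch of 
reproducing that check by hand, assuming the same export path as above (whether DSM 7 is 
squashing IDs is an assumption worth verifying in its NFS permission settings):

  mkdir -p /mnt/ov_nas_test
  mount -t nfs nas.mydomain.com:/volume1/ov_nas /mnt/ov_nas_test
  ls -ln /mnt/ov_nas_test                      # numeric uid/gid as the host sees them (want 36 36)
  sudo -u vdsm touch /mnt/ov_nas_test/.probe   # roughly the write check vdsm performs
  sudo -u vdsm rm /mnt/ov_nas_test/.probe
  umount /mnt/ov_nas_test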


Any suggestions appreciated.

Regards,
Brett
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Y74ILXTU6L2XEIIQDG5SE6WEQ6DSUOXA/


[ovirt-users] NFS storage was locked for 45 minutes after I attempted a clone operation

2021-08-25 Thread David White via Users
I have an HCI cluster running on Gluster storage. I exposed an NFS share into 
oVirt as a storage domain so that I could clone all of my VMs (I'm preparing to 
move physically to a new datacenter). I got 3-4 VMs cloned perfectly fine 
yesterday. But then this evening, I tried to clone a big VM, and it caused the 
disk to lock up. The VM went totally unresponsive, and I didn't see a way to 
cancel the clone. Nagios NRPE (on the client VM) was reporting a server load of over 
65, but I was never able to establish an SSH connection. 

Eventually, I tried restarting the ovirt-engine, per 
https://access.redhat.com/solutions/396753. When that didn't work, I powered 
down the VM completely. But the disks were still locked. So I then tried to put 
the storage domain into maintenance mode, but that wound up putting the entire 
domain into a "locked" state. Finally, eventually, the disks unlocked, and I 
was able to power the VM back online.

From start to finish, my VM was down for about 45 minutes, including the time 
when NRPE was still sending data to Nagios.

What logs should I look at, and how can I troubleshoot what went wrong here 
and hopefully prevent it from happening again?
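
In case it helps others reading the archive, these are the default places to start looking; 
the sketch below is illustrative (the date filter is just the day of the incident):

  # on the engine VM
  less /var/log/ovirt-engine/engine.log
  # on the SPM host and on the host that was running the VM
  less /var/log/vdsm/vdsm.log
  # lease/locking delays on the storage domain often show up here as well
  journalctl -u sanlock --since "2021-08-25"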

Sent with ProtonMail Secure Email.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ASEENELT4TRTXQ7MF4FKB6L75D3H75AN/


[ovirt-users] NFS import storage domain permissions error

2020-06-27 Thread Joop
Hi All,

I've got an error when I try to import a storage domain from a Synology
NAS that used to hold a 4.3.10 oVirt installation, but no matter what I try
I can't get it to import. I keep getting a permission-denied error, even though
the same settings worked with the previous oVirt version.
This is the log:

2020-06-27 19:39:38,113+0200 INFO  (jsonrpc/0) [vdsm.api] START 
connectStorageServer(domType=1, spUUID='----', 
conList=[{'password': '', 'protocol_version': 'auto', 'port': '', 
'iqn': '', 'conn
ection': 'pakhuis:/volume1/nfs/data', 'ipv6_enabled': 'false', 'id': 
'----', 'user': '', 'tpgt': '1'}], 
options=None) from=:::192.168.11.14,53948, 
flow_id=c753ebc5-20a3-4e1c-bcd8-794ddde3ec69, task
_id=2a9ef4e7-e2c0-43d6-b03a-7f0d62f08e50 (api:48)
2020-06-27 19:39:38,113+0200 DEBUG (jsonrpc/0) [storage.Server.NFS] Using local 
locks for NFSv3 locks (storageServer:442)
2020-06-27 19:39:38,114+0200 INFO  (jsonrpc/0) 
[storage.StorageServer.MountConnection] Creating directory 
'/rhev/data-center/mnt/pakhuis:_volume1_nfs_data' (storageServer:167)
2020-06-27 19:39:38,114+0200 INFO  (jsonrpc/0) [storage.fileUtils] Creating 
directory: /rhev/data-center/mnt/pakhuis:_volume1_nfs_data mode: None 
(fileUtils:198)
2020-06-27 19:39:38,114+0200 INFO  (jsonrpc/0) [storage.Mount] mounting 
pakhuis:/volume1/nfs/data at /rhev/data-center/mnt/pakhuis:_volume1_nfs_data 
(mount:207)
2020-06-27 19:39:38,360+0200 DEBUG (jsonrpc/0) [storage.Mount] 
/rhev/data-center/mnt/pakhuis:_volume1_nfs_data mounted: 0.25 seconds 
(utils:390)
2020-06-27 19:39:38,380+0200 DEBUG (jsonrpc/0) [storage.Mount] Waiting for udev 
mount events: 0.02 seconds (utils:390)
2020-06-27 19:39:38,381+0200 DEBUG (jsonrpc/0) [storage.oop] Creating ioprocess 
Global (outOfProcess:89)
2020-06-27 19:39:38,382+0200 INFO  (jsonrpc/0) [IOProcessClient] (Global) 
Starting client (__init__:308)
2020-06-27 19:39:38,394+0200 INFO  (ioprocess/10476) [IOProcess] (Global) 
Starting ioprocess (__init__:434)
2020-06-27 19:39:38,394+0200 DEBUG (ioprocess/10476) [IOProcess] (Global) 
Closing unrelated FDs... (__init__:432)
2020-06-27 19:39:38,394+0200 DEBUG (ioprocess/10476) [IOProcess] (Global) 
Opening communication channels... (__init__:432)
2020-06-27 19:39:38,394+0200 DEBUG (ioprocess/10476) [IOProcess] (Global) 
Queuing request (slotsLeft=20) (__init__:432)
2020-06-27 19:39:38,394+0200 DEBUG (ioprocess/10476) [IOProcess] (Global) (1) 
Start request for method 'access' (waitTime=49) (__init__:432)
2020-06-27 19:39:38,395+0200 DEBUG (ioprocess/10476) [IOProcess] (Global) (1) 
Finished request for method 'access' (runTime=421) (__init__:432)
2020-06-27 19:39:38,395+0200 DEBUG (ioprocess/10476) [IOProcess] (Global) (1) 
Dequeuing request (slotsLeft=21) (__init__:432)
2020-06-27 19:39:38,396+0200 WARN  (jsonrpc/0) [storage.oop] Permission denied 
for directory: /rhev/data-center/mnt/pakhuis:_volume1_nfs_data with 
permissions:7 (outOfProcess:193)
2020-06-27 19:39:38,396+0200 INFO  (jsonrpc/0) [storage.Mount] unmounting 
/rhev/data-center/mnt/pakhuis:_volume1_nfs_data (mount:215)
2020-06-27 19:39:38,435+0200 DEBUG (jsonrpc/0) [storage.Mount] 
/rhev/data-center/mnt/pakhuis:_volume1_nfs_data unmounted: 0.04 seconds 
(utils:390)
2020-06-27 19:39:38,453+0200 DEBUG (jsonrpc/0) [storage.Mount] Waiting for udev 
mount events: 0.02 seconds (utils:390)
2020-06-27 19:39:38,453+0200 ERROR (jsonrpc/0) [storage.HSM] Could not connect 
to storageServer (hsm:2421)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/storage/fileSD.py", line 80, in 
validateDirAccess
getProcPool().fileUtils.validateAccess(dirPath)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/outOfProcess.py", line 
194, in validateAccess
raise OSError(errno.EACCES, os.strerror(errno.EACCES))
PermissionError: [Errno 13] Permission denied

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/storage/hsm.py", line 2418, in 
connectStorageServer
conObj.connect()
  File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 
449, in connect
return self._mountCon.connect()
  File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 
190, in connect
six.reraise(t, v, tb)
  File "/usr/lib/python3.6/site-packages/six.py", line 693, in reraise
raise value
  File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 
183, in connect
self.getMountObj().getRecord().fs_file)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/fileSD.py", line 91, in 
validateDirAccess
raise se.StorageServerAccessPermissionError(dirPath)
vdsm.storage.exception.StorageServerAccessPermissionError: Permission settings 
on the specified path do not allow access to the storage. Verify permission 
settings on the specified storage path.: 'path = 
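
The traceback is vdsm's validateDirAccess() failing, which runs as the vdsm user after the 
mount itself already succeeded. A minimal way to reproduce that check outside vdsm (the 
temporary mount point is my own placeholder):

  mkdir -p /mnt/nfs_probe
  mount -t nfs pakhuis:/volume1/nfs/data /mnt/nfs_probe
  ls -ln /mnt/nfs_probe                 # numeric uid/gid the host actually sees
  sudo -u vdsm ls /mnt/nfs_probe        # read/execute check as uid 36
  sudo -u vdsm touch /mnt/nfs_probe/.probe && sudo -u vdsm rm /mnt/nfs_probe/.probe
  umount /mnt/nfs_probe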

[ovirt-users] NFS storage domain forced to use /exports/data ?

2020-05-06 Thread kelley bryan
Hello 
Should the documentation point out that the /exports/data mount point is hard-coded 
in ovirt-hosted-engine-setup-ansible-create_storage_domain.yaml?

I think some data centers will want to use a different mount path.
Or did I miss a prompt or entry in the deployment screen for 
hosted-engine deployment?
Bryan
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3OBETIUZX4OLSSLPJBDH2HVKOEGNIMPA/


[ovirt-users] NFS permissions error on ISODomain file with correct permissions

2020-03-31 Thread Shareef Jalloq
Hi,

I asked this question in another thread but it seems to have been lost in
the noise so I'm reposting with a more descriptive subject.

I'm trying to start a Windows VM and use the virtio-win VFD floppy to get
the drivers but the VM startup fails due to a permissions issue detailed
below.  The permissions look fine to me so why can't the VFD be read?

Shareef.

I found a permissions issue in the engine.log:

2020-03-25 21:28:41,662Z ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ForkJoinPool-1-worker-14) [] EVENT_ID: VM_DOWN_ERROR(119), VM win-2019 is
down with error. Exit message: internal error: qemu unexpectedly closed the
monitor: 2020-03-25T21:28:40.324426Z qemu-kvm: -drive
file=/rhev/data-center/mnt/nas-01.phoelex.com:_volume2_isostore/41cebb4b-c164-4956-8f44-6426170cd9f5/images/----/virtio-win_servers_amd64.vfd,format=raw,if=none,id=drive-ua-0b9c28b5-f75c-4575-ad85-b5b836f67d61,readonly=on:
Could not open 
'/rhev/data-center/mnt/nas-01.phoelex.com:_volume2_isostore/41cebb4b-c164-4956-8f44-6426170cd9f5/images/----/virtio-win_servers_amd64.vfd':
Permission denied.

But when I look at that path on the node in question, every folder and the
final file have the correct vdsm:kvm permissions:

[root@ovirt-node-01 ~]# ll /rhev/data-center/mnt/nas-01.phoelex.com:
_volume2_isostore/41cebb4b-c164-4956-8f44-6426170cd9f5/images/----/virtio-win_servers_amd64.vfd
-rwxrwxrwx. 1 vdsm kvm 2949120 Mar 25 21:24
/rhev/data-center/mnt/nas-01.phoelex.com:
_volume2_isostore/41cebb4b-c164-4956-8f44-6426170cd9f5/images/----/virtio-win_servers_amd64.vfd

The files were uploaded to the ISO domain using:

engine-iso-uploader --iso-domain=iso_storage upload virtio-win.iso
virtio-win_servers_amd64.vfd
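
When the POSIX permissions look fine but qemu still reports "Permission denied", the usual 
suspects are SELinux labeling on the NFS mount or the export squashing qemu's uid 
differently from vdsm's. A hedged set of checks on the node; that SELinux is involved and 
that the qemu user exists on the host are assumptions to verify, not a diagnosis:

  getenforce                               # if Enforcing, look for AVC denials
  ausearch -m avc -ts recent | grep -i qemu
  # read the file as the user qemu actually runs as, not just as root or vdsm
  sudo -u qemu head -c 512 "/rhev/data-center/mnt/nas-01.phoelex.com:_volume2_isostore/41cebb4b-c164-4956-8f44-6426170cd9f5/images/----/virtio-win_servers_amd64.vfd" > /dev/null && echo "read OK"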
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BCA2UOLPDS3FUESFOQUXBBHXONAFUCKU/


[ovirt-users] NFS ISO Domain - Outage ?

2020-03-11 Thread Nardus Geldenhuys
Hi Ovirt Mailing-list

Hope you are well. We had an outage across several oVirt clusters. It looks
like we had the same NFS ISO domain shared to all of them, and many of the
VMs had a CD attached. The NFS server went down for an hour, and all
hell broke loose when it did. Some of the oVirt nodes
became "red/unresponsive" on the oVirt dashboard. We have now learned to split
off the NFS server for ISOs and/or to remove the ISOs when done.

Has anyone seen similar issues with an NFS ISO domain? Are there special
options we need to pass to the mount to get around this? Can we put the
ISOs somewhere else?
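
For the archive: an unreachable hard-mounted NFS server leaves qemu and vdsm stuck in 
uninterruptible I/O, which is what turns hosts "red". It's worth checking what options the 
ISO domain is actually mounted with on a host; the troubleshooting mounts elsewhere in this 
digest use soft,timeo=600,retrans=6, which lets I/O eventually fail with an error instead 
of hanging forever. Whether a soft mount is acceptable for your ISO domain is a judgement 
call, and detaching CDs after install avoids the dependency altogether.

  # on an affected host
  mount | grep -i iso          # look for soft/hard, timeo, retrans in the options
  # illustrative options seen in the nfs-check runs earlier in this digest:
  #   soft,nosharecache,timeo=600,retrans=6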

Regards

Nardus
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/77UVMNQLP2TNXPTDU7A554MUWQHVWYET/


[ovirt-users] NFS Storage Domain on OpenMediaVault

2019-12-01 Thread rwebb
I have a clean install with OpenMediaVault as the backend NFS server and cannot get it to 
work. I keep getting permission errors even though I created a vdsm user and kvm 
group, and they own the directory on OMV with full permissions.

The directory gets created on the NFS side for the host, but then I get the 
permission error and it is removed from the host, while the directory structure is 
left on the NFS server.

Logs:

From the engine:

Error while executing action New NFS Storage Domain: Unexpected exception

From the oVirt node log:

2019-11-29 10:03:02 136998 [30025]: open error -13 EACCES: no permission to 
open 
/rhev/data-center/mnt/192.168.1.56:_export_Datastore-oVirt/f38b19e4-8060-4467-860b-09cf606ccc15/dom_md/ids
2019-11-29 10:03:02 136998 [30025]: check that daemon user sanlock 179 group 
sanlock 179 has access to disk or file.

File system on Openmediavault:

drwxrwsrwx+ 3 vdsm kvm 4096 Nov 29 10:03 .
drwxr-xr-x  9 root root 4096 Nov 27 20:56 ..
drwxrwsr-x+ 4 vdsm kvm 4096 Nov 29 10:03 f38b19e4-8060-4467-860b-09cf606ccc15

drwxrwsr-x+ 4 vdsm kvm 4096 Nov 29 10:03 .
drwxrwsrwx+ 3 vdsm kvm 4096 Nov 29 10:03 ..
drwxrwsr-x+ 2 vdsm kvm 4096 Nov 29 10:03 dom_md
drwxrwsr-x+ 2 vdsm kvm 4096 Nov 29 10:03 images

drwxrwsr-x+ 2 vdsm kvm 4096 Nov 29 10:03 .
drwxrwsr-x+ 4 vdsm kvm 4096 Nov 29 10:03 ..
-rw-rw+ 1 vdsm kvm        0 Nov 29 10:03 ids
-rw-rw+ 1 vdsm kvm 16777216 Nov 29 10:03 inbox
-rw-rw+ 1 vdsm kvm        0 Nov 29 10:03 leases
-rw-rw-r--+ 1 vdsm kvm  343 Nov 29 10:03 metadata
-rw-rw+ 1 vdsm kvm 16777216 Nov 29 10:03 outbox
-rw-rw+ 1 vdsm kvm  1302528 Nov 29 10:03 xleases
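
The sanlock line is the important one: the ids file must be readable and writable by the 
sanlock daemon (uid/gid 179), not only by vdsm, and with squashing and the ACLs behind the 
trailing '+' in play, the group permissions decide it. A hedged reproduction on the host 
plus an illustrative Linux export line; the server-side path is inferred from the mount 
name in the log, and OMV's export dialog may expose these options differently:

  # on the oVirt host: can sanlock itself read the lease file?
  sudo -u sanlock dd if="/rhev/data-center/mnt/192.168.1.56:_export_Datastore-oVirt/f38b19e4-8060-4467-860b-09cf606ccc15/dom_md/ids" of=/dev/null bs=512 count=1

  # on the OMV server: inspect the ACLs behind the '+' shown in the listing
  getfacl /export/Datastore-oVirt

  # illustrative /etc/exports entry mapping anonymous access to vdsm:kvm
  /export/Datastore-oVirt  192.168.1.0/24(rw,sync,no_subtree_check,anonuid=36,anongid=36)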
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ILKNT57F6VUHEVKMOACLLQRAO3J364MC/


[ovirt-users] NFS based - incremental backups

2019-09-26 Thread Leo David
Hello Everyone,
I've been struggling for a while to find a proper solution for a
full backup scenario in a production environment.
In the past we used Proxmox, and its scheduled incremental NFS-based
full VM backups are something we really miss.
As far as I know, at this point the only way to have a backup in oVirt /
RHEV is by using the Gluster geo-replication feature.
This is nice, but as far as I know it lacks some important features:
- the ability to have incremental backups to restore VMs from
- the ability to back up VMs placed on different storage domains (only one
storage domain can be geo-replicated !!! some VMs have disks on an SSD volume,
some on HDD, some on both)
- the need to set up an external 3-node Gluster cluster (although a workaround
would be single-brick volumes on a single instance)
I know we can create snapshots, but they will die with the platform in a failure
scenario, and they cannot be scheduled either.
We found the Bacchus project, which looked promising, although it had a
pretty convoluted way of achieving backups (create a snapshot, create a VM from the
snapshot, export the VM to the export domain, delete the VM, delete the snapshot - all
in a scheduled fashion).
As a note, Proxmox incrementally creates a tar archive of the VM disk
content and places it on external network storage such as NFS. This
allowed us to back up/restore both Linux and Windows VMs very easily.
Now, I know this has been discussed before, but I would like to know whether
there are at least any plans to implement this feature in upcoming
releases.
Personally, I consider this a major and quite essential feature to have in
the platform, without needing to pay for 3rd-party solutions that may or
may not achieve the goal while adding extra pieces to the stack.
Geo-replication is a good and nice feature, but in my opinion it is not
what a "backup domain" would be.
Have a nice day,

Leo
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QX7H2DNKJWMKNPX5V465V2CRSJS4IXQJ/


[ovirt-users] nfs

2019-09-09 Thread mailing-ovirt
I'm trying to mount an NFS share.

 

If I manually mount it from an SSH session, I can access it without issues.

 

However, when I do it from the web UI, it keeps failing:

 

Not sure how to solve that.
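
The engine-side failure usually has a more useful counterpart in vdsm's log on the host 
(the excerpt below only shows repoStats/getCapabilities traffic, not the failed 
connectStorageServer call), and vdsm ships a small helper, used in another message in this 
digest, that reproduces its mount-and-permission check. A sketch, with the share path as a 
placeholder:

  # on the oVirt host
  grep -E 'connectStorageServer|AccessPermission|MountError' /var/log/vdsm/vdsm.log | tail
  # vdsm's nfs-check helper (it lives in vdsm's contrib directory; the install path varies)
  python3 nfs-check.py nfs-server.example.com:/export/path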

 

Thanks

 

Simon

 

2019-09-09 09:08:47,601-0400 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC
call Host.ping2 succeeded in 0.00 seconds (__init__:312)

2019-09-09 09:08:47,610-0400 INFO  (jsonrpc/3) [vdsm.api] START
repoStats(domains=[u'4f4828c2-8f78-4566-a69d-a37cd73446d4']) from=::1,42394,
task_id=6c1eeef7-ff8c-4d0d-be85-ced03aae4bc2 (api:48)

2019-09-09 09:08:47,610-0400 INFO  (jsonrpc/3) [vdsm.api] FINISH repoStats
return={u'4f4828c2-8f78-4566-a69d-a37cd73446d4': {'code': 0, 'actual': True,
'version': 5, 'acquired': True, 'delay': '0.00101971', 'lastCheck': '0.4',
'valid': True}} from=::1,42394, task_id=6c1eeef7-ff8c-4d0d-be85-ced03aae4bc2
(api:54)

2019-09-09 09:08:47,611-0400 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC
call Host.getStorageRepoStats succeeded in 0.00 seconds (__init__:312)

2019-09-09 09:08:47,839-0400 INFO  (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC
call Host.ping2 succeeded in 0.00 seconds (__init__:312)

2019-09-09 09:08:47,846-0400 INFO  (jsonrpc/2) [api.host] START
getCapabilities() from=::1,42394 (api:48)

2019-09-09 09:08:48,149-0400 INFO  (jsonrpc/2) [root] managedvolume not
supported: Managed Volume Not Supported. Missing package os-brick.: ('Cannot
import os_brick',) (caps:152)

2019-09-09 09:08:49,212-0400 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC
call Host.ping2 succeeded in 0.01 seconds (__init__:312)

2019-09-09 09:08:49,263-0400 INFO  (jsonrpc/2) [root]
/usr/libexec/vdsm/hooks/after_get_caps/50_openstacknet: rc=0 err=
(hooks:114)

2019-09-09 09:08:49,497-0400 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC
call Host.ping2 succeeded in 0.00 seconds (__init__:312)

2019-09-09 09:08:49,657-0400 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC
call Host.ping2 succeeded in 0.00 seconds (__init__:312)

2019-09-09 09:08:49,665-0400 INFO  (jsonrpc/7) [vdsm.api] START
repoStats(domains=[u'4f4828c2-8f78-4566-a69d-a37cd73446d4']) from=::1,42394,
task_id=14b596bf-bc1c-412b-b1fc-8d53f17d88f7 (api:48)

2019-09-09 09:08:49,666-0400 INFO  (jsonrpc/7) [vdsm.api] FINISH repoStats
return={u'4f4828c2-8f78-4566-a69d-a37cd73446d4': {'code': 0, 'actual': True,
'version': 5, 'acquired': True, 'delay': '0.00101971', 'lastCheck': '2.4',
'valid': True}} from=::1,42394, task_id=14b596bf-bc1c-412b-b1fc-8d53f17d88f7
(api:54)

2019-09-09 09:08:49,667-0400 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC
call Host.getStorageRepoStats succeeded in 0.00 seconds (__init__:312)

2019-09-09 09:08:49,800-0400 INFO  (jsonrpc/2) [root]
/usr/libexec/vdsm/hooks/after_get_caps/openstacknet_utils.py: rc=0 err=
(hooks:114)

2019-09-09 09:08:50,464-0400 INFO  (jsonrpc/2) [root]
/usr/libexec/vdsm/hooks/after_get_caps/ovirt_provider_ovn_hook: rc=0 err=
(hooks:114)

2019-09-09 09:08:50,467-0400 INFO  (jsonrpc/2) [api.host] FINISH
getCapabilities return={'status': {'message': 'Done', 'code': 0}, 'info':
{u'HBAInventory': {u'iSCSI': [{u'InitiatorName':
u'iqn.1994-05.com.redhat:d8c85fc0ab85'}], u'FC': []}, u'packages2':
{u'kernel': {u'release': u'957.27.2.el7.x86_64', u'version': u'3.10.0'},
u'spice-server': {u'release': u'6.el7_6.1', u'version': u'0.14.0'},
u'librbd1': {u'release': u'4.el7', u'version': u'10.2.5'}, u'vdsm':
{u'release': u'1.el7', u'version': u'4.30.24'}, u'qemu-kvm': {u'release':
u'18.el7_6.7.1', u'version': u'2.12.0'}, u'openvswitch': {u'release':
u'4.el7', u'version': u'2.11.0'}, u'libvirt': {u'release': u'10.el7_6.12',
u'version': u'4.5.0'}, u'ovirt-hosted-engine-ha': {u'release': u'1.el7',
u'version': u'2.3.3'}, u'qemu-img': {u'release': u'18.el7_6.7.1',
u'version': u'2.12.0'}, u'mom': {u'release': u'1.el7.centos', u'version':
u'0.5.12'}, u'glusterfs-cli': {u'release': u'1.el7', u'version': u'6.5'}},
u'numaNodeDistance': {u'1': [20, 10], u'0': [10, 20]}, u'cpuModel':
u'Intel(R) Xeon(R) CPU E5-2640 v2 @ 2.00GHz', u'nestedVirtualization':
False, u'liveMerge': u'true', u'hooks': {u'before_vm_start':
{u'50_hostedengine': {u'md5': u'95c810cdcfe4195302a59574a5148289'},
u'50_vhostmd': {u'md5': u'9206bc390bcbf208b06a8e899581be2d'}},
u'after_network_setup': {u'30_ethtool_options': {u'md5':
u'ce1fbad7aa0389e3b06231219140bf0d'}}, u'after_vm_destroy':
{u'delete_vhostuserclient_hook': {u'md5':
u'c2f279cc9483a3f842f6c29df13994c1'}, u'50_vhostmd': {u'md5':
u'bdf4802c0521cf1bae08f2b90a9559cf'}}, u'after_vm_start':
{u'openstacknet_utils.py': {u'md5': u'1ed38ddf30f8a9c7574589e77e2c0b1f'},
u'50_openstacknet': {u'md5': u'ea0a5a715da8c1badbcda28e8b8fa00e'}},
u'after_device_migrate_destination': {u'openstacknet_utils.py': {u'md5':
u'1ed38ddf30f8a9c7574589e77e2c0b1f'}, u'50_openstacknet': {u'md5':
u'6226fbc4d1602994828a3904fc1b875d'}}, u'before_device_migrate_destination':
{u'50_vmfex': {u'md5': u'49caba1a5faadd8efacef966f79bc30a'}},
u'after_device_create': {u'openstacknet_utils.py': {u'md5':

[ovirt-users] NFS storage and importation

2018-11-25 Thread Alexis Grillon
(sorry for double post earlier)
Le 25/11/2018 à 18:19, Nir Soffer a écrit :>
> Hello,
> 
> Our cluster use NFS storage (on each node as data, and on a NAS for
> export)
> 
> 
> Looks like you invented your own hyperconverge solution.
Well, when we looked at the network requirements for Gluster to work, we
initially thought that was the more reasonable solution. That might
be true for the VMs, but it is deeply wrong for the engine in any case.
Regarding the engine, we thought (once again) that the backup would
allow us to rebuild in case of a problem. Wrong again: the backup won't work
with a new installation, and even if it looks like it works for things
that haven't changed, it doesn't.

> 
> The NFS server on the node will always be accessible on the node, but if
> it is
> not accessible from other hosts, the entire DC may go down.
> 
> Also you probably don't have any redundancy, so failure of single node cause
> downtime and data loss.
We have RAID and regular backups, but that's not good enough, of course.

> 
> Did you consider hyperconverge Gluster based setup?
> https://ovirt.org/documentation/gluster-hyperconverged/Gluster_Hyperconverged_Guide/
Yes, we will try to rebuild our cluster with that, at least for the engine.

>  
> 
> We currently have a huge problem on our cluster, and export VM as OVA
> takes lot's of time.
> 
> 
> The huge problem is slow export to OVA or there is another problem?

At this time we have almost all data covered by backups or OVAs, but nearly
all VMs are down and can't be restarted (I explain why at the end of the
mail). Some OVA exports failed (probably, but not only) because of a lack of
free space on storage. So we have to rebuild a clean cluster without
destroying all the data we still have on our disks. (The import-domain
function should do the trick, I hope.)

>  
> 
> I checked on the NFS storage, and it looks like there's files who might
> be OVA in the images directory,
> 
> 
> I don't know the ova export code, but I'm sure it does not save in the
> images
> directory. It probably creates temporary volume for preparing and
> storing the
> exported ova file.
> 
> Arik, how do you suggest to debug slow export to OVA?
That might not be a bug; some of them are hundreds of GB, so it can be
"normal". Anyway, the files don't have the .ova extension (but the sizes
match the VMs).

>  
> 
> and OVF files in the the master/vms
> 
> 
> OVF should be stored in OVF_STORE disks. Maybe you see files 
> created by an old version?
Well, there were files with the .ovf extension on NFS, but I might be wrong; it's
this type of path:
c3c17a66-52e8-42dc-9c09-3e667e4c7290/master/vms/0265bf6b-7bbb-44b1-87f6-7cb5552d12c2/0265bf6b-7bbb-44b1-87f6-7cb5552d12c2.ovf
but that may only be on export domains.

A little after my mail, and as you mention below, I heard about the
"import domain" function for storage domains, which makes me hope my mail
was pointless; I'll try it in a few hours with real VMs inside.

>  
> 
> Can someone tell me if, when i install a new engine,
> there's a way to get this VMs back inside the new engine (like the
> import tools by example)
> 
> 
> What do yo mean by "get this VMs back"?
> 
> If you mean importing all vms and disks on storage to a new engine,
> yest, it should work. This is the base for oVirt DR support.
Yes, thank you.
>  
> 
> ps : it should be said in the documentation to NEVER use backup of an
> engine when he is in a NFS storage domain on a node. It looks like it's
> working, but all the data are unsynch with reality.
> 
> 
> Do you mean hosted engine stored on NFS storage domain served by
> one of the nodes?
Yes

> 
> Can you give more details on this problem?

ovirt version 4.2.7

I'll try to make it short, but it's weeks' worth of stress and wrong
decisions.

We built our cluster with a few nodes, but our whole storage is on
the nodes (the reason we chose NFS). And we put our engine on one of
these nodes, in an NFS share. We had regular backups. I saw one day that the
status of this node was degraded (in nodectl check), and it
recommended running lvs to check.

[ A small precision, if needed: the node has three disks, merged into a
physical RAID 5. The installation of the node was a standard oVirt
partitioning except for one thing: we reduced the / partition (without size
problems, it's more than 100 GB) to make a separate XFS partition to store
the VM data; this partition holds the shares for the engine, data (VMs) and ISO
(export is on a NAS). ]

When I checked with lvs, the data partition was used at 99.97% (!) while
df said 55% (spoiler alert: df was right, but who cares).
A few days later it wasn't at 99.97% but 99.99% (after a log collector run,
love the irony) and the whole node crashed, with the engine on it, of course.
I restarted the cluster on another node without too much trouble. Then
I looked at how to repair the node where the engine was stored.
It seemed there was no real solution to clean up the LVs (if that was what we
should have done), so i 

[ovirt-users] NFS storage and importation

2018-11-23 Thread Alexis Grillon
Hello,

Our cluster uses NFS storage (on each node as data, and on a NAS for export).
We currently have a huge problem on our cluster, and exporting VMs as OVA
takes a lot of time.

I checked the NFS storage, and it looks like there are files that might
be OVAs in the images directory, and OVF files in the master/vms directory.

Can someone tell me whether, when I install a new engine,
there is a way to get these VMs back into the new engine (with the
import tools, for example)?

Thank you a lot for your answers,



regards,
Alexis Grillon

Pôle Humanités Numériques, Outils, Méthodes et Analyse de Données
Maison européenne des sciences de l'homme et de la société
MESHS - Lille Nord de France / CNRS
tel. +33 (0)3 20 12 58 57 | alexis.gril...@meshs.fr
www.meshs.fr | 2, rue des Canonniers 59000 Lille
GPG fingerprint AC37 4C4B 6308 975B 77D4 772F 214F 1E97 6C08 CD11



signature.asc
Description: OpenPGP digital signature
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/S4QBTXLRZLA77LEFYD57TJVZJNY2SVUT/


[ovirt-users] NFS storage and importation

2018-11-23 Thread AG
Hello,

Our cluster uses NFS storage (on each node as data, and on a NAS for export).
We currently have a huge problem on our cluster, and exporting VMs as OVA
takes a lot of time.

I checked the NFS storage, and it looks like there are files that might
be OVAs in the images directory, and OVF files in the master/vms directory.

Can someone tell me whether, when I install a new engine,
there is a way to get these VMs back into the new engine (with the
import tools, for example)?

Thank you a lot for your answers,

Regards,
Alexis


PS: it should be said in the documentation to NEVER use a backup of an
engine when it lives on an NFS storage domain on a node. It looks like it's
working, but all the data is out of sync with reality.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HD3ZF5LVAIFHD4RADJQ5QV3Y2IMFZMIA/


[ovirt-users] NFS Multipathing

2018-08-30 Thread Thomas Letherby
Hello all,

I've been looking around but I've not found anything definitive on whether
oVirt can do NFS multipathing, and if so, how.

Does anyone have any good how-tos or configuration guides?

Thanks,

Thomas
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OANZIPQIBCTZ5XAMJKAAIKBZ7XPTSLGF/


[ovirt-users] NFS Storage Advice?

2018-07-27 Thread Wesley Stewart
I currently have a NFS server with decent speeds.  I get about 200MB/s
write and 400+ MB/s read.

My single node oVirt host has a mirrored SSD store for my Windows 10 VM,
and I have about 3-5 VMs running on the NFS data store.

However, VMs on the NFS data store are SLOW.  They can write to their own
disks at around 10-50 MB/s, whereas a VM on the mirrored SSD drives can get
150-250 MB/s transfer speed from the same NFS storage (through NFS mounts).

Does anyone have any suggestions on what I could try to speed up the NFS
storage for the VMs?  My single-node oVirt box has a 1 ft Cat6 crossover
cable plugged directly into my NFS server's 10Gb port.
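
Before tuning anything it helps to separate raw NFS performance from the in-guest numbers. 
A minimal fio sketch run on the host directly against the NFS data-domain mount; the mount 
path, sizes, and the presence of fio on the host are assumptions:

  fio --name=nfs-write-test --directory=/rhev/data-center/mnt/<nfs-server:_export> \
      --rw=randwrite --bs=4k --size=1G --ioengine=libaio --direct=1 --fsync=1
  # compare with the same job on the mirrored SSD store; if the gap only appears
  # with --fsync=1, the NFS server's sync-on-commit behaviour is the likely bottleneck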

Thanks!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OFTRL5QWQS5SVL23YFZ5KBFBN3F6THQ3/


[ovirt-users] NFS Mount on EMC VNXe3200 / User - Group 36:36 / Permission issue

2018-07-11 Thread jeanbaptiste.coupiac
Hello,

 

I am trying to mount an NFS data share from an EMC VNXe3200 export. Unfortunately,
the NFS share cannot be mounted; there is a permission error.

(Indeed, after the same issue on another NFS ISO repo, I changed the exported
directory's user/group to 36:36, and the oVirt NFS mount worked fine.)

 

So I think the issue with my VNXe3200 is the UID / GID owner on the VNXe3200
side. After a WebEx with EMC support, changing the owner / permissions on the VNXe side
seems not to be possible / supported.

Can I mount the NFS share with some options to avoid the vdsm:kvm mounting issue?
Can I force the mount as root?
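
As far as I know, the uid/gid a file appears to have is decided on the server side of NFS, 
so no client mount option (and no root mount) will change what vdsm's 36:36 check sees. On 
a Linux NFS server the workaround is to squash everything to 36:36 in the export; whether 
the VNXe3200 exposes an equivalent (anonymous UID/GID or per-export access settings) is a 
question for EMC. Illustrative Linux syntax only:

  # Linux /etc/exports equivalent of the server-side mapping the VNXe would need
  /nfs/ovirt_data  10.0.0.0/24(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)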

 

Regards,

 

Jean-Baptiste COUPIAC


 



 

 

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NKQ3MBUHWMCRH23NOXQNWOD7VIYYSUOT/


[ovirt-users] NFS Mount on EMC VNXe3200 / User - Group 36:36 / Permission issue

2018-07-11 Thread jeanbaptiste
Hello,

I am trying to mount an NFS data share from an EMC VNXe3200 export. Unfortunately, the NFS 
share cannot be mounted; there is a permission error.
(Indeed, after the same issue on another NFS ISO repo, I changed the exported 
directory's user/group to 36:36, and the oVirt NFS mount worked fine.)

So I think the issue with my VNXe3200 is the UID / GID owner on the VNXe3200 
side. After a WebEx with EMC support, changing the owner / permissions on the VNXe side 
seems not to be possible / supported.
Can I mount the NFS share with some options to avoid the vdsm:kvm mounting issue? 
Can I force the mount as root?

Regards,

Jean-Baptiste,
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/L6PYLQN6A6XNIEYNWZ3Y5RX2SJWOKL2R/


[ovirt-users] nfs 4.2 support

2018-04-03 Thread Alan Griffiths
Hi,

I noticed that when mounting NFS domains in oVirt 4.1 using
auto-negotiate, it settles on v4.1.

Is this due to the lack of live storage migration referenced in this bug?

https://bugzilla.redhat.com/show_bug.cgi?id=1464787

Are there other known issues with NFS 4.2 support?

Thanks,

Alan
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS 4.1 support and migration

2018-03-15 Thread Eyal Shenitzky
Have to admit that I didn't play with the hosted engine thing, but maybe
you can find the answers in the documentation:


   - https://ovirt.org/documentation/self-hosted/Self-Hosted_Engine_Guide/
   - https://www.ovirt.org/documentation/how-to/hosted-engine/
   -
   https://ovirt.org/develop/release-management/features/sla/self-hosted-engine/


On Thu, Mar 15, 2018 at 3:25 PM,  wrote:

> Thanks, I totally missed that :-/
>
> And this wil also work for the hosted engine dedicated domain, putting the
> storage domain the virtual machine is depending on in maintenance ?
>
>
>
> Le 15-Mar-2018 10:38:48 +0100, eshen...@redhat.com a écrit:
>
>
> You can edit the storage domain setting after the storage domain
> deactivated (entered to maintenance mode).
>
>
> On Thu, Mar 15, 2018 at 11:12 AM,  wrote:
>
>> In fact I don't really know how to change storage domains setttings (like
>> nfs version or export path, ...), if it is only possible.
>>
>> I thought they could be disabled after stopping all related VMS, and
>> maybe settings panel would then unlock ?
>>
>> But this should be impossible with hosted engine dedicated storage domain
>> as it is required for the GUI itself.
>>
>> So I am stuck.
>>
>> Le 15-Mar-2018 09:59:30 +0100, eshen...@redhat.com a écrit:
>>
>>
>> I am not sure what you mean,
>> Can you please try to explain what is the difference between "VMs domain"
>> to "hosted storage domain" according to you?
>>
>> Thanks,
>>
>> On Thu, Mar 15, 2018 at 10:45 AM,  wrote:
>>
>>> Thanks for your answer.
>>>
>>> And to use V4.1 instead of V3 on a domain, do I just have to disconnect
>>> it and change its settings ? Seems to be easy to do with VMs domains, but
>>> how to do it with hosted storage domain ?
>>>
>>> Regards
>>>
>>>
>>>
>>> Le 14-Mar-2018 11:54:52 +0100, eshen...@redhat.com a écrit:
>>>
>>>
>>> Hi,
>>>
>>> NFS 4.1 supported and working since version 3.6 (according to this bug
>>> fix [1])
>>>
>>> [1] Support NFS v4.1 connections - https://bugzilla.redhat.com/
>>> show_bug.cgi?id=1283964
>>>
>>>
>>> --
>>> FreeMail powered by mail.fr
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>>
>> --
>> Regards,
>> Eyal Shenitzky
>>
>>
>> --
>> FreeMail powered by mail.fr
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>
> --
> Regards,
> Eyal Shenitzky
>
>
> --
> FreeMail powered by mail.fr
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 
Regards,
Eyal Shenitzky
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS 4.1 support and migration

2018-03-15 Thread spfma . tech
Thanks, I totally missed that :-/
   And will this also work for the hosted-engine dedicated domain, putting the 
storage domain the virtual machine depends on into maintenance? 

 Le 15-Mar-2018 10:38:48 +0100, eshen...@redhat.com a écrit:   
 You can edit the storage domain setting after the storage domain deactivated 
(entered to maintenance mode).
 On Thu, Mar 15, 2018 at 11:12 AM,  wrote:

In fact I don't really know how to change storage domain settings (like NFS 
version or export path, ...), if it is possible at all. 

I thought they could be disabled after stopping all related VMS, and maybe 
settings panel would then unlock ? 

But this should be impossible with hosted engine dedicated storage domain as it 
is required for the GUI itself. 

So I am stuck.

 Le 15-Mar-2018 09:59:30 +0100, eshen...@redhat.com a écrit:   
 I am not sure what you mean, Can you please try to explain what is the 
difference between "VMs domain" to "hosted storage domain" according to you?   
Thanks,  
 On Thu, Mar 15, 2018 at 10:45 AM,  wrote:

 Thanks for your answer.
   And to use V4.1 instead of V3 on a domain, do I just have to disconnect it 
and change its settings ? Seems to be easy to do with VMs domains, but how to 
do it with hosted storage domain ?   Regards 

 Le 14-Mar-2018 11:54:52 +0100, eshen...@redhat.com a écrit:   
 Hi,   NFS 4.1 supported and working since version 3.6 (according to this bug 
fix [1])   [1] Support NFS v4.1 connections - 
https://bugzilla.redhat.com/show_bug.cgi?id=1283964   

-
FreeMail powered by mail.fr 
___
 Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

   -- 
  Regards, Eyal Shenitzky 

-
FreeMail powered by mail.fr 
___
 Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

   -- 
  Regards, Eyal Shenitzky 

-
FreeMail powered by mail.fr
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS 4.1 support and migration

2018-03-15 Thread Eyal Shenitzky
You can edit the storage domain setting after the storage domain
deactivated (entered to maintenance mode).


On Thu, Mar 15, 2018 at 11:12 AM,  wrote:

> In fact I don't really know how to change storage domains setttings (like
> nfs version or export path, ...), if it is only possible.
>
> I thought they could be disabled after stopping all related VMS, and maybe
> settings panel would then unlock ?
>
> But this should be impossible with hosted engine dedicated storage domain
> as it is required for the GUI itself.
>
> So I am stuck.
>
> Le 15-Mar-2018 09:59:30 +0100, eshen...@redhat.com a écrit:
>
>
> I am not sure what you mean,
> Can you please try to explain what is the difference between "VMs domain"
> to "hosted storage domain" according to you?
>
> Thanks,
>
> On Thu, Mar 15, 2018 at 10:45 AM,  wrote:
>
>> Thanks for your answer.
>>
>> And to use V4.1 instead of V3 on a domain, do I just have to disconnect
>> it and change its settings ? Seems to be easy to do with VMs domains, but
>> how to do it with hosted storage domain ?
>>
>> Regards
>>
>>
>>
>> Le 14-Mar-2018 11:54:52 +0100, eshen...@redhat.com a écrit:
>>
>>
>> Hi,
>>
>> NFS 4.1 supported and working since version 3.6 (according to this bug
>> fix [1])
>>
>> [1] Support NFS v4.1 connections - https://bugzilla.redhat.com/
>> show_bug.cgi?id=1283964
>>
>>
>> --
>> FreeMail powered by mail.fr
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>
> --
> Regards,
> Eyal Shenitzky
>
>
> --
> FreeMail powered by mail.fr
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 
Regards,
Eyal Shenitzky
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS 4.1 support and migration

2018-03-15 Thread spfma . tech
In fact I don't really know how to change storage domain settings (like NFS 
version or export path, ...), if it is possible at all. 

I thought they could be disabled after stopping all related VMS, and maybe 
settings panel would then unlock ? 

But this should be impossible with hosted engine dedicated storage domain as it 
is required for the GUI itself. 

So I am stuck.

 Le 15-Mar-2018 09:59:30 +0100, eshen...@redhat.com a écrit:   
 I am not sure what you mean, Can you please try to explain what is the 
difference between "VMs domain" to "hosted storage domain" according to you?   
Thanks,  
 On Thu, Mar 15, 2018 at 10:45 AM,  wrote:

 Thanks for your answer.
   And to use V4.1 instead of V3 on a domain, do I just have to disconnect it 
and change its settings ? Seems to be easy to do with VMs domains, but how to 
do it with hosted storage domain ?   Regards 

 Le 14-Mar-2018 11:54:52 +0100, eshen...@redhat.com a écrit:   
 Hi,   NFS 4.1 supported and working since version 3.6 (according to this bug 
fix [1])   [1] Support NFS v4.1 connections - 
https://bugzilla.redhat.com/show_bug.cgi?id=1283964   

-
FreeMail powered by mail.fr 
___
 Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

   -- 
  Regards, Eyal Shenitzky 

-
FreeMail powered by mail.fr
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS 4.1 support and migration

2018-03-15 Thread Eyal Shenitzky
I am not sure what you mean,
Can you please try to explain what is the difference between "VMs domain"
to "hosted storage domain" according to you?

Thanks,

On Thu, Mar 15, 2018 at 10:45 AM,  wrote:

> Thanks for your answer.
>
> And to use V4.1 instead of V3 on a domain, do I just have to disconnect it
> and change its settings ? Seems to be easy to do with VMs domains, but how
> to do it with hosted storage domain ?
>
> Regards
>
>
>
> Le 14-Mar-2018 11:54:52 +0100, eshen...@redhat.com a écrit:
>
>
> Hi,
>
> NFS 4.1 supported and working since version 3.6 (according to this bug fix
> [1])
>
> [1] Support NFS v4.1 connections - https://bugzilla.redhat.com/
> show_bug.cgi?id=1283964
>
>
> --
> FreeMail powered by mail.fr
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 
Regards,
Eyal Shenitzky
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS 4.1 support and migration

2018-03-15 Thread spfma . tech
Thanks for your answer.
   And to use V4.1 instead of V3 on a domain, do I just have to disconnect it 
and change its settings ? Seems to be easy to do with VMs domains, but how to 
do it with hosted storage domain ?   Regards 

 Le 14-Mar-2018 11:54:52 +0100, eshen...@redhat.com a écrit:   
 Hi,   NFS 4.1 supported and working since version 3.6 (according to this bug 
fix [1])   [1] Support NFS v4.1 connections - 
https://bugzilla.redhat.com/show_bug.cgi?id=1283964   

-
FreeMail powered by mail.fr
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS 4.1 support and migration

2018-03-15 Thread spfma . tech
Thanks for your answer.
   And in order to use v4.1 instead of v3 on a domain, do I just have to 
disconnect it and change its settings? That seems doable with VM domains, 
but how do I do it for the hosted-engine storage domain? I haven't found a 
command-line way to do this yet.
   Regards   

 Le 14-Mar-2018 11:54:52 +0100, eshen...@redhat.com a écrit:   
 Hi,   NFS 4.1 supported and working since version 3.6 (according to this bug 
fix [1])   [1] Support NFS v4.1 connections - 
https://bugzilla.redhat.com/show_bug.cgi?id=1283964   

-
FreeMail powered by mail.fr
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS 4.1 support and migration

2018-03-14 Thread Eyal Shenitzky
Hi,

NFS 4.1 supported and working since version 3.6 (according to this bug fix
[1])

[1] Support NFS v4.1 connections -
https://bugzilla.redhat.com/show_bug.cgi?id=1283964
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] NFS 4.1 support and migration

2018-03-14 Thread spfma . tech
Hi,
   Is NFS 4.1 supported and working flawlessly in oVirt? I would like to 
give it a try (performance with parallel transfers), but as it requires changes in 
my network design if I want to add new links, I want to be sure it is worth the 
effort.   Is there an easy way to "migrate" an NFSv3 data store to an NFSv4.1 one? 
The options are greyed out when the domain is up, so maybe it requires more care than 
just a simple click.
   Regards 

-
FreeMail powered by mail.fr
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] NFS sync vs async

2018-01-29 Thread Jean-Francois Courteau

Hello there,

At first I thought I had a performance problem with virtio-scsi on 
Windows, but after thorough experimentation, I finally found that my 
performance problem was related to the way I share my storage using NFS.


Using the settings suggested on the oVirt website for the /etc/exports 
file, I implemented the following line:
   /storage
*(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)


The underlying filesystem is ext4.

In the end, whatever VM I run through this NFS export, I get 
extremely poor write performance, like sub-100 IOPS (my disks can usually 
do 800-1k). Under the hood, iotop shows that my host I/O is all taken 
up by jbd2, which, if I understand correctly, is the ext4 journaling 
process.


I have read that using the "async" option in my NFS export is unsafe, 
like if my host crashes during a write operation, it could corrupt my VM 
Disks.


What is the best combination of filesystem / settings if I want to go 
with an NFS sync export? Is anyone getting good performance with the same options 
as me? If so, why do I get such abysmal IOPS numbers?
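
For comparison, the two export forms being weighed here, with the trade-off as I understand 
it (illustrative only; the async risk is exactly the one described above):

   # current: every NFS COMMIT is pushed through the ext4 journal (jbd2) before being acknowledged
   /storage  *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
   # async: acknowledges writes before they reach disk - much higher IOPS, but a server
   # crash can lose or corrupt data the VMs believe was already written
   /storage  *(rw,async,no_subtree_check,all_squash,anonuid=36,anongid=36)

A common middle ground is to keep sync and make the commit path faster instead (ext4 with 
an external journal on an SSD, or a controller with a battery/flash-backed write cache), 
rather than changing the NFS semantics.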


Thanks!

J-F Courteau

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS storage domain path is illegal

2017-04-05 Thread Fred Rolland
On Wed, Apr 5, 2017 at 7:01 PM, Bill James  wrote:

> once it is detached it no longer shows up in any list other than the
> dialog for "attach data domain".
> How do I right click on it or select it to "remove" it?
>

It will show up if you select "System" on the left tree view , and then the
storage tab on the right pane.


> I did find that if I select Data Center -> Default on left, select the
> storage tab, for domain in maintenance right click will let me Destroy the
> volume. Remove option never became active.
>
>
>
> We will rename the host to not use '_', thanks for the replies!
>
>
>
>
> On 04/05/2017 12:00 AM, Fred Rolland wrote:
>
>
>
> On Wed, Apr 5, 2017 at 2:27 AM, Bill James  wrote:
>
>> ovirt-engine-tools-4.1.0.4-1.el7.centos.noarch
>> vdsm-4.19.4-1.el7.centos.x86_64
>>
>>
>> I'm trying to add a NFS storage domain but the gui keeps telling me the
>> export path is illegal.
>> Even though I can mount the volume fine at the command line.
>> I have not found any log that contains the error so I can't attach log
>> entry.
>>
>> It says "Use IP:/path or FQDN:/path" but only IP works, no name I've
>> tried is allowed.
>>
>> Also, how do I remove a storage domain?
>> I could put it into maintenance and then detach it, but I have not found
>> any way to remove/destroy/delete it.
>>
>
> Once it is detached, you have a "remove" button on the top above the list.
> You can also do a right click on the row and select "destroy.
>
> With the "remove" action you have the possibility to format the domain to
> wipe any data on it.
>
>
>>
>> What I'm trying for Export Path:
>>qagenfil1_nfs1:/ovirt_inside/images
>>
>>
>> Thanks.
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS storage domain path is illegal

2017-04-05 Thread Bill James
once it is detached it no longer shows up in any list other than the 
dialog for "attach data domain".

How do I right click on it or select it to "remove" it?
I did find that if I select Data Center -> Default on left, select the 
storage tab, for domain in maintenance right click will let me Destroy 
the volume. Remove option never became active.




We will rename the host to not use '_', thanks for the replies!



On 04/05/2017 12:00 AM, Fred Rolland wrote:



On Wed, Apr 5, 2017 at 2:27 AM, Bill James > wrote:


ovirt-engine-tools-4.1.0.4-1.el7.centos.noarch
vdsm-4.19.4-1.el7.centos.x86_64


I'm trying to add a NFS storage domain but the gui keeps telling
me the export path is illegal.
Even though I can mount the volume fine at the command line.
I have not found any log that contains the error so I can't attach
log entry.

It says "Use IP:/path or FQDN:/path" but only IP works, no name
I've tried is allowed.

Also, how do I remove a storage domain?
I could put it into maintenance and then detach it, but I have not
found any way to remove/destroy/delete it.


Once it is detached, you have a "remove" button on the top above the list.
You can also do a right click on the row and select "destroy.

With the "remove" action you have the possibility to format the domain 
to wipe any data on it.



What I'm trying for Export Path:
   qagenfil1_nfs1:/ovirt_inside/images


Thanks.
___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS storage domain path is illegal

2017-04-05 Thread Fred Rolland
On Wed, Apr 5, 2017 at 2:27 AM, Bill James  wrote:

> ovirt-engine-tools-4.1.0.4-1.el7.centos.noarch
> vdsm-4.19.4-1.el7.centos.x86_64
>
>
> I'm trying to add a NFS storage domain but the gui keeps telling me the
> export path is illegal.
> Even though I can mount the volume fine at the command line.
> I have not found any log that contains the error so I can't attach log
> entry.
>
> It says "Use IP:/path or FQDN:/path" but only IP works, no name I've tried
> is allowed.
>
> Also, how do I remove a storage domain?
> I could put it into maintenance and then detach it, but I have not found
> any way to remove/destroy/delete it.
>

Once it is detached, you have a "remove" button on the top above the list.
You can also do a right click on the row and select "destroy.

With the "remove" action you have the possibility to format the domain to
wipe any data on it.


>
> What I'm trying for Export Path:
>qagenfil1_nfs1:/ovirt_inside/images
>
>
> Thanks.
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS storage domain path is illegal

2017-04-05 Thread Yedidyah Bar David
On Wed, Apr 5, 2017 at 2:27 AM, Bill James  wrote:
> ovirt-engine-tools-4.1.0.4-1.el7.centos.noarch
> vdsm-4.19.4-1.el7.centos.x86_64
>
>
> I'm trying to add a NFS storage domain but the gui keeps telling me the
> export path is illegal.
> Even though I can mount the volume fine at the command line.
> I have not found any log that contains the error so I can't attach log
> entry.
>
> It says "Use IP:/path or FQDN:/path" but only IP works, no name I've tried
> is allowed.
>
> Also, how do I remove a storage domain?
> I could put it into maintenance and then detach it, but I have not found any
> way to remove/destroy/delete it.
>
> What I'm trying for Export Path:
>qagenfil1_nfs1:/ovirt_inside/images

'_' is an illegal char for hostnames, so we disallow it as well. See also:

https://en.wikipedia.org/wiki/Hostname

The fact that it happens to work for you elsewhere does not make it a good choice -
you risk other things failing in the future.
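
A hedged illustration of what the dialog will and won't accept; the IP and the hyphenated 
FQDN below are invented examples of compliant names, not the poster's actual hosts:

  qagenfil1_nfs1:/ovirt_inside/images                # rejected: '_' in the hostname part
  192.168.0.10:/ovirt_inside/images                  # accepted: IP address
  qagenfil1-nfs1.example.com:/ovirt_inside/images    # accepted: underscores are only a problem
                                                     # in the hostname, not in the path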

Best,

>
>
> Thanks.
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users



-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] NFS storage domain path is illegal

2017-04-04 Thread Bill James

ovirt-engine-tools-4.1.0.4-1.el7.centos.noarch
vdsm-4.19.4-1.el7.centos.x86_64


I'm trying to add a NFS storage domain but the gui keeps telling me the 
export path is illegal.

Even though I can mount the volume fine at the command line.
I have not found any log that contains the error so I can't attach log 
entry.


It says "Use IP:/path or FQDN:/path" but only IP works, no name I've 
tried is allowed.


Also, how do I remove a storage domain?
I could put it into maintenance and then detach it, but I have not found 
any way to remove/destroy/delete it.


What I'm trying for Export Path:
   qagenfil1_nfs1:/ovirt_inside/images


Thanks.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS ISO domain from hosted-engine VM

2017-03-23 Thread Devin A. Bougie
On Mar 23, 2017, at 10:51 AM, Yedidyah Bar David  wrote:
> On Thu, Mar 23, 2017 at 4:12 PM, Devin A. Bougie
>  wrote:
>> Hi, All.  Are there any recommendations or best practices WRT whether or not 
>> to host an NFS ISO domain from the hosted-engine VM (running the oVirt 
>> Engine Appliance)?  We have a hosted-engine 4.1.1 cluster up and running, 
>> and now just have to decide where to serve the NFS ISO domain from.
> 
> NFS ISO domain on the engine machine is generally deprecated, and
> specifically problematic for hosted-engine, see also:

Thanks, Didi!  I'll go ahead and set up the NFS ISO domain in a separate cluster.

Sincerely,
Devin

> https://bugzilla.redhat.com/show_bug.cgi?id=1332813
> I recently pushed a patch to remove the question about it altogether:
> https://gerrit.ovirt.org/74409
> 
> Best,
> -- 
> Didi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS ISO domain from hosted-engine VM

2017-03-23 Thread Yedidyah Bar David
On Thu, Mar 23, 2017 at 4:12 PM, Devin A. Bougie
 wrote:
> Hi, All.  Are there any recommendations or best practices WRT whether or not 
> to host an NFS ISO domain from the hosted-engine VM (running the oVirt Engine 
> Appliance)?  We have a hosted-engine 4.1.1 cluster up and running, and now 
> just have to decide where to serve the NFS ISO domain from.

NFS ISO domain on the engine machine is generally deprecated, and
specifically problematic for hosted-engine, see also:

https://bugzilla.redhat.com/show_bug.cgi?id=1332813

I recently pushed a patch to remove the question about it altogether:

https://gerrit.ovirt.org/74409

Best,
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] NFS ISO domain from hosted-engine VM

2017-03-23 Thread Devin A. Bougie
Hi, All.  Are there any recommendations or best practices WRT whether or not to 
host an NFS ISO domain from the hosted-engine VM (running the oVirt Engine 
Appliance)?  We have a hosted-engine 4.1.1 cluster up and running, and now just 
have to decide where to serve the NFS ISO domain from.

Many thanks,
Devin
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS and pass discards\unmap question

2017-02-03 Thread Sergey Kulikov

Thanks! I think it's now a question for NetApp and when they'll make 4.2 available.
I've tried to manually mount v4.2 on the host,
but unfortunately:
# mount -o vers=4.2 10.1.1.111:/test /tmp/123 
mount.nfs: Protocol not supported

so my NetApp is vers=4.1 max :(

-- 



 Friday, February 3, 2017, 15:54:54:





> On Feb 3, 2017 1:50 PM, "Nir Soffer"  wrote:

> On Fri, Feb 3, 2017 at 2:29 PM, Sergey Kulikov  wrote:
 >>
 >>
 >> Hm... maybe I need to set any options, is there any way to force ovirt to 
 >> mount with this extension, or version 4.2
 >> there is only 4.1 selection in "New Domain" menu.
 >> Current mount options:
 >> type nfs4 
 >> (rw,relatime,vers=4.1,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,port=0,timeo=600,retrans=6,sec=sys,local_lock=none)
 >>
 >> it should work only if forced option vers=4.2 ?
 >> I thought it's implemented as feature to older version, not 4.2, there is 
 >> few info about this.
>  
>  Looks like ovirt engine does not allow nfs version 4.2.




> But custom options can be used. 
> Y. 


>  
>  We have this RFE:
>  https://bugzilla.redhat.com/1406398
>  
>  So practically, both sparsify and pass discard with NFS are useless
>  in the current version.
>  
>  I think this should be fix for next 4.1 build.
>  
>  Nir
>  

 >>
 >>
 >> --
 >>
 >>
 >>
 >>  Friday, February 3, 2017, 14:45:43:
 >>
 >>
 >>
 >>
 >>
 >>> On Thu, Feb 2, 2017 at 11:45 PM, Sergey Kulikov  wrote:
 >>
 >>
 >>>  I've upgraded to 4.1 release, it have great feature "Pass
 >>> discards", that now can be used without vdsm hooks,
 >>>  After upgrade I've tested it with NFS 4.1 storage, exported from
 >>> netapp, but unfortunately found out, that
 >>>  it's not working, after some investigation, I've found, that NFS
 >>> implementation(even 4.1) in Centos 7
 >>>  doesn't support sparse files and fallocate(FALLOC_FL_PUNCH_HOLE),
 >>> that quemu uses for file storage, it was
 >>>  added only in kernel 3.18, and sparse files is also announced feature of 
 >>>upcoming NFS4.2,
 >>>  sparsify also not working on this data domains(runs, but nothing happens).
 >>>
 >>>  This test also shows, that FALLOC_FL_PUNCH_HOLE not working, it
 >>> was executed on centos ovirt host with mounted nfs share:
 >>>  # truncate -s 1024 test1
 >>>  # fallocate -p -o 0 -l 1024 test1
 >>>  fallocate: keep size mode (-n option) unsupported
 >>>
 >>>  Is there any plans to backport this feature to node-ng, or centos? or we 
 >>>should wait for RHEL 8?
 >>
 >>
 >>
 >>
 >>> Interesting, I was under the impression it was fixed some time ago,
 >>> for 7.2[1] (kernel-3.10.0-313.el7)
 >>> Perhaps you are not mounted with 4.2?
 >>
 >>
 >>> Y.
 >>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1079385
 >>>
 >>>  NFS is more and more popular, so discards is VERY useful feature.
 >>>  I'm also planning to test fallocate on latest fedora with 4.x kernel and 
 >>>mounted nfs.
 >>>
 >>>  Thanks for your work!
 >>>
 >>>  --
 >>>
 >>>  ___
 >>>  Users mailing list
 >>>  Users@ovirt.org
 >>>  http://lists.ovirt.org/mailman/listinfo/users
 >>>
 >>
 >> ___
 >> Users mailing list
 >> Users@ovirt.org
 >> http://lists.ovirt.org/mailman/listinfo/users
>  

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS and pass discards\unmap question

2017-02-03 Thread Nir Soffer
On Fri, Feb 3, 2017 at 2:54 PM, Yaniv Kaul  wrote:
>
>
> On Feb 3, 2017 1:50 PM, "Nir Soffer"  wrote:
>
> On Fri, Feb 3, 2017 at 2:29 PM, Sergey Kulikov  wrote:
>>
>>
>> Hm... maybe I need to set any options, is there any way to force ovirt to
>> mount with this extension, or version 4.2
>> there is only 4.1 selection in "New Domain" menu.
>> Current mount options:
>> type nfs4
>> (rw,relatime,vers=4.1,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,port=0,timeo=600,retrans=6,sec=sys,local_lock=none)
>>
>> it should work only if forced option vers=4.2 ?
>> I thought it's implemented as feature to older version, not 4.2, there is
>> few info about this.
>
> Looks like ovirt engine does not allow nfs version 4.2.
>
>
> But custom options can be used.

Does not work for me - I tried all combinations of:

NFS Version: Auto Negotiate
NFS Version: V4
NFS Version: V4.1
NFS Version: V3 (default)

With:

Additional mount options: nfsvers=4,minorversion=2
Additional mount options: minorversion=2
Additional mount options: vers=4.2

It always fails with this error:

Error while executing action: Cannot edit Storage Connection.
Custom mount options contain the following duplicate managed options:
...

Engine does not let you specify minorversion, nfsvers, or vers.

Adding a managed 4.2 item to the menu seems like the way to fix this.
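In the meantime, you can at least confirm what a host actually negotiated on an
existing mount - "nfsstat -m" or "grep nfs4 /proc/mounts" on the host both show
the effective vers= option.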

Nir

> Y.
>
>
> We have this RFE:
> https://bugzilla.redhat.com/1406398
>
> So practically, both sparsify and pass discard with NFS are useless
> in the current version.
>
> I think this should be fix for next 4.1 build.
>
> Nir
>
>>
>>
>> --
>>
>>
>>
>>  Friday, February 3, 2017, 14:45:43:
>>
>>
>>
>>
>>
>>> On Thu, Feb 2, 2017 at 11:45 PM, Sergey Kulikov  wrote:
>>
>>
>>>  I've upgraded to 4.1 release, it have great feature "Pass
>>> discards", that now can be used without vdsm hooks,
>>>  After upgrade I've tested it with NFS 4.1 storage, exported from
>>> netapp, but unfortunately found out, that
>>>  it's not working, after some investigation, I've found, that NFS
>>> implementation(even 4.1) in Centos 7
>>>  doesn't support sparse files and fallocate(FALLOC_FL_PUNCH_HOLE),
>>> that quemu uses for file storage, it was
>>>  added only in kernel 3.18, and sparse files is also announced feature of
>>> upcoming NFS4.2,
>>>  sparsify also not working on this data domains(runs, but nothing
>>> happens).
>>>
>>>  This test also shows, that FALLOC_FL_PUNCH_HOLE not working, it
>>> was executed on centos ovirt host with mounted nfs share:
>>>  # truncate -s 1024 test1
>>>  # fallocate -p -o 0 -l 1024 test1
>>>  fallocate: keep size mode (-n option) unsupported
>>>
>>>  Is there any plans to backport this feature to node-ng, or centos? or we
>>> should wait for RHEL 8?
>>
>>
>>
>>
>>> Interesting, I was under the impression it was fixed some time ago,
>>> for 7.2[1] (kernel-3.10.0-313.el7)
>>> Perhaps you are not mounted with 4.2?
>>
>>
>>> Y.
>>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1079385
>>>
>>>  NFS is more and more popular, so discards is VERY useful feature.
>>>  I'm also planning to test fallocate on latest fedora with 4.x kernel and
>>> mounted nfs.
>>>
>>>  Thanks for your work!
>>>
>>>  --
>>>
>>>  ___
>>>  Users mailing list
>>>  Users@ovirt.org
>>>  http://lists.ovirt.org/mailman/listinfo/users
>>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS and pass discards\unmap question

2017-02-03 Thread Yaniv Kaul
On Feb 3, 2017 1:50 PM, "Nir Soffer"  wrote:

On Fri, Feb 3, 2017 at 2:29 PM, Sergey Kulikov  wrote:
>
>
> Hm... maybe I need to set any options, is there any way to force ovirt to
mount with this extension, or version 4.2
> there is only 4.1 selection in "New Domain" menu.
> Current mount options:
> type nfs4 (rw,relatime,vers=4.1,rsize=65536,wsize=65536,namlen=255,
soft,nosharecache,proto=tcp,port=0,timeo=600,retrans=6,
sec=sys,local_lock=none)
>
> it should work only if forced option vers=4.2 ?
> I thought it's implemented as feature to older version, not 4.2, there is
few info about this.

Looks like ovirt engine does not allow nfs version 4.2.


But custom options can be used.
Y.


We have this RFE:
https://bugzilla.redhat.com/1406398

So practically, both sparsify and pass discard with NFS are useless
in the current version.

I think this should be fix for next 4.1 build.

Nir

>
>
> --
>
>
>
>  Friday, February 3, 2017, 14:45:43:
>
>
>
>
>
>> On Thu, Feb 2, 2017 at 11:45 PM, Sergey Kulikov  wrote:
>
>
>>  I've upgraded to 4.1 release, it have great feature "Pass
>> discards", that now can be used without vdsm hooks,
>>  After upgrade I've tested it with NFS 4.1 storage, exported from
>> netapp, but unfortunately found out, that
>>  it's not working, after some investigation, I've found, that NFS
>> implementation(even 4.1) in Centos 7
>>  doesn't support sparse files and fallocate(FALLOC_FL_PUNCH_HOLE),
>> that quemu uses for file storage, it was
>>  added only in kernel 3.18, and sparse files is also announced feature
of upcoming NFS4.2,
>>  sparsify also not working on this data domains(runs, but nothing
happens).
>>
>>  This test also shows, that FALLOC_FL_PUNCH_HOLE not working, it
>> was executed on centos ovirt host with mounted nfs share:
>>  # truncate -s 1024 test1
>>  # fallocate -p -o 0 -l 1024 test1
>>  fallocate: keep size mode (-n option) unsupported
>>
>>  Is there any plans to backport this feature to node-ng, or centos? or
we should wait for RHEL 8?
>
>
>
>
>> Interesting, I was under the impression it was fixed some time ago,
>> for 7.2[1] (kernel-3.10.0-313.el7)
>> Perhaps you are not mounted with 4.2?
>
>
>> Y.
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1079385
>>
>>  NFS is more and more popular, so discards is VERY useful feature.
>>  I'm also planning to test fallocate on latest fedora with 4.x kernel
and mounted nfs.
>>
>>  Thanks for your work!
>>
>>  --
>>
>>  ___
>>  Users mailing list
>>  Users@ovirt.org
>>  http://lists.ovirt.org/mailman/listinfo/users
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS and pass discards\unmap question

2017-02-03 Thread Nir Soffer
On Fri, Feb 3, 2017 at 2:29 PM, Sergey Kulikov  wrote:
>
>
> Hm... maybe I need to set any options, is there any way to force ovirt to 
> mount with this extension, or version 4.2
> there is only 4.1 selection in "New Domain" menu.
> Current mount options:
> type nfs4 
> (rw,relatime,vers=4.1,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,port=0,timeo=600,retrans=6,sec=sys,local_lock=none)
>
> it should work only if forced option vers=4.2 ?
> I thought it's implemented as feature to older version, not 4.2, there is few 
> info about this.

Looks like ovirt engine does not allow nfs version 4.2.

We have this RFE:
https://bugzilla.redhat.com/1406398

So practically, both sparsify and pass discard with NFS are useless
in the current version.

I think this should be fixed in the next 4.1 build.

Nir

>
>
> --
>
>
>
>  Friday, February 3, 2017, 14:45:43:
>
>
>
>
>
>> On Thu, Feb 2, 2017 at 11:45 PM, Sergey Kulikov  wrote:
>
>
>>  I've upgraded to 4.1 release, it have great feature "Pass
>> discards", that now can be used without vdsm hooks,
>>  After upgrade I've tested it with NFS 4.1 storage, exported from
>> netapp, but unfortunately found out, that
>>  it's not working, after some investigation, I've found, that NFS
>> implementation(even 4.1) in Centos 7
>>  doesn't support sparse files and fallocate(FALLOC_FL_PUNCH_HOLE),
>> that quemu uses for file storage, it was
>>  added only in kernel 3.18, and sparse files is also announced feature of 
>> upcoming NFS4.2,
>>  sparsify also not working on this data domains(runs, but nothing happens).
>>
>>  This test also shows, that FALLOC_FL_PUNCH_HOLE not working, it
>> was executed on centos ovirt host with mounted nfs share:
>>  # truncate -s 1024 test1
>>  # fallocate -p -o 0 -l 1024 test1
>>  fallocate: keep size mode (-n option) unsupported
>>
>>  Is there any plans to backport this feature to node-ng, or centos? or we 
>> should wait for RHEL 8?
>
>
>
>
>> Interesting, I was under the impression it was fixed some time ago,
>> for 7.2[1] (kernel-3.10.0-313.el7)
>> Perhaps you are not mounted with 4.2?
>
>
>> Y.
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1079385
>>
>>  NFS is more and more popular, so discards is VERY useful feature.
>>  I'm also planning to test fallocate on latest fedora with 4.x kernel and 
>> mounted nfs.
>>
>>  Thanks for your work!
>>
>>  --
>>
>>  ___
>>  Users mailing list
>>  Users@ovirt.org
>>  http://lists.ovirt.org/mailman/listinfo/users
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS and pass discards\unmap question

2017-02-03 Thread Sergey Kulikov


Unfortunately I can't browse this bug:
"You are not authorized to access bug #1079385."
Can you email me details on this bug?
I think that's the reason I can't find this fix for rhel\centos in Google.


-- 



 Friday, February 3, 2017, 14:45:43:





> On Thu, Feb 2, 2017 at 11:45 PM, Sergey Kulikov  wrote:


>  I've upgraded to 4.1 release, it have great feature "Pass
> discards", that now can be used without vdsm hooks,
>  After upgrade I've tested it with NFS 4.1 storage, exported from
> netapp, but unfortunately found out, that
>  it's not working, after some investigation, I've found, that NFS
> implementation(even 4.1) in Centos 7
>  doesn't support sparse files and fallocate(FALLOC_FL_PUNCH_HOLE),
> that quemu uses for file storage, it was
>  added only in kernel 3.18, and sparse files is also announced feature of 
> upcoming NFS4.2,
>  sparsify also not working on this data domains(runs, but nothing happens).
>  
>  This test also shows, that FALLOC_FL_PUNCH_HOLE not working, it
> was executed on centos ovirt host with mounted nfs share:
>  # truncate -s 1024 test1
>  # fallocate -p -o 0 -l 1024 test1
>  fallocate: keep size mode (-n option) unsupported
>  
>  Is there any plans to backport this feature to node-ng, or centos? or we 
> should wait for RHEL 8?




> Interesting, I was under the impression it was fixed some time ago,
> for 7.2[1] (kernel-3.10.0-313.el7)
> Perhaps you are not mounted with 4.2?


> Y.
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1079385
>  
>  NFS is more and more popular, so discards is VERY useful feature.
>  I'm also planning to test fallocate on latest fedora with 4.x kernel and 
> mounted nfs.
>  
>  Thanks for your work!
>  
>  --
>  
>  ___
>  Users mailing list
>  Users@ovirt.org
>  http://lists.ovirt.org/mailman/listinfo/users
>  

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS and pass discards\unmap question

2017-02-03 Thread Sergey Kulikov


Hm... maybe I need to set some options. Is there any way to force oVirt to mount
with this extension, or with version 4.2? There is only a 4.1 selection in the
"New Domain" menu.
Current mount options:
type nfs4 
(rw,relatime,vers=4.1,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,port=0,timeo=600,retrans=6,sec=sys,local_lock=none)

Should it work only if the vers=4.2 option is forced?
I thought it was implemented as a feature of an older version, not 4.2; there is little
info about this.


-- 



 Friday, February 3, 2017, 14:45:43:





> On Thu, Feb 2, 2017 at 11:45 PM, Sergey Kulikov  wrote:


>  I've upgraded to 4.1 release, it have great feature "Pass
> discards", that now can be used without vdsm hooks,
>  After upgrade I've tested it with NFS 4.1 storage, exported from
> netapp, but unfortunately found out, that
>  it's not working, after some investigation, I've found, that NFS
> implementation(even 4.1) in Centos 7
>  doesn't support sparse files and fallocate(FALLOC_FL_PUNCH_HOLE),
> that quemu uses for file storage, it was
>  added only in kernel 3.18, and sparse files is also announced feature of 
> upcoming NFS4.2,
>  sparsify also not working on this data domains(runs, but nothing happens).
>  
>  This test also shows, that FALLOC_FL_PUNCH_HOLE not working, it
> was executed on centos ovirt host with mounted nfs share:
>  # truncate -s 1024 test1
>  # fallocate -p -o 0 -l 1024 test1
>  fallocate: keep size mode (-n option) unsupported
>  
>  Is there any plans to backport this feature to node-ng, or centos? or we 
> should wait for RHEL 8?




> Interesting, I was under the impression it was fixed some time ago,
> for 7.2[1] (kernel-3.10.0-313.el7)
> Perhaps you are not mounted with 4.2?


> Y.
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1079385
>  
>  NFS is more and more popular, so discards is VERY useful feature.
>  I'm also planning to test fallocate on latest fedora with 4.x kernel and 
> mounted nfs.
>  
>  Thanks for your work!
>  
>  --
>  
>  ___
>  Users mailing list
>  Users@ovirt.org
>  http://lists.ovirt.org/mailman/listinfo/users
>  

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS and pass discards\unmap question

2017-02-03 Thread Yaniv Kaul
On Thu, Feb 2, 2017 at 11:45 PM, Sergey Kulikov  wrote:

>
> I've upgraded to 4.1 release, it have great feature "Pass discards", that
> now can be used without vdsm hooks,
> After upgrade I've tested it with NFS 4.1 storage, exported from netapp,
> but unfortunately found out, that
> it's not working, after some investigation, I've found, that NFS
> implementation(even 4.1) in Centos 7
> doesn't support sparse files and fallocate(FALLOC_FL_PUNCH_HOLE), that
> quemu uses for file storage, it was
> added only in kernel 3.18, and sparse files is also announced feature of
> upcoming NFS4.2,
> sparsify also not working on this data domains(runs, but nothing happens).
>
> This test also shows, that FALLOC_FL_PUNCH_HOLE not working, it was
> executed on centos ovirt host with mounted nfs share:
> # truncate -s 1024 test1
> # fallocate -p -o 0 -l 1024 test1
> fallocate: keep size mode (-n option) unsupported
>
> Is there any plans to backport this feature to node-ng, or centos? or we
> should wait for RHEL 8?
>

Interesting, I was under the impression it was fixed some time ago, for
7.2[1] (kernel-3.10.0-313.el7)
Perhaps you are not mounted with 4.2?

Y.
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1079385


> NFS is more and more popular, so discards is VERY useful feature.
> I'm also planning to test fallocate on latest fedora with 4.x kernel and
> mounted nfs.
>
> Thanks for your work!
>
> --
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] NFS and pass discards\unmap question

2017-02-02 Thread Sergey Kulikov

I've upgraded to the 4.1 release, which has the great feature "Pass discards" that can 
now be used without vdsm hooks.
After the upgrade I tested it with NFS 4.1 storage exported from NetApp, but 
unfortunately found out that it's not working. After some investigation I found that the 
NFS implementation (even 4.1) in CentOS 7
doesn't support sparse files and fallocate(FALLOC_FL_PUNCH_HOLE), which QEMU 
uses for file storage; it was
added only in kernel 3.18, and sparse files are also an announced feature of the 
upcoming NFS 4.2.
sparsify is also not working on these data domains (it runs, but nothing happens).

This test also shows, that FALLOC_FL_PUNCH_HOLE not working, it was executed on 
centos ovirt host with mounted nfs share:
# truncate -s 1024 test1
# fallocate -p -o 0 -l 1024 test1
fallocate: keep size mode (-n option) unsupported
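The same check can be scripted if that is easier to repeat; a minimal sketch
using ctypes (the path and sizes here are only examples, and EOPNOTSUPP from
fallocate() means punch hole is not supported on that mount):

import ctypes, ctypes.util, os

FALLOC_FL_KEEP_SIZE = 0x01
FALLOC_FL_PUNCH_HOLE = 0x02

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

# hypothetical test file on the NFS mount
path = "/rhev/data-center/mnt/nfs-test/punch_test"
fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
try:
    os.ftruncate(fd, 1024)  # same as: truncate -s 1024
    ret = libc.fallocate(fd,
                         FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                         ctypes.c_longlong(0), ctypes.c_longlong(1024))
    if ret != 0:
        print("punch hole failed: %s" % os.strerror(ctypes.get_errno()))
    else:
        print("punch hole works")
finally:
    os.close(fd)
    os.unlink(path)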

Are there any plans to backport this feature to node-ng or CentOS, or should we 
wait for RHEL 8?
NFS is more and more popular, so discards is a VERY useful feature.
I'm also planning to test fallocate on the latest Fedora with a 4.x kernel and a 
mounted nfs share.

Thanks for your work!

-- 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] nfs storage permission problem

2016-04-15 Thread Bill James

Just to close off this issue: I found the problem.
My /etc/exports file had a space in it where it shouldn't have (between 
* and the options).
This seemed to be acceptable with older versions of CentOS 7.2, but on my 
newer host with the latest CentOS 7.2 (3.10.0-327.13.1.el7.x86_64) it only 
mounted read-only. Maybe the default changed from rw to ro?


I fixed exports file and now both versions of centos work.

/mount_point *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
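With the stray space, exportfs apparently treats "*" and "(rw,...)" as two separate
client entries, so the "*" entry falls back to the default options (which include ro) -
"exportfs -v" shows what is actually in effect. A small sketch that flags such lines
(nothing oVirt-specific, just a convenience):

import re

with open("/etc/exports") as f:
    for n, line in enumerate(f, 1):
        s = line.strip()
        if not s or s.startswith("#"):
            continue
        # whitespace right before an option list means the options got
        # detached from their client spec
        if re.search(r"\s\(", s):
            print("line %d: options separated from client spec: %s" % (n, s))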



On 4/13/16 4:39 PM, Brett I. Holcomb wrote:

On Wed, 2016-04-13 at 15:52 -0700, Bill James wrote:
[vdsm@ovirt4 test /]$ touch 
/rhev/data-center/mnt/ovirt3-ks.test.j2noc.com:_ovirt-store_nfs/test
touch: cannot touch 
‘/rhev/data-center/mnt/ovirt3-ks.test.j2noc.com:_ovirt-store_nfs/test’: 
Read-only file system


Hmm, read-only.  :-(

ovirt3-ks.test.j2noc.com:/ovirt-store/nfs on 
/rhev/data-center/mnt/ovirt3-ks.test.j2noc.com:_ovirt-store_nfs type 
nfs4 
(*rw*,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,port=0,timeo=600,retrans=6,sec=sys,clientaddr=10.100.108.96,local_lock=none,addr=10.100.108.33)


now to figure out why


[root@ovirt4 test ~]# ls -la /rhev/data-center/mnt/
total 8
drwxr-xr-x 4 vdsm kvm  110 Apr 13 15:30 .
drwxr-xr-x 3 vdsm kvm   16 Apr 13 08:06 ..
drwxr-xr-x 3 vdsm kvm 4096 Mar 11 15:19 
netappqa3:_vol_cloud__images_ovirt__QA__export
drwxr-xr-x 3 vdsm kvm 4096 Mar 11 15:17 
netappqa3:_vol_cloud__images_ovirt__QA__ISOs




export and ISO domain mount fine too. (and rw)



ovirt-engine-3.6.4.1-1.el7.centos.noarch


On 04/13/2016 03:21 PM, Brett I. Holcomb wrote:

On Wed, 2016-04-13 at 15:09 -0700, Bill James wrote:

I have a cluster working fine with 2 nodes.
I'm trying to add a third and it is complaining:


StorageServerAccessPermissionError: Permission settings on the
specified
path do not allow access to the storage. Verify permission settings
on
the specified storage path.: 'path =
/rhev/data-center/mnt/ovirt3-ks.test.j2noc.com:_ovirt-store_nfs'


if I try the commands manually as vdsm they work fine and the volume
mounts.


[vdsm@ovirt4 test /]$ mkdir -p
/rhev/data-center/mnt/ovirt3-ks.test.j2noc.com:_ovirt-store_nfs
[vdsm@ovirt4 test /]$ sudo -n /usr/bin/mount -t nfs -o
soft,nosharecache,timeo=600,retrans=6
ovirt3-ks.test.j2noc.com:/ovirt-store/nfs
/rhev/data-center/mnt/ovirt3-ks.test.j2noc.com:_ovirt-store_nfs
[vdsm@ovirt4 test /]$ df -h
/rhev/data-center/mnt/ovirt3-ks.test.j2noc.com:_ovirt-store_nfs
Filesystem Size  Used Avail Use%
Mounted on
ovirt3-ks.test.j2noc.com:/ovirt-store/nfs  1.1T  305G  759G  29%
/rhev/data-center/mnt/ovirt3-ks.test.j2noc.com:_ovirt-store_nfs


After manually mounting the NFS volumes and activating the node it
still
fails.


2016-04-13 14:55:16,559 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector
]
(DefaultQuartzScheduler_Worker-61) [64ceea1d] Correlation ID:
64ceea1d,
Job ID: a47b74c7-2ae0-43f9-9bdf-e50963a28895, Call Stack: null,
Custom
Event ID: -1, Message: Host ovirt4.test.j2noc.com cannot access the
Storage Domain(s)  attached to the Data Center Default.
Setting
Host state to Non-Operational.


Not sure what "UNKNOWN" storage is, unless its one I deleted earlier
that somehow isn't really removed.

Also tried "reinstall" on node. same issue.

Attached are engine and vdsm logs.


Thanks.

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users

Try adding anonuid=36,anongid=36 to the mount and make sure 36:36 is
the owner and group on the mount point. I found this helpful:
http://www.ovirt.org/documentation/how-to/troubleshooting/troubleshooting-nfs-storage-issues/

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users 



Try adding the anonuid=36,anongid=36 to the NFS mount options.




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] nfs storage permission problem

2016-04-13 Thread Brett I. Holcomb
On Wed, 2016-04-13 at 15:52 -0700, Bill James wrote:
> [vdsm@ovirt4 test /]$ touch /rhev/data-center/mnt/ovirt3-
> ks.test.j2noc.com:_ovirt-store_nfs/test
> touch: cannot touch ‘/rhev/data-center/mnt/ovirt3-
> ks.test.j2noc.com:_ovirt-store_nfs/test’: Read-only file system
> 
> Hmm, read-only.  :-(
> 
> ovirt3-ks.test.j2noc.com:/ovirt-store/nfs on /rhev/data-
> center/mnt/ovirt3-ks.test.j2noc.com:_ovirt-store_nfs type nfs4
> (rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,soft,nos
> harecache,proto=tcp,port=0,timeo=600,retrans=6,sec=sys,clientaddr=10.
> 100.108.96,local_lock=none,addr=10.100.108.33)
> 
> now to figure out why
> 
> 
> [root@ovirt4 test ~]# ls -la /rhev/data-center/mnt/
> total 8
> drwxr-xr-x 4 vdsm kvm  110 Apr 13 15:30 .
> drwxr-xr-x 3 vdsm kvm   16 Apr 13 08:06 ..
> drwxr-xr-x 3 vdsm kvm 4096 Mar 11 15:19
> netappqa3:_vol_cloud__images_ovirt__QA__export
> drwxr-xr-x 3 vdsm kvm 4096 Mar 11 15:17
> netappqa3:_vol_cloud__images_ovirt__QA__ISOs
> 
> 
> 
> export and ISO domain mount fine too. (and rw)
> 
> 
> 
> ovirt-engine-3.6.4.1-1.el7.centos.noarch
> 
> 
> On 04/13/2016 03:21 PM, Brett I. Holcomb wrote:
> > On Wed, 2016-04-13 at 15:09 -0700, Bill James wrote:
> > > I have a cluster working fine with 2 nodes.
> > > I'm trying to add a third and it is complaining:
> > > 
> > > 
> > > StorageServerAccessPermissionError: Permission settings on the
> > > specified 
> > > path do not allow access to the storage. Verify permission
> > > settings
> > > on 
> > > the specified storage path.: 'path = 
> > > /rhev/data-center/mnt/ovirt3-ks.test.j2noc.com:_ovirt-store_nfs'
> > > 
> > > 
> > > if I try the commands manually as vdsm they work fine and the
> > > volume
> > > mounts.
> > > 
> > > 
> > > [vdsm@ovirt4 test /]$ mkdir -p 
> > > /rhev/data-center/mnt/ovirt3-ks.test.j2noc.com:_ovirt-store_nfs
> > > [vdsm@ovirt4 test /]$ sudo -n /usr/bin/mount -t nfs -o 
> > > soft,nosharecache,timeo=600,retrans=6 
> > > ovirt3-ks.test.j2noc.com:/ovirt-store/nfs 
> > > /rhev/data-center/mnt/ovirt3-ks.test.j2noc.com:_ovirt-store_nfs
> > > [vdsm@ovirt4 test /]$ df -h 
> > > /rhev/data-center/mnt/ovirt3-ks.test.j2noc.com:_ovirt-store_nfs
> > > Filesystem Size  Used Avail Use%
> > > Mounted on
> > > ovirt3-ks.test.j2noc.com:/ovirt-store/nfs  1.1T  305G  759G  29% 
> > > /rhev/data-center/mnt/ovirt3-ks.test.j2noc.com:_ovirt-store_nfs
> > > 
> > > 
> > > After manually mounting the NFS volumes and activating the node
> > > it
> > > still 
> > > fails.
> > > 
> > > 
> > > 2016-04-13 14:55:16,559 WARN 
> > > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDire
> > > ctor
> > > ] 
> > > (DefaultQuartzScheduler_Worker-61) [64ceea1d] Correlation ID:
> > > 64ceea1d, 
> > > Job ID: a47b74c7-2ae0-43f9-9bdf-e50963a28895, Call Stack: null,
> > > Custom 
> > > Event ID: -1, Message: Host ovirt4.test.j2noc.com cannot access
> > > the 
> > > Storage Domain(s)  attached to the Data Center Default.
> > > Setting 
> > > Host state to Non-Operational.
> > > 
> > > 
> > > Not sure what "UNKNOWN" storage is, unless its one I deleted
> > > earlier 
> > > that somehow isn't really removed.
> > > 
> > > Also tried "reinstall" on node. same issue.
> > > 
> > > Attached are engine and vdsm logs.
> > > 
> > > 
> > > Thanks.
> > > 
> > > ___
> > > Users mailing list
> > > Users@ovirt.org
> > > http://lists.ovirt.org/mailman/listinfo/users
> > Try adding anonuid=36,anongid=36 to the mount and make sure 36:36
> > is
> > the owner group on the mount point.  I found this, http://www.ovirt
> > .org
> > /documentation/how-to/troubleshooting/troubleshooting-nfs-storage-
> > issues/, helpful.
> > 
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
>  
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
Try adding the anonuid=36,anongid=36 to the NFS mount options.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] nfs storage permission problem

2016-04-13 Thread Brett I. Holcomb
On Wed, 2016-04-13 at 15:09 -0700, Bill James wrote:
> I have a cluster working fine with 2 nodes.
> I'm trying to add a third and it is complaining:
> 
> 
> StorageServerAccessPermissionError: Permission settings on the
> specified 
> path do not allow access to the storage. Verify permission settings
> on 
> the specified storage path.: 'path = 
> /rhev/data-center/mnt/ovirt3-ks.test.j2noc.com:_ovirt-store_nfs'
> 
> 
> if I try the commands manually as vdsm they work fine and the volume
> mounts.
> 
> 
> [vdsm@ovirt4 test /]$ mkdir -p 
> /rhev/data-center/mnt/ovirt3-ks.test.j2noc.com:_ovirt-store_nfs
> [vdsm@ovirt4 test /]$ sudo -n /usr/bin/mount -t nfs -o 
> soft,nosharecache,timeo=600,retrans=6 
> ovirt3-ks.test.j2noc.com:/ovirt-store/nfs 
> /rhev/data-center/mnt/ovirt3-ks.test.j2noc.com:_ovirt-store_nfs
> [vdsm@ovirt4 test /]$ df -h 
> /rhev/data-center/mnt/ovirt3-ks.test.j2noc.com:_ovirt-store_nfs
> Filesystem Size  Used Avail Use%
> Mounted on
> ovirt3-ks.test.j2noc.com:/ovirt-store/nfs  1.1T  305G  759G  29% 
> /rhev/data-center/mnt/ovirt3-ks.test.j2noc.com:_ovirt-store_nfs
> 
> 
> After manually mounting the NFS volumes and activating the node it
> still 
> fails.
> 
> 
> 2016-04-13 14:55:16,559 WARN 
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector
> ] 
> (DefaultQuartzScheduler_Worker-61) [64ceea1d] Correlation ID:
> 64ceea1d, 
> Job ID: a47b74c7-2ae0-43f9-9bdf-e50963a28895, Call Stack: null,
> Custom 
> Event ID: -1, Message: Host ovirt4.test.j2noc.com cannot access the 
> Storage Domain(s)  attached to the Data Center Default.
> Setting 
> Host state to Non-Operational.
> 
> 
> Not sure what "UNKNOWN" storage is, unless its one I deleted earlier 
> that somehow isn't really removed.
> 
> Also tried "reinstall" on node. same issue.
> 
> Attached are engine and vdsm logs.
> 
> 
> Thanks.
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

Try adding anonuid=36,anongid=36 to the mount and make sure 36:36 is
the owner and group on the mount point. I found this helpful:
http://www.ovirt.org/documentation/how-to/troubleshooting/troubleshooting-nfs-storage-issues/
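On the NFS server side that would look something like this (the path is only an
example, adjust to your export):

/ovirt-store/nfs *(rw,sync,all_squash,anonuid=36,anongid=36)

plus "chown 36:36" on the exported directory, then "exportfs -ra".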

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS firewall rule

2016-03-23 Thread Yedidyah Bar David
On Tue, Mar 22, 2016 at 6:12 PM, Bill James  wrote:
> I thought NFS was pretty standard in use on ovirt systems.

I guess it's very common on hosts as a client, not sure about server.
I guess most setups use separate machines to host VMs and for storage.

> Why does it take
> a custom setup to enable NFS in firewall rule?

The default rules open only, AFAIU, mandatory stuff - ports for vdsm
communication, remote consoles, etc.

Best,

>
>
> https://bugzilla.redhat.com/show_bug.cgi?id=513
>
> added:
> engine-config --set IPTablesConfigSiteCustom="-A INPUT -p tcp -m multiport
> --dports 2049 -j ACCEPT"
>
>
> Thanks.
>
>
> On 03/19/2016 11:29 PM, Yedidyah Bar David wrote:
>>
>> On Fri, Mar 18, 2016 at 2:03 AM, Bill James  wrote:
>>>
>>> How do I make it that when ever I add or reinstall a hardware node that
>>> oVirt creates a rule for NFS, port 2049?
>>
>> Search for 'IPTablesConfigSiteCustom'.
>>
>> Best,
>>
>>> I have to either add it manually after ovirt removes it, or just tell
>>> ovirt
>>> not to touch firewall rules.
>>> Our ISO domain is not hosted by the ovirt-engine, fyi.
>>>
>>>
>>> ovirt-engine-3.6.3.4-1.el7.centos.noarch
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>>
>
>



-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS firewall rule

2016-03-22 Thread Bill James
I thought NFS was pretty standard in use on oVirt systems. Why does it 
take a custom setup to enable an NFS firewall rule?



https://bugzilla.redhat.com/show_bug.cgi?id=513

added:
engine-config --set IPTablesConfigSiteCustom="-A INPUT -p tcp -m 
multiport --dports 2049 -j ACCEPT"
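(As with other engine-config keys, my understanding is that the new value only
takes effect after restarting ovirt-engine, and the extra rule is then applied the
next time a host is added or reinstalled.)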



Thanks.


On 03/19/2016 11:29 PM, Yedidyah Bar David wrote:

On Fri, Mar 18, 2016 at 2:03 AM, Bill James  wrote:

How do I make it that when ever I add or reinstall a hardware node that
oVirt creates a rule for NFS, port 2049?

Search for 'IPTablesConfigSiteCustom'.

Best,


I have to either add it manually after ovirt removes it, or just tell ovirt
not to touch firewall rules.
Our ISO domain is not hosted by the ovirt-engine, fyi.


ovirt-engine-3.6.3.4-1.el7.centos.noarch

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS firewall rule

2016-03-20 Thread Yedidyah Bar David
On Fri, Mar 18, 2016 at 2:03 AM, Bill James  wrote:
> How do I make it that when ever I add or reinstall a hardware node that
> oVirt creates a rule for NFS, port 2049?

Search for 'IPTablesConfigSiteCustom'.

Best,

>
> I have to either add it manually after ovirt removes it, or just tell ovirt
> not to touch firewall rules.
> Our ISO domain is not hosted by the ovirt-engine, fyi.
>
>
> ovirt-engine-3.6.3.4-1.el7.centos.noarch
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users



-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] NFS firewall rule

2016-03-19 Thread Bill James
How do I make it so that whenever I add or reinstall a hardware node, 
oVirt creates a firewall rule for NFS, port 2049?


I have to either add it manually after ovirt removes it, or just tell 
ovirt not to touch firewall rules.

Our ISO domain is not hosted by the ovirt-engine, fyi.


ovirt-engine-3.6.3.4-1.el7.centos.noarch

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS SERVICE ERROR IN OVIRT

2016-01-27 Thread Nathanaël Blanchet

Hi,
If your NFS issue came after a 7.2 upgrade, you may be facing this naughty 
bug (which disappears after a reboot): 
https://bugzilla.redhat.com/show_bug.cgi?id=1214496.

If you can't reboot because of production, then do the following:
# systemctl restart rpcbind.socket
# systemctl restart rpc-statd

That's what happened to me too, and it helped me solve the issue.

Le 27/01/2016 08:04, Nir Soffer a écrit :
Please provide more information in the body of the mail, and attach 
engine and vdsm log showing the timeframe of the error.


On Tue, Jan 26, 2016 at 7:29 AM, Anzar Sainudeen > wrote:


Dear Team,

We have installed the new version of oVirt 3.6 on CentOS 7 using IBM
servers. We are facing the following issues; please find the
attached error reports and kindly support.

1.NFS service not starting

2.Unable to mount and use the storage in ovirt host

Any additional report, please let me know

*Many Thanks,*

Anzar Sainudeen

Group  Security Incharge

IT Network & Hardware Infra Division

Thumbay Group

P.O Box : 4184, Ajman, U.A.E

Tel: +971-6-7431333  - Ext 1303

Mob : +971-558633699 

Email : an...@it.thumbay.com 

Web : www.thumbay.com 






___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS SERVICE ERROR IN OVIRT

2016-01-26 Thread Nir Soffer
Please provide more information in the body of the mail, and attach engine
and vdsm log showing the timeframe of the error.

On Tue, Jan 26, 2016 at 7:29 AM, Anzar Sainudeen 
wrote:

> Dear Team,
>
>
>
> We are installed new version of ovirt 3.6 on centos 7 using IBM servers.
> We are facing the following issues. Please find the attached error reports
> and kindly support.
>
>
>
> 1.NFS service not starting
>
> 2.Unable to mount and use the storage in ovirt host
>
>
>
> Any additional report, please let me know
>
>
>
> *Many Thanks,*
>
>
>
> Anzar Sainudeen
>
> Group  Security Incharge
>
> IT Network & Hardware Infra Division
>
> Thumbay Group
>
> P.O Box : 4184, Ajman, U.A.E
>
> Tel: +971-6-7431333 - Ext 1303
>
> Mob : +971-558633699
>
> Email *:* an...@it.thumbay.com
>
> Web *:* www.thumbay.com
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS IO timeout configuration

2016-01-13 Thread Yaniv Kaul
On Tue, Jan 12, 2016 at 10:45 PM, Markus Stockhausen <
stockhau...@collogia.de> wrote:

> >> Von: Yaniv Kaul [yk...@redhat.com]
> >> Gesendet: Dienstag, 12. Januar 2016 13:15
> >> An: Markus Stockhausen
> >> Cc: users@ovirt.org; Mike Hildebrandt
> >> Betreff: Re: [ovirt-users] NFS IO timeout configuration
> >>
> >> On Tue, Jan 12, 2016 at 9:32 AM, Markus Stockhausen
> stockhau...@collogia.de> wrote:
> >> Hi there,
> >>
> >> we got a nasty situation yesterday in our OVirt 3.5.6 environment.
> >> We ran a LSM that failed during the cleanup operation. To be precise
> >> when the process deleted an image on the source NFS storage.
> >
> > Can you share with us your NFS server details?
> >Is the NFS connection healthy (can be seen with nfsstat)
> >Generally, delete on NFS should be a pretty quick operation.
> > Y.
>
> Hi Yaniv,
>
> we usually have no problems with our NFS server. From our observations we
> only have issues when deleting files with many extents. This applies to all
> OVirt images files. Several of them have more than 50.000 extents, a few
> even more than 300.000.
>
> > xfs_bmap 1cb5906f-65d8-4174-99b1-74f5b3cbc537
> ...
> 52976: [629122144..629130335]: 10986198592..10986206783
> 52977: [629130336..629130343]: 10986403456..10986403463
> 52978: [629130344..629138535]: 10986206792..10986214983
> 52979: [629138536..629138543]: 10986411656..10986411663
> 52980: [629138544..629145471]: 10986214992..10986221919
> 52981: [629145472..629145575]: 10809903560..10809903663
> 52982: [629145576..629145599]: 10737615056..10737615079
>
> Our XFS is mounted with:
>
> /dev/mapper/vg00-lvdata on /var/nas4 type xfs
> (rw,noatime,nodiratime,allocsize=16m)
>
> Why we use allocsize=16M? We once started with allocize=512MB. This
> led to sparse files that did not save much bytes. Because a single byte
> written
> resulted in a 512MB allocation. Thin allocation of these files resulted in
> long runtimes
> for formatting disks inside the VMS. So we reduced to 16MB as a balanced
> config
>
> This works quite well but not for remove operations.
>
> Better ideas?
>

Sounds like an XFS issue more than NFS.
I've consulted with one of our XFS gurus - here's his reply:

> For vm image files, users should set up extent size hints to define
> the minimum extent allocation size in a file - allocsize does
> nothing for random writes into sparse files. I typically use a hint
> of 1MB for all my vm images.
>
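If it helps anyone else: an extent size hint can be set with xfs_io, e.g.
(hypothetical path, and setting it on a directory makes newly created files
inherit the hint - existing files keep their current layout):

# xfs_io -c "extsize 1m" /var/nas4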

Y.


>
> Markus
>
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS IO timeout configuration

2016-01-12 Thread Nir Soffer
On Tue, Jan 12, 2016 at 9:32 AM, Markus Stockhausen
 wrote:
> Hi there,
>
> we got a nasty situation yesterday in our OVirt 3.5.6 environment.
> We ran a LSM that failed during the cleanup operation. To be precise
> when the process deleted an image on the source NFS storage.
>
> Engine log gives:
>
> 2016-01-11 20:49:45,120 INFO  
> [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] 
> (org.ovirt.thread.pool-8-thread-14) [77277f0] START, 
> DeleteImageGroupVDSCommand( storagePoolId = 
> 94ed7a19-fade-4bd6-83f2-2cbb2f730b95, ignoreFailoverLimit = false, 
> storageDomainId = 272ec473-6041-42ee-bd1a-732789dd18d4, imageGroupId = 
> aed132ef-703a-44d0-b875-db8c0d2c1a92, postZeros = false, forceDelete = 
> false), log id: b52d59c
> ...
> 2016-01-11 20:50:45,206 ERROR 
> [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] 
> (org.ovirt.thread.pool-8-thread-14) [77277f0] Failed in DeleteImageGroupVDS 
> method
>
> VDSM (SPM) log gives:
>
> Thread-97::DEBUG::2016-01-11 
> 20:49:45,737::fileSD::384::Storage.StorageDomain::(deleteImage) Removing 
> file: 
> /rhev/data-center/mnt/1.2.3.4:_var_nas2_OVirtIB/272ec473-6041-42ee-bd1a-732789dd18d4/images/_remojzBd1r/0d623afb-291e-4f4c-acba-caecb125c4ed
> ...
> Thread-97::ERROR::2016-01-11 
> 20:50:45,737::task::866::Storage.TaskManager.Task::(_setError) 
> Task=`cd477878-47b4-44b1-85a3-b5da19543a5e`::Unexpected error
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/task.py", line 873, in _run
> return fn(*args, **kargs)
>   File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
> res = f(*args, **kwargs)
>   File "/usr/share/vdsm/storage/hsm.py", line 1549, in deleteImage
> pool.deleteImage(dom, imgUUID, volsByImg)
>   File "/usr/share/vdsm/storage/securable.py", line 77, in wrapper
> return method(self, *args, **kwargs)
>   File "/usr/share/vdsm/storage/sp.py", line 1884, in deleteImage
> domain.deleteImage(domain.sdUUID, imgUUID, volsByImg)
>   File "/usr/share/vdsm/storage/fileSD.py", line 385, in deleteImage
> self.oop.os.remove(volPath)
>   File "/usr/share/vdsm/storage/outOfProcess.py", line 245, in remove
> self._iop.unlink(path)
>   File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 455, in 
> unlink
> return self._sendCommand("unlink", {"path": path}, self.timeout)
>   File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 385, in 
> _sendCommand
> raise Timeout(os.strerror(errno.ETIMEDOUT))
> Timeout: Connection timed out

You stumbled into  https://bugzilla.redhat.com/1270220

>
> Reading the docs I got the idea that vdsm default 60 second timeout
> for IO operations might be changed within /etc/vdsm/vdsm.conf
>
> [irs]
> process_pool_timeout = 180
>
> Can anyone confirm that this will solve the problem?

Yes, this is the correct option.

But note that deleting an image on nfs means 3 unlink operations per volume.
If you have an image with one snapshot, that means 2 volumes, and 6
unlink calls.

If unlink takes 70 seconds (timing out with the current 60 second
timeout), deleting
the image with one snapshot will take 420 seconds.

On the engine side, the engine waits until deleteImage finishes, or until vdsTimeout
expires (by default 180 seconds), so you may need to increase the engine
timeout as well.
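For reference, the two knobs mentioned here (values only as examples):

/etc/vdsm/vdsm.conf on the host:
[irs]
process_pool_timeout = 180

and on the engine, assuming the vdsTimeout key, followed by an ovirt-engine restart:
# engine-config -s vdsTimeout=360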

While the engine waits for deleteImage to finish, no other SPM operation can run.

So increasing the timeout is not the correct solution. You should check why your
storage needs more than 60 seconds to perform an unlink operation and change
your setup so unlink works in a timely manner.

As a start, it would be useful to see the results of nfsstat on the
host experiencing
the slow deletes.
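("nfsstat -c" for the client-side RPC counters and "nfsstat -m" for the per-mount
options would be a good start.)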

In master we now perform the deleteImage operation in a background task, so
slow unlink should not affect the engine side, and you can increase
process_pool_timeout as needed.
See 
https://github.com/oVirt/vdsm/commit/3239e74d1a9087352fca454926224f47272da6c5

We don't plan to backport this change to 3.6 since it is risky and
does not fix the root
cause, which is the slow nfs server, but if you want to test it, I can
make a patch for 3.6.

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS IO timeout configuration

2016-01-12 Thread Vinzenz Feenstra

> On Jan 12, 2016, at 8:32 AM, Markus Stockhausen  
> wrote:
> 
> Hi there,
> 
> we got a nasty situation yesterday in our OVirt 3.5.6 environment. 
> We ran a LSM that failed during the cleanup operation. To be precise 
> when the process deleted an image on the source NFS storage. 
> 
> Engine log gives:
> 
> 2016-01-11 20:49:45,120 INFO  
> [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] 
> (org.ovirt.thread.pool-8-thread-14) [77277f0] START, 
> DeleteImageGroupVDSCommand( storagePoolId = 
> 94ed7a19-fade-4bd6-83f2-2cbb2f730b95, ignoreFailoverLimit = false, 
> storageDomainId = 272ec473-6041-42ee-bd1a-732789dd18d4, imageGroupId = 
> aed132ef-703a-44d0-b875-db8c0d2c1a92, postZeros = false, forceDelete = 
> false), log id: b52d59c
> ...
> 2016-01-11 20:50:45,206 ERROR 
> [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] 
> (org.ovirt.thread.pool-8-thread-14) [77277f0] Failed in DeleteImageGroupVDS 
> method
> 
> VDSM (SPM) log gives:
> 
> Thread-97::DEBUG::2016-01-11 
> 20:49:45,737::fileSD::384::Storage.StorageDomain::(deleteImage) Removing 
> file: 
> /rhev/data-center/mnt/1.2.3.4:_var_nas2_OVirtIB/272ec473-6041-42ee-bd1a-732789dd18d4/images/_remojzBd1r/0d623afb-291e-4f4c-acba-caecb125c4ed
> ...
> Thread-97::ERROR::2016-01-11 
> 20:50:45,737::task::866::Storage.TaskManager.Task::(_setError) 
> Task=`cd477878-47b4-44b1-85a3-b5da19543a5e`::Unexpected error
> Traceback (most recent call last):
>  File "/usr/share/vdsm/storage/task.py", line 873, in _run
>return fn(*args, **kargs)
>  File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
>res = f(*args, **kwargs)
>  File "/usr/share/vdsm/storage/hsm.py", line 1549, in deleteImage
>pool.deleteImage(dom, imgUUID, volsByImg)
>  File "/usr/share/vdsm/storage/securable.py", line 77, in wrapper
>return method(self, *args, **kwargs)
>  File "/usr/share/vdsm/storage/sp.py", line 1884, in deleteImage
>domain.deleteImage(domain.sdUUID, imgUUID, volsByImg)
>  File "/usr/share/vdsm/storage/fileSD.py", line 385, in deleteImage
>self.oop.os.remove(volPath)
>  File "/usr/share/vdsm/storage/outOfProcess.py", line 245, in remove
>self._iop.unlink(path)
>  File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 455, in 
> unlink
>return self._sendCommand("unlink", {"path": path}, self.timeout)
>  File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 385, in 
> _sendCommand
>raise Timeout(os.strerror(errno.ETIMEDOUT))
> Timeout: Connection timed out
> 
> Reading the docs I got the idea that vdsm default 60 second timeout
> for IO operations might be changed within /etc/vdsm/vdsm.conf
> 
> [irs]
> process_pool_timeout = 180
> 
> Can anyone confirm that this will solve the problem?

Well, it will increase the time to 3 minutes, and it takes effect after restarting 
vdsm and supervdsm. Whether that is enough might depend on your setup.

> 
> Markus
> 
> 
> 
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS IO timeout configuration

2016-01-12 Thread Vinzenz Feenstra

> On Jan 12, 2016, at 9:11 AM, Markus Stockhausen <stockhau...@collogia.de> 
> wrote:
> 
>> Von: Vinzenz Feenstra [vfeen...@redhat.com]
>> Gesendet: Dienstag, 12. Januar 2016 09:00
>> An: Markus Stockhausen
>> Cc: users@ovirt.org; Mike Hildebrandt
>> Betreff: Re: [ovirt-users] NFS IO timeout configuration
>>> Hi there,
>>> 
>>> we got a nasty situation yesterday in our OVirt 3.5.6 environment. 
>>> We ran a LSM that failed during the cleanup operation. To be precise 
>>> when the process deleted an image on the source NFS storage. 
>>> 
> ...
>>> 
>>> Reading the docs I got the idea that vdsm default 60 second timeout
>>> for IO operations might be changed within /etc/vdsm/vdsm.conf
>>> 
>>> [irs]
>>> process_pool_timeout = 180
>>> 
>>> Can anyone confirm that this will solve the problem?
>> 
>> Well it will increase the time to 3 minutes and takes effect after 
>> restarting vdsm and supervdsm - If that is enough that might depend on your 
>> setup.
>> 
> 
> Thanks Vinzenz,
> 
> maybe my question was not 100% correct. I need to know, if this parameter 
> really influences the described timeout behaviour. The best value of the 
> parameter must be checked of course.

Well I might be wrong, but from what I can see that is the right value to 
configure this.

> 
> Markus

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS IO timeout configuration

2016-01-12 Thread Markus Stockhausen
> Von: Vinzenz Feenstra [vfeen...@redhat.com]
> Gesendet: Dienstag, 12. Januar 2016 09:00
> An: Markus Stockhausen
> Cc: users@ovirt.org; Mike Hildebrandt
> Betreff: Re: [ovirt-users] NFS IO timeout configuration
> > Hi there,
> > 
> > we got a nasty situation yesterday in our OVirt 3.5.6 environment. 
> > We ran a LSM that failed during the cleanup operation. To be precise 
> > when the process deleted an image on the source NFS storage. 
> > 
...
> >
> > Reading the docs I got the idea that vdsm default 60 second timeout
> > for IO operations might be changed within /etc/vdsm/vdsm.conf
> >
> > [irs]
> > process_pool_timeout = 180
> >
> > Can anyone confirm that this will solve the problem?
> 
> Well it will increase the time to 3 minutes and takes effect after restarting 
> vdsm and supervdsm - If that is enough that might depend on your setup.
> 

Thanks Vinzenz,

maybe my question was not 100% clear. I need to know whether this parameter 
really influences the described timeout behaviour. The best value for the 
parameter of course still has to be determined.

Markus


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS IO timeout configuration

2016-01-12 Thread Yaniv Kaul
On Tue, Jan 12, 2016 at 9:32 AM, Markus Stockhausen  wrote:

> Hi there,
>
> we got a nasty situation yesterday in our OVirt 3.5.6 environment.
> We ran a LSM that failed during the cleanup operation. To be precise
> when the process deleted an image on the source NFS storage.
>

Can you share with us your NFS server details?
Is the NFS connection healthy (can be seen with nfsstat)
Generally, delete on NFS should be a pretty quick operation.
Y.


>
> Engine log gives:
>
> 2016-01-11 20:49:45,120 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand]
> (org.ovirt.thread.pool-8-thread-14) [77277f0] START,
> DeleteImageGroupVDSCommand( storagePoolId =
> 94ed7a19-fade-4bd6-83f2-2cbb2f730b95, ignoreFailoverLimit = false,
> storageDomainId = 272ec473-6041-42ee-bd1a-732789dd18d4, imageGroupId =
> aed132ef-703a-44d0-b875-db8c0d2c1a92, postZeros = false, forceDelete =
> false), log id: b52d59c
> ...
> 2016-01-11 20:50:45,206 ERROR
> [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand]
> (org.ovirt.thread.pool-8-thread-14) [77277f0] Failed in DeleteImageGroupVDS
> method
>
> VDSM (SPM) log gives:
>
> Thread-97::DEBUG::2016-01-11
> 20:49:45,737::fileSD::384::Storage.StorageDomain::(deleteImage) Removing
> file: /rhev/data-center/mnt/1.2.3.4:
> _var_nas2_OVirtIB/272ec473-6041-42ee-bd1a-732789dd18d4/images/_remojzBd1r/0d623afb-291e-4f4c-acba-caecb125c4ed
> ...
> Thread-97::ERROR::2016-01-11
> 20:50:45,737::task::866::Storage.TaskManager.Task::(_setError)
> Task=`cd477878-47b4-44b1-85a3-b5da19543a5e`::Unexpected error
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/task.py", line 873, in _run
> return fn(*args, **kargs)
>   File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
> res = f(*args, **kwargs)
>   File "/usr/share/vdsm/storage/hsm.py", line 1549, in deleteImage
> pool.deleteImage(dom, imgUUID, volsByImg)
>   File "/usr/share/vdsm/storage/securable.py", line 77, in wrapper
> return method(self, *args, **kwargs)
>   File "/usr/share/vdsm/storage/sp.py", line 1884, in deleteImage
> domain.deleteImage(domain.sdUUID, imgUUID, volsByImg)
>   File "/usr/share/vdsm/storage/fileSD.py", line 385, in deleteImage
> self.oop.os.remove(volPath)
>   File "/usr/share/vdsm/storage/outOfProcess.py", line 245, in remove
> self._iop.unlink(path)
>   File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 455,
> in unlink
> return self._sendCommand("unlink", {"path": path}, self.timeout)
>   File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 385,
> in _sendCommand
> raise Timeout(os.strerror(errno.ETIMEDOUT))
> Timeout: Connection timed out
>
> Reading the docs I got the idea that vdsm default 60 second timeout
> for IO operations might be changed within /etc/vdsm/vdsm.conf
>
> [irs]
> process_pool_timeout = 180
>
> Can anyone confirm that this will solve the problem?
>
> Markus
>
>
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS IO timeout configuration

2016-01-12 Thread Markus Stockhausen
>> From: Yaniv Kaul [yk...@redhat.com]
>> Sent: Tuesday, 12 January 2016 13:15
>> To: Markus Stockhausen
>> Cc: users@ovirt.org; Mike Hildebrandt
>> Subject: Re: [ovirt-users] NFS IO timeout configuration
>> 
>> On Tue, Jan 12, 2016 at 9:32 AM, Markus Stockhausen stockhau...@collogia.de> 
>> wrote:
>> Hi there,
>> 
>> we got a nasty situation yesterday in our OVirt 3.5.6 environment.
>> We ran a LSM that failed during the cleanup operation. To be precise
>> when the process deleted an image on the source NFS storage.
>
> Can you share with us your NFS server details? 
>Is the NFS connection healthy (can be seen with nfsstat)
>Generally, delete on NFS should be a pretty quick operation. 
> Y.

Hi Yaniv,

we usually have no problems with our NFS server. From our observations we 
only have issues when deleting files with many extents. This applies to all 
OVirt image files. Several of them have more than 50.000 extents, a few 
even more than 300.000.

> xfs_bmap 1cb5906f-65d8-4174-99b1-74f5b3cbc537
...
52976: [629122144..629130335]: 10986198592..10986206783
52977: [629130336..629130343]: 10986403456..10986403463
52978: [629130344..629138535]: 10986206792..10986214983
52979: [629138536..629138543]: 10986411656..10986411663
52980: [629138544..629145471]: 10986214992..10986221919
52981: [629145472..629145575]: 10809903560..10809903663
52982: [629145576..629145599]: 10737615056..10737615079

Our XFS is mounted with:

/dev/mapper/vg00-lvdata on /var/nas4 type xfs 
(rw,noatime,nodiratime,allocsize=16m)

Why do we use allocsize=16M? We once started with allocsize=512MB. That
led to sparse files that did not save much space, because a single byte written
resulted in a 512MB allocation. Thin allocation of these files also resulted in long
runtimes when formatting disks inside the VMs. So we reduced it to 16MB as a balanced config.

This works quite well but not for remove operations.

Better ideas?
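
One hedged option, assuming the image files can be defragmented online with the standard
XFS tools, is to reduce the extent count of a file before deleting it:

# rough extent count of a suspect image (output lines minus the two header lines)
xfs_bmap -v 1cb5906f-65d8-4174-99b1-74f5b3cbc537 | wc -l
# rewrite the file into fewer, larger extents; the subsequent unlink should then be cheaper
xfs_fsr -v 1cb5906f-65d8-4174-99b1-74f5b3cbc537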

Markus






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] NFS IO timeout configuration

2016-01-11 Thread Markus Stockhausen
Hi there,

we got a nasty situation yesterday in our OVirt 3.5.6 environment. 
We ran a LSM that failed during the cleanup operation. To be precise 
when the process deleted an image on the source NFS storage. 

Engine log gives:

2016-01-11 20:49:45,120 INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] 
(org.ovirt.thread.pool-8-thread-14) [77277f0] START, 
DeleteImageGroupVDSCommand( storagePoolId = 
94ed7a19-fade-4bd6-83f2-2cbb2f730b95, ignoreFailoverLimit = false, 
storageDomainId = 272ec473-6041-42ee-bd1a-732789dd18d4, imageGroupId = 
aed132ef-703a-44d0-b875-db8c0d2c1a92, postZeros = false, forceDelete = false), 
log id: b52d59c
...
2016-01-11 20:50:45,206 ERROR 
[org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] 
(org.ovirt.thread.pool-8-thread-14) [77277f0] Failed in DeleteImageGroupVDS 
method

VDSM (SPM) log gives:

Thread-97::DEBUG::2016-01-11 
20:49:45,737::fileSD::384::Storage.StorageDomain::(deleteImage) Removing file: 
/rhev/data-center/mnt/1.2.3.4:_var_nas2_OVirtIB/272ec473-6041-42ee-bd1a-732789dd18d4/images/_remojzBd1r/0d623afb-291e-4f4c-acba-caecb125c4ed
...
Thread-97::ERROR::2016-01-11 
20:50:45,737::task::866::Storage.TaskManager.Task::(_setError) 
Task=`cd477878-47b4-44b1-85a3-b5da19543a5e`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 873, in _run
return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 1549, in deleteImage
pool.deleteImage(dom, imgUUID, volsByImg)
  File "/usr/share/vdsm/storage/securable.py", line 77, in wrapper
return method(self, *args, **kwargs)
  File "/usr/share/vdsm/storage/sp.py", line 1884, in deleteImage
domain.deleteImage(domain.sdUUID, imgUUID, volsByImg)
  File "/usr/share/vdsm/storage/fileSD.py", line 385, in deleteImage
self.oop.os.remove(volPath)
  File "/usr/share/vdsm/storage/outOfProcess.py", line 245, in remove
self._iop.unlink(path)
  File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 455, in 
unlink
return self._sendCommand("unlink", {"path": path}, self.timeout)
  File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 385, in 
_sendCommand
raise Timeout(os.strerror(errno.ETIMEDOUT))
Timeout: Connection timed out

Reading the docs I got the idea that vdsm default 60 second timeout
for IO operations might be changed within /etc/vdsm/vdsm.conf

[irs]
process_pool_timeout = 180

Can anyone confirm that this will solve the problem?
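
For reference, after editing /etc/vdsm/vdsm.conf the value is only picked up once vdsmd is
restarted (a sketch, assuming the host can be put into maintenance first):

service vdsmd restart        # EL6
systemctl restart vdsmd      # EL7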

Markus








___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS Domain with 1GB limit

2015-12-03 Thread kontakt

Hi,

i just ran "yum install ovirt-engine" and "engine-setup" and set up 
"/var/lib/exports/iso", which is the default, as a shared NFS domain. And i 
think that is why ovirt sets this up with less than 1GB of space by default. So i need 
an explanation why this is so, and if it is, i would next time 
answer no for the nfs domain in the installation process.


On 2015-12-03 12:51, Simone Tiraboschi wrote:

On Thu, Dec 3, 2015 at 12:02 PM,  wrote:


Hello,
i am new with ovirt and i installed ovirt engine on centos7. After
i logged in the administrator web gui i see the default nfs domain
with less than 1gb space, while i have more than 200gb on disk. The
problem is, that i cant change the value and found no informtion in
the manuals too. Can someone explain this to me?
thx


There no default NFS data domain after engine-setup. It's up to the
user to create the first regular data domain.
Engine-setup will simply optionally creates an NFS ISO storage domain
on the engine machines.
 


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users [1]




Links:
--
[1] http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] NFS Domain with 1GB limit

2015-12-03 Thread kontakt

Hello,
i am new to ovirt and i installed ovirt engine on centos7. After i 
logged in to the administrator web gui i see the default nfs domain with 
less than 1gb of space, while i have more than 200gb on disk. The problem 
is that i can't change the value and found no information in the manuals 
either. Can someone explain this to me?

thx

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS Domain with 1GB limit

2015-12-03 Thread Simone Tiraboschi
On Thu, Dec 3, 2015 at 12:02 PM,  wrote:

> Hello,
> i am new with ovirt and i installed ovirt engine on centos7. After i
> logged in the administrator web gui i see the default nfs domain with less
> than 1gb space, while i have more than 200gb on disk. The problem is, that
> i cant change the value and found no informtion in the manuals too. Can
> someone explain this to me?
> thx
>
>
There is no default NFS data domain after engine-setup. It's up to the user to
create the first regular data domain.
Engine-setup will simply, optionally, create an NFS ISO storage domain on
the engine machine.


> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS Domain with 1GB limit

2015-12-03 Thread Simone Tiraboschi
On Thu, Dec 3, 2015 at 1:40 PM,  wrote:

> Hi,
>
> i ust run  "yum install ovirt-engine" and "engine-setup" and setup
> "/var/lib/exports/iso", which is default, as shared NFS Domain. And i think
> thats why ovirt setups this with default <1GB of space. So i need an
> explanation why this is so and if this is so i whould next time answer with
> no for nfs domain in installation process.
>
>
The domain created by engine-setup under /var/lib/exports/iso should be an
ISO storage domain (just for ISO images) not a Data storage domain (for
VMs).
Can you please check that and add your first regular data storage domain?



>
> On 2015-12-03 12:51, Simone Tiraboschi wrote:
>
>> On Thu, Dec 3, 2015 at 12:02 PM,  wrote:
>>
>> Hello,
>>> i am new with ovirt and i installed ovirt engine on centos7. After
>>> i logged in the administrator web gui i see the default nfs domain
>>> with less than 1gb space, while i have more than 200gb on disk. The
>>> problem is, that i cant change the value and found no informtion in
>>> the manuals too. Can someone explain this to me?
>>> thx
>>>
>>
>> There no default NFS data domain after engine-setup. It's up to the
>> user to create the first regular data domain.
>> Engine-setup will simply optionally creates an NFS ISO storage domain
>> on the engine machines.
>>
>>
>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users [1]
>>>
>>
>>
>>
>> Links:
>> --
>> [1] http://lists.ovirt.org/mailman/listinfo/users
>>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS Domain with 1GB limit

2015-12-03 Thread kontakt

Hi,

yes it's an iso domain and yes i can only create a data domain. but the 
question is why is it limited to 1gb? and why can't i change it? what 
are the reasons for that?

thx

On 2015-12-03 14:25, Simone Tiraboschi wrote:

On Thu, Dec 3, 2015 at 1:40 PM,  wrote:


Hi,

i ust run  "yum install ovirt-engine" and "engine-setup" and setup
"/var/lib/exports/iso", which is default, as shared NFS Domain. And
i think thats why ovirt setups this with default <1GB of space. So i
need an explanation why this is so and if this is so i whould next
time answer with no for nfs domain in installation process.


The domain created by engine-setup under /var/lib/exports/iso should
be an ISO storage domain (just for ISO images) not a Data storage
domain (for VMs).
Can you please check that and add your first regular data storage
domain?

 


On 2015-12-03 12:51, Simone Tiraboschi wrote:

On Thu, Dec 3, 2015 at 12:02 PM,  wrote:

Hello,
i am new with ovirt and i installed ovirt engine on centos7. After
i logged in the administrator web gui i see the default nfs domain
with less than 1gb space, while i have more than 200gb on disk. The
problem is, that i cant change the value and found no informtion in
the manuals too. Can someone explain this to me?
thx

There no default NFS data domain after engine-setup. It's up to the
user to create the first regular data domain.
Engine-setup will simply optionally creates an NFS ISO storage
domain
on the engine machines.
 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users [1] [1]

Links:
--
[1] http://lists.ovirt.org/mailman/listinfo/users [1]


 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users [1]



Links:
--
[1] http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS Domain with 1GB limit

2015-12-03 Thread Joop
On 3-12-2015 14:41, kont...@taste-of-it.de wrote:
> Hi,
>
> yes its an iso domain and yes i can only create a data domain. but the
> question is why it is limited to 1gb? and why i cant changed it? what
> are the reasons for that?
Just guessing towards your env but what does df -h /var or df -h
/var/lib say?
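
In other words, something along these lines on the engine machine (assuming the ISO domain
sits in the default /var/lib/exports/iso location):

df -h /var /var/lib/exports/iso

The free space reported there should roughly match what the ISO domain shows in the web UI.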

Joop



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS Domain with 1GB limit

2015-12-03 Thread Simone Tiraboschi
On Thu, Dec 3, 2015 at 4:14 PM, Joop  wrote:

> On 3-12-2015 14:41, kont...@taste-of-it.de wrote:
> > Hi,
> >
> > yes its an iso domain and yes i can only create a data domain. but the
> > question is why it is limited to 1gb? and why i cant changed it? what
> > are the reasons for that?
> Just guessing towards your env but what does df -h /var or df -h
> /var/lib say?
>
>
Another question, did you already added the first host?



> Joop
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS Domain with 1GB limit

2015-12-03 Thread Alan Murrell

On 03/12/2015 5:41 AM, kont...@taste-of-it.de wrote:

yes its an iso domain and yes i can only create a data domain. but the
question is why it is limited to 1gb?


When you create the ISO domain during engine-setup, I believe the ISO 
domain is created on the Engine host itself.


Can you see how much space is available under /var/lib on the Engine host?

Regards,

Alan

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] NFS mounts default to UDP?

2015-09-08 Thread Markus Stockhausen
Hi there,


we noticed that a newly created NFS data domain is mounted with
UDP protocol. Does anyone know if that is the desired behaviour
of current OVirt versions?

ovirtnode# mount -a
...
10.10.30.254:/var/nas4/OVirtIB on 
/rhev/data-center/mnt/10.10.30.254:_var_nas4_OVirtIB 
type nfs (rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,soft,
nosharecache,proto=udp,timeo=600,retrans=6,sec=sys,mountaddr=10.10.30.254,
mountvers=3,mountport=48383,mountproto=udp,local_lock=none,addr=10.10.30.254)
...

See also https://bugzilla.redhat.com/show_bug.cgi?id=1261178
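
Until that is resolved, a hedged workaround is to check what the running mount actually
negotiated and, if needed, force TCP via the domain's additional mount options (the exact
field name in the UI may differ between versions):

# confirm the transport currently in use
nfsstat -m | grep -A1 OVirtIB
# candidate extra mount option when (re)attaching the storage domain
proto=tcp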

Best regards.

Markus



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS export domain still remain in Preparing for maintenance

2015-05-13 Thread Maor Lipchuk
Hi NUNIN,

I'm not sure that I clearly understood the problem.
You wrote that your NFS export is attached to a 6.6 cluster, though a cluster 
is mainly an entity which contains Hosts.

If it is the Host that was preparing for maintenance then it could be that 
there are VMs which are running on that Host which are currently during live 
migration.
In that case you could either manually migrate those VMs, shut them down, or 
simply move the Host back to active.
Is that indeed the issue? If not, can you elaborate a bit more, please?
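
If it is the export storage domain itself that is stuck in "Preparing for maintenance", a
hedged first check (oVirt 3.5-era vdsClient assumed) is whether a task is still running
against the pool and holding the domain in that state:

# on the SPM host
vdsClient -s 0 getAllTasksStatuses
# and whether the export is still mounted and reachable from the hosts
mount | grep -i export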


Thanks,
Maor



- Original Message -
 From: NUNIN Roberto roberto.nu...@comifar.it
 To: users@ovirt.org
 Sent: Tuesday, May 12, 2015 5:17:39 PM
 Subject: [ovirt-users] NFS export domain still remain in Preparing for   
 maintenance
 
 
 
 Hi all
 
 
 
 We are using oVirt engine 3.5.1-0.0 on Centos 6.6
 
 We have two DC. One with hosts using vdsm-4.16.10-8.gitc937927.el7.x86_64,
 the other vdsm-4.16.12-7.gita30da75.el6.x86_64 on Centos6.6
 
 No hosted-engine, it run on a dedicates VM, outside oVirt.
 
 
 
 Behavior: When try to put the NFS export currently active and attached to the
 6.6 cluster, used to move VM from one DC to the other, this remain
 indefinitely in “Preparing for maintenance phase”.
 
 
 
 No DNS resolution issue in place. All parties involved are solved directly
 and via reverse resolution.
 
 I’ve read about the issue on el7 and IPv6 bug, but here we have the problem
 on Centos 6.6 hosts.
 
 
 
 Any idea/suggestion/further investigation ?
 
 
 
 Can we reinitialize the NFS export in some way ? Only erasing content ?
 
 Thanks in advance for any suggestion.
 
 
 
 
 
 Roberto Nunin
 
 Italy
 
 
 
 
 Questo messaggio e' indirizzato esclusivamente al destinatario indicato e
 potrebbe contenere informazioni confidenziali, riservate o proprietarie.
 Qualora la presente venisse ricevuta per errore, si prega di segnalarlo
 immediatamente al mittente, cancellando l'originale e ogni sua copia e
 distruggendo eventuali copie cartacee. Ogni altro uso e' strettamente
 proibito e potrebbe essere fonte di violazione di legge.
 
 This message is for the designated recipient only and may contain privileged,
 proprietary, or otherwise private information. If you have received it in
 error, please notify the sender immediately, deleting the original and all
 copies and destroying any hard copies. Any other use is strictly prohibited
 and may be unlawful.
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] NFS export domain still remain in Preparing for maintenance

2015-05-12 Thread NUNIN Roberto
Hi all

We are using oVirt engine 3.5.1-0.0 on Centos 6.6
We have two DC. One with hosts using vdsm-4.16.10-8.gitc937927.el7.x86_64, the 
other vdsm-4.16.12-7.gita30da75.el6.x86_64 on Centos6.6
No hosted-engine; it runs on a dedicated VM, outside oVirt.

Behavior: When we try to put into maintenance the NFS export domain that is currently active 
and attached to the 6.6 cluster (used to move VMs from one DC to the other), it remains 
indefinitely in the Preparing for maintenance phase.

No DNS resolution issues in place. All parties involved resolve correctly, both directly and 
via reverse lookup.
I've read about the el7/IPv6 bug, but here we have the problem on 
Centos 6.6 hosts.

Any idea/suggestion/further investigation ?

Can we reinitialize the NFS export domain in some way? Only by erasing its content?
Thanks in advance for any suggestion.


Roberto Nunin
Italy



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS storage domain fail after engine upgrade [SOLVED]

2015-04-28 Thread Rik Theys

Hi,

The root cause of my problem was that there was a stale NFS mount on my 
hosts. They still had an nfs mount active.


Killing the ioprocess processes that were keeping the mount busy and 
then unmounting the nfs mounts allowed me to activate the iso and export 
domain again.
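
For anyone hitting the same thing, roughly what that amounted to (a sketch; the mount path
is illustrative, taken from the engine log further down):

# find the stale mount and whatever is holding it open
mount | grep /rhev/data-center/mnt
fuser -vm /rhev/data-center/mnt/iron:_var_lib_exports_export-domain
# kill the reported ioprocess PIDs, then unmount (add -l for a lazy unmount if still busy)
umount /rhev/data-center/mnt/iron:_var_lib_exports_export-domain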


Regards,

Rik

On 04/28/2015 01:48 PM, Rik Theys wrote:

Hi,

I migrated my engine from CentOS 6 to CentOS 7 by taking an
engine-backup on the CentOS 6 install and running the restore on a
CentOS 7.1 install.

This worked rather well. I can log into the admin webinterface and see
my still running VM's.

The only issue I'm facing is that the hosts can no longer access the
export and ISO domain (which are nfs exports on my engine).

When I try to activate the storage domain on a host I get the following
message in the engine log (see below).

It seems the engine thinks the storage domain does not exist. I copied
the files from the old installation into the same directories on the new
installation and I can nfs mount them manually from the hosts. I can
also nfs mount it on the engine itself.

Any idea on how to debug this?

My engine is running 3.5.1 (actually 3.5.2 now as it just got upgraded,
but the upgrade did not change anything regarding this bug).

Is there a way to remove the export/iso domain? I can not detach it from
my data centers using the web interface.

Regards,

Rik


2015-04-28 12:59:19,271 INFO
[org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand]
(ajp--127.0.0.1-8702-8) [450f2bb2] Lock Acquired to object EngineLock
[exclusiveLocks= key: 31ba6486-d4ef-45ae-a184-8296185ef79b value: STORAGE
, sharedLocks= ]
2015-04-28 12:59:19,330 INFO
[org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand]
(org.ovirt.thread.pool-8-thread-25) [450f2bb2] Running command:
ActivateStorageDomainCommand internal: false. Entities affected :  ID:
31ba6486-d4ef-45ae-a184-8296185ef79b Type: StorageAction group
MANIPULATE_STORAGE_DOMAIN with role type ADMIN
2015-04-28 12:59:19,360 INFO
[org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand]
(org.ovirt.thread.pool-8-thread-25) [450f2bb2] Lock freed to object
EngineLock [exclusiveLocks= key: 31ba6486-d4ef-45ae-a184-8296185ef79b
value: STORAGE
, sharedLocks= ]
2015-04-28 12:59:19,362 INFO
[org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand]
(org.ovirt.thread.pool-8-thread-25) [450f2bb2] ActivateStorage Domain.
Before Connect all hosts to pool. Time:4/28/15 12:59 PM
2015-04-28 12:59:19,383 INFO
[org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand]
(org.ovirt.thread.pool-8-thread-28) [3e09aa16] Running command:
ConnectStorageToVdsCommand internal: true. Entities affected :  ID:
aaa0----123456789aaa Type: SystemAction group
CREATE_STORAGE_DOMAIN with role type ADMIN
2015-04-28 12:59:19,388 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(org.ovirt.thread.pool-8-thread-28) [3e09aa16] START,
ConnectStorageServerVDSCommand(HostName = stadius-virt2, HostId =
7212971a-d38a-42e7-8e6a-24d3396dfa6a, storagePoolId =
----, storageType = NFS, connectionList
= [{ id: 5f18ed21-8c71-4e71-874a-a6a8594c3138, connection:
iron:/var/lib/exports/export-domain, iqn: null, vfsType: null,
mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null
};]), log id: 13d9ec07
2015-04-28 12:59:19,409 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(org.ovirt.thread.pool-8-thread-28) [3e09aa16] FINISH,
ConnectStorageServerVDSCommand, return:
{5f18ed21-8c71-4e71-874a-a6a8594c3138=0}, log id: 13d9ec07
2015-04-28 12:59:19,417 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand]
(org.ovirt.thread.pool-8-thread-25) [450f2bb2] START,
ActivateStorageDomainVDSCommand( storagePoolId =
e7bdba88-e718-41a9-8d2b-0ca79c517630, ignoreFailoverLimit = false,
storageDomainId = 31ba6486-d4ef-45ae-a184-8296185ef79b), log id: 1da2c4c4
2015-04-28 12:59:19,774 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand]
(org.ovirt.thread.pool-8-thread-25) [450f2bb2] Failed in
ActivateStorageDomainVDS method
2015-04-28 12:59:19,781 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(org.ovirt.thread.pool-8-thread-25) [450f2bb2]
IrsBroker::Failed::ActivateStorageDomainVDS due to: IRSErrorException:
IRSGenericException: IRSErrorException: Failed to
ActivateStorageDomainVDS, error = Storage domain does not exist:
('31ba6486-d4ef-45ae-a184-8296185ef79b',), code = 358
2015-04-28 12:59:19,793 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand]
(org.ovirt.thread.pool-8-thread-25) [450f2bb2] FINISH,
ActivateStorageDomainVDSCommand, log id: 1da2c4c4
2015-04-28 12:59:19,794 ERROR
[org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand]
(org.ovirt.thread.pool-8-thread-25) [450f2bb2] Command
org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand throw Vdc
Bll exception. With error 

Re: [ovirt-users] NFS Storage: domain locked by NFS-provider?

2015-04-13 Thread Allon Mureinik
We use sanlock for locking. 

Can you add the log files with the error? 

- Original Message -

 From: shimano shim...@go2.pl
 To: users@ovirt.org
 Sent: Friday, April 10, 2015 2:56:58 PM
 Subject: [ovirt-users] NFS Storage: domain locked by NFS-provider?

 Hi all,

 I tried to configure NFS Storage Domain with load balance by configuring
 different IPs on Hosts for the same NFS-hostname and running few NFS servers
 on those machines (NFS exports from MooseFS cluster).

 Unfortunatelly after correct add and attach Storage Domain, all Hosts are
 switching to Non-operational because of problems with access to that Storage
 Domain.

 Is oVirt making any locks on NFS SD's metafile?

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS Storage: domain locked by NFS-provider?

2015-04-13 Thread shimano
(...)
2015-04-13 03:22:23+0200 3243478 [1795]: s163 lockspace
62e29230-c475-4b16-a1e4-25e66ab8e382:13:/rhev/data-center/mnt/mi-ci-3:_ovirt_mi-ci-3/62e29230-c475-4b16-a1e4-25e66ab8e382/dom_md/ids:0
(...)
2015-04-13 09:54:15+0200 3266990 [11437]: s176 delta_renew read rv -116
offset 0
/rhev/data-center/mnt/mi-ci-3:_ovirt_mi-ci-3/62e29230-c475-4b16-a1e4-25e66ab8e382/dom_md/ids
2015-04-13 09:54:15+0200 3266990 [11437]: s176 renewal error -116
delta_length 0 last_success 3266965
2015-04-13 09:54:16+0200 3266990 [11437]: 62e29230 aio collect 0
0x7f86c40008c0:0x7f86c40008d0:0x7f87005f6000 result -116:0 match res
(...)


Does that say anything?
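
For what it's worth, rv -116 is -ESTALE (stale NFS file handle) on the sanlock ids file,
which would be consistent with different NFS servers answering for the same hostname. A
quick way to watch the lease renewals on an affected host, assuming the sanlock client
tool is installed:

sanlock client status
sanlock client log_dump | tail -n 50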


2015-04-13 11:28 GMT+02:00 Allon Mureinik amure...@redhat.com:

 We use sanlock for locking.

 Can you add the log files with the error?

 --

 *From: *shimano shim...@go2.pl
 *To: *users@ovirt.org
 *Sent: *Friday, April 10, 2015 2:56:58 PM
 *Subject: *[ovirt-users] NFS Storage: domain locked by NFS-provider?


 Hi all,

 I tried to configure NFS Storage Domain with load balance by configuring
 different IPs on Hosts for the same NFS-hostname and running few NFS
 servers on those machines (NFS exports from MooseFS cluster).

 Unfortunatelly after correct add and attach Storage Domain, all Hosts are
 switching to Non-operational because of problems with access to that
 Storage Domain.

 Is oVirt making any locks on NFS SD's metafile?


 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS can not be mounted after the installation of ovirt-hosted-engine

2014-12-18 Thread Yedidyah Bar David
- Original Message -
 From: Cong Yue cong_...@alliedtelesis.com
 To: Donny Davis do...@cloudspin.me, users@ovirt.org
 Sent: Wednesday, December 17, 2014 9:27:56 PM
 Subject: Re: [ovirt-users] NFS can not be mounted after the installation of 
 ovirt-hosted-engine
 
 No, it was not realted with selinux. It is the same issue as Simone advised.

It's most likely this one:

https://bugzilla.redhat.com/show_bug.cgi?id=1080823 - [RFE] make override of 
iptables configurable when using hosted-engine

 
  Yes, it's a know issue [1]. Please check iptables rules and re-open
  NFS required ports.
  [1] https://bugzilla.redhat.com/show_bug.cgi?id=1109326
 
 What I did is to add the lost iptables rules into /etc/sysconfig/iptables and
 restarted iptables service.
 
 Thanks,
 Cong

Best,
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS can not be mounted after the installation of ovirt-hosted-engine

2014-12-18 Thread Simone Tiraboschi


- Original Message -
 From: Cong Yue cong_...@alliedtelesis.com
 To: Simone Tiraboschi stira...@redhat.com
 Cc: users@ovirt.org
 Sent: Wednesday, December 17, 2014 7:18:26 PM
 Subject: RE: [ovirt-users] NFS can not be mounted after the installation of 
 ovirt-hosted-engine
 
 Thanks.
 
 I just want to double confirm whether I do the right thing or not.
 
 Currently, my /etc/sysconfig/iptables is like
 --
 # oVirt default firewall configuration. Automatically generated by vdsm
 bootstrap script.
 *filter
 :INPUT ACCEPT [0:0]
 :FORWARD ACCEPT [0:0]
 :OUTPUT ACCEPT [0:0]
 -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
 
 -A INPUT -i lo -j ACCEPT
 # vdsm
 -A INPUT -p tcp --dport 54321 -j ACCEPT
 # SSH
 -A INPUT -p tcp --dport 22 -j ACCEPT
 # snmp
 -A INPUT -p udp --dport 161 -j ACCEPT
 
 
 # libvirt tls
 -A INPUT -p tcp --dport 16514 -j ACCEPT
 
 # guest consoles
 -A INPUT -p tcp -m multiport --dports 5900:6923 -j ACCEPT
 
 # migration
 -A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT
 
 
 # Reject any other input traffic
 -A INPUT -j REJECT --reject-with icmp-host-prohibited -A FORWARD -m physdev !
 --physdev-is-bridged -j REJECT --reject-with icmp-host-prohibited COMMIT
 --
 
 Do you mean I need to add the following rule to the table?
 --
 ACCEPT tcp  --  0.0.0.0/00.0.0.0/0   state NEW tcp
 dpt:6100

It's websocket proxy port, not really need there.

 ACCEPT tcp  --  0.0.0.0/00.0.0.0/0   state NEW tcp
 dpt:111
 ACCEPT udp  --  0.0.0.0/00.0.0.0/0   state NEW udp
 dpt:111
 ACCEPT tcp  --  0.0.0.0/00.0.0.0/0   state NEW tcp
 dpt:662
 ACCEPT udp  --  0.0.0.0/00.0.0.0/0   state NEW udp
 dpt:662
 ACCEPT tcp  --  0.0.0.0/00.0.0.0/0   state NEW tcp
 dpt:875
 ACCEPT udp  --  0.0.0.0/00.0.0.0/0   state NEW udp
 dpt:875
 ACCEPT tcp  --  0.0.0.0/00.0.0.0/0   state NEW tcp
 dpt:892
 ACCEPT udp  --  0.0.0.0/00.0.0.0/0   state NEW udp
 dpt:892
 ACCEPT tcp  --  0.0.0.0/00.0.0.0/0   state NEW tcp
 dpt:2049
 ACCEPT udp  --  0.0.0.0/00.0.0.0/0   state NEW udp
 dpt:32769
 ACCEPT tcp  --  0.0.0.0/00.0.0.0/0   state NEW tcp
 dpt:32803
 --

Ok

 Thanks in advance,
 Cong
 
 
 -Original Message-
 From: Simone Tiraboschi [mailto:stira...@redhat.com]
 Sent: Wednesday, December 17, 2014 9:48 AM
 To: Yue, Cong
 Cc: users@ovirt.org
 Subject: Re: [ovirt-users] NFS can not be mounted after the installation of
 ovirt-hosted-engine
 
 
 
 - Original Message -
  From: Simone Tiraboschi stira...@redhat.com
  To: Cong Yue cong_...@alliedtelesis.com
  Cc: users@ovirt.org
  Sent: Wednesday, December 17, 2014 6:43:34 PM
  Subject: Re: [ovirt-users] NFS can not be mounted after the installation
  of  ovirt-hosted-engine
 
 
 
  - Original Message -
   From: Cong Yue cong_...@alliedtelesis.com
   To: users@ovirt.org
   Sent: Wednesday, December 17, 2014 6:33:48 PM
   Subject: [ovirt-users] NFS can not be mounted after the installation of
   ovirt-hosted-engine
  
  
  
   Hi
  
  
  
   I walked through the installation of ovirt-hosted-engine as
  
   http://community.redhat.com/blog/2014/10/up-and-running-with-ovirt-3
   -5/
  
  
  
   And I met a problem in the step of “Configure storage”
  
  
  
   In my ovirt host, I am using nfs v3 for the test. I created two
   exports points, and just after that I confirmed with other client
   that I can mount these two points.
  
   My /etc/exports is as
  
  
  
   ---
  
   /engine 10.0.0.0/24(rw,sync,no_root_squash,no_all_squash)
  
   /data 10.0.0.0/24(rw,sync,no_root_squash,no_all_squash)
  
   ---
  
  
  
   While after I completed the engine VM install, I found these two
   points can not be mounted again with the same command
  
   as
  
   mount –t nfs 10.0.0.94:/engine /engine
  
  
  
   Is ovirt changed something for nfs server configuration
 
  Yes, it's a know issue [1]. Please check iptables rules and re-open
  NFS required ports.
  [1] https://bugzilla.redhat.com/show_bug.cgi?id=1109326
 
  We already had a patch for it, it will be included next month in oVirt
  3.5.1
 
 Oh, it's note really the same: this it's related to hosted-engine but is not
 that different.
 Please check iptables rules.
 
   or something wrong
   with my setting?
  
  
  
   Thanks in advance,
  
   Cong
  
  
  
  
   This e-mail message is for the sole use of the intended recipient(s)
   and may contain confidential and privileged information. Any
   unauthorized review, use, disclosure or distribution is prohibited.
   If you are not the intended recipient, please contact the sender by
   reply e-mail and destroy all copies of the original message. If you
   are the intended recipient, please be advised that the content of
   this message is subject to access, review

[ovirt-users] NFS can not be mounted after the installation of ovirt-hosted-engine

2014-12-17 Thread Yue, Cong
Hi

I walked through the installation of ovirt-hosted-engine as
http://community.redhat.com/blog/2014/10/up-and-running-with-ovirt-3-5/

And I met a problem in the step of Configure storage

In my ovirt host, I am using nfs v3 for the test. I created two exports points, 
and just after that I confirmed with other client that I can mount these two 
points.
My /etc/exports is as

---
/engine   10.0.0.0/24(rw,sync,no_root_squash,no_all_squash)
/data   10.0.0.0/24(rw,sync,no_root_squash,no_all_squash)
---

While after I completed the engine VM install, I found these two points can not 
be mounted again with the same command
as
mount -t nfs 10.0.0.94:/engine /engine

Has ovirt changed something in the nfs server configuration, or is something wrong with 
my settings?

Thanks in advance,
Cong


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS can not be mounted after the installation of ovirt-hosted-engine

2014-12-17 Thread Simone Tiraboschi


- Original Message -
 From: Cong Yue cong_...@alliedtelesis.com
 To: users@ovirt.org
 Sent: Wednesday, December 17, 2014 6:33:48 PM
 Subject: [ovirt-users] NFS can not be mounted after the installation of   
 ovirt-hosted-engine
 
 
 
 Hi
 
 
 
 I walked through the installation of ovirt-hosted-engine as
 
 http://community.redhat.com/blog/2014/10/up-and-running-with-ovirt-3-5/
 
 
 
 And I met a problem in the step of “Configure storage”
 
 
 
 In my ovirt host, I am using nfs v3 for the test. I created two exports
 points, and just after that I confirmed with other client that I can mount
 these two points.
 
 My /etc/exports is as
 
 
 
 ---
 
 /engine 10.0.0.0/24(rw,sync,no_root_squash,no_all_squash)
 
 /data 10.0.0.0/24(rw,sync,no_root_squash,no_all_squash)
 
 ---
 
 
 
 While after I completed the engine VM install, I found these two points can
 not be mounted again with the same command
 
 as
 
 mount –t nfs 10.0.0.94:/engine /engine
 
 
 
 Is ovirt changed something for nfs server configuration

Yes, it's a known issue [1]. Please check the iptables rules and re-open the 
required NFS ports.
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1109326

We already have a patch for it; it will be included next month in oVirt 3.5.1.
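
A quick way to verify from another machine whether the firewall is indeed what is now
blocking the mounts (assuming nfs-utils is installed there):

showmount -e 10.0.0.94
rpcinfo -p 10.0.0.94

If these hang or are refused, the rewritten iptables rules are the usual suspect.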

 or something wrong
 with my setting?
 
 
 
 Thanks in advance,
 
 Cong
 
 
 
 
 This e-mail message is for the sole use of the intended recipient(s) and may
 contain confidential and privileged information. Any unauthorized review,
 use, disclosure or distribution is prohibited. If you are not the intended
 recipient, please contact the sender by reply e-mail and destroy all copies
 of the original message. If you are the intended recipient, please be
 advised that the content of this message is subject to access, review and
 disclosure by the sender's e-mail System Administrator.
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS can not be mounted after the installation of ovirt-hosted-engine

2014-12-17 Thread Simone Tiraboschi


- Original Message -
 From: Simone Tiraboschi stira...@redhat.com
 To: Cong Yue cong_...@alliedtelesis.com
 Cc: users@ovirt.org
 Sent: Wednesday, December 17, 2014 6:43:34 PM
 Subject: Re: [ovirt-users] NFS can not be mounted after the installation  
 of  ovirt-hosted-engine
 
 
 
 - Original Message -
  From: Cong Yue cong_...@alliedtelesis.com
  To: users@ovirt.org
  Sent: Wednesday, December 17, 2014 6:33:48 PM
  Subject: [ovirt-users] NFS can not be mounted after the installation of
  ovirt-hosted-engine
  
  
  
  Hi
  
  
  
  I walked through the installation of ovirt-hosted-engine as
  
  http://community.redhat.com/blog/2014/10/up-and-running-with-ovirt-3-5/
  
  
  
  And I met a problem in the step of “Configure storage”
  
  
  
  In my ovirt host, I am using nfs v3 for the test. I created two exports
  points, and just after that I confirmed with other client that I can mount
  these two points.
  
  My /etc/exports is as
  
  
  
  ---
  
  /engine 10.0.0.0/24(rw,sync,no_root_squash,no_all_squash)
  
  /data 10.0.0.0/24(rw,sync,no_root_squash,no_all_squash)
  
  ---
  
  
  
  While after I completed the engine VM install, I found these two points can
  not be mounted again with the same command
  
  as
  
  mount –t nfs 10.0.0.94:/engine /engine
  
  
  
  Is ovirt changed something for nfs server configuration
 
 Yes, it's a know issue [1]. Please check iptables rules and re-open NFS
 required ports.
 [1] https://bugzilla.redhat.com/show_bug.cgi?id=1109326
 
 We already had a patch for it, it will be included next month in oVirt 3.5.1

Oh, it's not really the same: that one is related to hosted-engine, but it is not 
that different.
Please check iptables rules.

  or something wrong
  with my setting?
  
  
  
  Thanks in advance,
  
  Cong
  
  
  
  
  This e-mail message is for the sole use of the intended recipient(s) and
  may
  contain confidential and privileged information. Any unauthorized review,
  use, disclosure or distribution is prohibited. If you are not the intended
  recipient, please contact the sender by reply e-mail and destroy all copies
  of the original message. If you are the intended recipient, please be
  advised that the content of this message is subject to access, review and
  disclosure by the sender's e-mail System Administrator.
  
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS can not be mounted after the installation of ovirt-hosted-engine

2014-12-17 Thread Yue, Cong
Thanks.

I just want to double confirm whether I do the right thing or not.

Currently, my /etc/sysconfig/iptables is like
--
# oVirt default firewall configuration. Automatically generated by vdsm 
bootstrap script.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

-A INPUT -i lo -j ACCEPT
# vdsm
-A INPUT -p tcp --dport 54321 -j ACCEPT
# SSH
-A INPUT -p tcp --dport 22 -j ACCEPT
# snmp
-A INPUT -p udp --dport 161 -j ACCEPT


# libvirt tls
-A INPUT -p tcp --dport 16514 -j ACCEPT

# guest consoles
-A INPUT -p tcp -m multiport --dports 5900:6923 -j ACCEPT

# migration
-A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT


# Reject any other input traffic
-A INPUT -j REJECT --reject-with icmp-host-prohibited -A FORWARD -m physdev ! 
--physdev-is-bridged -j REJECT --reject-with icmp-host-prohibited COMMIT
--

Do you mean I need to add the following rule to the table?
--
ACCEPT tcp  --  0.0.0.0/00.0.0.0/0   state NEW tcp 
dpt:6100
ACCEPT tcp  --  0.0.0.0/00.0.0.0/0   state NEW tcp 
dpt:111
ACCEPT udp  --  0.0.0.0/00.0.0.0/0   state NEW udp 
dpt:111
ACCEPT tcp  --  0.0.0.0/00.0.0.0/0   state NEW tcp 
dpt:662
ACCEPT udp  --  0.0.0.0/00.0.0.0/0   state NEW udp 
dpt:662
ACCEPT tcp  --  0.0.0.0/00.0.0.0/0   state NEW tcp 
dpt:875
ACCEPT udp  --  0.0.0.0/00.0.0.0/0   state NEW udp 
dpt:875
ACCEPT tcp  --  0.0.0.0/00.0.0.0/0   state NEW tcp 
dpt:892
ACCEPT udp  --  0.0.0.0/00.0.0.0/0   state NEW udp 
dpt:892
ACCEPT tcp  --  0.0.0.0/00.0.0.0/0   state NEW tcp 
dpt:2049
ACCEPT udp  --  0.0.0.0/00.0.0.0/0   state NEW udp 
dpt:32769
ACCEPT tcp  --  0.0.0.0/00.0.0.0/0   state NEW tcp 
dpt:32803
--
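
For reference, in /etc/sysconfig/iptables syntax that listing corresponds to roughly the
following (a sketch; the lines must come before the final REJECT rule, followed by a
"service iptables restart", and 6100 is left out since it is the websocket proxy port
rather than NFS):

# NFSv3 and its helpers (rpcbind, statd, rquotad, mountd, nfs, lockd)
-A INPUT -p tcp -m multiport --dports 111,662,875,892,2049,32803 -j ACCEPT
-A INPUT -p udp -m multiport --dports 111,662,875,892,32769 -j ACCEPT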


Thanks in advance,
Cong


-Original Message-
From: Simone Tiraboschi [mailto:stira...@redhat.com]
Sent: Wednesday, December 17, 2014 9:48 AM
To: Yue, Cong
Cc: users@ovirt.org
Subject: Re: [ovirt-users] NFS can not be mounted after the installation of 
ovirt-hosted-engine



- Original Message -
 From: Simone Tiraboschi stira...@redhat.com
 To: Cong Yue cong_...@alliedtelesis.com
 Cc: users@ovirt.org
 Sent: Wednesday, December 17, 2014 6:43:34 PM
 Subject: Re: [ovirt-users] NFS can not be mounted after the installation  
 of  ovirt-hosted-engine



 - Original Message -
  From: Cong Yue cong_...@alliedtelesis.com
  To: users@ovirt.org
  Sent: Wednesday, December 17, 2014 6:33:48 PM
  Subject: [ovirt-users] NFS can not be mounted after the installation of
  ovirt-hosted-engine
 
 
 
  Hi
 
 
 
  I walked through the installation of ovirt-hosted-engine as
 
  http://community.redhat.com/blog/2014/10/up-and-running-with-ovirt-3
  -5/
 
 
 
  And I met a problem in the step of “Configure storage”
 
 
 
  In my ovirt host, I am using nfs v3 for the test. I created two
  exports points, and just after that I confirmed with other client
  that I can mount these two points.
 
  My /etc/exports is as
 
 
 
  ---
 
  /engine 10.0.0.0/24(rw,sync,no_root_squash,no_all_squash)
 
  /data 10.0.0.0/24(rw,sync,no_root_squash,no_all_squash)
 
  ---
 
 
 
  While after I completed the engine VM install, I found these two
  points can not be mounted again with the same command
 
  as
 
  mount –t nfs 10.0.0.94:/engine /engine
 
 
 
  Is ovirt changed something for nfs server configuration

 Yes, it's a know issue [1]. Please check iptables rules and re-open
 NFS required ports.
 [1] https://bugzilla.redhat.com/show_bug.cgi?id=1109326

 We already had a patch for it, it will be included next month in oVirt
 3.5.1

Oh, it's note really the same: this it's related to hosted-engine but is not 
that different.
Please check iptables rules.

  or something wrong
  with my setting?
 
 
 
  Thanks in advance,
 
  Cong
 
 
 
 
  This e-mail message is for the sole use of the intended recipient(s)
  and may contain confidential and privileged information. Any
  unauthorized review, use, disclosure or distribution is prohibited.
  If you are not the intended recipient, please contact the sender by
  reply e-mail and destroy all copies of the original message. If you
  are the intended recipient, please be advised that the content of
  this message is subject to access, review and disclosure by the
  sender's e-mail System Administrator.
 
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users



Re: [ovirt-users] NFS can not be mounted after the installation of ovirt-hosted-engine

2014-12-17 Thread Yue, Cong
No, it was not related to selinux. It is the same issue that Simone advised about.

 Yes, it's a know issue [1]. Please check iptables rules and re-open
 NFS required ports.
 [1] https://bugzilla.redhat.com/show_bug.cgi?id=1109326

What I did was to add the missing iptables rules to /etc/sysconfig/iptables and 
restart the iptables service.

Thanks,
Cong


-Original Message-
From: Donny Davis [mailto:do...@cloudspin.me]
Sent: Wednesday, December 17, 2014 11:24 AM
To: Yue, Cong; users@ovirt.org
Subject: RE: [ovirt-users] NFS can not be mounted after the installation of 
ovirt-hosted-engine

So was it a selinux issue?
What was your resolution?

-Original Message-
From: Yue, Cong [mailto:cong_...@alliedtelesis.com]
Sent: Wednesday, December 17, 2014 12:13 PM
To: Donny Davis
Subject: RE: [ovirt-users] NFS can not be mounted after the installation of 
ovirt-hosted-engine

I checked and now it works for me.
Thanks


-Original Message-
From: Donny Davis [mailto:do...@cloudspin.me]
Sent: Wednesday, December 17, 2014 10:42 AM
To: Yue, Cong
Subject: RE: [ovirt-users] NFS can not be mounted after the installation of 
ovirt-hosted-engine

did you check /var/log/audit/audit.log

-Original Message-
From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of 
Yue, Cong
Sent: Wednesday, December 17, 2014 11:18 AM
To: Simone Tiraboschi
Cc: users@ovirt.org
Subject: Re: [ovirt-users] NFS can not be mounted after the installation of 
ovirt-hosted-engine

Thanks.

I just want to double confirm whether I do the right thing or not.

Currently, my /etc/sysconfig/iptables is like
--
# oVirt default firewall configuration. Automatically generated by vdsm 
bootstrap script.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

-A INPUT -i lo -j ACCEPT
# vdsm
-A INPUT -p tcp --dport 54321 -j ACCEPT
# SSH
-A INPUT -p tcp --dport 22 -j ACCEPT
# snmp
-A INPUT -p udp --dport 161 -j ACCEPT


# libvirt tls
-A INPUT -p tcp --dport 16514 -j ACCEPT

# guest consoles
-A INPUT -p tcp -m multiport --dports 5900:6923 -j ACCEPT

# migration
-A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT


# Reject any other input traffic
-A INPUT -j REJECT --reject-with icmp-host-prohibited -A FORWARD -m physdev ! 
--physdev-is-bridged -j REJECT --reject-with icmp-host-prohibited COMMIT
--

Do you mean I need to add the following rule to the table?
--
ACCEPT tcp  --  0.0.0.0/00.0.0.0/0   state NEW tcp 
dpt:6100
ACCEPT tcp  --  0.0.0.0/00.0.0.0/0   state NEW tcp 
dpt:111
ACCEPT udp  --  0.0.0.0/00.0.0.0/0   state NEW udp 
dpt:111
ACCEPT tcp  --  0.0.0.0/00.0.0.0/0   state NEW tcp 
dpt:662
ACCEPT udp  --  0.0.0.0/00.0.0.0/0   state NEW udp 
dpt:662
ACCEPT tcp  --  0.0.0.0/00.0.0.0/0   state NEW tcp 
dpt:875
ACCEPT udp  --  0.0.0.0/00.0.0.0/0   state NEW udp 
dpt:875
ACCEPT tcp  --  0.0.0.0/00.0.0.0/0   state NEW tcp 
dpt:892
ACCEPT udp  --  0.0.0.0/00.0.0.0/0   state NEW udp 
dpt:892
ACCEPT tcp  --  0.0.0.0/00.0.0.0/0   state NEW tcp 
dpt:2049
ACCEPT udp  --  0.0.0.0/00.0.0.0/0   state NEW udp 
dpt:32769
ACCEPT tcp  --  0.0.0.0/00.0.0.0/0   state NEW tcp 
dpt:32803
--


Thanks in advance,
Cong


-Original Message-
From: Simone Tiraboschi [mailto:stira...@redhat.com]
Sent: Wednesday, December 17, 2014 9:48 AM
To: Yue, Cong
Cc: users@ovirt.org
Subject: Re: [ovirt-users] NFS can not be mounted after the installation of 
ovirt-hosted-engine



- Original Message -
 From: Simone Tiraboschi stira...@redhat.com
 To: Cong Yue cong_...@alliedtelesis.com
 Cc: users@ovirt.org
 Sent: Wednesday, December 17, 2014 6:43:34 PM
 Subject: Re: [ovirt-users] NFS can not be mounted after the installation  
 of  ovirt-hosted-engine



 - Original Message -
  From: Cong Yue cong_...@alliedtelesis.com
  To: users@ovirt.org
  Sent: Wednesday, December 17, 2014 6:33:48 PM
  Subject: [ovirt-users] NFS can not be mounted after the installation of
  ovirt-hosted-engine
 
 
 
  Hi
 
 
 
  I walked through the installation of ovirt-hosted-engine as
 
  http://community.redhat.com/blog/2014/10/up-and-running-with-ovirt-3
  -5/
 
 
 
  And I met a problem in the step of “Configure storage”
 
 
 
  In my ovirt host, I am using nfs v3 for the test. I created two
  exports points, and just after that I confirmed with other client
  that I can mount these two points.
 
  My /etc/exports is as
 
 
 
  ---
 
  /engine 10.0.0.0/24(rw,sync,no_root_squash,no_all_squash)
 
  /data 10.0.0.0/24(rw,sync,no_root_squash,no_all_squash)
 
  ---
 
 
 
  While after I completed the engine VM install, I found these two
  points can not be mounted again with the same command

[ovirt-users] nfs shared storage can not be mounted in second host during hosted-engine --deploy

2014-12-17 Thread Yue, Cong
I am trying to install the second host to test HA for the hypervisor. I am 
using external storage and assume that it is highly available.
I configured the first node with that shared storage as nfs2-3:/engine. And now 
everything works well except for the browser-embedded console. :)

But when I ran hosted-engine --deploy for the second host, there is an error 
which shows
--
Error while mounting specified storage path: mount.nfs: Connection timed out.
Cannot unmounts /tmp/tmpLALdB1
--

I checked from the second host with mount -t nfs nfs2-3:/engine /test_mount, and it 
works well.

Do I need to unblock something, or is there some log I can dig into further to find the 
problem?

Thanks in advance,
Cong



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] nfs shared storage can not be mounted in second host during hosted-engine --deploy

2014-12-17 Thread Yue, Cong
For this, I found the reason. My external storage is nfs4, but I had tried 
with nfs3.
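
A quick way to confirm which versions the storage actually accepts before deploying,
assuming a scratch mount point on the host:

mkdir -p /tmp/nfstest
mount -t nfs -o vers=4 nfs2-3:/engine /tmp/nfstest && umount /tmp/nfstest && echo v4 ok
mount -t nfs -o vers=3 nfs2-3:/engine /tmp/nfstest && umount /tmp/nfstest && echo v3 ok

hosted-engine --deploy also asks which NFS version to use during the storage questions,
so answering with the matching one should avoid the timeout.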

Thanks,
Cong


From: Yue, Cong
Sent: Wednesday, December 17, 2014 3:56 PM
To: 'users@ovirt.org'
Subject: RE: nfs shared storage can not be mounted in second host during 
hosted-engine --deploy

This is the log.

Thanks,
Cong


From: Yue, Cong
Sent: Wednesday, December 17, 2014 3:44 PM
To: users@ovirt.orgmailto:users@ovirt.org
Subject: nfs shared storage can not be mounted in second host during 
hosted-engine --deploy

I am trying to install the second host to test the HA for hypervisor . I am 
using external storage and assume that one is with HA.
I configured the first node with that shared storage as nfs2-3:/engine. And now 
everything works well except for browser embedded console. :)

But when I did hosted-engine -deploy for the second host, there is some error 
which shows
--
Error while mounting specified storage path: mount.nfs: Connection timed out.
Cannot unmounts /tmp/tmpLALdB1
--

I checked from the second with mount -t nfs nfs2-3:/engine /test_mount, and it 
works well.

Do I need unblock something or is there some log I can dig further to find the 
problem?

Thanks in advance,
Cong



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] NFS

2014-12-16 Thread Koen Vanoppen
Dear all,

We recently added 2 hypervisors to the domain on ovirt, but for some reason
they can't connect to the nfs share:
When I manually try to mount the nfs-share ([root@ovirthyp01dev ~]# mount
-vvv -t nfs -o vers=3,tcp progress:/media/NfsProgress /rhev/data-center/mnt/
progress.brusselsairport.aero\:_media_NfsProgress/)
:
mount: external mount: argv[3] = -v
mount: external mount: argv[4] = -o
mount: external mount: argv[5] = rw,vers=3,tcp
mount.nfs: timeout set for Tue Dec 16 08:56:47 2014
mount.nfs: trying text-based options 'vers=3,tcp,addr=10.110.56.20'
mount.nfs: prog 13, trying vers=3, prot=6
mount.nfs: portmap query failed: RPC: Program not registered
mount.nfs: requested NFS version or transport protocol is not supported

From vdsm.log:
Thread-277::ERROR::2014-12-16
08:46:32,504::storageServer::211::Storage.StorageServer.MountConnection::(connect)
Mount failed: (32, ';mount.nfs: requested NFS version or transport protocol
is not supported\n')
Traceback (most recent call last):
  File /usr/share/vdsm/storage/storageServer.py, line 209, in connect
self._mount.mount(self.options, self._vfsType)
  File /usr/share/vdsm/storage/mount.py, line 223, in mount
return self._runcmd(cmd, timeout)
  File /usr/share/vdsm/storage/mount.py, line 239, in _runcmd
raise MountError(rc, ;.join((out, err)))
MountError: (32, ';mount.nfs: requested NFS version or transport protocol
is not supported\n')
Thread-277::ERROR::2014-12-16
08:46:32,508::hsm::2433::Storage.HSM::(connectStorageServer) Could not
connect to storageServer
Traceback (most recent call last):
  File /usr/share/vdsm/storage/hsm.py, line 2430, in connectStorageServer
conObj.connect()
  File /usr/share/vdsm/storage/storageServer.py, line 328, in connect
return self._mountCon.connect()
  File /usr/share/vdsm/storage/storageServer.py, line 217, in connect
raise e
MountError: (32, ';mount.nfs: requested NFS version or transport protocol
is not supported\n')

Any ideas? The rest (4 others) didn't have any problems...
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS

2014-12-16 Thread Karli Sjöberg
On Tue, 2014-12-16 at 09:00 +0100, Koen Vanoppen wrote:
 Dear all,
 
 
 We recently added 2 hypervisors to the domain on ovirt, but for some
 reason they can't connect to the nfs share:
 When I manually try to mount the nfs-share ([root@ovirthyp01dev ~]#
 mount -vvv -t nfs -o vers=3,tcp
 progress:/media/NfsProgress 
 /rhev/data-center/mnt/progress.brusselsairport.aero\:_media_NfsProgress/)
 :
 mount: external mount: argv[3] = -v
 mount: external mount: argv[4] = -o
 mount: external mount: argv[5] = rw,vers=3,tcp
 mount.nfs: timeout set for Tue Dec 16 08:56:47 2014
 mount.nfs: trying text-based options 'vers=3,tcp,addr=10.110.56.20'
 mount.nfs: prog 13, trying vers=3, prot=6
 mount.nfs: portmap query failed: RPC: Program not registered
 mount.nfs: requested NFS version or transport protocol is not
 supported
 
 
 From vdsm.log:
 Thread-277::ERROR::2014-12-16
 08:46:32,504::storageServer::211::Storage.StorageServer.MountConnection::(connect)
  Mount failed: (32, ';mount.nfs: requested NFS version or transport protocol 
 is not supported\n')
 Traceback (most recent call last):
   File /usr/share/vdsm/storage/storageServer.py, line 209, in
 connect
 self._mount.mount(self.options, self._vfsType)
   File /usr/share/vdsm/storage/mount.py, line 223, in mount
 return self._runcmd(cmd, timeout)
   File /usr/share/vdsm/storage/mount.py, line 239, in _runcmd
 raise MountError(rc, ;.join((out, err)))
 MountError: (32, ';mount.nfs: requested NFS version or transport
 protocol is not supported\n')
 Thread-277::ERROR::2014-12-16
 08:46:32,508::hsm::2433::Storage.HSM::(connectStorageServer) Could not
 connect to storageServer
 Traceback (most recent call last):
   File /usr/share/vdsm/storage/hsm.py, line 2430, in
 connectStorageServer
 conObj.connect()
   File /usr/share/vdsm/storage/storageServer.py, line 328, in
 connect
 return self._mountCon.connect()
   File /usr/share/vdsm/storage/storageServer.py, line 217, in
 connect
 raise e
 MountError: (32, ';mount.nfs: requested NFS version or transport
 protocol is not supported\n')
 
 
 Any ideas? The rest (4 others) didn't have any problems...
 
 
 plain text document attachment (ATT1)
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

# yum install -y nfs-utils

?



-- 

Med Vänliga Hälsningar

---
Karli Sjöberg
Swedish University of Agricultural Sciences Box 7079 (Visiting Address
Kronåsvägen 8)
S-750 07 Uppsala, Sweden
Phone:  +46-(0)18-67 15 66
karli.sjob...@slu.se
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS

2014-12-16 Thread Koen Vanoppen
Already installed... :-) and the nfs and rpcbind services are running

2014-12-16 9:07 GMT+01:00 Karli Sjöberg karli.sjob...@slu.se:

 On Tue, 2014-12-16 at 09:00 +0100, Koen Vanoppen wrote:
  Dear all,
 
 
  We recently added 2 hypervisors to the domain on ovirt, but for some
  reason they can't connect to the nfs share:
  When I manually try to mount the nfs-share ([root@ovirthyp01dev ~]#
  mount -vvv -t nfs -o vers=3,tcp
  progress:/media/NfsProgress /rhev/data-center/mnt/
 progress.brusselsairport.aero\:_media_NfsProgress/)
  :
  mount: external mount: argv[3] = -v
  mount: external mount: argv[4] = -o
  mount: external mount: argv[5] = rw,vers=3,tcp
  mount.nfs: timeout set for Tue Dec 16 08:56:47 2014
  mount.nfs: trying text-based options 'vers=3,tcp,addr=10.110.56.20'
  mount.nfs: prog 13, trying vers=3, prot=6
  mount.nfs: portmap query failed: RPC: Program not registered
  mount.nfs: requested NFS version or transport protocol is not
  supported
 
 
  From vdsm.log:
  Thread-277::ERROR::2014-12-16
 
 08:46:32,504::storageServer::211::Storage.StorageServer.MountConnection::(connect)
 Mount failed: (32, ';mount.nfs: requested NFS version or transport protocol
 is not supported\n')
  Traceback (most recent call last):
File /usr/share/vdsm/storage/storageServer.py, line 209, in
  connect
  self._mount.mount(self.options, self._vfsType)
File /usr/share/vdsm/storage/mount.py, line 223, in mount
  return self._runcmd(cmd, timeout)
File /usr/share/vdsm/storage/mount.py, line 239, in _runcmd
  raise MountError(rc, ;.join((out, err)))
  MountError: (32, ';mount.nfs: requested NFS version or transport
  protocol is not supported\n')
  Thread-277::ERROR::2014-12-16
  08:46:32,508::hsm::2433::Storage.HSM::(connectStorageServer) Could not
  connect to storageServer
  Traceback (most recent call last):
File /usr/share/vdsm/storage/hsm.py, line 2430, in
  connectStorageServer
  conObj.connect()
File /usr/share/vdsm/storage/storageServer.py, line 328, in
  connect
  return self._mountCon.connect()
File /usr/share/vdsm/storage/storageServer.py, line 217, in
  connect
  raise e
  MountError: (32, ';mount.nfs: requested NFS version or transport
  protocol is not supported\n')
 
 
  Any ideas? The rest (4 others) didn't have any problems...
 
 
  plain text document attachment (ATT1)
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users

 # yum install -y nfs-utils

 ?



 --

 Med Vänliga Hälsningar


 ---
 Karli Sjöberg
 Swedish University of Agricultural Sciences Box 7079 (Visiting Address
 Kronåsvägen 8)
 S-750 07 Uppsala, Sweden
 Phone:  +46-(0)18-67 15 66
 karli.sjob...@slu.se

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS

2014-12-16 Thread Karli Sjöberg
On Tue, 2014-12-16 at 09:30 +0100, Koen Vanoppen wrote:
 Already installed... :-) and the service nfs and rpcbind are running

Just checking:) Tried restarting the server? SELinux? Firewall?

/K
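
For reference, a few quick checks along those lines on the hypervisor. This is only a sketch of the usual commands, not specific to this setup:

# getenforce                    # Enforcing / Permissive / Disabled
# setenforce 0                  # temporarily switch SELinux to permissive for a test
# systemctl status firewalld    # or 'service iptables status' on older hosts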

 
 
 2014-12-16 9:07 GMT+01:00 Karli Sjöberg karli.sjob...@slu.se:
 On Tue, 2014-12-16 at 09:00 +0100, Koen Vanoppen wrote:
  Dear all,
 
 
  We recently added 2 hypervisors to the domain on ovirt, but
 for some
  reason they can't connect to the nfs share:
  When I manually try to mount the nfs-share
 ([root@ovirthyp01dev ~]#
  mount -vvv -t nfs -o vers=3,tcp
 
 progress:/media/NfsProgress 
 /rhev/data-center/mnt/progress.brusselsairport.aero\:_media_NfsProgress/)
  :
  mount: external mount: argv[3] = -v
  mount: external mount: argv[4] = -o
  mount: external mount: argv[5] = rw,vers=3,tcp
  mount.nfs: timeout set for Tue Dec 16 08:56:47 2014
  mount.nfs: trying text-based options
 'vers=3,tcp,addr=10.110.56.20'
  mount.nfs: prog 13, trying vers=3, prot=6
  mount.nfs: portmap query failed: RPC: Program not registered
  mount.nfs: requested NFS version or transport protocol is
 not
  supported
 
 
  From vdsm.log:
  Thread-277::ERROR::2014-12-16
 
 
 08:46:32,504::storageServer::211::Storage.StorageServer.MountConnection::(connect)
  Mount failed: (32, ';mount.nfs: requested NFS version or transport protocol 
 is not supported\n')
  Traceback (most recent call last):
File /usr/share/vdsm/storage/storageServer.py, line 209,
 in
  connect
  self._mount.mount(self.options, self._vfsType)
File /usr/share/vdsm/storage/mount.py, line 223, in
 mount
  return self._runcmd(cmd, timeout)
File /usr/share/vdsm/storage/mount.py, line 239, in
 _runcmd
  raise MountError(rc, ;.join((out, err)))
  MountError: (32, ';mount.nfs: requested NFS version or
 transport
  protocol is not supported\n')
  Thread-277::ERROR::2014-12-16
  08:46:32,508::hsm::2433::Storage.HSM::(connectStorageServer)
 Could not
  connect to storageServer
  Traceback (most recent call last):
File /usr/share/vdsm/storage/hsm.py, line 2430, in
  connectStorageServer
  conObj.connect()
File /usr/share/vdsm/storage/storageServer.py, line 328,
 in
  connect
  return self._mountCon.connect()
File /usr/share/vdsm/storage/storageServer.py, line 217,
 in
  connect
  raise e
  MountError: (32, ';mount.nfs: requested NFS version or
 transport
  protocol is not supported\n')
 
 
  Any ideas? The rest (4 others) didn't have any problems...
 
 
 
  plain text document attachment (ATT1)
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 
 # yum install -y nfs-utils
 
 ?
 
 
 
 --
 
 Med Vänliga Hälsningar
 
 
 ---
 Karli Sjöberg
 Swedish University of Agricultural Sciences Box 7079 (Visiting
 Address
 Kronåsvägen 8)
 S-750 07 Uppsala, Sweden
 Phone:  +46-(0)18-67 15 66
 karli.sjob...@slu.se
 plain text document attachment (ATT1)
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users



-- 

Med Vänliga Hälsningar

---
Karli Sjöberg
Swedish University of Agricultural Sciences Box 7079 (Visiting Address
Kronåsvägen 8)
S-750 07 Uppsala, Sweden
Phone:  +46-(0)18-67 15 66
karli.sjob...@slu.se
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS

2014-12-16 Thread Koen Vanoppen
Check, check, check :-)

Still nothing...

2014-12-16 9:36 GMT+01:00 Karli Sjöberg karli.sjob...@slu.se:

 On Tue, 2014-12-16 at 09:30 +0100, Koen Vanoppen wrote:
  Already installed... :-) and the service nfs and rpcbind are running

 Just checking:) Tried restarting the server? SELinux? Firewall?

 /K

 
 
  2014-12-16 9:07 GMT+01:00 Karli Sjöberg karli.sjob...@slu.se:
  On Tue, 2014-12-16 at 09:00 +0100, Koen Vanoppen wrote:
   Dear all,
  
  
   We recently added 2 hypervisors to the domain on ovirt, but
  for some
   reason they can't connect to the nfs share:
   When I manually try to mount the nfs-share
  ([root@ovirthyp01dev ~]#
   mount -vvv -t nfs -o vers=3,tcp
  
  progress:/media/NfsProgress /rhev/data-center/mnt/
 progress.brusselsairport.aero\:_media_NfsProgress/)
   :
   mount: external mount: argv[3] = -v
   mount: external mount: argv[4] = -o
   mount: external mount: argv[5] = rw,vers=3,tcp
   mount.nfs: timeout set for Tue Dec 16 08:56:47 2014
   mount.nfs: trying text-based options
  'vers=3,tcp,addr=10.110.56.20'
   mount.nfs: prog 13, trying vers=3, prot=6
   mount.nfs: portmap query failed: RPC: Program not registered
   mount.nfs: requested NFS version or transport protocol is
  not
   supported
  
  
   From vdsm.log:
   Thread-277::ERROR::2014-12-16
  
 
  
 08:46:32,504::storageServer::211::Storage.StorageServer.MountConnection::(connect)
 Mount failed: (32, ';mount.nfs: requested NFS version or transport protocol
 is not supported\n')
   Traceback (most recent call last):
 File /usr/share/vdsm/storage/storageServer.py, line 209,
  in
   connect
   self._mount.mount(self.options, self._vfsType)
 File /usr/share/vdsm/storage/mount.py, line 223, in
  mount
   return self._runcmd(cmd, timeout)
 File /usr/share/vdsm/storage/mount.py, line 239, in
  _runcmd
   raise MountError(rc, ;.join((out, err)))
   MountError: (32, ';mount.nfs: requested NFS version or
  transport
   protocol is not supported\n')
   Thread-277::ERROR::2014-12-16
   08:46:32,508::hsm::2433::Storage.HSM::(connectStorageServer)
  Could not
   connect to storageServer
   Traceback (most recent call last):
 File /usr/share/vdsm/storage/hsm.py, line 2430, in
   connectStorageServer
   conObj.connect()
 File /usr/share/vdsm/storage/storageServer.py, line 328,
  in
   connect
   return self._mountCon.connect()
 File /usr/share/vdsm/storage/storageServer.py, line 217,
  in
   connect
   raise e
   MountError: (32, ';mount.nfs: requested NFS version or
  transport
   protocol is not supported\n')
  
  
   Any ideas? The rest (4 others) didn't have any problems...
  
  
 
   plain text document attachment (ATT1)
   ___
   Users mailing list
   Users@ovirt.org
   http://lists.ovirt.org/mailman/listinfo/users
 
  # yum install -y nfs-utils
 
  ?
 
 
 
  --
 
  Med Vänliga Hälsningar
 
 
  
 ---
  Karli Sjöberg
  Swedish University of Agricultural Sciences Box 7079 (Visiting
  Address
  Kronåsvägen 8)
  S-750 07 Uppsala, Sweden
  Phone:  +46-(0)18-67 15 66
  karli.sjob...@slu.se
  plain text document attachment (ATT1)
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users



 --

 Med Vänliga Hälsningar


 ---
 Karli Sjöberg
 Swedish University of Agricultural Sciences Box 7079 (Visiting Address
 Kronåsvägen 8)
 S-750 07 Uppsala, Sweden
 Phone:  +46-(0)18-67 15 66
 karli.sjob...@slu.se

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS

2014-12-16 Thread Nir Soffer
- Original Message -
 From: Koen Vanoppen vanoppen.k...@gmail.com
 To: users@ovirt.org
 Sent: Tuesday, December 16, 2014 10:00:32 AM
 Subject: [ovirt-users] NFS
 
 Dear all,
 
 We recently added 2 hypervisors to the domain on ovirt, but for some reason
 they can't connect to the nfs share:
 When I manually try to mount the nfs-share ([root@ovirthyp01dev ~]# mount
 -vvv -t nfs -o vers=3,tcp progress:/media/NfsProgress /rhev/data-center/mnt/

Looks like your server does not accept nfs version 3 - does it work
if you remove the vers=3 option?

 progress.brusselsairport.aero \:_media_NfsProgress/)
 :
 mount: external mount: argv[3] = -v
 mount: external mount: argv[4] = -o
 mount: external mount: argv[5] = rw,vers=3,tcp
 mount.nfs: timeout set for Tue Dec 16 08:56:47 2014
 mount.nfs: trying text-based options 'vers=3,tcp,addr=10.110.56.20'
 mount.nfs: prog 13, trying vers=3, prot=6
 mount.nfs: portmap query failed: RPC: Program not registered
 mount.nfs: requested NFS version or transport protocol is not supported
 
 From vdsm.log:
 Thread-277::ERROR::2014-12-16
 08:46:32,504::storageServer::211::Storage.StorageServer.MountConnection::(connect)
 Mount failed: (32, ';mount.nfs: requested NFS version or transport protocol
 is not supported\n')
 Traceback (most recent call last):
 File /usr/share/vdsm/storage/storageServer.py, line 209, in connect
 self._mount.mount(self.options, self._vfsType)
 File /usr/share/vdsm/storage/mount.py, line 223, in mount
 return self._runcmd(cmd, timeout)
 File /usr/share/vdsm/storage/mount.py, line 239, in _runcmd
 raise MountError(rc, ;.join((out, err)))
 MountError: (32, ';mount.nfs: requested NFS version or transport protocol is
 not supported\n')
 Thread-277::ERROR::2014-12-16
 08:46:32,508::hsm::2433::Storage.HSM::(connectStorageServer) Could not
 connect to storageServer
 Traceback (most recent call last):
 File /usr/share/vdsm/storage/hsm.py, line 2430, in connectStorageServer
 conObj.connect()
 File /usr/share/vdsm/storage/storageServer.py, line 328, in connect
 return self._mountCon.connect()
 File /usr/share/vdsm/storage/storageServer.py, line 217, in connect
 raise e
 MountError: (32, ';mount.nfs: requested NFS version or transport protocol is
 not supported\n')
 
 Any ideas? The rest (4 others) didn't have any problems...

4 other servers?

Try to compare the configuration between these servers.

Nir
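
One possible way to act on both suggestions, as a sketch only ('progress' is the NFS server name from the post, /mnt/test a placeholder):

# mount -t nfs progress:/media/NfsProgress /mnt/test   # let the client negotiate the NFS version
# rpcinfo -p progress | grep -E 'nfs|mountd'           # check whether the v3 services are registered at all
# rpm -q nfs-utils                                     # compare with the four working hypervisors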
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS

2014-12-16 Thread Bob Doolittle
95% of the time this is a firewall issue.

As a test, I'd disable your firewall completely and see if that
rectifies it. If so, you can work on proper firewall rules to allow
oVirt to work.

-Bob
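
On a recent host that test could look roughly like this. A sketch assuming firewalld is in use; on older hosts the equivalent would be 'service iptables stop':

# systemctl stop firewalld      # temporary test only
# mount -t nfs -o vers=3,tcp progress:/media/NfsProgress /mnt/test
# systemctl start firewalld     # re-enable afterwards and open the NFS/rpcbind ports properly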

On 12/16/2014 03:30 AM, Koen Vanoppen wrote:
 Already installed... :-) and the service nfs and rpcbind are running

 2014-12-16 9:07 GMT+01:00 Karli Sjöberg karli.sjob...@slu.se
 mailto:karli.sjob...@slu.se:

 On Tue, 2014-12-16 at 09:00 +0100, Koen Vanoppen wrote:
  Dear all,
 
 
  We recently added 2 hypervisors to the domain on ovirt, but for some
  reason they can't connect to the nfs share:
  When I manually try to mount the nfs-share ([root@ovirthyp01dev ~]#
  mount -vvv -t nfs -o vers=3,tcp
  progress:/media/NfsProgress
 /rhev/data-center/mnt/progress.brusselsairport.aero
 http://progress.brusselsairport.aero\:_media_NfsProgress/)
  :
  mount: external mount: argv[3] = -v
  mount: external mount: argv[4] = -o
  mount: external mount: argv[5] = rw,vers=3,tcp
  mount.nfs: timeout set for Tue Dec 16 08:56:47 2014
  mount.nfs: trying text-based options 'vers=3,tcp,addr=10.110.56.20'
  mount.nfs: prog 13, trying vers=3, prot=6
  mount.nfs: portmap query failed: RPC: Program not registered
  mount.nfs: requested NFS version or transport protocol is not
  supported
 
 
  From vdsm.log:
  Thread-277::ERROR::2014-12-16
 
 
 08:46:32,504::storageServer::211::Storage.StorageServer.MountConnection::(connect)
 Mount failed: (32, ';mount.nfs: requested NFS version or transport
 protocol is not supported\n')
  Traceback (most recent call last):
File /usr/share/vdsm/storage/storageServer.py, line 209, in
  connect
  self._mount.mount(self.options, self._vfsType)
File /usr/share/vdsm/storage/mount.py, line 223, in mount
  return self._runcmd(cmd, timeout)
File /usr/share/vdsm/storage/mount.py, line 239, in _runcmd
  raise MountError(rc, ;.join((out, err)))
  MountError: (32, ';mount.nfs: requested NFS version or transport
  protocol is not supported\n')
  Thread-277::ERROR::2014-12-16
  08:46:32,508::hsm::2433::Storage.HSM::(connectStorageServer)
 Could not
  connect to storageServer
  Traceback (most recent call last):
File /usr/share/vdsm/storage/hsm.py, line 2430, in
  connectStorageServer
  conObj.connect()
File /usr/share/vdsm/storage/storageServer.py, line 328, in
  connect
  return self._mountCon.connect()
File /usr/share/vdsm/storage/storageServer.py, line 217, in
  connect
  raise e
  MountError: (32, ';mount.nfs: requested NFS version or transport
  protocol is not supported\n')
 
 
  Any ideas? The rest (4 others) didn't have any problems...
 
 
  plain text document attachment (ATT1)
  ___
  Users mailing list
  Users@ovirt.org mailto:Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users

 # yum install -y nfs-utils

 ?



 --

 Med Vänliga Hälsningar

 
 ---
 Karli Sjöberg
 Swedish University of Agricultural Sciences Box 7079 (Visiting Address
 Kronåsvägen 8)
 S-750 07 Uppsala, Sweden
 Phone:  +46-(0)18-67 15 66 tel:%2B46-%280%2918-67%2015%2066
 karli.sjob...@slu.se mailto:karli.sjob...@slu.se



 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS

2014-12-16 Thread Nikolai Sednev
Hi, 
Can you check that iptables is configured correctly, then put the host into 
maintenance, power-cycle it, and once it is powered up and reachable, activate it in 
the web UI? 


Thanks in advance. 

Best regards, 
Nikolai 
 
Nikolai Sednev 
Senior Quality Engineer at Compute team 
Red Hat Israel 
34 Jerusalem Road, 
Ra'anana, Israel 43501 

Tel: +972 9 7692043 
Mobile: +972 52 7342734 
Email: nsed...@redhat.com 
IRC: nsednev 
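
A quick way to inspect the active rules for the NFS-related ports before power-cycling anything (a sketch only; 2049 is the NFS port, 111 is rpcbind):

# iptables -S | less
# iptables -L -n -v | grep -E '2049|111'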

- Original Message -

From: users-requ...@ovirt.org 
To: users@ovirt.org 
Sent: Tuesday, December 16, 2014 3:31:27 PM 
Subject: Users Digest, Vol 39, Issue 96 

Send Users mailing list submissions to 
users@ovirt.org 

To subscribe or unsubscribe via the World Wide Web, visit 
http://lists.ovirt.org/mailman/listinfo/users 
or, via email, send a message with subject or body 'help' to 
users-requ...@ovirt.org 

You can reach the person managing the list at 
users-ow...@ovirt.org 

When replying, please edit your Subject line so it is more specific 
than Re: Contents of Users digest... 


Today's Topics: 

1. Re: NFS (Nir Soffer) 
2. Re: Problem after update ovirt to 3.5 (Simone Tiraboschi) 
3. Re: Problem after update ovirt to 3.5 (Yedidyah Bar David) 
4. Re: How to update zanata's source text ? (Alexander Wels) 
5. Re: bash: ./autogen.sh: No such file or directory (Nir Soffer) 
6. Re: VM disk tab doesn't show storage name after 3.5 upgrade 
(Nir Soffer) 
7. Re: NFS (Bob Doolittle) 


-- 

Message: 1 
Date: Tue, 16 Dec 2014 07:45:57 -0500 (EST) 
From: Nir Soffer nsof...@redhat.com 
To: Koen Vanoppen vanoppen.k...@gmail.com 
Cc: users@ovirt.org 
Subject: Re: [ovirt-users] NFS 
Message-ID: 
1775969167.13951943.1418733957101.javamail.zim...@redhat.com 
Content-Type: text/plain; charset=utf-8 

- Original Message - 
 From: Koen Vanoppen vanoppen.k...@gmail.com 
 To: users@ovirt.org 
 Sent: Tuesday, December 16, 2014 10:00:32 AM 
 Subject: [ovirt-users] NFS 
 
 Dear all, 
 
 We recently added 2 hypervisors to the domain on ovirt, but for some reason 
 they can't connect to the nfs share: 
 When I manually try to mount the nfs-share ([root@ovirthyp01dev ~]# mount 
 -vvv -t nfs -o vers=3,tcp progress:/media/NfsProgress /rhev/data-center/mnt/ 

Looks like your server does not accept nfs version 3 - does it work 
if you remove the vers=3 option? 

 progress.brusselsairport.aero \:_media_NfsProgress/) 
 : 
 mount: external mount: argv[3] = -v 
 mount: external mount: argv[4] = -o 
 mount: external mount: argv[5] = rw,vers=3,tcp 
 mount.nfs: timeout set for Tue Dec 16 08:56:47 2014 
 mount.nfs: trying text-based options 'vers=3,tcp,addr=10.110.56.20' 
 mount.nfs: prog 13, trying vers=3, prot=6 
 mount.nfs: portmap query failed: RPC: Program not registered 
 mount.nfs: requested NFS version or transport protocol is not supported 
 
 From vdsm.log: 
 Thread-277::ERROR::2014-12-16 
 08:46:32,504::storageServer::211::Storage.StorageServer.MountConnection::(connect)
  
 Mount failed: (32, ';mount.nfs: requested NFS version or transport protocol 
 is not supported\n') 
 Traceback (most recent call last): 
 File /usr/share/vdsm/storage/storageServer.py, line 209, in connect 
 self._mount.mount(self.options, self._vfsType) 
 File /usr/share/vdsm/storage/mount.py, line 223, in mount 
 return self._runcmd(cmd, timeout) 
 File /usr/share/vdsm/storage/mount.py, line 239, in _runcmd 
 raise MountError(rc, ;.join((out, err))) 
 MountError: (32, ';mount.nfs: requested NFS version or transport protocol is 
 not supported\n') 
 Thread-277::ERROR::2014-12-16 
 08:46:32,508::hsm::2433::Storage.HSM::(connectStorageServer) Could not 
 connect to storageServer 
 Traceback (most recent call last): 
 File /usr/share/vdsm/storage/hsm.py, line 2430, in connectStorageServer 
 conObj.connect() 
 File /usr/share/vdsm/storage/storageServer.py, line 328, in connect 
 return self._mountCon.connect() 
 File /usr/share/vdsm/storage/storageServer.py, line 217, in connect 
 raise e 
 MountError: (32, ';mount.nfs: requested NFS version or transport protocol is 
 not supported\n') 
 
 Any ideas? The rest (4 others) didn't have any problems... 

4 other servers? 

Try to compare the configuration between these servers. 

Nir 


-- 

Message: 2 
Date: Tue, 16 Dec 2014 07:48:17 -0500 (EST) 
From: Simone Tiraboschi stira...@redhat.com 
To: Juan Jose jj197...@gmail.com 
Cc: users@ovirt.org 
Subject: Re: [ovirt-users] Problem after update ovirt to 3.5 
Message-ID: 
1161380684.12062869.1418734097118.javamail.zim...@redhat.com 
Content-Type: text/plain; charset=utf-8 



- Original Message - 
 From: Juan Jose jj197...@gmail.com 
 To: Yedidyah Bar David d...@redhat.com, sbona...@redhat.com 
 Cc: users@ovirt.org 
 Sent: Tuesday, December 16, 2014 1:03:17 PM 
 Subject: Re: [ovirt-users] Problem after update ovirt to 3.5 
 
 Hello everybody, 
 
 It was the firewall, after upgrade my engine the NFS configuration had

[ovirt-users] NFS Storage domain backup

2014-10-31 Thread Mohyedeen Nazzal
Hi ,

I'm using an NFS storage domain. Is it possible to back up the domain manually, or is
there an API in oVirt which supports this?

For a manual backup, is it possible to use tools like rsync, or will the data not be
consistent?

Any ideas for possible solutions? We don't have a dedicated storage box.

Thanks,
Mohyedeen.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS Storage domain backup

2014-10-31 Thread Sven Kieske


On 31/10/14 10:13, Mohyedeen Nazzal wrote:
 I'm using NFS storage domain, is it possible to backup the domain manually?
 or is there an api in ovirt which support this?
 
 For manual backup, is possible to use tools like rsync ? or the data will
 not be consistent ?
 
 Any idea of possible solutions for this?;as we don't have dedicated storage
 box.

Just detach the storage domain (for consistency).
Then you can tar the whole thing; other methods may work too, but most
do not cope very well with sparse files.
Use tar with the -S flag.

HTH

PS: There is also a backup API, and you can try to back up running
VMs with live snapshots.
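
As a rough sketch of the detach-and-tar approach, assuming the detached domain lives under /exports/data on the NFS server and /backup and /restore/target are placeholder paths:

# tar -cSf /backup/ov-data-domain.tar -C /exports data     # -S keeps sparse files sparse
# tar -xSf /backup/ov-data-domain.tar -C /restore/target   # restore, again sparse-aware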

-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH  Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS Storage domain backup

2014-10-31 Thread Mohyedeen Nazzal
Thanks Sven,

Detaching the domain is not an option; sorry if I did not make it clear, I meant
backing up the storage domain while the VMs are running.

Also, regarding the live snapshot option: the live snapshot simply creates a second
disk for the VM and starts writing to it. It does not do any backup, just a restore
point.

Thanks,
Mohye.

On Fri, Oct 31, 2014 at 12:27 PM, Sven Kieske s.kie...@mittwald.de wrote:



 On 31/10/14 10:13, Mohyedeen Nazzal wrote:
  I'm using NFS storage domain, is it possible to backup the domain
 manually?
  or is there an api in ovirt which support this?
 
  For manual backup, is possible to use tools like rsync ? or the data will
  not be consistent ?
 
  Any idea of possible solutions for this?;as we don't have dedicated
 storage
  box.

 Just detach the storage domain (for consistency).
 Than you can tar the whole thing, maybe other methods
 work too, but most do not very well when it comes to sparse files.
 Use tar with the -S flag.

 HTH

 PS: There is also a backup api and you can try to backup running
 vms with live snapshots.

 --
 Mit freundlichen Grüßen / Regards

 Sven Kieske

 Systemadministrator
 Mittwald CM Service GmbH  Co. KG
 Königsberger Straße 6
 32339 Espelkamp
 T: +49-5772-293-100
 F: +49-5772-293-333
 https://www.mittwald.de
 Geschäftsführer: Robert Meyer
 St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
 Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS Storage domain backup

2014-10-31 Thread Sven Kieske


On 31/10/14 10:39, Mohyedeen Nazzal wrote:
 Also regarding the Live Snapshot option, The live snapshot simply create a
 second hardisk for the VM and start to write on it.. it does not do any
 backup just a restore point.

Yes, exactly, so you can safely back up the first disk.
You then have a consistent point-in-time backup.

But it depends on your workload; if you run databases inside the VM,
you will want a backup agent inside the VM.

There is plenty of documentation on this topic available.
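
As an illustration only (the paths and UUIDs below are placeholders and will differ per setup), copying a base volume sparsely after taking a live snapshot could look like:

# rsync -aS /rhev/data-center/mnt/<server>:_<export>/<sd-uuid>/images/<img-uuid>/ /backup/vm-disk/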

-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH  Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS service not starting after reboot

2014-05-12 Thread Sandro Bonazzola
Il 11/05/2014 22:19, Itamar Heim ha scritto:
 On 04/14/2014 02:44 AM, Nauman Abbas wrote:
 Hello all

 Need some quick help on this. I restarted one of my oVirt nodes and it
 doesn't seem to go up after that. I have done some troubleshooting and
 found out that the nfs-server service is not going up. Here's the error.



 [root@ovirtnode2 /]# systemctl start nfs-server.service
 Job for nfs-server.service failed. See 'systemctl status
 nfs-server.service' and 'journalctl -xn' for details.

 [root@ovirtnode2 /]# systemctl status nfs-server.service
 nfs-server.service - NFS Server
 Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled)
 Active: failed (Result: exit-code) since Mon 2014-04-14 06:43:15
 UTC; 18s ago
Process: 2751 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS $RPCNFSDCOUNT
 (code=exited, status=203/EXEC)
Process: 2749 ExecStartPre=/usr/sbin/exportfs -r (code=exited,
 status=0/SUCCESS)
Process: 2747
 ExecStartPre=/usr/libexec/nfs-utils/scripts/nfs-server.preconfig
 (code=exited, status=0/SUCCESS)

 Apr 14 06:43:15 ovirtnode2 exportfs[2749]: exportfs: can't open
 /var/lib/nfs/rmtab for reading
 Apr 14 06:43:15 ovirtnode2 systemd[2751]: Failed at step EXEC spawning
 /usr/sbin/rpc.nfsd: No such file or directory

It looks like the nfs-utils package has been removed, or /usr/sbin/rpc.nfsd was 
deleted by hand.
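
If that is the case, reinstalling the package and recreating the missing rmtab should be enough to let the unit start again (just a sketch, assuming a yum-based EL/oVirt node host):

# yum reinstall -y nfs-utils
# touch /var/lib/nfs/rmtab
# systemctl start nfs-server.service
# systemctl status nfs-server.service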


 Apr 14 06:43:15 ovirtnode2 systemd[1]: nfs-server.service: main process
 exited, code=exited, status=203/EXEC
 Apr 14 06:43:15 ovirtnode2 systemd[1]: Failed to start NFS Server.
 Apr 14 06:43:15 ovirtnode2 systemd[1]: Unit nfs-server.service entered
 failed state.


 Would be great if anyone can help sorting this out.

 Regards

 Nauman Abbas
 Assistant System Administrator (LMS),
 Room No. A-207, SEECS,
 National University of Sciences  Technology,
 + 92 321 5359946


 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

 
 was this resolved?
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS service not starting after reboot

2014-05-11 Thread Itamar Heim

On 04/14/2014 02:44 AM, Nauman Abbas wrote:

Hello all

Need some quick help on this. I restarted one of my oVirt nodes and it
doesn't seem to go up after that. I have done some troubleshooting and
found out that the nfs-server service is not going up. Here's the error.



[root@ovirtnode2 /]# systemctl start nfs-server.service
Job for nfs-server.service failed. See 'systemctl status
nfs-server.service' and 'journalctl -xn' for details.

[root@ovirtnode2 /]# systemctl status nfs-server.service
nfs-server.service - NFS Server
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled)
Active: failed (Result: exit-code) since Mon 2014-04-14 06:43:15
UTC; 18s ago
   Process: 2751 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS $RPCNFSDCOUNT
(code=exited, status=203/EXEC)
   Process: 2749 ExecStartPre=/usr/sbin/exportfs -r (code=exited,
status=0/SUCCESS)
   Process: 2747
ExecStartPre=/usr/libexec/nfs-utils/scripts/nfs-server.preconfig
(code=exited, status=0/SUCCESS)

Apr 14 06:43:15 ovirtnode2 exportfs[2749]: exportfs: can't open
/var/lib/nfs/rmtab for reading
Apr 14 06:43:15 ovirtnode2 systemd[2751]: Failed at step EXEC spawning
/usr/sbin/rpc.nfsd: No such file or directory
Apr 14 06:43:15 ovirtnode2 systemd[1]: nfs-server.service: main process
exited, code=exited, status=203/EXEC
Apr 14 06:43:15 ovirtnode2 systemd[1]: Failed to start NFS Server.
Apr 14 06:43:15 ovirtnode2 systemd[1]: Unit nfs-server.service entered
failed state.


Would be great if anyone can help sorting this out.

Regards

Nauman Abbas
Assistant System Administrator (LMS),
Room No. A-207, SEECS,
National University of Sciences  Technology,
+ 92 321 5359946


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



was this resolved?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] NFS service not starting after reboot

2014-04-17 Thread Nauman Abbas
Hello all

Need some quick help on this. I restarted one of my oVirt nodes and it
doesn't seem to go up after that. I have done some troubleshooting and
found out that the nfs-server service is not going up. Here's the error.



[root@ovirtnode2 /]# systemctl start nfs-server.service
Job for nfs-server.service failed. See 'systemctl status
nfs-server.service' and 'journalctl -xn' for details.

[root@ovirtnode2 /]# systemctl status nfs-server.service
nfs-server.service - NFS Server
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled)
   Active: failed (Result: exit-code) since Mon 2014-04-14 06:43:15 UTC;
18s ago
  Process: 2751 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS $RPCNFSDCOUNT
(code=exited, status=203/EXEC)
  Process: 2749 ExecStartPre=/usr/sbin/exportfs -r (code=exited,
status=0/SUCCESS)
  Process: 2747
ExecStartPre=/usr/libexec/nfs-utils/scripts/nfs-server.preconfig
(code=exited, status=0/SUCCESS)

Apr 14 06:43:15 ovirtnode2 exportfs[2749]: exportfs: can't open
/var/lib/nfs/rmtab for reading
Apr 14 06:43:15 ovirtnode2 systemd[2751]: Failed at step EXEC spawning
/usr/sbin/rpc.nfsd: No such file or directory
Apr 14 06:43:15 ovirtnode2 systemd[1]: nfs-server.service: main process
exited, code=exited, status=203/EXEC
Apr 14 06:43:15 ovirtnode2 systemd[1]: Failed to start NFS Server.
Apr 14 06:43:15 ovirtnode2 systemd[1]: Unit nfs-server.service entered
failed state.


Would be great if anyone can help sorting this out.

Regards

Nauman Abbas
Assistant System Administrator (LMS),
Room No. A-207, SEECS,
National University of Sciences  Technology,
+ 92 321 5359946
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users