[ovirt-users] Re: NFS Synology NAS (DSM 7)

2022-03-07 Thread Simon Kong
> Thanks for the replies, as it turns out it was nothing to do with
> /etc/exports or regular file system permissions.
> 
> Synology have applied their own brand of Access Control Lists (ACLs) to
> shared folders.
> 
> Basically I had to run the following commands to allow vdsm:kvm (36:36) to
> read and write to the share:
> 
> EXPORT_DIR=/volumeX/...
> 
> synoacltool -set-owner "$EXPORT_DIR" group kvm:allow:rwxpdDaARWcCo:fd--
> synoacltool -add "$EXPORT_DIR" user:vdsm:allow:rwxpdDaARWcCo:fd--
> synoacltool -add "$EXPORT_DIR" group:kvm:allow:rwxpdDaARWcCo:fd--
> 
> On Wed, 1 Sept 2021 at 04:28, Strahil Nikolov  wrote:

I have a Synology with DSM 7.0-41890 too. I had the same problem, and that fixed
it for me as well. Thanks!


[ovirt-users] Re: NFS storage was locked for 45 minutes after I attempted a clone operation

2021-09-03 Thread Strahil Nikolov via Users
That's really odd. Maybe you can try to clone it and then experiment on the
clone itself; once the reason is found, you can retry with the original.
My first step would be to check all the logs on the engine and on the SPM host for clues.
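A minimal starting point for that (a sketch; these are the default oVirt log
locations, and the grep patterns are just a suggestion):

# On the engine host:
grep -iE 'error|lock' /var/log/ovirt-engine/engine.log | tail -50

# On the SPM host:
grep -iE 'error|task' /var/log/vdsm/vdsm.log | tail -50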
Best Regards,
Strahil Nikolov
 
  On Fri, Sep 3, 2021 at 11:42, David White via Users wrote:   


[ovirt-users] Re: NFS storage was locked for 45 minutes after I attempted a clone operation

2021-09-03 Thread David White via Users
The save operation is going at a snail's pace, though.

Using "watch du -skh", I counted about 5-7 seconds per .1 GB (1/10 of 1GB).
It's a virtual disk, but I'm using over 200GB... so at this rate, it'll take a 
very long time.

I wonder if Pascal is on to something, and the export is happening over the 
frontend 1GB network?
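A quick way to check that (a sketch; <nfs-server-ip> is a placeholder for the
NFS server's address) is to ask the kernel which interface it would route the
traffic through, and to watch the per-interface byte counters while the copy
runs:

ip route get <nfs-server-ip>
watch -n 1 cat /proc/net/dev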

I'm going to cancel this operation, as the VM has now been down for close to an 
hour.

Sent with ProtonMail Secure Email.



[ovirt-users] Re: NFS storage was locked for 45 minutes after I attempted a clone operation

2021-09-03 Thread David White via Users
Update: perhaps I have discovered a bug somewhere?

I started another export after hours (it's very early morning hours right now, 
and I can tolerate a little downtime on this VM). I had the same symptoms, but 
this time, I just left it alone. I waited about 45 minutes with no progress.

I then ssh'd to the NFS destination (also on the 10Gbps storage network), and 
running tcpdump, I didn't see any traffic coming across the wire.
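For reference, a filter like this narrows the capture to NFS traffic from the
hypervisor (a sketch; the interface name and hypervisor IP are placeholders,
and 2049 is the standard NFS port):

tcpdump -ni <storage-iface> host <hypervisor-ip> and port 2049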

So I then powered off my VM, and I immediately began to see a new backup image 
appear in my NFS export. 

I wonder if the VM was trying to snapshot the memory and there wasn't enough
memory on the host or something? The VM has 16GB of RAM, and there are multiple
VMs on that host (although the host itself has 64GB of physical RAM, so that
should have been plenty).
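One way to test that theory after the fact (a sketch; nothing oVirt-specific,
just standard tools on the host) is to look for memory pressure around the
time of the export:

free -h
dmesg -T | grep -iE 'oom|out of memory'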

Sent with ProtonMail Secure Email.



[ovirt-users] Re: NFS storage was locked for 45 minutes after I attempted a clone operation

2021-09-03 Thread David White via Users
In this particular case, I have 1 (one) 250GB virtual disk.

Sent with ProtonMail Secure Email.

‐‐‐ Original Message ‐‐‐

On Tuesday, August 31st, 2021 at 11:21 PM, Strahil Nikolov 
 wrote:



[ovirt-users] Re: NFS Synology NAS (DSM 7)

2021-09-01 Thread Nir Soffer
On Wed, Sep 1, 2021 at 9:55 AM Maton, Brett 
wrote:

> Thanks for the replies, as it turns out it was nothing to do with
> /etc/exports or regular file system permissions.
>
> Synology have applied their own brand of Access Control Lists (ACLs) to
> shared folders.
>

What kind of OS are they using? (freebsd?)


>
> Basically I had to run the following commands to allow vdsm:kvm (36:36) to
> read and write to the share:
>
> EXPORT_DIR=/volumeX/...
>
> synoacltool -set-owner "$EXPORT_DIR" group kvm:allow:rwxpdDaARWcCo:fd--
> synoacltool -add "$EXPORT_DIR" user:vdsm:allow:rwxpdDaARWcCo:fd--
> synoacltool -add "$EXPORT_DIR" group:kvm:allow:rwxpdDaARWcCo:fd--
>

It would be useful to add this solution to this page:
https://www.ovirt.org/develop/troubleshooting-nfs-storage-issues.html

You can click "Edit this page" on the bottom and add the info:
https://github.com/oVirt/ovirt-site/edit/master/source/develop/troubleshooting-nfs-storage-issues.md

Nir




[ovirt-users] Re: NFS Synology NAS (DSM 7)

2021-09-01 Thread Maton, Brett
Thanks for the replies, as it turns out it was nothing to do with
/etc/exports or regular file system permissions.

Synology have applied their own brand of Access Control Lists (ACLs) to
shared folders.

Basically I had to run the following commands to allow vdsm:kvm (36:36) to
read and write to the share:

EXPORT_DIR=/volumeX/...

synoacltool -set-owner "$EXPORT_DIR" group kvm:allow:rwxpdDaARWcCo:fd--
synoacltool -add "$EXPORT_DIR" user:vdsm:allow:rwxpdDaARWcCo:fd--
synoacltool -add "$EXPORT_DIR" group:kvm:allow:rwxpdDaARWcCo:fd--

On Wed, 1 Sept 2021 at 04:28, Strahil Nikolov  wrote:

> I guess you need to try:
> all_squash + anonuid=36 + anongid=36
>
>
> Best Regards,
> Strahil Nikolov


[ovirt-users] Re: NFS storage was locked for 45 minutes after I attempted a clone operation

2021-08-31 Thread Pascal D
On my setup I noticed that creating a template from a VM takes a very long time
(around 35 minutes for a VM disk of around 35G), even though my NFS storage
network is 10G with 9000 MTU.

I am wondering if creating a template is done over ovirtmngt instead of the
storage network, as ovirtmngt is only 1G. But even then the numbers don't add
up compared to what I measure with iperf3 when I test my network speeds.

Any idea why it is taking so long to create a template?
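For comparison, a raw throughput test pinned to the storage network looks
something like this (a sketch; the addresses are placeholders for the
storage-network IPs):

# On the NFS server:
iperf3 -s

# On the hypervisor, binding to the storage interface:
iperf3 -c <nfs-server-ip> -B <local-storage-ip> -t 30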


[ovirt-users] Re: NFS Synology NAS (DSM 7)

2021-08-31 Thread Strahil Nikolov via Users
I guess you need to try:
all_squash + anonuid=36 + anongid=36
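In /etc/exports form that would look something like this (a sketch; the path
and client address are placeholders, mirroring the export line elsewhere in
the thread):

/volumeX/ov_nas  x.x.x.x(rw,async,all_squash,anonuid=36,anongid=36)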

Best Regards,
Strahil Nikolov
 
  On Fri, Aug 27, 2021 at 23:44, Alex K wrote:   


[ovirt-users] Re: NFS storage was locked for 45 minutes after I attempted a clone operation

2021-08-31 Thread Strahil Nikolov via Users
Hi David,
how big are your VM disks? I suppose you have several very large ones.

Best Regards,
Strahil Nikolov

Sent from Yahoo Mail on Android 
 
  On Thu, Aug 26, 2021 at 3:27, David White via Users wrote:   
I have an HCI cluster running on Gluster storage. I exposed an NFS share into 
oVirt as a storage domain so that I could clone all of my VMs (I'm preparing to 
move physically to a new datacenter). I got 3-4 VMs cloned perfectly fine 
yesterday. But then this evening, I tried to clone a big VM, and it caused the 
disk to lock up. The VM went totally unresponsive, and I didn't see a way to 
cancel the clone. Nagios NRPE (on the client VM) was reporting a server load of
65+, but I was never able to establish an SSH connection.

Eventually, I tried restarting the ovirt-engine, per 
https://access.redhat.com/solutions/396753. When that didn't work, I powered 
down the VM completely. But the disks were still locked. So I then tried to put 
the storage domain into maintenance mode, but that wound up putting the entire 
domain into a "locked" state. Finally, eventually, the disks unlocked, and I 
was able to power the VM back online.

From start to finish, my VM was down for about 45 minutes, including the time
when NRPE was still sending data to Nagios.

What logs should I look at, and how can I troubleshoot what went wrong here, 
and hopefully avoid this from happening again?

Sent with ProtonMail Secure Email.


[ovirt-users] Re: NFS Synology NAS (DSM 7)

2021-08-27 Thread Alex K
On Fri, Aug 27, 2021, 19:48 Maton, Brett  wrote:

> Hi List,
>
>   I can't get oVirt 4.4.8.5-1.el8 (running on oVirt Node hosts) to connect
> to an NFS share on a Synology NAS.
>
>   I gave up trying to get the hosted engine deployed and put that on an
> iscsi volume instead...
>
>   The directory being exported from NAS is owned by vdsm / kvm (36:36)
> perms I've tried:
>   0750
>   0755
>   0777
>
> Tried auto / v3 / v4_0
>
>   As others have mentioned regarding NFS, if I connect manually from the
> host with
>
>   mount nas.mydomain.com:/volume1/ov_nas
>
>   It connects and works just fine.
>
>   If I try to add the share as a domain in oVirt I get
>
> Operation Cancelled
> Error while executing action Add Storage Connection: Permission settings
> on the specified path do not allow access to the storage.
> Verify permission settings on the specified storage path.
>
> When tailing /var/log/messages on the oVirt host, I see this message
> appear (I changed the domain name for this post so the dots might be
> transcoded in reality):
>
> Aug 27 17:36:07 ov001 systemd[1]: 
> rhev-data\x2dcenter-mnt-nas.mydomain.com:_volume1_ov__nas.mount:
> Succeeded.
>
>   The NAS is running the 'new' DSM 7, /etc/exports looks like this:
>
> /volume1/ov_nas  x.x.x.x(rw,async,no_root_squash,anonuid=36,anongid=36)
>
Did you try squashing to root?

>
>
> (reloaded with exportfs -ra)
>
>
> Any suggestions appreciated.
>
> Regards,
> Brett
>
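Regarding the "Permission settings on the specified path" error above, it may
also be worth checking whether the vdsm user itself can write to the share
once it is mounted (a sketch; /mnt is just a scratch mount point, and the -s
is needed because vdsm has no login shell):

mount nas.mydomain.com:/volume1/ov_nas /mnt
su -s /bin/bash vdsm -c 'touch /mnt/writetest && rm /mnt/writetest'
umount /mnt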


[ovirt-users] Re: NFS import storage domain permissions error

2020-06-27 Thread Joop
On 27-6-2020 19:49, Joop wrote:
> Hi All,
>
> I've got an error when I try to import a storage domain from a Synology
> NAS that used to be a 4.3.10 oVirt installation but no matter what I try
> I can't get it to import. I keep getting a permission denied error, even
> though the same settings worked with the previous oVirt version.
> This is the log:
>
I reinstalled using the latest CentOS 7 and oVirt 4.3.10 via the cockpit HCI
Gluster setup, and the 'old' NFS storage domain imported at once.

Joop


[ovirt-users] Re: NFS storage domain forced to use /exports/data ?

2020-05-06 Thread kelley bryan
Hey, wait: now I see two mounts from the NAS server. What is going on?
One is the expected /exports/data.
The other is
/rhev/data-center/mnt/fsf-dal1001d-fz.adn.networklayer.com


[ovirt-users] Re: NFS permissions error on ISODomain file with correct permissions

2020-04-03 Thread Shareef Jalloq
We're using a Synology SA3400 and you don't get much of a choice to
configure NFS in DSM.  But anyway, I now know that I need to check the
permissions on the NAS directly.

Thanks for the help.

[ovirt-users] Re: NFS permissions error on ISODomain file with correct permissions

2020-04-02 Thread Strahil Nikolov
On April 2, 2020 9:51:47 PM GMT+03:00, eev...@digitaldatatechs.com wrote:

[ovirt-users] Re: NFS permissions error on ISODomain file with correct permissions

2020-04-02 Thread eevans
It depends on the NAS. What NAS do you have? How is NFS set up, and what
version? It could be several different things, but without knowing the
specific setup, I'd be guessing.

 

Eric Evans

Digital Data Services LLC.

304.660.9080



 



[ovirt-users] Re: NFS permissions error on ISODomain file with correct permissions

2020-04-02 Thread Shareef Jalloq
OK, I think this is solved, and it was a permissions issue. I eventually
logged into the NAS to see if I could see anything different in the exports,
and the ISO Domain didn't have any group ownership rights.

This is strange because when you browse the directory from an oVirt node,
it shows each directory and file as having full permissions. Is this an NFS
thing?
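One way to see that mismatch directly (a sketch; the node-side path is taken
from the log earlier in the thread, and the NAS-side path is a guess based on
the export name) is to compare numeric ownership on both sides, since
"ls -ln" makes squashed or unmapped IDs visible:

# On the oVirt node:
ls -ln /rhev/data-center/mnt/nas-01.phoelex.com:_volume2_isostore/41cebb4b-c164-4956-8f44-6426170cd9f5/images

# On the NAS:
ls -ln /volume2/isostore/41cebb4b-c164-4956-8f44-6426170cd9f5/images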



[ovirt-users] Re: NFS permissions error on ISODomain file with correct permissions

2020-04-02 Thread Shareef Jalloq
On your second point about an Export domain, how do you configure oVirt to
look in an Export domain for the Run Once setup?



[ovirt-users] Re: NFS permissions error on ISODomain file with correct permissions

2020-04-02 Thread Shareef Jalloq
This doesn't seem, to me, to be an issue with the NAS or the mounts.  In my
original post you can see the VFD files in the mounted directory under
/rhev.  Is that not what you're asking?

I have both the ISODomain and two DataDomain's mounted.  I have a bunch of
VM's running off this NAS with no issues.  The permissions to the VFD files
are all correct and I can create and list files in the ISODomain with no
issue.



[ovirt-users] Re: NFS permissions error on ISODomain file with correct permissions

2020-03-31 Thread eevans
If you issue the mount command, does the path show?

mount | grep

 

It looks like the ISO domain is on a NAS, so I would try a mount of the actual
folder on the ovirt node to make sure you are able to access it. Also, can you
place it in an export domain instead and try?

I think in version 4.2+ it just needs to be in an export domain.

 

Also make sure the NAS path is correct. Most NAS use a 'data' or 'shares'
prefix for NFS mounts. (nasname:/shares/

 

 

Eric Evans

Digital Data Services LLC.

304.660.9080



 

From: Shareef Jalloq  
Sent: Tuesday, March 31, 2020 4:59 AM
To: users@ovirt.org
Subject: [ovirt-users] NFS permissions error on ISODomain file with correct 
permissions

 

Hi,

 

I asked this question in another thread but it seems to have been lost in the 
noise so I'm reposting with a more descriptive subject.

 

I'm trying to start a Windows VM and use the virtio-win VFD floppy to get the 
drivers but the VM startup fails due to a permissions issue detailed below.  
The permissions look fine to me so why can't the VFD be read?

 

Shareef.

 

I found a permissions issue in the engine.log:

 

2020-03-25 21:28:41,662Z ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(ForkJoinPool-1-worker-14) [] EVENT_ID: VM_DOWN_ERROR(119), VM win-2019 is down 
with error. Exit message: internal error: qemu unexpectedly closed the monitor: 
2020-03-25T21:28:40.324426Z qemu-kvm: -drive 
file=/rhev/data-center/mnt/nas-01.phoelex.com:_volume2_isostore/41cebb4b-c164-4956-8f44-6426170cd9f5/images/----/virtio-win_servers_amd64.vfd,format=raw,if=none,id=drive-ua-0b9c28b5-f75c-4575-ad85-b5b836f67d61,readonly=on:
 Could not open 
'/rhev/data-center/mnt/nas-01.phoelex.com:_volume2_isostore/41cebb4b-c164-4956-8f44-6426170cd9f5/images/----/virtio-win_servers_amd64.vfd':
 Permission denied.

 

But when I look at that path on the node in question, every folder and the 
final file have the correct vdsm:kvm permissions:

 

[root@ovirt-node-01 ~]# ll 
/rhev/data-center/mnt/nas-01.phoelex.com:_volume2_isostore/41cebb4b-c164-4956-8f44-6426170cd9f5/images/----/virtio-win_servers_amd64.vfd
-rwxrwxrwx. 1 vdsm kvm 2949120 Mar 25 21:24 
/rhev/data-center/mnt/nas-01.phoelex.com:_volume2_isostore/41cebb4b-c164-4956-8f44-6426170cd9f5/images/----/virtio-win_servers_amd64.vfd

 

The files were uploaded to the ISO domain using:

 

engine-iso-uploader --iso-domain=iso_storage upload virtio-win.iso 
virtio-win_servers_amd64.vfd 




[ovirt-users] Re: NFS ISO Domain - Outage ?

2020-03-11 Thread eevans
I have noticed in testing that if the ISO domain goes down, it does affect the
VMs. The same server also has the export domain on it, which I think houses
most of the hard disks. Once the host came back up, everything resumed normal
operations.

Eric Evans
Digital Data Services LLC.
304.660.9080


-Original Message-
From: Strahil Nikolov  
Sent: Wednesday, March 11, 2020 5:35 AM
To: users@ovirt.org; Nardus Geldenhuys 
Subject: [ovirt-users] Re: NFS ISO Domain - Outage ?

On March 11, 2020 9:45:30 AM GMT+02:00, Nardus Geldenhuys  
wrote:
>Hi Ovirt Mailing-list
>
>Hope you are well. We had an outage over several ovirt clusters. It
>looks like we had the same ISO NFS domain shared to all of them; many
>of the VMs had a CD attached. The NFS server went down for an
>hour, and all hell broke loose when it did. Some of the
>ovirt nodes became "red/unresponsive" on the ovirt dashboard. We have
>now learned to split the NFS server for ISOs and/or remove the ISOs
>when done.
>
>Has anyone seen similar issues with an NFS ISO Domain? Are there special
>options we need to pass to the mount to get around this? Can we put the
>ISOs somewhere else?
>
>Regards
>
>Nardus

Hi Nardus,

The iso domain is deprecated, and there are some issues when uploading ISOs to
block-based data domains. Using ISOs uploaded to a gluster-based data domain
works pretty well (I'm using it in my LAB), but you need to test properly
before implementing it in Prod.

As far as I know, oVirt makes I/O checks frequently, so even the hard mount
option won't help.

Is your NFS clusterized? If not, you may consider clusterizing it.

Actually, did your VMs get paused or did they completely crash?

Best Regards,
Strahil Nikolov
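For what it's worth, the options oVirt actually used for the mount can be read
back on the host (a sketch; nfsstat -m lists every NFS mount with its
effective options):

nfsstat -m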


[ovirt-users] Re: NFS ISO Domain - Outage ?

2020-03-11 Thread Strahil Nikolov
On March 11, 2020 11:35:52 AM GMT+02:00, Nardus Geldenhuys  
wrote:
>Hi
>
>Thanks for the response. Seems that they got paused, all came right
>after
>the NFS server came backup.
>
>Regards
>
>Nardus
>
>On Wed, 11 Mar 2020 at 11:34, Strahil Nikolov 
>wrote:
>
>> On March 11, 2020 9:45:30 AM GMT+02:00, Nardus Geldenhuys <
>> nard...@gmail.com> wrote:
>> >Hi Ovirt Mailing-list
>> >
>> >Hope you are well. We had an outage over several oVirt clusters. It looks
>> >like we had the same ISO NFS domain shared to all of them, and many of the
>> >VMs had a CD attached to them. The NFS server went down for an hour, and
>> >all hell broke loose. Some of the oVirt nodes
>> >became "red/unresponsive" on the oVirt dashboard. We have learned now to
>> >split the NFS server for ISOs and/or remove the ISOs when done.
>> >
>> >Has anyone seen similar issues with an NFS ISO domain? Are there special
>> >options we need to pass to the mount to get around this? Can we put the
>> >ISOs somewhere else?
>> >
>> >Regards
>> >
>> >Nardus
>>
>> Hi Nardus,
>>
>> The ISO domain is deprecated, but there are some issues when uploading
>> ISOs to block-based data domains. Using ISOs uploaded to a gluster-based
>> data domain works pretty well (I'm using it in my lab), but you need
>> to test properly before implementing it in production.
>>
>> As far as I know, oVirt makes I/O checks frequently, so even the 'hard'
>> mount option won't help.
>>
>> Is your NFS clustered? If not, you may consider clustering it.
>>
>> Actually, did your VMs get paused, or did they crash completely?
>>
>> Best Regards,
>> Strahil Nikolov
>>

In that case, you can consider clustering the NFS server if it is a 
Linux-based one.

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6BMKSMZBV7Y65GTFWRVEYXZG53S7QRH7/


[ovirt-users] Re: NFS ISO Domain - Outage ?

2020-03-11 Thread Nardus Geldenhuys
Hi

Thanks for the response. It seems that they got paused; all came right after
the NFS server came back up.

Regards

Nardus

On Wed, 11 Mar 2020 at 11:34, Strahil Nikolov  wrote:

> On March 11, 2020 9:45:30 AM GMT+02:00, Nardus Geldenhuys <
> nard...@gmail.com> wrote:
> >Hi Ovirt Mailing-list
> >
> >Hope you are well. We had an outage over several oVirt clusters. It looks
> >like we had the same ISO NFS domain shared to all of them, and many of the
> >VMs had a CD attached to them. The NFS server went down for an hour, and all
> >hell broke loose. Some of the oVirt nodes
> >became "red/unresponsive" on the oVirt dashboard. We have learned now to
> >split the NFS server for ISOs and/or remove the ISOs when done.
> >
> >Has anyone seen similar issues with an NFS ISO domain? Are there special
> >options we need to pass to the mount to get around this? Can we put the
> >ISOs somewhere else?
> >
> >Regards
> >
> >Nardus
>
> Hi Nardus,
>
> The ISO domain is deprecated, but there are some issues when uploading
> ISOs to block-based data domains. Using ISOs uploaded to a gluster-based
> data domain works pretty well (I'm using it in my lab), but you need
> to test properly before implementing it in production.
>
> As far as I know, oVirt makes I/O checks frequently, so even the 'hard'
> mount option won't help.
>
> Is your NFS clustered? If not, you may consider clustering it.
>
> Actually, did your VMs get paused, or did they crash completely?
>
> Best Regards,
> Strahil Nikolov
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7HJAYGM7W7KSVFYSSURF5TLFWLWY4CQV/


[ovirt-users] Re: NFS Storage Domain on OpenMediaVault

2019-12-02 Thread rwebb
Thanks for the link.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NGW7E7AAQ3M7UUN7NAMXEKW7WTISW6J5/


[ovirt-users] Re: NFS Storage Domain on OpenMediaVault

2019-12-02 Thread Amit Bawer
On Mon, Dec 2, 2019 at 7:21 PM Robert Webb  wrote:

> Thanks for the response.
>
> As it turned out, the issue was in the export, where I had to remove
> subtree_check and add no_root_squash, and it started working.
>
> Being new to the setup, I am still trying to work through some config
> issues and deciding if I want to continue. The main thing I am looking for
> is good HA and failover capability. I've been using Proxmox VE and I like
> the admin side of it, but failover still needs work. Simple things like auto
> migration when rebooting a host do not exist, and that is something I need.
>
For RHEV, you can check the Cluster scheduling policies:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/administration_guide/sect-scheduling_policies


> Robert
> --
> *From:* Strahil Nikolov 
> *Sent:* Sunday, December 1, 2019 2:54 PM
> *To:* users@ovirt.org ; Robert Webb 
> *Subject:* Re: [ovirt-users] NFS Storage Domain on OpenMediaVault
>
> Does the sanlock user have rights on ./dom_md/ids?
>
> Check the sanlock.service for issues.
> journalctl -u sanlock.service
>
> Best Regards,
> Strahil Nikolov
>
> On Sunday, 1 December 2019, 17:22:21 GMT+2, rw...@ropeguru.com <
> rw...@ropeguru.com> wrote:
>
>
> I have a clean install with openmediavault as the backend NFS and cannot get
> it to work. I keep getting permission errors even though I created a vdsm
> user and kvm group, and they are the owners of the directory on OMV with
> full permissions.
>
> The directory gets created on the NFS side for the host, but then I get the
> permission error and it is removed from the host, while the directory
> structure is left on the NFS server.
>
> Logs:
>
> From the engine:
>
> Error while executing action New NFS Storage Domain: Unexpected exception
>
> From the oVirt node log:
>
> 2019-11-29 10:03:02 136998 [30025]: open error -13 EACCES: no permission
> to open /rhev/data-center/mnt/192.168.1.56:
> _export_Datastore-oVirt/f38b19e4-8060-4467-860b-09cf606ccc15/dom_md/ids
> 2019-11-29 10:03:02 136998 [30025]: check that daemon user sanlock 179
> group sanlock 179 has access to disk or file.
>
> File system on Openmediavault:
>
> drwxrwsrwx+ 3 vdsm kvm 4096 Nov 29 10:03 .
> drwxr-xr-x  9 root root 4096 Nov 27 20:56 ..
> drwxrwsr-x+ 4 vdsm kvm 4096 Nov 29 10:03
> f38b19e4-8060-4467-860b-09cf606ccc15
>
> drwxrwsr-x+ 4 vdsm kvm 4096 Nov 29 10:03 .
> drwxrwsrwx+ 3 vdsm kvm 4096 Nov 29 10:03 ..
> drwxrwsr-x+ 2 vdsm kvm 4096 Nov 29 10:03 dom_md
> drwxrwsr-x+ 2 vdsm kvm 4096 Nov 29 10:03 images
>
> drwxrwsr-x+ 2 vdsm kvm4096 Nov 29 10:03 .
> drwxrwsr-x+ 4 vdsm kvm4096 Nov 29 10:03 ..
> -rw-rw+ 1 vdsm kvm0 Nov 29 10:03 ids
> -rw-rw+ 1 vdsm kvm 16777216 Nov 29 10:03 inbox
> -rw-rw+ 1 vdsm kvm0 Nov 29 10:03 leases
> -rw-rw-r--+ 1 vdsm kvm  343 Nov 29 10:03 metadata
> -rw-rw+ 1 vdsm kvm 16777216 Nov 29 10:03 outbox
> -rw-rw+ 1 vdsm kvm  1302528 Nov 29 10:03 xleases
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ILKNT57F6VUHEVKMOACLLQRAO3J364MC/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KH2H2V254Z7T7GP6NK3TOVDQBAPRBLZS/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/27WLJXFNPXK7MNPAIAXSNPR3T3YFLYSQ/


[ovirt-users] Re: NFS Storage Domain on OpenMediaVault

2019-12-02 Thread Robert Webb
Thanks for the response.

As it turned out, the issue was in the export, where I had to remove 
subtree_check and add no_root_squash, and it started working; see the sketch 
after this message.

Being new to the setup, I am still trying to work through some config issues 
and deciding if I want to continue. The main thing I am looking for is good HA 
and failover capability. I've been using Proxmox VE and I like the admin side 
of it, but failover still needs work. Simple things like auto migration when 
rebooting a host do not exist, and that is something I need.

Robert
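
A minimal sketch of an export with those options, assuming a Linux NFS server
and a hypothetical export path and host network:

# /etc/exports on the NFS server (path and network are hypothetical)
/export/ovirt  192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)

# apply the new options without restarting the NFS server
exportfs -ra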

From: Strahil Nikolov 
Sent: Sunday, December 1, 2019 2:54 PM
To: users@ovirt.org ; Robert Webb 
Subject: Re: [ovirt-users] NFS Storage Domain on OpenMediaVault

Does the sanlock user have rights on ./dom_md/ids?

Check the sanlock.service for issues.
journalctl -u sanlock.service

Best Regards,
Strahil Nikolov

On Sunday, 1 December 2019, 17:22:21 GMT+2, rw...@ropeguru.com wrote:


I have a clean install with openmediavault as the backend NFS and cannot get it 
to work. I keep getting permission errors even though I created a vdsm user and 
kvm group, and they are the owners of the directory on OMV with full 
permissions.

The directory gets created on the NFS side for the host, but then I get the 
permission error and it is removed from the host, while the directory structure 
is left on the NFS server.

Logs:

From the engine:

Error while executing action New NFS Storage Domain: Unexpected exception

From the oVirt node log:

2019-11-29 10:03:02 136998 [30025]: open error -13 EACCES: no permission to 
open 
/rhev/data-center/mnt/192.168.1.56:_export_Datastore-oVirt/f38b19e4-8060-4467-860b-09cf606ccc15/dom_md/ids
2019-11-29 10:03:02 136998 [30025]: check that daemon user sanlock 179 group 
sanlock 179 has access to disk or file.

File system on Openmediavault:

drwxrwsrwx+ 3 vdsm kvm 4096 Nov 29 10:03 .
drwxr-xr-x  9 root root 4096 Nov 27 20:56 ..
drwxrwsr-x+ 4 vdsm kvm 4096 Nov 29 10:03 f38b19e4-8060-4467-860b-09cf606ccc15

drwxrwsr-x+ 4 vdsm kvm 4096 Nov 29 10:03 .
drwxrwsrwx+ 3 vdsm kvm 4096 Nov 29 10:03 ..
drwxrwsr-x+ 2 vdsm kvm 4096 Nov 29 10:03 dom_md
drwxrwsr-x+ 2 vdsm kvm 4096 Nov 29 10:03 images

drwxrwsr-x+ 2 vdsm kvm4096 Nov 29 10:03 .
drwxrwsr-x+ 4 vdsm kvm4096 Nov 29 10:03 ..
-rw-rw+ 1 vdsm kvm0 Nov 29 10:03 ids
-rw-rw+ 1 vdsm kvm 16777216 Nov 29 10:03 inbox
-rw-rw+ 1 vdsm kvm0 Nov 29 10:03 leases
-rw-rw-r--+ 1 vdsm kvm  343 Nov 29 10:03 metadata
-rw-rw+ 1 vdsm kvm 16777216 Nov 29 10:03 outbox
-rw-rw+ 1 vdsm kvm  1302528 Nov 29 10:03 xleases
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to 
users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ILKNT57F6VUHEVKMOACLLQRAO3J364MC/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KH2H2V254Z7T7GP6NK3TOVDQBAPRBLZS/


[ovirt-users] Re: NFS Storage Domain on OpenMediaVault

2019-12-01 Thread Strahil Nikolov
Does the sanlock user have rights on ./dom_md/ids?

Check the sanlock.service for issues:
journalctl -u sanlock.service

Best Regards,
Strahil Nikolov

On Sunday, 1 December 2019, 17:22:21 GMT+2, rw...@ropeguru.com wrote:
 
I have a clean install with openmediavault as the backend NFS and cannot get it 
to work. I keep getting permission errors even though I created a vdsm user and 
kvm group, and they are the owners of the directory on OMV with full 
permissions.

The directory gets created on the NFS side for the host, but then I get the 
permission error and it is removed from the host, while the directory structure 
is left on the NFS server.

Logs:

From the engine:

Error while executing action New NFS Storage Domain: Unexpected exception

From the oVirt node log:

2019-11-29 10:03:02 136998 [30025]: open error -13 EACCES: no permission to 
open 
/rhev/data-center/mnt/192.168.1.56:_export_Datastore-oVirt/f38b19e4-8060-4467-860b-09cf606ccc15/dom_md/ids
2019-11-29 10:03:02 136998 [30025]: check that daemon user sanlock 179 group 
sanlock 179 has access to disk or file.

File system on Openmediavault:

drwxrwsrwx+ 3 vdsm kvm 4096 Nov 29 10:03 .
drwxr-xr-x  9 root root 4096 Nov 27 20:56 ..
drwxrwsr-x+ 4 vdsm kvm 4096 Nov 29 10:03 f38b19e4-8060-4467-860b-09cf606ccc15

drwxrwsr-x+ 4 vdsm kvm 4096 Nov 29 10:03 .
drwxrwsrwx+ 3 vdsm kvm 4096 Nov 29 10:03 ..
drwxrwsr-x+ 2 vdsm kvm 4096 Nov 29 10:03 dom_md
drwxrwsr-x+ 2 vdsm kvm 4096 Nov 29 10:03 images

drwxrwsr-x+ 2 vdsm kvm    4096 Nov 29 10:03 .
drwxrwsr-x+ 4 vdsm kvm    4096 Nov 29 10:03 ..
-rw-rw+ 1 vdsm kvm        0 Nov 29 10:03 ids
-rw-rw+ 1 vdsm kvm 16777216 Nov 29 10:03 inbox
-rw-rw+ 1 vdsm kvm        0 Nov 29 10:03 leases
-rw-rw-r--+ 1 vdsm kvm      343 Nov 29 10:03 metadata
-rw-rw+ 1 vdsm kvm 16777216 Nov 29 10:03 outbox
-rw-rw+ 1 vdsm kvm  1302528 Nov 29 10:03 xleases
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ILKNT57F6VUHEVKMOACLLQRAO3J364MC/
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IULXFX3UCJ3CEFHLXL5XJ7YAX64N3EDP/


[ovirt-users] Re: NFS Storage Domain on OpenMediaVault

2019-12-01 Thread Amit Bawer
On Sun, Dec 1, 2019 at 5:22 PM  wrote:

> I have a clean install with openmediavault as the backend NFS and cannot get
> it to work. I keep getting permission errors even though I created a vdsm
> user and kvm group, and they are the owners of the directory on OMV with
> full permissions.
>
> The directory gets created on the NFS side for the host, but then I get the
> permission error and it is removed from the host, while the directory
> structure is left on the NFS server.
>
> Logs:
>
> From the engine:
>
> Error while executing action New NFS Storage Domain: Unexpected exception
>
> From the oVirt node log:
>
> 2019-11-29 10:03:02 136998 [30025]: open error -13 EACCES: no permission
> to open /rhev/data-center/mnt/192.168.1.56:
> _export_Datastore-oVirt/f38b19e4-8060-4467-860b-09cf606ccc15/dom_md/ids
> 2019-11-29 10:03:02 136998 [30025]: check that daemon user sanlock 179
> group sanlock 179 has access to disk or file.
>

Make sure the sanlock user is a member of the vdsm and kvm groups:

id -a sanlock

should also list kvm and vdsm.

This is something that

vdsm-tool configure --force
systemctl restart libvirtd
systemctl restart vdsmd

should set if not set already.


>
> File system on Openmediavault:
>
> drwxrwsrwx+ 3 vdsm kvm 4096 Nov 29 10:03 .
> drwxr-xr-x  9 root root 4096 Nov 27 20:56 ..
> drwxrwsr-x+ 4 vdsm kvm 4096 Nov 29 10:03
> f38b19e4-8060-4467-860b-09cf606ccc15
>
> drwxrwsr-x+ 4 vdsm kvm 4096 Nov 29 10:03 .
> drwxrwsrwx+ 3 vdsm kvm 4096 Nov 29 10:03 ..
> drwxrwsr-x+ 2 vdsm kvm 4096 Nov 29 10:03 dom_md
> drwxrwsr-x+ 2 vdsm kvm 4096 Nov 29 10:03 images
>
> drwxrwsr-x+ 2 vdsm kvm 4096 Nov 29 10:03 .
> drwxrwsr-x+ 4 vdsm kvm 4096 Nov 29 10:03 ..
> -rw-rw+ 1 vdsm kvm0 Nov 29 10:03 ids
> -rw-rw+ 1 vdsm kvm 16777216 Nov 29 10:03 inbox
> -rw-rw+ 1 vdsm kvm0 Nov 29 10:03 leases
> -rw-rw-r--+ 1 vdsm kvm  343 Nov 29 10:03 metadata
> -rw-rw+ 1 vdsm kvm 16777216 Nov 29 10:03 outbox
> -rw-rw+ 1 vdsm kvm  1302528 Nov 29 10:03 xleases
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ILKNT57F6VUHEVKMOACLLQRAO3J364MC/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VJOP2HRCE63QKCELUW2KUUH3PFSNRKF5/


[ovirt-users] Re: NFS based - incremental backups

2019-09-27 Thread Strahil
Geo-replication requires master and slave volumes, but I'm not sure if the 
slave volume should also be replica 3.
Maybe in your case it could be replica 1 or something like that.

In KVM I would create a snapshot, back it up, and then merge all disks back 
into one (a.k.a. block commit).

So a simple script with a user for the API can allow you to do that without 
paying anything extra to a third party. A rough sketch of the idea is below.

Best Regards,
Strahil Nikolov
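
A minimal sketch of that snapshot/backup/commit cycle with plain libvirt tools,
assuming a VM named vm1 with one disk vda backed by
/var/lib/libvirt/images/vm1.qcow2 (all names hypothetical; with oVirt you would
drive the equivalent steps through the REST API or SDK):

# 1. external, disk-only snapshot: new writes go to an overlay file
virsh snapshot-create-as vm1 backupsnap --disk-only --atomic --no-metadata

# 2. the base image is now quiescent; copy it somewhere safe
cp /var/lib/libvirt/images/vm1.qcow2 /backup/vm1-$(date +%F).qcow2

# 3. block commit: merge the overlay back into the base image
virsh blockcommit vm1 vda --active --pivot

# 4. remove the now-unused overlay (libvirt typically names it <image>.<snapname>)
rm /var/lib/libvirt/images/vm1.backupsnap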

On Sep 27, 2019 08:25, Leo David  wrote:
>
> Hello Everyone,
> I've been struggling for a while to find a proper solution to have a
> full backup scenario for a production environment.
> In the past, we used Proxmox, and the scheduled incremental NFS-based
> full VM backups are a thing that we really miss.
> As far as I know, at this point the only way to have a backup in oVirt / RHEV
> is by using the Gluster geo-replication feature.
> This is nice, but as far as I know it lacks some important features:
> - the ability to have incremental backups to restore VMs from
> - the ability to back up VMs placed on different storage domains (only one
> storage domain can be geo-replicated! Some VMs have disks on an SSD volume,
> some on HDD, some on both)
> - the need to set up an external 3-node Gluster cluster (although a workaround
> would be to have single-brick-based volumes for a single instance)
> I know we can create snaps, but they will die with the platform in a failure
> scenario, and neither can they be scheduled.
> We found the Bacchus project, which looked promising, although it had a pretty
> convoluted way to achieve backups (create snap, create VM from the snap, export
> VM to the export domain, delete VM, delete snap - all in a scheduled fashion).
> As a side note, Proxmox incrementally creates a tar archive of the VM disk
> content and places it on external network storage like NFS. This allowed
> us to back up/restore both Linux and Windows VMs very easily.
> Now, I know this has been discussed before, but I would like to know if
> there are at least any future plans to implement this feature in the next
> releases.
> Personally, I consider this a major and quite decent feature to have in
> the platform, without the need to pay for 3rd-party solutions that may or may
> not achieve the goal while adding extra pieces to the stack. Geo-replication
> is a good and nice feature, but in my opinion it is not what a "backup
> domain" would be.
> Have a nice day,
>
> Leo
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LGYR462MOGGQS5RA723HPYU73J5MQLDV/


[ovirt-users] Re: nfs

2019-09-10 Thread mailing-ovirt
So I think I fixed it. I created a sanlock user (UID 179) on the NFS server;
the share also needed the no_root_squash export option. A sketch of the
server-side change is below.

Simon
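
A minimal sketch of that server-side fix, assuming a Linux NFS server and a
hypothetical export path (oVirt's sanlock runs as UID/GID 179 on the hosts):

# on the NFS server: create a sanlock user/group matching the hosts' IDs
groupadd -g 179 sanlock
useradd -u 179 -g 179 -M -s /sbin/nologin sanlock

# /etc/exports: do not squash root from the hypervisors
/export/data  *(rw,sync,no_subtree_check,no_root_squash)

exportfs -ra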

-Original Message-
From: Vojtech Juranek  
Sent: September 9, 2019 2:55 PM
To: users@ovirt.org
Subject: [ovirt-users] Re: nfs

Hi,

> I'm trying to mount an NFS share.
>
> If I manually mount it from SSH, I can access it without issues.
>
> However, when I do it from the web config, it keeps failing:

The error means sanlock cannot write a resource on your device. Please check
that you have proper permissions on your NFS export (it has to be writeable by
uid:gid 36:36), and also check /var/log/sanlock.log for any details.

> 
> 
> Not sure how to solve that.
> 
> 
> 
> Thanks
> 
> 
> 
> Simon
> 
> 
> 
> 2019-09-09 09:08:47,601-0400 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer]
RPC
> call Host.ping2 succeeded in 0.00 seconds (__init__:312)
> 
> 2019-09-09 09:08:47,610-0400 INFO  (jsonrpc/3) [vdsm.api] START
> repoStats(domains=[u'4f4828c2-8f78-4566-a69d-a37cd73446d4'])
from=::1,42394,
> task_id=6c1eeef7-ff8c-4d0d-be85-ced03aae4bc2 (api:48)
> 
> 2019-09-09 09:08:47,610-0400 INFO  (jsonrpc/3) [vdsm.api] FINISH
repoStats
> return={u'4f4828c2-8f78-4566-a69d-a37cd73446d4': {'code': 0, 'actual':
True,
> 'version': 5, 'acquired': True, 'delay': '0.00101971', 'lastCheck':
'0.4',
> 'valid': True}} from=::1,42394,
> task_id=6c1eeef7-ff8c-4d0d-be85-ced03aae4bc2 (api:54)
> 
> 2019-09-09 09:08:47,611-0400 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer]
RPC
> call Host.getStorageRepoStats succeeded in 0.00 seconds (__init__:312)
> 
> 2019-09-09 09:08:47,839-0400 INFO  (jsonrpc/4) [jsonrpc.JsonRpcServer]
RPC
> call Host.ping2 succeeded in 0.00 seconds (__init__:312)
> 
> 2019-09-09 09:08:47,846-0400 INFO  (jsonrpc/2) [api.host] START
> getCapabilities() from=::1,42394 (api:48)
> 
> 2019-09-09 09:08:48,149-0400 INFO  (jsonrpc/2) [root] managedvolume not
> supported: Managed Volume Not Supported. Missing package os-brick.:
('Cannot
> import os_brick',) (caps:152)
> 
> 2019-09-09 09:08:49,212-0400 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer]
RPC
> call Host.ping2 succeeded in 0.01 seconds (__init__:312)
> 
> 2019-09-09 09:08:49,263-0400 INFO  (jsonrpc/2) [root]
> /usr/libexec/vdsm/hooks/after_get_caps/50_openstacknet: rc=0 err=
> (hooks:114)
> 
> 2019-09-09 09:08:49,497-0400 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer]
RPC
> call Host.ping2 succeeded in 0.00 seconds (__init__:312)
> 
> 2019-09-09 09:08:49,657-0400 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer]
RPC
> call Host.ping2 succeeded in 0.00 seconds (__init__:312)
> 
> 2019-09-09 09:08:49,665-0400 INFO  (jsonrpc/7) [vdsm.api] START
> repoStats(domains=[u'4f4828c2-8f78-4566-a69d-a37cd73446d4'])
from=::1,42394,
> task_id=14b596bf-bc1c-412b-b1fc-8d53f17d88f7 (api:48)
> 
> 2019-09-09 09:08:49,666-0400 INFO  (jsonrpc/7) [vdsm.api] FINISH
repoStats
> return={u'4f4828c2-8f78-4566-a69d-a37cd73446d4': {'code': 0, 'actual':
True,
> 'version': 5, 'acquired': True, 'delay': '0.00101971', 'lastCheck':
'2.4',
> 'valid': True}} from=::1,42394,
> task_id=14b596bf-bc1c-412b-b1fc-8d53f17d88f7 (api:54)
> 
> 2019-09-09 09:08:49,667-0400 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer]
RPC
> call Host.getStorageRepoStats succeeded in 0.00 seconds (__init__:312)
> 
> 2019-09-09 09:08:49,800-0400 INFO  (jsonrpc/2) [root]
> /usr/libexec/vdsm/hooks/after_get_caps/openstacknet_utils.py: rc=0 err=
> (hooks:114)
> 
> 2019-09-09 09:08:50,464-0400 INFO  (jsonrpc/2) [root]
> /usr/libexec/vdsm/hooks/after_get_caps/ovirt_provider_ovn_hook: rc=0
err=
> (hooks:114)
> 
> 2019-09-09 09:08:50,467-0400 INFO  (jsonrpc/2) [api.host] FINISH
> getCapabilities return={'status': {'message': 'Done', 'code': 0},
'info':
> {u'HBAInventory': {u'iSCSI': [{u'InitiatorName':
> u'iqn.1994-05.com.redhat:d8c85fc0ab85'}], u'FC': []}, u'packages2':
> {u'kernel': {u'release': u'957.27.2.el7.x86_64', u'version': u'3.10.0'},
> u'spice-server': {u'release': u'6.el7_6.1', u'version': u'0.14.0'},
> u'librbd1': {u'release': u'4.el7', u'version': u'10.2.5'}, u'vdsm':
> {u'release': u'1.el7', u'version': u'4.30.24'}, u'qemu-kvm':
{u'release':
> u'18.el7_6.7.1', u'version': u'2.12.0'}, u'openvswitch': {u'release':
> u'4.el7', u'version': u'2.11.0'}, u'libvirt': {u'release':
u'10.el7_6.12',
> u'version': u'4.5.0'}, u'ovirt-hosted-engine-ha': {u'release': u'1.el7',
> u'version': u'2.3.3'}, u'qemu-img': {u'release': u'18.el7_6.7.1',
> u'version': u'2.12.0'}, u'mom': {u'release': u'1.el7.centos',
u'version':
> u'0.5.12'}, u'glusterfs-cli': {u'release': u'1.el7', u'version':
u'6.5'}},
> u'numaNodeDistance': {u'1': [20, 10], u'0': [10, 20]}, u'cpuModel':
> u'Intel(R) Xeon(R) CPU E5-26

[ovirt-users] Re: nfs

2019-09-09 Thread Vojtech Juranek
Hi,

> I'm trying to mount an NFS share.
>
> If I manually mount it from SSH, I can access it without issues.
>
> However, when I do it from the web config, it keeps failing:

The error means sanlock cannot write a resource on your device. Please check 
that you have proper permissions on your NFS export (it has to be writeable by 
uid:gid 36:36), and also check /var/log/sanlock.log for any details.
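
A quick way to check and fix that from the server side, assuming a hypothetical
export path (36:36 is vdsm:kvm):

# on the NFS server: ownership must be the numeric IDs 36:36
ls -ldn /export/ovirt
chown -R 36:36 /export/ovirt

# on the host: sanlock logs the exact failure
tail /var/log/sanlock.log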

> 
> 
> Not sure how to solve that.
> 
> 
> 
> Thanks
> 
> 
> 
> Simon
> 
> 
> 
> 2019-09-09 09:08:47,601-0400 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC
> call Host.ping2 succeeded in 0.00 seconds (__init__:312)
> 
> 2019-09-09 09:08:47,610-0400 INFO  (jsonrpc/3) [vdsm.api] START
> repoStats(domains=[u'4f4828c2-8f78-4566-a69d-a37cd73446d4']) from=::1,42394,
> task_id=6c1eeef7-ff8c-4d0d-be85-ced03aae4bc2 (api:48)
> 
> 2019-09-09 09:08:47,610-0400 INFO  (jsonrpc/3) [vdsm.api] FINISH repoStats
> return={u'4f4828c2-8f78-4566-a69d-a37cd73446d4': {'code': 0, 'actual': True,
> 'version': 5, 'acquired': True, 'delay': '0.00101971', 'lastCheck': '0.4',
> 'valid': True}} from=::1,42394,
> task_id=6c1eeef7-ff8c-4d0d-be85-ced03aae4bc2 (api:54)
> 
> 2019-09-09 09:08:47,611-0400 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC
> call Host.getStorageRepoStats succeeded in 0.00 seconds (__init__:312)
> 
> 2019-09-09 09:08:47,839-0400 INFO  (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC
> call Host.ping2 succeeded in 0.00 seconds (__init__:312)
> 
> 2019-09-09 09:08:47,846-0400 INFO  (jsonrpc/2) [api.host] START
> getCapabilities() from=::1,42394 (api:48)
> 
> 2019-09-09 09:08:48,149-0400 INFO  (jsonrpc/2) [root] managedvolume not
> supported: Managed Volume Not Supported. Missing package os-brick.: ('Cannot
> import os_brick',) (caps:152)
> 
> 2019-09-09 09:08:49,212-0400 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC
> call Host.ping2 succeeded in 0.01 seconds (__init__:312)
> 
> 2019-09-09 09:08:49,263-0400 INFO  (jsonrpc/2) [root]
> /usr/libexec/vdsm/hooks/after_get_caps/50_openstacknet: rc=0 err=
> (hooks:114)
> 
> 2019-09-09 09:08:49,497-0400 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC
> call Host.ping2 succeeded in 0.00 seconds (__init__:312)
> 
> 2019-09-09 09:08:49,657-0400 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC
> call Host.ping2 succeeded in 0.00 seconds (__init__:312)
> 
> 2019-09-09 09:08:49,665-0400 INFO  (jsonrpc/7) [vdsm.api] START
> repoStats(domains=[u'4f4828c2-8f78-4566-a69d-a37cd73446d4']) from=::1,42394,
> task_id=14b596bf-bc1c-412b-b1fc-8d53f17d88f7 (api:48)
> 
> 2019-09-09 09:08:49,666-0400 INFO  (jsonrpc/7) [vdsm.api] FINISH repoStats
> return={u'4f4828c2-8f78-4566-a69d-a37cd73446d4': {'code': 0, 'actual': True,
> 'version': 5, 'acquired': True, 'delay': '0.00101971', 'lastCheck': '2.4',
> 'valid': True}} from=::1,42394,
> task_id=14b596bf-bc1c-412b-b1fc-8d53f17d88f7 (api:54)
> 
> 2019-09-09 09:08:49,667-0400 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC
> call Host.getStorageRepoStats succeeded in 0.00 seconds (__init__:312)
> 
> 2019-09-09 09:08:49,800-0400 INFO  (jsonrpc/2) [root]
> /usr/libexec/vdsm/hooks/after_get_caps/openstacknet_utils.py: rc=0 err=
> (hooks:114)
> 
> 2019-09-09 09:08:50,464-0400 INFO  (jsonrpc/2) [root]
> /usr/libexec/vdsm/hooks/after_get_caps/ovirt_provider_ovn_hook: rc=0 err=
> (hooks:114)
> 
> 2019-09-09 09:08:50,467-0400 INFO  (jsonrpc/2) [api.host] FINISH
> getCapabilities return={'status': {'message': 'Done', 'code': 0}, 'info':
> {u'HBAInventory': {u'iSCSI': [{u'InitiatorName':
> u'iqn.1994-05.com.redhat:d8c85fc0ab85'}], u'FC': []}, u'packages2':
> {u'kernel': {u'release': u'957.27.2.el7.x86_64', u'version': u'3.10.0'},
> u'spice-server': {u'release': u'6.el7_6.1', u'version': u'0.14.0'},
> u'librbd1': {u'release': u'4.el7', u'version': u'10.2.5'}, u'vdsm':
> {u'release': u'1.el7', u'version': u'4.30.24'}, u'qemu-kvm': {u'release':
> u'18.el7_6.7.1', u'version': u'2.12.0'}, u'openvswitch': {u'release':
> u'4.el7', u'version': u'2.11.0'}, u'libvirt': {u'release': u'10.el7_6.12',
> u'version': u'4.5.0'}, u'ovirt-hosted-engine-ha': {u'release': u'1.el7',
> u'version': u'2.3.3'}, u'qemu-img': {u'release': u'18.el7_6.7.1',
> u'version': u'2.12.0'}, u'mom': {u'release': u'1.el7.centos', u'version':
> u'0.5.12'}, u'glusterfs-cli': {u'release': u'1.el7', u'version': u'6.5'}},
> u'numaNodeDistance': {u'1': [20, 10], u'0': [10, 20]}, u'cpuModel':
> u'Intel(R) Xeon(R) CPU E5-2640 v2 @ 2.00GHz', u'nestedVirtualization':
> False, u'liveMerge': u'true', u'hooks': {u'before_vm_start':
> {u'50_hostedengine': {u'md5': u'95c810cdcfe4195302a59574a5148289'},
> u'50_vhostmd': {u'md5': u'9206bc390bcbf208b06a8e899581be2d'}},
> u'after_network_setup': {u'30_ethtool_options': {u'md5':
> u'ce1fbad7aa0389e3b06231219140bf0d'}}, u'after_vm_destroy':
> {u'delete_vhostuserclient_hook': {u'md5':
> u'c2f279cc9483a3f842f6c29df13994c1'}, u'50_vhostmd': {u'md5':
> u'bdf4802c0521cf1bae08f2b90a9559cf'}}, u'after_vm_start':
> {u'openstacknet_utils.py': {u'md5': u'1ed38ddf30f8a9c7574589e77e2c0b1f'},

[ovirt-users] Re: NFS storage and importation

2018-11-25 Thread Nir Soffer
On Fri, Nov 23, 2018 at 1:12 PM AG  wrote:

> Hello,
>
> Our cluster uses NFS storage (on each node as data, and on a NAS for export)
>

Looks like you invented your own hyperconverged solution.

The NFS server on the node will always be accessible on the node itself, but
if it is not accessible from other hosts, the entire DC may go down.

Also, you probably don't have any redundancy, so the failure of a single node
causes downtime and data loss.

Did you consider a hyperconverged Gluster-based setup?
https://ovirt.org/documentation/gluster-hyperconverged/Gluster_Hyperconverged_Guide/


> We currently have a huge problem on our cluster, and exporting a VM as OVA
> takes lots of time.
>

Is the huge problem the slow export to OVA, or is there another problem?


> I checked on the NFS storage, and it looks like there are files that might
> be OVA files in the images directory,


I don't know the OVA export code, but I'm sure it does not save into the
images directory. It probably creates a temporary volume for preparing and
storing the exported OVA file.

Arik, how do you suggest debugging the slow export to OVA?


> and OVF files in the master/vms
>

OVF should be stored in OVF_STORE disks. Maybe you are seeing files
created by an old version?


> Can someone tell me if, when I install a new engine,
> there's a way to get these VMs back into the new engine (like the
> import tools, for example)
>

What do you mean by "get these VMs back"?

If you mean importing all VMs and disks on storage into a new engine,
yes, it should work. This is the basis for oVirt DR support.


> PS: it should be said in the documentation to NEVER use a backup of an
> engine when it is on an NFS storage domain on a node. It looks like it's
> working, but all the data are out of sync with reality.
>

Do you mean a hosted engine stored on an NFS storage domain served by
one of the nodes?

Can you give more details on this problem?

Please also specify the oVirt version you use.

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4Z46JHBJAXGSTC6DXYYLCFNBUNBGYFJT/


[ovirt-users] Re: NFS Multipathing

2018-09-16 Thread Maor Lipchuk
On Fri, Sep 14, 2018 at 2:41 PM,  wrote:

> Hi,
> It should be possible, as oVirt is able to support NFS 4.1.
> I have a Synology NAS which is also able to support this version of the
> protocol, but I never found time to set this up and test it until now.
> Regards
>
>
> On 30-Aug-2018 12:16:32 +0200, xrs...@xrs444.net wrote:
>
> Hello all,
>
> I've been looking around but I've not found anything definitive on
> whether oVirt can do NFS multipathing, and if so how?
>
> Does anyone have any good how tos or configuration guides?
>
>
I know that you asked about NFS, but in case it helps, oVirt does support
multipathing for iSCSI storage domains:

https://ovirt.org/develop/release-management/features/storage/iscsi-multipath/
Hope it helps

Regards,
Maor


>
> Thanks,
>
> Thomas
>
>
> --
> FreeMail powered by mail.fr
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/TT2SDAQSZJQ4TWI6Q5AAYCJCZSTGH3HH/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HGJPFEYTGX444R34TPP6CVANGJ2KZAUH/


[ovirt-users] Re: NFS Multipathing

2018-09-16 Thread spfma
Hi,

It should be possible, as oVirt is able to support NFS 4.1. I have a Synology 
NAS which is also able to support this version of the protocol, but I never 
found time to set this up and test it until now.

Regards

On 30-Aug-2018 12:16:32 +0200, xrs...@xrs444.net wrote:

> Hello all,
>
> I've been looking around but I've not found anything definitive on whether
> oVirt can do NFS multipathing, and if so how?
>
> Does anyone have any good how-tos or configuration guides?
>
> Thanks,
>
> Thomas
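
If you get around to testing it, a minimal sanity check from a host that the
NAS actually negotiates 4.1 (server name and export path are hypothetical)
before selecting the V4.1 NFS version for the storage domain:

# manual test mount, forcing NFS 4.1
mount -t nfs -o vers=4.1 nas.example.com:/volume1/ovirt /mnt/test

# the mount options should show vers=4.1 if the server agreed
nfsstat -m

umount /mnt/test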

-
FreeMail powered by mail.fr
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TT2SDAQSZJQ4TWI6Q5AAYCJCZSTGH3HH/


[ovirt-users] Re: NFS Multipathing

2018-08-30 Thread Luca 'remix_tj' Lorenzetto
Hello,

As far as I know, there's no such NFS multipathing option in oVirt. To
be honest, I don't know at all if something like this exists.
Luca
On Thu, Aug 30, 2018 at 12:16 PM Thomas Letherby  wrote:
>
> Hello all,
>
> I've been  looking around but I've not found anything definitive on whether 
> Ovirt can do NFS Multipathing, and if so how?
>
> Does anyone have any good how tos or configuration guides?
>
> Thanks,
>
> Thomas
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OANZIPQIBCTZ5XAMJKAAIKBZ7XPTSLGF/



-- 
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare
calcoli che potrebbero essere affidati a chiunque se si usassero delle
macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)

"Internet è la più grande biblioteca del mondo.
Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/STZH5WGSIKLLWLIIITJYPY3SR2SAREW5/


[ovirt-users] Re: NFS Mount on EMC VNXe3200 / User - Group 36:36 / Permission issue

2018-07-31 Thread jeanbaptiste
Hello,

Mounting the NFS share temporarily in /mnt/chowned, for example, and running 
chown 36:36 on the directory solved my problem. The steps are sketched below.

Thanks for all !
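
For the archives, a minimal sketch of that workaround, with a hypothetical
server name and export path (36:36 is vdsm:kvm):

mkdir -p /mnt/chowned
mount -t nfs nas.example.com:/export/data /mnt/chowned
# this changes the owner of the export's root directory
chown 36:36 /mnt/chowned
umount /mnt/chowned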
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5X527QU2QWPBULVQJCXJXFFDTP6DRN2X/


[ovirt-users] Re: NFS Storage Advice?

2018-07-29 Thread Denys Oryshchenko
Wesley,

Try to disable compression on the ZFS fs.

On 07/28/2018 11:01 AM, Wesley Stewart wrote:
Windows reports about 500-600 MB/s over a 4GB file.

However I believe I found the issue.  My NFS backend is ZFS which is apparently 
notorious for horrible sync writes.

https://forums.freenas.org/index.php?threads/sync-writes-or-why-is-my-esxi-nfs-so-slow-and-why-is-iscsi-faster.12506/

I will try an iSCSI target and see how that goes.

On Sat, Jul 28, 2018, 1:49 PM Karli Sjöberg <ka...@inparadise.se> wrote:


On Jul 28, 2018 19:30, Wesley Stewart <wstewa...@gmail.com> wrote:
I have added the "async" option to the "additional mount options" section.

Transferring from my raided SSD mirrors is QUITE fast now.

However, this might be confusing, but if I read/write to the NFS share at the 
same time, performance plummets.

So if I grab something from the SMB share over the 10Gb network and write it to 
my VM being hosted on the NFS share: bad performance.

Or if I copy and paste an ISO from the VM to itself: horrible performance.

If I grab a file from a VM on my raided SSD storage and copy it to my VM on the 
NFS share, it has great performance (400+ MB/s).

Not quite sure what's going on! However, if I grab something from the 1Gbps 
network everything seems okay. Or if I change the NFS mount to the 1Gbps 
network, everything is okay.

Any ideas?

Let's start with expectations, shall we? 10Gb ~ 1 GB/s. So you read a file over 
SMB and write that to a VM hosted on NFS, so remote to remote. The absolute max 
for that transfer would be ~ 500 MB/s. What do you get?

/K


On Sat, Jul 28, 2018 at 3:02 AM Karli Sjöberg <ka...@inparadise.se> wrote:


On Jul 28, 2018 01:01, Wesley Stewart <wstewa...@gmail.com> wrote:
I currently have an NFS server with decent speeds. I get about 200MB/s write 
and 400+ MB/s read.

My single node oVirt host has a mirrored SSD store for my Windows 10 VM, and I 
have about 3-5 VMs running on the NFS data store.

However, VMs on the NFS datastore are SLOW. They can write to their own disk at 
around 10-50 MB/s. However, a VM on the mirrored SSD drives can get 150-250 
MB/s transfer speed from the same NFS storage (through NFS mounts).

Does anyone have any suggestions on what I could try to speed up the NFS 
storage for the VMs? My single node oVirt box has a 1ft Cat6 crossover cable 
plugged directly into my NFS server's 10GB port.

Thanks!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to 
users-le...@ovirt.org
Privacy Statement: 
https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OFTRL5QWQS5SVL23YFZ5KBFBN3F6THQ3/

It may be because of how oVirt mounts them, much more carefully, than Linux 
does by default. If I am not mistaken, oVirt mounts the NFS shares with 'sync', 
whereas a standard 'mount' with no options gets you 'async'. The difference in 
performance is huge, but for a reason; it's unsafe in case of a power failure. 
That may explain things.

/K





___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to 
users-le...@ovirt.org
Privacy Statement: 
https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 

[ovirt-users] Re: NFS Storage Advice?

2018-07-28 Thread Karli Sjöberg
On Jul 28, 2018 22:17, Denys Oryshchenko  wrote:

> Wesley,
>
> Try to disable compression on the zfs fs

Nooo! Quite the contrary: compression actually improves writing, as you won't
have to write as much as without it!

> On 07/28/2018 11:01 AM, Wesley Stewart wrote:
>
> Windows reports about 500-600 MB/s over a 4GB file.
>
> However I believe I found the issue. My NFS backend is ZFS, which is
> apparently notorious for horrible sync writes.
>
> https://forums.freenas.org/index.php?threads/sync-writes-or-why-is-my-esxi-nfs-so-slow-and-why-is-iscsi-faster.12506/
>
> I will try an iSCSI target and see how that goes.

You don't say! Well, ZFS just happens to be a love of mine; you're in luck! :)

In that case, and since you're clearly disregarding my previous warning about
how fsck'ed up you'll be when shit hits the fan with asynchronous writing, I
can tell you that the article is referring to the old 'istgt' daemon that never
even listened to sync calls, something you'd normally want if you care about
consistency. Nowadays, that has been replaced by 'ctld', which is far superior
and does listen to the calls, so no, you won't see any difference there. What
you can do to force ZFS not to care is to:

# zfs set sync=disabled

After that it won't matter what the services want any more...

Now that I've spent my time telling you what you shouldn't do (aka it'll get
you in trouble sooner or later), I feel obligated to tell you what you should
do instead :)

The right thing to do is to add an SSD to your pool as a SLOG (Separate LOG)
that'll take care of all those pesky sync calls that normally slow you down. A
pro tip is to just use a small partition of a bigger disk; it helps the drive
relocate sectors that are damaged over time. Say, a 15 GB partition of a 256 GB
SSD will last a lifetime, and even if it doesn't, it's OK if it dies: it'll
just be as slow as without it, and you'll surely notice and know to replace
it :) As a big bonus, it'll also help prevent fragmentation by moving the ZIL
(ZFS Intent Log) out of the main pool onto the SLOG.

Good luck with it!

/K
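
A minimal sketch of the SLOG setup described above, assuming a pool named tank
and a spare SSD partition at /dev/sdb1 (both hypothetical):

# add a small SSD partition as a separate log (SLOG) device;
# sync writes then land on the SSD instead of the main vdevs
zpool add tank log /dev/sdb1

# with two SSDs, the SLOG can be mirrored instead:
# zpool add tank log mirror /dev/sdb1 /dev/sdc1

# verify the log device appears in the pool layout
zpool status tank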



> On Sat, Jul 28, 2018, 1:49 PM Karli Sjöberg  wrote:
>
> On Jul 28, 2018 19:30, Wesley Stewart  wrote:
>
> I have added the "async" option to the "additional mount options" section.
>
> Transferring from my raided SSD mirrors is QUITE fast now.
>
> However, this might be confusing, but if I read/write to the NFS share at the
> same time, performance plummets.
>
> So if I grab something from the SMB share over the 10Gb network and write it
> to my VM being hosted on the NFS share: bad performance.
>
> Or if I copy and paste an ISO from the VM to itself: horrible performance.
>
> If I grab a file from a VM on my raided SSD storage and copy it to my VM on
> the NFS share, it has great performance (400+ MB/s).
>
> Not quite sure what's going on! However, if I grab something from the 1Gbps
> network everything seems okay. Or if I change the NFS mount to the 1Gbps
> network, everything is okay.
>
> Any ideas?
>
> Let's start with expectations, shall we? 10Gb ~ 1 GB/s. So you read a file
> over SMB and write that to a VM hosted on NFS, so remote to remote. The
> absolute max for that transfer would be ~ 500 MB/s. What do you get?
>
> /K
>
> On Sat, Jul 28, 2018 at 3:02 AM Karli Sjöberg  wrote:
>
> On Jul 28, 2018 01:01, Wesley Stewart  wrote:
>
> I currently have an NFS server with decent speeds. I get about 200MB/s write
> and 400+ MB/s read.
>
> My single node oVirt host has a mirrored SSD store for my Windows 10 VM, and
> I have about 3-5 VMs running on the NFS data store.
>
> However, VMs on the NFS datastore are SLOW. They can write to their own disk
> at around 10-50 MB/s. However, a VM on the mirrored SSD drives can get
> 150-250 MB/s transfer speed from the same NFS storage (through NFS mounts).
>
> Does anyone have any suggestions on what I could try to speed up the NFS
> storage for the VMs? My single node oVirt box has a 1ft Cat6 crossover cable
> plugged directly into my NFS server's 10GB port.
>
> Thanks!
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/OFTRL5QWQS5SVL23YFZ5KBFBN3F6THQ3/
>
> It may be because of how oVirt mounts them, much more carefully than Linux
> does by default. If I am not mistaken, oVirt mounts the NFS shares with
> 'sync', whereas a standard 'mount' with no options gets you 'async'. The
> difference in performance is huge, but for a reason; it's unsafe in case of a
> power failure. That may explain things.
>
> /K
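
To make the quoted sync/async point concrete, a minimal illustration with a
hypothetical server name and export path; oVirt applies its own safety-oriented
mount options, while a plain mount defaults to async:

# plain client mount: async by default - fast, but unsafe on power loss
mount -t nfs nas.example.com:/export/data /mnt/test

# forcing synchronous I/O, closer to what oVirt's careful mounting implies
mount -t nfs -o sync nas.example.com:/export/data /mnt/test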

















 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: 

[ovirt-users] Re: NFS Storage Advice?

2018-07-28 Thread Wesley Stewart
Windows reports about 500-600 MB/s over a 4GB file.

However I believe I found the issue.  My NFS backend is ZFS which is
apparently notorious for horrible sync writes.

https://forums.freenas.org/index.php?threads/sync-writes-or-why-is-my-esxi-nfs-so-slow-and-why-is-iscsi-faster.12506/

I will try an iSCSI target and see how that goes.

On Sat, Jul 28, 2018, 1:49 PM Karli Sjöberg  wrote:

>
>
> On Jul 28, 2018 19:30, Wesley Stewart  wrote:
>
> I have added the "async" option to the "additional mount options" section.
>
> Transferring from my raided SSD mirrors is QUITE fast now.
>
> However, this might be confusing, but if I read/write to the NFS share at
> the same time, performance plummets.
>
> So if I grab something from the SMB share over 10gb network and write this
> to my VM being hosted on the NFS share.  Bad performance.
>
> Or If I copy and paste ISO from the VM to itself, horrible performance.
>
> If I grab a file from a VM on my raided SSD storage and copy it to my VM
> on the NFS share, it has great performance (400+ MB/s)
>
> Not quite sure what's going on!  However if I grab something from the
> 1gbps network everything seems okay.  Or if I change the NFS Mount to the 1
> gbps everything is okay.
>
> Any ideas?
>
>
> Let's start with expectations, shall we? 10Gb ~ 1 GB/s. So you read a file
> over SMB and write that to a VM hosted on NFS, so remote to remote. The
> absolute max for that transfer would be ~ 500 MB/s. What do you get?
>
> /K
>
>
> On Sat, Jul 28, 2018 at 3:02 AM Karli Sjöberg  wrote:
>
>
>
> On Jul 28, 2018 01:01, Wesley Stewart  wrote:
>
> I currently have a NFS server with decent speeds.  I get about 200MB/s
> write and 400+ MB/s read.
>
> My single node oVirt host has a mirrored SSD store for my Windows 10 VM,
> and I have about 3-5 VMs running on the NFS data store.
>
> However, VM/s on the NFS datastore are SLOW.  They can write to their own
> disk around 10-50 MB/s.  However a VM on the mirrored SSD drives can get
> 150-250 MB/s transfer speed from the same NFS storage (Through NFS mounts).
>
> Does anyone have any suggestions on what I could try to speed up the NFS
> storage for the VMs?  My single node ovirt box has a 1ft Cat6 crossover
> cable plugged directly into my NFS servers 10GB port.
>
> Thanks!
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OFTRL5QWQS5SVL23YFZ5KBFBN3F6THQ3/
>
>
> It may be because of how oVirt mounts them, much more carefully, than
> Linux does by default. If I am not mistaken, oVirt mounts the NFS shares
> with 'sync', whereas a standard 'mount' with no options gets you 'async'.
> The difference in performance is huge, but for a reason; it's unsafe in
> case of a power failure. That may explain things.
>
> /K
>
>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VEZTV4SVQMWOTTYFUU3EGPE6EUD5G4F4/


[ovirt-users] Re: NFS Storage Advice?

2018-07-28 Thread Karli Sjöberg
On Jul 28, 2018 19:30, Wesley Stewart  wrote:

> I have added the "async" option to the "additional mount options" section.
>
> Transferring from my raided SSD mirrors is QUITE fast now.
>
> However, this might be confusing, but if I read/write to the NFS share at the
> same time, performance plummets.
>
> So if I grab something from the SMB share over the 10Gb network and write it
> to my VM being hosted on the NFS share: bad performance.
>
> Or if I copy and paste an ISO from the VM to itself: horrible performance.
>
> If I grab a file from a VM on my raided SSD storage and copy it to my VM on
> the NFS share, it has great performance (400+ MB/s).
>
> Not quite sure what's going on! However, if I grab something from the 1Gbps
> network everything seems okay. Or if I change the NFS mount to the 1Gbps
> network, everything is okay.
>
> Any ideas?

Let's start with expectations, shall we? 10Gb ~ 1 GB/s. So you read a file over
SMB and write that to a VM hosted on NFS, so remote to remote. The absolute max
for that transfer would be ~ 500 MB/s. What do you get?

/K

> On Sat, Jul 28, 2018 at 3:02 AM Karli Sjöberg  wrote:
>
> On Jul 28, 2018 01:01, Wesley Stewart  wrote:
>
> I currently have an NFS server with decent speeds. I get about 200MB/s write
> and 400+ MB/s read.
>
> My single node oVirt host has a mirrored SSD store for my Windows 10 VM, and
> I have about 3-5 VMs running on the NFS data store.
>
> However, VMs on the NFS datastore are SLOW. They can write to their own disk
> at around 10-50 MB/s. However, a VM on the mirrored SSD drives can get
> 150-250 MB/s transfer speed from the same NFS storage (through NFS mounts).
>
> Does anyone have any suggestions on what I could try to speed up the NFS
> storage for the VMs? My single node oVirt box has a 1ft Cat6 crossover cable
> plugged directly into my NFS server's 10GB port.
>
> Thanks!
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/OFTRL5QWQS5SVL23YFZ5KBFBN3F6THQ3/
>
> It may be because of how oVirt mounts them, much more carefully than Linux
> does by default. If I am not mistaken, oVirt mounts the NFS shares with
> 'sync', whereas a standard 'mount' with no options gets you 'async'. The
> difference in performance is huge, but for a reason; it's unsafe in case of a
> power failure. That may explain things.
>
> /K
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IKQ44BELVQQHFKYBUT4DO52QW6IFGZLP/


[ovirt-users] Re: NFS Storage Advice?

2018-07-28 Thread Wesley Stewart
I have added the "async" option to the "additional mount options" section.

Transferring from my raided SSD mirrors is QUITE fast now.

However, this might be confusing, but if I read/write to the NFS share at
the same time, performance plummets.

So if I grab something from the SMB share over 10gb network and write this
to my VM being hosted on the NFS share.  Bad performance.

Or If I copy and paste ISO from the VM to itself, horrible performance.

If I grab a file from a VM on my raided SSD storage and copy it to my VM on
the NFS share, it has great performance (400+ MB/s).

Not quite sure what's going on!  However if I grab something from the 1gbps
network everything seems okay.  Or if I change the NFS Mount to the 1 gbps
everything is okay.

Any ideas?

On Sat, Jul 28, 2018 at 3:02 AM Karli Sjöberg  wrote:

>
>
> On Jul 28, 2018 01:01, Wesley Stewart  wrote:
>
> I currently have a NFS server with decent speeds.  I get about 200MB/s
> write and 400+ MB/s read.
>
> My single node oVirt host has a mirrored SSD store for my Windows 10 VM,
> and I have about 3-5 VMs running on the NFS data store.
>
> However, VM/s on the NFS datastore are SLOW.  They can write to their own
> disk around 10-50 MB/s.  However a VM on the mirrored SSD drives can get
> 150-250 MB/s transfer speed from the same NFS storage (Through NFS mounts).
>
> Does anyone have any suggestions on what I could try to speed up the NFS
> storage for the VMs?  My single node ovirt box has a 1ft Cat6 crossover
> cable plugged directly into my NFS servers 10GB port.
>
> Thanks!
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OFTRL5QWQS5SVL23YFZ5KBFBN3F6THQ3/
>
>
> It may be because of how oVirt mounts them, much more carefully, than
> Linux does by default. If I am not mistaken, oVirt mounts the NFS shares
> with 'sync', whereas a standard 'mount' with no options gets you 'async'.
> The difference in performance is huge, but for a reason; it's unsafe in
> case of a power failure. That may explain things.
>
> /K
>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AMPMK4ITWTHUPL2N4SZCVR6A4672SCWB/


[ovirt-users] Re: NFS Storage Advice?

2018-07-28 Thread Karli Sjöberg
On Jul 28, 2018 01:01, Wesley Stewart  wrote:

> I currently have an NFS server with decent speeds. I get about 200MB/s write
> and 400+ MB/s read.
>
> My single node oVirt host has a mirrored SSD store for my Windows 10 VM, and
> I have about 3-5 VMs running on the NFS data store.
>
> However, VMs on the NFS datastore are SLOW. They can write to their own disk
> at around 10-50 MB/s. However, a VM on the mirrored SSD drives can get
> 150-250 MB/s transfer speed from the same NFS storage (through NFS mounts).
>
> Does anyone have any suggestions on what I could try to speed up the NFS
> storage for the VMs? My single node oVirt box has a 1ft Cat6 crossover cable
> plugged directly into my NFS server's 10GB port.
>
> Thanks!

It may be because of how oVirt mounts them, much more carefully than Linux does
by default. If I am not mistaken, oVirt mounts the NFS shares with 'sync',
whereas a standard 'mount' with no options gets you 'async'. The difference in
performance is huge, but for a reason; it's unsafe in case of a power failure.
That may explain things.

/K
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7DXTU4NGNXSOTKUBPGNAYC7TCYNBP2CX/


[ovirt-users] Re: NFS Mount on EMC VNXe3200 / User - Group 36:36 / Permission issue

2018-07-12 Thread jeanbaptiste.coupiac
Hello,

 

Luca gave me an answer in direct for the NFS mount :

_

you can simply mount the empty volume manually on a path, say
/tmp/nfsvolume, and then change the owner with chown 36:36 /tmp/nfsvolume.

Then you can mount without issues.

_
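Spelled out, that workaround looks something like the following sketch. The
server name and export path are assumptions for illustration; substitute
your own:

  mkdir -p /tmp/nfsvolume
  # Mount the still-empty export manually...
  mount -t nfs vnxe.example.com:/export/data /tmp/nfsvolume
  # ...give vdsm:kvm (36:36) ownership of the export root...
  chown 36:36 /tmp/nfsvolume
  # ...then unmount; oVirt should now be able to attach it.
  umount /tmp/nfsvolume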

 

It worked perfectly; the NFS share is OK now.

 

Thanks for the help

 

From: Karli Sjöberg [mailto:ka...@inparadise.se]
Sent: Wednesday, July 11, 2018 17:50
To: jeanbaptiste coupiac
Cc: users@ovirt.org
Subject: Re: [ovirt-users] NFS Mount on EMC VNXe3200 / User - Group 36:36 /
Permission issue

 

 

 

On Jul 11, 2018 17:36, jeanbaptiste coupiac <jeanbaptiste.coup...@nfrance.com> wrote:

Hello,

 

I am trying to mount an NFS data share from an EMC VNXe3200 export.
Unfortunately, the NFS share cannot be mounted; there is a permission error.

(Indeed, after the same issue on another NFS ISO repo, I changed the exported
directory's user/group to 36:36, and the oVirt NFS mount worked fine.)

 

So, I think the issue with my VNXe3200 concerns the UID/GID owner on the
VNXe3200 side. After a WebEx with EMC support, changing the owner/permissions
on the VNXe side seems to be neither possible nor supported.

 

Are you able to mount the export manually on any server and 'chown' it to
36:36 there?

 

/K

 

Can I mount the NFS share with some options to avoid the vdsm:kvm mounting
issue? Can I force the mount as root?

 

Regards,

 

Jean-Baptiste COUPIAC

 



 

 



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NKQ3MBUHWMCRH23NOXQNWOD7VIYYSUOT/

 

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CGNWD4L4LBYDZCQO45M5TL6TEMV2W7G4/


[ovirt-users] Re: NFS Mount on EMC VNXe3200 / User - Group 36:36 / Permission issue

2018-07-11 Thread Oliver Riesener
Hi,

Did you share it correctly to **all** your oVirt hosts, by FQDN? Rock-solid,
stable DNS?
Could you mount it on these hosts manually as root? The export needs
no_root_squash for that.
On the host side: mount it, cd into it, chown 36:36 . , then umount it.
Try with oVirt again.

- Or, better, use iSCSI volumes.
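For comparison, on a plain Linux NFS server the two points above would map to
an /etc/exports entry like the sketch below; the path and host names are
assumptions, and the VNXe configures its exports through its own interface:

  # Allow both hypervisors full access, and let root on them keep its
  # privileges (no_root_squash) so a one-time 'chown 36:36' is possible.
  /export/data  host1.example.com(rw,sync,no_root_squash) host2.example.com(rw,sync,no_root_squash)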
  
> On 11.07.2018 at 15:04, jeanbapti...@nfrance.com wrote:
> 
> Hello,
> 
> I am trying to mount an NFS data share from an EMC VNXe3200 export.
> Unfortunately, the NFS share cannot be mounted; there is a permission error.
> (Indeed, after the same issue on another NFS ISO repo, I changed the exported
> directory's user/group to 36:36, and the oVirt NFS mount worked fine.)
> 
> So, I think the issue with my VNXe3200 concerns the UID/GID owner on the
> VNXe3200 side. After a WebEx with EMC support, changing the owner/permissions
> on the VNXe side seems to be neither possible nor supported.
> Can I mount the NFS share with some options to avoid the vdsm:kvm mounting
> issue? Can I force the mount as root?
> 
> Regards,
> 
> Jean-Baptiste,
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/L6PYLQN6A6XNIEYNWZ3Y5RX2SJWOKL2R/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/45U3FFJOHQ4DID5DVWDVD2FYHHNF2K6C/


[ovirt-users] Re: NFS Mount on EMC VNXe3200 / User - Group 36:36 / Permission issue

2018-07-11 Thread Karli Sjöberg
On Jul 11, 2018 17:36, jeanbaptiste coupiac wrote:

> Hello,
>
> I am trying to mount an NFS data share from an EMC VNXe3200 export.
> Unfortunately, the NFS share cannot be mounted; there is a permission
> error. (Indeed, after the same issue on another NFS ISO repo, I changed
> the exported directory's user/group to 36:36, and the oVirt NFS mount
> worked fine.)
>
> So, I think the issue with my VNXe3200 concerns the UID/GID owner on the
> VNXe3200 side. After a WebEx with EMC support, changing the
> owner/permissions on the VNXe side seems to be neither possible nor
> supported.

Are you able to mount the export manually on any server and 'chown' it to
36:36 there?

/K

> Can I mount the NFS share with some options to avoid the vdsm:kvm
> mounting issue? Can I force the mount as root?
>
> Regards,
>
> Jean-Baptiste COUPIAC
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XFUF24B4KHEAUIJBEJTSOF277SK5BL2J/