Hey Shani,

Correct, that is exactly what I did.
I did run “umount /mnt/rhevstore”; I only mentioned the NFS storage path in order to be precise in the description.

The thing is that I am able to mount it, and so is oVirt during host provisioning.
I was even able to create VMs on it from the same hypervisor.
As far as I remember, I was also able to write to the storage and create a random file.

I will give it a second attempt.
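In case it helps, this is roughly what the second attempt will look like (a sketch only; same export and mount point as before, and write_test is just an illustrative file name):

mkdir -p /mnt/rhevstore
mount -o sec=sys -t nfs 10.214.13.64:/ovirt_production /mnt/rhevstore
# check that the vdsm user (uid 36) can actually write and read here
sudo -u vdsm touch /mnt/rhevstore/write_test
sudo -u vdsm rm -f /mnt/rhevstore/write_test
umount /mnt/rhevstore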


— — —
Met vriendelijke groet / Kind regards,

Marko Vrgotic




From: Shani Leviim <slev...@redhat.com>
Date: Monday, 12 August 2019 at 09:32
To: "Vrgotic, Marko" <m.vrgo...@activevideo.com>
Cc: "users@ovirt.org" <users@ovirt.org>
Subject: Re: [ovirt-users] Re: oVirt 4.3.5 potential issue with NFS storage

Basically, I meant verifying the access via ssh, but there is something I want to check following your detailed reply:

According to [1], in order to set up a NetApp NFS server, the required steps should look like this:

# mount NetApp_NFS:/path/to/export /mnt
# chown -R 36:36 /mnt
# chmod -R 755 /mnt
# umount /mnt
This is quite similar to the steps you mentioned, except for the last step of unmounting:
Unmount the 10.214.13.64:/ovirt_production


I think that you had to unmount /mnt/rhevstore instead.
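In other words, something like this (a sketch only, reusing the export and mount point from your mail):

mkdir -p /mnt/rhevstore
mount -o sec=sys -t nfs 10.214.13.64:/ovirt_production /mnt/rhevstore
chown -R 36:36 /mnt/rhevstore   # 36:36 = vdsm:kvm
chmod -R 755 /mnt/rhevstore
umount /mnt/rhevstore           # unmount the local mount point, not the export path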


Can you please verify?


[1] https://access.redhat.com/solutions/660143

Regards,
Shani Leviim


On Sun, Aug 11, 2019 at 10:57 PM Vrgotic, Marko <m.vrgo...@activevideo.com> wrote:
Hi Shani,

Thank you for your reply, but how do I do that?
The reason I am asking is the following:
Hosts 2, 3 and 4 do not have that issue; hosts 1 and 5 do.
What I learned previously is that when using NetApp-based NFS, which we are, it is required to execute the following steps before provisioning SHE and/or adding a host to the pool:

Create a directory on the host:
- mkdir /mnt/rhevstore
Mount the NetApp volume on that directory:
- mount -o sec=sys -t nfs 10.214.13.64:/ovirt_production /mnt/rhevstore
Set ownership to vdsm:kvm (36:36):
- chown -R vdsm:kvm /mnt/rhevstore/*
Unmount the 10.214.13.64:/ovirt_production

I do not expect that the above ownership actions need to be done on each host before starting the deployment; otherwise it would be practically impossible to expand the host pool.

All 5 hosts are provisioned in the same way: I am using Foreman to provision these servers, so they are built from the same kickstart hostgroup template.


I even installed the ovirt-hosted-engine-setup package to make sure all required packages, users and groups are in place before adding a host to oVirt via the UI or Ansible.


Is it possible that, if the mentioned volume is already in use (or heavy use) by hosts that were already added to the oVirt pool, the ownership actions executed on a host about to be added will fail to set ownership on all required files on the volume?

To repeat the question above: how do I make sure the host can read the metadata file of the storage volume?

Kindly awaiting your reply.

All best,
Marko Vrgotic
Sent from my iPhone

On 11 Aug 2019, at 01:19, Shani Leviim <slev...@redhat.com> wrote:
Hi Marko,
It seems that there's a connectivity problem with host 10.210.13.64.
Can you please make sure the metadata under /rhev/data-center/mnt/10.210.13.64:_ovirt__production/6effda5e-1a0d-4312-bf93-d97fa9eb5aee/dom_md/metadata is accessible?
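For example, something like this on the host (a sketch; run as root, with the path copied from above):

M='/rhev/data-center/mnt/10.210.13.64:_ovirt__production/6effda5e-1a0d-4312-bf93-d97fa9eb5aee/dom_md/metadata'
ls -l "$M"                             # ownership should be vdsm:kvm (36:36)
sudo -u vdsm cat "$M" > /dev/null && echo readable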

Regards,
Shani Leviim


On Sat, Aug 10, 2019 at 2:57 AM Vrgotic, Marko <m.vrgo...@activevideo.com> wrote:
Log files from the oVirt engine and from vdsm on ovirt-sj-05 are attached.

It's related to the host named ovirt-sj-05.ictv.com.

Kindly awaiting your reply.


— — —
Met vriendelijke groet / Kind regards,

Marko Vrgotic



From: "Vrgotic, Marko" 
<m.vrgo...@activevideo.com<mailto:m.vrgo...@activevideo.com>>
Date: Thursday, 8 August 2019 at 17:02
To: Shani Leviim <slev...@redhat.com<mailto:slev...@redhat.com>>
Cc: "users@ovirt.org<mailto:users@ovirt.org>" 
<users@ovirt.org<mailto:users@ovirt.org>>
Subject: Re: [ovirt-users] Re: oVirt 4.3.5 potential issue with NFS storage

Hey Shani,

Thank you for the reply.
Sure, I will attach the full logs asap.
What do you mean by “flow you are doing”?

Kindly awaiting your reply.

Marko Vrgotic

From: Shani Leviim <slev...@redhat.com>
Date: Thursday, 8 August 2019 at 00:01
To: "Vrgotic, Marko" <m.vrgo...@activevideo.com>
Cc: "users@ovirt.org" <users@ovirt.org>
Subject: Re: [ovirt-users] Re: oVirt 4.3.5 potential issue with NFS storage

Hi,
Can you please clarify the flow you're doing?
Also, can you please attach full vdsm and engine logs?

Regards,
Shani Leviim


On Thu, Aug 8, 2019 at 6:25 AM Vrgotic, Marko <m.vrgo...@activevideo.com> wrote:
Log lines from VDSM:

“[root@ovirt-sj-05 ~]# tail -f /var/log/vdsm/vdsm.log | grep WARN
2019-08-07 09:40:03,556-0700 WARN  (check/loop) [storage.check] Checker u'/rhev/data-center/mnt/10.210.13.64:_ovirt__production/bda97276-a399-448f-9113-017972f6b55a/dom_md/metadata' is blocked for 20.00 seconds (check:282)
2019-08-07 09:40:47,132-0700 WARN  (monitor/bda9727) [storage.Monitor] Host id for domain bda97276-a399-448f-9113-017972f6b55a was released (id: 5) (monitor:445)
2019-08-07 09:44:53,564-0700 WARN  (check/loop) [storage.check] Checker u'/rhev/data-center/mnt/10.210.13.64:_ovirt__production/bda97276-a399-448f-9113-017972f6b55a/dom_md/metadata' is blocked for 20.00 seconds (check:282)
2019-08-07 09:46:38,604-0700 WARN  (monitor/bda9727) [storage.Monitor] Host id for domain bda97276-a399-448f-9113-017972f6b55a was released (id: 5) (monitor:445)”
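For reference, the 20-second blockage the checker reports can be probed manually with a timed direct read of the same metadata file (a sketch; vdsm's checker performs a similar small direct-I/O read):

time dd if='/rhev/data-center/mnt/10.210.13.64:_ovirt__production/bda97276-a399-448f-9113-017972f6b55a/dom_md/metadata' of=/dev/null bs=4096 count=1 iflag=direct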



From: "Vrgotic, Marko" 
<m.vrgo...@activevideo.com<mailto:m.vrgo...@activevideo.com>>
Date: Wednesday, 7 August 2019 at 09:09
To: "users@ovirt.org<mailto:users@ovirt.org>" 
<users@ovirt.org<mailto:users@ovirt.org>>
Subject: oVirt 4.3.5 potential issue with NFS storage

Dear oVirt,

This is my third oVirt platform in the company, but it is the first time I am seeing the following logs:

“2019-08-07 16:00:16,099Z INFO  [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-51) [1b85e637] Lock freed to object 'EngineLock:{exclusiveLocks='[2350ee82-94ed-4f90-9366-451e0104d1d6=PROVIDER]', sharedLocks=''}'
2019-08-07 16:00:25,618Z WARN  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (EE-ManagedThreadFactory-engine-Thread-37723) [] domain 'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' in problem 'PROBLEMATIC'. vds: 'ovirt-sj-05.ictv.com'
2019-08-07 16:00:40,630Z INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (EE-ManagedThreadFactory-engine-Thread-37735) [] Domain 'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' recovered from problem. vds: 'ovirt-sj-05.ictv.com'
2019-08-07 16:00:40,652Z INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (EE-ManagedThreadFactory-engine-Thread-37737) [] Domain 'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' recovered from problem. vds: 'ovirt-sj-01.ictv.com'
2019-08-07 16:00:40,652Z INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (EE-ManagedThreadFactory-engine-Thread-37737) [] Domain 'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' has recovered from problem. No active host in the DC is reporting it as problematic, so clearing the domain recovery timer.”

Can you help me understand why this is being reported?

This setup is:

5 hosts, 3 in HA
Self-hosted engine
Version 4.3.5
NFS-based NetApp storage, NFS version 4.1
“10.210.13.64:/ovirt_hosted_engine on /rhev/data-center/mnt/10.210.13.64:_ovirt__hosted__engine type nfs4 (rw,relatime,vers=4.1,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=10.210.11.14,local_lock=none,addr=10.210.13.64)

10.210.13.64:/ovirt_production on /rhev/data-center/mnt/10.210.13.64:_ovirt__production type nfs4 (rw,relatime,vers=4.1,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=10.210.11.14,local_lock=none,addr=10.210.13.64)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=9878396k,mode=700)”

The first mount is the SHE-dedicated storage.
The second mount, “ovirt_production”, is for the other VM guests.
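For completeness, a quick way to list just the NFS4 mounts and their options (a sketch; assumes findmnt from util-linux is present on the hosts):

findmnt -t nfs4 -o TARGET,SOURCE,OPTIONS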

Kindly awaiting your reply.

Marko Vrgotic