[ovirt-users] Re: Can't connect vdsm storage: Command StorageDomain.getInfo with args failed: (code=350, message=Error in storage domain action

2020-02-01 Thread asm
First of all, I checked the permissions on my storage, and they were correct:
group kvm (36) and user vdsm (36), i.e. chown 36:36 and chmod 0755.
After I set 0777 permissions and updated, everything works fine!
My NFS storage is on a Synology NAS; now I am trying to find the proper way to
set anonuid=36,anongid=36 on the export, rather than by editing the exports
file directly. Once I find it, I will try it together with 0755 permissions.
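
Once the export is changed, a quick way to confirm the mapping works (a sketch
only; the mount path is the one shown by 'mount' in the original message, and
the test file name is made up) is to create a file as root on the mounted
domain and check its ownership:

    touch /rhev/data-center/mnt/storage:_volume3_ovirt-hosted/anon-test
    ls -ln /rhev/data-center/mnt/storage:_volume3_ovirt-hosted/anon-test
    # with all_squash + anonuid=36,anongid=36 this should show uid 36, gid 36
    rm /rhev/data-center/mnt/storage:_volume3_ovirt-hosted/anon-test
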
Thanks to ALL!
BR,
Alex


[ovirt-users] Re: Can't connect vdsm storage: Command StorageDomain.getInfo with args failed: (code=350, message=Error in storage domain action

2020-02-01 Thread Robert Webb
One thing not in any of the documentation I have found is the extra options
required for the export. I followed all the docs and it was still failing.

I had to add "sync,no_subtree_check,all_squash,anonuid=36,anongid=36" to my
export in order to get it to work.
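
For example, a complete export entry with those options might look like this
(a sketch only; the path and allowed clients are placeholders, not values from
this thread):

    /export/ovirt    *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)

followed by 'exportfs -ra' on the NFS server to re-export the updated list.
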



From: Nir Soffer 
Sent: Saturday, February 1, 2020 4:51 PM
To: a...@pioner.kz
Cc: users 
Subject: [ovirt-users] Re: Can't connect vdsm storage: Command 
StorageDomain.getInfo with args failed: (code=350, message=Error in storage 
domain action

On Sat, Feb 1, 2020 at 5:39 PM a...@pioner.kz wrote:
OK, I will try to set 777 permissions on the NFS storage.

This is an invalid configuration. See the RHV docs for the proper configuration:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/administration_guide/sect-preparing_and_adding_nfs_storage#Preparing_NFS_Storage_storage_admin

But why does this issue start when updating from 4.30.32-1 to 4.30.33-1, without
any other changes?

I guess you had wrong permissions and ownership on the storage before, but vdsm
was not detecting the issue because older versions were missing these
validations. The current version validates that creating and deleting files and
using direct I/O work with the storage when creating and activating a storage
domain.

Nir



[ovirt-users] Re: Can't connect vdsm storage: Command StorageDomain.getInfo with args failed: (code=350, message=Error in storage domain action

2020-02-01 Thread Nir Soffer
On Sat, Feb 1, 2020 at 5:39 PM  wrote:

> OK, I will try to set 777 permissions on the NFS storage.


This is an invalid configuration. See the RHV docs for the proper configuration:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/administration_guide/sect-preparing_and_adding_nfs_storage#Preparing_NFS_Storage_storage_admin


> But why does this issue start when updating from 4.30.32-1 to 4.30.33-1,
> without any other changes?
>

I guess you had wrong permissions and ownership on the storage before, but vdsm
was not detecting the issue because older versions were missing these
validations. The current version validates that creating and deleting files and
using direct I/O work with the storage when creating and activating a storage
domain.
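
A rough manual equivalent of those checks, run on the host as the vdsm user (a
sketch, not from the original message; the mount path is the one from this
thread and the test file name is made up):

    sudo -u vdsm touch /rhev/data-center/mnt/storage:_volume3_ovirt-hosted/access-test
    sudo -u vdsm dd if=/dev/zero bs=4096 count=1 oflag=direct \
        of=/rhev/data-center/mnt/storage:_volume3_ovirt-hosted/access-test
    sudo -u vdsm rm /rhev/data-center/mnt/storage:_volume3_ovirt-hosted/access-test

If any of these fail with "Operation not permitted", domain activation will fail
in the same way.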

Nir


[ovirt-users] Re: Can't connect vdsm storage: Command StorageDomain.getInfo with args failed: (code=350, message=Error in storage domain action

2020-02-01 Thread Amit Bawer
On Sat, Feb 1, 2020 at 6:39 PM  wrote:

> OK, I will try to set 777 permissions on the NFS storage. But why does this
> issue start when updating from 4.30.32-1 to 4.30.33-1, without any other
> changes?
>

The differing commit for 4.30.33 over 4.30.32 is the transition into block
size probing done by ioprocess-1.3.0:
https://github.com/oVirt/vdsm/commit/9bd210e340be0855126d1620cdb94840ced56129
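
To confirm which versions are actually installed on the affected host, something
like this should do (a sketch; the exact package names on EL7 are an
assumption):

    rpm -q vdsm ioprocess python2-ioprocess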





[ovirt-users] Re: Device /dev/sdb excluded by a filter.\n

2020-02-01 Thread Steve Watkins
After looking at this, I hit upon something.

Each machine in this cluster has two drives -- a 500 GB and a 1 TB one. On each
machine I was installing to the 500 GB drive (I am going to put in bigger
drives, but this was what was lying around).

I went through and looked at the machines, and the one that was failing had the
500 GB drive as the second drive. I reversed them, wiped everything and
reloaded. It seems to be installing now.


[ovirt-users] Re: Device /dev/sdb excluded by a filter.\n

2020-02-01 Thread Amit Bawer
Maybe information on this message thread could help:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/N7M57G7HC46NBQTX6T3KSVHEYV3IDDIP/

On Saturday, February 1, 2020, Steve Watkins  wrote:

> Since I managed to crash my last attempt at installing by uploading an
> ISO,  I wound up just reloading all the nodes and starting from scratch.
> Now one node gets "Device /dev/sdb excluded by a filter.\n" and fails when
> creating the volumes.  Can't seem to get past that -- the other machines
> are set up identically and don't fail, and it worked before when installed
> but now...
>
> Any ideas?


[ovirt-users] Device /dev/sdb excluded by a filter.\n

2020-02-01 Thread Steve Watkins
Since I managed to crash my last attempt at installing by uploading an ISO, I
wound up just reloading all the nodes and starting from scratch.  Now one node
gets "Device /dev/sdb excluded by a filter.\n" and fails when creating the
volumes.  Can't seem to get past that -- the other machines are set up
identically and don't fail, and it worked before when installed, but now...

Any ideas?
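
For what it's worth, "excluded by a filter" is an LVM message, so a usual first
step (a sketch, not advice given in this thread; wipefs -a is destructive) is to
look at the LVM filter and at leftover signatures on the disk:

    lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT /dev/sdb
    grep -E '^[[:space:]]*(filter|global_filter)' /etc/lvm/lvm.conf
    wipefs /dev/sdb        # list existing signatures
    wipefs -a /dev/sdb     # remove them -- destroys whatever is on /dev/sdb
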


[ovirt-users] Re: Can't connect vdsm storage: Command StorageDomain.getInfo with args failed: (code=350, message=Error in storage domain action

2020-02-01 Thread asm
OK, I will try to set 777 permissions on the NFS storage. But why does this
issue start when updating from 4.30.32-1 to 4.30.33-1, without any other changes?


[ovirt-users] Re: Can't connect vdsm storage: Command StorageDomain.getInfo with args failed: (code=350, message=Error in storage domain action

2020-02-01 Thread Amit Bawer
On Saturday, February 1, 2020,  wrote:

> Hi! I am trying to upgrade my hosts and have a problem with it. After upgrading
> one host I see that it is NonOperational. All was fine with
> vdsm-4.30.24-1.el7, but after upgrading to the new version
> vdsm-4.30.40-1.el7.x86_64 and some others I get errors.
> First of all, I see in the oVirt Events: Host srv02 cannot access the Storage
> Domain(s)  attached to the Data Center Default. Setting Host state
> to Non-Operational. My Default storage domain, with the HE VM data, is on NFS
> storage.
>
> In messages log of host:
> srv02 journal: ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent
> ERROR Traceback (most recent call last):#012  File "/usr/lib/python2.7/site-
> packages/ovirt_hosted_engine_ha/agent/a
> gent.py", line 131, in _run_agent#012return action(he)#012  File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py",
> line 55, in action_proper#012return he.start_monitoring
> ()#012  File 
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
> line 432, in start_monitoring#012self._initialize_broker()#012  File
> "/usr/lib/python2.7/site-packages/
> ovirt_hosted_engine_ha/agent/hosted_engine.py", line 556, in
> _initialize_broker#012m.get('options', {}))#012  File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
> line 8
> 9, in start_monitor#012).format(t=type, o=options,
> e=e)#012RequestError: brokerlink - failed to start monitor via
> ovirt-ha-broker: [Errno 2] No such file or directory, [monitor: 'network',
> options:
> {'tcp_t_address': None, 'network_test': None, 'tcp_t_port': None, 'addr':
> '192.168.2.248'}]
> Feb  1 15:41:42 srv02 journal: ovirt-ha-agent 
> ovirt_hosted_engine_ha.agent.agent.Agent
> ERROR Trying to restart agent
>
> In broker log:
> MainThread::WARNING::2020-02-01 15:43:35,167::storage_broker::
> 97::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(__init__)
> Can't connect vdsm storage: Command StorageDomain.getInfo with ar
> gs {'storagedomainID': 'bbdddea7-9cd6-41e7-ace5-fb9a6795caa8'} failed:
> (code=350, message=Error in storage domain action:
> (u'sdUUID=bbdddea7-9cd6-41e7-ace5-fb9a6795caa8',))
>
> In vdsm.log
> 2020-02-01 15:44:19,930+0600 INFO  (jsonrpc/0) [vdsm.api] FINISH
> getStorageDomainInfo error=[Errno 1] Operation not permitted
> from=::1,57528, task_id=40683f67-d7b0-4105-aab8-6338deb54b00 (api:52)
> 2020-02-01 15:44:19,930+0600 ERROR (jsonrpc/0) [storage.TaskManager.Task]
> (Task='40683f67-d7b0-4105-aab8-6338deb54b00') Unexpected error (task:875)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
> in _run
> return fn(*args, **kargs)
>   File "", line 2, in getStorageDomainInfo
>   File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in
> method
> ret = func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2753,
> in getStorageDomainInfo
> dom = self.validateSdUUID(sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 305,
> in validateSdUUID
> sdDom = sdCache.produce(sdUUID=sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 110,
> in produce
> domain.getRealDomain()
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 51,
> in getRealDomain
> return self._cache._realProduce(self._sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 134,
> in _realProduce
> domain = self._findDomain(sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 151,
> in _findDomain
> return findMethod(sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/nfsSD.py", line
> 145, in findDomain
> return NfsStorageDomain(NfsStorageDomain.findDomainPath(sdUUID))
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line
> 378, in __init__
> manifest.sdUUID, manifest.mountpoint)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line
> 853, in _detect_block_size
> block_size = iop.probe_block_size(mountpoint)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/outOfProcess.py",
> line 384, in probe_block_size
> return self._ioproc.probe_block_size(dir_path)
>   File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line
> 602, in probe_block_size
> "probe_block_size", {"dir": dir_path}, self.timeout)
>   File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line
> 448, in _sendCommand
> raise OSError(errcode, errstr)
> OSError: [Errno 1] Operation not permitted
> 2020-02-01 15:44:19,930+0600 INFO  (jsonrpc/0) [storage.TaskManager.Task]
> (Task='40683f67-d7b0-4105-aab8-6338deb54b00') aborting: Task is aborted:
> u'[Errno 1] Operation not permitted' - code 100 (task:1
> 181)
> 2020-02-01 15:44:19,930+0600 ERROR (jsonrpc/0) [storage.Dispatcher] FINISH
> getStorageDomainInfo error=[Errno 1] Operation 

[ovirt-users] Re: oVirt upgrade problems...

2020-02-01 Thread matteo fedeli
OK, so what is your hint? Which log level, and which log should I start looking
at?


[ovirt-users] Re: Gluster Heal Issue

2020-02-01 Thread Strahil Nikolov
On February 1, 2020 12:00:43 PM GMT+02:00, a...@pioner.kz wrote:
>Hi!
>I did it with Gluster working: I just copied the missing files from one of the
>hosts and started a heal of the volume after that.
>But the main thing I don't understand is why this is happening at all. I have
>seen it many times after maintenance of one host, for example.

Definitely a bug - but I'm not sure if it's FUSE or server-side.

Best Regards,
Strahil Nikolov


[ovirt-users] Re: Gluster Heal Issue

2020-02-01 Thread Strahil Nikolov
On February 1, 2020 10:53:59 AM GMT+02:00, Christian Reiss  wrote:
>Hey Strahil,
>
>thanks for your answer.
>
>On 01/02/2020 08:18, Strahil Nikolov wrote:
>> There is an active thread in gluster-users , so it will be nice to
>mention this there.
>> 
>> About the sync, you can find the paths via:
>> 1. Mount
>>   mount -t glusterfs -o aux-gfid-mount vm1:test /mnt/testvol
>> 2. Find the path of files:
>> getfattr -n trusted.glusterfs.pathinfo -e text /mnt/testvol/.gfid/
>> 
>> I bet it's the same file that causes me problems. Just verify the contents
>> and you will see that one of them is newer -> just rsync it to the bricks
>> node01 & node03 and run 'gluster volume heal  full.'
>
>
>I did a cross-post to gluster-users just now.
>You are right, the brick-files have a slightly different timestamp:
>
>[root@node01:~] # stat 
>/gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6
>   File: 
>‘/gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6’
>   Size: 67108864  Blocks: 54576  IO Block: 4096   regular file
>Device: fd09h/64777dInode: 16152829909  Links: 2
>Access: (0660/-rw-rw)  Uid: (0/root)   Gid: (0/   
>root)
>Context: system_u:object_r:glusterd_brick_t:s0
>Access: 2020-01-31 22:16:57.812620635 +0100
>Modify: 2020-02-01 07:19:24.183045141 +0100
>Change: 2020-02-01 07:19:24.186045203 +0100
>  Birth: -
>
>[root@node03:~] # stat 
>/gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6
>   File: 
>‘/gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6’
>   Size: 67108864  Blocks: 54576  IO Block: 4096   regular file
>Device: fd09h/64777dInode: 16154259424  Links: 2
>Access: (0660/-rw-rw)  Uid: (0/root)   Gid: (0/   
>root)
>Context: system_u:object_r:glusterd_brick_t:s0
>Access: 2020-01-31 22:16:57.811800217 +0100
>Modify: 2020-02-01 07:19:24.180939487 +0100
>Change: 2020-02-01 07:19:24.184939586 +0100
>  Birth: -
>
>Contents (getfattr, md5) are still identical.
>
>I am unsure about your suggested rsync, tho:
>
>  - node1 has file,
>  - node2 does not,
>  - node3 has file.
>
>So I can rsync node1 to node2 and node3 or node3 to node1 and node2.
>Sync to node1 and node3 can't be done as node2 does not have the file.
>
>Can I do the rsync on a live, running Gluster?

Hm ..
Just copy the file to node2 as it is missing.
In my case ovirt2 had a newer file than 1 & 3

Yes, you can sync the files - just place  them in the same folder.
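
A rough sketch of that copy-and-heal step, using names that appear in this
thread (the volume name is assumed from the brick path, and <path-to-file> is
the real path on the brick as resolved with the getfattr/pathinfo command above;
double-check everything before running it):

    # on node02, pull the missing file from node01's brick
    rsync -av node01:/gluster_bricks/ssd_storage/ssd_storage/<path-to-file> \
          /gluster_bricks/ssd_storage/ssd_storage/<path-to-file>
    # then trigger a full heal
    gluster volume heal ssd_storage full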

Best Regards,
Strahil Nikolov


[ovirt-users] Re: Can't connect vdsm storage: Command StorageDomain.getInfo with args failed: (code=350, message=Error in storage domain action

2020-02-01 Thread asm
Working fine with 4.30.32-1 of course, sorry.


[ovirt-users] Re: Can't connect vdsm storage: Command StorageDomain.getInfo with args failed: (code=350, message=Error in storage domain action

2020-02-01 Thread asm
This issue can be resolved by downgrading the packages:
  Installing : vdsm-api-4.30.32-1.el7.noarch                                1/26
  Installing : vdsm-common-4.30.32-1.el7.noarch                             2/26
  Installing : vdsm-yajsonrpc-4.30.32-1.el7.noarch                          3/26
  Installing : vdsm-network-4.30.32-1.el7.x86_64                            4/26
  Installing : vdsm-python-4.30.32-1.el7.noarch                             5/26
  Installing : vdsm-jsonrpc-4.30.32-1.el7.noarch                            6/26
  Installing : vdsm-http-4.30.32-1.el7.noarch                               7/26
  Installing : vdsm-hook-vmfex-dev-4.30.32-1.el7.noarch                     8/26
  Installing : vdsm-4.30.32-1.el7.x86_64                                    9/26
  Installing : vdsm-gluster-4.30.32-1.el7.x86_64                           10/26
  Installing : vdsm-hook-ethtool-options-4.30.32-1.el7.noarch              11/26
  Installing : vdsm-hook-fcoe-4.30.32-1.el7.noarch                         12/26
  Installing : vdsm-client-4.30.32-1.el7.noarch                            13/26
  Cleanup    : vdsm-client-4.30.33-1.el7.noarch                            14/26
  Cleanup    : vdsm-hook-ethtool-options-4.30.33-1.el7.noarch              15/26
  Cleanup    : vdsm-gluster-4.30.33-1.el7.x86_64                           16/26
  Cleanup    : vdsm-hook-fcoe-4.30.33-1.el7.noarch                         17/26
  Cleanup    : vdsm-hook-vmfex-dev-4.30.33-1.el7.noarch                    18/26
  Cleanup    : vdsm-4.30.33-1.el7.x86_64                                   19/26
  Cleanup    : vdsm-jsonrpc-4.30.33-1.el7.noarch                           20/26
  Cleanup    : vdsm-http-4.30.33-1.el7.noarch                              21/26
  Cleanup    : vdsm-python-4.30.33-1.el7.noarch                            22/26
  Cleanup    : vdsm-network-4.30.33-1.el7.x86_64                           23/26
  Cleanup    : vdsm-common-4.30.33-1.el7.noarch                            24/26
  Cleanup    : vdsm-api-4.30.33-1.el7.noarch
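
For reference, a downgrade along these lines can usually be done in one yum
transaction (a sketch; the package set is taken from the listing above, but
verify it matches your host before running it):

    yum downgrade vdsm-4.30.32-1.el7 vdsm-client-4.30.32-1.el7 \
        vdsm-common-4.30.32-1.el7 vdsm-api-4.30.32-1.el7 \
        vdsm-python-4.30.32-1.el7 vdsm-network-4.30.32-1.el7 \
        vdsm-jsonrpc-4.30.32-1.el7 vdsm-yajsonrpc-4.30.32-1.el7 \
        vdsm-http-4.30.32-1.el7 vdsm-gluster-4.30.32-1.el7 \
        vdsm-hook-ethtool-options-4.30.32-1.el7 vdsm-hook-fcoe-4.30.32-1.el7 \
        vdsm-hook-vmfex-dev-4.30.32-1.el7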

[ovirt-users] Re: Ovirt-engine-ha cannot to see live status of Hosted Engine

2020-02-01 Thread asm
Hi! You were right.
The problem was due to an error in the hosts file. The FQDN of the engine had
another IP in this file on this host, left over from a previous installation.
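
In other words, a stale /etc/hosts line pointed the engine FQDN at an old
address. Roughly like this (hostnames and addresses here are made up, for
illustration only):

    # stale entry left over from the previous installation (wrong)
    192.0.2.10    engine.example.local
    # what it should resolve to now
    192.0.2.20    engine.example.local
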
Thank you very much.
Please help me with my other question. I know that you can help me.


[ovirt-users] Re: Gluster Heal Issue

2020-02-01 Thread asm
Hi!
I did it with Gluster working: I just copied the missing files from one of the
hosts and started a heal of the volume after that.
But the main thing I don't understand is why this is happening at all. I have
seen it many times after maintenance of one host, for example.


[ovirt-users] Can't connect vdsm storage: Command StorageDomain.getInfo with args failed: (code=350, message=Error in storage domain action

2020-02-01 Thread asm
Hi! I trying to upgrade my hosts and have problem with it. After uprgading one 
host i see that this one NonOperational. All was fine with vdsm-4.30.24-1.el7 
but after upgrading with new version vdsm-4.30.40-1.el7.x86_64 and some others 
i have errors. 
Firtst of all i see in ovirt Events: Host srv02 cannot access the Storage 
Domain(s)  attached to the Data Center Default. Setting Host state to 
Non-Operational. My Default storage domain with HE VM data on NFS storage.

In messages log of host:  
srv02 journal: ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR 
Traceback (most recent call last):#012  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/a
gent.py", line 131, in _run_agent#012return action(he)#012  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 
55, in action_proper#012return he.start_monitoring
()#012  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
 line 432, in start_monitoring#012self._initialize_broker()#012  File 
"/usr/lib/python2.7/site-packages/
ovirt_hosted_engine_ha/agent/hosted_engine.py", line 556, in 
_initialize_broker#012m.get('options', {}))#012  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", 
line 8
9, in start_monitor#012).format(t=type, o=options, e=e)#012RequestError: 
brokerlink - failed to start monitor via ovirt-ha-broker: [Errno 2] No such 
file or directory, [monitor: 'network', options:
{'tcp_t_address': None, 'network_test': None, 'tcp_t_port': None, 'addr': 
'192.168.2.248'}]
Feb  1 15:41:42 srv02 journal: ovirt-ha-agent 
ovirt_hosted_engine_ha.agent.agent.Agent ERROR Trying to restart agent

In broker log: 
MainThread::WARNING::2020-02-01 
15:43:35,167::storage_broker::97::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(__init__)
 Can't connect vdsm storage: Command StorageDomain.getInfo with ar
gs {'storagedomainID': 'bbdddea7-9cd6-41e7-ace5-fb9a6795caa8'} failed:
(code=350, message=Error in storage domain action: 
(u'sdUUID=bbdddea7-9cd6-41e7-ace5-fb9a6795caa8',))

In vdsm.log
2020-02-01 15:44:19,930+0600 INFO  (jsonrpc/0) [vdsm.api] FINISH 
getStorageDomainInfo error=[Errno 1] Operation not permitted from=::1,57528, 
task_id=40683f67-d7b0-4105-aab8-6338deb54b00 (api:52)
2020-02-01 15:44:19,930+0600 ERROR (jsonrpc/0) [storage.TaskManager.Task] 
(Task='40683f67-d7b0-4105-aab8-6338deb54b00') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in 
_run
return fn(*args, **kargs)
  File "", line 2, in getStorageDomainInfo
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2753, in 
getStorageDomainInfo
dom = self.validateSdUUID(sdUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 305, in 
validateSdUUID
sdDom = sdCache.produce(sdUUID=sdUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 110, in 
produce
domain.getRealDomain()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 51, in 
getRealDomain
return self._cache._realProduce(self._sdUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 134, in 
_realProduce
domain = self._findDomain(sdUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 151, in 
_findDomain
return findMethod(sdUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/nfsSD.py", line 145, in 
findDomain
return NfsStorageDomain(NfsStorageDomain.findDomainPath(sdUUID))
  File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line 378, in 
__init__
manifest.sdUUID, manifest.mountpoint)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line 853, in 
_detect_block_size
block_size = iop.probe_block_size(mountpoint)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/outOfProcess.py", line 
384, in probe_block_size
return self._ioproc.probe_block_size(dir_path)
  File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 602, in 
probe_block_size
"probe_block_size", {"dir": dir_path}, self.timeout)
  File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 448, in 
_sendCommand
raise OSError(errcode, errstr)
OSError: [Errno 1] Operation not permitted
2020-02-01 15:44:19,930+0600 INFO  (jsonrpc/0) [storage.TaskManager.Task] 
(Task='40683f67-d7b0-4105-aab8-6338deb54b00') aborting: Task is aborted: 
u'[Errno 1] Operation not permitted' - code 100 (task:1
181)
2020-02-01 15:44:19,930+0600 ERROR (jsonrpc/0) [storage.Dispatcher] FINISH 
getStorageDomainInfo error=[Errno 1] Operation not permitted (dispatcher:87)

But I see that this domain is mounted (per the mount command):
storage:/volume3/ovirt-hosted on 
/rhev/data-center/mnt/storage:_volume3_ovirt-hosted type nfs4 
(rw,rela

[ovirt-users] Re: Gluster Heal Issue

2020-02-01 Thread Christian Reiss

Hey Strahil,

thanks for your answer.

On 01/02/2020 08:18, Strahil Nikolov wrote:

There is an active thread in gluster-users , so it will be nice to mention this 
there.

About the sync, you can find the paths via:
1. Mount
  mount -t glusterfs -o aux-gfid-mount vm1:test /mnt/testvol
2. Find the path of files:
getfattr -n trusted.glusterfs.pathinfo -e text /mnt/testvol/.gfid/

I bet it's the same file that causes me problems. Just verify the contents  and you will see 
that one of them is newer ->  just rsync it to the bricks  node01 & node03  and run 
'gluster volume heal  full.'



I did a cross-post to gluster-users just now.
You are right, the brick-files have a slightly different timestamp:

[root@node01:~] # stat 
/gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6
  File: 
‘/gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6’

  Size: 67108864  Blocks: 54576  IO Block: 4096   regular file
Device: fd09h/64777dInode: 16152829909  Links: 2
Access: (0660/-rw-rw)  Uid: (0/root)   Gid: (0/root)
Context: system_u:object_r:glusterd_brick_t:s0
Access: 2020-01-31 22:16:57.812620635 +0100
Modify: 2020-02-01 07:19:24.183045141 +0100
Change: 2020-02-01 07:19:24.186045203 +0100
 Birth: -

[root@node03:~] # stat 
/gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6
  File: 
‘/gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6’

  Size: 67108864  Blocks: 54576  IO Block: 4096   regular file
Device: fd09h/64777dInode: 16154259424  Links: 2
Access: (0660/-rw-rw)  Uid: (0/root)   Gid: (0/root)
Context: system_u:object_r:glusterd_brick_t:s0
Access: 2020-01-31 22:16:57.811800217 +0100
Modify: 2020-02-01 07:19:24.180939487 +0100
Change: 2020-02-01 07:19:24.184939586 +0100
 Birth: -

Contents (getfattr, md5) are still identical.

I am unsure about your suggested rsync, tho:

 - node1 has file,
 - node2 does not,
 - node3 has file.

So I can rsync node1 to node2 and node3 or node3 to node1 and node2.
Sync to node1 and node3 can't be done as node2 does not have the file.

Can I do the rsync on a live, running Gluster?

--
with kind regards,
mit freundlichen Gruessen,

Christian Reiss