[ovirt-users] Re: VG issue / Non Operational

2021-01-18 Thread Strahil Nikolov via Users
Are you sure that oVirt doesn't still use it (as a storage domain)?
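A quick way to check from the engine side is to query the REST API for the domain UUID (the engine hostname and password below are placeholders; admin@internal is the default internal admin account):

[root@engine ~]# curl -s -k -u admin@internal:PASSWORD \
    https://engine.example.com/ovirt-engine/api/storagedomains | grep 4a62cdb4

If the UUID is still listed, the domain has to be put into maintenance and detached/removed in the Administration Portal before the host can activate cleanly.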

Best Regards,
Strahil Nikolov

On Monday, 18 January 2021 at 09:11:18 GMT+2, Christian Reiss wrote:

Update:

I found out that 4a62cdb4-b314-4c7f-804e-8e7275518a7f is an iSCSI target
outside of Gluster. It is a test that we do not need anymore but can't
remove. According to

[root@node03 ~]# iscsiadm -m session
tcp: [1] 10.100.200.20:3260,1 iqn.2005-10.org.freenas.ctl:ovirt-data
(non-flash)

it's attached, but something is still missing...


[ovirt-users] Re: VG issue / Non Operational

2021-01-18 Thread Christian Reiss

Update & Fix:

There were leftover filter entries in both

  /etc/lvm/lvm.conf and
  /etc/multipath.conf

that worked well with the Gluster LVM setup but broke the iSCSI LVM mounts.
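For reference, a leftover filter of roughly this shape (the device paths below are hypothetical; the actual entries were not posted) would accept the Gluster brick device while rejecting everything else, including the iSCSI LUNs:

# /etc/lvm/lvm.conf: scan only the RAID device holding the Gluster bricks,
# reject all other block devices -- so iSCSI-backed PVs are never seen
filter = [ "a|^/dev/md127$|", "r|.*|" ]

# /etc/multipath.conf: blacklist everything, so no multipath maps are
# created -- which also hides LUNs that vdsm expects to reach via multipath
blacklist {
    devnode ".*"
}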

Fixed the entries and rebooted the server. Now everything works and the node is back up.
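A quick post-reboot sanity check (both are stock commands, nothing oVirt-specific):

[root@node03 ~]# vgs            # the storage domain VG should be visible again
[root@node03 ~]# multipath -ll  # no stale or unexpectedly blacklisted maps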

Cheerio!

-Chris.


--
with kind regards,
mit freundlichen Gruessen,

Christian Reiss




[ovirt-users] Re: VG issue / Non Operational

2021-01-17 Thread Christian Reiss

Update:

I found out that 4a62cdb4-b314-4c7f-804e-8e7275518a7f is an iSCSI target
outside of Gluster. It is a test that we do not need anymore but can't
remove. According to

[root@node03 ~]# iscsiadm -m session
tcp: [1] 10.100.200.20:3260,1 iqn.2005-10.org.freenas.ctl:ovirt-data
(non-flash)

it's attached, but something is still missing...
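If the target really is unused, one way to drop the stale session and stop it from being re-discovered on boot is the following (target and portal are taken from the session listing above; make sure no storage domain still references it first, or vdsm will simply log back in):

[root@node03 ~]# iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:ovirt-data \
    -p 10.100.200.20:3260 -u
[root@node03 ~]# iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:ovirt-data \
    -p 10.100.200.20:3260 -o delete

-u logs out of the session; -o delete removes the node record so it is not logged in again at startup.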



--
with kind regards,
mit freundlichen Gruessen,

Christian Reiss





[ovirt-users] Re: VG issue / Non Operational

2021-01-17 Thread Strahil Nikolov via Users
What is the output of 'lsblk -t' on all nodes?
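Besides the topology view, listing the transport column makes a stale iSCSI disk easy to spot; for example (this column selection is just one convenient choice, not from the original mail):

[root@node03 ~]# lsblk -o NAME,SIZE,TYPE,TRAN

Devices backed by the iSCSI session show 'iscsi' in the TRAN column.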

Best Regards,
Strahil Nikolov

On Sunday, 17.01.2021 at 11:19 +0100, Christian Reiss wrote:
> Hey folks,
> 
> quick (I hope) question: On my 3-node cluster I am swapping out all the
> SSDs with fewer but higher-capacity ones. So I took one node down
> (maintenance, stop), then removed all SSDs, set up a new RAID, set up
> LVM and Gluster, and let it resync. Gluster health status shows no
> unsynced entries.
> 
> Upon going from maintenance to online in the oVirt management UI, it goes
> into non-operational status; the vdsm log on the node shows:
> 
> 2021-01-17 11:13:29,051+0100 INFO  (jsonrpc/6) [api.host] START getAllVmStats() from=::1,48580 (api:48)
> 2021-01-17 11:13:29,051+0100 INFO  (jsonrpc/6) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::1,48580 (api:54)
> 2021-01-17 11:13:29,052+0100 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:312)
> 2021-01-17 11:13:30,420+0100 WARN  (monitor/4a62cdb) [storage.LVM] Reloading VGs failed (vgs=[u'4a62cdb4-b314-4c7f-804e-8e7275518a7f'] rc=5 out=[] err=['  Volume group "4a62cdb4-b314-4c7f-804e-8e7275518a7f" not found', '  Cannot process volume group 4a62cdb4-b314-4c7f-804e-8e7275518a7f']) (lvm:470)
> 2021-01-17 11:13:30,424+0100 ERROR (monitor/4a62cdb) [storage.Monitor] Setting up monitor for 4a62cdb4-b314-4c7f-804e-8e7275518a7f failed (monitor:330)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line 327, in _setupLoop
>     self._setupMonitor()
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line 349, in _setupMonitor
>     self._produceDomain()
>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 159, in wrapper
>     value = meth(self, *a, **kw)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line 367, in _produceDomain
>     self.domain = sdCache.produce(self.sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 110, in produce
>     domain.getRealDomain()
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 51, in getRealDomain
>     return self._cache._realProduce(self._sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 134, in _realProduce
>     domain = self._findDomain(sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 151, in _findDomain
>     return findMethod(sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 176, in _findUnfetchedDomain
>     raise se.StorageDomainDoesNotExist(sdUUID)
> 
> 
> I assume it fails due to the changed LVM UUID, right? Can I somehow
> fix/change the UUID and get the node back up again? It does not seem to
> be a major issue, to be honest.
> 
> I can already see the gluster mount (what oVirt mounts when it brings a
> node online), and Gluster is happy too.
> 
> Any help is appreciated!
> 
> -Chris.
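As a side note on the resync check mentioned above: a common way to confirm a rebuilt brick has fully healed is the following (the volume name 'data' is a placeholder, not from the original mail):

[root@node03 ~]# gluster volume heal data info

Every brick should report 'Number of entries: 0' before the node leaves maintenance.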
> 