[ovirt-users] Re: Gluster volume slower than raid1 zpool speed

2020-11-27 Thread wkmail
For that workload (using that particular test with dsync), that is what I saw 
on mounted gluster, given the 7200 RPM drives and a simple 1G 
network.


Next week I'll make a point of running your test with bonded ethernet to 
see if that improves things.


Note: our testing uses the following:

for size in 50M 10M 1M
do
 echo 'starting'
 pwd
 echo "SIZE = $size"
 dd if=/dev/zero of=./junk bs=$size count=100 oflag=direct
 rm ./junk
done

so we are doing multiple copies of much smaller files.

and this is what I see on that kit

SIZE = 50M
1.01 0.84 0.77 2/388 28977
100+0 records in
100+0 records out
5242880000 bytes (5.2 GB, 4.9 GiB) copied, 70.262 s, 74.6 MB/s
SIZE = 10M
3.88 1.79 1.11 2/400 29336
100+0 records in
100+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 15.8082 s, 66.3 MB/s
SIZE = 1M
3.93 1.95 1.18 1/394 29616
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 1.67975 s, 62.4 MB/s

With teamd (bonding) I would expect an approximately 40-50% speed increase 
(which is why I didn't catch my error earlier, as I am used to seeing 
values in the 80 MB/s range).
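
For reference, a minimal sketch of the two dd variants on a mounted gluster
path (using oflag=dsync for the original test is an assumption based on the
"dsync" mention above; the mount point is hypothetical):

cd /mnt/glustervol                                        # hypothetical mount point
dd if=/dev/zero of=./junk bs=1M count=100 oflag=direct    # bypass the page cache
dd if=/dev/zero of=./junk bs=1M count=100 oflag=dsync     # additionally sync every write
rm ./junk

oflag=dsync forces a synchronous write per block, so it is expected to report
noticeably lower throughput than oflag=direct on the same volume.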




On 11/26/2020 11:11 PM, Harry O wrote:

So my gluster performance results is expected?

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3YNJFZEIEZGX5NWPMVFWXB2XGVXZMMMS/


[ovirt-users] Re: Ovirt 4.4.3 - Unable to start hosted engine

2020-11-27 Thread Marco Marino
Other details related to sanlock:

2020-11-27 20:25:10 7413 [61860]: verify_leader 1 wrong space name
hosted-engin hosted-engine
/run/vdsm/storage/de4645fc-f379-4837-916b-a0c2b89927d9/69dc22c2-8e60-43f1-8d18-6aabfbc98581/dfa4e933-2b9c-4057-a4c5-aa4485b070e9
2020-11-27 20:25:10 7413 [61860]: leader1 delta_acquire_begin error -226
lockspace hosted-engine host_id 1
2020-11-27 20:25:10 7413 [61860]: leader2 path
/run/vdsm/storage/de4645fc-f379-4837-916b-a0c2b89927d9/69dc22c2-8e60-43f1-8d18-6aabfbc98581/dfa4e933-2b9c-4057-a4c5-aa4485b070e9
offset 0
2020-11-27 20:25:10 7413 [61860]: leader3 m 12212010 v 30004 ss 512 nh 0 mh
1 oi 0 og 0 lv 0
2020-11-27 20:25:10 7413 [61860]: leader4 sn hosted-engin rn  ts 0 cs
23839828
2020-11-27 20:25:11 7414 [57456]: s38 add_lockspace fail result -226
2020-11-27 20:25:19 7421 [57456]: s39 lockspace
hosted-engine:1:/run/vdsm/storage/de4645fc-f379-4837-916b-a0c2b89927d9/69dc22c2-8e60-43f1-8d18-6aabfbc98581/dfa4e933-2b9c-4057-a4c5-aa4485b070e9:0
2020-11-27 20:25:19 7421 [62044]: verify_leader 1 wrong space name
hosted-engin hosted-engine
/run/vdsm/storage/de4645fc-f379-4837-916b-a0c2b89927d9/69dc22c2-8e60-43f1-8d18-6aabfbc98581/dfa4e933-2b9c-4057-a4c5-aa4485b070e9
2020-11-27 20:25:19 7421 [62044]: leader1 delta_acquire_begin error -226
lockspace hosted-engine host_id 1
2020-11-27 20:25:19 7421 [62044]: leader2 path
/run/vdsm/storage/de4645fc-f379-4837-916b-a0c2b89927d9/69dc22c2-8e60-43f1-8d18-6aabfbc98581/dfa4e933-2b9c-4057-a4c5-aa4485b070e9
offset 0
2020-11-27 20:25:19 7421 [62044]: leader3 m 12212010 v 30004 ss 512 nh 0 mh
1 oi 0 og 0 lv 0
2020-11-27 20:25:19 7421 [62044]: leader4 sn hosted-engin rn  ts 0 cs
23839828
2020-11-27 20:25:20 7422 [57456]: s39 add_lockspace fail result -226
2020-11-27 20:25:25 7427 [57456]: s40 lockspace
hosted-engine:1:/run/vdsm/storage/de4645fc-f379-4837-916b-a0c2b89927d9/69dc22c2-8e60-43f1-8d18-6aabfbc98581/dfa4e933-2b9c-4057-a4c5-aa4485b070e9:0
2020-11-27 20:25:25 7427 [62090]: verify_leader 1 wrong space name
hosted-engin hosted-engine
/run/vdsm/storage/de4645fc-f379-4837-916b-a0c2b89927d9/69dc22c2-8e60-43f1-8d18-6aabfbc98581/dfa4e933-2b9c-4057-a4c5-aa4485b070e9
2020-11-27 20:25:25 7427 [62090]: leader1 delta_acquire_begin error -226
lockspace hosted-engine host_id 1
2020-11-27 20:25:25 7427 [62090]: leader2 path
/run/vdsm/storage/de4645fc-f379-4837-916b-a0c2b89927d9/69dc22c2-8e60-43f1-8d18-6aabfbc98581/dfa4e933-2b9c-4057-a4c5-aa4485b070e9
offset 0
2020-11-27 20:25:25 7427 [62090]: leader3 m 12212010 v 30004 ss 512 nh 0 mh
1 oi 0 og 0 lv 0
2020-11-27 20:25:25 7427 [62090]: leader4 sn hosted-engin rn  ts 0 cs
23839828
2020-11-27 20:25:26 7428 [57456]: s40 add_lockspace fail result -226
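
One way to double-check what sanlock actually reads from that lease area is to
dump it directly (a hedged sketch; the path is copied from the log above and
the default offset and sector size are assumed):

# prints the on-disk lockspace/resource names and timestamps for the lease
sanlock direct dump /run/vdsm/storage/de4645fc-f379-4837-916b-a0c2b89927d9/69dc22c2-8e60-43f1-8d18-6aabfbc98581/dfa4e933-2b9c-4057-a4c5-aa4485b070e9

The "verify_leader 1 wrong space name hosted-engin hosted-engine" lines suggest
the lockspace name read from disk does not match the expected one, which would
be consistent with stale or damaged lease data after the storage failover.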

Any help is welcome. Thank you,
Marco


On Fri, Nov 27, 2020 at 6:47 PM Marco Marino  wrote:

> Hi,
> I have an ovirt 4.4.3  with 2 clusters, hosted engine and iscsi storage.
> First cluster, composed of 2 servers (host1 and host2), is dedicated to the
> hosted engine, the second cluster is for vms. Furthermore, there is a SAN
> with 3 luns: one for hosted engine storage, one for vms and one unused. My
> SAN is built on top of a pacemaker/drbd cluster with 2 nodes with a virtual
> ip used as iscsi Portal IP. Starting from today, after a failover of the
> iscsi cluster, I'm unable to start the hosted engine. It seems that there
> is some problem with storage.
> Actually I have only one node (host1) running in the cluster. It seems
> there is some lock on lvs, but I'm not sure of this.
>
> Here are some details about the problem:
>
> 1. iscsiadm -m session
> iSCSI Transport Class version 2.0-870
> version 6.2.0.878-2
> Target: iqn.2003-01.org.linux-iscsi.s1-node1.x8664:sn.2a734f67d5b1
> (non-flash)
> Current Portal: 10.3.8.8:3260,1
> Persistent Portal: 10.3.8.8:3260,1
> **
> Interface:
> **
> Iface Name: default
> Iface Transport: tcp
> Iface Initiatorname: iqn.1994-05.com.redhat:4b668221d9a9
> Iface IPaddress: 10.3.8.10
> Iface HWaddress: default
> Iface Netdev: default
> SID: 1
> iSCSI Connection State: LOGGED IN
> iSCSI Session State: LOGGED_IN
> Internal iscsid Session State: NO CHANGE
> *
> Timeouts:
> *
> Recovery Timeout: 5
> Target Reset Timeout: 30
> LUN Reset Timeout: 30
> Abort Timeout: 15
> *
> CHAP:
> *
> username: 
> password: 
> username_in: 
> password_in: 
> 
> Negotiated iSCSI params:
> 
> HeaderDigest: None
> DataDigest: None
> MaxRecvDataSegmentLength: 262144
> MaxXmitDataSegmentLength: 262144
> FirstBurstLength: 65536
> MaxBurstLength: 262144
> ImmediateData: Yes
> InitialR2T: Yes
> MaxOutstandingR2T: 1
> 
> Attached SCSI devices:
> 
> Host Number: 7 State: running
> scsi7 Channel 00 Id 0 Lun: 0
> Attached scsi disk sdb State: running
> scsi7 Channel 00 Id 0 Lun: 1
> Attached scsi disk sdc State: running
>
> 2. vdsm.log errors:
>
> 2020-11-27 18:37:16,786+0100 INFO  (jsonrpc/0) [api] FINISH 

[ovirt-users] Re: Can not connect to gluster storage

2020-11-27 Thread Alex K
On Fri, Nov 27, 2020, 15:02 Stefan Wolf  wrote:

> Hello,
> I ve a host that can not connet to gluster storage.
> It has worked since I ve set up the environment, and today it stoped
> working
>
> this are the error messages in the webui
> The error message for connection kvm380.durchhalten.intern:/data returned
> by VDSM was: Failed to fetch Gluster Volume List
> Failed to connect Host kvm380.durchhalten.intern to the Storage Domains
> data.
> Failed to connect Host kvm380.durchhalten.intern to the Storage Domains
> hosted_storage.
>
>
> and here the vdsm.log
>
> StorageDomainDoesNotExist: Storage domain does not exist:
> (u'36663740-576a-4498-b28e-0a402628c6a7',)
> 2020-11-27 12:59:07,665+ INFO  (jsonrpc/2) [storage.TaskManager.Task]
> (Task='8bed48b8-0696-4d3f-966a-119219f3b013') aborting: Task is aborted:
> "Storage domain does not exist: (u'36663740-576a-4498-b28e-0a402628c6a7',)"
> - code 358 (task:1181)
> 2020-11-27 12:59:07,665+ ERROR (jsonrpc/2) [storage.Dispatcher] FINISH
> getStorageDomainInfo error=Storage domain does not exist:
> (u'36663740-576a-4498-b28e-0a402628c6a7',) (dispatcher:83)
> 2020-11-27 12:59:07,666+ INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC
> call StorageDomain.getInfo failed (error 358) in 0.38 seconds (__init__:312)
> 2020-11-27 12:59:07,698+ INFO  (jsonrpc/7) [vdsm.api] START
> connectStorageServer(domType=7,
> spUUID=u'----', conList=[{u'id':
> u'e29cf818-5ee5-46e1-85c1-8aeefa33e95d', u'vfs_type': u'glusterfs',
> u'connection': u'kvm380.durchhalten.intern:/engine', u'user': u'kvm'}],
> options=None) from=::1,40964, task_id=3a3eeb80-50ef-4710-a4f4-9d35da2ff281
> (api:48)
> 2020-11-27 12:59:07,871+ ERROR (jsonrpc/7) [storage.HSM] Could not
> connect to storageServer (hsm:2420)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2417,
> in connectStorageServer
> conObj.connect()
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py",
> line 167, in connect
> self.validate()
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py",
> line 297, in validate
> if not self.volinfo:
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py",
> line 284, in volinfo
> self._volinfo = self._get_gluster_volinfo()
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py",
> line 329, in _get_gluster_volinfo
> self._volfileserver)
>   File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line
> 56, in __call__
> return callMethod()
>   File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line
> 54, in 
> **kwargs)
>   File "", line 2, in glusterVolumeInfo
>   File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in
> _callmethod
> raise convert_to_error(kind, result)
> GlusterVolumesListFailedException: Volume list failed: rc=30806 out=()
> err=['Volume does not exist']
> 2020-11-27 12:59:07,871+ INFO  (jsonrpc/7) [vdsm.api] FINISH
> connectStorageServer return={'statuslist': [{'status': 4149, 'id':
> u'e29cf818-5ee5-46e1-85c1-8aeefa33e95d'}]} from=::1,40964,
> task_id=3a3eeb80-50ef-4710-a4f4-9d35da2ff281 (api:54)
> 2020-11-27 12:59:07,871+ INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC
> call StoragePool.connectStorageServer succeeded in 0.18 seconds
> (__init__:312)
> 2020-11-27 12:59:08,474+ INFO  (Reactor thread)
> [ProtocolDetector.AcceptorImpl] Accepted connection from ::1:40966
> (protocoldetector:61)
> 2020-11-27 12:59:08,484+ INFO  (Reactor thread)
> [ProtocolDetector.Detector] Detected protocol stomp from ::1:40966
> (protocoldetector:125)
> 2020-11-27 12:59:08,484+ INFO  (Reactor thread) [Broker.StompAdapter]
> Processing CONNECT request (stompserver:95)
> 2020-11-27 12:59:08,485+ INFO  (JsonRpc (StompReactor))
> [Broker.StompAdapter] Subscribe command received (stompserver:124)
> 2020-11-27 12:59:08,525+ INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC
> call Host.ping2 succeeded in 0.00 seconds (__init__:312)
> 2020-11-27 12:59:08,529+ INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC
> call Host.ping2 succeeded in 0.00 seconds (__init__:312)
> 2020-11-27 12:59:08,533+ INFO  (jsonrpc/6) [vdsm.api] START
> getStorageDomainInfo(sdUUID=u'36663740-576a-4498-b28e-0a402628c6a7',
> options=None) from=::1,40966, task_id=ee3ac98e-6a93-4cb2-a626-5533c8fb78ad
> (api:48)
> 2020-11-27 12:59:08,909+ INFO  (jsonrpc/6) [vdsm.api] FINISH
> getStorageDomainInfo error=Storage domain does not exist:
> (u'36663740-576a-4498-b28e-0a402628c6a7',) from=::1,40966,
> task_id=ee3ac98e-6a93-4cb2-a626-5533c8fb78ad (api:52)
> 2020-11-27 12:59:08,910+ ERROR (jsonrpc/6) [storage.TaskManager.Task]
> (Task='ee3ac98e-6a93-4cb2-a626-5533c8fb78ad') Unexpected error (task:875)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
> in _run
> return fn(*args, **kargs)
>   

[ovirt-users] Ovirt 4.4.3 - Unable to start hosted engine

2020-11-27 Thread Marco Marino
Hi,
I have an oVirt 4.4.3 setup with 2 clusters, hosted engine and iscsi storage.
The first cluster, composed of 2 servers (host1 and host2), is dedicated to the
hosted engine; the second cluster is for vms. Furthermore, there is a SAN
with 3 luns: one for hosted engine storage, one for vms and one unused. My
SAN is built on top of a pacemaker/drbd cluster with 2 nodes, with a virtual
ip used as the iscsi portal IP. Starting from today, after a failover of the
iscsi cluster, I'm unable to start the hosted engine. It seems that there
is some problem with storage.
Currently I have only one node (host1) running in the cluster. It seems
there is some lock on the LVs, but I'm not sure about this.
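
Since the portal IP just failed over, a hedged first check is whether the
iSCSI session and the underlying paths on host1 are still healthy (target and
portal values as shown in the session dump below):

iscsiadm -m session -P 3      # session state plus the attached sdX devices
iscsiadm -m session --rescan  # rescan LUNs after the portal failover
multipath -ll                 # confirm the multipath maps are up and not faulty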

Here are some details about the problem:

1. iscsiadm -m session
iSCSI Transport Class version 2.0-870
version 6.2.0.878-2
Target: iqn.2003-01.org.linux-iscsi.s1-node1.x8664:sn.2a734f67d5b1
(non-flash)
Current Portal: 10.3.8.8:3260,1
Persistent Portal: 10.3.8.8:3260,1
**
Interface:
**
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.redhat:4b668221d9a9
Iface IPaddress: 10.3.8.10
Iface HWaddress: default
Iface Netdev: default
SID: 1
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
*
Timeouts:
*
Recovery Timeout: 5
Target Reset Timeout: 30
LUN Reset Timeout: 30
Abort Timeout: 15
*
CHAP:
*
username: 
password: 
username_in: 
password_in: 

Negotiated iSCSI params:

HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 262144
MaxXmitDataSegmentLength: 262144
FirstBurstLength: 65536
MaxBurstLength: 262144
ImmediateData: Yes
InitialR2T: Yes
MaxOutstandingR2T: 1

Attached SCSI devices:

Host Number: 7 State: running
scsi7 Channel 00 Id 0 Lun: 0
Attached scsi disk sdb State: running
scsi7 Channel 00 Id 0 Lun: 1
Attached scsi disk sdc State: running

2. vdsm.log errors:

2020-11-27 18:37:16,786+0100 INFO  (jsonrpc/0) [api] FINISH getStats
error=Virtual machine does not exist: {'vmId':
'f3a1194d-0632-43c6-8e12-7f22518cff87'} (api:129)
.
2020-11-27 18:37:52,864+0100 INFO  (jsonrpc/4) [vdsm.api] FINISH
getVolumeInfo error=(-223, 'Sanlock resource read failure', 'Lease does not
exist on storage') from=::1,60880,
task_id=138a3615-d537-4e5f-a39c-335269ad0917 (api:52)
2020-11-27 18:37:52,864+0100 ERROR (jsonrpc/4) [storage.TaskManager.Task]
(Task='138a3615-d537-4e5f-a39c-335269ad0917') Unexpected error (task:880)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/storage/task.py", line 887,
in _run
return fn(*args, **kargs)
  File "", line 2, in getVolumeInfo
  File "/usr/lib/python3.6/site-packages/vdsm/common/api.py", line 50, in
method
ret = func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/hsm.py", line 3142,
in getVolumeInfo
info = self._produce_volume(sdUUID, imgUUID, volUUID).getInfo()
  File "/usr/lib/python3.6/site-packages/vdsm/storage/volume.py", line 258,
in getInfo
leasestatus = self.getLeaseStatus()
  File "/usr/lib/python3.6/site-packages/vdsm/storage/volume.py", line 203,
in getLeaseStatus
self.volUUID)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/sd.py", line 549, in
inquireVolumeLease
return self._domainLock.inquire(lease)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/clusterlock.py", line
464, in inquire
sector=self._block_size)
sanlock.SanlockException: (-223, 'Sanlock resource read failure', 'Lease
does not exist on storage')
2020-11-27 18:37:52,865+0100 INFO  (jsonrpc/4) [storage.TaskManager.Task]
(Task='138a3615-d537-4e5f-a39c-335269ad0917') aborting: Task is aborted:
"value=(-223, 'Sanlock resource read failure', 'Lease does not exist on
storage') abortedcode=100" (task:1190)


3. supervdsm.log
MainProcess|monitor/de4645f::DEBUG::2020-11-27
18:41:25,286::commands::153::common.commands::(start) /usr/bin/taskset
--cpu-list 0-11 /usr/sbin/dmsetup remove
de4645fc--f379--4837--916b--a0c2b89927d9-dfa4e933--2b9c--4057--a4c5--aa4485b070e9
(cwd None)
MainProcess|monitor/de4645f::DEBUG::2020-11-27
18:41:25,293::commands::98::common.commands::(run) FAILED:  =
b'device-mapper: remove ioctl on
de4645fc--f379--4837--916b--a0c2b89927d9-dfa4e933--2b9c--4057--a4c5--aa4485b070e9
 failed: Device or resource busy\nCommand failed.\n';  = 1
MainProcess|monitor/de4645f::ERROR::2020-11-27
18:41:25,294::supervdsm_server::97::SuperVdsm.ServerCallback::(wrapper)
Error in devicemapper_removeMapping
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/storage/devicemapper.py",
line 141, in removeMapping
commands.run(cmd)
  File "/usr/lib/python3.6/site-packages/vdsm/common/commands.py", line
101, in run
raise cmdutils.Error(args, p.returncode, out, err)
vdsm.common.cmdutils.Error: Command ['/usr/sbin/dmsetup', 'remove',
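
A hedged way to see why the dmsetup remove keeps failing with "Device or
resource busy" (the device name is copied from the log above):

DM=de4645fc--f379--4837--916b--a0c2b89927d9-dfa4e933--2b9c--4057--a4c5--aa4485b070e9
dmsetup info "$DM"                                                      # open count and state
ls /sys/block/"$(basename "$(readlink -f /dev/mapper/"$DM")")"/holders  # kernel-level holders
fuser -vm /dev/mapper/"$DM"                                             # userspace processes holding it open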

[ovirt-users] Can not connect to gluster storage

2020-11-27 Thread Stefan Wolf
Hello,
I have a host that cannot connect to gluster storage.
It had worked since I set up the environment, and today it stopped working.

These are the error messages in the webui:
The error message for connection kvm380.durchhalten.intern:/data returned by 
VDSM was: Failed to fetch Gluster Volume List
Failed to connect Host kvm380.durchhalten.intern to the Storage Domains data.
Failed to connect Host kvm380.durchhalten.intern to the Storage Domains 
hosted_storage.


and here is the vdsm.log:

StorageDomainDoesNotExist: Storage domain does not exist: 
(u'36663740-576a-4498-b28e-0a402628c6a7',)
2020-11-27 12:59:07,665+ INFO  (jsonrpc/2) [storage.TaskManager.Task] 
(Task='8bed48b8-0696-4d3f-966a-119219f3b013') aborting: Task is aborted: 
"Storage domain does not exist: (u'36663740-576a-4498-b28e-0a402628c6a7',)" - 
code 358 (task:1181)
2020-11-27 12:59:07,665+ ERROR (jsonrpc/2) [storage.Dispatcher] FINISH 
getStorageDomainInfo error=Storage domain does not exist: 
(u'36663740-576a-4498-b28e-0a402628c6a7',) (dispatcher:83)
2020-11-27 12:59:07,666+ INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call 
StorageDomain.getInfo failed (error 358) in 0.38 seconds (__init__:312)
2020-11-27 12:59:07,698+ INFO  (jsonrpc/7) [vdsm.api] START 
connectStorageServer(domType=7, spUUID=u'----', 
conList=[{u'id': u'e29cf818-5ee5-46e1-85c1-8aeefa33e95d', u'vfs_type': 
u'glusterfs', u'connection': u'kvm380.durchhalten.intern:/engine', u'user': 
u'kvm'}], options=None) from=::1,40964, 
task_id=3a3eeb80-50ef-4710-a4f4-9d35da2ff281 (api:48)
2020-11-27 12:59:07,871+ ERROR (jsonrpc/7) [storage.HSM] Could not connect 
to storageServer (hsm:2420)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2417, in 
connectStorageServer
conObj.connect()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 
167, in connect
self.validate()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 
297, in validate
if not self.volinfo:
  File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 
284, in volinfo
self._volinfo = self._get_gluster_volinfo()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 
329, in _get_gluster_volinfo
self._volfileserver)
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 56, in 
__call__
return callMethod()
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 54, in 

**kwargs)
  File "", line 2, in glusterVolumeInfo
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in 
_callmethod
raise convert_to_error(kind, result)
GlusterVolumesListFailedException: Volume list failed: rc=30806 out=() 
err=['Volume does not exist']
2020-11-27 12:59:07,871+ INFO  (jsonrpc/7) [vdsm.api] FINISH 
connectStorageServer return={'statuslist': [{'status': 4149, 'id': 
u'e29cf818-5ee5-46e1-85c1-8aeefa33e95d'}]} from=::1,40964, 
task_id=3a3eeb80-50ef-4710-a4f4-9d35da2ff281 (api:54)
2020-11-27 12:59:07,871+ INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call 
StoragePool.connectStorageServer succeeded in 0.18 seconds (__init__:312)
2020-11-27 12:59:08,474+ INFO  (Reactor thread) 
[ProtocolDetector.AcceptorImpl] Accepted connection from ::1:40966 
(protocoldetector:61)
2020-11-27 12:59:08,484+ INFO  (Reactor thread) [ProtocolDetector.Detector] 
Detected protocol stomp from ::1:40966 (protocoldetector:125)
2020-11-27 12:59:08,484+ INFO  (Reactor thread) [Broker.StompAdapter] 
Processing CONNECT request (stompserver:95)
2020-11-27 12:59:08,485+ INFO  (JsonRpc (StompReactor)) 
[Broker.StompAdapter] Subscribe command received (stompserver:124)
2020-11-27 12:59:08,525+ INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call 
Host.ping2 succeeded in 0.00 seconds (__init__:312)
2020-11-27 12:59:08,529+ INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call 
Host.ping2 succeeded in 0.00 seconds (__init__:312)
2020-11-27 12:59:08,533+ INFO  (jsonrpc/6) [vdsm.api] START 
getStorageDomainInfo(sdUUID=u'36663740-576a-4498-b28e-0a402628c6a7', 
options=None) from=::1,40966, task_id=ee3ac98e-6a93-4cb2-a626-5533c8fb78ad 
(api:48)
2020-11-27 12:59:08,909+ INFO  (jsonrpc/6) [vdsm.api] FINISH 
getStorageDomainInfo error=Storage domain does not exist: 
(u'36663740-576a-4498-b28e-0a402628c6a7',) from=::1,40966, 
task_id=ee3ac98e-6a93-4cb2-a626-5533c8fb78ad (api:52)
2020-11-27 12:59:08,910+ ERROR (jsonrpc/6) [storage.TaskManager.Task] 
(Task='ee3ac98e-6a93-4cb2-a626-5533c8fb78ad') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in 
_run
return fn(*args, **kargs)
  File "", line 2, in getStorageDomainInfo
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
ret = func(*args, **kwargs)
  File 
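
Hedged first checks for the "Volume does not exist" error above, run on the
host kvm380.durchhalten.intern (the volume names are taken from the log):

systemctl status glusterd      # is the local gluster daemon running?
gluster peer status            # are the other gluster nodes reachable?
gluster volume list            # does the host see the engine/data volumes at all?
gluster volume info engine     # brick and status details for the engine volume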

[ovirt-users] Re: BUG: after upgrading 4.4.2 to 4.4.3 ISO Domain show empty list

2020-11-27 Thread Dmitry Kharlamov
Recreated the ISO domain, but ISO images are still not displayed.
When loading the ISO files into the Data Domain, they are also not visible for the VM.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NQWS5L4EAZHUBAYKXDQWRP65QKRRCUAU/


[ovirt-users] [ANN] oVirt 4.4.4 Third Release Candidate is now available for testing

2020-11-27 Thread Sandro Bonazzola
oVirt 4.4.4 Third Release Candidate is now available for testing

The oVirt Project is pleased to announce the availability of oVirt 4.4.4
Third Release Candidate for testing, as of November 26th, 2020.

This update is the fourth in a series of stabilization updates to the 4.4
series.

How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1

Note: Upgrading from 4.4.2 GA or later should not require re-doing these
steps, if already performed while upgrading from 4.4.1 to 4.4.2 GA. These
are only required to be done once.

Due to Bug 1837864 -
Host enter emergency mode after upgrading to latest build

If you have your root file system on a multipath device on your hosts, you
should be aware that after upgrading from 4.4.1 to 4.4.4 your host may enter
emergency mode.

In order to prevent this, be sure to upgrade oVirt Engine first, then on
your hosts (a command-level sketch of these steps follows the list):

   1. Remove the current lvm filter while still on 4.4.1, or in emergency mode
      (if rebooted).
   2. Reboot.
   3. Upgrade to 4.4.4 (redeploy in case of already being on 4.4.4).
   4. Run vdsm-tool config-lvm-filter to confirm there is a new filter in
      place.
   5. Only if not using oVirt Node: run "dracut --force --add multipath" to
      rebuild the initramfs with the correct filter configuration.
   6. Reboot.
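
A hedged sketch of those host-side steps on an EL8 host that is not oVirt
Node (package and file-editing details are assumptions; adapt to your
environment before running):

# 1) While still on 4.4.1 (or from emergency mode), remove the current LVM
#    filter, e.g. by commenting out the "filter = ..." line in /etc/lvm/lvm.conf.
# 2) Reboot.
# 3) Upgrade the host to 4.4.4 (from the engine, or manually with dnf):
dnf upgrade -y
# 4) Let vdsm-tool generate and verify the new LVM filter:
vdsm-tool config-lvm-filter
# 5) Non-oVirt-Node hosts only: rebuild the initramfs with multipath support:
dracut --force --add multipath
# 6) Reboot to boot with the new initramfs and filter:
reboot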

Documentation

   - If you want to try oVirt as quickly as possible, follow the instructions
     on the Download page.
   - For complete installation, administration, and usage instructions, see
     the oVirt Documentation.
   - For upgrading from a previous version, see the oVirt Upgrade Guide.
   - For a general overview of oVirt, see About oVirt.

Important notes before you try it

Please note this is a pre-release build.

The oVirt Project makes no guarantees as to its suitability or usefulness.

This pre-release must not be used in production.

Installation instructions

For installation instructions and additional information please refer to:

https://ovirt.org/documentation/

This release is available now on x86_64 architecture for:

* Red Hat Enterprise Linux 8.2 or newer

* CentOS Linux (or similar) 8.2 or newer

This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:

* Red Hat Enterprise Linux 8.2 or newer

* CentOS Linux (or similar) 8.2 or newer

* oVirt Node 4.4 based on CentOS Linux 8.2 (available for x86_64 only)

See the release notes [1] for installation instructions and a list of new
features and bugs fixed.

Notes:

- oVirt Appliance is already available for CentOS Linux 8

- oVirt Node NG is already available for CentOS Linux 8

Additional Resources:

* Read more about the oVirt 4.4.4 release highlights:
http://www.ovirt.org/release/4.4.4/

* Get more oVirt project updates on Twitter: https://twitter.com/ovirt

* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/


[1] http://www.ovirt.org/release/4.4.4/

[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CPAHG5ZLNENLYV2W6OWOZESKDDIZTPNC/