[ovirt-users] Re: Problems after 4.3.8 update

2019-12-25 Thread Mahdi Adnan
We had an issue after upgrading our Gluster nodes to 6.6: a memory leak in the
Gluster self-heal daemon caused it to heal indefinitely and eat all of the
nodes' RAM (128 GB). We had to downgrade to 6.5 to solve the problem.
I think it is related to the changes made to the SHD code in version 6.6 (bug
1760706).
I might find the time to file a bug report for the issue.
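
For reference, the downgrade itself was simple; roughly what we ran on each
node, one node at a time (the package set and exact version string depend on
your repo, so check "yum list glusterfs" first):

  systemctl stop glusterd
  yum downgrade glusterfs-6.5 glusterfs-server-6.5 glusterfs-fuse-6.5 \
      glusterfs-api-6.5 glusterfs-cli-6.5 glusterfs-libs-6.5 \
      glusterfs-client-xlators-6.5
  systemctl start glusterd
  gluster volume heal <volname> info   # wait for heals to finish before the next node
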
Anyway, glad you solved your issue.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YFQYRHQHJ6VYWYUY6CY2V7SMDU7SGKIA/


Re: [ovirt-users] After upgrade to 4.2 some VM won't start

2018-02-24 Thread Mahdi Adnan
So if you create a new VM and attach the same disk to it, it runs without
issues?


--

Respectfully
Mahdi A. Mahdi


From: users-boun...@ovirt.org  on behalf of Arsène 
Gschwind 
Sent: Saturday, February 24, 2018 11:03 AM
To: users@ovirt.org
Subject: Re: [ovirt-users] After upgrade to 4.2 some VM won't start


When creating an identical VM and attaching the same disk, it starts and runs
perfectly. It seems that something goes wrong during the cluster compatibility
update for running VMs; this only happens on running VMs, and I could
reproduce it.

Is there a way to do some kind of diff between the new and the old VM settings 
to find out what may be different?
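
I was thinking along the lines of dumping both definitions and diffing them,
e.g. (VM names below are placeholders, and I'm not sure this would catch
engine-side-only settings):

  virsh -r dumpxml old-vm > old.xml   # read-only virsh works on the host without credentials
  virsh -r dumpxml new-vm > new.xml
  diff -u old.xml new.xml

or pulling both VMs from the engine REST API
(https://engine.fqdn/ovirt-engine/api/vms/<id>) and diffing that output
instead.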

Thanks,
Arsene

On 02/23/2018 08:14 PM, Arsène Gschwind wrote:

Hi,

After upgrading cluster compatibility to 4.2, some VMs won't start and I'm
unable to figure out why; the engine throws a Java exception.

I've attached the engine log.

Thanks for any help/hint.

rgds,
Arsene

--

Arsène Gschwind
Fa. Sapify AG im Auftrag der Universität Basel
IT Services
Klingelbergstr. 70 |  CH-4056 Basel  |  Switzerland
Tel. +41 79 449 25 63  |  http://its.unibas.ch 
ITS-ServiceDesk: support-...@unibas.ch | +41 61 
267 14 11



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


--

Arsène Gschwind
Fa. Sapify AG im Auftrag der Universität Basel
IT Services
Klingelbergstr. 70 |  CH-4056 Basel  |  Switzerland
Tel. +41 79 449 25 63  |  http://its.unibas.ch 
ITS-ServiceDesk: support-...@unibas.ch | +41 61 
267 14 11
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] rebooting hypervisors from time to time

2018-02-24 Thread Mahdi Adnan
Hi,

The log doesn't indicate a hypervisor reboot, and I see lots of errors in the
logs. During the reboot, what happened to the VMs inside the hypervisor? Were
they migrated or paused? What about the system logs? Do they indicate a
graceful shutdown?
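
Something like the following on the rebooted host should show whether it was a
clean shutdown or a hard reset (exact output depends on the distro and journald
setup):

  last -x shutdown reboot | head
  journalctl --list-boots
  journalctl -b -1 -p err | tail -100

If the previous boot's journal simply stops without any shutdown messages, it
was most likely a power cycle or fencing rather than a graceful reboot.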


--

Respectfully
Mahdi A. Mahdi


From: Erekle Magradze <erekle.magra...@recogizer.de>
Sent: Friday, February 23, 2018 2:48 PM
To: Mahdi Adnan; users@ovirt.org
Subject: Re: [ovirt-users] rebooting hypervisors from time to time


Thanks for the reply,

I've attached all the logs from yesterday; the reboot happened during the day,
but this is not the first time, and it is not limited to this one hypervisor.

Kind Regards

Erekle

On 02/23/2018 09:00 AM, Mahdi Adnan wrote:
Hi,

Can you post the VDSM and Engine logs ?


--

Respectfully
Mahdi A. Mahdi


From: users-boun...@ovirt.org on behalf of Erekle Magradze
<erekle.magra...@recogizer.de>
Sent: Thursday, February 22, 2018 11:48 PM
To: users@ovirt.org
Subject: Re: [ovirt-users] rebooting hypervisors from time to time

Dear all,

It would be great if someone could share any experience with a similar case;
a hint on where to start the investigation would also help.

Thanks again

Cheers

Erekle


On 02/22/2018 05:05 PM, Erekle Magradze wrote:
> Hello there,
>
> I am facing the following problem: from time to time one of the
> hypervisors (there are 3 of them) reboots. I am using
> ovirt-release42-4.2.1-1.el7.centos.noarch and Gluster as a storage
> backend (glusterfs-3.12.5-2.el7.x86_64).
>
> I suspect Gluster because of messages like the one below from one of
> the volumes.
>
> Could you please help and suggest which direction the investigation
> should take?
>
> Thanks in advance
>
> Cheers
>
> Erekle
>
>
> [2018-02-22 15:36:10.011687] and [2018-02-22 15:37:10.955013]
> [2018-02-22 15:41:10.198701] I [MSGID: 109063]
> [dht-layout.c:716:dht_layout_normalize] 0-virtimages-dht: Found
> anomalies in (null) (gfid = ----).
> Holes=1 overlaps=0
> [2018-02-22 15:41:10.198704] I [MSGID: 109063]
> [dht-layout.c:716:dht_layout_normalize] 0-virtimages-dht: Found
> anomalies in (null) (gfid = ----).
> Holes=1 overlaps=0
> [2018-02-22 15:42:11.293608] I [MSGID: 109063]
> [dht-layout.c:716:dht_layout_normalize] 0-virtimages-dht: Found
> anomalies in (null) (gfid = ----).
> Holes=1 overlaps=0
> [2018-02-22 15:53:16.245720] I [MSGID: 100030]
> [glusterfsd.c:2524:main] 0-/usr/sbin/glusterfs: Started running
> /usr/sbin/glusterfs version 3.12.5 (args: /usr/sbin/glusterfs
> --volfile-server=10.0.0.21 --volfi
> le-server=10.0.0.22 --volfile-server=10.0.0.23
> --volfile-id=/virtimages
> /rhev/data-center/mnt/glusterSD/10.0.0.21:_virtimages)
> [2018-02-22 15:53:16.263712] W [MSGID: 101002]
> [options.c:995:xl_opt_validate] 0-glusterfs: option 'address-family'
> is deprecated, preferred is 'transport.address-family', continuing
> with correction
> [2018-02-22 15:53:16.269595] I [MSGID: 101190]
> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started
> thread with index 1
> [2018-02-22 15:53:16.273483] I [MSGID: 101190]
> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started
> thread with index 2
> [2018-02-22 15:53:16.273594] W [MSGID: 101174]
> [graph.c:363:_log_if_unknown_option] 0-virtimages-readdir-ahead:
> option 'parallel-readdir' is not recognized
> [2018-02-22 15:53:16.273703] I [MSGID: 114020] [client.c:2360:notify]
> 0-virtimages-client-0: parent translators are ready, attempting
> connect on transport
> [2018-02-22 15:53:16.276455] I [MSGID: 114020] [client.c:2360:notify]
> 0-virtimages-client-1: parent translators are ready, attempting
> connect on transport
> [2018-02-22 15:53:16.276683] I [rpc-clnt.c:1986:rpc_clnt_reconfig]
> 0-virtimages-client-0: changing port to 49152 (from 0)
> [2018-02-22 15:53:16.279191] I [MSGID: 114020] [client.c:2360:notify]
> 0-virtimages-client-2: parent translators are ready, attempting
> connect on transport
> [2018-02-22 15:53:16.282126] I [MSGID: 114057]
> [client-handshake.c:1478:select_server_supported_programs]
> 0-virtimages-client-0: Using Program GlusterFS 3.3, Num (1298437),
> Version (330)
> [2018-02-22 15:53:16.282573] I [MSGID: 114046]
> [client-handshake.c:1231:client_setvolume_cbk] 0-virtimages-client-0:
> Connected to virtimages-client-0, attached to remote volume
> '/mnt/virtimages/virtimgs'.
> [2018-02-22 15:53:16.282584] I [MSGID: 114047]
> [client-handshake.c:1242:client_setvolume_cbk] 0-virtimag

Re: [ovirt-users] After upgrade to 4.2 some VM won't start

2018-02-23 Thread Mahdi Adnan
Are all VMs using the same storage domain?


--

Respectfully
Mahdi A. Mahdi


From: users-boun...@ovirt.org  on behalf of Arsène 
Gschwind 
Sent: Friday, February 23, 2018 10:14 PM
To: users
Subject: [ovirt-users] After upgrade to 4.2 some VM won't start


Hi,

After upgrading cluster compatibility to 4.2, some VMs won't start and I'm
unable to figure out why; the engine throws a Java exception.

I've attached the engine log.

Thanks for any help/hint.

rgds,
Arsene

--

Arsène Gschwind
Fa. Sapify AG im Auftrag der Universität Basel
IT Services
Klingelbergstr. 70 |  CH-4056 Basel  |  Switzerland
Tel. +41 79 449 25 63  |  http://its.unibas.ch 
ITS-ServiceDesk: support-...@unibas.ch | +41 61 
267 14 11
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] rebooting hypervisors from time to time

2018-02-23 Thread Mahdi Adnan
Hi,

Can you post the VDSM and engine logs?
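
By default they are plain files on disk:

  /var/log/ovirt-engine/engine.log   (on the engine machine)
  /var/log/vdsm/vdsm.log             (on each host)

and for this kind of reboot issue /var/log/sanlock.log and /var/log/messages
on the hosts are usually worth attaching as well.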


--

Respectfully
Mahdi A. Mahdi


From: users-boun...@ovirt.org  on behalf of Erekle 
Magradze 
Sent: Thursday, February 22, 2018 11:48 PM
To: users@ovirt.org
Subject: Re: [ovirt-users] rebooting hypervisors from time to time

Dear all,

It would be great if someone could share any experience with a similar case;
a hint on where to start the investigation would also help.

Thanks again

Cheers

Erekle


On 02/22/2018 05:05 PM, Erekle Magradze wrote:
> Hello there,
>
> I am facing the following problem: from time to time one of the
> hypervisors (there are 3 of them) reboots. I am using
> ovirt-release42-4.2.1-1.el7.centos.noarch and Gluster as a storage
> backend (glusterfs-3.12.5-2.el7.x86_64).
>
> I suspect Gluster because of messages like the one below from one of
> the volumes.
>
> Could you please help and suggest which direction the investigation
> should take?
>
> Thanks in advance
>
> Cheers
>
> Erekle
>
>
> [2018-02-22 15:36:10.011687] and [2018-02-22 15:37:10.955013]
> [2018-02-22 15:41:10.198701] I [MSGID: 109063]
> [dht-layout.c:716:dht_layout_normalize] 0-virtimages-dht: Found
> anomalies in (null) (gfid = ----).
> Holes=1 overlaps=0
> [2018-02-22 15:41:10.198704] I [MSGID: 109063]
> [dht-layout.c:716:dht_layout_normalize] 0-virtimages-dht: Found
> anomalies in (null) (gfid = ----).
> Holes=1 overlaps=0
> [2018-02-22 15:42:11.293608] I [MSGID: 109063]
> [dht-layout.c:716:dht_layout_normalize] 0-virtimages-dht: Found
> anomalies in (null) (gfid = ----).
> Holes=1 overlaps=0
> [2018-02-22 15:53:16.245720] I [MSGID: 100030]
> [glusterfsd.c:2524:main] 0-/usr/sbin/glusterfs: Started running
> /usr/sbin/glusterfs version 3.12.5 (args: /usr/sbin/glusterfs
> --volfile-server=10.0.0.21 --volfi
> le-server=10.0.0.22 --volfile-server=10.0.0.23
> --volfile-id=/virtimages
> /rhev/data-center/mnt/glusterSD/10.0.0.21:_virtimages)
> [2018-02-22 15:53:16.263712] W [MSGID: 101002]
> [options.c:995:xl_opt_validate] 0-glusterfs: option 'address-family'
> is deprecated, preferred is 'transport.address-family', continuing
> with correction
> [2018-02-22 15:53:16.269595] I [MSGID: 101190]
> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started
> thread with index 1
> [2018-02-22 15:53:16.273483] I [MSGID: 101190]
> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started
> thread with index 2
> [2018-02-22 15:53:16.273594] W [MSGID: 101174]
> [graph.c:363:_log_if_unknown_option] 0-virtimages-readdir-ahead:
> option 'parallel-readdir' is not recognized
> [2018-02-22 15:53:16.273703] I [MSGID: 114020] [client.c:2360:notify]
> 0-virtimages-client-0: parent translators are ready, attempting
> connect on transport
> [2018-02-22 15:53:16.276455] I [MSGID: 114020] [client.c:2360:notify]
> 0-virtimages-client-1: parent translators are ready, attempting
> connect on transport
> [2018-02-22 15:53:16.276683] I [rpc-clnt.c:1986:rpc_clnt_reconfig]
> 0-virtimages-client-0: changing port to 49152 (from 0)
> [2018-02-22 15:53:16.279191] I [MSGID: 114020] [client.c:2360:notify]
> 0-virtimages-client-2: parent translators are ready, attempting
> connect on transport
> [2018-02-22 15:53:16.282126] I [MSGID: 114057]
> [client-handshake.c:1478:select_server_supported_programs]
> 0-virtimages-client-0: Using Program GlusterFS 3.3, Num (1298437),
> Version (330)
> [2018-02-22 15:53:16.282573] I [MSGID: 114046]
> [client-handshake.c:1231:client_setvolume_cbk] 0-virtimages-client-0:
> Connected to virtimages-client-0, attached to remote volume
> '/mnt/virtimages/virtimgs'.
> [2018-02-22 15:53:16.282584] I [MSGID: 114047]
> [client-handshake.c:1242:client_setvolume_cbk] 0-virtimages-client-0:
> Server and Client lk-version numbers are not same, reopening the fds
> [2018-02-22 15:53:16.282665] I [MSGID: 108005]
> [afr-common.c:4929:__afr_handle_child_up_event]
> 0-virtimages-replicate-0: Subvolume 'virtimages-client-0' came back
> up; going online.
> [2018-02-22 15:53:16.282877] I [rpc-clnt.c:1986:rpc_clnt_reconfig]
> 0-virtimages-client-1: changing port to 49152 (from 0)
> [2018-02-22 15:53:16.282934] I [MSGID: 114035]
> [client-handshake.c:202:client_set_lk_version_cbk]
> 0-virtimages-client-0: Server lk version = 1
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

--
Recogizer Group GmbH

Dr.rer.nat. Erekle Magradze
Lead Big Data Engineering & DevOps
Rheinwerkallee 2, 53227 Bonn
Tel: +49 228 29974555

E-Mail erekle.magra...@recogizer.de
recogizer.com

-

Recogizer Group GmbH
Managing directors: Oliver Habisch, Carsten Kreutze
Commercial register: Amtsgericht Bonn HRB 20724
Registered office: Bonn; VAT ID: DE294195993

Re: [ovirt-users] Ovirt backups lead to unresponsive VM

2018-01-30 Thread Mahdi Adnan
I have Windows VMs, both client and server.
If you provide the engine.log file, we might have a look at it.
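
The interesting window is usually around the clone step; something along these
lines (default log path, and the search strings are just a guess at what to
look for) narrows it down:

  grep -iE 'not responding|unresponsive|Connection reset' /var/log/ovirt-engine/engine.log

together with the matching timestamps from /var/log/vdsm/vdsm.log on the host
running the VM.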


--

Respectfully
Mahdi A. Mahdi


From: Alex K <rightkickt...@gmail.com>
Sent: Monday, January 29, 2018 5:40 PM
To: Mahdi Adnan
Cc: users
Subject: Re: [ovirt-users] Ovirt backups lead to unresponsive VM

Hi,

I have observed this logged on the host when the issue occurs:

VDSM command GetStoragePoolInfoVDS failed: Connection reset by peer

or

VDSM host.domain command GetStatsVDS failed: Connection reset by peer

In the engine logs I have not been able to correlate anything.

Are you hosting Windows Server 2016 and Windows 10 VMs?
The weird thing is that I have the same setup on other clusters with no issues.

Thanx,
Alex

On Sun, Jan 28, 2018 at 9:21 PM, Mahdi Adnan <mahdi.ad...@outlook.com> wrote:
Hi,

We have a cluster of 17 nodes, backed by GlusterFS storage, using this same
script for backup.
We have had no issues with it so far.
Have you checked the engine log file?


--

Respectfully
Mahdi A. Mahdi


From: users-boun...@ovirt.org on behalf of Alex K <rightkickt...@gmail.com>
Sent: Wednesday, January 24, 2018 4:18 PM
To: users
Subject: [ovirt-users] Ovirt backups lead to unresponsive VM

Hi all,

I have a cluster with 3 nodes, using ovirt 4.1 in a self hosted setup on top 
glusterfs.
On some VMs (especially one Windows Server 2016 64-bit VM with 500 GB of
disk), I almost always observe that during the backup the VM becomes
unresponsive (the dashboard shows a question mark for the VM status, and the
VM does not respond to ping or to anything else). Guest agents are installed
on the VMs.

For scheduled backups I use:

https://github.com/wefixit-AT/oVirtBackup

The script does the following:

1. Snapshot the VM (this is done OK, without any failure)

2. Clone the snapshot (this step renders the VM unresponsive)

3. Export Clone

4. Delete clone

5. Delete snapshot


Do you have any similar experience? Any suggestions to address this?

I have never seen such an issue with the hosted Linux VMs.

The cluster has enough storage to accommodate the clone.


Thanx,

Alex



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt backups lead to unresponsive VM

2018-01-28 Thread Mahdi Adnan
Hi,

We have a cluster of 17 nodes, backed by GlusterFS storage, using this same
script for backup.
We have had no issues with it so far.
Have you checked the engine log file?


--

Respectfully
Mahdi A. Mahdi


From: users-boun...@ovirt.org  on behalf of Alex K 

Sent: Wednesday, January 24, 2018 4:18 PM
To: users
Subject: [ovirt-users] Ovirt backups lead to unresponsive VM

Hi all,

I have a cluster with 3 nodes, using ovirt 4.1 in a self hosted setup on top 
glusterfs.
On some VMs (especially one Windows Server 2016 64-bit VM with 500 GB of
disk), I almost always observe that during the backup the VM becomes
unresponsive (the dashboard shows a question mark for the VM status, and the
VM does not respond to ping or to anything else). Guest agents are installed
on the VMs.

For scheduled backups I use:

https://github.com/wefixit-AT/oVirtBackup

The script does the following:

1. Snapshot the VM (this is done OK, without any failure)

2. Clone the snapshot (this step renders the VM unresponsive)

3. Export Clone

4. Delete clone

5. Delete snapshot


Do you have any similar experience? Any suggestions to address this?

I have never seen such an issue with the hosted Linux VMs.

The cluster has enough storage to accommodate the clone.


Thanx,

Alex


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] No SPM after network issue

2017-08-25 Thread Mahdi Adnan
Hi,

Our oVirt DC became unresponsive after a networking issue between the engine,
hosts, and Gluster storage. The network issue resolved after around 50
seconds, but I lost the SPM.
sanlock log:

2017-08-24 16:00:05+0300 73290 [1127]: s14191 lockspace 
1b34ff4c-5d9d-44f5-a22e-6ca411865833:1:/rhev/data-center/mnt/glusterSD/192.168.209.195:_ovirt__imgs/1b34ff4c-5d9d-44f5-a22e-6ca411865833/dom_md/ids:0
2017-08-24 16:00:05+0300 73290 [1249]: 1b34ff4c aio collect RD 
0x7fa6f40008c0:0x7fa6f40008d0:0x7fa6f4101000 result -5:0 match res
2017-08-24 16:00:05+0300 73290 [1249]: read_sectors delta_leader offset 0 rv -5 
/rhev/data-center/mnt/glusterSD/192.168.209.195:_ovirt__imgs/1b34ff4c-5d9d-44f5-a22e-6ca411865833/dom_md/ids
2017-08-24 16:00:06+0300 73291 [1127]: s14191 add_lockspace fail result -5
2017-08-24 16:00:08+0300 73293 [12039]: s14192 lockspace 
1b34ff4c-5d9d-44f5-a22e-6ca411865833:1:/rhev/data-center/mnt/glusterSD/192.168.209.195:_ovirt__imgs/1b34ff4c-5d9d-44f5-a22e-6ca411865833/dom_md/ids:0
2017-08-24 16:00:08+0300 73293 [1367]: 1b34ff4c aio collect RD 
0x7fa6f40008c0:0x7fa6f40008d0:0x7fa6f4101000 result -5:0 match res
2017-08-24 16:00:08+0300 73293 [1367]: read_sectors delta_leader offset 0 rv -5 
/rhev/data-center/mnt/glusterSD/192.168.209.195:_ovirt__imgs/1b34ff4c-5d9d-44f5-a22e-6ca411865833/dom_md/ids
2017-08-24 16:00:09+0300 73294 [12039]: s14192 add_lockspace fail result -5


---

I can't read anything from the ids file; it gives me a read I/O error.
How can I recreate the ids file or reset sanlock without losing the whole DC?
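
From the sanlock man page it looks like the ids file can be inspected and, as
a last resort, reinitialised directly (I have not tried this yet, so please
correct me if this is the wrong approach; it would only be done with the
domain deactivated and no host holding the lockspace):

  sanlock client status
  sanlock direct dump /rhev/data-center/mnt/glusterSD/192.168.209.195:_ovirt__imgs/1b34ff4c-5d9d-44f5-a22e-6ca411865833/dom_md/ids
  sanlock direct init -s 1b34ff4c-5d9d-44f5-a22e-6ca411865833:0:/rhev/data-center/mnt/glusterSD/192.168.209.195:_ovirt__imgs/1b34ff4c-5d9d-44f5-a22e-6ca411865833/dom_md/ids:0

Although given the -5 (EIO) results above, the Gluster volume itself probably
needs to be healthy again before any of that would work.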

Thanks.


--

Respectfully
Mahdi A. Mahdi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt reports

2017-08-24 Thread Mahdi Adnan
Thank you very much Shirly.


--

Respectfully
Mahdi A. Mahdi


From: Shirly Radco <sra...@redhat.com>
Sent: Wednesday, August 23, 2017 11:18:14 AM
To: Mahdi Adnan
Cc: Ovirt Users
Subject: Re: [ovirt-users] ovirt reports

Hi Mahdi,

oVirt reports has been deprecated in 4.0.
We are working on the new oVirt metrics store solution.
Please see 
http://www.ovirt.org/develop/release-management/features/metrics/metrics-store/

Best regards,


--

SHIRLY RADCO

BI SOFTWARE ENGINEER

Red Hat Israel<https://www.redhat.com/>

TRIED. TESTED. TRUSTED.<https://redhat.com/trusted>


On Mon, Aug 21, 2017 at 2:20 PM, Mahdi Adnan 
<mahdi.ad...@outlook.com> wrote:

Hi,


I'm looking into getting reports out of oVirt. The documentation is not clear
on how to get started with reporting, and the link to oVirt Reports is broken:

http://www.ovirt.org/documentation/how-to/reports/dwh/Ovirt_Reports#documentation


Any idea how to get data from DWH ?



--

Respectfully
Mahdi A. Mahdi


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] ovirt reports

2017-08-21 Thread Mahdi Adnan
Hi,


I'm looking into getting reports out of oVirt. The documentation is not clear
on how to get started with reporting, and the link to oVirt Reports is broken:

http://www.ovirt.org/documentation/how-to/reports/dwh/Ovirt_Reports#documentation


Any idea how to get data out of the DWH?
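
I can connect to the history database directly on the engine (it is a normal
PostgreSQL database, named ovirt_engine_history by default), e.g.:

  su - postgres -c "psql ovirt_engine_history -c '\dt'"

which at least lists the history tables, but I could not find any
documentation describing them, which is really what I'm after.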



--

Respectfully
Mahdi A. Mahdi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt on sdcard?

2017-07-21 Thread Mahdi Adnan
Hello,


Same here; I'm running multiple servers on SD cards without issues.


--

Respectfully
Mahdi A. Mahdi


From: users-boun...@ovirt.org  on behalf of Arsène 
Gschwind 
Sent: Thursday, July 20, 2017 11:32:11 AM
To: users@ovirt.org
Subject: Re: [ovirt-users] ovirt on sdcard?


Hi Lionel,

I've been running such a setup for about 4 months without any problems so far,
on Cisco UCS blades.

rgds,
Arsène

On 07/19/2017 09:16 PM, Lionel Caignec wrote:

Hi,

I'm planning to install some new hypervisors (oVirt) and I'm wondering if it's
possible to install them on an SD card.
I know there are write limitations on this kind of storage device.
Is it a viable solution? Is there a tutorial somewhere about tuning oVirt for
this kind of storage?

Thanks

--
Lionel
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


--

Arsène Gschwind
Fa. Sapify AG im Auftrag der Universität Basel
IT Services
Klingelbergstr. 70 |  CH-4056 Basel  |  Switzerland
Tel. +41 79 449 25 63  |  http://its.unibas.ch 
ITS-ServiceDesk: support-...@unibas.ch | +41 61 
267 14 11
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Very poor GlusterFS performance

2017-07-08 Thread Mahdi Adnan
So oVirt accesses Gluster via FUSE? I thought it was using libgfapi.

When can we expect it to work with libgfapi?

And what about this entry in the 4.1.3 changelog?

BZ 1022961 "Gluster: running a VM from a gluster domain should use gluster URI
instead of a fuse mount"
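
For anyone wanting to check what a running VM actually uses, the qemu command
line on the host shows it: a FUSE mount appears as a plain file path under
/rhev/data-center/mnt/glusterSD/, while libgfapi would appear as a gluster://
URL, e.g.:

  ps -ef | grep qemu-kvm | grep -o 'file=[^,]*'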


--

Respectfully
Mahdi A. Mahdi


From: users-boun...@ovirt.org  on behalf of Ralf 
Schenk 
Sent: Monday, June 19, 2017 7:32:45 PM
To: users@ovirt.org
Subject: Re: [ovirt-users] Very poor GlusterFS performance


Hello,

Gluster performance is bad. That's why I asked for native qemu libgfapi access
to Gluster volumes for oVirt VMs, which I thought had been possible since
3.6.x. The documentation is misleading, and as of 4.1.2 oVirt still uses FUSE
to mount Gluster-based VM disks.

Bye

Am 19.06.2017 um 17:23 schrieb Darrell Budic:
Chris-

You probably need to head over to 
gluster-us...@gluster.org for help with 
performance issues.

That said, what kind of performance are you getting, via some form of testing
like bonnie++ or even dd runs? Comparing raw-brick vs. Gluster performance is
useful to determine what kind of performance you're actually getting.

Beyond that, I’d recommend dropping the arbiter bricks and re-adding them as 
full replicas, they can’t serve distributed data in this configuration and may 
be slowing things down on you. If you’ve got a storage network setup, make sure 
it’s using the largest MTU it can, and consider adding/testing these settings 
that I use on my main storage volume:

performance.io-thread-count: 32
client.event-threads: 8
server.event-threads: 3
performance.stat-prefetch: on
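
Each of those is applied per volume with the gluster CLI, e.g.:

  gluster volume set vmssd performance.io-thread-count 32

(volume name taken from your config below).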

Good luck,

  -Darrell


On Jun 19, 2017, at 9:46 AM, Chris Boot 
> wrote:

Hi folks,

I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10
configuration. My VMs run off a replica 3 arbiter 1 volume comprised of
6 bricks, which themselves live on two SSDs in each of the servers (one
brick per SSD). The bricks are XFS on LVM thin volumes straight onto the
SSDs. Connectivity is 10G Ethernet.

Performance within the VMs is pretty terrible. I experience very low
throughput and random IO is really bad: it feels like a latency issue.
On my oVirt nodes the SSDs are not generally very busy. The 10G network
seems to run without errors (iperf3 gives bandwidth measurements of >=
9.20 Gbits/sec between the three servers).

To put this into perspective: I was getting better behaviour from NFS4
on a gigabit connection than I am with GlusterFS on 10G: that doesn't
feel right at all.

My volume configuration looks like this:

Volume Name: vmssd
Type: Distributed-Replicate
Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: ovirt3:/gluster/ssd0_vmssd/brick
Brick2: ovirt1:/gluster/ssd0_vmssd/brick
Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
Brick4: ovirt3:/gluster/ssd1_vmssd/brick
Brick5: ovirt1:/gluster/ssd1_vmssd/brick
Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
Options Reconfigured:
nfs.disable: on
transport.address-family: inet6
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 1
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
features.shard-block-size: 128MB
performance.strict-o-direct: on
network.ping-timeout: 30
cluster.granular-entry-heal: enable

I would really appreciate some guidance on this to try to improve things
because at this rate I will need to reconsider using GlusterFS altogether.

Cheers,
Chris

--
Chris Boot
bo...@bootc.net
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


--


Ralf Schenk
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail r...@databay.de

Databay AG
Jens-Otto-Krag-Straße 11
D-52146 Würselen
www.databay.de

Registered office/district court Aachen • HRB: 8437 • VAT ID: DE 210844202
Management board: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari,
Dipl.-Kfm. Philipp Hermanns
Chairman of the supervisory board: Wilhelm Dohmen

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Very poor GlusterFS performance

2017-06-19 Thread Mahdi Adnan
Hi,


Can you share some numbers? What tests are you running?

I'm running oVirt with Gluster without performance issues, but I'm running
replica 2 on all SSDs.

Gluster logs might help too.
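
Something like fio inside one of the VMs would make the numbers comparable
(the parameters here are just an example):

  fio --name=randrw --filename=/tmp/fio.test --size=1g --direct=1 \
      --rw=randrw --bs=4k --iodepth=32 --ioengine=libaio --runtime=60 --time_based

plus the same run against a raw brick on one of the hosts for comparison.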


--

Respectfully
Mahdi A. Mahdi


From: users-boun...@ovirt.org  on behalf of Chris Boot 

Sent: Monday, June 19, 2017 5:46:08 PM
To: oVirt users
Subject: [ovirt-users] Very poor GlusterFS performance

Hi folks,

I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10
configuration. My VMs run off a replica 3 arbiter 1 volume comprised of
6 bricks, which themselves live on two SSDs in each of the servers (one
brick per SSD). The bricks are XFS on LVM thin volumes straight onto the
SSDs. Connectivity is 10G Ethernet.

Performance within the VMs is pretty terrible. I experience very low
throughput and random IO is really bad: it feels like a latency issue.
On my oVirt nodes the SSDs are not generally very busy. The 10G network
seems to run without errors (iperf3 gives bandwidth measurements of >=
9.20 Gbits/sec between the three servers).

To put this into perspective: I was getting better behaviour from NFS4
on a gigabit connection than I am with GlusterFS on 10G: that doesn't
feel right at all.

My volume configuration looks like this:

Volume Name: vmssd
Type: Distributed-Replicate
Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: ovirt3:/gluster/ssd0_vmssd/brick
Brick2: ovirt1:/gluster/ssd0_vmssd/brick
Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
Brick4: ovirt3:/gluster/ssd1_vmssd/brick
Brick5: ovirt1:/gluster/ssd1_vmssd/brick
Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
Options Reconfigured:
nfs.disable: on
transport.address-family: inet6
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 1
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
features.shard-block-size: 128MB
performance.strict-o-direct: on
network.ping-timeout: 30
cluster.granular-entry-heal: enable

I would really appreciate some guidance on this to try to improve things
because at this rate I will need to reconsider using GlusterFS altogether.

Cheers,
Chris

--
Chris Boot
bo...@bootc.net
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Changing MAC Pool

2017-06-19 Thread Mahdi Adnan
Thank you very much.


I am running engine 4.1 but the cluster version is still 4.0.

I extended the MAC pool and it worked fine.


Thanks again.


--

Respectfully
Mahdi A. Mahdi


From: Michael Burman <mbur...@redhat.com>
Sent: Monday, June 19, 2017 7:56:24 AM
To: Mahdi Adnan
Cc: Ovirt Users
Subject: Re: [ovirt-users] Changing MAC Pool

Hi Mahdi

What version are you running? It is possible to extend the MAC pool range.

Before 4.1 you can extend the MAC pool range globally with engine-config 
command, for example:

- engine-config -s 
MacPoolRanges=00:00:00:00:00:00-00:00:00:10:00:00,00:00:00:02:00:00-00:03:00:00:00:0A

- restart ovirt-engine service

From version 4.1, the MAC pool range moved to the cluster level, and it is now
possible to edit/create/extend the MAC pool range for each cluster separately
via the UI:

- 'Clusters' > edit cluster > 'MAC Address pool' range sub tab > 
add/extend/edit/remove
- Or, via 'Configure', it is possible to create MAC pool entities and then
assign them to the desired clusters.

Cheers)

On Sun, Jun 18, 2017 at 1:25 PM, Mahdi Adnan 
<mahdi.ad...@outlook.com> wrote:

Hi,


I ran into an issue where I have no more MACs left in the MAC pool.

I used the default MAC pool, and now I want to create a new one for the
cluster.

Is it possible to create a new MAC pool for the cluster without affecting the
VMs?


Appreciate your help.


--

Respectfully
Mahdi A. Mahdi


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




--
Michael Burman
RedHat Israel, RHV-M Network QE

Mobile: 054-5355725
IRC: mburman
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Changing MAC Pool

2017-06-18 Thread Mahdi Adnan
Hi,


I ran into an issue where I have no more MACs left in the MAC pool.

I used the default MAC pool, and now I want to create a new one for the
cluster.

Is it possible to create a new MAC pool for the cluster without affecting the
VMs?


Appreciate your help.


--

Respectfully
Mahdi A. Mahdi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hardware for Hyperconverged oVirt: Gluster storage best practice

2017-06-12 Thread Mahdi Adnan
Hi,


4 SSDs in "distributed replica 2" volume for VM images, with additional 20 HDDs 
in another volume.

We had some minor XFS issues with the HDDs volume.

as for monitoring, standard snmp with few scripts to read smart report, we're 
still looking for a better way to monitor Gluster.
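
For the basics, something as simple as the following from cron goes a long way
(volume and device names are placeholders):

  smartctl -H -A /dev/sdX
  gluster volume status <volname> detail
  gluster volume heal <volname> info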

The hardware is Cisco UCS C220.


We have another setup that is not hyperconverged; it is equipped with 96 SSDs
only.

No major issues so far.


--

Respectfully
Mahdi A. Mahdi


From: ov...@fateknollogee.com <ov...@fateknollogee.com>
Sent: Sunday, June 11, 2017 4:45:30 PM
To: Mahdi Adnan
Cc: Barak Korren; Yaniv Kaul; Ovirt Users
Subject: Re: [ovirt-users] Hardware for Hyperconverged oVirt: Gluster storage 
best practice

Mahdi,

Can you share some more detail on your hardware?
How many total SSDs?
Have you had any drive failures?
How do you monitor for failed drives?
Was it a problem replacing failed drives?

On 2017-06-11 02:21, Mahdi Adnan wrote:
> Hi,
>
> In our setup, we used each SSD as a standalone brick "no RAID" and
> created distributed replica with sharding.
>
> Also, we are NOT managing Gluster from ovirt.
>
> --
>
> Respectfully
> MAHDI A. MAHDI
>
> -
>
> FROM: users-boun...@ovirt.org <users-boun...@ovirt.org> on behalf of
> Barak Korren <bkor...@redhat.com>
> SENT: Sunday, June 11, 2017 11:20:45 AM
> TO: Yaniv Kaul
> CC: ov...@fateknollogee.com; Ovirt Users
> SUBJECT: Re: [ovirt-users] Hardware for Hyperconverged oVirt: Gluster
> storage best practice
>
> On 11 June 2017 at 11:08, Yaniv Kaul <yk...@redhat.com> wrote:
>>
>>> I will install the o/s for each node on a SATADOM.
>>> Since each node will have 6x SSD for gluster storage.
>>> Should this be software RAID, hardware RAID or no RAID?
>>
>> I'd reckon that you should prefer HW RAID on software RAID, and some
> RAID on
>> no RAID at all, but it really depends on your budget, performance,
> and your
>> availability requirements.
>>
>
> Not sure that is the best advice, given the use of Gluster+SSDs for
> hosting individual VMs.
>
> Typical software or hardware RAID systems are designed for use with
> spinning disks, and may not yield any better performance on SSDs. RAID
> is also not very good when I/O is highly scattered as it probably is
> when running multiple different VMs.
>
> So we are left with using RAID solely for availability. I think
> Gluster may already provide that, so adding additional software or
> hardware layers for RAID may just degrade performance without
> providing any tangible benefits.
>
> I think just defining each SSD as a single Gluster brick may provide
> the best performance for VMs, but my understanding of this is
> theoretical, so I leave it to the Gluster people to provide further
> insight.
>
> --
> Barak Korren
> RHV DevOps team , RHCE, RHCi
> Red Hat EMEA
> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hardware for Hyperconverged oVirt: Gluster storage best practice

2017-06-12 Thread Mahdi Adnan
Hi,


In our setup, we used each SSD as a standalone brick (no RAID) and created a
distributed replicated volume with sharding.

Also, we are NOT managing Gluster from oVirt.
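
Roughly, the volume was created along these lines (host names and brick paths
below are placeholders, not our exact layout):

  gluster volume create vmstore replica 2 \
      node1:/bricks/ssd0/brick node2:/bricks/ssd0/brick \
      node1:/bricks/ssd1/brick node2:/bricks/ssd1/brick
  gluster volume set vmstore group virt
  gluster volume set vmstore features.shard on
  gluster volume set vmstore storage.owner-uid 36
  gluster volume set vmstore storage.owner-gid 36
  gluster volume start vmstore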

--

Respectfully
Mahdi A. Mahdi


From: users-boun...@ovirt.org  on behalf of Barak 
Korren 
Sent: Sunday, June 11, 2017 11:20:45 AM
To: Yaniv Kaul
Cc: ov...@fateknollogee.com; Ovirt Users
Subject: Re: [ovirt-users] Hardware for Hyperconverged oVirt: Gluster storage 
best practice

On 11 June 2017 at 11:08, Yaniv Kaul  wrote:
>
>> I will install the o/s for each node on a SATADOM.
>> Since each node will have 6x SSD for gluster storage.
>> Should this be software RAID, hardware RAID or no RAID?
>
> I'd reckon that you should prefer HW RAID on software RAID, and some RAID on
> no RAID at all, but it really depends on your budget, performance, and your
> availability requirements.
>

Not sure that is the best advice, given the use of Gluster+SSDs for
hosting individual VMs.

Typical software or hardware RAID systems are designed for use with
spinning disks, and may not yield any better performance on SSDs. RAID
is also not very good when I/O is highly scattered as it probably is
when running multiple different VMs.

So we are left with using RAID solely for availability. I think
Gluster may already provide that, so adding additional software or
hardware layers for RAID may just degrade performance without
providing any tangible benefits.

I think just defining each SSD as a single Gluster brick may provide
the best performance for VMs, but my understanding of this is
theoretical, so I leave it to the Gluster people to provide further
insight.

--
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt Power Management with Cisco UCS C220 M4S

2017-05-21 Thread Mahdi Adnan
Thank you very much; I've been searching for this for so long.


--

Respectfully
Mahdi A. Mahdi


From: users-boun...@ovirt.org  on behalf of Abi 
Askushi 
Sent: Saturday, May 20, 2017 4:14:46 PM
To: users
Subject: [ovirt-users] oVirt Power Management with Cisco UCS C220 M4S

Hi All,

For anyone who might stumble on a Cisco UCS C220 M4S and wonder how to
configure power management, below are the steps to configure it, as it took me
some hours to figure out...

1. Enable IPMI on the server (Cisco has this documented).

2. In the oVirt GUI, edit the host -> Power Management, then select "ipmilan"
and add lanplus=1 as an option (the lanplus=1 option was the tricky part).

To test from command line:
 ipmitool -I lanplus -H  -U admin -P somepass -v chassis power status
It will give the response: "Chassis Power is on"

Alex
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users