*Respectfully,*
*Mahdi A. Mahdi*
----
*From:* Erekle Magradze
*Sent:* Friday, February 23, 2018 2:48 PM
*To:* Mahdi Adnan; users@ovirt.org
*Subject:* Re: [ovirt-users] rebooting hypervisors from time to time
Thanks for the reply,
I'
Dear all,
It would be great if someone could share any experience with a similar
case; a hint on where to start the investigation would be much appreciated.
Thanks again
Cheers
Erekle
On 02/22/2018 05:05 PM, Erekle Magradze wrote:
Hello there,
I am facing the following problem: from time to time one of the
hypervisors (there are 3 of them) is rebooting. I am using
ovirt-release42-4.2.1-1.el7.centos.noarch and gluster as a storage
backend (glusterfs-3.12.5-2.el7.x86_64).
I am suspecting gluster because of the e.g. messa
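Not an answer from the thread, but a sketch of where one might start on a stock CentOS 7 / oVirt 4.2 node. A common cause of unexpected node reboots with gluster-backed storage is watchdog fencing by sanlock/wdmd when storage stalls; that is an assumption to check here, not a diagnosis, and the volume name is a placeholder:

```shell
# 1. See what the previous boot logged before the host went down.
journalctl -b -1 -p err --no-pager | tail -50

# 2. sanlock uses the wdmd watchdog; if storage access stalls long
#    enough, the watchdog fences (reboots) the host.
grep -iE 'sanlock|wdmd|watchdog' /var/log/messages | tail -50

# 3. Check the gluster self-heal backlog (run on a gluster node;
#    replace VOLNAME with the actual volume name).
gluster volume heal VOLNAME info
```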
Hi,
Look into OpenShift Origin
Cheers
Erekle
On 01/18/2018 05:57 PM, Nathanaël Blanchet wrote:
And without the kubernetes ui plugin, how can I manage to use the
oVirt cloud provider described on the official kubernetes site:
https://kubernetes.io/docs/getting-started-guides/ovirt/ ?
On 18
Hello,
Could you please share your experience with snapshotting the VMs in
oVirt 4.1 with GlusterFS as a storage?
Thanks in advance
Erekle
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
m side.
Thanks,
Piotr
On Mon, Oct 16, 2017 at 4:21 PM, Erekle Magradze
wrote:
Hi Piotr,
Several times I've restarted the vdsm daemon on certain nodes; that could be
the reason.
The failure I've mentioned happened yesterday from 15:00 to 17:00.
Cheers
Erekle
On 10/16/2017 04:13 P
sanlock which caused
connection reset by peer.
After vdsm restart storage looks good.
@Nir can you take a look?
Thanks,
Piotr
On Mon, Oct 16, 2017 at 3:59 PM, Erekle Magradze
wrote:
Hi,
The issue is the following: after installation of oVirt 4.1 on three nodes
with GlusterFS as storage, oVirt
It was a typo in the failure message;
this is what I was getting:
*VDSM hostname command GetStatsVDS failed: Connection reset by peer*
On 10/16/2017 03:21 PM, Erekle Magradze wrote:
Hi,
It's getting clear now; indeed the momd service is disabled:
● momd.service - Memory Overcom
is enabled? Because there are two units, momd and mom-vdsm, and
the second one is the one that should be enabled.
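Martin's point can be checked with something like the following (unit names as shipped by the vdsm packages; `--now` assumes a systemd version that supports it):

```shell
# momd being disabled is expected; mom-vdsm is the unit that should run.
systemctl status momd mom-vdsm --no-pager

# Enable and start mom-vdsm, then restart vdsmd so it reconnects to MOM.
systemctl enable --now mom-vdsm
systemctl restart vdsmd
```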
Best regards
Martin Sivak
On Mon, Oct 16, 2017 at 3:04 PM, Erekle Magradze
wrote:
Hi Martin,
Thanks for the answer. Unfortunately this warning message persists; does it
mean that mom cannot commun
not resolve itself within a minute or so. If it happens only once
or twice after vdsm or mom restart then you are fine.
Best regards
--
Martin Sivak
SLA / oVirt
On Mon, Oct 16, 2017 at 2:44 PM, Erekle Magradze
wrote:
Hi,
after running
systemctl status vdsmd, I am getting that it's running, with these messages
at the end.
Oct 16 14:26:52 hostname vdsmd[2392]: vdsm throttled WARN MOM not available.
Oct 16 14:26:52 hostname vdsmd[2392]: vdsm throttled WARN MOM not
available, KSM stats will be missing.
Oct 16 14:2
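One way to verify the throttled warning is gone once mom-vdsm is up (`vdsm-client` is the client tool shipped with oVirt 4.x vdsm; exact availability depends on the vdsm version, so treat this as a best-effort sketch):

```shell
# Count recent "MOM not available" warnings; should drop to 0 after
# mom-vdsm is running and vdsmd has been restarted.
journalctl -u vdsmd --since "10 min ago" | grep -c "MOM not available"

# Once MOM is reachable, KSM statistics appear in the host stats.
vdsm-client Host getStats | grep -i ksm
```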
Hello guys,
I have a 3-node oVirt setup with GlusterFS volumes.
Here is the kernel version: 3.10.0-693.2.2.el7.x86_64
This is the version of gluster I am running: 3.8.15-2
The oVirt engine is running on a separate bare-metal host.
I am getting the following failure message:
VDSM command GetStatsVDS
Hey Guys,
Is there a way to attach an SSD directly to the oVirt VM?
Thanks in advance
Cheers
Erekle
it work on a larger deployment.
Regards
Fernando
On 07/08/2017 17:07, Erekle Magradze wrote:
Hi Fernando (sorry for misspelling your name, I used a different
keyboard),
So let's go with the following scenarios:
1. Let's say you have two servers (replication factor is 2), i.e. t
hosted engine?
Thanks,
Moacir
e, can
still be retrieved to both serve clients and reconstruct the
equivalent disk when it is replaced ?
Fernando
On 07/08/2017 10:26, Erekle Magradze wrote:
Hi Frenando,
Here is my experience: if you consider a particular hard drive as a
brick for a gluster volume and it dies, i.e. it becomes not accessible,
it's a huge hassle to discard that brick and exchange it with another one,
since gluster sometimes tries to access that broken brick and it's causing
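The brick-swap hassle described here is usually handled with gluster's replace-brick command; a hedged sketch, with the volume name, hostname, and brick paths invented for illustration:

```shell
# Swap the dead brick for a new, empty one on the same host.
gluster volume replace-brick VOLNAME \
    server1:/gluster/old-brick server1:/gluster/new-brick \
    commit force

# Self-heal then rebuilds the data onto the new brick; watch progress:
gluster volume heal VOLNAME info
```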
Dr.rer.nat. Erekle Magradze
Lead Big Data Engineering & DevOps
Rheinwerkallee 2, 53227 Bonn
Tel: +49 228 29974555
E-Mail erekle.magra...@recogizer.de
Web: www.recogizer.com
Recogizer auf LinkedIn https://www.linkedin.com/company-beta/10039182/
Folgen Sie uns auf Twitter https://twitter.com/recog
Thanks a lot, Kasturi
On 06/23/2017 03:25 PM, knarra wrote:
On 06/23/2017 03:38 PM, Erekle Magradze wrote:
Hello,
I am using glusterfs as the storage backend for the VM images, volumes
for oVirt consist of three bricks. Is it still necessary to configure
the arbiter to be on the safe side, or, since the number of bricks is odd,
will it be handled out of the box?
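For context, a hedged illustration of the two layouts in question (volume name, hostnames, and brick paths are made up): with three full data bricks a plain replica 3 volume already has an odd replica count, while an arbiter setup replaces one data brick with a metadata-only brick:

```shell
# Three full data bricks: a plain replica 3 volume, no arbiter involved.
gluster volume create vmstore replica 3 \
    node1:/bricks/vmstore node2:/bricks/vmstore node3:/bricks/vmstore

# Two data bricks plus one arbiter brick: the arbiter stores only
# file metadata, saving space while still breaking split-brain ties.
gluster volume create vmstore replica 3 arbiter 1 \
    node1:/bricks/vmstore node2:/bricks/vmstore node3:/bricks/arbiter
```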
Thanks in advance
Cheers
Erekle