On Mon, Jul 13, 2015 at 5:57 AM, Shubhendu Tripathi wrote:
> On 07/12/2015 09:53 PM, Omer Frenkel wrote:
>>
>>
>> - Original Message -
>>>
>>> From: "Liron Aravot"
>>> To: "Ryan Groten"
>>> Cc: users@ovirt.org
>>> Sent: Sunday, July 12, 2015 5:44:28 PM
>>> Subject: Re: [ovirt-users] Conc
On 10/07/2015 11:52, Николаев Алексей wrote:
> CC
>
> @Sandro Bonazzola, can you please tell us the status of the oVirt project for
> CentOS 6.6 hypervisor hosts? Has support for CentOS 6.6 hypervisor
> hosts ended?
CentOS Linux 6.6 won't be supported for cluster level 3.6. You may co
On 07/13/2015 01:42 PM, Piotr Kliczewski wrote:
Thank you Roy!
BTW - I went to the IRC channel - it was just me, a bot and one other
person who did not respond.
I'll keep an eye on the chat in the future.
***
*Mark Steele*
CIO / VP Technical Operations | TelVue Corporation
TelVue - We Share Your Vision
800.885.8886 x128 | mste...@telvue.com
Hello,
I created (through oVirt web) a GlusterFS distributed volume with four
bricks.
When I try to add a new Domain (GlusterFS Data), I get
Error while executing action Add Storage Connection: Internal Engine Error
and
Error validating master storage domain: ('MD read error',)
Full log
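One way to narrow down an "MD read error" like the one above (a sketch; the server name, volume name, and mount point are placeholders, not taken from this thread) is to confirm the volume is healthy and mountable from the hypervisor itself, since vdsm performs an equivalent FUSE mount when it creates the domain:

```
[root@hv00 ~]# gluster volume info myvol      # all four bricks should be listed
[root@hv00 ~]# gluster volume status myvol    # every brick should show Online
[root@hv00 ~]# mkdir -p /mnt/glustertest
[root@hv00 ~]# mount -t glusterfs gfs1.example.com:/myvol /mnt/glustertest
[root@hv00 ~]# ls -l /mnt/glustertest         # should succeed with no I/O errors
[root@hv00 ~]# umount /mnt/glustertest
```

If the manual mount or the directory listing fails, the problem is on the Gluster side rather than in the engine.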
Dear Listmembers,
I have a 3-node oVirt system based on self-hosted glusterfs.
Today I tried a VM migration, but it was unsuccessful.
The VM went into a paused state; it could not be resumed and it was impossible to
switch it off, so I killed it on the compute node, but it still shows as paused
On 13/07/15 17:02, Demeter Tibor wrote:
Hi,
I'm sorry, but I don't have a "real" hosted engine. I have glusterfs on the same
nodes as the VMs, but I don't use the hosted engine.
Also, I've checked: the VM is not running on either server.
Do you think that restarting vdsmd will solve this problem?
Thanks for the fast reply,
Tibor
-
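On the question of restarting vdsmd: restarting it is generally safe for running VMs, since they keep running under libvirt/qemu and only the host's engine connectivity is briefly interrupted. A sketch for both init systems (which one applies depends on your host OS, an assumption here):

```
# EL6 (sysvinit):
[root@hv00 ~]# service vdsmd restart
# EL7 (systemd):
[root@hv00 ~]# systemctl restart vdsmd
```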
Thanks for the responses, everyone, and for the RFE. I do use HA in some places
at the moment, but I see another timeout value called vdsConnectionTimeout.
Would HA use this value or vdsTimeout (set to 2 by default) when attempting to
contact the host?
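Both values mentioned above are engine-config options; a sketch for inspecting and changing them on the engine host (option names as given in the message; the new value is an arbitrary example, not a recommendation):

```
[root@engine ~]# engine-config -g vdsTimeout
[root@engine ~]# engine-config -g vdsConnectionTimeout
[root@engine ~]# engine-config -s vdsConnectionTimeout=10
[root@engine ~]# service ovirt-engine restart   # changes take effect after a restart
```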
-Original Message-
From: Shubhe
On 13/07/15 17:59, Demeter Tibor wrote:
The good news: there is an option for this.
The bad news: only replica 3 is supported. The other options are for
development purposes.
[root@hv00 ~]# cat /etc/vdsm/vdsm.conf
...
[gluster]
# Only replica 3 is supported, this configuration is for development.
# Value is comma separated. For ex
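For completeness, a sketch of how that stanza might look once uncommented. The key name `allowed_replica_counts` is an assumption based on vdsm's shipped defaults, since the snippet above is cut off; verify it against the comments in your own vdsm.conf before editing, and restart vdsmd afterwards:

```ini
# /etc/vdsm/vdsm.conf (sketch; key name assumed, not taken from this thread)
[gluster]
# Only replica 3 is supported; other counts are for development use only.
# Value is a comma-separated list of replica counts vdsm will accept.
allowed_replica_counts = 1,3
```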
Hi There,
This is not strictly oVirt, but is storage-related, so hopefully you
will indulge me?
Is there any detriment (performance or otherwise) in setting up a
single-node glusterFS storage? I know glusterFS is designed to be used
with multiple nodes, but I am wondering if there are any ill-ef
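On the single-node question: Gluster does allow a one-brick distribute volume; the main trade-offs are that you get no redundancy and you pay the FUSE translator overhead compared to plain local storage or NFS. A sketch of creating one (hostname and brick path are placeholders; `force` is required if the brick lives on the root filesystem):

```
[root@gfs1 ~]# gluster volume create single1 gfs1.example.com:/bricks/b1/brick
[root@gfs1 ~]# gluster volume start single1
[root@gfs1 ~]# gluster volume info single1
```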