Hello Darrell,
On 2014-01-08 18:47, Darrell Budic wrote:
Grégoire-
I think this is expected behavior. Well, at least the high glusterfsd
CPU use during disk creation, anyway. I tried creating a 10 G disk on
my test environment and observed similar high CPU usage by glusterfsd.
Did the
Hi,
On 2014-01-08 10:24, Andrew Lau wrote:
Hi,
I finally got around to running some tests on our environment. Are you
seeing the same case where, when one host drops, the VM ends up in a
paused state and can't be migrated?
Yes, in the hosts panel, I have to manually confirm it was
Do you happen to notice what is consuming CPU?
When I check with top, glusterfs and glusterfsd are the only processes
using a significant amount of CPU. Load average is between 5 and 6, and
I don't have any running VMs.
Since the same cluster
does both virtualization and storage, a GigE
Hello,
I have a Gluster volume in distributed/replicated mode. I have 2 hosts.
When I try to create a VM with a preallocated disk, it uses 100% of the
available CPU and bandwidth (I have 1 Gigabit network card).
The result is I can't even create a preallocated disk because the engine
detects a
Hi,
For the 2-host scenario, disabling quorum will allow you to do this.
I just disabled quorum and disabled auto migration for my cluster.
Here is what I get:
As a reminder, the path of my storage is localhost:/path and I selected
HOSTA as the host.
Volume options are:
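For reference, disabling quorum on a two-node setup is done through volume options. A minimal sketch, assuming a volume named "vol" (the actual volume name is not given in the thread):

```shell
# Sketch for a 2-node cluster, assuming a volume named "vol": relax both
# client-side and server-side quorum so the volume stays writable when
# one host is down.
gluster volume set vol cluster.quorum-type none
gluster volume set vol cluster.server-quorum-type none

# Check the resulting options:
gluster volume info vol
```

Note that with quorum disabled, a network partition between the two hosts can lead to split-brain, which is why gluster enables quorum by default on replicated volumes.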
Hi,
There are some things I don't understand. First of all, why do we need
keepalived? I thought that it would be transparent at this layer and
that glusterfs would manage all the replication by itself. Is that
because I use POSIXFS instead of GlusterFS, or is it totally unrelated?
Hi,
keepalived is only there for grabbing the gluster volume info (e.g. the
servers that host the bricks); from what I've noticed, your clients
will then connect to the gluster servers directly
(no longer using keepalived).
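The behavior described above can be reproduced without keepalived at all: a native glusterfs mount only needs one reachable server to fetch the volume info, and a fallback server can be listed in the mount options. A sketch, assuming a volume "vol" on hosts host1 and host2 (hypothetical names):

```shell
# Sketch, assuming volume "vol" on hosts host1/host2: the client fetches
# the volfile from host1 (or host2 if host1 is down), then talks to all
# brick servers directly for I/O.
mount -t glusterfs host1:/vol /mnt/vol \
  -o backup-volfile-servers=host2
```

With this, the mount survives the volfile server being down, since the server name is only used at mount time.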
Can't keepalived be replaced by naming the hostname by
Hello,
As I said in a previous email, I have this configuration with Ovirt 3.3:
1 Ovirt Engine
2 Hosts Centos 6.5
I successfully set up GlusterFS. I created a distributed replicated
volume with 2 bricks: host1:/gluster and host2:/gluster.
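A sketch of how the volume described above would be created, assuming a volume name of "vol" (the thread does not give the actual name); with only two bricks, a "distributed replicate" layout reduces to a plain replica-2 volume:

```shell
# Two-brick replica-2 volume across the two hosts; with replica count
# equal to the brick count there is no distribution, only replication.
gluster volume create vol replica 2 host1:/gluster host2:/gluster
gluster volume start vol
```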
Then, I created a storage storage_gluster POSIXFS
Hello,
I am trying to set up Ovirt with this configuration:
1 Engine Ovirt 3.3 (upgraded from 3.2)
2 Nodes Ovirt 3.3
Centos 6.5
GlusterFS
I tried to follow the gluster documentation:
http://www.gluster.org/2013/09/ovirt-3-3-glusterized/, using POSIXFS
since I use CentOS and not Fedora.
The problem
Hello,
I have an ovirt cluster which runs CentOS VMs. I have a template to
create VMs. On this template, there are two interfaces, eth0 and eth1.
When I create a VM using this template, the new interfaces are named
eth2 and eth3. It can be pretty annoying, and I would like to know if it
would be
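On CentOS 6 this renaming usually comes from udev caching the template's MAC-to-name mapping; a clone gets new MACs, so its NICs are assigned the next free names (eth2, eth3). A sketch of the usual fix, run inside the template before sealing it (interface file names are assumptions based on the eth0/eth1 setup described):

```shell
# Remove the cached MAC-to-name mapping so clones start naming NICs
# from eth0 again; udev regenerates this file on first boot.
rm -f /etc/udev/rules.d/70-persistent-net.rules

# Drop the hard-coded MACs from the interface configs so they match
# whatever MACs the clone receives.
sed -i '/^HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth0
sed -i '/^HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth1
```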