Hi Vijay,
I deleted the Cluster/Datacenter and set it up with two new (physical)
hosts, and now the performance looks great.
I don't know what I did wrong. Thanks a lot!
On Mon, Feb 10, 2014 at 6:10 PM, Vijay Bellur wrote:
On 02/09/2014 11:08 PM, ml ml wrote:
Yes, the only thing that brings the write I/O almost up to my host level
is enabling viodiskcache = writeback.
As far as I can tell, this enables caching on both the guest and the host,
which is critical if a sudden power loss happens.
Can I turn this on if I have a BBU in my host system?
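For reference, oVirt exposes the guest disk cache mode through a VM custom property; a rough sketch of how that property is usually defined on the engine host (syntax as seen for oVirt 3.x; verify it against your engine version before running):

```shell
# Define a 'viodiskcache' custom VM property on the oVirt engine so that
# writeback/writethrough can be selected per VM in the web UI.
# (Hedged: property name and regex as commonly documented for oVirt 3.x.)
engine-config -s "UserDefinedVMProperties=viodiskcache=^(none|writeback|writethrough)$"

# Restart the engine so the new property takes effect:
service ovirt-engine restart
```

With writeback, dirty data can sit in the host page cache, so a battery-backed RAID controller (BBU) only protects writes that have already reached the controller; it does not make writeback fully safe against a host crash or power loss by itself.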
On 02/09/2014 09:11 PM, ml ml wrote:
I am on CentOS 6.5 and I am using:
[root@node1 ~]# rpm -qa | grep gluster
glusterfs-rdma-3.4.2-1.el6.x86_64
glusterfs-server-3.4.2-1.el6.x86_64
glusterfs-fuse-3.4.2-1.el6.x86_64
glusterfs-libs-3.4.2-1.el6.x86_64
glusterfs-3.4.2-1.el6.x86_64
glusterfs-api-3.4.2-1.el6.x86_64
glusterfs-cli-3.4.2-1.e
Hello,
What version of GlusterFS are you using?
ml ml wrote on 8.2.2014 at 21.24:
anyone?
On Friday, February 7, 2014, ml ml wrote:
Hello List,
I set up a cluster with 2 nodes and GlusterFS.
gluster> volume info all
Volume Name: Repl2
Type: Replicate
Volume ID: 8af9b282-8b60-4d71-a0fd-9116b8fdcca7
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node1.local:/data
Brick2: node2.local:/data
Opti
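When comparing write speed on a replica 2 volume against the bare host, a quick probe like the one below helps keep the numbers honest. BENCH_DIR is a placeholder: point it at the FUSE mount of the Repl2 volume (e.g. a hypothetical /mnt/repl2) for the replicated figure, and at a local disk path for the host baseline.

```shell
# Minimal write-throughput probe. BENCH_DIR is a placeholder path;
# defaults to /tmp so the snippet runs anywhere.
BENCH_DIR="${BENCH_DIR:-/tmp}"

# conv=fdatasync makes dd flush the data to stable storage before it
# reports a rate, so page-cache writeback does not inflate the number.
dd if=/dev/zero of="$BENCH_DIR/repl2_bench.bin" bs=1M count=16 conv=fdatasync
```

On a replicate volume, each write goes to both bricks synchronously over the network, so a result well below local-disk speed (roughly bounded by network bandwidth and latency) is expected rather than a sign of misconfiguration.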