I'm not sure why CentOS and Scientific Linux don't include it, but I suspect it's just an oversight, since Gluster 3.4 is a new set of packages that was added in the latest release of RHEL 6.
In truth you can simply unpack the source RPM and rebuild it with Gluster support; it's not difficult, but I don't remember the flags you need to pass to the rpmbuild command off the top of my head. That said, I'd probably use the ones off the Gluster site anyway.
If you are using Gluster 3.5, the ones included in RHEL are incompatible, because Gluster still has large API changes between minor releases and they were compiled against 3.4.
Also, Gluster 3.5 is a brand new release, so I wouldn't rule out the possibility of a bug. If the Gluster-enabled QEMU RPMs don't help, you may need to inquire on the Gluster mailing list.
-- Sent from my HP Pre3
On May 12, 2014 8:58, Tobias Honacker <[email protected]> wrote:
I've got the same issue, using the same versions as Vadims Korsaks, on CentOS 6.5 too.
I could launch VMs with the gluster block backend driver using the standard libvirt package from the CentOS repo, but oVirt does not run the VM with the QEMU GFAPI integration; the disk path uses the FUSE mount:
<disk type='file' device='disk' snapshot='no'>
<driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads'/>
<source file='/rhev/data-center/mnt/glusterSD/localhost:VMDATA/0add493f-0a7f-4b32-bcd5-ff25ca504b8b/images/68dbbc67-ea24-45a9-8727-4f85d100d1bb/8fe386f7-2aeb-43c2-bcb0-f76829c876b4'>
<seclabel model='selinux' relabel='no'/>
</source>
<target dev='vda' bus='virtio'/>
<serial></serial>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
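For comparison, a disk wired through the QEMU gfapi integration would be a network disk rather than a file disk. A rough sketch of what that element would look like for the same image, following libvirt's gluster network-disk syntax (the exact attributes oVirt would generate may differ; host and port are illustrative, 24007 being gluster's default management port):

```xml
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads'/>
  <!-- name is volume/path-within-volume, no FUSE mount involved -->
  <source protocol='gluster' name='VMDATA/0add493f-0a7f-4b32-bcd5-ff25ca504b8b/images/68dbbc67-ea24-45a9-8727-4f85d100d1bb/8fe386f7-2aeb-43c2-bcb0-f76829c876b4'>
    <host name='localhost' port='24007' transport='tcp'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```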
Or am I wrong?
On Mon, May 12, 2014 at 7:41 AM, Vadims Korsaks <[email protected]> wrote:
Underlying FS is XFS
GlusterFS - glusterfs-3.5.0-2.el6
I'm using CentOS; if this is the problem, could RHEL
packages be used? Why are the CentOS packages compiled
without native glusterfs support?
Quoting Paul Robert Marino <[email protected]> :
> What's the underlying filesystem for gluster, is it XFS?
> What version of gluster are you using?
> What distro are you using? If it's not RHEL or Fedora, are you using a version of QEMU with gluster support compiled in? Keep in mind the versions shipped with CentOS and Scientific Linux do not include native Gluster support compiled in.
>
>
>
> -- Sent from my HP Pre3
>
> On May 11, 2014 5:40, Vadims Korsaks <[email protected]> wrote:
>
> Quoting Vijay Bellur <[email protected]> :
> > On 05/11/2014 02:04 AM, Vadims Korsaks wrote:
> > > HI!
> > >
> > > Created 2 node setup with oVirt 3.4 and
> CentOS 6.5, for storage created
> > > 2 node replicated gluster (3.5) fs on same
> hosts with oVirt.
> > > mount looks like this:
> > > 127.0.0.1:/gluster01 on /rhev/data-center/mnt/glusterSD/127.0.0.1:_gluster01 type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
> > >
> > > when I run a gluster test with dd, something like
> > > dd if=/dev/zero bs=1M count=20000 of=/rhev/data-center/mnt/glusterSD/127.0.0.1\:_gluster01/kaka
> > > I'm getting a speed of ~110 MB/s, so this is the 1Gbps speed of the ethernet adapter.
> > >
> > > but within a VM created in oVirt the speed is lower than 20 MB/s
> > >
> > > why is there such a huge difference?
> > > how can I improve the VMs' disk speed?
> > >
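One caveat on the dd comparison above: without a flush, dd from /dev/zero largely measures the host page cache, while the guest's writes (the disk uses cache='none') bypass that cache, so the two numbers aren't directly comparable. A sketch of a flush-inclusive run, with an illustrative local path standing in for the gluster mount:

```shell
# conv=fsync makes dd flush to storage before reporting, so the printed
# rate reflects actual write throughput rather than the page cache.
dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=256 conv=fsync

# clean up the test file
rm -f /tmp/ddtest.bin
```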
> >
> > What are your gluster volume settings? Have you applied the following
> > performance tunables in gluster's virt profile:
> >
> > eager-lock=enable
> > remote-dio=enable
> >
> > Regards,
> > Vijay
> >
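The two tunables Vijay mentions correspond to the volume options network.remote-dio and cluster.eager-lock, set per volume with the gluster CLI; if the virt group file is installed under /var/lib/glusterd/groups/, the whole virt profile can also be applied in one command (volume name taken from the output below):

```shell
# Set the two performance tunables individually on the volume
gluster volume set gluster01 cluster.eager-lock enable
gluster volume set gluster01 network.remote-dio enable

# Or apply gluster's whole virt profile at once
gluster volume set gluster01 group virt
```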
> settings were:
> [root@centos155 ~]# gluster volume info gluster01
>
> Volume Name: gluster01
> Type: Replicate
> Volume ID: 436edaa3-ac8b-421f-aa35-68b5bd7064b6
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: 10.2.75.152:/mnt/gluster01/brick
> Brick2: 10.2.75.155:/mnt/gluster01/brick
> Options Reconfigured:
> storage.owner-gid: 36
> storage.owner-uid: 36
>
>
> I added your settings; now it looks like this:
>
> [root@centos155 ~]# gluster volume info gluster01
>
> Volume Name: gluster01
> Type: Replicate
> Volume ID: 436edaa3-ac8b-421f-aa35-68b5bd7064b6
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: 10.2.75.152:/mnt/gluster01/brick
> Brick2: 10.2.75.155:/mnt/gluster01/brick
> Options Reconfigured:
> network.remote-dio: enable
> cluster.eager-lock: enable
> storage.owner-gid: 36
> storage.owner-uid: 36
>
>
> but this didn't affect performance in any big way.
> should the hosts be restarted?
>
> _______________________________________________
> Users mailing list
> [email protected]
> http://lists.ovirt.org/mailman/listinfo/users
_______________________________________________
Users mailing list
[email protected]
http://lists.ovirt.org/mailman/listinfo/users