Hi
glusterfs-3.6.3beta1 has been released and can be found here.
http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.6.3beta1/
This beta release is intended to fix the bugs listed below, reported since 3.6.2 was
made available. Thanks to all who submitted patches and reviewed the
changes.
Dear All,
I am currently using the following software stack:
debian wheezy with kernel 3.2.0-4-amd64, glusterfs 3.6.2, openstack Juno,
libvirt 1.2.9.
If I try to attach a block storage to a running vm, Openstack shows the
following error: DeviceIsBusy: The supplied device (vdc) is busy.
If I
268435456 bytes (268 MB) copied, 57.3145 s, 4.7 MB/s
Hi, I did the same test on various replicated volumes we have
on 3 KVM virtual machines.
* Gluster 3.6.0, bricks on the same disk as the system (ext4+LVM): 1.0 MB/s (1.6
MB/s for the system disk)
* Gluster 3.6.2, bricks on the same disk as the
Hi all,
If we set the read-only feature on a volume in service, using the following CLI
command, it does not take effect until the volume is restarted.
gluster volume set vol-name features.read-only on
This means the volume must be stopped temporarily. Does that make sense?
An alternative
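As a sketch of the behaviour described above (the volume name vol-name is hypothetical), the option only takes effect after the volume is bounced:

```shell
# Hypothetical volume name; requires a running glusterd.
gluster volume set vol-name features.read-only on
# As reported above, the option does not apply to the live volume,
# so the volume has to be stopped and started for it to take effect:
gluster volume stop vol-name
gluster volume start vol-name
```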
glusterfs-3.6.3beta1 has been released and can be found here.
http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.6.3beta1/
http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.6.2beta2/
The above URL is being redirected to the 3.6.2beta2 location. Use the below URL in that
Hi All,
Can anyone clarify the issue we are facing with the incorrect heal
report mentioned below? We are using gluster 3.3.2.
*Issue:*
*Bug 1039544* https://bugzilla.redhat.com/show_bug.cgi?id=1039544
-[FEAT] gluster volume heal info should list the entries that actually
required to be
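For reference, the report in question comes from the heal-info command (volume name hypothetical); per bug 1039544, on 3.3.x its output may include entries that no longer need healing:

```shell
# Hypothetical volume name; lists entries pending self-heal.
gluster volume heal vol-name info
```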
O_DIRECT support in fuse has been present for quite some time now, surely since well
before 3.4.
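As a quick local check (the scratch file name is hypothetical), dd can request O_DIRECT itself; run against a FUSE mount, this exercises exactly the flag being discussed:

```shell
# oflag=direct makes dd open the output file with O_DIRECT, bypassing
# the page cache. Some filesystems (e.g. tmpfs) reject the flag, hence
# the fallback message instead of a hard failure.
dd if=/dev/zero of=./direct-probe bs=4k count=4 oflag=direct 2>/dev/null \
  || echo "O_DIRECT not supported on this filesystem"
rm -f ./direct-probe
```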
On Fri, Feb 13, 2015, 02:37 Pedro Serotto pedro.sero...@yahoo.es wrote:
Dear All,
I am currently using the following software stack:
debian wheezy with kernel 3.2.0-4-amd64, glusterfs 3.6.2, openstack Juno,
For those interested here are the results of my tests using Gluster 3.5.2.
Nothing much better here either...
shell$ dd bs=64k count=4k if=/dev/zero of=test oflag=dsync
4096+0 records in
4096+0 records out
268435456 bytes (268 MB) copied, 51.9808 s, 5.2 MB/s
shell$ dd bs=64k count=4k
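The benchmark above can be reproduced at a smaller scale on any Linux machine (the scratch path is hypothetical); the point is that oflag=dsync commits each 64 KiB block to stable storage before the next write is issued, so the reported rate reflects per-write sync latency rather than raw bandwidth:

```shell
# Smaller-scale version of the benchmark above; /tmp/dd-dsync-test is a
# hypothetical scratch file. oflag=dsync syncs every 64 KiB block before
# dd issues the next one, which is why throughput drops so sharply
# compared with a cached write.
dd bs=64k count=16 if=/dev/zero of=/tmp/dd-dsync-test oflag=dsync
rm -f /tmp/dd-dsync-test
```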