On Wednesday 06 December 2017 11:08 AM, Hetz Ben Hamo wrote:
> Thanks Jiffin,
> Btw, the nfs-ganesha part of the release notes has a wrong header, so
> it's not highlighted.
> One thing is still a mystery to me: gluster 3.8.x does everything the
> 3.9 release notes describe -
Thanks Jiffin,
Btw, the nfs-ganesha part of the release notes has a wrong header, so
it's not highlighted.
One thing is still a mystery to me: gluster 3.8.x does everything the
3.9 release notes describe - automatically. Any chance that someone
could port it to 3.9?
Thanks for the
Hi,
On Monday 04 December 2017 07:43 PM, Hetz Ben Hamo wrote:
> Hi Jiffin,
> I looked at the document, and there are 2 things:
> 1. In Gluster 3.8 it seems you don't need to do that at all; it
> creates this automatically, so why not in 3.10?
Kindly refer to the mail [1] and the release notes [2].
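If I remember right, on 3.7/3.8 the whole HA setup could be driven from
the gluster CLI once /etc/ganesha/ganesha-ha.conf was filled in. A rough
sketch of those commands from memory, not tested against 3.10 ("myvol"
is a placeholder volume name):

    # shared storage volume used by the ganesha HA scripts
    gluster volume set all cluster.enable-shared-storage enable
    # bring up the NFS-Ganesha HA cluster
    gluster nfs-ganesha enable
    # export one volume over NFS-Ganesha
    gluster volume set myvol ganesha.enable on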
The Gluster community is pleased to announce the release of Gluster
3.10.8 (packages available at [1]).
Release notes for this release can be found at [2].
We are still working on a further fix for the corruption issue when
sharded volumes are rebalanced, details as below.
* Expanding a
Hi all,
I have a distributed/replicated pool consisting of 2 boxes, with 3 bricks
apiece. Each brick sits on a RAID 6 array of eleven 6 TB disks. I'm
running CentOS 7 with XFS and LVM. The 150 TB pool is loaded with about
15 TB of data. Clients are connected via FUSE. I'm using
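For reference, the clients mount it roughly like this (sketch; the host
and volume names are placeholders, and backup-volfile-servers is the
mount option as I recall it):

    # FUSE mount, with a second server to fall back to for the volfile
    mount -t glusterfs box1:/pool /mnt/pool \
          -o backup-volfile-servers=box2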
Hi,
just wanted to write the same thing.
There was once a post that suggested killing the gluster processes
manually, but I guess rebooting the machine will do the same. The
clients will stall for a while and then continue to access the volume
from the remaining node.
It is very important that you
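If you prefer to stop everything cleanly rather than just reboot,
something like this should do it on each node (sketch; the helper
script ships with the glusterfs-server packages on most distros, so the
exact path may vary):

    # stop the management daemon first
    systemctl stop glusterd
    # then stop any remaining brick and self-heal daemons
    /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh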
On my setup at least, just issuing the reboot command works without any
issue. I've done a number of rolling reboots for software/kernel
upgrades this way.
The one gotcha I've found is when the node comes back online. I
manually check healing to ensure that
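Concretely, what I run before moving on to the next node is something
like this (volume name is a placeholder):

    # files still pending heal; wait until this reports zero entries
    gluster volume heal myvol info
    # brick and peer state
    gluster volume status myvol
    gluster peer status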
Hi John,
thanks for your remark. However:
2017-12-05 16:47 GMT+01:00 Jim Kinney:
> Keep in mind a local disk is 3, 6, or 12 Gbps but a network connection is
> typically 1 Gbps. A local quad of disks in RAID 10 will outperform 10G
> Ethernet (especially using SAS drives).
Well, in
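As a rough back-of-envelope on the raw link rates (ignoring protocol
overhead, and keeping in mind that a single spinning disk only sustains
about 150-250 MB/s at the media):

    1 Gbps Ethernet         ~  125 MB/s
    10 Gbps Ethernet        ~ 1250 MB/s
    12 Gbps SAS (per lane)  ~ 1500 MB/s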
I am running gluster ver 3.8 in a distributed replica 2 config. I need to
reboot all my 8 cluster nodes to update my BIOS firmware. I would like to
do a rolling update to my BIOS and keep my cluster up so my clients don't
take an outage. Do I need to shut down all gluster services on each node
Keep in mind a local disk is 3, 6, or 12 Gbps but a network connection is
typically 1 Gbps. A local quad of disks in RAID 10 will outperform 10G
Ethernet (especially using SAS drives).
On December 5, 2017 6:11:38 AM EST, Riccardo Murri wrote:
>Hello,
>
>I'm trying to set up a
Hello,
I'm trying to set up a SAMBA server serving a GlusterFS volume.
Everything works fine if I locally mount the GlusterFS volume (`mount
-t glusterfs ...`) and then serve the mounted FS through SAMBA, but
performance is 2x-3x slower compared to a SAMBA server with a
local ext4
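One thing worth trying is Samba's GlusterFS VFS module, which talks to
the volume through libgfapi and bypasses the FUSE mount entirely. A
minimal share sketch (assuming the samba vfs_glusterfs module is
installed; share, volume, and log names here are placeholders):

    [gvol]
        path = /
        vfs objects = glusterfs
        glusterfs:volume = myvol
        glusterfs:logfile = /var/log/samba/glusterfs-myvol.log
        kernel share modes = no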