Hi,
1. If I create a volume with redundancy 1 in the beginning, can I increase
the redundancy to 2 or 3 after I add more bricks?
No, you cannot change the redundancy level of an existing volume later. So,
if you have created a 2+1 volume (2 data bricks and 1 redundancy brick), you
will have to stick with it.
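For example (the hostnames and brick paths below are placeholders, not from
this thread), the redundancy count is fixed as part of the disperse geometry
at creation time, and later expansion can only add whole sets of the same
geometry:

    # Create a 2+1 disperse volume: 3 bricks, redundancy fixed at 1.
    gluster volume create dispvol disperse 3 redundancy 1 \
        server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1

    # Growing the volume later only adds another whole 2+1 set; there
    # is no command to raise the redundancy of an existing volume.
    gluster volume add-brick dispvol \
        server4:/bricks/b1 server5:/bricks/b1 server6:/bricks/b1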
I'm working to convert my 3x3 arbiter replicated volume into a disperse
volume; however, I have to work with the existing disks, maybe adding
another 1 or 2 new disks if necessary. I'm hoping to destroy the bricks on
one of the replicated nodes and build it into a
I'm opting to host this volume on
I have a pair of nodes configured with GlusterFS. I am trying to optimize
the read performance of the setup. With the smallfile_cli.py test I'm getting
4 MiB/s on CREATE and 14 MiB/s on READ.
I'm using a FUSE mount, mounting to the local GlusterFS server:
localhost:/gv_wordpress_std_var
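For reference, a minimal sketch of that setup (the mount point and the
tuning values below are illustrative assumptions, not the poster's actual
configuration):

    # FUSE-mount the volume from the local server.
    mount -t glusterfs localhost:/gv_wordpress_std_var /mnt/wordpress

    # Stock small-file read tunables that are often tried first;
    # the values here are examples only.
    gluster volume set gv_wordpress_std_var performance.cache-size 1GB
    gluster volume set gv_wordpress_std_var performance.io-thread-count 32
    gluster volume set gv_wordpress_std_var performance.quick-read on
    gluster volume set gv_wordpress_std_var performance.stat-prefetch on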
The Gluster community is pleased to announce the release of Gluster
4.0.1 (packages available at [1]).
Release notes for the release can be found at [2].
Major changes, features and limitations addressed in this release:
- Release 4.1.0 notes incorrectly reported that all python code in Gluster
I have not documented this yet - I will send you the steps tomorrow.
Regards,
Nithya
On 30 July 2018 at 20:23, Rusty Bower wrote:
> That would be awesome. Where can I find these?
>
> Rusty
>
> Sent from my iPhone
>
> On Jul 30, 2018, at 03:40, Nithya Balachandran wrote:
>
> Hi Rusty,
>
>
Hi,
On 30 July 2018 at 00:45, Pat Haley wrote:
>
> Hi All,
>
> We are adding a new brick (91 TB) to our existing gluster volume (328 TB).
> The new brick is on a new physical server and we want to make sure that we
> are doing this correctly (the existing volume had 2 bricks on a single
>
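(For context, the usual shape of that operation is an add-brick followed by
a rebalance; the volume and brick names below are hypothetical:)

    # Attach the new brick on the new server.
    gluster volume add-brick myvol newserver:/data/brick1

    # Spread existing data onto the new brick and watch progress.
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status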
That would be awesome. Where can I find these?
Rusty
Sent from my iPhone
> On Jul 30, 2018, at 03:40, Nithya Balachandran wrote:
>
> Hi Rusty,
>
> Sorry for the delay getting back to you. I had a quick look at the rebalance
> logs - it looks like the estimates are based on the time taken to
The brick contains about 8T of live data using up about 61M inodes, so
we have LOTS of directories with small files.
It took something like 45 days for the systems to sync the first time,
and I believe it completed, but some time over the last month or so it
has failed.
both sides are
Hi Aravinda,
Thanks for the info; somehow I wasn't aware of this new service. Now it's
clear, and I have updated my documentation.
Best regards,
M.
‐‐‐ Original Message ‐‐‐
On July 30, 2018 5:59 AM, Aravinda Vishwanathapura Krishna Murthy
wrote:
> On Mon, Jul 30, 2018 at 1:03 AM mabi
Hi Rusty,
Sorry for the delay getting back to you. I had a quick look at the
rebalance logs - it looks like the estimates are based on the time taken to
rebalance the smaller files.
We do have a scripting option where we can use virtual xattrs to trigger
file migration from a mount point. That
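As a rough sketch of what such a virtual-xattr trigger looks like (the exact
xattr key below is an assumption, not the documented steps promised above):

    # Assumed sketch only: setting a DHT virtual xattr on a file through
    # the FUSE mount asks the distribute translator to migrate that file.
    setfattr -n trusted.distribute.migrate-data -v force \
        /mnt/gluster/path/to/file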