--- Original Message ---
On Tuesday, March 3, 2020 6:11 AM, Hari Gowtham wrote:
> I checked on the backport and found that this patch hasn't yet been
> backported to any of the release branches.
> If this is the fix, it would be great to have it backported for the next
> release.
Hi Strahil,

> can you test /on non-prod system/ the latest minor version of gluster v6 ?

On the client side I can update the version to the latest minor version,
but the server still remains on v6.0. Actually, we do not have a non-prod
gluster system, so it will take some time to do this.
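In the meantime, a quick way to confirm what each side is actually
running (a minimal sketch; run the first two commands on a server node
and the last one on the client):

    # On a server node: CLI version and the cluster-wide op-version
    gluster --version
    gluster volume get all cluster.op-version

    # On the client: FUSE client version
    glusterfs --version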
On March 3, 2020 4:13:38 AM GMT+02:00, David Cunningham
wrote:
>Hello,
>
>Thanks for that. When we re-tried with push-pem from cafs10 (on the
>A/master cluster) it failed with "Unable to mount and fetch slave
>volume
>details." and in the logs we see:
>
>[2020-03-03 02:07:42.614911] E
Hi Amar,
I checked on the backport and found that this patch hasn't yet been
backported to any of the release branches.
If this is the fix, it would be great to have it backported for the next
release.
On Tue, Mar 3, 2020 at 7:22 AM Amar Tumballi wrote:
> This is not normal at all.
>
> I
Hello,
Thanks for that. When we re-tried with push-pem from cafs10 (on the
A/master cluster) it failed with "Unable to mount and fetch slave volume
details." and in the logs we see:
[2020-03-03 02:07:42.614911] E
[name.c:258:af_inet_client_get_remote_sockaddr] 0-gvol0-client-0: DNS
resolution
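Since the log points at DNS resolution for the slave volume, one hedged
sanity check is to confirm, from every master node, that the slave host
resolves and its volume is reachable before retrying push-pem. A sketch,
with slavehost and gvol0 standing in for the real slave host and volume:

    # Run from each node of the master (A) cluster:
    getent hosts slavehost                             # does the name resolve?
    gluster --remote-host=slavehost volume info gvol0  # is the slave volume visible?

    # If both succeed, retry the geo-replication setup:
    gluster volume geo-replication gvol0 slavehost::gvol0 create push-pem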
This is not normal at all.
I guess the fix was
https://github.com/gluster/glusterfs/commit/1166df1920dd9b2bd5fce53ab49d27117db40238
I didn't check if it's backported to other release branches.
Csaba, Rinku, Hari, can you please confirm this?
Regards,
Amar
On Tue, Mar 3, 2020 at 4:25 AM
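For anyone wanting to check the backport status themselves, a quick way,
assuming a local clone of the glusterfs repository, is to ask git which
branches already contain that commit:

    git fetch origin
    # Any release-* branch listed here already carries the fix:
    git branch -r --contains 1166df1920dd9b2bd5fce53ab49d27117db40238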
This was happening for us on our 3-node replicated server. In one day
the log grew to 3 GB; over a week it exceeded 15 GB. A stopgap logrotate
sketch follows below.
Our gluster version is 6.5.
On Mon, Mar 2, 2020, 5:26 PM Strahil Nikolov wrote:
> Hi Felix,
>
> can you test /on non-prod system/ the latest minor version of
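Regarding the log growth above: until a fixed build is running, one
stopgap (a sketch only, not a fix; the drop-in name and log path are
assumptions) is to rotate the client logs aggressively so they cannot
fill the disk:

    # Hypothetical logrotate drop-in for the gluster client logs:
    cat > /etc/logrotate.d/glusterfs-client <<'EOF'
    /var/log/glusterfs/*.log {
        daily
        rotate 7
        compress
        # copytruncate truncates in place, so the FUSE client
        # keeps writing to its already-open file descriptor:
        copytruncate
        missingok
    }
    EOF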
Hi Felix,
can you test /on non-prod system/ the latest minor version of gluster v6 ?
Best Regards,
Strahil Nikolov
On Monday, 2 March 2020, 21:43:48 GMT+2, Felix Kölzow
wrote:
Dear Community,
this message appears for me too, on GlusterFS 6.0.
Before that, we had GlusterFS
Dear Community,
this message appears for me too, on GlusterFS 6.0.
Before that, we had GlusterFS 3.12 and the client log-file was almost
empty. After upgrading to 6.0, we are seeing these log entries.
Regards,
Felix
On 02/03/2020 15:17, mabi wrote:
Hello,
On the FUSE clients of my GlusterFS
All,
A quick question:
how can I get the "Gluster process" field to be wider when running a
"gluster volume status" command?
It word-wraps that field, so I end up with two lines for some bricks and
one for others, depending on the length of the brick path or hostname...
Brian Andrus
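As far as I know there is no width option for that table, but the CLI
can emit XML, which sidesteps the word-wrapping entirely. A sketch
('myvol' is a placeholder, and xmllint is only optional pretty-printing):

    gluster volume status myvol --xml | xmllint --format -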
Hello,
On the FUSE clients of my GlusterFS 5.11 two-node replica + arbiter setup
I repeatedly see the following error message:
[2020-03-02 14:12:40.297690] E [fuse-bridge.c:219:check_and_dump_fuse_W] (-->
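To gauge how noisy a given client is, counting the occurrences in its
mount log works; the path below is a placeholder for your mount's log
file:

    grep -c 'check_and_dump_fuse_W' /var/log/glusterfs/mnt-mydata.log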
Ronny Adsetts wrote on 01/03/2020 00:02:
[...]
>
> When I look at the FUSE-mounted volume, the file is there and correct
> but the file permissions of this and lots of others are screwed. Lots
> of dirs with d- permissions, lots of root:root owned files.
Replying to myself here...
I
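In case it helps with triage, a quick sweep for the broken entries as
seen through the FUSE mount (the mount point is a placeholder) could
look like:

    # Directories that lost all permission bits:
    find /mnt/gluster -type d -perm 0000
    # Files that unexpectedly ended up owned by root:root:
    find /mnt/gluster -type f -user root -group root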
Hi,
The Gluster community is pleased to announce the release of Gluster
5.12 (packages available at [1]).
Release notes for the release can be found at [2].
Major changes, features and limitations addressed in this release:
None
Thanks,
Gluster community
[1] Packages for 5.12: