Best regards,
Mabi
--- Original Message ---
On Wednesday, June 7th, 2023 at 6:56 AM, Strahil Nikolov
wrote:
> Have you checked this page:
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/brick_configuration
> ?
>
>
there? I had the following question in my previous mail in this regard:
"And once the primary site is back online how do you copy back or sync all data
changes done on the secondary volume on the secondary site back to the primary
volume on the primary site?"
Best regards,
Mabi
--
value I
need to use for this specific disk?
Best regards,
Mabi
Best regards,
Mabi
Thank you in advance for your help.
Regards,
Mabi
‐‐‐ Original Message ‐‐‐
On Thursday, November 5, 2020 3:28 PM, Yaniv Kaul wrote:
> Waiting for IO, just like the rest of those in D state.
> You may have a slow storage subsystem. How many cores do you have, btw?
> Y.
Strange, because "iostat -xtcm 5" does not show that the disks are busy.
Any clues anyone?
The load is really high around 20 now on the two nodes...
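For reference, this is the check I am running (plain sysstat iostat; the reading of the columns is my own):

    iostat -xtcm 5
    # %util near 100 with large await values would point at saturated disks;
    # low %util despite the high load suggests the D-state processes are
    # waiting on something other than the local disks.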
‐‐‐ Original Message ‐‐‐
On Thursday, November 5, 2020 11:50 AM, mabi wrote:
> Hello,
>
> I have a 3 node replica including arbiter GlusterFS 7.8 server with 3 volumes
18 0 0 22215740 32056 26067200 308 2524 9892 334537 2 83 14 1 0
18 0 0 22179348 32084 26082800 169 2038 8703 250351 1 88 10 0 0
I already tried rebooting but that did not help and there is nothing special in
the log files either.
Best regards,
- can you provide the details
> ?
>
> Best Regards,
> Strahil Nikolov
>
> On Monday, 26 October 2020 at 20:38:38 GMT+2, mabi
> m...@protonmail.ch wrote:
>
> Ok I see I won't go down that path of disabling quota.
>
> I could now remove the arbiter brick of my
/blob/devel/extras/quota/quota_fsck.py
>
> You can take a look in the mailing list for usage and more details.
>
> Best Regards,
> Strahil Nikolov
>
> On Monday, 26 October 2020 at 16:40:06 GMT+2, Diego Zuccato
> diego.zucc...@unibo.it wrote:
>
> On 26
‐‐‐ Original Message ‐‐‐
On Monday, October 26, 2020 3:39 PM, Diego Zuccato
wrote:
> Memory does not serve me well (there are 28 disks, not 26!), but bash
> history does :)
Yes, I also rely on history too often ;)
> gluster volume remove-brick BigVol replica 2
>
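For the archives, a minimal sketch of the whole procedure in one place; the host name and brick path below are placeholders, since the original remove-brick arguments were cut off above:

    # degrade to plain replica 2 by dropping the arbiter brick
    gluster volume remove-brick BigVol replica 2 arbiter-host:/bricks/arbiter force
    # wipe the old arbiter brick contents (including the hidden .glusterfs tree)
    rm -rf /bricks/arbiter && mkdir -p /bricks/arbiter
    # re-add it as an arbiter, going back to replica 3 arbiter 1
    gluster volume add-brick BigVol replica 3 arbiter 1 arbiter-host:/bricks/arbiter
    # let self-heal repopulate the arbiter
    gluster volume heal BigVol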
‐‐‐ Original Message ‐‐‐
On Monday, October 26, 2020 2:56 PM, Diego Zuccato
wrote:
> The volume is built from 26 10TB disks w/ genetic data. I currently don't
> have exact numbers, but it's still at the beginning, so there are a bit
> less than 10TB actually used.
> But you're only
On Monday, October 26, 2020 11:34 AM, Diego Zuccato
wrote:
> IIRC it's the same issue I had some time ago.
> I solved it by "degrading" the volume to replica 2, then cleared the
> arbiter bricks and upgraded again to replica 3 arbiter 1.
Thanks Diego for pointing out this workaround. How much
:glusterd_xfer_friend_add_resp] 0-glusterd: Responded
to node1.domain (0), ret: 0, op_ret: -1
Can someone please advise what I need to do in order to have my arbiter node up
and running again as soon as possible?
Thank you very much in advance for your help.
Best regards,
Mabi
‐‐‐ Original
, mabi wrote:
> Hello,
>
> I have a GlusterFS 6.9 cluster with two nodes and one arbiter node with a
> replica volume and currently there are two files and two directories stuck to
> be self-healed.
>
> Node 1 and 3 (arbiter) have the files and directories on the brick but no
As the self-heal daemon does not seem to be able to heal the two files and directories itself, I
would like to know what I can do here to fix this?
Thank you in advance for your help.
Best regards,
Mabi
where more than one node will not be running the glusterfsd (brick) process, so
this means that quorum is lost and my FUSE clients will lose their
connection to the volume?
I just want to be sure that there will not be any downtime.
Best regards,
Mabi
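A quick way to see which quorum rules actually apply before starting (volume name hypothetical):

    gluster volume get myvol cluster.quorum-type          # client-side (AFR) quorum
    gluster volume get myvol cluster.server-quorum-type   # glusterd/brick quorum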
‐‐‐ Original Message ‐‐‐
On Monday
could fix this issue or provide me with a
workaround that works, because version 6 of GlusterFS is no longer supported,
so I would really like to move on to the stable version 7.
Thank you very much in advance.
Best regards,
Mabi
‐‐‐ Original Message ‐‐‐
On Saturday, August 22, 2020 7:53 PM
But I do not see any solution or workaround, so now I am stuck with a degraded
GlusterFS cluster.
Could someone please advise me as soon as possible on what I should do? Is
there maybe a workaround?
Thank you very much in advance for your response.
Best regards,
Mabi
Hi everyone,
So because upgrading introduces additional problems, does this mean I should
stick with 5.x even if it is EOL?
Or what is a "safe" version to upgrade to?
Regards,
Mabi
‐‐‐ Original Message ‐‐‐
On Wednesday, May 6, 2020 2:44 AM, Artem Russakovskii
wrote:
to use version 7 in production, or
should I rather use version 6?
And is it possible to upgrade from 5.11 directly to 7.5?
Regards,
Mabi
‐‐‐ Original Message ‐‐‐
On Tuesday, May 5, 2020 1:40 PM, Hari Gowtham wrote:
> Hi,
>
> I don't see the above mentioned fix to be backport
surprised.
Best regards,
Mabi
‐‐‐ Original Message ‐‐‐
On Monday, May 4, 2020 9:57 PM, Artem Russakovskii wrote:
> I'm on 5.13, and these are the only error messages I'm still seeing (after
> downgrading from the failed v7 update):
>
> [2020-05-04 19:56:29.391121] E [fuse-
Hello,
Now that GlusterFS 5.13 has been released, could someone let me know if this
issue (see mail below) has been fixed in 5.13?
Thanks and regards,
Mabi
‐‐‐ Original Message ‐‐‐
On Monday, March 2, 2020 3:17 PM, mabi wrote:
> Hello,
>
> On the FUSE clients of my GlusterFS
‐‐‐ Original Message ‐‐‐
On Tuesday, March 3, 2020 6:11 AM, Hari Gowtham wrote:
> I checked on the backport and found that this patch hasn't yet been
> backported to any of the release branches.
> If this is the fix, it would be great to have them backported for the next
> release.
x-gnu/libc.so.6(clone+0x3f)[0x7f93d46ead0f] )
0-glusterfs-fuse: writing to fuse device failed: No such file or directory
Both the server and clients are Debian 9.
What exactly does this error message mean? And is it normal? Or what should I
do to fix that?
Regards,
Mabi
Thank you very much for your fast response and for adding the missing Debian
packages.
‐‐‐ Original Message ‐‐‐
On Friday, December 27, 2019 10:36 AM, Shwetha Acharya
wrote:
> Hi Mabi,
>
> Glusterfs 5.11 Debian amd64 stretch packages are now available.
>
> Regards,
forgotten?
Thank you very much in advance.
Best regards,
Mabi
‐‐‐ Original Message ‐‐‐
On Wednesday, December 18, 2019 4:56 AM, Hari Gowtham
wrote:
> Hi,
>
> The Gluster community is pleased to announce the release of Gluster
> 5.11 (packages available at [1]).
>
Hello,
Is there a way to mount a GlusterFS volume using FUSE on a BSD machine such as
OpenBSD?
If not, what is the alternative? I guess NFS?
Regards,
M.
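NFS is indeed the usual fallback on systems without FUSE support: Gluster's built-in NFS server exports volumes over NFSv3/TCP. A sketch, assuming the volume's NFS export is enabled (hostnames hypothetical; mount_nfs flags may differ between BSDs):

    # on a gluster node: enable the built-in NFS server for the volume
    gluster volume set myvol nfs.disable off
    # on the OpenBSD client: mount over NFSv3/TCP
    mount_nfs -T server:/myvol /mnt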
Hello,
I would like to upgrade my GlusterFS 4.1.8 cluster to 4.1.9 on my Debian
stretch nodes. Unfortunately the packages are missing as you can see here:
https://download.gluster.org/pub/gluster/glusterfs/4.1/4.1.9/Debian/stretch/amd64/apt/
As far as I know GlusterFS 4.1 is not yet EOL so I
What can I do about it?
Best regards,
Mabi
ong)
and was actually wondering: on GlusterFS, what is the maximum length for a
filename?
I am using GlusterFS 4.1.6.
Regards,
Mabi
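As far as I know Gluster does not impose its own limit here; the usual 255-byte NAME_MAX of the brick filesystem (XFS/ext4) is what applies, and it can be checked from a FUSE mount (mount point hypothetical):

    # query the maximum filename length as seen by the client
    getconf NAME_MAX /mnt/glusterfs    # typically prints 255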
for
this case.
Best regards,
Mabi
they connect to my server and work correctly?
I am running 4.1.5 on my GlusterFS server and I am asking because I still have
a few clients on 3.12.14 which will need to stay longer on 3.12.14.
Regards,
Mabi
n and from that
> mount point, do a `find .|xargs stat >/dev/null`
>
>
> 3. Run`gluster volume heal $volname`
>
> HTH,
> Ravi
>
> On 11/16/2018 09:07 PM, mabi wrote:
>
> > And finally here is the output of a getfattr from both files from the 3
> >
dir4/dir5/dir6/dir7/dir8/dir9/dir10
>
> 2. Fuse mount the volume temporarily in some location and from that
> mount point, do a `find .|xargs stat >/dev/null`
>
>
> 3. Run`gluster volume heal $volname`
>
> HTH,
> Ravi
>
> On 11/16/2018 09:07 PM, mabi wrote:
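For anyone finding this later, a minimal sketch of steps 2 and 3 above, assuming a volume named myvol and a temporary mount point of my choosing:

    # 2. temporary fuse mount, then stat everything to trigger lookups
    mount -t glusterfs node1:/myvol /mnt/tmp-heal
    cd /mnt/tmp-heal
    find . | xargs stat > /dev/null
    # 3. kick off the heal
    gluster volume heal myvol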
‐‐‐ Original Message ‐‐‐
On Friday, November 16, 2018 5:14 AM, Ravishankar N
wrote:
> Okay, as asked in the previous mail, please share the getfattr output
> from all bricks for these 2 files. I think once we have this, we can try
> either 'adjusting' the gfid and symlinks on node 2
‐‐‐ Original Message ‐‐‐
On Thursday, November 15, 2018 1:41 PM, Ravishankar N
wrote:
> Thanks, noted. One more query. Are there files inside each of these
> directories? Or is it just empty directories?
You will find below the content of each of these 3 directories, taken from the brick
on
Then if I check the
".../brick/.glusterfs/aa/e4/aae4098a-1a71-4155-9cc9-e564b89957cf" on node 1 or
node 3, it does not have any symlink to a file. Or am I maybe looking in the wrong
place, or is there another trick to map a GFID to its filename?
Regards,
Mabi
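For anyone else hitting this: only directories get a symlink under .glusterfs; regular files get a hard link there, so a GFID can be mapped back to a path by matching inodes. A sketch, with a hypothetical brick path and the GFID from above:

    # the GFID entry is a hard link to the real file; find its other path
    find /data/brick -samefile \
        /data/brick/.glusterfs/aa/e4/aae4098a-1a71-4155-9cc9-e564b89957cf \
        -not -path "*/.glusterfs/*"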
‐‐‐ Original Message ‐‐‐
On Wednesday, November 14, 2018 5:34 AM, Ravishankar N
wrote:
> I thought it was missing which is why I asked you to create it. The
> trusted.gfid xattr for any given file or directory must be same in all 3
> bricks. But it looks like that isn't the case. Are
https://bugzilla.redhat.com/show_bug.cgi?id=1567100
Is it possible that this bug has not yet made it into a release? Or is it maybe
a regression?
Regards,
Mabi
‐‐‐ Original Message ‐‐‐
On Friday, November 9, 2018 2:11 AM, Ravishankar N
wrote:
> Please re-create the symlink on node 2 to match how it is in the other
> nodes and launch heal again. Check if this is the case for other entries
> too.
> -Ravi
Please ignore my previous mail, I was
‐‐‐ Original Message ‐‐‐
On Friday, November 9, 2018 2:11 AM, Ravishankar N
wrote:
> Please re-create the symlink on node 2 to match how it is in the other
> nodes and launch heal again. Check if this is the case for other entries
> too.
> -Ravi
I can't create the missing symlink on
‐‐‐ Original Message ‐‐‐
On Thursday, November 8, 2018 11:05 AM, Ravishankar N
wrote:
> It is not a split-brain. Nodes 1 and 3 have xattrs indicating a pending
> entry heal on node2 , so heal must have happened ideally. Can you check
> a few things?
> - Are there any disconnects
> if anything is
> logged for these entries when you run 'gluster volume heal $volname'?
>
> Regards,
>
> Ravi
>
> On 11/07/2018 01:22 PM, mabi wrote:
>
> > To my eyes this specific case looks like a split-brain scenario but the
> > output of "volume info split-brain"
‐‐‐ Original Message ‐‐‐
On Monday, November 5, 2018 4:36 PM, mabi wrote:
> Ravi, I did not yet modify the cluster.data-self-heal parameter to off
> because in the mean time node2 of my cluster had a memory shortage (this node
> has 32 GB of RAM) and as such I had to reboot it. After that rebo
has a directory from the time 14:12.
Again here the self-heal daemon doesn't seem to be doing anything... What do
you recommend I do in order to heal these unsynced files?
‐‐‐ Original Message ‐‐‐
On Monday, November 5, 2018 2:42 AM, Ravishankar N
wrote:
>
>
> On 11
bug?
And by doing that, does it mean that my files pending heal are in danger of
being lost?
Also is it dangerous to leave "cluster.data-self-heal" to off?
‐‐‐ Original Message ‐‐‐
On Saturday, November 3, 2018 1:31 AM, Ravishankar N
wrote:
> Mabi,
>
> If bug 16
host:(null), port:0
That does not seem normal... what do you think?
‐‐‐ Original Message ‐‐‐
On Saturday, November 3, 2018 1:31 AM, Ravishankar N
wrote:
> Mabi,
>
> If bug 1637953 is what you are experiencing, then you need to follow the
> workarounds mentioned in
>
Thank you in advance for any feedback.
‐‐‐ Original Message ‐‐‐
On Wednesday, October 31, 2018 11:13 AM, mabi wrote:
> Hello,
>
> I have a GlusterFS 4.1.5 cluster with 3 nodes (including 1 arbiter) and
> currently have a volume with around 27174 files which are not being healed.
> The
-utils.c:12556:glusterd_remove_auxiliary_mount] 0-management: umount
on /var/run/gluster/myvol-private_quota_limit/ failed, reason : Success
Something must be wrong with the quotas?
‐‐‐ Original Message ‐‐‐
On Tuesday, October 30, 2018 6:24 PM, mabi wrote:
> Hello,
>
> Since I up
, October 31, 2018 11:13 AM, mabi wrote:
> Hello,
>
> I have a GlusterFS 4.1.5 cluster with 3 nodes (including 1 arbiter) and
> currently have a volume with around 27174 files which are not being healed.
> The "volume heal info" command shows the same 27k files under the fir
cbk] 0-myvol-private-client-0:
remote operation failed [Transport endpoint is not connected]
Any idea what could be wrong here?
Regards,
Mabi
‐‐‐ Original Message ‐‐‐
On Tuesday, October 30, 2018 6:24 PM, mabi wrote:
> Hello,
>
> Since I upgraded my 3-node (with arbiter) GlusterFS from 3.12.14 to 4.1.5 I
> see quite a lot of the following error message in the brick log file for one
> of my volumes where I have quota enabl
: error returned while attempting to connect to
host:(null), port:0
Is this a bug? Should I file a bug report? Or does anyone know what is wrong
here maybe with my system?
Best regards,
Mabi
If this is the case, would it be possible to have just the glusterfs 4.1 client
package available for Debian 8?
Best regards,
M.
‐‐‐ Original Message ‐‐‐
On Monday, October 29, 2018 1:44 PM, Kaleb S. KEITHLEY
wrote:
> On 10/29/18 6:31 AM, mabi wrote:
>
> > Hello,
> > I
Hello,
I would like to know how I can contact the package maintainer for the GlusterFS
4.1.x packages?
I have noticed that Debian 8 (jessie) is missing here:
https://download.gluster.org/pub/gluster/glusterfs/4.1/4.1.5/Debian/
Thank you very much in advance.
Best regards,
Mabi
, 2018 10:58 PM, mabi wrote:
> Hello,
>
> I just upgraded all my Debian 9 (stretch) GlusterFS servers from 3.12.14 to
> 4.1.5 but unfortunately my GlusterFS clients are all Debian 8 (jessie)
> machines and there are no single GlusterFS 4.1.x package available for Debian
> 8 a
or is it unsafe? I did not
upgrade the op-version on the server yet.
Thank you very much in advance.
Best regards,
Mabi
weeks ago?
I never saw this type of problem in the past and it started to appear since I
upgraded to GlusterFS 3.12.12.
Best regards,
Mabi
‐‐‐ Original Message ‐‐‐
On August 15, 2018 9:21 AM, mabi wrote:
> Great, you will then find attached here the statedump of the client using the
4.1.2, and the glustereventsd
> service was restarted. We use Debian stretch; maybe it depends on the
> operating system?
>
> 2018-08-21 16:17 GMT+02:00 mabi m...@protonmail.ch:
>
> > Oops missed that part at the bottom, thanks Hu Bert!
> > Now the only thing missing from the
ions that access the volumes via gfapi (qemu, etc.)
> Install Gluster 4.1
>
> > > > Mount all gluster shares
>
> Start any applications that were stopped previously in step (2)
>
> 2018-08-21 15:33 GMT+02:00 mabi m...@protonmail.ch:
>
> > Hello,
> > I just
Hello,
I just upgraded from 4.0.2 to 4.1.2 using the official documentation:
https://docs.gluster.org/en/latest/Upgrade-Guide/upgrade_to_4.1/
I noticed that this documentation might be missing the following two additional
steps:
1) restart the glustereventsd service
2) umount and mount again
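In shell form, here is what I mean (mount point, server and volume names are examples from my setup):

    # 1) restart the events daemon so the new version is running
    systemctl restart glustereventsd
    # 2) remount each client so it picks up the upgraded fuse client
    umount /mnt/glusterfs
    mount -t glusterfs server:/myvol /mnt/glusterfs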
" line and
still send you the statedump file for analysis?
Thank you.
‐‐‐ Original Message ‐‐‐
On August 14, 2018 10:48 AM, Nithya Balachandran wrote:
> Thanks for letting us know. Sanoj, can you take a look at this?
>
> Thanks.
> Nithya
>
> On 14 August 2018
‐‐‐
On August 10, 2018 4:19 PM, Nithya Balachandran wrote:
> On 9 August 2018 at 19:54, mabi wrote:
>
>> Thanks for the documentation. On my client using FUSE mount I found the PID
>> by using ps (output below):
>>
>> root 456 1 4 14:17 ? 00:05
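For completeness, with that PID a statedump of the fuse client can be triggered by sending SIGUSR1 to the process; the dump file lands under /var/run/gluster by default (PID taken from the ps output above):

    kill -USR1 456
    ls /var/run/gluster/glusterdump.456.dump.*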
, Raghavendra Gowdappa wrote:
> On Thu, Aug 9, 2018 at 6:47 PM, mabi wrote:
>
>> Hi Nithya,
>>
>> Thanks for the fast answer. Here the additional info:
>>
>> 1. gluster volume info
>>
>> Volume Name: myvol-private
>> Type: Replicate
>&g
")?
Regards,
M.
‐‐‐ Original Message ‐‐‐
On August 9, 2018 3:10 PM, Nithya Balachandran wrote:
> Hi,
>
> Please provide the following:
>
> - gluster volume info
> - statedump of the fuse process when it hangs
>
> Thanks,
> Nithya
>
> On 9 August 201
x46/0x90
[Thu Aug 9 14:21:07 2018] [] ? SYSC_newlstat+0x1d/0x40
[Thu Aug 9 14:21:07 2018] [] ? SyS_lgetxattr+0x58/0x80
[Thu Aug 9 14:21:07 2018] [] ?
system_call_fast_compare_end+0x10/0x15
My 3 gluster nodes are all Debian 9 and my client Debian 8.
Let me know if you need more information.
Hi Aravinda,
Thanks for the info, somehow I wasn't aware of this new service. Now it's
clear and I updated my documentation.
Best regards,
M.
‐‐‐ Original Message ‐‐‐
On July 30, 2018 5:59 AM, Aravinda Vishwanathapura Krishna Murthy
wrote:
> On Mon, Jul 30, 2018 at 1:03 AM m
Hi,
I just noticed that when I run a "systemctl stop glusterfs" on Debian 9 the
following glustereventsd processes are still running:
root 2471 1 0 22:03 ? 00:00:00 python /usr/sbin/glustereventsd
--pid-file /var/run/glustereventsd.pid
root 2489 2471 0 22:03 ?
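glustereventsd has its own systemd unit, so it is not stopped together with the main gluster service and needs to be stopped separately; a minimal sketch:

    systemctl stop glustereventsd
    # confirm no event daemon processes remain
    pgrep -af glustereventsd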
Hi Amar,
Just wanted to say that I think the quota feature in GlusterFS is really
useful. In my case I use it on one volume where I have many cloud installations
(mostly files) for different people and all these need to have a different
quota set on a specific directory. The GlusterFS quota
Thottan wrote:
> Hi Mabi,
>
> I have checked with the afr maintainer; all of the required changes are merged in
> 3.12.
>
> Hence moving forward with 3.12.12 release
>
> Regards,
>
> Jiffin
>
> On Monday 09 July 2018 01:04 PM, mabi wrote:
>
>> Hi Jiffin
rect errno in post-op quorum check
afr: add quorum checks in post-op
Right now I only see the first one pending in the review dashboard. It would be
great if all of them could make it into this release.
Best regards,
Mabi
‐‐‐ Original Message ‐‐‐
On July 9, 2018 7:18 AM, Jiffin Tony Thottan wrote:
that 3.12.9 should have fixed this issue but unfortunately it
didn't.
Best regards,
Mabi
‐‐‐ Original Message ‐‐‐
On July 4, 2018 5:41 PM, Ravishankar N wrote:
>
>
> Hi mabi, there are a couple of AFR patches from master that I'm
>
> currently back porting to
,
M.
‐‐‐ Original Message ‐‐‐
On June 22, 2018 4:44 PM, mabi wrote:
>
>
> Hi,
>
> Now that this issue has happened a few times I noticed a few things which
> might be helpful for debugging:
>
> - This problem happens when files are uploaded via a cloud ap
Hi,
In the past I was using geo-replication but unconfigured it on my two volumes
by using:
gluster volume geo-replication ... stop
gluster volume geo-replication ... delete
Now I found out that I still have some old files in /var/lib/misc/glusterfsd
belonging to my two volumes which were
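For the record, this is the cleanup I have in mind; treat it as my assumption rather than an official procedure, and only remove directories once no session is listed anymore:

    gluster volume geo-replication status    # must show no remaining sessions
    ls -la /var/lib/misc/glusterfsd          # old per-session working directories
    # remove only the stale directories belonging to the deleted sessions
    rm -rf /var/lib/misc/glusterfsd/OLD_SESSION_DIR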
and the arbiter is a Debian 9 virtual machine
with XFS as file system for the brick. To mount the volume I use a glusterfs
fuse mount on the web server which has Nextcloud running.
Regards,
M.
‐‐‐ Original Message ‐‐‐
On May 25, 2018 5:55 PM, mabi wrote:
>
>
> Thanks
directories, I
don't know if this is relevant or not but thought I would just mention it.
‐‐‐ Original Message ‐‐‐
On May 23, 2018 9:25 AM, Ravishankar N <ravishan...@redhat.com> wrote:
>
>
> On 05/23/2018 12:47 PM, mabi wrote:
>
> > Hello,
> >
>
Hello,
I just wanted to ask if you had time to look into this bug I am encountering
and if there is anything else I can do?
For now, in order to get rid of these 3 unsynced files, shall I use the same
method that was suggested to me in this thread?
Thanks,
Mabi
‐‐‐ Original Message
1613e-2ac0-48bd-8ace-f2f723f3796c/2016.03.15 AVB_Photovoltaik-Versicherung
2013.pdf), client:
nextcloud.domain.com-7972-2018/05/10-20:31:46:163206-myvol-private-client-2-0-0,
error-xlator: myvol-private-posix [Directory not empty]
Best regards,
Mabi
‐‐‐ Original Message ‐‐‐
On May 17,
>
> On 05/15/2018 12:38 PM, mabi wrote:
>
> > Dear all,
> >
> > I have upgraded my replica 3 GlusterFS cluster (and clients) last Friday
> > from 3.12.7 to 3.12.9 in order to fix this bug but unfortunately I notice
> > that I still have exactly the same pro
right now 3 unsynced
files on my arbiter node, like I used to do before upgrading. This problem
started since I upgraded to 3.12.7...
Thank you very much in advance for your advise.
Best regards,
Mabi
‐‐‐ Original Message ‐‐‐
On April 9, 2018 2:31 PM, Ravishankar N <ravis
han...@redhat.com> wrote:
> Mabi,
>
> It looks like one of the patches is not a straightforward cherry-pick to the
> 3.12 branch. Even though the conflict might be easy to resolve, I don't think
> it is a good idea to hurry it for tomorrow. We will definitely have it ready
> b
Dear Jiffin,
Would it be possible to have the following backported to 3.12:
https://bugzilla.redhat.com/show_bug.cgi?id=1482064
See my mail with subject "New 3.12.7 possible split-brain on replica 3" on the
list earlier this week for more details.
Thank you very much.
Best regards,
Melekhov wrote:
>
> > On 09.04.2018 16:18, Ravishankar N wrote:
> >
> > > On 04/09/2018 05:40 PM, mabi wrote:
> > >
> > > > Again thanks that worked and I have now no more unsynched files.
> > > >
> > > > You mentioned that this bug has be
Message ‐‐‐
On April 9, 2018 1:46 PM, Ravishankar N <ravishan...@redhat.com> wrote:
>
>
> On 04/09/2018 05:09 PM, mabi wrote:
>
> > Thanks Ravi for your answer.
> >
> > Stupid question but how do I delete the trusted.afr xattrs on this brick?
> >
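For anyone searching later, the gist of the answer as I understand it: inspect the xattrs first, then remove the pending trusted.afr entries with setfattr directly on the brick path (brick path and xattr name below are examples; use whatever getfattr actually reports):

    # list all xattrs of the file directly on the brick
    getfattr -d -m . -e hex /data/brick/path/to/file
    # remove one pending afr xattr (example name)
    setfattr -x trusted.afr.myvol-private-client-0 /data/brick/path/to/file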
<ravishan...@redhat.com> wrote:
>
>
> On 04/09/2018 04:36 PM, mabi wrote:
>
> > As I was suggested in the past by this mailing list a now ran a stat and
> > getfattr on one of the problematic files on all nodes and at the end a stat
> > on the fuse mount
‐‐‐ Original Message ‐‐‐
On April 9, 2018 9:49 AM, mabi <m...@protonmail.ch> wrote:
>
>
> Here would be also the corresponding log entries on a gluster node brick log
> file:
>
> [2018-04-09 06:58:47.363536] W [MSGID: 113093]
> [posix-gfid-path.c:8
/dir12_Archiv/azipfile.zip/OC_DEFAULT_MODULE
[No such file or directory]
Hope that helps to track down the issue.
‐‐‐ Original Message ‐‐‐
On April 9, 2018 9:37 AM, mabi <m...@protonmail.ch> wrote:
>
>
> Hello,
>
> Last Friday I upgraded my GlusterFS 3.10.7 3-way r
erfs/myvol-private.log) which I have included below in this
mail. It looks like some renaming has gone wrong because a directory is not
empty.
For your information I have upgraded my GlusterFS in offline mode and the
upgrade went smoothly.
What can I do to fix that issue?
Best regards,
Mabi
[201
myvolume geo.domain.tld::myvolume-geo
status detail
No active geo-replication sessions between myvolume and
geo.domain.tld::myvolume-geo
Any ideas how I can fix that?
Best regards,
Mabi
Hi,
Thanks for the link to the bug. We should hopefully be moving to 3.12 soon, so
I guess this bug is also fixed there.
Best regards,
M.
‐‐‐ Original Message ‐‐‐
On February 27, 2018 9:38 AM, Hari Gowtham <hgowt...@redhat.com> wrote:
>
>
> Hi Mabi,
>
> T
specific limit mentioned in the command.
>
> gluster volume quota list
>
> Make sure this path and the limit are set.
>
> If this works then you need to clean up the last stale entry.
>
> If this doesn't work we need to look further.
>
> Thanks Sanoj for the guidance.
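In concrete commands, what this advice translates to on my volume (the directory path below is a made-up example):

    # check one specific limit instead of listing everything
    gluster volume quota myvol-private list /some/dir
    # if the path or its limit is missing, set it again
    gluster volume quota myvol-private limit-usage /some/dir 100GB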
nes even before you hit it).
>>Yes, you have to do a stat from the client through fuse mount.
>>On Tue, Feb 13, 2018 at 3:56 PM, mabi m...@protonmail.ch wrote:
>>>Thank you for your answer. This problem seems to have started since last
>>>week, so should I also send yo
t the beginning?
> If yes, what were the commands issued, before you noticed this problem.
> Is there any other error that you see other than this?
>
> And can you try looking up the directories the limits are set on and
> check if that fixes the error?
>
>> Original
owt...@redhat.com> wrote:
>Hi,
>
> A part of the log won't be enough to debug the issue.
> Need the whole log messages till date.
> You can send it as attachments.
>
> Yes the quota.conf is a binary file.
>
> And I need the volume status output too.
>
> On Tue, Feb 13, 2
2/13-08:16:09:933625-myvolume-client-0-0-0
Original Message
On February 13, 2018 12:47 AM, Hari Gowtham <hgowt...@redhat.com> wrote:
> Hi,
>
> Can you provide more information like, the volume configuration, quota.conf
> file and the log files.
>
> On
Would anyone be able to help me fix my quotas again?
Thanks
Original Message
On February 9, 2018 8:35 PM, mabi <m...@protonmail.ch> wrote:
>Hello,
>
> I am running GlusterFS 3.10.7 and just noticed by doing a "gluster volume
> quota list" t
Hello,
I am running GlusterFS 3.10.7 and just noticed by doing a "gluster volume quota
list" that my quotas on that volume are broken. The command returns
no output and no errors, but by looking in /var/log/glusterfs/cli.log I found the
following errors:
[2018-02-09 19:31:24.242324] E
Hi Aravinda,
Very nice initiative, thank you very much! As a small recommendation, it would
be nice to have a "nagios/icinga" mode, maybe through a "-n" parameter, which
would do the health check and output the status in a nagios/icinga-compatible
format. As such this tool could be directly used
changelogs directory?
> Local Time: August 31, 2017 8:56 AM
> UTC Time: August 31, 2017 6:56 AM
> From: broglia...@gmail.com
> To: mabi <m...@protonmail.ch>
> Gluster Users <gluster-users@gluster.org>
>
> Hi Mabi,
> If you will not use that geo-replication volume sessi