Re: [Gluster-users] Failed to get quota limits

2018-02-13 Thread mabi
2/13-08:16:09:933625-myvolume-client-0-0-0 Original Message On February 13, 2018 12:47 AM, Hari Gowtham <hgowt...@redhat.com> wrote: > Hi, > > Can you provide more information like, the volume configuration, quota.conf > file and the log files. > > On
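For reference, the kind of information requested in this thread can usually be gathered with standard commands; a rough sketch, assuming a volume named myvolume (all names below are placeholders):
    # volume configuration
    gluster volume info myvolume
    # quota configuration file maintained by glusterd
    ls -l /var/lib/glusterd/vols/myvolume/quota.conf
    # client, brick and quota daemon logs live here by default
    ls /var/log/glusterfs/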

Re: [Gluster-users] Failed to get quota limits

2018-02-13 Thread mabi
t the beginning? > If yes, what were the commands issued, before you noticed this problem. > Is there any other error that you see other than this? > > And can you try looking up the directories the limits are set on and > check if that fixes the error? > >> Original

Re: [Gluster-users] Failed to get quota limits

2018-02-13 Thread mabi
nes even before you hit it). >>Yes, you have to do a stat from the client through fuse mount. >>On Tue, Feb 13, 2018 at 3:56 PM, mabi m...@protonmail.ch wrote: >>>Thank you for your answer. This problem seem to have started since last >>>week, so should I also send yo
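A minimal sketch of the lookup suggested above, assuming the volume is FUSE-mounted on the client at /mnt/myvolume and the limit is set on a directory named somedir (both placeholders):
    # trigger a lookup of the directory the quota limit is set on, through the FUSE mount
    stat /mnt/myvolume/somedir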

Re: [Gluster-users] Failed to get quota limits

2018-02-24 Thread mabi
specific limit mentioned in the command. > > gluster volume quota list > > Make sure this path and the limit are set. > > If this works then you need to clean up the last stale entry. > > If this doesn't work we need to look further. > > Thanks Sanoj for the guida
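The check described above would look roughly like this (volume name and path are placeholders):
    # list all configured quota limits, or query one specific path
    gluster volume quota myvolume list
    gluster volume quota myvolume list /somedir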

Re: [Gluster-users] glustereventsd not being stopped by systemd script

2018-07-30 Thread mabi
Hi Aravinda, Thanks for the info, somehow I wasn't aware about this new service. Now it's clear and I updated my documentation. Best regards, M. ‐‐‐ Original Message ‐‐‐ On July 30, 2018 5:59 AM, Aravinda Vishwanathapura Krishna Murthy wrote: > On Mon, Jul 30, 2018 at 1:03 AM m

[Gluster-users] glustereventsd not being stopped by systemd script

2018-07-29 Thread mabi
Hi, I just noticed that when I run a "systemctl stop glusterfs" on Debian 9 the following glustereventsd processes are still running: root 2471 1 0 22:03 ?00:00:00 python /usr/sbin/glustereventsd --pid-file /var/run/glustereventsd.pid root 2489 2471 0 22:03 ?
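Since glustereventsd ships as its own systemd unit, it has to be stopped separately; a hedged sketch:
    # the events daemon is not stopped by the glusterd/glusterfs units
    systemctl stop glustereventsd
    systemctl status glustereventsd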

[Gluster-users] blocking process on FUSE mount in directory which is using quota

2018-08-09 Thread mabi
x46/0x90 [Thu Aug 9 14:21:07 2018] [] ? SYSC_newlstat+0x1d/0x40 [Thu Aug 9 14:21:07 2018] [] ? SyS_lgetxattr+0x58/0x80 [Thu Aug 9 14:21:07 2018] [] ? system_call_fast_compare_end+0x10/0x15 My 3 gluster nodes are all Debian 9 and my client Debian 8. Let me know if you need more informati

Re: [Gluster-users] blocking process on FUSE mount in directory which is using quota

2018-08-09 Thread mabi
")? Regards, M. ‐‐‐ Original Message ‐‐‐ On August 9, 2018 3:10 PM, Nithya Balachandran wrote: > Hi, > > Please provide the following: > > - gluster volume info > - statedump of the fuse process when it hangs > > Thanks, > Nithya > > On 9 August 201

Re: [Gluster-users] blocking process on FUSE mount in directory which is using quota

2018-08-09 Thread mabi
, Raghavendra Gowdappa wrote: > On Thu, Aug 9, 2018 at 6:47 PM, mabi wrote: > >> Hi Nithya, >> >> Thanks for the fast answer. Here is the additional info: >> >> 1. gluster volume info >> >> Volume Name: myvol-private >> Type: Replicate >>

Re: [Gluster-users] blocking process on FUSE mount in directory which is using quota

2018-08-14 Thread mabi
‐‐‐ On August 10, 2018 4:19 PM, Nithya Balachandran wrote: > On 9 August 2018 at 19:54, mabi wrote: > >> Thanks for the documentation. On my client using FUSE mount I found the PID >> by using ps (output below): >> >> root 456 1 4 14:17 ?00:05

Re: [Gluster-users] blocking process on FUSE mount in directory which is using quota

2018-08-14 Thread mabi
" line and still send you the statedump file for analysis? Thank you. ‐‐‐ Original Message ‐‐‐ On August 14, 2018 10:48 AM, Nithya Balachandran wrote: > Thanks for letting us know. Sanoj, can you take a look at this? > > Thanks. > Nithya > > On 14 August 2018

Re: [Gluster-users] Possibly missing two steps in upgrade to 4.1 guide

2018-08-21 Thread mabi
ions that access the volumes via gfapi (qemu, etc.) > Install Gluster 4.1 > > > > > Mount all gluster shares > > Start any applications that were stopped previously in step (2) > > 2018-08-21 15:33 GMT+02:00 mabi m...@protonmail.ch: > > > Hello, > > I just

Re: [Gluster-users] Possibly missing two steps in upgrade to 4.1 guide

2018-08-21 Thread mabi
4.1.2, and the glustereventsd > service was restarted. We use debian stretch; maybe it depends on the > operating system? > > 2018-08-21 16:17 GMT+02:00 mabi m...@protonmail.ch: > > > Oops missed that part at the bottom, thanks Hu Bert! > > Now the only thing missing from the

[Gluster-users] Possibly missing two steps in upgrade to 4.1 guide

2018-08-21 Thread mabi
Hello, I just upgraded from 4.0.2 to 4.1.2 using the official documentation: https://docs.gluster.org/en/latest/Upgrade-Guide/upgrade_to_4.1/ I noticed that this documentation might be missing the following two additional steps: 1) restart the glustereventsd service 2) umount and mount again
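The two extra steps being proposed would amount to something like this (mount point, server and volume names are placeholders):
    # 1) restart the events daemon so it runs the upgraded version
    systemctl restart glustereventsd
    # 2) remount the volume on each client so the FUSE client runs the new version
    umount /mnt/myvolume
    mount -t glusterfs server1:/myvolume /mnt/myvolume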

Re: [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2018-07-19 Thread mabi
Hi Amar, Just wanted to say that I think the quota feature in GlusterFS is really useful. In my case I use it on one volume where I have many cloud installations (mostly files) for different people and all these need to have a different quota set on a specific directory. The GlusterFS quota
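The per-directory quota setup described here works along these lines (volume, path and size are placeholders):
    gluster volume quota myvolume enable
    # give each installation its own limit on its directory
    gluster volume quota myvolume limit-usage /customer1 100GB
    gluster volume quota myvolume list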

Re: [Gluster-users] blocking process on FUSE mount in directory which is using quota

2018-09-02 Thread mabi
weeks ago? I never saw this type of problem in the past and it started to appear since I upgraded to GlusterFS 3.12.12. Best regards, Mabi ‐‐‐ Original Message ‐‐‐ On August 15, 2018 9:21 AM, mabi wrote: > Great, you will then find attached here the statedump of the client using the

Re: [Gluster-users] New 3.12.7 possible split-brain on replica 3

2018-07-05 Thread mabi
that 3.12.9 should have fixed this issue but unfortunately it didn't. Best regards, Mabi​​ ‐‐‐ Original Message ‐‐‐ On July 4, 2018 5:41 PM, Ravishankar N wrote: > ​​ > > Hi mabi, there are a couple of AFR patches  from master that I'm > > currently back porting to

Re: [Gluster-users] Release 3.12.12: Scheduled for the 11th of July

2018-07-12 Thread mabi
Thottan wrote: > Hi Mabi, > > I have checked with afr maintainer, all of the required changes is merged in > 3.12. > > Hence moving forward with 3.12.12 release > > Regards, > > Jiffin > > On Monday 09 July 2018 01:04 PM, mabi wrote: > >> Hi Jiffin

Re: [Gluster-users] Release 3.12.12: Scheduled for the 11th of July

2018-07-09 Thread mabi
rect errno in post-op quorum check afr: add quorum checks in post-op Right now I only see the first one pending in the review dashboard. It would be great if all of them could make it into this release. Best regards, Mabi ‐‐‐ Original Message ‐‐‐ On July 9, 2018 7:18 AM, Jiffin Tony Thot

Re: [Gluster-users] New 3.12.7 possible split-brain on replica 3

2018-07-04 Thread mabi
, M. ‐‐‐ Original Message ‐‐‐ On June 22, 2018 4:44 PM, mabi wrote: > ​​ > > Hi, > > Now that this issue has happened a few times I noticed a few things which > might be helpful for debugging: > > - This problem happens when files are uploaded via a cloud ap

Re: [Gluster-users] Failed to get quota limits

2018-02-27 Thread mabi
Hi, Thanks for the link to the bug. We should hopefully be moving to 3.12 soon, so I guess this bug is also fixed there. Best regards, M. ‐‐‐ Original Message ‐‐‐ On February 27, 2018 9:38 AM, Hari Gowtham <hgowt...@redhat.com> wrote: > > Hi Mabi, > > T

[Gluster-users] Can't stop volume using gluster volume stop

2018-04-06 Thread mabi
myvolume geo.domain.tld::myvolume-geo status detail No active geo-replication sessions between myvolume and geo.domain.tld::myvolume-geo ​​ Any ideas how I can fix that? Best regards, Mabi ___ Gluster-users mailing list Gluster-users@gluster.org http

Re: [Gluster-users] Release 3.12.8: Scheduled for the 12th of April

2018-04-11 Thread mabi
Dear Jiffin, Would it be possible to have the following backported to 3.12: https://bugzilla.redhat.com/show_bug.cgi?id=1482064 See my mail with subject "New 3.12.7 possible split-brain on replica 3" on the list earlier this week for more details. Thank you very much. Best reg

Re: [Gluster-users] Release 3.12.8: Scheduled for the 12th of April

2018-04-11 Thread mabi
han...@redhat.com> wrote: > Mabi, > > It looks like one of the patches is not a straight forward cherry-pick to the > 3.12 branch. Even though the conflict might be easy to resolve, I don't think > it is a good idea to hurry it for tomorrow. We will definitely have it ready > b

[Gluster-users] New 3.12.7 possible split-brain on replica 3

2018-04-09 Thread mabi
erfs/myvol-private.log) which I have included below in this mail. It looks like some renaming has gone wrong because a directory is not empty. For your information I have upgraded my GlusterFS in offline mode and the upgrade went smoothly. What can I do to fix that issue? Best regards, Mabi [201

Re: [Gluster-users] New 3.12.7 possible split-brain on replica 3

2018-04-09 Thread mabi
/dir12_Archiv/azipfile.zip/OC_DEFAULT_MODULE [No such file or directory] ​ Hope that helps to find out the issue. ‐‐‐ Original Message ‐‐‐ On April 9, 2018 9:37 AM, mabi <m...@protonmail.ch> wrote: > ​​ > > Hello, > > Last Friday I upgraded my GlusterFS 3.10.7 3-way r

Re: [Gluster-users] New 3.12.7 possible split-brain on replica 3

2018-04-09 Thread mabi
​​ ‐‐‐ Original Message ‐‐‐ On April 9, 2018 9:49 AM, mabi <m...@protonmail.ch> wrote: > ​​ > > Here would be also the corresponding log entries on a gluster node brick log > file: > > [2018-04-09 06:58:47.363536] W [MSGID: 113093] > [posix-gfid-path.c:8

Re: [Gluster-users] New 3.12.7 possible split-brain on replica 3

2018-04-09 Thread mabi
Message ‐‐‐ On April 9, 2018 1:46 PM, Ravishankar N <ravishan...@redhat.com> wrote: > ​​ > > On 04/09/2018 05:09 PM, mabi wrote: > > > Thanks Ravi for your answer. > > > > Stupid question but how do I delete the trusted.afr xattrs on this brick? > >
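The xattr removal being asked about is normally done with setfattr directly on the brick; a sketch, with client index and brick path as placeholders, and only on the brick the reply identifies:
    # inspect the AFR xattrs on the brick first
    getfattr -d -m . -e hex /data/brick/path/to/file
    # remove the pending-marker xattr for the relevant client index on this brick only
    setfattr -x trusted.afr.myvol-private-client-0 /data/brick/path/to/file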

Re: [Gluster-users] New 3.12.7 possible split-brain on replica 3

2018-04-09 Thread mabi
<ravishan...@redhat.com> wrote: > > On 04/09/2018 04:36 PM, mabi wrote: > > > As was suggested to me in the past on this mailing list, I now ran a stat and > > getfattr on one of the problematic files on all nodes and at the end a stat > > on the fuse mount

Re: [Gluster-users] New 3.12.7 possible split-brain on replica 3

2018-04-09 Thread mabi
Melekhov wrote: > > > 09.04.2018 16:18, Ravishankar N пишет: > > > > > On 04/09/2018 05:40 PM, mabi wrote: > > > > > > > Again thanks that worked and I have now no more unsynched files. > > > > > > > > You mentioned that this bug has be

Re: [Gluster-users] New 3.12.7 possible split-brain on replica 3

2018-06-22 Thread mabi
and the arbiter is a Debian 9 virtual machine with XFS as file system for the brick. To mount the volume I use a glusterfs fuse mount on the web server which has Nextcloud running. Regards, M.​​ ‐‐‐ Original Message ‐‐‐ On May 25, 2018 5:55 PM, mabi wrote: > ​​ > > Thanks

[Gluster-users] GlusterFS 4.1.x deb packages missing for Debian 8 (jessie)

2018-10-19 Thread mabi
or is not unsafe? I did not upgrade the op-version on the server yet. Thank you very much in advance. Best regards, Mabi ___ Gluster-users mailing list Gluster-users@gluster.org https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterFS 4.1.x deb packages missing for Debian 8 (jessie)

2018-10-24 Thread mabi
, 2018 10:58 PM, mabi wrote: > Hello, > > I just upgraded all my Debian 9 (stretch) GlusterFS servers from 3.12.14 to > 4.1.5 but unfortunately my GlusterFS clients are all Debian 8 (jessie) > machines and there are no single GlusterFS 4.1.x package available for Debian > 8 a

[Gluster-users] Who is the package maintainer for GlusterFS 4.1?

2018-10-29 Thread mabi
Hello, I would like to know how I can contact the package maintainer for the GlusterFS 4.1.x packages. I have noticed that Debian 8 (jessie) is missing here: https://download.gluster.org/pub/gluster/glusterfs/4.1/4.1.5/Debian/ Thank you very much in advance. Best regards, Mabi

Re: [Gluster-users] Who is the package maintainer for GlusterFS 4.1?

2018-10-29 Thread mabi
. If this is the case would it be possible to have just the glusterfs 4.1 client package available for Debian 8? Best regards, M. ‐‐‐ Original Message ‐‐‐ On Monday, October 29, 2018 1:44 PM, Kaleb S. KEITHLEY wrote: > On 10/29/18 6:31 AM, mabi wrote: > > > Hello, > > I

Re: [Gluster-users] Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica

2018-11-01 Thread mabi
, October 31, 2018 11:13 AM, mabi wrote: > Hello, > > I have a GlusterFS 4.1.5 cluster with 3 nodes (including 1 arbiter) and > currently have a volume with around 27174 files which are not being healed. > The "volume heal info" command shows the same 27k files under the fir
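For reference, the commands behind the description above (volume name is a placeholder):
    # list entries pending heal, and any entries in split-brain
    gluster volume heal myvolume info
    gluster volume heal myvolume info split-brain
    # trigger an index heal
    gluster volume heal myvolume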

Re: [Gluster-users] quota: error returned while attempting to connect to host:(null), port:0

2018-11-01 Thread mabi
-utils.c:12556:glusterd_remove_auxiliary_mount] 0-management: umount on /var/run/gluster/myvol-private_quota_limit/ failed, reason : Success Something must be wrong with the quotas? ‐‐‐ Original Message ‐‐‐ On Tuesday, October 30, 2018 6:24 PM, mabi wrote: > Hello, > > Since I up

[Gluster-users] quota: error returned while attempting to connect to host:(null), port:0

2018-10-30 Thread mabi
: error returned while attempting to connect to host:(null), port:0 Is this a bug? should I file a bug report? or does anyone know what is wrong here maybe with my system? Best regards, Mabi ___ Gluster-users mailing list Gluster-users@gluster.org https

Re: [Gluster-users] quota: error returned while attempting to connect to host:(null), port:0

2018-10-31 Thread mabi
‐‐‐ Original Message ‐‐‐ On Tuesday, October 30, 2018 6:24 PM, mabi wrote: > Hello, > > Since I upgraded my 3-node (with arbiter) GlusterFS from 3.12.14 to 4.1.5 I > see quite a lot of the following error message in the brick log file for one > of my volumes where I have quota enabl

[Gluster-users] Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica

2018-10-31 Thread mabi
cbk] 0-myvol-private-client-0: remote operation failed [Transport endpoint is not connected] any idea what could be wrong here? Regards, Mabi ___ Gluster-users mailing list Gluster-users@gluster.org https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica

2018-11-03 Thread mabi
host:(null), port:0 That does not seem normal... what do you think? ‐‐‐ Original Message ‐‐‐ On Saturday, November 3, 2018 1:31 AM, Ravishankar N wrote: > Mabi, > > If bug 1637953 is what you are experiencing, then you need to follow the > workarounds mentioned in >

Re: [Gluster-users] Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica

2018-11-03 Thread mabi
bug? And by doing that, does it mean that my files pending heal are in danger of being lost? Also is it dangerous to leave "cluster.data-self-heal" to off? ‐‐‐ Original Message ‐‐‐ On Saturday, November 3, 2018 1:31 AM, Ravishankar N wrote: > Mabi, > > If bug 16
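The option discussed here is an ordinary volume option; a sketch of toggling it (volume name is a placeholder):
    gluster volume set myvolume cluster.data-self-heal off
    # verify, and revert once a fixed release is in place
    gluster volume get myvolume cluster.data-self-heal
    gluster volume set myvolume cluster.data-self-heal on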

Re: [Gluster-users] Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica

2018-11-05 Thread mabi
has a directory from the time 14:12. Again here the self-heal daemon doesn't seem to be doing anything... What do you recommend me to do in order to heal these unsynced files? ‐‐‐ Original Message ‐‐‐ On Monday, November 5, 2018 2:42 AM, Ravishankar N wrote: > > > On 11

Re: [Gluster-users] Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica

2018-11-02 Thread mabi
ou in advance for any feedback. ‐‐‐ Original Message ‐‐‐ On Wednesday, October 31, 2018 11:13 AM, mabi wrote: > Hello, > > I have a GlusterFS 4.1.5 cluster with 3 nodes (including 1 arbiter) and > currently have a volume with around 27174 files which are not being healed. > The

Re: [Gluster-users] Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica

2018-11-08 Thread mabi
‐‐‐ Original Message ‐‐‐ On Thursday, November 8, 2018 11:05 AM, Ravishankar N wrote: > It is not a split-brain. Nodes 1 and 3 have xattrs indicating a pending > entry heal on node2 , so heal must have happened ideally. Can you check > a few things? > - Is there any disconnects

Re: [Gluster-users] Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica

2018-11-07 Thread mabi
nything is > logged for these entries when you run 'gluster volume heal $volname'? > > Regards, > > Ravi > > On 11/07/2018 01:22 PM, mabi wrote: > > > To my eyes this specific case looks like a split-brain scenario but the > > output of "volume info split-brain"

Re: [Gluster-users] Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica

2018-11-12 Thread mabi
‐‐‐ Original Message ‐‐‐ On Friday, November 9, 2018 2:11 AM, Ravishankar N wrote: > Please re-create the symlink on node 2 to match how it is in the other > nodes and launch heal again. Check if this is the case for other entries > too. > -Ravi I can't create the missing symlink on

Re: [Gluster-users] Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica

2018-11-12 Thread mabi
‐‐‐ Original Message ‐‐‐ On Friday, November 9, 2018 2:11 AM, Ravishankar N wrote: > Please re-create the symlink on node 2 to match how it is in the other > nodes and launch heal again. Check if this is the case for other entries > too. > -Ravi Please ignore my previous mail, I was

[Gluster-users] Directory selfheal failed: Unable to form layout for directory on 4.1.5 fuse client

2018-11-13 Thread mabi
://bugzilla.redhat.com/show_bug.cgi?id=1567100 Is it possible that this bug has not made it yet into a release? or is it maybe a regression? Regards, Mabi ___ Gluster-users mailing list Gluster-users@gluster.org https://lists.gluster.org/mailman/listinfo

Re: [Gluster-users] Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica

2018-11-15 Thread mabi
‐‐‐ Original Message ‐‐‐ On Thursday, November 15, 2018 1:41 PM, Ravishankar N wrote: > Thanks, noted. One more query. Are there files inside each of these > directories? Or is it just empty directories? You will find below the content of each of these 3 directories taken the brick on

Re: [Gluster-users] Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica

2018-11-06 Thread mabi
al Message ‐‐‐ On Monday, November 5, 2018 4:36 PM, mabi wrote: > Ravi, I did not yet modify the cluster.data-self-heal parameter to off > because in the mean time node2 of my cluster had a memory shortage (this node > has 32 GB of RAM) and as such I had to reboot it. After that rebo

Re: [Gluster-users] Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica

2018-11-14 Thread mabi
‐‐‐ Original Message ‐‐‐ On Wednesday, November 14, 2018 5:34 AM, Ravishankar N wrote: > I thought it was missing which is why I asked you to create it.  The > trusted.gfid xattr for any given file or directory must be same in all 3 > bricks.  But it looks like that isn't the case. Are

Re: [Gluster-users] Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica

2018-11-15 Thread mabi
( ( ))" then if I check the ".../brick/.glusterfs/aa/e4/aae4098a-1a71-4155-9cc9-e564b89957cf" on node 1 or node 3 it does not have any symlink to a file. Or am I looking at the wrong place maybe or there is another trick in order to find the GFID->filename? Regards, Mab

Re: [Gluster-users] Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica

2018-11-16 Thread mabi
‐‐‐ Original Message ‐‐‐ On Friday, November 16, 2018 5:14 AM, Ravishankar N wrote: > Okay, as asked in the previous mail, please share the getfattr output > from all bricks for these 2 files. I think once we have this, we can try > either 'adjusting' the the gfid and symlinks on node 2

[Gluster-users] S3-compatbile object storage on top of GlusterFS volume

2018-12-14 Thread mabi
for this case. Best regards, Mabi ___ Gluster-users mailing list Gluster-users@gluster.org https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] op-version compatibility with older clients

2018-11-21 Thread mabi
the connect to my server and work correctly? I am running 4.1.5 on my GlusterFS server and I am asking because I still have a few clients on 3.12.14 which will need to stay longer on 3.12.14. Regards, Mabi ___ Gluster-users mailing list Gluster-users
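Checking the currently active and maximum supported op-version on the servers can be done with:
    gluster volume get all cluster.op-version
    gluster volume get all cluster.max-op-version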

Re: [Gluster-users] Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica

2018-11-17 Thread mabi
dir4/dir5/dir6/dir7/dir8/dir9/dir10 > > 2. Fuse mount the volume temporarily in some location and from that > mount point, do a `find .|xargs stat >/dev/null` > > > 3. Run`gluster volume heal $volname` > > HTH, > Ravi > > On 11/16/2018 09:07 PM, mabi wrote:

Re: [Gluster-users] Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica

2018-11-17 Thread mabi
n and from that > mount point, do a `find .|xargs stat >/dev/null` > > > 3. Run`gluster volume heal $volname` > > HTH, > Ravi > > On 11/16/2018 09:07 PM, mabi wrote: > > > And finally here is the output of a getfattr from both files from the 3 > >

[Gluster-users] Max length for filename

2019-01-28 Thread mabi
ong) and was actually wondering on GlusterFS what is the maximum length for a filename? I am using GlusterFS 4.1.6. Regards, Mabi ___ Gluster-users mailing list Gluster-users@gluster.org https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] quotad error log warnings repeated

2019-02-06 Thread mabi
what can I do about it? Best regards, Mabi ___ Gluster-users mailing list Gluster-users@gluster.org https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] GlusterFS 4.1.9 Debian stretch packages missing

2019-06-23 Thread mabi
Hello, I would like to upgrade my GlusterFS 4.1.8 cluster to 4.1.9 on my Debian stretch nodes. Unfortunately the packages are missing as you can see here: https://download.gluster.org/pub/gluster/glusterfs/4.1/4.1.9/Debian/stretch/amd64/apt/ As far as I know GlusterFS 4.1 is not yet EOL so I

[Gluster-users] GlusterFS FUSE client on BSD

2019-07-03 Thread mabi
Hello, Is there a way to mount a GlusterFS volume using FUSE on an BSD machine such as OpenBSD? If not, what is the alternative, I guess NFS? Regards, M. ___ Gluster-users mailing list Gluster-users@gluster.org

[Gluster-users] writing to fuse device failed: No such file or directory

2020-03-02 Thread mabi
x-gnu/libc.so.6(clone+0x3f)[0x7f93d46ead0f] ) 0-glusterfs-fuse: writing to fuse device failed: No such file or directory Both the server and clients are Debian 9. What exactly does this error message mean? And is it normal? or what should I do to fix that? Regards, Mabi Co

Re: [Gluster-users] writing to fuse device failed: No such file or directory

2020-03-02 Thread mabi
‐‐‐ Original Message ‐‐‐ On Tuesday, March 3, 2020 6:11 AM, Hari Gowtham wrote: > I checked on the backport and found that this patch hasn't yet been > backported to any of the release branches. > If this is the fix, it would be great to have them backported for the next > release.

Re: [Gluster-users] Announcing Gluster release 5.11

2019-12-27 Thread mabi
forgotten? Thank you very much in advance. Best regards, Mabi ‐‐‐ Original Message ‐‐‐ On Wednesday, December 18, 2019 4:56 AM, Hari Gowtham wrote: > Hi, > > The Gluster community is pleased to announce the release of Gluster > 5.11 (packages available at [1]). >

Re: [Gluster-users] Announcing Gluster release 5.11

2019-12-27 Thread mabi
Thank you very much for your fast response and for adding the missing Debian packages. ‐‐‐ Original Message ‐‐‐ On Friday, December 27, 2019 10:36 AM, Shwetha Acharya wrote: > Hi Mabi, > > Glusterfs 5.11 Debian amd64 stretch packages are now available. > > Reg

Re: [Gluster-users] writing to fuse device failed: No such file or directory

2020-05-04 Thread mabi
Hello, Now that GlusterFS 5.13 has been released, could someone let me know if this issue (see mail below) has been fixed in 5.13? Thanks and regards, Mabi ‐‐‐ Original Message ‐‐‐ On Monday, March 2, 2020 3:17 PM, mabi wrote: > Hello, > > On the FUSE clients of my GlusterFS

Re: [Gluster-users] writing to fuse device failed: No such file or directory

2020-05-05 Thread mabi
the use version 7 in production or should I better use version 6? And is it possible to upgrade from 5.11 directly to 7.5? Regards, Mabi ‐‐‐ Original Message ‐‐‐ On Tuesday, May 5, 2020 1:40 PM, Hari Gowtham wrote: > Hi, > > I don't see the above mentioned fix to be backport

Re: [Gluster-users] writing to fuse device failed: No such file or directory

2020-05-05 Thread mabi
surprised. Best regards, Mabi ‐‐‐ Original Message ‐‐‐ On Monday, May 4, 2020 9:57 PM, Artem Russakovskii wrote: > I'm on 5.13, and these are the only error messages I'm still seeing (after > downgrading from the failed v7 update): > > [2020-05-04 19:56:29.391121] E [fuse-

Re: [Gluster-users] writing to fuse device failed: No such file or directory

2020-05-06 Thread mabi
Hi everyone, So because upgrading introduces additional problems, does this means I should stick with 5.x even if it is EOL? Or what is a "safe" version to upgrade to? Regards, Mabi ‐‐‐ Original Message ‐‐‐ On Wednesday, May 6, 2020 2:44 AM, Artem Russakovskii wrote:

[Gluster-users] glustershd: EBADFD [File descriptor in bad state]

2020-10-09 Thread mabi
n does not seem to be able to heal the two files and directories itself, I would like to know what I can do here to fix this? Thank you in advance for your help. Best regards, Mabi Community Meeting Calendar: Schedule - Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC Bridge: https:

Re: [Gluster-users] glustershd: EBADFD [File descriptor in bad state]

2020-10-09 Thread mabi
, mabi wrote: > Hello, > > I have a GlusterFS 6.9 cluster with two nodes and one arbiter node with a > replica volume and currently there are two files and two directories stuck to > be self-healed. > > Nodes 1 and 3 (arbiter) have the files and directories on the brick but no

Re: [Gluster-users] Upgrade from 6.9 to 7.7 stuck (peer is rejected)

2020-08-23 Thread mabi
could fix this issue or provide me a workaround which works because version 6 of GlusterFS is not supported anymore so I would really like to move on to the stable version 7. Thank you very much in advance. Best regards, Mabi ‐‐‐ Original Message ‐‐‐ On Saturday, August 22, 2020 7:53 PM

[Gluster-users] Upgrade from 6.9 to 7.7 stuck (peer is rejected)

2020-08-22 Thread mabi
But I do not see any solutions or workaround. So now I am stuck with a degraded GlusterFS cluster. Could someone please advise me as soon as possible on what I should do? Is there maybe any workarounds? Thank you very much in advance for your response. Best regards, Mabi Community

Re: [Gluster-users] Upgrade from 6.9 to 7.7 stuck (peer is rejected)

2020-08-24 Thread mabi
where more than one node will not be running the glusterfsd (brick) process, so this means that the quorum is lost and then my FUSE clients will lose their connection to the volume? I just want to be sure that there will not be any downtime. Best regards, Mabi ‐‐‐ Original Message ‐‐‐ On Monday

Re: [Gluster-users] Upgrade from 6.9 to 7.7 stuck (peer is rejected)

2020-10-26 Thread mabi
‐‐‐ Original Message ‐‐‐ On Monday, October 26, 2020 3:39 PM, Diego Zuccato wrote: > Memory does not serve me well (there are 28 disks, not 26!), but bash > history does :) Yes, I also too often rely on history ;) > gluster volume remove-brick BigVol replica 2 >

Re: [Gluster-users] Upgrade from 6.9 to 7.7 stuck (peer is rejected)

2020-10-26 Thread mabi
/blob/devel/extras/quota/quota_fsck.py > > You can take a look in the mailing list for usage and more details. > > Best Regards, > Strahil Nikolov > > В понеделник, 26 октомври 2020 г., 16:40:06 Гринуич+2, Diego Zuccato > diego.zucc...@unibo.it написа: > > Il 26

Re: [Gluster-users] Upgrade from 6.9 to 7.7 stuck (peer is rejected)

2020-10-27 Thread mabi
- can you provide the details > ? > > Best Regards, > Strahil Nikolov > > В понеделник, 26 октомври 2020 г., 20:38:38 Гринуич+2, mabi > m...@protonmail.ch написа: > > Ok I see I won't go down that path of disabling quota. > > I could now remove the arbiter brick of my

Re: [Gluster-users] Upgrade from 6.9 to 7.7 stuck (peer is rejected)

2020-10-26 Thread mabi
:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to node1.domain (0), ret: 0, op_ret: -1 Can someone please advise what I need to do in order to have my arbiter node up and running again as soon as possible? Thank you very much in advance for your help. Best regards, Mabi ‐‐‐ Original

Re: [Gluster-users] Upgrade from 6.9 to 7.7 stuck (peer is rejected)

2020-10-26 Thread mabi
On Monday, October 26, 2020 11:34 AM, Diego Zuccato wrote: > IIRC it's the same issue I had some time ago. > I solved it by "degrading" the volume to replica 2, then cleared the > arbiter bricks and upgraded again to replica 3 arbiter 1. Thanks Diego for pointing out this workaround. How much
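A heavily hedged sketch of the workaround Diego describes, with volume, host and brick path as placeholders; it removes and re-adds the arbiter brick, so it should only be attempted with both data bricks healthy and a current backup:
    # drop the arbiter, going back to a plain 2-way replica
    gluster volume remove-brick myvolume replica 2 arbiter-host:/data/arbiter-brick force
    # wipe and recreate the old arbiter brick directory
    rm -rf /data/arbiter-brick && mkdir /data/arbiter-brick
    # re-add it as an arbiter and let self-heal repopulate it
    gluster volume add-brick myvolume replica 3 arbiter 1 arbiter-host:/data/arbiter-brick
    gluster volume heal myvolume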

Re: [Gluster-users] Upgrade from 6.9 to 7.7 stuck (peer is rejected)

2020-10-26 Thread mabi
‐‐‐ Original Message ‐‐‐ On Monday, October 26, 2020 2:56 PM, Diego Zuccato wrote: > The volume is built by 26 10TB disks w/ genetic data. I currently don't > have exact numbers, but it's still at the beginning, so there are a bit > less than 10TB actually used. > But you're only

[Gluster-users] Slow writes on replica+arbiter after upgrade to 7.8 (issue on github)

2020-11-06 Thread mabi
Thank you in advance for your help. Regards, Mabi Community Meeting Calendar: Schedule - Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC Bridge: https://meet.google.com/cpu-eiue-hvk Gluster-users mailing list Gluster-users@gluster.org https://lists.gluster.org/mailman/listinfo

[Gluster-users] How to find out what GlusterFS is doing

2020-11-05 Thread mabi
1 0 18 0 0 22215740 32056 26067200 308 2524 9892 334537 2 83 14 1 0 18 0 0 22179348 32084 26082800 169 2038 8703 250351 1 88 10 0 0 I already tried rebooting but that did not help and there is nothing special in the log files either. Best reg

Re: [Gluster-users] How to find out what GlusterFS is doing

2020-11-05 Thread mabi
0.0 0.0 0:00.00 cpuhp/1 Any clues anyone? The load is really high around 20 now on the two nodes... ‐‐‐ Original Message ‐‐‐ On Thursday, November 5, 2020 11:50 AM, mabi wrote: > Hello, > > I have a 3 node replica including arbiter GlusterFS 7.8 server with 3 volumes

Re: [Gluster-users] How to find out what GlusterFS is doing

2020-11-05 Thread mabi
‐‐‐ Original Message ‐‐‐ On Thursday, November 5, 2020 3:28 PM, Yaniv Kaul wrote: > Waiting for IO, just like the rest of those in D state. > You may have a slow storage subsystem. How many cores do you have, btw? > Y. Strange because "iostat -xtcm 5" does not show that the disks are
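One built-in way to see what the bricks are actually busy with, which might help in a situation like this, is the volume profiler (volume and brick names are placeholders):
    gluster volume profile myvolume start
    # let it collect for a few minutes, then inspect per-FOP counts and latencies
    gluster volume profile myvolume info
    # most frequently opened files on a given brick
    gluster volume top myvolume open brick node1:/data/brick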

[Gluster-users] Geo replication procedure for DR

2023-06-05 Thread mabi
regards, Mabi Community Meeting Calendar: Schedule - Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC Bridge: https://meet.google.com/cpu-eiue-hvk Gluster-users mailing list Gluster-users@gluster.org https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] How to find out data alignment for LVM thin volume brick

2023-06-05 Thread mabi
value I need to use for this specific disk? Best regards, Mabi Community Meeting Calendar: Schedule - Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC Bridge: https://meet.google.com/cpu-eiue-hvk Gluster-users mailing list Gluster-users@gluster.org https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Geo replication procedure for DR

2023-06-07 Thread mabi
there? I had the following question in my previous mail in this regard: "And once the primary site is back online how do you copy back or sync all data changes done on the secondary volume on the secondary site back to the primary volume on the primary site?" Best regards, Mabi --

Re: [Gluster-users] How to find out data alignment for LVM thin volume brick

2023-06-07 Thread mabi
. Best regards, Mabi --- Original Message --- On Wednesday, June 7th, 2023 at 6:56 AM, Strahil Nikolov wrote: > Have you checked this page: > https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/brick_configuration > ? > >
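A sketch of what the brick layout from that guide might look like for a single JBOD disk (device, names and sizes are placeholders; RAID setups use a different alignment, so the referenced guide should be checked for the exact value):
    # the Red Hat guide suggests 256K data alignment for JBOD bricks
    pvcreate --dataalignment 256K /dev/sdb
    vgcreate vg_bricks /dev/sdb
    # thin pool plus thin LV for the brick
    lvcreate --size 900G --thin vg_bricks/brickpool --chunksize 256K
    lvcreate --virtualsize 900G --thin vg_bricks/brickpool --name brick1
    mkfs.xfs -i size=512 /dev/vg_bricks/brick1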
