Re: [Gluster-users] glusterfsd process spinning

2014-06-03 Thread Susant Palai
) and the current issue looks more similar. We will look at the client logs for more information. Susant. - Original Message - From: Franco Broi franco.b...@iongeo.com To: Pranith Kumar Karampuri pkara...@redhat.com Cc: Susant Palai spa...@redhat.com, gluster-users@gluster.org, Raghavendra

Re: [Gluster-users] glusterfsd process spinning

2014-06-04 Thread Susant Palai
Pranith, can you send the client and brick logs? Thanks, Susant - Original Message - From: Pranith Kumar Karampuri pkara...@redhat.com To: Franco Broi franco.b...@iongeo.com Cc: gluster-users@gluster.org, Raghavendra Gowdappa rgowd...@redhat.com, spa...@redhat.com, kdhan...@redhat.com,

Re: [Gluster-users] glusterfsd process spinning

2014-06-17 Thread Susant Palai
.) Hi Lala, Can you provide the steps to downgrade to 3.4 from 3.5? Thanks :) - Original Message - From: Franco Broi franco.b...@iongeo.com To: Susant Palai spa...@redhat.com Cc: Pranith Kumar Karampuri pkara...@redhat.com, gluster-users@gluster.org, Raghavendra Gowdappa rgowd

Re: [Gluster-users] glusterfsd process spinning

2014-06-18 Thread Susant Palai
Can you figure out the failure from the log and update here? - Original Message - From: Franco Broi franco.b...@iongeo.com To: Lalatendu Mohanty lmoha...@redhat.com Cc: Susant Palai spa...@redhat.com, Niels de Vos nde...@redhat.com, Pranith Kumar Karampuri pkara...@redhat.com, gluster-users

Re: [Gluster-users] glusterfsd process spinning

2014-06-18 Thread Susant Palai
Hey, sorry, I didn't notice you had already uploaded the logs. Kaushal is looking at the issue now. - Original Message - From: Franco Broi franco.b...@iongeo.com To: Susant Palai spa...@redhat.com Cc: Lalatendu Mohanty lmoha...@redhat.com, Niels de Vos nde...@redhat.com, Pranith Kumar

Re: [Gluster-users] Unable to delete files but getfattr shows file is part of glusterfs

2014-06-18 Thread Susant Palai
Hi, Can you upload the logs? Susant - Original Message - From: Pranith Kumar Karampuri pkara...@redhat.com To: SINCOCK John j.sinc...@fugro.com, gluster-users@gluster.org Cc: Susant Palai spa...@redhat.com Sent: Wednesday, 18 June, 2014 7:48:19 AM Subject: Re: [Gluster-users

Re: [Gluster-users] Folder disappeared on volume, but exists on bricks.

2014-12-02 Thread Susant Palai
Hi, If the missing directory path is known, a fresh lookup on that path will heal the directory entry across the cluster, and it will be shown on the mount point. E.g., on the mount point: ls COMPLETE PATH of the directory. * The directory may not get a fresh lookup on the existing mount.

Re: [Gluster-users] Folder disappeared on volume, but exists on bricks.

2014-12-02 Thread Susant Palai
Hi Peter, As I mentioned in my previous mail, you need to send fresh lookups on the missing directories. :) Susant - Original Message - From: Peter B. p...@das-werkstatt.com To: gluster-users@gluster.org Sent: Tuesday, 2 December, 2014 5:29:57 PM Subject: Re: [Gluster-users] Folder

Re: [Gluster-users] Folder disappeared on volume, but exists on bricks.

2014-12-02 Thread Susant Palai
Hi Peter, I tried your scenario on my setup [deleted the directory on one of the bricks (the hashed one)]. Hence, I don't see the directory on the mount point. So what I tried is: I created a fresh mount and sent a lookup on the missing directory name. E.g., /mnt/fresh is your new mount point. And

Re: [Gluster-users] Files not visible under mount point

2015-04-24 Thread Susant Palai
sharadshukl...@gmail.com To: Susant Palai spa...@redhat.com Cc: gluster-users Gluster-users@gluster.org Sent: Thursday, April 23, 2015 6:14:54 PM Subject: Re: [Gluster-users] Files not visible under mount point Hi Susant, i send you the xattrs of a file from the brick and from the mount

Re: [Gluster-users] rm: cannot remove `calendar-data': Directory not empty

2015-04-21 Thread Susant Palai
- Original Message - From: Pierre Léonard pleon...@jouy.inra.fr To: gluster-users@gluster.org Sent: Tuesday, 21 April, 2015 2:08:40 PM Subject: [Gluster-users] rm: cannot remove `calendar-data': Directory not empty Hi All, I have a list of directory as the following : rm:

Re: [Gluster-users] Files not visible under mount point

2015-04-23 Thread Susant Palai
Can you give the stat of the files from the brick? - Original Message - From: Sharad Shukla sharadshukl...@gmail.com To: Gluster-users@gluster.org Sent: Wednesday, 22 April, 2015 10:03:26 PM Subject: [Gluster-users] Files not visible under mount point Hi All, Somehow due to some

Re: [Gluster-users] Files not visible under mount point

2015-04-23 Thread Susant Palai
FILE_PATH. Regards, Susant - Original Message - From: Sharad Shukla sharadshukl...@gmail.com To: Susant Palai spa...@redhat.com Cc: gluster-users Gluster-users@gluster.org Sent: Thursday, 23 April, 2015 1:09:48 PM Subject: Re: [Gluster-users] Files not visible under mount point Hi

Re: [Gluster-users] Poor performance with small files

2015-04-30 Thread Susant Palai
We have addressed a few parts of the rebalance performance, which should be backported to 3.7 soon. Regards, Susant - Original Message - From: Raghavendra Bhat rab...@redhat.com To: Alex Crow ac...@integrafin.co.uk Cc: gluster-users@gluster.org Sent: Thursday, 30 April, 2015 2:30:41

Re: [Gluster-users] Write operations failing on clients

2015-05-06 Thread Susant Palai
I am unaware of any self-heal and rebalance interaction, but the rebalance and mount logs will be helpful here. +CCing Ravi - Original Message - From: Ben Turner btur...@redhat.com To: Alex ale...@icecat.biz, Susant Palai spa...@redhat.com Cc: gluster-users@gluster.org Sent: Wednesday, May 6

Re: [Gluster-users] gluster 3.4.5,gluster client process was core dump

2015-05-25 Thread Susant Palai
We found a similar crash and the fix for the same is here http://review.gluster.org/#/c/10389/. You can find the RCA in the commit message. Regards, Susant - Original Message - From: Dang Zhiqiang dzq...@163.com To: gluster-users@gluster.org Sent: Monday, 25 May, 2015 3:30:16 PM

Re: [Gluster-users] Regarding the issues gluster DHT and Layouts of bricks

2015-05-21 Thread Susant Palai
Comments inline. - Original Message - From: Subrata Ghosh subrata.gh...@ericsson.com To: gluster-de...@gluster.org, gluster-users@gluster.org Cc: Nobin Mathew nobin.mat...@ericsson.com, Susant Palai spa...@redhat.com, Vijay Bellur vbel...@redhat.com Sent: Thursday, 21 May, 2015 4:26

Re: [Gluster-users] cluster.min-free-disk is not working in distributed disperse volume

2015-08-24 Thread Susant Palai
Hi, cluster.min-free-disk controls new file creation on the bricks. If you happen to write to existing files on the brick and that leads to the brick getting full, then most probably you should run a rebalance. Regards, Susant - Original Message - From: Mathieu Chateau
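The distinction above (min-free-disk gating only *new* file creation, while existing files need a rebalance) can be sketched as follows. This is a simplified illustration of the documented behaviour, not Gluster's actual code; the brick names, free-space figures, and hash function are made up for the example:

```python
import zlib

def place_new_file(name, bricks, min_free_pct=10.0):
    """Sketch of cluster.min-free-disk handling for NEW files only
    (simplified illustration, not Gluster's implementation).
    A new file normally lands on its hashed brick; if that brick is
    below the free-space threshold, the data goes to a brick with
    room and a pointer ('linkto') entry stays on the hashed brick.
    Writes to EXISTING files are unaffected -- redistributing those
    is the job of rebalance."""
    # Deterministic stand-in for DHT's name hash.
    hashed = bricks[zlib.crc32(name.encode()) % len(bricks)]
    if hashed["free_pct"] >= min_free_pct:
        return {"data": hashed["name"], "linkto": None}
    roomiest = max(bricks, key=lambda b: b["free_pct"])
    return {"data": roomiest["name"], "linkto": hashed["name"]}

bricks = [{"name": "brick0", "free_pct": 55.0},
          {"name": "brick1", "free_pct": 3.0}]   # nearly full
placement = place_new_file("a", bricks)          # "a" hashes to brick1 here
```

In this sketch the nearly-full hashed brick keeps only the pointer entry while the data lands elsewhere, which is why a volume can keep filling one brick through appends to existing files even with min-free-disk set.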

Re: [Gluster-users] Writing to distributed (non-replicated) volume with failed nodes

2015-10-08 Thread Susant Palai
Hi, If the file creation hashes to the brick which is down, then it fails with ENOENT. Susant - Original Message - From: "Leonid Isaev" To: gluster-users@gluster.org Sent: Thursday, 8 October, 2015 7:54:07 AM Subject: [Gluster-users] Writing to
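The failure mode described here — a create failing with ENOENT because the file name hashes into the range owned by a down brick — can be sketched like so. The hash function and the two-brick layout are stand-ins for illustration only (Gluster itself uses a 32-bit Davies-Meyer hash over the name):

```python
import hashlib

def name_hash(name):
    """Map a file name to a 32-bit value. Stand-in for Gluster's
    Davies-Meyer hash; the real algorithm differs."""
    return int.from_bytes(hashlib.md5(name.encode()).digest()[:4], "big")

def pick_brick(name, layout):
    """layout: list of (brick, start, stop) ranges tiling 0..2**32-1.
    Exactly one brick owns the hash of any given name."""
    h = name_hash(name)
    for brick, start, stop in layout:
        if start <= h <= stop:
            return brick
    raise ValueError("hole in layout: no brick owns hash %#x" % h)

def create(name, layout, up_bricks):
    brick = pick_brick(name, layout)
    if brick not in up_bricks:
        # Mirrors the behaviour described above: the create is not
        # redirected to another subvolume; it simply fails.
        raise FileNotFoundError("ENOENT: hashed subvolume %s is down" % brick)
    return brick

layout = [("brick0", 0, 2**31 - 1), ("brick1", 2**31, 2**32 - 1)]
target = create("report.txt", layout, up_bricks={"brick0", "brick1"})
```

Because the hash is a pure function of the name, retrying the same create while the owning brick is down keeps failing, while creates of other names may succeed.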

Re: [Gluster-users] cluster.min-free-disk is not working in distributed disperse volume

2015-08-27 Thread Susant Palai
Comments inline. - Original Message - From: Mohamed Pakkeer mdfakk...@gmail.com To: Susant Palai spa...@redhat.com Cc: Mathieu Chateau mathieu.chat...@lotp.fr, gluster-users gluster-users@gluster.org, Gluster Devel gluster-de...@gluster.org, Vijay Bellur vbel...@redhat.com, Pranith

Re: [Gluster-users] cluster.min-free-disk is not working in distributed disperse volume

2015-08-27 Thread Susant Palai
Comments inline. ++CCing Pranith and Ashish to detail the disperse behaviour. - Original Message - From: Mohamed Pakkeer mdfakk...@gmail.com To: Susant Palai spa...@redhat.com, Vijay Bellur vbel...@redhat.com Cc: Mathieu Chateau mathieu.chat...@lotp.fr, gluster-users gluster-users

Re: [Gluster-users] cluster.min-free-disk is not working in distributed disperse volume

2015-08-25 Thread Susant Palai
Mohamed, Will investigate the weighted rebalance behavior. Susant - Original Message - From: Mohamed Pakkeer mdfakk...@gmail.com To: Susant Palai spa...@redhat.com Cc: Mathieu Chateau mathieu.chat...@lotp.fr, gluster-users gluster-users@gluster.org, Gluster Devel gluster-de

Re: [Gluster-users] How to diagnose volume rebalance failure?

2015-12-16 Thread Susant Palai
Hi PuYun, Would you be able to run rebalance again and take statedumps at intervals when you see high memory usage? Here are the details. ## How to generate a statedump: We can find the directory where statedump files are created using the 'gluster --print-statedumpdir' command. Create that directory

Re: [Gluster-users] How to diagnose volume rebalance failure?

2015-12-16 Thread Susant Palai
that will be helpfull? - Original Message - From: "Susant Palai" <spa...@redhat.com> To: "PuYun" <clou...@126.com> Cc: "gluster-users" <gluster-users@gluster.org> Sent: Thursday, 17 December, 2015 12:20:16 PM Subject: Re: [Gluster-users] How to diagnose v

Re: [Gluster-users] How to diagnose volume rebalance failure?

2015-12-14 Thread Susant Palai
Hi PuYun, We need to figure out some mechanism to get the huge log files. Until then, here is something I think could be affecting the performance. Rebalance normally starts at the medium level [performance-wise], which means in your case it will generate two threads for

Re: [Gluster-users] How to diagnose volume rebalance failure?

2015-12-17 Thread Susant Palai
usterfsd for bricks. Only 1 glusterfsd occupied very large mem and it is related to the newly added brick. The other 2 processes seems normal. If that happens again, I will send you the state dump. Thank you. PuYun From: Susant Palai Date: 2015-12-17 14:50 To: PuYun CC: gluster-use

Re: [Gluster-users] Problem rebalancing a distributed volume

2016-07-06 Thread Susant Palai
Hi, Please pass on the rebalance log from the 1st server for more analysis; it can be found under /var/log/glusterfs/"$VOL-rebalance.log". We also need the current layout xattrs from both bricks, which can be extracted with the following command: "getfattr -m . -de hex <$BRICK_PATH>".
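Once you have the hex dump from `getfattr`, the `trusted.glusterfs.dht` value can be decoded by hand. A small sketch, assuming the common 16-byte on-disk format of four big-endian 32-bit words (count, hash type, range start, range stop) — the field order here is an assumption, so verify it against the dht-layout code of your Gluster version before relying on it:

```python
import struct

def parse_dht_layout(hex_value):
    """Decode a trusted.glusterfs.dht value as printed by
    `getfattr -e hex`. Assumes four big-endian 32-bit words:
    count, hash type, range start, range stop (field order is an
    assumption -- check your Gluster version's dht-layout code)."""
    if hex_value.startswith("0x"):
        hex_value = hex_value[2:]
    count, hash_type, start, stop = struct.unpack(
        ">4I", bytes.fromhex(hex_value)[:16])
    return {"count": count, "hash_type": hash_type,
            "start": start, "stop": stop}

# Hypothetical value: one range covering the lower quarter of the hash space.
layout = parse_dht_layout("0x" "00000001" "00000001" "00000000" "3fffffff")
```

Comparing the decoded (start, stop) ranges across bricks shows at a glance whether the ranges tile the full 32-bit space or leave gaps.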

Re: [Gluster-users] rebalance immediately fails 3.7.11, 3.7.12, 3.8.0

2016-07-07 Thread Susant Palai
Hi Wade, Would you be able to share the rebalance core file for further analysis? Thanks, Susant - Original Message - > From: "Wade Holler" > To: gluster-users@gluster.org > Sent: Wednesday, 6 July, 2016 12:07:05 AM > Subject: [Gluster-users] rebalance

Re: [Gluster-users] rebalance immediately fails 3.7.11, 3.7.12, 3.8.0

2016-07-07 Thread Susant Palai
Holler" <wade.hol...@gmail.com> > To: "Susant Palai" <spa...@redhat.com> > Cc: gluster-users@gluster.org > Sent: Thursday, 7 July, 2016 5:39:44 PM > Subject: Re: [Gluster-users] rebalance immediately fails 3.7.11, 3.7.12, 3.8.0 > > Ok. Could you please point m

Re: [Gluster-users] gluster reverting directory owndership?

2016-08-08 Thread Susant Palai
, Susant Palai - Original Message - > From: "Sergei Gerasenko" <gera...@gmail.com> > To: gluster-users@gluster.org > Sent: Wednesday, 3 August, 2016 6:46:45 PM > Subject: [Gluster-users] gluster reverting directory owndership? > > Hi, > > It

Re: [Gluster-users] [Gluster-devel] CFP for Gluster Developer Summit

2016-08-24 Thread Susant Palai
to data inconsistency in the files, owing to more than one client writing to a file successfully but incorrectly. In this talk, I will present the design of lock migration, its status, and how this solves the problem of data inconsistency. Thanks, Susant Palai

Re: [Gluster-users] [Gluster-devel] cluster/dht: restrict migration of opened files

2018-01-18 Thread Susant Palai
This does not restrict tiered migrations. Susant On 18 Jan 2018 8:18 pm, "Milind Changire" wrote: On Tue, Jan 16, 2018 at 2:52 PM, Raghavendra Gowdappa wrote: > All, > > Patch [1] prevents migration of opened files during rebalance operation. > If

Re: [Gluster-users] Removing Brick in Distributed GlusterFS

2019-03-13 Thread Susant Palai
disk space available on the target nodes. You can start remove-brick again and it should move out the remaining set of files to the other bricks. > > Thanks > Taste > > > Am 12.03.2019 10:49:13, schrieb Susant Palai: > > Would it be possible for you to pass the rebala

Re: [Gluster-users] Removing Brick in Distributed GlusterFS

2019-03-13 Thread Susant Palai
dress-family: inet > > Ok since there is enough disk space on other Bricks and i actually didnt > complete brick-remove, can i rerun brick-remove to rebalance last Files and > Folders? > > Thanks > Taste > > > Am 12.03.2019 10:49:13, schrieb Susant Palai: > > Would

Re: [Gluster-users] Removing Brick in Distributed GlusterFS

2019-03-13 Thread Susant Palai
ts still a Bug. > Ok, then please file a bug with the details and we can discuss there. Susant > Thx. > > Am 13.03.2019 08:33:35, schrieb Susant Palai: > > > > On Tue, Mar 12, 2019 at 5:16 PM Taste-Of-IT > wrote: > > Hi Susant, > > and thanks for your fast reply and

Re: [Gluster-users] Removing Brick in Distributed GlusterFS

2019-03-12 Thread Susant Palai
Would it be possible for you to pass the rebalance log file on the node from which you want to remove the brick? (location: /var/log/glusterfs/) + the following information: 1 - gluster volume info 2 - gluster volume status 3 - df -h output on all 3 nodes Susant On Tue, Mar 12, 2019 at

Re: [Gluster-users] Tons of dht: Found anomalies in (null)

2020-05-19 Thread Susant Palai
On Tue, May 19, 2020 at 12:15 PM Aravinda VK wrote: > > > On 19-May-2020, at 12:05 PM, Susant Palai wrote: > > > > On Thu, Apr 30, 2020 at 6:31 AM Artem Russakovskii > wrote: > >> Hi, >> >> Every time I ls large dirs in our 1x4 replicate gluste

Re: [Gluster-users] Extremely slow file listing in folders with many files

2020-05-19 Thread Susant Palai
From the logs it looks like most of the directories need heal, and this could slow down the ls -R operation. A possible reason for holes=1 in the message could be that one of the bricks was down when the mkdir was going on, or you might have added a new brick recently to the cluster. On Tue, May 19,
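The `holes=N overlaps=N` counts in those "Found anomalies" log messages come from DHT checking that the per-brick hash ranges of a directory tile the 32-bit hash space exactly once. A simplified sketch of that check (not Gluster's actual code; range values are illustrative):

```python
def count_anomalies(ranges):
    """ranges: list of (start, stop) hash ranges, one per brick.
    A 'hole' is a gap in coverage of 0..2**32-1; an 'overlap' is a
    region claimed by more than one brick. A brick that was down
    during the mkdir contributes no range, which shows up as a hole
    until a fresh lookup heals the layout."""
    holes = overlaps = 0
    cursor = 0  # next hash value that still needs an owner
    for start, stop in sorted(ranges):
        if start > cursor:
            holes += 1          # gap before this range
        elif start < cursor:
            overlaps += 1       # this range re-covers earlier hashes
        cursor = max(cursor, stop + 1)
    if cursor < 2**32:
        holes += 1              # tail of the hash space is unowned
    return holes, overlaps

# Brick 2's range is missing (e.g. it was down during mkdir): one hole.
holes, overlaps = count_anomalies([(0, 2**31 - 1)])
```

A complete two-brick layout such as `[(0, 2**31 - 1), (2**31, 2**32 - 1)]` yields no anomalies; dropping or shifting a range produces the holes/overlaps counts the client logs.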

Re: [Gluster-users] Tons of dht: Found anomalies in (null)

2020-05-19 Thread Susant Palai
On Thu, Apr 30, 2020 at 6:31 AM Artem Russakovskii wrote: > Hi, > > Every time I ls large dirs in our 1x4 replicate gluster volume, I get a > ton of these in the logs. > > If I run the same ls right away again, they won't repeat, but inevitably, > in a couple of hours or days, they show up

Re: [Gluster-users] Extremely slow file listing in folders with many files

2020-05-20 Thread Susant Palai
s - it's been the same 4 > bricks. > > We need to get to the bottom of this. > > Sincerely, > Artem > > -- > Founder, Android Police <http://www.androidpolice.com>, APK Mirror > <http://www.apkmirror.com/>, Illogical Robot LLC > beerpla.net | @ArtemR <

Re: [Gluster-users] The dht-layout interval is missing

2020-05-29 Thread Susant Palai
On Fri, May 29, 2020 at 1:28 PM jifeng-call <17607319...@163.com> wrote: > Hi All, > I have 6 servers that form a glusterfs 2x3 distributed replication volume, > the details are as follows: > > [root@node1 ~]# gluster volume info > Volume Name: ksvd_vol > Type: Distributed-Replicate > Volume ID:

[Gluster-users] Rebalance improvement.

2020-08-02 Thread Susant Palai
. Would request our community to try out the feature and give us feedback. More information regarding the same will follow. Thanks & Regards, Susant Palai [1] https://review.gluster.org/#/c/glusterfs/+/24443/ <https://review.gluster.org/#/c/glusterfs/+/24443/> Community Meetin

Re: [Gluster-users] Rebalance improvement.

2020-08-03 Thread Susant Palai
at 11:16 AM Susant Palai wrote: > Hi, > Recently, we have pushed some performance improvements for Rebalance > Crawl which used to consume a significant amount of time, out of the entire > rebalance process. > > > The patch [1] is recently merged in upstream and may land

Re: [Gluster-users] Rebalance improvement.

2020-08-03 Thread Susant Palai
> On 03-Aug-2020, at 13:58, Aravinda VK wrote: > > Interesting numbers. Thanks for the effort. > > What is the unit of old/new numbers? seconds? Minutes. > >> On 03-Aug-2020, at 12:47 PM, Susant Palai > <mailto:spa...@redhat.com>> wrote: >>

Re: [Gluster-users] "Mismatching layouts" in glusterfs client logs after new brick addition and rebalance

2020-07-13 Thread Susant Palai
The log messages are fine. Since you added a new brick, the client is responding to that by syncing its in-memory layout with the latest server layout. The performance drop could be because of locks taken during this layout sync. > On 02-Jul-2020, at 20:09, Shreyansh Shah > wrote: > > Hi All, >

Re: [Gluster-users] "Mismatching layouts" in glusterfs client logs after new brick addition and rebalance

2020-07-13 Thread Susant Palai
ered from kernel as part of a fop and the directory will be updated with the layout) > On Mon, Jul 13, 2020 at 1:35 PM Susant Palai <mailto:spa...@redhat.com>> wrote: > The log messages are fine. Since you added a new brick, the client is > responding to that by syncing its in-memory