) and the current issue
looks more similar. We will look at the client logs for more information.
Susant.
- Original Message -
From: Franco Broi franco.b...@iongeo.com
To: Pranith Kumar Karampuri pkara...@redhat.com
Cc: Susant Palai spa...@redhat.com, gluster-users@gluster.org, Raghavendra
Pranith, can you send the client and brick logs?
Thanks,
Susant
- Original Message -
From: Pranith Kumar Karampuri pkara...@redhat.com
To: Franco Broi franco.b...@iongeo.com
Cc: gluster-users@gluster.org, Raghavendra Gowdappa rgowd...@redhat.com,
spa...@redhat.com, kdhan...@redhat.com,
.)
Hi Lala,
Can you provide the steps to downgrade from 3.5 to 3.4?
Thanks :)
- Original Message -
From: Franco Broi franco.b...@iongeo.com
To: Susant Palai spa...@redhat.com
Cc: Pranith Kumar Karampuri pkara...@redhat.com, gluster-users@gluster.org,
Raghavendra Gowdappa rgowd
Can you figure out the failure from the log and update here?
- Original Message -
From: Franco Broi franco.b...@iongeo.com
To: Lalatendu Mohanty lmoha...@redhat.com
Cc: Susant Palai spa...@redhat.com, Niels de Vos nde...@redhat.com,
Pranith Kumar Karampuri pkara...@redhat.com, gluster-users
Hey, sorry, I didn't notice you had already uploaded the logs. Kaushal is looking
at the issue now.
- Original Message -
From: Franco Broi franco.b...@iongeo.com
To: Susant Palai spa...@redhat.com
Cc: Lalatendu Mohanty lmoha...@redhat.com, Niels de Vos
nde...@redhat.com, Pranith Kumar
Hi,
Can you upload the logs?
Susant
- Original Message -
From: Pranith Kumar Karampuri pkara...@redhat.com
To: SINCOCK John j.sinc...@fugro.com, gluster-users@gluster.org
Cc: Susant Palai spa...@redhat.com
Sent: Wednesday, 18 June, 2014 7:48:19 AM
Subject: Re: [Gluster-users
Hi,
In case the missing directory path is known, a fresh lookup on that path will
heal the directory entry across the cluster and it will be shown on the mount
point.
e.g. on the mount point: ls <complete path of the directory>.
* Note: the directory may not get a fresh lookup on the existing mount; in that case a fresh mount can be used.
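For example, a minimal sketch of the fresh-lookup approach, assuming the volume is mounted at /mnt/glustervol and the missing directory is data/project1 (both names are hypothetical):
# a lookup (ls/stat) on the full path triggers DHT to recreate the entry
# on the bricks where it is missing
ls /mnt/glustervol/data/project1
stat /mnt/glustervol/data/project1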
Hi Peter,
As I mentioned in my previous mail, you need to send fresh lookups on the
missing directories. :)
Susant
- Original Message -
From: Peter B. p...@das-werkstatt.com
To: gluster-users@gluster.org
Sent: Tuesday, 2 December, 2014 5:29:57 PM
Subject: Re: [Gluster-users] Folder
Hi Peter,
I tried your scenario on my setup [deleted the directory on one of the bricks (the hashed one)]. As a result, I don't see the directory on the mount point either.
So what I tried was to create a fresh mount and send a lookup on the missing directory name.
e.g. /mnt/fresh is your new mount point. And
sharadshukl...@gmail.com
To: Susant Palai spa...@redhat.com
Cc: gluster-users Gluster-users@gluster.org
Sent: Thursday, April 23, 2015 6:14:54 PM
Subject: Re: [Gluster-users] Files not visible under mount point
Hi Susant,
I am sending you the xattrs of a file from the brick and from the mount
- Original Message -
From: Pierre Léonard pleon...@jouy.inra.fr
To: gluster-users@gluster.org
Sent: Tuesday, 21 April, 2015 2:08:40 PM
Subject: [Gluster-users] rm: cannot remove `calendar-data': Directory not
empty
Hi All,
I have a list of directories as the following:
rm:
Can you give the stat output of the files from the brick?
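As an illustration, a minimal sketch of what to collect directly on the brick, assuming the brick is exported at /bricks/brick1 and the file is data/file1 (both paths are hypothetical):
# stat from the brick's backend filesystem, not from the mount
stat /bricks/brick1/data/file1
# xattrs of the same file on the brick
getfattr -m . -d -e hex /bricks/brick1/data/file1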
- Original Message -
From: Sharad Shukla sharadshukl...@gmail.com
To: Gluster-users@gluster.org
Sent: Wednesday, 22 April, 2015 10:03:26 PM
Subject: [Gluster-users] Files not visible under mount point
Hi All,
Somehow due to some
FILE_PATH.
Regards,
Susant
- Original Message -
From: Sharad Shukla sharadshukl...@gmail.com
To: Susant Palai spa...@redhat.com
Cc: gluster-users Gluster-users@gluster.org
Sent: Thursday, 23 April, 2015 1:09:48 PM
Subject: Re: [Gluster-users] Files not visible under mount point
Hi
We have addressed a few parts of the rebalance performance, which should be
backported to 3.7 soon.
Regards,
Susant
- Original Message -
From: Raghavendra Bhat rab...@redhat.com
To: Alex Crow ac...@integrafin.co.uk
Cc: gluster-users@gluster.org
Sent: Thursday, 30 April, 2015 2:30:41
I am not aware of the self-heal and rebalance interaction, but the rebalance and mount logs
will be helpful here.
+CCING Ravi
- Original Message -
From: Ben Turner btur...@redhat.com
To: Alex ale...@icecat.biz, Susant Palai spa...@redhat.com
Cc: gluster-users@gluster.org
Sent: Wednesday, May 6
We found a similar crash and the fix for the same is here
http://review.gluster.org/#/c/10389/. You can find the RCA in the commit
message.
Regards,
Susant
- Original Message -
From: Dang Zhiqiang dzq...@163.com
To: gluster-users@gluster.org
Sent: Monday, 25 May, 2015 3:30:16 PM
Comments inline.
- Original Message -
From: Subrata Ghosh subrata.gh...@ericsson.com
To: gluster-de...@gluster.org, gluster-users@gluster.org
Cc: Nobin Mathew nobin.mat...@ericsson.com, Susant Palai
spa...@redhat.com, Vijay Bellur
vbel...@redhat.com
Sent: Thursday, 21 May, 2015 4:26
Hi,
cluster.min-free-disk controls new file creation on the bricks. If you are writing
to existing files on the brick and that is causing the brick to fill up, then you
should most likely run a rebalance.
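For reference, a rough sketch of the relevant commands (VOLNAME is a placeholder and the 10% value is only an example):
# reserve free space so new file creation avoids nearly-full bricks
gluster volume set VOLNAME cluster.min-free-disk 10%
# spread existing data across the bricks
gluster volume rebalance VOLNAME start
gluster volume rebalance VOLNAME status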
Regards,
Susant
- Original Message -
From: Mathieu Chateau
Hi,
If the file being created hashes to the brick which is down, then the creation fails with
ENOENT.
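As an illustration (a sketch, not part of the original mail), you can check which brick an existing path on the mount resolves to via the pathinfo virtual xattr; the mount point and file name below are hypothetical:
# shows the backend brick path(s) holding this file
getfattr -n trusted.glusterfs.pathinfo /mnt/glustervol/dir/file1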
Susant
- Original Message -
From: "Leonid Isaev"
To: gluster-users@gluster.org
Sent: Thursday, 8 October, 2015 7:54:07 AM
Subject: [Gluster-users] Writing to
Comments inline.
- Original Message -
From: Mohamed Pakkeer mdfakk...@gmail.com
To: Susant Palai spa...@redhat.com
Cc: Mathieu Chateau mathieu.chat...@lotp.fr, gluster-users
gluster-users@gluster.org, Gluster Devel gluster-de...@gluster.org,
Vijay Bellur vbel...@redhat.com, Pranith
Comments inline.
++CCing Pranith and Ashish to elaborate on the disperse behaviour.
- Original Message -
From: Mohamed Pakkeer mdfakk...@gmail.com
To: Susant Palai spa...@redhat.com, Vijay Bellur vbel...@redhat.com
Cc: Mathieu Chateau mathieu.chat...@lotp.fr, gluster-users
gluster-users
Mohamed,
I will look into the weighted rebalance behavior.
Susant
- Original Message -
From: Mohamed Pakkeer mdfakk...@gmail.com
To: Susant Palai spa...@redhat.com
Cc: Mathieu Chateau mathieu.chat...@lotp.fr, gluster-users
gluster-users@gluster.org, Gluster Devel gluster-de
Hi PuYun,
Would you be able to run rebalance again and take statedumps at intervals
when you see high memory usage? Here are the details.
## How to generate a statedump
We can find the directory where statedump files are created using the 'gluster
--print-statedumpdir' command.
Create that directory
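A rough sketch of the usual statedump sequence (the PID below is a placeholder; /var/run/gluster is the typical default location, so treat it as an assumption):
# find the directory where statedump files will be written
gluster --print-statedumpdir
# create it if it does not already exist
mkdir -p /var/run/gluster
# send SIGUSR1 to the rebalance (glusterfs) process to trigger a dump
kill -SIGUSR1 <rebalance-pid>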
that will be helpful?
- Original Message -
From: "Susant Palai" <spa...@redhat.com>
To: "PuYun" <clou...@126.com>
Cc: "gluster-users" <gluster-users@gluster.org>
Sent: Thursday, 17 December, 2015 12:20:16 PM
Subject: Re: [Gluster-users] How to diagnose v
Hi PuYun,
We need to figure out some mechanism to get the huge log files. Until then,
here is something I can think of that could be affecting the performance.
Rebalance normally starts at the medium level [performance wise], which means
that in your case it will generate two threads for
usterfsd
for bricks. Only 1 glusterfsd process was using very large memory, and it is related to the
newly added brick. The other 2 processes seem normal. If that happens again, I
will send you the statedump.
Thank you.
PuYun
From: Susant Palai
Date: 2015-12-17 14:50
To: PuYun
CC: gluster-use
Hi,
Please pass on the rebalance log from the 1st server for more analysis; it can
be found under /var/log/glusterfs/"$VOL-rebalance.log".
We also need the current layout xattrs from both the bricks, which can be
extracted with the following command:
"getfattr -m . -de hex <$BRICK_PATH>".
Hi Wade,
I would request you to share the rebalance core file for further analysis.
Thanks,
Susant
- Original Message -
> From: "Wade Holler"
> To: gluster-users@gluster.org
> Sent: Wednesday, 6 July, 2016 12:07:05 AM
> Subject: [Gluster-users] rebalance
Holler" <wade.hol...@gmail.com>
> To: "Susant Palai" <spa...@redhat.com>
> Cc: gluster-users@gluster.org
> Sent: Thursday, 7 July, 2016 5:39:44 PM
> Subject: Re: [Gluster-users] rebalance immediately fails 3.7.11, 3.7.12, 3.8.0
>
> Ok. Could you please point m
,
Susant Palai
- Original Message -
> From: "Sergei Gerasenko" <gera...@gmail.com>
> To: gluster-users@gluster.org
> Sent: Wednesday, 3 August, 2016 6:46:45 PM
> Subject: [Gluster-users] gluster reverting directory owndership?
>
> Hi,
>
> It
to data inconsistency in the files, owing to writes by more than one client
incorrectly succeeding on a file.
In this talk, I will present the design of lock migration, its status,
and how it
solves the problem of data inconsistency.
Thanks,
Susant Palai
This does not restrict tiered migrations.
Susant
On 18 Jan 2018 8:18 pm, "Milind Changire" wrote:
On Tue, Jan 16, 2018 at 2:52 PM, Raghavendra Gowdappa
wrote:
> All,
>
> Patch [1] prevents migration of opened files during rebalance operation.
> If
disk space available on
the target nodes. You can start remove-brick again and it should move out
the remaining set of files to the other bricks.
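A minimal sketch of restarting remove-brick (the volume name and brick path are placeholders):
gluster volume remove-brick VOLNAME server1:/bricks/brick1 start
gluster volume remove-brick VOLNAME server1:/bricks/brick1 status
# once the status shows completed with no failures:
gluster volume remove-brick VOLNAME server1:/bricks/brick1 commit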
>
> Thanks
> Taste
>
>
> Am 12.03.2019 10:49:13, schrieb Susant Palai:
>
> Would it be possible for you to pass the rebala
dress-family: inet
>
> Ok, since there is enough disk space on the other bricks and I actually didn't
> complete the brick-remove, can I rerun brick-remove to rebalance the last files and
> folders?
>
> Thanks
> Taste
>
>
> Am 12.03.2019 10:49:13, schrieb Susant Palai:
>
> Would
ts still a Bug.
>
Ok, then please file a bug with the details and we can discuss there.
Susant
> Thx.
>
> Am 13.03.2019 08:33:35, schrieb Susant Palai:
>
>
>
> On Tue, Mar 12, 2019 at 5:16 PM Taste-Of-IT
> wrote:
>
> Hi Susant,
>
> and thanks for your fast reply and
Would it be possible for you to pass the rebalance log file on the node
from which you want to remove the brick? (location :
/var/log/glusterfs/)
+ the following information:
1 - gluster volume info
2 - gluster volume status
3 - df -h output on all 3 nodes
Susant
On Tue, Mar 12, 2019 at
On Tue, May 19, 2020 at 12:15 PM Aravinda VK wrote:
>
>
> On 19-May-2020, at 12:05 PM, Susant Palai wrote:
>
>
>
> On Thu, Apr 30, 2020 at 6:31 AM Artem Russakovskii
> wrote:
>
>> Hi,
>>
>> Every time I ls large dirs in our 1x4 replicate gluste
From the logs it looks like most of the directories need heal, and this
could slow down the ls -R operation. A possible reason for holes=1 in the
message could be that one of the bricks was down while the mkdirs were going on, or
you might have added a new brick to the cluster recently.
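To check for pending heals on the replicate volume, something like the following can be used (VOLNAME is a placeholder; the summary form depends on your version supporting it):
gluster volume heal VOLNAME info
gluster volume heal VOLNAME info summary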
On Tue, May 19,
On Thu, Apr 30, 2020 at 6:31 AM Artem Russakovskii
wrote:
> Hi,
>
> Every time I ls large dirs in our 1x4 replicate gluster volume, I get a
> ton of these in the logs.
>
> If I run the same ls right away again, they won't repeat, but inevitably,
> in a couple of hours or days, they show up
s - it's been the same 4
> bricks.
>
> We need to get to the bottom of this.
>
> Sincerely,
> Artem
>
> --
> Founder, Android Police <http://www.androidpolice.com>, APK Mirror
> <http://www.apkmirror.com/>, Illogical Robot LLC
> beerpla.net | @ArtemR <
On Fri, May 29, 2020 at 1:28 PM jifeng-call <17607319...@163.com> wrote:
> Hi All,
> I have 6 servers that form a glusterfs 2x3 distributed replication volume,
> the details are as follows:
>
> [root@node1 ~]# gluster volume info
> Volume Name: ksvd_vol
> Type: Distributed-Replicate
> Volume ID:
.
We request our community to try out the feature and give us feedback.
More information regarding it will follow.
Thanks & Regards,
Susant Palai
[1] https://review.gluster.org/#/c/glusterfs/+/24443/
<https://review.gluster.org/#/c/glusterfs/+/24443/>
Community Meetin
at 11:16 AM Susant Palai wrote:
> Hi,
> Recently, we have pushed some performance improvements for Rebalance
> Crawl which used to consume a significant amount of time, out of the entire
> rebalance process.
>
>
> The patch [1] is recently merged in upstream and may land
> On 03-Aug-2020, at 13:58, Aravinda VK wrote:
>
> Interesting numbers. Thanks for the effort.
>
> What is the unit of old/new numbers? seconds?
Minutes.
>
>> On 03-Aug-2020, at 12:47 PM, Susant Palai > <mailto:spa...@redhat.com>> wrote:
>>
The log messages are fine. Since you added a new brick, the client is
responding to that by syncing its in-memory layout with the latest server layout.
The performance drop could be because of locks taken during this layout sync.
> On 02-Jul-2020, at 20:09, Shreyansh Shah
> wrote:
>
> Hi All,
>
ered from
kernel as part of a fop and the directory will be updated with the layout)
> On Mon, Jul 13, 2020 at 1:35 PM Susant Palai <mailto:spa...@redhat.com>> wrote:
> The log messages are fine. Since you added a new brick, the client is
> responding to that by syncing its in-memory