Re: [Gluster-users] [Gluster-devel] GlusterFS and the logging framework

2014-05-01 Thread Nithya Balachandran
- From: Anand Subramanian ansub...@redhat.com To: Nithya Balachandran nbala...@redhat.com Cc: gluster-de...@gluster.org, gluster-users gluster-users@gluster.org Sent: Wednesday, 30 April, 2014 7:37:14 PM Subject: Re: [Gluster-devel] GlusterFS and the logging framework Thanks for the detailed

Re: [Gluster-users] [Gluster-devel] GlusterFS and the logging framework

2014-05-01 Thread Nithya Balachandran
Thanks Joe. Assuming you like approach#2, please let us know of anything else that you would find helpful in the gluster logs. Thanks, Nithya - Original Message - From: Joe Julian j...@julianfamily.org To: Nithya Balachandran nbala...@redhat.com, gluster-de...@gluster.org Cc: gluster

Re: [Gluster-users] [Gluster-devel] GlusterFS and the logging framework

2014-05-07 Thread Nithya Balachandran
Thanks Vijay. I will go ahead with approach 2. Regards, Nithya - Original Message - From: Vijay Bellur vbel...@redhat.com To: Nithya Balachandran nbala...@redhat.com Cc: Dan Lambright dlamb...@redhat.com, gluster-users gluster-users@gluster.org, gluster-de...@gluster.org Sent: Wednesday

Re: [Gluster-users] [Gluster-devel] GlusterFS and the logging framework

2014-05-16 Thread Nithya Balachandran
Agreed. Such a format would make log messages far more readable as well as making it easier for applications to parse them. Nithya - Original Message - From: Marcus Bointon mar...@synchromedia.co.uk To: gluster-users gluster-users@gluster.org, gluster-de...@gluster.org Sent: Tuesday, 13

Re: [Gluster-users] Is rebalance completely broken on 3.5.3 ?

2015-03-25 Thread Nithya Balachandran
Hi Alessandro, I am sorry to hear that you are facing problems with rebalance. Currently rebalance does not have the information as to how many files exist on the volume and so cannot calculate/estimate the time it will take to complete. Improving the rebalance status output to provide that
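The per-node progress counters can still be inspected while a rebalance runs; a minimal sketch, assuming a hypothetical volume name `myvol`:

```shell
# Show per-node counters: files scanned/rebalanced, failures, and run status.
# These are cumulative counts, not an estimate of time remaining.
gluster volume rebalance myvol status
```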

Re: [Gluster-users] lots of nfs.log activity since upgrading to 3.4.6

2015-03-25 Thread Nithya Balachandran
Hi, This was inadvertently introduced with another patch. The fix was made to master (http://review.gluster.org/#/c/8621/), 3.6, and 3.5, but it looks like it was not backported to the 3.4 branch. Regards, Nithya - Original Message - From: Matt m...@mattlantis.com To:

Re: [Gluster-users] lots of nfs.log activity since upgrading to 3.4.6

2015-03-26 Thread Nithya Balachandran
Hi Matt, The fix is available at : http://review.gluster.org/#/c/10008 This will be taken in for 3.4.7. Regards, Nithya - Original Message - From: Matt m...@mattlantis.com To: Nithya Balachandran nbala...@redhat.com Sent: Wednesday, 25 March, 2015 6:24:25 PM Subject: Re: [Gluster

Re: [Gluster-users] Is rebalance completely broken on 3.5.3 ?

2015-03-26 Thread Nithya Balachandran
. Regards, Nithya - Original Message - From: Alessandro Ipe alessandro@meteo.be To: Nithya Balachandran nbala...@redhat.com Cc: gluster-users@gluster.org Sent: Wednesday, 25 March, 2015 5:42:02 PM Subject: Re: [Gluster-users] Is rebalance completely broken on 3.5.3 ? Hi Nithya, Thanks

Re: [Gluster-users] One host won't rebalance

2015-06-05 Thread Nithya Balachandran
To: Branden Timm Cc: Shyamsundar Ranganathan; Susant Palai; gluster-users@gluster.org; Atin Mukherjee; Nithya Balachandran Subject: Re: [Gluster-users] One host won't rebalance Sent from Samsung Galaxy S4 On 4 Jun 2015 22:18, Branden Timm bt...@wisc.edumailto:bt...@wisc.edu wrote

Re: [Gluster-users] One host won't rebalance

2015-06-08 Thread Nithya Balachandran
:client_setvolume_cbk] 0-bigdata2-client-1: SETVOLUME on remote-host failed: Authentication for all subvols on gluster-6. Can you send us the brick logs for those as well? Thanks, Nithya - Original Message - From: Branden Timm bt...@wisc.edu To: Nithya Balachandran nbala...@redhat.com Cc

Re: [Gluster-users] How to diagnose volume rebalance failure?

2015-12-14 Thread Nithya Balachandran
Hi, Can you send us the rebalance log? Regards, Nithya - Original Message - > From: "PuYun" > To: "gluster-users" > Sent: Monday, December 14, 2015 11:33:40 AM > Subject: Re: [Gluster-users] How to diagnose volume rebalance failure? > >

Re: [Gluster-users] DHT error

2016-06-07 Thread Nithya Balachandran
On Tue, Jun 7, 2016 at 2:01 PM, Emmanuel Dreyfus wrote: > Hello > > I get this message in the log, but I have trouble to figure > what it means. Any hint? > > [2016-06-07 06:41:17.366490] I [MSGID: 109036] > [dht-common.c:8173:dht_log_new_layout_for_dir_selfheal] 0-gfs-dht:

Re: [Gluster-users] Disk failed, how do I remove brick?

2016-06-14 Thread Nithya Balachandran
On Fri, Jun 10, 2016 at 1:25 AM, Phil Dumont wrote: > Just started trying gluster, to decide if we want to put it into > production. > > Running version 3.7.11-1 > > Replicated, distributed volume, two servers, 20 bricks per server: > > [root@storinator1 ~]#

Re: [Gluster-users] Always writeable distributed volume

2017-02-01 Thread Nithya Balachandran
On 1 February 2017 at 19:30, Jesper Led Lauridsen TS Infra server wrote: > Arbiter, isn't that only used where you want replica, but same storage > space. > > I would like a distributed volume where I can write, even if one of the > bricks fail. No replication. > > DHT does not

Re: [Gluster-users] rebalance and volume commit hash

2017-01-24 Thread Nithya Balachandran
On 20 January 2017 at 01:15, Shyam wrote: > > > On 01/17/2017 11:40 AM, Piotr Misiak wrote: > >> >> On 17 Jan 2017 at 17:10, Jeff Darcy wrote: >> >>> >>> Do you think that is wise to run rebalance process manually on every brick with the actual commit

Re: [Gluster-users] Question about heterogeneous bricks

2017-02-21 Thread Nithya Balachandran
Hi, Ideally, both bricks in a replica set should be of the same size. Ravi, can you confirm? Regards, Nithya On 21 February 2017 at 16:05, Daniele Antolini wrote: > Hi Serkan, > > thanks a lot for the answer. > > So, if you are correct, in a distributed with replica

Re: [Gluster-users] nfs-ganesha logs

2017-03-01 Thread Nithya Balachandran
On 1 March 2017 at 18:25, Soumya Koduri wrote: > I am not sure if there are any outstanding issues with exposing shard > volume via gfapi. CCin Krutika. > > On 02/28/2017 01:29 PM, Mahdi Adnan wrote: > >> Hi, >> >> >> We have a Gluster volume hosting VMs for ESXi exported via

Re: [Gluster-users] Please help

2016-10-26 Thread Nithya Balachandran
On 26 October 2016 at 19:47, Leung, Alex (398C) wrote: > Does anyone have any idea how to troubleshoot the following problem? > > > > Alex > > > Can you please provide the gluster client logs (in /var/log/glusterfs) and the gluster volume info? Regards, Nithya > > >

Re: [Gluster-users] [Gluster-devel] Feedback on DHT option "cluster.readdir-optimize"

2016-11-10 Thread Nithya Balachandran
On 8 November 2016 at 20:21, Kyle Johnson wrote: > Hey there, > > We have a number of processes which daily walk our entire directory tree > and perform operations on the found files. > > Pre-gluster, this process was able to complete within 24 hours of > starting. After

Re: [Gluster-users] Gluster File Abnormalities

2016-11-15 Thread Nithya Balachandran
Hi Kevin, On 15 November 2016 at 20:56, Kevin Leigeb wrote: > All - > > > > We recently moved from an old cluster running 3.7.9 to a new one running > 3.8.4. To move the data we rsync’d all files from the old gluster nodes > that were not in the .glusterfs directory and

Re: [Gluster-users] Gluster File Abnormalities

2016-11-15 Thread Nithya Balachandran
om a backup. Regards, Nithya > Thanks, > > Kevin > > > > *From:* Nithya Balachandran [mailto:nbala...@redhat.com] > *Sent:* Tuesday, November 15, 2016 10:21 AM > *To:* Kevin Leigeb <kevin.lei...@wisc.edu> > *Cc:* gluster-users@gluster.org > *Subject:* Re:

Re: [Gluster-users] Gluster File Abnormalities

2016-11-16 Thread Nithya Balachandran
rnal operations that are performed. Thanks, Nithya > > > *From:* Nithya Balachandran [mailto:nbala...@redhat.com] > *Sent:* Tuesday, November 15, 2016 10:55 AM > *To:* Kevin Leigeb <kevin.lei...@wisc.edu> > > *Subject:* Re: [Gluster-users] Gluster File Abnormalities > >

Re: [Gluster-users] Rebalancing after adding larger bricks

2016-10-14 Thread Nithya Balachandran
On 11 October 2016 at 22:32, Jackie Tung wrote: > Joe, > > Thanks for that, that was educational. Gluster docs claim that since 3.7, > DHT hash ranges are weighted based on brick sizes by default: > > $ gluster volume get Option Value > >

Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-03-20 Thread Nithya Balachandran
Hi, Do you know the GFIDs of the VM images which were corrupted? Regards, Nithya On 20 March 2017 at 20:37, Krutika Dhananjay wrote: > I looked at the logs. > > From the time the new graph (since the add-brick command you shared where > bricks 41 through 44 are added) is

Re: [Gluster-users] rebalance fix layout necessary

2017-04-04 Thread Nithya Balachandran
On 4 April 2017 at 12:33, Amudhan P wrote: > Hi, > > I have a query on rebalancing. > > let's consider following is my folder hierarchy. > > parent1-fol (parent folder) > |_ > class-fol-1 ( 1 st level subfolder) >

Re: [Gluster-users] Rebalance info

2017-04-17 Thread Nithya Balachandran
On 17 April 2017 at 16:04, Gandalf Corvotempesta < gandalf.corvotempe...@gmail.com> wrote: > Let's assume a replica 3 cluster with 3 bricks used at 95% > > If I add 3 bricks more , a rebalance (in addition to the corruption :-) ) > will move some shards to the newly added bricks so that old

Re: [Gluster-users] Cannot remove-brick/migrate data

2017-03-08 Thread Nithya Balachandran
On 8 March 2017 at 23:34, Jarsulic, Michael [CRI] < mjarsu...@bsd.uchicago.edu> wrote: > I am having issues with one of my systems that houses two bricks and want > to bring it down for maintenance. I was able to remove the first brick > successfully and committed the changes. The second brick is

Re: [Gluster-users] rebalance fix layout necessary

2017-04-06 Thread Nithya Balachandran
t-handshake.c:202:client_set_lk_version_cbk] > 2-gfs-vol-client-1045: Server lk version = 1 > > > Regards, > Amudhan > > On Tue, Apr 4, 2017 at 4:31 PM, Amudhan P <amudha...@gmail.com> wrote: > >> I mean time takes for listing folders and files? because of "rebala

Re: [Gluster-users] Rebalance task fails

2017-07-09 Thread Nithya Balachandran
On 7 July 2017 at 15:42, Szymon Miotk wrote: > Hello everyone, > > > I have a problem rebalancing a Gluster volume. > Gluster version is 3.7.3. > My 1x3 replicated volume became full, so I've added three more bricks > to make it 2x3 and wanted to rebalance. > But every time I

Re: [Gluster-users] Rebalance task fails

2017-07-13 Thread Nithya Balachandran
list with large > attachments. > Could someone explain what is index in Gluster? > Unfortunately index is popular word, so googling is not very helpful. > > Best regards, > Szymon Miotk > > On Sun, Jul 9, 2017 at 6:37 PM, Nithya Balachandran <nbala...@redhat.com> > wrot

Re: [Gluster-users] Rebalance task fails

2017-07-13 Thread Nithya Balachandran
process had already crashed. The index here is simply the value of the number of nodes on which the rebalance process should be running - it is used to track the rebalance status on all nodes. Best regards, > Szymon Miotk > > On Thu, Jul 13, 2017 at 10:12 AM, Nithya Balachand

Re: [Gluster-users] Hot Tier

2017-07-30 Thread Nithya Balachandran
Milind and Hari, Can you please take a look at this? Thanks, Nithya On 31 July 2017 at 05:12, Dmitri Chebotarov <4dim...@gmail.com> wrote: > Hi > > I'm looking for an advise on hot tier feature - how can I tell if the hot > tier is working? > > I've attached replicated-distributed hot tier to

Re: [Gluster-users] Reliability issues with Gluster 3.10 and shard

2017-05-15 Thread Nithya Balachandran
On 15 May 2017 at 11:01, Benjamin Kingston wrote: > I resolved this with the following settings, particularly disabling > features.ctr-enabled > That's odd. CTR should be enabled for tiered volumes. Was it enabled by default? > > Volume Name: storage2 > Type:

Re: [Gluster-users] Deleting large files on sharded volume hangs and doesn't delete shards

2017-05-17 Thread Nithya Balachandran
I don't think we have tested shards with a tiered volume. Do you see such issues on non-tiered sharded volumes? Regards, Nithya On 18 May 2017 at 00:51, Walter Deignan wrote: > I have a reproducible issue where attempting to delete a file large enough > to have been

Re: [Gluster-users] Gluster Documentation Feedback

2017-06-19 Thread Nithya Balachandran
Gentle reminder ... On 15 June 2017 at 10:43, Nithya Balachandran <nbala...@redhat.com> wrote: > Hi, > > We are looking at improving our documentation ( > http://gluster.readthedocs.io/en/latest/) and would like your feedback. > > Please let us know what would make the d

Re: [Gluster-users] [Gluster-devel] [Gluster-Maintainers] Release 3.11.1: Scheduled for 20th of June

2017-06-23 Thread Nithya Balachandran
On 22 June 2017 at 22:44, Pranith Kumar Karampuri wrote: > > > On Wed, Jun 21, 2017 at 9:12 PM, Shyam wrote: > >> On 06/21/2017 11:37 AM, Pranith Kumar Karampuri wrote: >> >>> >>> >>> On Tue, Jun 20, 2017 at 7:37 PM, Shyam >>

Re: [Gluster-users] Distributed re-balance issue

2017-05-24 Thread Nithya Balachandran
On 24 May 2017 at 22:45, Nithya Balachandran <nbala...@redhat.com> wrote: > > > On 24 May 2017 at 21:55, Mahdi Adnan <mahdi.ad...@outlook.com> wrote: > >> Hi, >> >> >> Thank you for your response. >> >> I have around 15 files

Re: [Gluster-users] Distributed re-balance issue

2017-05-24 Thread Nithya Balachandran
On 24 May 2017 at 20:02, Mohammed Rafi K C wrote: > > > On 05/23/2017 08:53 PM, Mahdi Adnan wrote: > > Hi, > > > I have a distributed volume with 6 bricks, each have 5TB and it's hosting > large qcow2 VM disks (I know it's reliable but it's not important data) > > I started

Re: [Gluster-users] Distributed re-balance issue

2017-05-25 Thread Nithya Balachandran
ize columns until a file migration is complete, it looked like nothing was happening. > -- > > Respectfully > *Mahdi A. Mahdi* > > -- > *From:* Nithya Balachandran <nbala...@redhat.com> > *Sent:* Wednesday, May 24, 2017 8:16:53 PM > *

Re: [Gluster-users] gluster remove-brick problem

2017-05-19 Thread Nithya Balachandran
Hi, The rebalance could have failed because of any one of several reasons. You would need to check the rebalance log for the volume to figure out why it failed in this case. This should be /var/log/glusterfs/data-rebalance.log on bigdata-dlp-server00.xg01. I can take a look at the log if you
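To pull error-level entries out of such a rebalance log, a quick sketch (the volume name `data` and log path are taken from the message above; the grep pattern relies on the standard gluster log layout, where a severity letter such as E follows the timestamp):

```shell
# Show the most recent error-level lines from the rebalance log
grep ' E \[' /var/log/glusterfs/data-rebalance.log | tail -n 20
```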

Re: [Gluster-users] FW: ATTN: nbalacha IRC - Gluster - BlackoutWNCT requested info for 0byte file issue

2017-05-31 Thread Nithya Balachandran
CCing Ravi (arbiter), Poornima and Raghavendra (parallel readdir) Hi Joshua, I had a quick look at the files you sent across. To summarize the issue, you see empty linkto files on the mount point. From the logs I see that parallel readdir is enabled for this volume:

[Gluster-users] Gluster Documentation Feedback

2017-06-14 Thread Nithya Balachandran
Hi, We are looking at improving our documentation (http://gluster.readthedocs.io/en/latest/) and would like your feedback. Please let us know what would make the documentation more useful by answering a few questions: - Which guides do you use (admin, developer)? - How easy is it to find

Re: [Gluster-users] Remove-brick failed

2017-05-05 Thread Nithya Balachandran
Hi, You need to check the rebalance logs (glu_linux_dr2_oracle-rebalance.log) on glustoretst03.net.dr.dk and glustoretst04.net.dr.dk to see what went wrong. Regards, Nithya On 4 May 2017 at 11:46, Jesper Led Lauridsen TS Infra server wrote: >

Re: [Gluster-users] [Gluster-devel] [Gluster-Maintainers] Release 3.11: Has been Branched (and pending feature notes)

2017-05-05 Thread Nithya Balachandran
We have one more blocker bug (opened today): https://bugzilla.redhat.com/show_bug.cgi?id=1448307 On 5 May 2017 at 15:31, Kaushal M wrote: > On Thu, May 4, 2017 at 6:40 PM, Kaushal M wrote: > > On Thu, May 4, 2017 at 4:38 PM, Niels de Vos

Re: [Gluster-users] [Gluster-devel] Don't allow data loss via add-brick (was Re: Add single server)

2017-05-02 Thread Nithya Balachandran
On 2 May 2017 at 16:59, Shyam wrote: > Talur, > > Please wait for this fix before releasing 3.10.2. > > We will take in the change to either prevent add-brick in > sharded+distrbuted volumes, or throw a warning and force the use of --force > to execute this. > > IIUC, the

Re: [Gluster-users] Distributed re-balance issue

2017-05-24 Thread Nithya Balachandran
few gigabytes left which will > fill in the next half hour or so. > > attached are the logs for all 6 bricks. > > Hi, Just to clarify, did you run a rebalance (gluster volume rebalance start) or did you only run remove-brick ? -- > > Respectfully > *Mahdi A. Mahdi* > &

Re: [Gluster-users] "Input/output error" on mkdir for PPC64 based client

2017-09-22 Thread Nithya Balachandran
Hi Walter, I don't see EIO errors in the log snippet and the messages look fine so far. Can you send across the rest of the log where you see the EIO? On 20 September 2017 at 23:57, Walter Deignan wrote: > I put the share into debug mode and then repeated the process from a

Re: [Gluster-users] data corruption - any update?

2017-10-03 Thread Nithya Balachandran
On 3 October 2017 at 13:27, Gandalf Corvotempesta < gandalf.corvotempe...@gmail.com> wrote: > Any update about multiple bugs regarding data corruptions with > sharding enabled ? > > Is 3.12.1 ready to be used in production? > Most issues have been fixed but there appears to be one more race for

Re: [Gluster-users] Gluster CLI Feedback

2017-10-16 Thread Nithya Balachandran
Gentle reminder. Thanks to those who have already responded. Nithya On 11 October 2017 at 14:38, Nithya Balachandran <nbala...@redhat.com> wrote: > Hi, > > As part of our initiative to improve Gluster usability, we would like > feedback on the current Gluster CLI. Gl

Re: [Gluster-users] data corruption - any update?

2017-10-05 Thread Nithya Balachandran
That is correct. > -bill > > > > On 10/3/2017 10:30 PM, Krutika Dhananjay wrote: > > > > On Wed, Oct 4, 2017 at 10:51 AM, Nithya Balachandran <nbala...@redhat.com> > wrote: > >> >> >> On 3 October 2017 at 13:27, Gandalf Corvotempesta < >>

Re: [Gluster-users] Distribute rebalance issues

2017-10-17 Thread Nithya Balachandran
On 17 October 2017 at 14:48, Stephen Remde wrote: > Hi, > > > I have a rebalance that has failed on one peer twice now. Rebalance logs > below (directories anonymised and some irrelevant log lines cut). It looks > like it loses connection to the brick, but

Re: [Gluster-users] Distribute rebalance issues

2017-10-17 Thread Nithya Balachandran
connection > from node-dc4-02-29040-2017/08/04-09:31:22:842268-video-client-4-7-405 > [2017-10-17 03:54:05.433353] I [MSGID: 101055] > [client_t.c:415:gf_client_unref] 0-video-server: Shutting down connection > node-dc4-02-29040-2017/08/04-09:31:22:842268-video-client-4-7-405

[Gluster-users] Gluster CLI Feedback

2017-10-11 Thread Nithya Balachandran
Hi, As part of our initiative to improve Gluster usability, we would like feedback on the current Gluster CLI. Gluster 4.0 upstream development is currently in progress and it is an ideal time to consider CLI changes. Answers to the following would be appreciated: 1. How often do you use the

[Gluster-users] Gluster CLI reference

2017-10-16 Thread Nithya Balachandran
Hi, As part of our initiative to improve our docs, we have made a few changes over the past few weeks. One of these is a CLI reference [1]. This is still a WIP so not all commands have been documented. Is this something you would find useful? Would you like to see more information captured as

Re: [Gluster-users] Gluster status fails

2017-08-30 Thread Nithya Balachandran
On 30 August 2017 at 20:54, mohammad kashif wrote: > Hi > > I am running a 400TB five node purely distributed gluster setup. I am > troubleshooting an issue where some times files creation fails. I found > that volume status is not working > > gluster volume status >

Re: [Gluster-users] data corruption - any update?

2017-10-11 Thread Nithya Balachandran
On 11 October 2017 at 22:21, wrote: > > corruption happens only in this cases: > > > > - volume with shard enabled > > AND > > - rebalance operation > > > > I believe so > > > So, what If I have to replace a failed brick/disks ? Will this trigger > > a rebalance and then

Re: [Gluster-users] glusterfs brick server use too high memory

2017-11-12 Thread Nithya Balachandran
total_allocs=661873332 > > TIme: 2017.11.9 17:15 (Today) > [features/locks.www-volume-locks - usage-type gf_common_mt_strdup > memusage] > size=792538295 > num_allocs=752904534 > max_size=792538295 > max_num_allocs=752904534 > total_allocs=800889589 > > The state

Re: [Gluster-users] Online Rebalancing

2017-12-13 Thread Nithya Balachandran
On 13 December 2017 at 17:34, mohammad kashif wrote: > Hi > > I have a five node 300 TB distributed gluster volume with zero > replication. I am planning to add two more servers which will add around > 120 TB. After fixing the layout, can I rebalance the volume while

Re: [Gluster-users] interval or event to evaluate free disk space?

2017-12-18 Thread Nithya Balachandran
On 19 December 2017 at 00:11, Stefan Solbrig wrote: > Hi all, > > with the option "cluster.min-free-disk" set, glusterfs avoids placing > files on bricks that are "too full". > I'd like to understand when the free space on the bricks is calculated. > It seems to me that this
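For reference, a minimal sketch of setting and inspecting the option, assuming a hypothetical volume name `myvol`; the value can be given as a percentage of brick size or as an absolute size:

```shell
# Ask DHT to avoid placing new files on bricks with less than 10% free space
gluster volume set myvol cluster.min-free-disk 10%

# Check the current value
gluster volume get myvol cluster.min-free-disk
```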

Re: [Gluster-users] Error logged in fuse-mount log file

2017-11-13 Thread Nithya Balachandran
upgrade it to latest one. I am sure this > would have fix . > > > Ashish > > > > > -- > *From: *"Nithya Balachandran" <nbala...@redhat.com> > *To: *"Amudhan P" <amudha...@gmail.com>, "Ashish Pa

Re: [Gluster-users] Error logged in fuse-mount log file

2017-11-13 Thread Nithya Balachandran
in disperse set for the folder. it > all same there is no difference. > > regards > Amudhan P > > > > > On Fri, Nov 10, 2017 at 9:02 AM, Nithya Balachandran <nbala...@redhat.com> > wrote: > >> Hi, >> >> Comments inline. >> >> Regards

Re: [Gluster-users] Missing files on one of the bricks

2017-11-16 Thread Nithya Balachandran
On 15 November 2017 at 19:57, Frederic Harmignies < frederic.harmign...@elementai.com> wrote: > Hello, we have 2x files that are missing from one of the bricks. No idea > how to fix this. > > Details: > > # gluster volume info > > Volume Name: data01 > Type: Replicate > Volume ID:

Re: [Gluster-users] Gluster clients can't see directories that exist or are created within a mounted volume, but can enter them.

2017-11-05 Thread Nithya Balachandran
Hi, Please provide the gluster volume info. Do you see any errors in the client mount log file (/var/log/glusterfs/var-lib-mountedgluster.log)? Thanks, Nithya On 6 November 2017 at 05:13, Sam McLeod wrote: > We've got an issue with Gluster (3.12.x) where clients

Re: [Gluster-users] glusterfs brick server use too high memory

2017-11-09 Thread Nithya Balachandran
On 8 November 2017 at 17:16, Yao Guotao wrote: > Hi all, > I'm glad to join the glusterfs community. > > I have a glusterfs cluster: > Nodes: 4 > System: Centos7.1 > Glusterfs: 3.8.9 > Each Node: > CPU: 48 core > Mem: 128GB > Disk: 1*4T > > There is one Distributed Replicated

Re: [Gluster-users] Error logged in fuse-mount log file

2017-11-09 Thread Nithya Balachandran
Hi, Comments inline. Regards, Nithya On 9 November 2017 at 15:05, Amudhan Pandian wrote: > resending mail from another id, doubt on whether mail reaches mailing list. > > > -- Forwarded message -- > From: *Amudhan P* > Date: Tue, Nov

Re: [Gluster-users] Gluster clients can't see directories that exist or are created within a mounted volume, but can enter them.

2017-11-08 Thread Nithya Balachandran
full access to directories and files. > Also testing using the root user. > > > On Mon, Nov 6, 2017 at 1:55 PM, Nithya Balachandran <nbala...@redhat.com> > wrote: > >> Hi, >> >> Please provide the gluster volume info. Do you see any errors in the >> clie

[Gluster-users] Gluster Summit BOF - Rebalance

2017-11-06 Thread Nithya Balachandran
Hi, We had a BOF on Rebalance at the Gluster Summit to get feedback from Gluster users. - Performance has improved over the last few releases and it works well for large files. - However, it is still not fast enough on volumes which contain a lot of directories and small files. The bottleneck

Re: [Gluster-users] Client un-mounting since upgrade to 3.12.9-1 version

2018-06-13 Thread Nithya Balachandran
This is not the same issue as the one you are referring - that was in the RPC layer and caused the bricks to crash. This one is different as it seems to be in the dht and rda layers. It does look like a stack overflow though. @Mohammad, Please send the following information: 1. gluster volume

Re: [Gluster-users] Client un-mounting since upgrade to 3.12.9-1 version

2018-06-13 Thread Nithya Balachandran
+Poornima who works on parallel-readdir. @Poornima, Have you seen anything like this before? On 14 June 2018 at 10:07, Nithya Balachandran wrote: > This is not the same issue as the one you are referring - that was in the > RPC layer and caused the bricks to crash. This one is dif

Re: [Gluster-users] Client un-mounting since upgrade to 3.12.9-1 version

2018-06-15 Thread Nithya Balachandran
.allow: X.Y.Z.* > transport.address-family: inet > performance.readdir-ahead: on > nfs.disable: on > > > Thanks > > Kashif > > On Thu, Jun 14, 2018 at 5:39 AM, Nithya Balachandran > wrote: > >> +Poornima who works on parallel-readdir. >> >> @Poornim

Re: [Gluster-users] Client un-mounting since upgrade to 3.12.9-1 version

2018-06-15 Thread Nithya Balachandran
On 15 June 2018 at 13:45, Nithya Balachandran wrote: > Hi Mohammad, > > I was unable to reproduce this on a volume created on a system running > 3.12.9. > > Can you send me the FUSE volfiles for the volume atlasglust? They will be > in /var/lib/glusterd/vols/atlasglust/

Re: [Gluster-users] glusterfs using large amount of ram

2018-06-19 Thread Nithya Balachandran
Hi Jim, Which process is using up the memory? Please take a statedump of it at intervals and send those across. See [1] for details on how to take and read a statedump. Regards, Nithya [1] https://docs.gluster.org/en/v3/Troubleshooting/statedump/ On 20 June 2018 at 00:49, Jim Kusznir wrote:
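The statedump step can be sketched as follows, assuming a hypothetical volume name `myvol`; by default the dumps are written under /var/run/gluster:

```shell
# Dump state for all brick processes of the volume
gluster volume statedump myvol

# For a fuse client, send SIGUSR1 to its glusterfs process instead
kill -USR1 "$(pgrep -f 'glusterfs.*myvol' | head -n 1)"
```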

Re: [Gluster-users] Client un-mounting since upgrade to 3.12.9-1 version

2018-06-20 Thread Nithya Balachandran
it happened again or > more frequently. > > Cheers > > Kashif > > On Wed, Jun 20, 2018 at 12:28 PM, Nithya Balachandran > wrote: > >> Hi Mohammad, >> >> This is a different crash. How often does it happen? >> >> >> We have managed

Re: [Gluster-users] Gluster rebalance taking many years

2018-05-02 Thread Nithya Balachandran
: /dev/md3 > Mount Options: rw,noatime,nodiratime,attr2,in > ode64,sunit=1024,swidth=3072,noquota > Inode Size : 256 > Disk Space Free : 10.7TB > Total Disk Space : 10.8TB > Inode Count : 2317811968 > Free Inodes : 2314218207 >

Re: [Gluster-users] Gluster rebalance taking many years

2018-04-30 Thread Nithya Balachandran
Hi, This value is an ongoing rough estimate based on the amount of data rebalance has migrated since it started. The values will change as the rebalance progresses. A few questions: 1. How many files/dirs do you have on this volume? 2. What is the average size of the files? 3. What is

Re: [Gluster-users] Healing : No space left on device

2018-05-03 Thread Nithya Balachandran
Hi, We need some more information in order to debug this: the version of Gluster you were running before the upgrade, the output of gluster volume info, and the brick logs for the volume when the operation is performed. Regards, Nithya On 2 May 2018 at 15:19, Hoggins! wrote:

Re: [Gluster-users] A Problem of readdir-optimize

2018-01-04 Thread Nithya Balachandran
palive-interval: 1 > server.keepalive-time: 2 > transport.keepalive: 1 > client.keepalive-count: 1 > client.keepalive-interval: 1 > client.keepalive-time: 2 > features.cache-invalidation: off > network.ping-timeout: 30 > user.smb.guest: no > user.id: 8148 > nfs.disable: on > s

Re: [Gluster-users] Creating cluster replica on 2 nodes 2 bricks each.

2018-01-10 Thread Nithya Balachandran
a/brick1/scratch > Brick2: gluster02ib:/gdata/brick1/scratch > Brick3: gluster01ib:/gdata/brick2/scratch > Brick4: gluster02ib:/gdata/brick2/scratch > Options Reconfigured: > performance.readdir-ahead: on > nfs.disable: on > [root@gluster01 ~]# > > > > ---

Re: [Gluster-users] "linkfile not having link" occurrs sometimes after renaming

2018-01-15 Thread Nithya Balachandran
Hi Paul, The rename operation internally consists of several operations including an unlink of the original file and linkto files if required. Can you provide details of the clients used, the volume type and the exact steps performed so we can try to reproduce this? Thanks, Nithya On 15

Re: [Gluster-users] [Possibile SPAM] Re: Strange messages in mnt-xxx.log

2018-01-22 Thread Nithya Balachandran
logged in 2 > situations: > > 1) A real "Hole" in DHT > > 2) A "virgin" file being created > > I think this is the second situation because that message appears only > when I create a new qcow2 volume to host VM image. > > These messages s

Re: [Gluster-users] [Possibile SPAM] Re: Strange messages in mnt-xxx.log

2018-01-23 Thread Nithya Balachandran
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1537457 On 23 January 2018 at 11:16, Nithya Balachandran <nbala...@redhat.com> wrote: > > > On 17 January 2018 at 16:04, Ing. Luca Lazzeroni - Trend Servizi Srl < > l...@trendservizi.it> wrote: > >> Here's the

Re: [Gluster-users] Strange messages in mnt-xxx.log

2018-01-16 Thread Nithya Balachandran
Hi, On 16 January 2018 at 18:56, Ing. Luca Lazzeroni - Trend Servizi Srl < l...@trendservizi.it> wrote: > Hi, > > I'm testing gluster 3.12.4 and, by inspecting log files > /var/log/glusterfs/mnt-gv0.log (gv0 is the volume name), I found many lines > saying: > > [2018-01-15 09:45:41.066914] I

Re: [Gluster-users] Creating cluster replica on 2 nodes 2 bricks each.

2018-01-09 Thread Nithya Balachandran
Hi, Please let us know what commands you ran so far and the output of the *gluster volume info* command. Thanks, Nithya On 9 January 2018 at 23:06, Jose Sanchez wrote: > Hello > > We are trying to setup Gluster for our project/scratch storage HPC machine > using a

Re: [Gluster-users] Creating cluster replica on 2 nodes 2 bricks each.

2018-01-12 Thread Nithya Balachandran
-- Forwarded message -- From: Jose Sanchez <joses...@carc.unm.edu> Date: 11 January 2018 at 22:05 Subject: Re: [Gluster-users] Creating cluster replica on 2 nodes 2 bricks each. To: Nithya Balachandran <nbala...@redhat.com> Cc: gluster-users <gluster-users@gluster.o

Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-30 Thread Nithya Balachandran
Hi Eva, Can you send us the following: the output of gluster volume info and gluster volume status, plus the log files and a tcpdump for df on a fresh mount point for that volume. Thanks, Nithya On 31 January 2018 at 07:17, Freer, Eva B. wrote: > After OS update to CentOS 7.4 or RedHat 6.9 and

Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-30 Thread Nithya Balachandran
der from any one node so I can confirm if this is the problem? Regards, Nithya On 31 January 2018 at 10:47, Nithya Balachandran <nbala...@redhat.com> wrote: > Hi Eva, > > One more question. What version of gluster were you running before the > upgrade? > > Thanks, > Ni

Re: [Gluster-users] [Gluster-devel] Release 3.12.6: Scheduled for the 12th of February

2018-02-02 Thread Nithya Balachandran
On 2 February 2018 at 11:16, Jiffin Tony Thottan wrote: > Hi, > > It's time to prepare the 3.12.6 release, which falls on the 10th of > each month, and hence would be 12-02-2018 this time around. > > This mail is to call out the following, > > 1) Are there any pending

Re: [Gluster-users] Run away memory with gluster mount

2018-01-28 Thread Nithya Balachandran
Csaba, Could this be the problem of the inodes not getting freed in the fuse process? Daniel, as Ravi requested, please provide access to the statedumps. You can strip out the filepath information. Does your data set include a lot of directories? Thanks, Nithya On 27 January 2018 at 10:23,

Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-31 Thread Nithya Balachandran
l block size: 4096 > > Blocks: Total: 15626471424 Free: 15106888485 Available: 15106888485 > > Inodes: Total: 1250159424 Free: 1250122139 > > File: "/bricks/data_B4" > > ID: 831 Namelen: 255 Type: xfs > > Block size: 4096 Fundamental

Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-31 Thread Nithya Balachandran
. Let me get back to you on this tomorrow. Regards, Nithya > Thanks, > > Eva (865) 574-6894 > > > > *From: *Nithya Balachandran <nbala...@redhat.com> > *Date: *Wednesday, January 31, 2018 at 11:14 AM > *To: *Eva Freer <free...@ornl.gov> > *Cc: *"Greene, Tam

Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-31 Thread Nithya Balachandran
> Nithya, > > > > I will be out of the office for ~10 days starting tomorrow. Is there any > way we could possibly resolve it today? > > > > Thanks, > > Eva (865) 574-6894 > > > > *From: *Nithya Balachandran <nbala...@redhat.com> > *Date: *We

Re: [Gluster-users] Run away memory with gluster mount

2018-02-01 Thread Nithya Balachandran
>>> To: "Raghavendra Gowdappa" <rgowd...@redhat.com>, "Ravishankar N" < >>> ravishan...@redhat.com> >>> Cc: gluster-users@gluster.org, "Csaba Henk" <ch...@redhat.com>, "Niels >>> de Vos" <nde...@redhat.com

Re: [Gluster-users] Error - Disk Full - No Space Left

2018-02-05 Thread Nithya Balachandran
Hi, I have already replied to your earlier email. Did you not receive it? Regards, Nithya On 5 February 2018 at 14:52, Taste-Of-IT wrote: > Hi to all, > > it's sad that no one can help. I tested the 3 bricks and created a new > Volume. But if I want to create a folder, i

Re: [Gluster-users] Fwd: Troubleshooting glusterfs

2018-02-05 Thread Nithya Balachandran
=875076 - > seems starting with one brick is not a good idea.. so we are going to try > starting with 2 bricks. > Please let me know if there are anything else we should consider changing > in our strategy. > > Many thanks in advance! > Nikita Yeryomin > > 2018-02-05 7

Re: [Gluster-users] Fwd: Troubleshooting glusterfs

2018-02-05 Thread Nithya Balachandran
On 5 February 2018 at 15:40, Nithya Balachandran <nbala...@redhat.com> wrote: > Hi, > > > I see a lot of the following messages in the logs: > [2018-02-04 03:22:01.56] I [glusterfsd-mgmt.c:1821:mgmt_getspec_cbk] > 0-glusterfs: No change in volfile,continuing > [20

Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-31 Thread Nithya Balachandran
, > > Eva (865) 574-6894 > > > > *From: *Amar Tumballi <atumb...@redhat.com> > *Date: *Wednesday, January 31, 2018 at 12:15 PM > *To: *Eva Freer <free...@ornl.gov> > *Cc: *Nithya Balachandran <nbala...@redhat.com>, "Greene, Tami McFarlin"

Re: [Gluster-users] df does not show full volume capacity after update to 3.12.4

2018-01-31 Thread Nithya Balachandran
Please note, the file needs to be copied to all nodes. On 1 February 2018 at 09:31, Nithya Balachandran <nbala...@redhat.com> wrote: > Hi, > > I think we have a workaround for until we have a fix in the code. The > following worked on my system. > > Copy the attached file

Re: [Gluster-users] Error - Disk Full - No Space Left

2018-02-04 Thread Nithya Balachandran
Hi, This might be because of: https://github.com/gluster/glusterfs/blob/release-3.13/doc/release-notes/3.13.0.md#ability-to-reserve-back-end-storage-space Please try running the following and see if it solves the problem: gluster volume set storage.reserve 0 Regards, Nithya On 4 February
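The fix suggested above can be sketched as the following CLI fragment. It requires a live gluster cluster, so it is not runnable standalone; the volume name "myvol" is a placeholder, and the full `gluster volume set` syntax takes a volume name before the option (the preview above elides it).

```shell
# Sketch, assuming a running gluster cluster and a volume named "myvol".
# storage.reserve is the percentage of brick space gluster keeps back;
# setting it to 0 disables the reservation (available since 3.13).
gluster volume set myvol storage.reserve 0

# Confirm the current value of the option.
gluster volume get myvol storage.reserve
```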

Re: [Gluster-users] Fwd: Troubleshooting glusterfs

2018-02-04 Thread Nithya Balachandran
Hi, Please provide the log for the mount process from the node on which you have mounted the volume. This should be in /var/log/glusterfs, and the name of the file will be the hyphenated path of the mount point. E.g., if the volume is mounted at /mnt/glustervol, the log file will be
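The naming convention described above can be sketched in shell. This is a minimal illustration, assuming the standard convention: drop the leading slash, replace remaining slashes with hyphens, and append ".log"; the mount point "/mnt/glustervol" is the example from the message.

```shell
# Derive the glusterfs client log file name from a mount point.
mountpoint="/mnt/glustervol"   # example mount point from the message

# Drop the leading "/" and turn the remaining "/" into "-".
logname="$(printf '%s' "${mountpoint#/}" | tr '/' '-').log"
logfile="/var/log/glusterfs/${logname}"

echo "$logfile"   # -> /var/log/glusterfs/mnt-glustervol.log
```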

Re: [Gluster-users] Error - Disk Full - No Space Left

2018-02-06 Thread Nithya Balachandran
Hi, Please send me the following : gluster volume info The client mount and brick log files (see http://docs.gluster.org/en/latest/Administrator%20Guide/Logging/ for where they exist and the naming convention) Regards, Nithya On 6 February 2018 at 15:18, Taste-Of-IT
