Re: [Gluster-users] Client disconnections, memory use

2019-11-12 Thread Nithya Balachandran
Hi, For the memory increase, please capture statedumps of the process at hourly intervals and send them across. https://docs.gluster.org/en/latest/Troubleshooting/statedump/ describes how to generate a statedump for the client process. Regards, Nithya On Wed, 13 Nov 2019 at 05:18, Jamie
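In practice, the statedump procedure in that link boils down to something like the commands below; the PID is a placeholder, and the dumps land in whatever directory gluster --print-statedumpdir reports (usually /var/run/gluster):

  # gluster --print-statedumpdir          # confirm where statedump files are written
  # pgrep -af glusterfs                   # list gluster processes with full command line; pick the fuse client for the mount
  # kill -USR1 <client-pid>               # each signal makes that process write one statedump
  # ls /var/run/gluster/glusterdump.*     # collect these files

Repeating the kill -USR1 roughly once an hour while memory grows gives a series of dumps that can be compared for growing allocation counts.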

Re: [Gluster-users] Symbolic Links under .glusterfs deleted

2019-11-07 Thread Nithya Balachandran
On Thu, 7 Nov 2019 at 15:15, Shreyansh Shah wrote: > Hi, > Running distributed gluster 5.10 with 6 node and 2 bricks on each node (12 > in total) > Due to some reason the files under /.glusterfs were deleted. > Post that we have loads of broken symlinks, but the data on the disks exist > Due to

Re: [Gluster-users] Disappearing files on gluster mount

2019-09-22 Thread Nithya Balachandran
Fermi National Accelerator Laboratory > www.fnal.gov > www.scientificlinux.org > > > ________ > From: Nithya Balachandran > Sent: Thursday, September 19, 2019 10:14 PM > To: Patrick Riehecky > Cc: gluster-users > Subject: Re: [Gluster-user

Re: [Gluster-users] File visible in listing but inaccessible over libgfapi

2019-09-20 Thread Nithya Balachandran
On Thu, 19 Sep 2019 at 15:40, Milewski Daniel wrote: > I've observed an interesting behavior in Gluster 5.6. I had a file > which was placed on an incorrect subvolume (apparently by the rebalancing > process). I could stat and read the file just fine over FUSE mount > point, with this entry

Re: [Gluster-users] Disappearing files on gluster mount

2019-09-19 Thread Nithya Balachandran
Hi Pat, Do you still see the problem of missing files? If yes, please provide the following: 1. gluster volume info 2. ls -l of the directory containing the missing files from the mount point and from the individual bricks. Regards, Nithya On Thu, 29 Aug 2019 at 18:57, Pat Riehecky wrote:
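A sketch of the information being asked for, assuming a volume called myvol mounted at /mnt/myvol with bricks under /bricks/brick1/myvol (all placeholder names, not from this thread):

  # gluster volume info myvol
  # ls -l /mnt/myvol/path/to/affected/dir            # listing as seen through the client mount
  # ls -l /bricks/brick1/myvol/path/to/affected/dir  # run on every server, for each of its bricks

Comparing the brick listings with the mount listing shows whether the files exist on the bricks but are simply not visible to the client.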

Re: [Gluster-users] Rebalancing newly added bricks

2019-09-18 Thread Nithya Balachandran
ff986c54ce0] ) > 0-tank-client-10: forced unwinding frame type(GlusterFS 3.3) > op(FXATTROP(34)) called at 2019-09-08 15:40:44.040333 (xid=0x7f8cfac) > > Does this type of failure cause data corruption? What is the best course > of action at this point? > > Thanks, > >

Re: [Gluster-users] Rebalancing newly added bricks

2019-09-11 Thread Nithya Balachandran
re not doing anything. Hope this helps. Regards, Nithya > behaviour. > > > > > > > >Regards, > > > > > >Nithya > > > Best Regards, > Strahil Nikolov > On Sep 9, 2019 06:36, Nithya Balachandran wrote: > > > > On Sat,

Re: [Gluster-users] Rebalancing newly added bricks

2019-09-08 Thread Nithya Balachandran
nly a single node per replica set would migrate files in the version used in this case. Regards, Nithya Best Regards, > Strahil Nikolov > > On Friday, 6 September 2019, 15:29:20 GMT+3, Herb Burnswell < > herbert.burnsw...@gmail.com> wrote: > > > > > On Th

Re: [Gluster-users] Rebalancing newly added bricks

2019-09-05 Thread Nithya Balachandran
idance.. > > What is the output of the rebalance status command? Can you check if there are any errors in the rebalance logs on the node on which you see rebalance activity? If there are a lot of small files on the volume, the rebalance is expected to take time. Regards, Nithya

Re: [Gluster-users] Rebalancing newly added bricks

2019-09-02 Thread Nithya Balachandran
On Sat, 31 Aug 2019 at 22:59, Herb Burnswell wrote: > Thank you for the reply. > > I started a rebalance with force on serverA as suggested. Now I see > 'activity' on that node: > > # gluster vol rebalance tank status > Node Rebalanced-files size >
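The commands in play here are roughly the following (tank is the volume name from the quoted status output; run from any server in the pool):

  # gluster volume rebalance tank start force   # start data migration across the nodes
  # gluster volume rebalance tank status        # per-node count of rebalanced files, size and state

The force variant is less conservative about skipping files than a plain start, which appears to be what produced the activity described above.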

Re: [Gluster-users] Continue to work in "degraded mode" (missing brick)

2019-08-08 Thread Nithya Balachandran
Hi, This is the expected behaviour for a distribute volume. Files that hash to a brick that is down will not be created. This is to prevent issues in case the file already exists on that brick. To prevent this, please use distribute-replicate volumes. Regards, Nithya On Thu, 8 Aug 2019 at
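For reference, a distribute-replicate layout as suggested here would be created along these lines (host names, brick paths and the volume name are placeholders):

  # gluster volume create myvol replica 3 \
        server1:/bricks/b1/myvol server2:/bricks/b1/myvol server3:/bricks/b1/myvol \
        server4:/bricks/b2/myvol server5:/bricks/b2/myvol server6:/bricks/b2/myvol
  # gluster volume start myvol

Six bricks with replica 3 gives two replica sets, so each file has three copies and creates keep working when any single brick is down.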

Re: [Gluster-users] Gluster eating up a lot of ram

2019-07-30 Thread Nithya Balachandran
er.org/en/latest/Administrator%20Guide/Accessing%20Gluster%20from%20Windows/ > > Diego > > > > On Mon, Jul 29, 2019 at 11:52 PM Nithya Balachandran > wrote: > >> >> Hi Diego, >> >> Please do the following: >> >> gluster v get readdir-ahead

Re: [Gluster-users] Gluster eating up a lot of ram

2019-07-29 Thread Nithya Balachandran
ill the actual process or simply trigger the dump? Which > process should I kill? The brick process in the system or the fuse mount? > > Diego > > On Mon, Jul 29, 2019, 23:27 Nithya Balachandran > wrote: > >> >> >> On Tue, 30 Jul 2019 at 05:44, Diego Remolina wro

Re: [Gluster-users] Gluster eating up a lot of ram

2019-07-29 Thread Nithya Balachandran
On Tue, 30 Jul 2019 at 05:44, Diego Remolina wrote: > Unfortunately statedump crashes on both machines, even freshly rebooted. > Do you see any statedump files in /var/run/gluster? This looks more like the gluster cli crashed. > > [root@ysmha01 ~]# gluster --print-statedumpdir >

Re: [Gluster-users] Brick missing trusted.glusterfs.dht xattr

2019-07-28 Thread Nithya Balachandran
but I wasn't sure. > > Thanks, > -Matthew > > -- > Matthew Benstead > System Administrator > Pacific Climate Impacts Consortium <https://pacificclimate.org/> > University of Victoria, UH1 > PO Box 1800, STN CSC > Victoria, BC, V8W 2Y2 > Phone: +1-250-721-8432 >

Re: [Gluster-users] Brick missing trusted.glusterfs.dht xattr

2019-07-26 Thread Nithya Balachandran
cts Consortium <https://pacificclimate.org/> > University of Victoria, UH1 > PO Box 1800, STN CSC > Victoria, BC, V8W 2Y2 > Phone: +1-250-721-8432 > Email: matth...@uvic.ca > On 7/24/19 9:30 PM, Nithya Balachandran wrote: > > > > On Wed, 24 Jul 2019 at 22:1

Re: [Gluster-users] Brick missing trusted.glusterfs.dht xattr

2019-07-24 Thread Nithya Balachandran
62T 77% /storage > [root@gluster07 ~]# getfattr --absolute-names -m . -d -e hex > /mnt/raid6-storage/storage/ > # file: /mnt/raid6-storage/storage/ > > security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a756e6c6162656c65645f743a733000 > trusted.gfid=0x00
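To check for the missing layout xattr specifically, the getfattr call quoted above can be narrowed to the DHT key (brick path taken from the quoted output; run it on every brick of the volume and compare):

  # getfattr --absolute-names -m trusted.glusterfs.dht -d -e hex /mnt/raid6-storage/storage/

A brick root that prints no trusted.glusterfs.dht line is the one missing the layout.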

Re: [Gluster-users] Brick missing trusted.glusterfs.dht xattr

2019-07-18 Thread Nithya Balachandran
01 > > trusted.glusterfs.6f95525a-94d7-4174-bac4-e1a18fe010a2.xtime=0x5d307baa00023ec0 > trusted.glusterfs.quota.dirty=0x3000 > > trusted.glusterfs.quota.size.2=0x1b71d5279e763e320005cd53 > trusted.glusterfs.volume-id=0x6f95525a94d74174bac4e1a18fe010a2 >

Re: [Gluster-users] Parallel process hang on gluster volume

2019-07-05 Thread Nithya Balachandran
Did you see this behaviour with previous Gluster versions? Regards, Nithya On Wed, 3 Jul 2019 at 21:41, wrote: > Am I alone having this problem ? > > - Mail original - > De: n...@furyweb.fr > À: "gluster-users" > Envoyé: Vendredi 21 Juin 2019 09:48:47 > Objet: [Gluster-users] Parallel

Re: [Gluster-users] Removing subvolume from dist/rep volume

2019-07-02 Thread Nithya Balachandran
ored. Ravi and Krutika, please take a look at the other files. Regards, Nithya On Fri, 28 Jun 2019 at 19:56, Dave Sherohman wrote: > On Thu, Jun 27, 2019 at 12:17:10PM +0530, Nithya Balachandran wrote: > > There are some edge cases that may prevent a file from being migrated > > duri

Re: [Gluster-users] Removing subvolume from dist/rep volume

2019-06-28 Thread Nithya Balachandran
On Fri, 28 Jun 2019 at 14:34, Dave Sherohman wrote: > On Thu, Jun 27, 2019 at 12:17:10PM +0530, Nithya Balachandran wrote: > > On Tue, 25 Jun 2019 at 15:26, Dave Sherohman wrote: > > > My objective is to remove nodes B and C entirely. > > > > > > First up is

Re: [Gluster-users] Removing subvolume from dist/rep volume

2019-06-27 Thread Nithya Balachandran
On Thu, 27 Jun 2019 at 12:17, Nithya Balachandran wrote: > Hi, > > > On Tue, 25 Jun 2019 at 15:26, Dave Sherohman wrote: > >> I have a 9-brick, replica 2+A cluster and plan to (permanently) remove >> one of the three subvolumes. I think I've worked out how to do it,

Re: [Gluster-users] Removing subvolume from dist/rep volume

2019-06-27 Thread Nithya Balachandran
Hi, On Tue, 25 Jun 2019 at 15:26, Dave Sherohman wrote: > I have a 9-brick, replica 2+A cluster and plan to (permanently) remove > one of the three subvolumes. I think I've worked out how to do it, but > want to verify first that I've got it right, since downtime or data loss > would be Bad

Re: [Gluster-users] Does replace-brick migrate data?

2019-06-07 Thread Nithya Balachandran
On Sat, 8 Jun 2019 at 01:29, Alan Orth wrote: > Dear Ravi, > > In the last week I have completed a fix-layout and a full INDEX heal on > this volume. Now I've started a rebalance and I see a few terabytes of data > going around on different bricks since yesterday, which I'm sure is good. > >

Re: [Gluster-users] Memory leak in glusterfs

2019-06-06 Thread Nithya Balachandran
cted to glusterfs community. > > Regards, > Abhishek > > On Thu, Jun 6, 2019, 16:08 Nithya Balachandran > wrote: > >> Hi Abhishek, >> >> I am still not clear as to the purpose of the tests. Can you clarify why >> you are using valgrind and why you think t

Re: [Gluster-users] Memory leak in glusterfs

2019-06-06 Thread Nithya Balachandran
the below script to see the memory increase while the script is > above script is running in background. > > *ps_mem.py* > > I am attaching the script files as well as the result got after testing > the scenario. > > On Wed, Jun 5, 2019 at 7:23 PM Nithya Balachandran >

Re: [Gluster-users] Memory leak in glusterfs

2019-06-05 Thread Nithya Balachandran
Hi, Writing to a volume should not affect glusterd. The stack you have shown in the valgrind output looks like the memory used to initialise the structures glusterd uses; it will be freed only when glusterd is stopped. Can you provide more details about what it is you are trying to test? Regards, Nithya On Tue,

Re: [Gluster-users] remove-brick failure on distributed with 5.6

2019-05-24 Thread Nithya Balachandran
Hi Brandon, Please send the following: 1. the gluster volume info 2. Information about which brick was removed 3. The rebalance log file for all nodes hosting removed bricks. Regards, Nithya On Fri, 24 May 2019 at 19:33, Ravishankar N wrote: > Adding a few DHT folks for some possible

Re: [Gluster-users] add-brick: failed: Commit failed

2019-05-19 Thread Nithya Balachandran
On Fri, 17 May 2019 at 06:01, David Cunningham wrote: > Hello, > > We're adding an arbiter node to an existing volume and having an issue. > Can anyone help? The root cause error appears to be > "----0001: failed to resolve (Transport > endpoint is not connected)", as

Re: [Gluster-users] Cannot see all data in mount

2019-05-16 Thread Nithya Balachandran
On Thu, 16 May 2019 at 14:17, Paul van der Vlis wrote: > Op 16-05-19 om 05:43 schreef Nithya Balachandran: > > > > > > On Thu, 16 May 2019 at 03:05, Paul van der Vlis > <mailto:p...@vandervlis.nl>> wrote: > > > > Op 15-05-19 om 15:45 s

Re: [Gluster-users] Cannot see all data in mount

2019-05-15 Thread Nithya Balachandran
On Thu, 16 May 2019 at 03:05, Paul van der Vlis wrote: > Op 15-05-19 om 15:45 schreef Nithya Balachandran: > > Hi Paul, > > > > A few questions: > > Which version of gluster are you using? > > On the server and some clients: glusterfs 4.1.2 > On a new c

Re: [Gluster-users] Cannot see all data in mount

2019-05-15 Thread Nithya Balachandran
Hi Paul, A few questions: Which version of gluster are you using? Did this behaviour start recently? As in were the contents of that directory visible earlier? Regards, Nithya On Wed, 15 May 2019 at 18:55, Paul van der Vlis wrote: > Hello Strahil, > > Thanks for your answer. I don't find the

Re: [Gluster-users] Extremely slow Gluster performance

2019-04-23 Thread Nithya Balachandran
Hi Patrick, Did this start only after the upgrade? How do you determine which brick process to kill? Are there a lot of files to be healed on the volume? Can you provide a tcpdump of the slow listing from a separate test client mount? 1. Mount the gluster volume on a different mount point
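Expanded slightly, the capture being asked for looks like this (server, volume and directory names are placeholders; the tcpdump filter is the one suggested elsewhere in these threads):

  # mount -t glusterfs server1:/myvol /mnt/test                       # fresh test mount, separate from production clients
  # tcpdump -i any -s 0 -w /var/tmp/dirls.pcap tcp and not port 22 &  # start the capture
  # ls -l /mnt/test/slow-directory                                    # reproduce the slow listing
  # kill %1                                                           # stop tcpdump, then share dirls.pcap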

Re: [Gluster-users] Inconsistent issues with a client

2019-03-28 Thread Nithya Balachandran
Hi, If you know which directories are problematic, please check and see if the permissions on them are correct on the individual bricks. Please also provide the following: - *gluster volume info* for the volume - The gluster version you are running regards, Nithya On Wed, 27 Mar 2019 at

Re: [Gluster-users] Prioritise local bricks for IO?

2019-03-28 Thread Nithya Balachandran
On Wed, 27 Mar 2019 at 20:27, Poornima Gurusiddaiah wrote: > This feature is not under active development as it was not used widely. > AFAIK its not supported feature. > +Nithya +Raghavendra for further clarifications. > This is not actively supported - there has been no work done on this

Re: [Gluster-users] Transport endpoint is not connected failures in

2019-03-27 Thread Nithya Balachandran
On Wed, 27 Mar 2019 at 21:47, wrote: > Hello Amar and list, > > > > I wanted to follow-up to confirm that upgrading to 5.5 seem to fix the > “Transport endpoint is not connected failures” for us. > > > > We did not have any of these failures in this past weekend backups cycle. > > > > Thank you

Re: [Gluster-users] .glusterfs GFID links

2019-03-20 Thread Nithya Balachandran
On Wed, 20 Mar 2019 at 22:59, Jim Kinney wrote: > I have half a zillion broken symlinks in the .glusterfs folder on 3 of 11 > volumes. It doesn't make sense to me that a GFID should linklike some of > the ones below: > >

Re: [Gluster-users] Upgrade 5.3 -> 5.4 on debian: public IP is used instead of LAN IP

2019-03-19 Thread Nithya Balachandran
Hi Artem, I think you are running into a different crash. The ones reported which were prevented by turning off write-behind are now fixed. We will need to look into the one you are seeing to see why it is happening. Regards, Nithya On Tue, 19 Mar 2019 at 20:25, Artem Russakovskii wrote: >

Re: [Gluster-users] / - is in split-brain

2019-03-19 Thread Nithya Balachandran
Hi, What is the output of the gluster volume info ? Thanks, Nithya On Wed, 20 Mar 2019 at 01:58, Pablo Schandin wrote: > Hello all! > > I had a volume with only a local brick running vms and recently added a > second (remote) brick to the volume. After adding the brick, the heal > command

Re: [Gluster-users] gluster 5.3: file or directory not read-/writeable, although it exists - cache?

2019-02-19 Thread Nithya Balachandran
On Tue, 19 Feb 2019 at 15:18, Hu Bert wrote: > Hello @ll, > > one of our backend developers told me that, in the tomcat logs, he > sees errors that directories on a glusterfs mount aren't readable. > Within tomcat the errors look like this: > > 2019-02-19 07:39:27,124 WARN Path >

Re: [Gluster-users] Files on Brick not showing up in ls command

2019-02-13 Thread Nithya Balachandran
> balancing the new brick and will resync the files onto the full gluster > volume when that completes > > On Wed, Feb 13, 2019, 10:28 PM Nithya Balachandran > wrote: > >> >> >> On Tue, 12 Feb 2019 at 08:30, Patrick Nixon wrote: >> >>> T

Re: [Gluster-users] Files on Brick not showing up in ls command

2019-02-13 Thread Nithya Balachandran
On Tue, 12 Feb 2019 at 08:30, Patrick Nixon wrote: > The files are being written to via the glusterfs mount (and read on the > same client and a different client). I try not to do anything on the nodes > directly because I understand that can cause weirdness. As far as I can > tell, there

Re: [Gluster-users] Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/us

2019-02-12 Thread Nithya Balachandran
>>> beerpla.net | +ArtemRussakovskii >>>>>> <https://plus.google.com/+ArtemRussakovskii> | @ArtemR >>>>>> <http://twitter.com/ArtemR> >>>>>> >>>>>> >>>>>> On Fri, Feb 8, 2019 at 7:22 PM Ra

Re: [Gluster-users] Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/us

2019-02-08 Thread Nithya Balachandran
alidation: on >>> performance.stat-prefetch: on >>> features.cache-invalidation-timeout: 600 >>> features.cache-invalidation: on >>> cluster.readdir-optimize: on >>> performance.io-thread-count: 32 >>> server.event-threads: 4 >>>

Re: [Gluster-users] Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/us

2019-02-07 Thread Nithya Balachandran
ork.inode-lru-limit: 50 >>> performance.md-cache-timeout: 600 >>> performance.cache-invalidation: on >>> performance.stat-prefetch: on >>> features.cache-invalidation-timeout: 600 >>> features.cache-invalidation: on >>> cluster.readdir-optimize: on >>

Re: [Gluster-users] gluster 5.3: transport endpoint gets disconnected - Assertion failed: GF_MEM_TRAILER_MAGIC

2019-02-06 Thread Nithya Balachandran
"reaching this limit (0 means 'unlimited')", }, > > This seems to be the default already? Set it explicitly? > > Regards, > Hubert > > Am Mi., 6. Feb. 2019 um 09:26 Uhr schrieb Nithya Balachandran > : > > > > Hi, > > > > The client

Re: [Gluster-users] gluster 5.3: transport endpoint gets disconnected - Assertion failed: GF_MEM_TRAILER_MAGIC

2019-02-06 Thread Nithya Balachandran
Hi, The client logs indicate that the mount process has crashed. Please try mounting the volume with the mount option lru-limit=0 and see if it still crashes. Thanks, Nithya On Thu, 24 Jan 2019 at 12:47, Hu Bert wrote: > Good morning, > > we currently transfer some data to a new glusterfs
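Assuming a fuse mount on a 5.x client, the option goes on the mount command line or in fstab (server, volume and mount point names are placeholders):

  # mount -t glusterfs -o lru-limit=0 server1:/myvol /mnt/myvol

  or in /etc/fstab:
  server1:/myvol  /mnt/myvol  glusterfs  defaults,lru-limit=0  0 0

As noted in a later reply in this thread, 0 means 'unlimited', i.e. the fuse inode table is not trimmed.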

Re: [Gluster-users] Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/us

2019-02-06 Thread Nithya Balachandran
Hi Artem, Do you still see the crashes with 5.3? If yes, please try mounting the volume using the mount option lru-limit=0 and see if that helps. We are looking into the crashes and will update when we have a fix. Also, please provide the gluster volume info for the volume in question. Regards,

Re: [Gluster-users] Getting timedout error while rebalancing

2019-02-05 Thread Nithya Balachandran
566 0 201completed > 0:00:08 > > Is the rebalancing option working fine? Why did gluster throw the error > saying that "Error : Request timed out"? > .On Tue, Feb 5, 2019 at 4:23 PM Nithya Balachandran > wrote: > >> Hi, >> Pleas

Re: [Gluster-users] Getting timedout error while rebalancing

2019-02-05 Thread Nithya Balachandran
Hi, Please provide the exact step at which you are seeing the error. It would be ideal if you could copy-paste the command and the error. Regards, Nithya On Tue, 5 Feb 2019 at 15:24, deepu srinivasan wrote: > HI everyone. I am getting "Error : Request timed out " while doing > rebalance . I

Re: [Gluster-users] gluster remove-brick

2019-02-04 Thread Nithya Balachandran
and for very long time there > was no failures and then at some point these 17000 failures appeared and it > stayed like that. > > Thanks > > Kashif > > > > > > Let me explain a little bit of background. > > > On Mon, Feb 4, 2019 at 5:09 AM Nithya

Re: [Gluster-users] gluster remove-brick

2019-02-03 Thread Nithya Balachandran
Hi, The status shows quite a few failures. Please check the rebalance logs to see why that happened. We can decide what to do based on the errors. Once you run a commit, the brick will no longer be part of the volume and you will not be able to access those files via the client. Do you have
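The sequence under discussion, with placeholder volume and brick names (the log path is typically the default location on each node hosting a removed brick):

  # gluster volume remove-brick myvol server2:/bricks/b1/myvol status   # check the failure count before anything else
  # less /var/log/glusterfs/myvol-rebalance.log                         # remove-brick migration errors are logged here
  # gluster volume remove-brick myvol server2:/bricks/b1/myvol commit   # only once the status is clean and completed

As noted above, commit is irreversible from the client's point of view: the brick leaves the volume and any files still on it stop being reachable through the mount.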

Re: [Gluster-users] Files losing permissions in GlusterFS 3.12

2019-01-31 Thread Nithya Balachandran
On Wed, 30 Jan 2019 at 19:12, Gudrun Mareike Amedick < g.amed...@uni-luebeck.de> wrote: > Hi, > > a bit of additional info inline. On Monday, 28.01.2019 at 10:23 +0100, Frank Ruehlemann wrote: > > On Monday, 28.01.2019 at 09:50 +0530, Nithya Balachandran wrote: >

Re: [Gluster-users] Files losing permissions in GlusterFS 3.12

2019-01-27 Thread Nithya Balachandran
On Fri, 25 Jan 2019 at 20:51, Gudrun Mareike Amedick < g.amed...@uni-luebeck.de> wrote: > Hi all, > > we have a problem with a distributed dispersed volume (GlusterFS 3.12). We > have files that lost their permissions or gained sticky bits. The files > themselves seem to be okay. > > It looks

Re: [Gluster-users] usage of harddisks: each hdd a brick? raid?

2019-01-22 Thread Nithya Balachandran
On Tue, 22 Jan 2019 at 11:42, Amar Tumballi Suryanarayan < atumb...@redhat.com> wrote: > > > On Thu, Jan 10, 2019 at 1:56 PM Hu Bert wrote: > >> Hi, >> >> > > We ara also using 10TB disks, heal takes 7-8 days. >> > > You can play with "cluster.shd-max-threads" setting. It is default 1 I >> > >

Re: [Gluster-users] invisible files in some directory

2019-01-18 Thread Nithya Balachandran
On Fri, 18 Jan 2019 at 14:25, Mauro Tridici wrote: > Dear Users, > > I’m facing a new problem on our gluster volume (v. 3.12.14). > Sometimes it happens that “ls” command execution, in a specified directory, > returns empty output. > “ls” command output is empty, but I know that the involved

Re: [Gluster-users] Input/output error on FUSE log

2019-01-10 Thread Nithya Balachandran
I don't see write failures in the log but I do see fallocate failing with EIO. [2019-01-07 19:16:44.846187] W [MSGID: 109011] [dht-layout.c:163:dht_layout_search] 0-gv1-dht: no subvolume for hash (value) = 1285124113 [2019-01-07 19:16:44.846194] D [MSGID: 0]

Re: [Gluster-users] A broken file that can not be deleted

2019-01-10 Thread Nithya Balachandran
On Wed, 9 Jan 2019 at 19:49, Dmitry Isakbayev wrote: > I am seeing a broken file that exists on 2 out of 3 nodes. The > application trying to use the file throws file permissions error. ls, rm, > mv, touch all throw "Input/output error" > > $ ls -la > ls: cannot access

Re: [Gluster-users] update to 4.1.6-1 and fix-layout failing

2019-01-07 Thread Nithya Balachandran
t one hour and I can't see any > new directories being created. > > Thanks > > Kashif > > > On Fri, Jan 4, 2019 at 10:42 AM Nithya Balachandran > wrote: > >> >> >> On Fri, 4 Jan 2019 at 15:48, mohammad kashif >> wrote: >> >>> Hi &g

Re: [Gluster-users] update to 4.1.6-1 and fix-layout failing

2019-01-04 Thread Nithya Balachandran
On Fri, 4 Jan 2019 at 15:48, mohammad kashif wrote: > Hi > > I have updated our distributed gluster storage from 3.12.9-1 to 4.1.6-1. > The existing cluster had seven servers totalling in around 450 TB. OS is > Centos7. The update went OK and I could access files. > Then I added two more

Re: [Gluster-users] [Stale file handle] in shard volume

2019-01-03 Thread Nithya Balachandran
e if a file is stale or not? > these criteria are just based observations i made, moving the stale files > manually. After removing them i was able to start the VM again..until some > time later it hangs on another stale shard file unfortunate. > > Thanks Olaf > > Op wo 2 jan. 2019 om

Re: [Gluster-users] [Stale file handle] in shard volume

2019-01-02 Thread Nithya Balachandran
On Mon, 31 Dec 2018 at 01:27, Olaf Buitelaar wrote: > Dear All, > > till now a selected group of VM's still seem to produce new stale file's > and getting paused due to this. > I've not updated gluster recently, however i did change the op version > from 31200 to 31202 about a week before this

Re: [Gluster-users] distribute remove-brick has started migrating the wrong brick (glusterfs 3.8.13)

2018-12-18 Thread Nithya Balachandran
> > > Steve > > On Tue, 18 Dec 2018 at 15:37, Nithya Balachandran > wrote: > >> >> >> On Tue, 18 Dec 2018 at 14:56, Stephen Remde >> wrote: >> >>> Nithya, >>> >>> I've realised, I will not have enough space on the other br

Re: [Gluster-users] distribute remove-brick has started migrating the wrong brick (glusterfs 3.8.13)

2018-12-18 Thread Nithya Balachandran
Brick9: 10.0.0.42:/export/md3/brick > Brick10: 10.0.0.43:/export/md1/brick > Options Reconfigured: > cluster.rebal-throttle: aggressive > cluster.min-free-disk: 1% > transport.address-family: inet > performance.readdir-ahead: on > nfs.disable: on > > > Best, > > Steve >

Re: [Gluster-users] Invisible files

2018-12-18 Thread Nithya Balachandran
On Fri, 14 Dec 2018 at 19:10, Raghavendra Gowdappa wrote: > > > On Fri, Dec 14, 2018 at 6:38 PM Lindolfo Meira > wrote: > >> It happened to me using gluster 5.0, on OpenSUSE Leap 15, during a >> benchmark with IOR: the volume would seem normally mounted, but I was >> unable to overwrite files,

Re: [Gluster-users] Should I be using gluster 3 or gluster 4?

2018-11-05 Thread Nithya Balachandran
On 6 November 2018 at 12:24, Jeevan Patnaik wrote: > Hi Vlad, > > I'm still confused of gluster releases. :( > Is 3.13 an official gluster release? It's not mentioned in > www.gluster.org/release-schedule > > 3.13 is EOL. It was a short term release. Which is more stable 3.13.2 or 3.12.6 or

Re: [Gluster-users] Wrong volume size for distributed dispersed volume on 4.1.5

2018-10-16 Thread Nithya Balachandran
On 16 October 2018 at 20:04, wrote: > Hi, > > > > So we did a quick grep shared-brick-count > > > /var/lib/glusterd/vols/data_vol1/* > on all boxes and found that on 5 out of 6 boxes this was > shared-brick-count=0 for all bricks on remote boxes and 1 for local bricks. > > > > > > Is this the

Re: [Gluster-users] Wrong volume size for distributed dispersed volume on 4.1.5

2018-10-16 Thread Nithya Balachandran
Hi, On 16 October 2018 at 18:20, wrote: > Hi everybody, > > I have created a distributed dispersed volume on 4.1.5 under centos7 like > this a few days ago: > > gluster volume create data_vol1 disperse-data 4 redundancy 2 transport tcp > \ > \ > gf-p-d-01.isec.foobar.com:/bricks/brick1/brick

Re: [Gluster-users] Rebalance failed on Distributed Disperse volume based on 3.12.14 version

2018-10-08 Thread Nithya Balachandran
> Brick29: s02-stg:/gluster/mnt10/brick > Brick30: s03-stg:/gluster/mnt10/brick > Brick31: s01-stg:/gluster/mnt11/brick > Brick32: s02-stg:/gluster/mnt11/brick > Brick33: s03-stg:/gluster/mnt11/brick > Brick34: s01-stg:/gluster/mnt12/brick > Brick35: s02-stg:/gluster/mnt12/brick

Re: [Gluster-users] Found anomalies in ganesha-gfapi.log

2018-10-04 Thread Nithya Balachandran
ing the > backup from only one client) I find it a little worrying even if it’s an > INFO log level. > > > > *De :* Nithya Balachandran [mailto:nbala...@redhat.com] > *Envoyé :* 4 octobre 2018 09:34 > *À :* Renaud Fortier > *Cc :* gluster-users@gluster.org > > *Objet :*

Re: [Gluster-users] Found anomalies in ganesha-gfapi.log

2018-10-04 Thread Nithya Balachandran
On 4 October 2018 at 17:39, Renaud Fortier wrote: > Yes ! > > 2 clients using the same export connected to the same IP. Do you see > something wrong with that ? > > Thank you > Not necessarily wrong. This message shows up if DHT does not find a complete layout set on the directory when it does

Re: [Gluster-users] Rebalance failed on Distributed Disperse volume based on 3.12.14 version

2018-10-04 Thread Nithya Balachandran
Hi Mauro, The files on s04 and s05 can be deleted safely as long as those bricks have been removed from the volume and their brick processes are not running. .glusterfs/indices/xattrop/xattrop-* are links to files that need to be healed.

Re: [Gluster-users] Rebalance failed on Distributed Disperse volume based on 3.12.14 version

2018-10-03 Thread Nithya Balachandran
l check every files on each removed bricks. > > So, if I understand, I can proceed with deletion of directories and files > left on the bricks only if each file have T tag, right? > > Thank you in advance, > Mauro > > > Il giorno 03 ott 2018, alle ore 16:49, Nithya Balach

Re: [Gluster-users] Rebalance failed on Distributed Disperse volume based on 3.12.14 version

2018-10-03 Thread Nithya Balachandran
On 1 October 2018 at 15:35, Mauro Tridici wrote: > Good morning Ashish, > > your explanations are always very useful, thank you very much: I will > remember these suggestions for any future needs. > Anyway, during the week-end, the remove-brick procedures ended > successfully and we were able to

Re: [Gluster-users] Rebalance failed on Distributed Disperse volume based on 3.12.14 version

2018-09-28 Thread Nithya Balachandran
Hi Mauro, Please send the rebalance logs from s04-stg. I will take a look and get back. Regards, Nithya On 28 September 2018 at 16:21, Mauro Tridici wrote: > > Hi Ashish, > > as I said in my previous message, we adopted the first approach you > suggested (setting network.ping-timeout option

Re: [Gluster-users] Gluster tier in progress after attach

2018-09-24 Thread Nithya Balachandran
Please check the rebalance log to see why those files were not migrated. Regards, Nithya On 24 September 2018 at 21:21, Jeevan Patnaik wrote: > Hi, > > Yes, I did. The status shows all as completed. > > Regards, > Jeevan. > > On Mon 24 Sep, 2018, 9:20 PM Nithy

Re: [Gluster-users] Gluster tier in progress after attach

2018-09-24 Thread Nithya Balachandran
there's any posix locks as the files are not > accessed by any application except gluster itself. > > Regards, > Jeevan. > > > On Mon 24 Sep, 2018, 8:45 PM Nithya Balachandran, > wrote: > >> It has been a while since I worked on tier but here is what I remember:

Re: [Gluster-users] Gluster tier in progress after attach

2018-09-24 Thread Nithya Balachandran
wing a wrong status. > What command did you use to detach the tier? And how did you check that the files still exist on the hot tier? Regards, Nithya > > I am using gluster 3.12.3 and server hosts include RHEL 6.7 and 7.2 hosts > also. > > Regards, > Jeevan. > > On Mon 24 S

Re: [Gluster-users] Gluster tier in progress after attach

2018-09-24 Thread Nithya Balachandran
Are those files being accessed? Tiering will only promote those that have been accessed recently. Regards, Nithya On 24 September 2018 at 18:32, Jeevan Patnaik wrote: > I see it still not promoting any files. Do we need to run any command to > force the movement? > > This would be an issue if

Re: [Gluster-users] Failures during rebalance on gluster distributed disperse volume

2018-09-15 Thread Nithya Balachandran
to know if it could be a “virtually dangerous" procedure and if it will be >> the risk of losing data :-) >> Unfortunately, I can’t do a preventive copy of the volume data in another >> location. >> If it is possible, could you please illustrate the right steps needed to >>

Re: [Gluster-users] Failures during rebalance on gluster distributed disperse volume

2018-09-14 Thread Nithya Balachandran
our suggestions. > > Regards, > Mauro > > Il giorno 13 set 2018, alle ore 13:38, Nithya Balachandran < > nbala...@redhat.com> ha scritto: > > This looks like an issue because rebalance switched to using fallocate > which EC did not have implemented at that point. > > @

Re: [Gluster-users] Failures during rebalance on gluster distributed disperse volume

2018-09-13 Thread Nithya Balachandran
This looks like an issue because rebalance switched to using fallocate which EC did not have implemented at that point. @Pranith, @Ashish, which version of gluster had support for fallocate in EC? Regards, Nithya On 12 September 2018 at 19:24, Mauro Tridici wrote: > Dear All, > > I recently

Re: [Gluster-users] mv lost some files ?

2018-09-05 Thread Nithya Balachandran
On 5 September 2018 at 14:10, Nithya Balachandran wrote: > > > On 5 September 2018 at 14:02, Nithya Balachandran > wrote: > >> Hi, >> >> Please try turning off cluster.readdir-optimize and see if it helps. >> > > You can also try tu

Re: [Gluster-users] mv lost some files ?

2018-09-05 Thread Nithya Balachandran
On 5 September 2018 at 14:02, Nithya Balachandran wrote: > Hi, > > Please try turning off cluster.readdir-optimize and see if it helps. > You can also try turning off parallel-readdir. > If not, please send us the client mount logs and a tcpdump of when the > *ls* is perform

Re: [Gluster-users] mv lost some files ?

2018-09-05 Thread Nithya Balachandran
Hi, Please try turning off cluster.readdir-optimize and see if it helps. If not, please send us the client mount logs and a tcpdump of when the *ls* is performed from the client. Use the following to capture the dump: tcpdump -i any -s 0 -w /var/tmp/dirls.pcap tcp and not port 22 Thanks,
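The option toggles map to these commands (the volume name is a placeholder; parallel-readdir is the second option mentioned in the follow-up reply):

  # gluster volume set myvol cluster.readdir-optimize off
  # gluster volume set myvol performance.parallel-readdir off
  # tcpdump -i any -s 0 -w /var/tmp/dirls.pcap tcp and not port 22   # run on the client while repeating the ls

The tcpdump line is quoted verbatim from the reply; start it before reproducing the problem and stop it once the listing has completed.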

Re: [Gluster-users] gluster 3.12.8 fuse consume huge memory

2018-08-30 Thread Nithya Balachandran
Hi, Please take statedumps of the 3.12.13 client process at intervals when the memory is increasing and send those across. We will also need the gluster volume info for the volume in question. Thanks, Nithya On 31 August 2018 at 08:32, huting3 wrote: > Thanks for your reply, I also test

Re: [Gluster-users] gluster connection interrupted during transfer

2018-08-30 Thread Nithya Balachandran
Hi Richard, On 29 August 2018 at 18:11, Richard Neuboeck wrote: > Hi Gluster Community, > > I have problems with a glusterfs 'Transport endpoint not connected' > connection abort during file transfers that I can replicate (all the > time now) but not pinpoint as to why this is happening. > >

Re: [Gluster-users] Gluster release 3.12.13 (Long Term Maintenance) Canceled for 10th of August, 2018

2018-08-14 Thread Nithya Balachandran
I agree as well. This is a bug that is impacting users. On 14 August 2018 at 16:30, Ravishankar N wrote: > +1 > > Considering that master is no longer locked, it would be nice if a release > can be made sooner. Amar sent a missing back port [1] which also fixes a > mem leak issue on the client

Re: [Gluster-users] blocking process on FUSE mount in directory which is using quota

2018-08-14 Thread Nithya Balachandran
e > problem is gone (no more blocking processes such as "ls") so there must be > an issue or bug with the quota part of gluster. > > > > ‐‐‐ Original Message ‐‐‐ > On August 10, 2018 4:19 PM, Nithya Balachandran > wrote: > > > > On 9 August 2018 at 1

Re: [Gluster-users] blocking process on FUSE mount in directory which is using quota

2018-08-10 Thread Nithya Balachandran
lient.event-threads: 4 >> server.event-threads: 4 >> auth.allow: 192.168.100.92 >> >> >> >> 2. Sorry I have no clue how to take a "statedump" of a process on Linux. >> Which command should I use for that? and which process would you like, the &

Re: [Gluster-users] blocking process on FUSE mount in directory which is using quota

2018-08-09 Thread Nithya Balachandran
Hi, Please provide the following: 1. gluster volume info 2. statedump of the fuse process when it hangs Thanks, Nithya On 9 August 2018 at 18:24, mabi wrote: > Hello, > > I recently upgraded my GlusterFS replica 2+1 (aribter) to version 3.12.12 > and now I see a weird behaviour on my

Re: [Gluster-users] gluster 3.12 memory leak

2018-08-03 Thread Nithya Balachandran
Hi, What version of gluster were you using before you upgraded? Regards, Nithya On 3 August 2018 at 16:56, Alex K wrote: > Hi all, > > I am using gluster 3.12.9-1 on ovirt 4.1.9 and I have observed consistent > high memory use which at some point renders the hosts unresponsive. This >

Re: [Gluster-users] Rebalance taking > 2 months

2018-08-01 Thread Nithya Balachandran
018 at 11:40 AM, Nithya Balachandran > wrote: > >> >> >> On 31 July 2018 at 19:44, Rusty Bower wrote: >> >>> I'll figure out what hasn't been rebalanced yet and run the script. >>> >>> There's only a single client accessing this gluster

Re: [Gluster-users] Rebalance taking > 2 months

2018-07-31 Thread Nithya Balachandran
til the disk space used reduces >>on the older bricks. >> >> This is a very simple script. Let me know how it works - we can always >> tweak it for your particular data set. >> >> >> >and performance is basically garbage while it rebalances

Re: [Gluster-users] Rebalance taking > 2 months

2018-07-31 Thread Nithya Balachandran
uly 2018 at 22:18, Nithya Balachandran wrote: > I have not documented this yet - I will send you the steps tomorrow. > > Regards, > Nithya > > On 30 July 2018 at 20:23, Rusty Bower wrote: > >> That would be awesome. Where can I find these? >> >> Rusty >&

Re: [Gluster-users] Rebalance taking > 2 months

2018-07-30 Thread Nithya Balachandran
I have not documented this yet - I will send you the steps tomorrow. Regards, Nithya On 30 July 2018 at 20:23, Rusty Bower wrote: > That would be awesome. Where can I find these? > > Rusty > > Sent from my iPhone > > On Jul 30, 2018, at 03:40, Nithya Balachandran

Re: [Gluster-users] Questions on adding brick to gluster

2018-07-30 Thread Nithya Balachandran
Hi, On 30 July 2018 at 00:45, Pat Haley wrote: > > Hi All, > > We are adding a new brick (91TB) to our existing gluster volume (328 TB). > The new brick is on a new physical server and we want to make sure that we > are doing this correctly (the existing volume had 2 bricks on a single >
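For completeness, expanding a distribute volume by one brick on a new server generally follows this pattern (names below are placeholders, not taken from Pat's setup):

  # gluster peer probe newserver                                      # add the new server to the trusted pool
  # gluster volume add-brick myvol newserver:/bricks/brick1/myvol     # grow the volume by one brick
  # gluster volume rebalance myvol start                              # spread existing data onto the new brick
  # gluster volume rebalance myvol status

The rebalance step is what makes existing directories start placing and migrating files onto the new brick; a plain add-brick on its own does not move any data.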

Re: [Gluster-users] Rebalance taking > 2 months

2018-07-30 Thread Nithya Balachandran
shoot you the log files offline :) >> >> Thanks! >> Rusty >> >> On Mon, Jul 23, 2018 at 3:12 AM, Nithya Balachandran > > wrote: >> >>> Hi Rusty, >>> >>> Sorry I took so long to get back to you. >>> >>> Which is th

Re: [Gluster-users] Rebalance taking > 2 months

2018-07-23 Thread Nithya Balachandran
processed=43108318798 tmp_cnt = > 55419279917056,rate_processed=446560.991961, > elapsed = 96534.00 > [2018-07-16 17:38:08.899791] I [dht-rebalance.c:5130:gf_defrag_status_get] > 0-glusterfs: TIME: Estimated total time to complete (size)= 124102375 > seconds, seconds left = 124005

Re: [Gluster-users] Rebalance taking > 2 months

2018-07-16 Thread Nithya Balachandran
If possible, please send the rebalance logs as well. On 16 July 2018 at 10:14, Nithya Balachandran wrote: > Hi Rusty, > > We need the following information: > >1. The exact gluster version you are running >2. gluster volume info >3. gluster rebalance statu
