Hi,
For the memory increase, please capture statedumps of the process at
hourly intervals and send them across.
https://docs.gluster.org/en/latest/Troubleshooting/statedump/ describes how
to generate a statedump for the client process.
Regards,
Nithya
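For reference, a minimal sketch of capturing hourly statedumps of a FUSE client. The `pgrep` pattern and the dump count are assumptions; verify the client PID with `ps` first, and adjust for your system:

```shell
# Sketch only: send SIGUSR1 to the glusterfs client process once an hour.
# SIGUSR1 asks the process to write a statedump; it does not kill it.
DUMPDIR=$(gluster --print-statedumpdir)    # usually /var/run/gluster
CLIENT_PID=$(pgrep -f glusterfs | head -n1)  # confirm this is the fuse mount

for i in 1 2 3; do                         # three dumps, one hour apart
    kill -SIGUSR1 "$CLIENT_PID"
    sleep 3600
done
ls "$DUMPDIR"/glusterdump."$CLIENT_PID".dump.*
```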
On Wed, 13 Nov 2019 at 05:18, Jamie
On Thu, 7 Nov 2019 at 15:15, Shreyansh Shah
wrote:
> Hi,
> Running distributed gluster 5.10 with 6 node and 2 bricks on each node (12
> in total)
> Due to some reason the files under /.glusterfs were deleted.
> Post that we have loads of broken symlinks, but the data on the disks exist
> Due to
Fermi National Accelerator Laboratory
> www.fnal.gov
> www.scientificlinux.org
>
>
> ________
> From: Nithya Balachandran
> Sent: Thursday, September 19, 2019 10:14 PM
> To: Patrick Riehecky
> Cc: gluster-users
> Subject: Re: [Gluster-user
On Thu, 19 Sep 2019 at 15:40, Milewski Daniel
wrote:
> I've observed an interesting behavior in Gluster 5.6. I had a file
> which was placed on an incorrect subvolume (apparently by the rebalancing
> process). I could stat and read the file just fine over the FUSE mount
> point, with this entry
Hi Pat,
Do you still see the problem of missing files? If yes please provide the
following :
1. gluster volume info
2. ls -l of the directory containing the missing files from the mount point
and from the individual bricks.
Regards,
Nithya
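The requested information might be gathered along these lines; the volume name, mount point, and brick path below are placeholders, not taken from this thread:

```shell
# 1. Volume configuration
gluster volume info tank

# 2. Directory listing as seen through the FUSE mount
ls -l /mnt/glusterfs/problem-dir

# 3. The same directory as stored on a brick (repeat on every brick host)
ls -l /bricks/brick1/problem-dir
```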
On Thu, 29 Aug 2019 at 18:57, Pat Riehecky wrote:
ff986c54ce0] )
> 0-tank-client-10: forced unwinding frame type(GlusterFS 3.3)
> op(FXATTROP(34)) called at 2019-09-08 15:40:44.040333 (xid=0x7f8cfac)
>
> Does this type of failure cause data corruption? What is the best course
> of action at this point?
>
> Thanks,
>
>
re not doing anything.
Hope this helps.
Regards,
Nithya
> behaviour.
>
> >
> >
>
> >Regards,
>
> >
>
> >Nithya
>
>
> Best Regards,
> Strahil Nikolov
> On Sep 9, 2019 06:36, Nithya Balachandran wrote:
>
>
>
> On Sat,
nly a single node per replica set would migrate files in
the version used in this case.
Regards,
Nithya
Best Regards,
> Strahil Nikolov
>
> On Friday, 6 September 2019 at 15:29:20 GMT+3, Herb Burnswell <
> herbert.burnsw...@gmail.com> wrote:
>
>
>
>
> On Th
idance..
>
>
What is the output of the rebalance status command?
Can you check if there are any errors in the rebalance logs on the node on
which you see rebalance activity?
If there are a lot of small files on the volume, the rebalance is expected
to take time.
Regards,
Nithya
On Sat, 31 Aug 2019 at 22:59, Herb Burnswell
wrote:
> Thank you for the reply.
>
> I started a rebalance with force on serverA as suggested. Now I see
> 'activity' on that node:
>
> # gluster vol rebalance tank status
> Node Rebalanced-files size
>
Hi,
This is the expected behaviour for a distribute volume. Files that hash to
a brick that is down will not be created. This is to prevent issues in case
the file already exists on that brick.
To prevent this, please use distribute-replicate volumes.
Regards,
Nithya
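As an illustration of the suggestion above: a distribute-replicate volume backs each distribute subvolume with a replica pair, so a file that hashes to a pair with one brick down can still be created on the surviving replica. A sketch with placeholder hostnames and paths:

```shell
# 2 x 2 distribute-replicate: two replica pairs behind one distribute layer.
gluster volume create myvol replica 2 \
    server1:/bricks/b1 server2:/bricks/b1 \
    server3:/bricks/b2 server4:/bricks/b2
gluster volume start myvol
```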
On Thu, 8 Aug 2019 at
er.org/en/latest/Administrator%20Guide/Accessing%20Gluster%20from%20Windows/
>
> Diego
>
>
>
> On Mon, Jul 29, 2019 at 11:52 PM Nithya Balachandran
> wrote:
>
>>
>> Hi Diego,
>>
>> Please do the following:
>>
>> gluster v get readdir-ahead
ill the actual process or simply trigger the dump? Which
> process should I kill? The brick process in the system or the fuse mount?
>
> Diego
>
> On Mon, Jul 29, 2019, 23:27 Nithya Balachandran
> wrote:
>
>>
>>
On Tue, 30 Jul 2019 at 05:44, Diego Remolina wrote:
> Unfortunately statedump crashes on both machines, even freshly rebooted.
>
Do you see any statedump files in /var/run/gluster? This looks more like
the gluster cli crashed.
>
> [root@ysmha01 ~]# gluster --print-statedumpdir
>
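A quick way to check, assuming the default statedump layout (paths may differ on your distribution):

```shell
# Where does glusterd expect statedumps to land?
DUMPDIR=$(gluster --print-statedumpdir)    # typically /var/run/gluster

# Confirm the directory exists and list any dumps already written.
ls -ld "$DUMPDIR"
ls -l "$DUMPDIR"/glusterdump.* 2>/dev/null
```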
but I wasn't sure.
>
> Thanks,
> -Matthew
>
> --
> Matthew Benstead
> System Administrator
> Pacific Climate Impacts Consortium <https://pacificclimate.org/>
> University of Victoria, UH1
> PO Box 1800, STN CSC
> Victoria, BC, V8W 2Y2
> Phone: +1-250-721-8432
>
> On 7/24/19 9:30 PM, Nithya Balachandran wrote:
>
>
>
> On Wed, 24 Jul 2019 at 22:1
62T 77% /storage
> [root@gluster07 ~]# getfattr --absolute-names -m . -d -e hex
> /mnt/raid6-storage/storage/
> # file: /mnt/raid6-storage/storage/
>
> security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a756e6c6162656c65645f743a733000
> trusted.gfid=0x00
01
>
> trusted.glusterfs.6f95525a-94d7-4174-bac4-e1a18fe010a2.xtime=0x5d307baa00023ec0
> trusted.glusterfs.quota.dirty=0x3000
>
> trusted.glusterfs.quota.size.2=0x1b71d5279e763e320005cd53
> trusted.glusterfs.volume-id=0x6f95525a94d74174bac4e1a18fe010a2
>
Did you see this behaviour with previous Gluster versions?
Regards,
Nithya
On Wed, 3 Jul 2019 at 21:41, wrote:
> Am I alone having this problem ?
>
> ----- Original Message -----
> From: n...@furyweb.fr
> To: "gluster-users"
> Sent: Friday, 21 June 2019 09:48:47
> Subject: [Gluster-users] Parallel
ored. Ravi and Krutika, please take a look at the other files.
Regards,
Nithya
On Fri, 28 Jun 2019 at 19:56, Dave Sherohman wrote:
> On Thu, Jun 27, 2019 at 12:17:10PM +0530, Nithya Balachandran wrote:
> > There are some edge cases that may prevent a file from being migrated
> > duri
On Fri, 28 Jun 2019 at 14:34, Dave Sherohman wrote:
> On Thu, Jun 27, 2019 at 12:17:10PM +0530, Nithya Balachandran wrote:
> > On Tue, 25 Jun 2019 at 15:26, Dave Sherohman wrote:
> > > My objective is to remove nodes B and C entirely.
> > >
> > > First up is
On Thu, 27 Jun 2019 at 12:17, Nithya Balachandran
wrote:
> Hi,
>
>
> On Tue, 25 Jun 2019 at 15:26, Dave Sherohman wrote:
>
>> I have a 9-brick, replica 2+A cluster and plan to (permanently) remove
>> one of the three subvolumes. I think I've worked out how to do it,
Hi,
On Tue, 25 Jun 2019 at 15:26, Dave Sherohman wrote:
> I have a 9-brick, replica 2+A cluster and plan to (permanently) remove
> one of the three subvolumes. I think I've worked out how to do it, but
> want to verify first that I've got it right, since downtime or data loss
> would be Bad
On Sat, 8 Jun 2019 at 01:29, Alan Orth wrote:
> Dear Ravi,
>
> In the last week I have completed a fix-layout and a full INDEX heal on
> this volume. Now I've started a rebalance and I see a few terabytes of data
> going around on different bricks since yesterday, which I'm sure is good.
>
>
cted to glusterfs community.
>
> Regards,
> Abhishek
>
> On Thu, Jun 6, 2019, 16:08 Nithya Balachandran
> wrote:
>
>> Hi Abhishek,
>>
>> I am still not clear as to the purpose of the tests. Can you clarify why
>> you are using valgrind and why you think t
the below script to see the memory increase while the
> above script is running in the background.
>
> *ps_mem.py*
>
> I am attaching the script files as well as the result got after testing
> the scenario.
>
> On Wed, Jun 5, 2019 at 7:23 PM Nithya Balachandran
>
Hi,
Writing to a volume should not affect glusterd. The stack you have shown in
the valgrind output looks like the memory used to initialise the structures
glusterd uses; it will be freed only when glusterd is stopped.
Can you provide more details to what it is you are trying to test?
Regards,
Nithya
On Tue,
Hi Brandon,
Please send the following:
1. the gluster volume info
2. Information about which brick was removed
3. The rebalance log file for all nodes hosting removed bricks.
Regards,
Nithya
On Fri, 24 May 2019 at 19:33, Ravishankar N wrote:
> Adding a few DHT folks for some possible
On Fri, 17 May 2019 at 06:01, David Cunningham
wrote:
> Hello,
>
> We're adding an arbiter node to an existing volume and having an issue.
> Can anyone help? The root cause error appears to be
> "----0001: failed to resolve (Transport
> endpoint is not connected)", as
On Thu, 16 May 2019 at 14:17, Paul van der Vlis wrote:
> Op 16-05-19 om 05:43 schreef Nithya Balachandran:
> >
> >
> > On Thu, 16 May 2019 at 03:05, Paul van der Vlis > <mailto:p...@vandervlis.nl>> wrote:
> >
> > Op 15-05-19 om 15:45 s
On Thu, 16 May 2019 at 03:05, Paul van der Vlis wrote:
> Op 15-05-19 om 15:45 schreef Nithya Balachandran:
> > Hi Paul,
> >
> > A few questions:
> > Which version of gluster are you using?
>
> On the server and some clients: glusterfs 4.1.2
> On a new c
Hi Paul,
A few questions:
Which version of gluster are you using?
Did this behaviour start recently? As in were the contents of that
directory visible earlier?
Regards,
Nithya
On Wed, 15 May 2019 at 18:55, Paul van der Vlis wrote:
> Hello Strahil,
>
> Thanks for your answer. I don't find the
Hi Patrick,
Did this start only after the upgrade?
How do you determine which brick process to kill?
Are there a lot of files to be healed on the volume?
Can you provide a tcpdump of the slow listing from a separate test client
mount ?
1. Mount the gluster volume on a different mount point
Hi,
If you know which directories are problematic, please check and see if the
permissions on them are correct on the individual bricks.
Please also provide the following:
- *gluster volume info* for the volume
- The gluster version you are running
regards,
Nithya
On Wed, 27 Mar 2019 at
On Wed, 27 Mar 2019 at 20:27, Poornima Gurusiddaiah
wrote:
> This feature is not under active development as it was not used widely.
> AFAIK its not supported feature.
> +Nithya +Raghavendra for further clarifications.
>
This is not actively supported - there has been no work done on this
On Wed, 27 Mar 2019 at 21:47, wrote:
> Hello Amar and list,
>
>
>
> I wanted to follow-up to confirm that upgrading to 5.5 seem to fix the
> “Transport endpoint is not connected failures” for us.
>
>
>
> We did not have any of these failures in this past weekend backups cycle.
>
>
>
> Thank you
On Wed, 20 Mar 2019 at 22:59, Jim Kinney wrote:
> I have half a zillion broken symlinks in the .glusterfs folder on 3 of 11
> volumes. It doesn't make sense to me that a GFID should link like some of
> the ones below:
>
>
Hi Artem,
I think you are running into a different crash. The ones reported which
were prevented by turning off write-behind are now fixed.
We will need to look into the one you are seeing to see why it is happening.
Regards,
Nithya
On Tue, 19 Mar 2019 at 20:25, Artem Russakovskii
wrote:
>
Hi,
What is the output of the gluster volume info ?
Thanks,
Nithya
On Wed, 20 Mar 2019 at 01:58, Pablo Schandin wrote:
> Hello all!
>
> I had a volume with only a local brick running vms and recently added a
> second (remote) brick to the volume. After adding the brick, the heal
> command
On Tue, 19 Feb 2019 at 15:18, Hu Bert wrote:
> Hello @ll,
>
> one of our backend developers told me that, in the tomcat logs, he
> sees errors that directories on a glusterfs mount aren't readable.
> Within tomcat the errors look like this:
>
> 2019-02-19 07:39:27,124 WARN Path
>
> balancing the new brick and will resync the files onto the full gluster
> volume when that completes
>
> On Wed, Feb 13, 2019, 10:28 PM Nithya Balachandran
> wrote:
>
>>
>>
>> On Tue, 12 Feb 2019 at 08:30, Patrick Nixon wrote:
>>
>>> T
On Tue, 12 Feb 2019 at 08:30, Patrick Nixon wrote:
> The files are being written to via the glusterfs mount (and read on the
> same client and a different client). I try not to do anything on the nodes
> directly because I understand that can cause weirdness. As far as I can
> tell, there
gt;>> beerpla.net | +ArtemRussakovskii
>>>>>> <https://plus.google.com/+ArtemRussakovskii> | @ArtemR
>>>>>> <http://twitter.com/ArtemR>
>>>>>>
>>>>>>
>>>>>> On Fri, Feb 8, 2019 at 7:22 PM Ra
alidation: on
>>> performance.stat-prefetch: on
>>> features.cache-invalidation-timeout: 600
>>> features.cache-invalidation: on
>>> cluster.readdir-optimize: on
>>> performance.io-thread-count: 32
>>> server.event-threads: 4
>>&g
ork.inode-lru-limit: 50
>>> performance.md-cache-timeout: 600
>>> performance.cache-invalidation: on
>>> performance.stat-prefetch: on
>>> features.cache-invalidation-timeout: 600
>>> features.cache-invalidation: on
>>> cluster.readdir-optimize: on
>>
"reaching this limit (0 means 'unlimited')",
},
>
> This seems to be the default already? Set it explicitly?
>
> Regards,
> Hubert
>
> Am Mi., 6. Feb. 2019 um 09:26 Uhr schrieb Nithya Balachandran
> :
> >
> > Hi,
> >
> > The client
Hi,
The client logs indicate that the mount process has crashed.
Please try mounting the volume with the volume option lru-limit=0 and see
if it still crashes.
Thanks,
Nithya
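A sketch of mounting with the suggested option; the server, volume, and mount point names are placeholders:

```shell
# lru-limit=0 disables the inode LRU limit on the client (0 = unlimited).
mount -t glusterfs -o lru-limit=0 server1:/myvol /mnt/glusterfs

# Or persistently via /etc/fstab:
# server1:/myvol  /mnt/glusterfs  glusterfs  defaults,lru-limit=0  0 0
```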
On Thu, 24 Jan 2019 at 12:47, Hu Bert wrote:
> Good morning,
>
> we currently transfer some data to a new glusterfs
Hi Artem,
Do you still see the crashes with 5.3? If yes, please try mounting the volume
using the mount option lru-limit=0 and see if that helps. We are looking
into the crashes and will update when we have a fix.
Also, please provide the gluster volume info for the volume in question.
regards,
566 0 201completed
> 0:00:08
>
> Is the rebalancing option working fine? Why did gluster throw the error
> saying that "Error : Request timed out"?
> .On Tue, Feb 5, 2019 at 4:23 PM Nithya Balachandran
> wrote:
>
>> Hi,
>> Pleas
Hi,
Please provide the exact step at which you are seeing the error. It would
be ideal if you could copy-paste the command and the error.
Regards,
Nithya
On Tue, 5 Feb 2019 at 15:24, deepu srinivasan wrote:
> HI everyone. I am getting "Error : Request timed out " while doing
> rebalance . I
and for very long time there
> was no failures and then at some point these 17000 failures appeared and it
> stayed like that.
>
> Thanks
>
> Kashif
>
>
>
>
>
> Let me explain a little bit of background.
>
>
> On Mon, Feb 4, 2019 at 5:09 AM Nithya
Hi,
The status shows quite a few failures. Please check the rebalance logs to
see why that happened. We can decide what to do based on the errors.
Once you run a commit, the brick will no longer be part of the volume and
you will not be able to access those files via the client.
Do you have
On Wed, 30 Jan 2019 at 19:12, Gudrun Mareike Amedick <
g.amed...@uni-luebeck.de> wrote:
> Hi,
>
> a bit additional info inlineAm Montag, den 28.01.2019, 10:23 +0100 schrieb
> Frank Ruehlemann:
> > Am Montag, den 28.01.2019, 09:50 +0530 schrieb Nithya Balachandran:
> &g
On Fri, 25 Jan 2019 at 20:51, Gudrun Mareike Amedick <
g.amed...@uni-luebeck.de> wrote:
> Hi all,
>
> we have a problem with a distributed dispersed volume (GlusterFS 3.12). We
> have files that lost their permissions or gained sticky bits. The files
> themselves seem to be okay.
>
> It looks
On Tue, 22 Jan 2019 at 11:42, Amar Tumballi Suryanarayan <
atumb...@redhat.com> wrote:
>
>
> On Thu, Jan 10, 2019 at 1:56 PM Hu Bert wrote:
>
>> Hi,
>>
>> > > We ara also using 10TB disks, heal takes 7-8 days.
>> > > You can play with "cluster.shd-max-threads" setting. It is default 1 I
>> > >
On Fri, 18 Jan 2019 at 14:25, Mauro Tridici wrote:
> Dear Users,
>
> I’m facing with a new problem on our gluster volume (v. 3.12.14).
> Sometime it happen that “ls” command execution, in a specified directory,
> return empty output.
> “ls” command output is empty, but I know that the involved
I don't see write failures in the log but I do see fallocate failing with
EIO.
[2019-01-07 19:16:44.846187] W [MSGID: 109011]
[dht-layout.c:163:dht_layout_search] 0-gv1-dht: no subvolume for hash
(value) = 1285124113
[2019-01-07 19:16:44.846194] D [MSGID: 0]
On Wed, 9 Jan 2019 at 19:49, Dmitry Isakbayev wrote:
> I am seeing a broken file that exists on 2 out of 3 nodes. The
> application trying to use the file throws file permissions error. ls, rm,
> mv, touch all throw "Input/output error"
>
> $ ls -la
> ls: cannot access
t one hour and I can't see any
> new directories being created.
>
> Thanks
>
> Kashif
>
>
> On Fri, Jan 4, 2019 at 10:42 AM Nithya Balachandran
> wrote:
>
>>
>>
>> On Fri, 4 Jan 2019 at 15:48, mohammad kashif
>> wrote:
>>
>>> Hi
&g
On Fri, 4 Jan 2019 at 15:48, mohammad kashif wrote:
> Hi
>
> I have updated our distributed gluster storage from 3.12.9-1 to 4.1.6-1.
> The existing cluster had seven servers totalling in around 450 TB. OS is
> Centos7. The update went OK and I could access files.
> Then I added two more
e if a file is stale or not?
> these criteria are just based observations i made, moving the stale files
> manually. After removing them i was able to start the VM again..until some
> time later it hangs on another stale shard file unfortunate.
>
> Thanks Olaf
>
> Op wo 2 jan. 2019 om
On Mon, 31 Dec 2018 at 01:27, Olaf Buitelaar
wrote:
> Dear All,
>
> till now a selected group of VM's still seem to produce new stale file's
> and getting paused due to this.
> I've not updated gluster recently, however i did change the op version
> from 31200 to 31202 about a week before this
>
>
> Steve
>
> On Tue, 18 Dec 2018 at 15:37, Nithya Balachandran
> wrote:
>
>>
>>
>> On Tue, 18 Dec 2018 at 14:56, Stephen Remde
>> wrote:
>>
>>> Nithya,
>>>
>>> I've realised, I will not have enough space on the other br
Brick9: 10.0.0.42:/export/md3/brick
> Brick10: 10.0.0.43:/export/md1/brick
> Options Reconfigured:
> cluster.rebal-throttle: aggressive
> cluster.min-free-disk: 1%
> transport.address-family: inet
> performance.readdir-ahead: on
> nfs.disable: on
>
>
> Best,
>
> Steve
>
On Fri, 14 Dec 2018 at 19:10, Raghavendra Gowdappa
wrote:
>
>
> On Fri, Dec 14, 2018 at 6:38 PM Lindolfo Meira
> wrote:
>
>> It happened to me using gluster 5.0, on OpenSUSE Leap 15, during a
>> benchmark with IOR: the volume would seem normally mounted, but I was
>> unable to overwrite files,
On 6 November 2018 at 12:24, Jeevan Patnaik wrote:
> Hi Vlad,
>
> I'm still confused of gluster releases. :(
> Is 3.13 an official gluster release? It's not mentioned in
> www.gluster.org/release-schedule
>
>
3.13 is EOL. It was a short term release.
Which is more stable 3.13.2 or 3.12.6 or
On 16 October 2018 at 20:04, wrote:
> Hi,
>
> > > So we did a quick grep shared-brick-count
> > > /var/lib/glusterd/vols/data_vol1/*
> on all boxes and found that on 5 out of 6 boxes this was
> shared-brick-count=0 for all bricks on remote boxes and 1 for local bricks.
> > >
> > > Is this the
Hi,
On 16 October 2018 at 18:20, wrote:
> Hi everybody,
>
> I have created a distributed dispersed volume on 4.1.5 under centos7 like
> this a few days ago:
>
> gluster volume create data_vol1 disperse-data 4 redundancy 2 transport tcp
> \
> \
> gf-p-d-01.isec.foobar.com:/bricks/brick1/brick
> Brick29: s02-stg:/gluster/mnt10/brick
> Brick30: s03-stg:/gluster/mnt10/brick
> Brick31: s01-stg:/gluster/mnt11/brick
> Brick32: s02-stg:/gluster/mnt11/brick
> Brick33: s03-stg:/gluster/mnt11/brick
> Brick34: s01-stg:/gluster/mnt12/brick
> Brick35: s02-stg:/gluster/mnt12/brick
&
ing the
> backup from only one client) I find it a little worrying even if it’s an
> INFO log level.
>
>
>
> *From:* Nithya Balachandran [mailto:nbala...@redhat.com]
> *Sent:* 4 October 2018 09:34
> *To:* Renaud Fortier
> *Cc:* gluster-users@gluster.org
>
> *Subject:*
On 4 October 2018 at 17:39, Renaud Fortier
wrote:
> Yes !
>
> 2 clients using the same export connected to the same IP. Do you see
> something wrong with that ?
>
> Thank you
>
Not necessarily wrong. This message shows up if DHT does not find a
complete layout set on the directory when it does
Hi Mauro,
The files on s04 and s05 can be deleted safely as long as those bricks have
been removed from the volume and their brick processes are not running.
.glusterfs/indices/xattrop/xattrop-* are links to files that need to be healed.
l check every files on each removed bricks.
>
> So, if I understand, I can proceed with deletion of directories and files
> left on the bricks only if each file have T tag, right?
>
> Thank you in advance,
> Mauro
>
>
> Il giorno 03 ott 2018, alle ore 16:49, Nithya Balach
On 1 October 2018 at 15:35, Mauro Tridici wrote:
> Good morning Ashish,
>
> your explanations are always very useful, thank you very much: I will
> remember these suggestions for any future needs.
> Anyway, during the week-end, the remove-brick procedures ended
> successfully and we were able to
Hi Mauro,
Please send the rebalance logs from s04-stg. I will take a look and get
back.
Regards,
Nithya
On 28 September 2018 at 16:21, Mauro Tridici wrote:
>
> Hi Ashish,
>
> as I said in my previous message, we adopted the first approach you
> suggested (setting network.ping-timeout option
Please check the rebalance log to see why those files were not migrated.
Regards,
Nithya
On 24 September 2018 at 21:21, Jeevan Patnaik wrote:
> Hi,
>
> Yes, I did. The status shows all as completed.
>
> Regards,
> Jeevan.
>
> On Mon 24 Sep, 2018, 9:20 PM Nithy
there's any posix locks as the files are not
> accessed by any application except gluster itself.
>
> Regards,
> Jeevan.
>
>
> On Mon 24 Sep, 2018, 8:45 PM Nithya Balachandran,
> wrote:
>
>> It has been a while since I worked on tier but here is what I remember:
wing a wrong status.
>
What command did you use to detach the tier? And how did you check that the
files still exist on the hot tier?
Regards,
Nithya
>
> I am using gluster 3.12.3 and server hosts include RHEL 6.7 and 7.2 hosts
> also.
>
> Regards,
> Jeevan.
>
> On Mon 24 S
Are those files being accessed? Tiering will only promote those that have
been accessed recently.
Regards,
Nithya
On 24 September 2018 at 18:32, Jeevan Patnaik wrote:
> I see it still not promoting any files. Do we need to run any command to
> force the movement?
>
> This would be an issue if
to know if it could be a “virtually dangerous" procedure and if it will be
>> the risk of losing data :-)
>> Unfortunately, I can’t do a preventive copy of the volume data in another
>> location.
>> If it is possible, could you please illustrate the right steps needed to
>>
our suggestions.
>
> Regards,
> Mauro
>
> Il giorno 13 set 2018, alle ore 13:38, Nithya Balachandran <
> nbala...@redhat.com> ha scritto:
>
> This looks like an issue because rebalance switched to using fallocate
> which EC did not have implemented at that point.
>
> @
This looks like an issue because rebalance switched to using fallocate
which EC did not have implemented at that point.
@Pranith, @Ashish, which version of gluster had support for fallocate in EC?
Regards,
Nithya
On 12 September 2018 at 19:24, Mauro Tridici wrote:
> Dear All,
>
> I recently
On 5 September 2018 at 14:10, Nithya Balachandran
wrote:
>
>
> On 5 September 2018 at 14:02, Nithya Balachandran
> wrote:
>
>> Hi,
>>
>> Please try turning off cluster.readdir-optimize and see if it helps.
>>
>
> You can also try tu
On 5 September 2018 at 14:02, Nithya Balachandran
wrote:
> Hi,
>
> Please try turning off cluster.readdir-optimize and see if it helps.
>
You can also try turning off parallel-readdir.
> If not, please send us the client mount logs and a tcpdump of when the
> *ls* is perform
Hi,
Please try turning off cluster.readdir-optimize and see if it helps.
If not, please send us the client mount logs and a tcpdump of when the *ls*
is performed from the client. Use the following to capture the dump:
tcpdump -i any -s 0 -w /var/tmp/dirls.pcap tcp and not port 22
Thanks,
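The two options mentioned in this thread can be toggled like this; the volume name is a placeholder:

```shell
gluster volume set myvol cluster.readdir-optimize off
gluster volume set myvol performance.parallel-readdir off
```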
Hi,
Please take statedumps of the 3.12.13 client process at intervals when the
memory is increasing and send those across.
We will also need the gluster volume info for the volume in question.
Thanks,
Nithya
On 31 August 2018 at 08:32, huting3 wrote:
> Thanks for your reply, I also test
Hi Richard,
On 29 August 2018 at 18:11, Richard Neuboeck wrote:
> Hi Gluster Community,
>
> I have problems with a glusterfs 'Transport endpoint not connected'
> connection abort during file transfers that I can replicate (all the
> time now) but not pinpoint as to why this is happening.
>
>
I agree as well. This is a bug that is impacting users.
On 14 August 2018 at 16:30, Ravishankar N wrote:
> +1
>
> Considering that master is no longer locked, it would be nice if a release
> can be made sooner. Amar sent a missing back port [1] which also fixes a
> mem leak issue on the client
e
> problem is gone (no more blocking processes such as "ls") so there must be
> an issue or bug with the quota part of gluster.
>
>
>
> ‐‐‐ Original Message ‐‐‐
> On August 10, 2018 4:19 PM, Nithya Balachandran
> wrote:
>
>
>
> On 9 August 2018 at 1
lient.event-threads: 4
>> server.event-threads: 4
>> auth.allow: 192.168.100.92
>>
>>
>>
>> 2. Sorry I have no clue how to take a "statedump" of a process on Linux.
>> Which command should I use for that? and which process would you like, the
&
Hi,
Please provide the following:
1. gluster volume info
2. statedump of the fuse process when it hangs
Thanks,
Nithya
On 9 August 2018 at 18:24, mabi wrote:
> Hello,
>
> I recently upgraded my GlusterFS replica 2+1 (aribter) to version 3.12.12
> and now I see a weird behaviour on my
Hi,
What version of gluster were you using before you upgraded?
Regards,
Nithya
On 3 August 2018 at 16:56, Alex K wrote:
> Hi all,
>
> I am using gluster 3.12.9-1 on ovirt 4.1.9 and I have observed consistent
> high memory use which at some point renders the hosts unresponsive. This
>
018 at 11:40 AM, Nithya Balachandran > wrote:
>
>>
>>
>> On 31 July 2018 at 19:44, Rusty Bower wrote:
>>
>>> I'll figure out what hasn't been rebalanced yet and run the script.
>>>
>>> There's only a single client accessing this gluster
til the disk space used reduces
>>on the older bricks.
>>
>> This is a very simple script. Let me know how it works - we can always
>> tweak it for your particular data set.
>>
>>
>> >and performance is basically garbage while it rebalances
uly 2018 at 22:18, Nithya Balachandran wrote:
> I have not documented this yet - I will send you the steps tomorrow.
>
> Regards,
> Nithya
>
> On 30 July 2018 at 20:23, Rusty Bower wrote:
>
>> That would be awesome. Where can I find these?
>>
>> Rusty
>&
I have not documented this yet - I will send you the steps tomorrow.
Regards,
Nithya
On 30 July 2018 at 20:23, Rusty Bower wrote:
> That would be awesome. Where can I find these?
>
> Rusty
>
> Sent from my iPhone
>
> On Jul 30, 2018, at 03:40, Nithya Balachandran
Hi,
On 30 July 2018 at 00:45, Pat Haley wrote:
>
> Hi All,
>
> We are adding a new brick (91TB) to our existing gluster volume (328 TB).
> The new brick is on a new physical server and we want to make sure that we
> are doing this correctly (the existing volume had 2 bricks on a single
>
shoot you the log files offline :)
>>
>> Thanks!
>> Rusty
>>
>> On Mon, Jul 23, 2018 at 3:12 AM, Nithya Balachandran > > wrote:
>>
>>> Hi Rusty,
>>>
>>> Sorry I took so long to get back to you.
>>>
>>> Which is th
processed=43108318798 tmp_cnt =
> 55419279917056,rate_processed=446560.991961,
> elapsed = 96534.00
> [2018-07-16 17:38:08.899791] I [dht-rebalance.c:5130:gf_defrag_status_get]
> 0-glusterfs: TIME: Estimated total time to complete (size)= 124102375
> seconds, seconds left = 124005
If possible, please send the rebalance logs as well.
On 16 July 2018 at 10:14, Nithya Balachandran wrote:
> Hi Rusty,
>
> We need the following information:
>
>1. The exact gluster version you are running
>2. gluster volume info
>3. gluster rebalance statu