client.event-threads: 4
server.event-threads: 4
performance.cache-invalidation: on
diagnostics.brick-log-level: WARNING
diagnostics.client-log-level: WARNING
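Tuning options like the ones above are applied per volume with `gluster volume set`; a sketch, assuming the volume name atlasglust used elsewhere in this thread:

```shell
# Apply the options quoted above to the volume (volume name is an assumption):
gluster volume set atlasglust client.event-threads 4
gluster volume set atlasglust server.event-threads 4
gluster volume set atlasglust performance.cache-invalidation on
gluster volume set atlasglust diagnostics.brick-log-level WARNING
gluster volume set atlasglust diagnostics.client-log-level WARNING

# Verify the effective values:
gluster volume get atlasglust all | grep -E 'event-threads|cache-invalidation|log-level'
```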
Thanks
On Mon, Feb 4, 2019 at 11:37 AM Nithya Balachandran
wrote:
> Hi,
>
>
> On Mon, 4 Feb 2019 at 16:39, mohammad kashif
>
> Once you run a commit, the brick will no longer be part of the volume and
> you will not be able to access those files via the client.
> Do you have sufficient space on the remaining bricks for the files on the
> removed brick?
>
> Regards,
> Nithya
>
> On Mon, 4 Feb 2019 at 03:50,
Hi
I have a pure distributed gluster volume with nine nodes and trying to
remove one node, I ran
gluster volume remove-brick atlasglust nodename:/glusteratlas/brick007/gv0 start
It completed but with around 17000 failures
Node Rebalanced-files size scanned failures
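Before committing, the failures can be investigated; a sketch using the volume and brick names from the message above (the rebalance log path can vary by version):

```shell
# Per-node progress and failure counts of the remove-brick migration:
gluster volume remove-brick atlasglust nodename:/glusteratlas/brick007/gv0 status

# The files that failed to migrate are listed in the rebalance log on the
# node being removed (path is an assumption; check /var/log/glusterfs/):
less /var/log/glusterfs/atlasglust-rebalance.log

# Only commit once the failures are understood -- after the commit the brick
# and any files still on it are no longer reachable through the client:
# gluster volume remove-brick atlasglust nodename:/glusteratlas/brick007/gv0 commit
```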
>
>
> On Fri, 4 Jan 2019 at 15:48, mohammad kashif
> wrote:
>
Hi
I have updated our distributed gluster storage from 3.12.9-1 to 4.1.6-1.
The existing cluster had seven servers totalling in around 450 TB. OS is
Centos7. The update went OK and I could access files.
Then I added two more servers of 90TB each to the cluster and started fix-layout:
gluster volume rebalance atlasglust fix-layout start
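Fix-layout progress can then be watched per node; a sketch, assuming the volume name atlasglust:

```shell
# Shows scanned counts and per-node state of the running fix-layout task:
gluster volume rebalance atlasglust status
```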
different crash. How often does it happen?
>
>
> We have managed to reproduce the first crash you reported and a bug has
> been filed at [1].
> We will work on a fix for this.
>
>
> Regards,
> Nithya
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1593199
>
rp) at the
>>> same offset (of value 2). This may be a bug in DHT (winding down readdirp
>>> with wrong offset) or in readdir-ahead (populating incorrect offset values
>>> in dentries it returns as readdirp response).
>>>
>>
>> It looks to be a corru
>>
>> Can you send me the FUSE volfiles for the volume atlasglust? They will
>> be in /var/lib/glusterd/vols/atlasglust/ on any of the gluster servers
>> hosting the volume and called *.tcp-fuse.vol.
>>
>
> Can you also send the same files after enabling parallel-readdir?
>>> you encountered just a while back?
>>> i.e. recursive readdir() response definition in the XDR
>>>
>>> [1] http://www-pnp.physics.ox.ac.uk/~mohammad/backtrace.log
>>>
>>>
>>> On Wed, Jun 13, 2018 at 4:29 PM, mohammad kashif
>>> wrote:
> Kashif,
> FYI: http://debuginfo.centos.org/centos/6/storage/x86_64/
>
>
> On Wed, Jun 13, 2018 at 3:21 PM, mohammad kashif
> wrote:
>
>> Hi Milind
>>
>> There is no glusterfs-debuginfo available for gluster-3.12 from
>> http://mirror.centos.org/centos/6/stor
I can't find a debug package for glusterfs-fuse either.
Thanks from the pit of despair ;)
Kashif
On Tue, Jun 12, 2018 at 5:01 PM, mohammad kashif
wrote:
> Hi Milind
>
> I will send you links for logs.
>
> I collected these core dumps at the client, and there is no glusterd process
Or maybe just the lines with the crash backtrace
>
> Also, you've mentioned that you straced glusterd, but when you ran gdb,
> you ran it over /usr/sbin/glusterfs
>
>
> On Tue, Jun 12, 2018 at 8:19 PM, Vijay Bellur wrote:
>
>>
>>
>> On Tue, Jun 12,
Kashif
On Tue, Jun 12, 2018 at 3:49 PM, Vijay Bellur wrote:
>
>
> On Tue, Jun 12, 2018 at 7:40 AM, mohammad kashif
> wrote:
>
>> Hi Milind
>>
>> The operating system is Scientific Linux 6 which is based on RHEL6. The
>> cpu arch is Intel x86_64.
>> I'll really appreciate any help as the whole file system has become
>> unusable
>>
>> Thanks
>>
>> Kashif
>>
>>
>>
>>
>> On Tue, Jun 12, 2018 at 12:26 PM, Milind Changire
>> wrote:
>>
>>> Kashif,
>>> You can change the log-level to DEBUG instead of TRACE.
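A minimal sketch of switching the client log level and restoring it afterwards, assuming the volume name atlasglust:

```shell
# Raise client-side logging for the debugging session:
gluster volume set atlasglust diagnostics.client-log-level DEBUG

# ...reproduce the unmount, collect the client log, then restore:
gluster volume set atlasglust diagnostics.client-log-level WARNING
```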
>
>
>
> On Tue, Jun 12, 2018 at 3:37 PM, mohammad kashif
> wrote:
>
>> Hi Vijay
>>
>> Now it is unmounting every 30 mins !
>>
>> The server log at /var/log/glusterfs/bricks/glusteratl
On Mon, Jun 11, 2018 at 11:52 PM, Vijay Bellur wrote:
>
>
> On Mon, Jun 11, 2018 at 8:50 AM, mohammad kashif
> wrote:
>
>> Hi
>>
>> Since I have updated our gluster server and client to latest version
>> 3.12.9-1, I am having this issue of gluster getting unmounted
Hi
Since I have updated our gluster server and client to latest version
3.12.9-1, I am having this issue of gluster getting unmounted from client
very regularly. It was not a problem before update.
It's a distributed file system with no replication. We have seven servers
totaling around 480 TB of data.
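When a FUSE mount drops this regularly, remounting with an explicit log level and log file makes the disconnect easier to trace; the hostname, volume name, and mount path below are placeholders:

```shell
# Remount with verbose client-side logging to catch the disconnect reason;
# server1, atlasglust and /mnt/atlas are placeholder names.
mount -t glusterfs \
  -o log-level=DEBUG,log-file=/var/log/glusterfs/atlas-client.log \
  server1:/atlasglust /mnt/atlas
```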
> *From:* gluster-users-boun...@gluster.org on behalf of Alex Chekholko
> *Sent:* 01 May 2018 18:45
> *To:* mohammad kashif
> *Cc:* gluster-users
> *Subject:* Re: [Gluster-users] Usage monitoring per user
>
> Hi,
>
> There are several programs
Hi
Is there any easy way to find usage per user in Gluster? We have 300 TB of
storage with almost 100 million files. Running du takes too much time. Are
people aware of any other tool which can be used to break up storage per
user?
Thanks
Kashif
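One cheaper-than-du approach is a single filesystem walk that groups file sizes by owner; a sketch assuming GNU find (the mount point you pass in is up to you):

```shell
# usage_by_owner DIR: sum file sizes per owner under DIR, one line per user.
# Uses GNU find's -printf to emit "owner size" pairs, then aggregates in awk.
usage_by_owner() {
  find "$1" -type f -printf '%u %s\n' \
    | awk '{u[$1] += $2} END {for (o in u) printf "%s\t%s\n", o, u[o]}'
}

# e.g. usage_by_owner /mnt/atlasglust   # mount point is a placeholder
```

With ~100 million files a single walk is still slow; running this on each brick's local filesystem in parallel and summing the results avoids funneling the whole scan through one FUSE client.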
Hi Nithya
Thanks for your quick response. I am planning to run fix-layout first
and then rebalance. The volume is around 88% full.
Cheers
Kashif
On Wed, Dec 13, 2017 at 1:41 PM, Nithya Balachandran
wrote:
>
>
> On 13 December 2017 at 17:34, mohammad kashif
> wrote:
>
Hi
I have a five node 300 TB distributed gluster volume with zero
replication. I am planning to add two more servers which will add around
120 TB. After fixing the layout, can I rebalance the volume while clients
are online and accessing the data?
Thanks
Kashif
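For reference, the usual expand-then-rebalance sequence looks like the sketch below, and rebalance runs with the volume online; the server hostnames and brick paths here are placeholders:

```shell
# Add the two new bricks to the distributed volume:
gluster volume add-brick atlasglust \
  server6:/glusteratlas/brick006/gv0 \
  server7:/glusteratlas/brick007/gv0

# Fix the layout first so new files can land on the new bricks:
gluster volume rebalance atlasglust fix-layout start
gluster volume rebalance atlasglust status   # wait until fix-layout completes

# Then migrate existing data:
gluster volume rebalance atlasglust start
gluster volume rebalance atlasglust status
```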
Hi
I am running a 400TB five node purely distributed gluster setup. I am
troubleshooting an issue where some times files creation fails. I found
that volume status is not working
gluster volume status
Another transaction is in progress for atlasglust. Please try again after
sometime.
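That message means another cluster-wide operation currently holds the volume lock; the glusterd log (/var/log/glusterfs/glusterd.log) on each node usually shows which operation it is. A minimal retry sketch while waiting for the lock to clear:

```shell
# Retry volume status a few times; the lock is released when the
# competing transaction finishes or times out.
for i in $(seq 1 10); do
  gluster volume status atlasglust && break
  sleep 30
done
```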
When I tri
the mount point after
> enabling readdir-optimize.
>
> Regards,
> Vijay
>
>
> On 07/11/2017 11:06 AM, mohammad kashif wrote:
>
>> Hi Vijay and Experts
>>
>> I didn't want to experiment with my production setup so started a
>> parallel system with t
>
> Thanks,
> Vijay
>
> [1] https://gluster.readthedocs.io/en/latest/release-notes/3.9.0/
>
> [2] https://github.com/gluster/glusterfs/issues/166
>
> On Fri, Jun 16, 2017 at 2:49 PM, mohammad kashif
> wrote:
>
Hi Vijay
Did you manage to look into the gluster profile logs?
Thanks
Kashif
On Mon, Jun 12, 2017 at 11:40 AM, mohammad kashif
wrote:
> Hi Vijay
>
> I have enabled client profiling and used this script
> https://github.com/bengland2/gluster-profile-analysis/blob/
> master/gv
Kashif
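Server-side profiling stats can also be captured directly with the profile subcommands; a sketch, assuming the volume name atlasglust:

```shell
# Turn on per-brick I/O statistics collection:
gluster volume profile atlasglust start

# ...run the slow workload, then dump cumulative per-brick FOP latencies:
gluster volume profile atlasglust info

# Stop collecting when done:
gluster volume profile atlasglust stop
```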
On Fri, Jun 9, 2017 at 2:34 PM, Vijay Bellur wrote:
> Can you please provide more details about your volume configuration and
> the version of gluster that you are using?
>
> Regards,
> Vijay
>
> On Fri, Jun 9, 2017 at 5:35 PM, mohammad kashif
> wrote:
>
>>
Hi
I have just moved our 400 TB HPC storage from Lustre to Gluster. It is part
of a research institute, and users have files ranging from very small to big (a
few KB to 20 GB). Our setup consists of 5 servers, each with 96 TB of RAID 6
disks. All servers are connected through 10G ethernet, but not all clients are.
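For context, a purely distributed (no-replica) volume over five such servers would be created along these lines; the hostnames and brick paths here are placeholders, not the actual ones:

```shell
# One brick per server, distribute-only (no replication):
gluster volume create atlasglust \
  server1:/glusteratlas/brick001/gv0 \
  server2:/glusteratlas/brick002/gv0 \
  server3:/glusteratlas/brick003/gv0 \
  server4:/glusteratlas/brick004/gv0 \
  server5:/glusteratlas/brick005/gv0
gluster volume start atlasglust
```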