already associated with directory
(/media/disk13/brick13/.glusterfs/61/9a/619a5949-9001-4457-94c1-8f02c45996ba/client_folder2).
Hence, both directories will share the same gfid, and this can lead to
inconsistencies.
regards
Amudhan P
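For anyone hitting the same thing: a gfid mismatch like this can be confirmed
directly on the bricks. A minimal sketch, reusing the brick path from the log
above (adjust for your own layout):

```
# Read the gfid xattr of the suspect directory on each brick in the set
getfattr -n trusted.gfid -e hex /media/disk13/brick13/client_folder2

# For directories, .glusterfs/<aa>/<bb>/<gfid> is a symlink back to the
# directory it belongs to; check where it points
ls -l /media/disk13/brick13/.glusterfs/61/9a/619a5949-9001-4457-94c1-8f02c45996ba
```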
.glusterfs and gfids of real dirs that are chained links to
> other dirs. It's still a user space symlink error. It's just compounded by
> gluster.
>
> On August 21, 2019 3:49:45 AM EDT, Amudhan P wrote:
>>
>> it is definitely an issue with Gluster; there is no symlink involved
2:22:44 AM EDT, Amudhan P wrote:
>>
>> Hi,
>>
>> Can anyone suggest what the error could be and how to fix this issue?
>>
>> regards
>> Amudhan P
>>
>> On Sat, Aug 17, 2019 at 6:59 PM Amudhan P wrote:
>>
>>> Hi,
>>>
>
Hi,
Can anyone suggest what the error could be and how to fix this issue?
regards
Amudhan P
On Sat, Aug 17, 2019 at 6:59 PM Amudhan P wrote:
> Hi,
>
> I am using Gluster version 3.10.1.
>
> Mounting volume through fuse mount and I have run the command du -hs
> "dire
wrote:
> Hm... I thought that 3.10.X is no longer supported.
>
> Can you test from a gluster client with a newer version of gluster and FUSE,
> respectively?
>
> Best Regards,
> Strahil Nikolov
> On Aug 17, 2019 16:29, Amudhan P wrote:
>
> Hi,
>
> I am using
du: WARNING: Circular directory structure.
This almost certainly means that you have a corrupted file system.
NOTIFY YOUR SYSTEM MANAGER.
The following directory is part of the cycle:
What could be the issue, and what should be done to fix this problem?
regards
Amudhan P
Does anybody know the reason for this? To me it looks like a bug.
>
> Regards
> David
>
> On Fri, Mar 1, 2019 at 08:58, Amudhan P wrote:
>
>> Hi David,
>>
>> I have also tested the bitrot signature process; by default it runs at < 250
>> KB/s.
>
Hi David,
I have also tested the bitrot signature process; by default it runs at < 250
KB/s.
regards
Amudhan P
On Fri, Mar 1, 2019 at 1:19 PM David Spisla wrote:
> Hello folks,
>
> I did some observations concerning the bitrot daemon. It seems to be that
> the bitrot signer i
Hi Poornima,
Instead of killing the process, can't we follow these steps instead (sketched
below): stop the volume, stop the service on the nodes, and then update
glusterfs?
regards
Amudhan
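A rough sketch of that offline-upgrade sequence, assuming a volume named
"volname" and systemd-managed services (both placeholders, not from this
thread):

```
gluster volume stop volname
systemctl stop glusterd            # on every node
pkill glusterfs                    # also matches glusterfsd brick processes
# ... upgrade the glusterfs packages here ...
systemctl start glusterd           # on every node
gluster volume start volname
```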
On Thu, Feb 28, 2019 at 8:16 AM Poornima Gurusiddaiah
wrote:
>
>
> On Wed, Feb 27, 2019, 11:52 PM Ingo Fischer wrote:
>
>> Hi
Hi,
What is the option to improve folder-listing speed in glusterfs 4.1.6 with a
distributed-disperse volume? (A few candidate options are sketched below.)
regards
Amudhan
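Not an authoritative answer, but the options usually suggested for
directory-listing speed are along these lines; availability varies by version,
so verify with `gluster volume set help`. "volname" is a placeholder:

```
gluster volume set volname performance.readdir-ahead on
gluster volume set volname performance.parallel-readdir on
gluster volume set volname cluster.readdir-optimize on
```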
replace
> node operation you somehow landed in this situation via an incorrect step.
> Unless you can elaborate on all the steps you have
> performed in the cluster, it'd be difficult to figure out the exact cause.
>
> On Wed, Jan 30, 2019 at 7:25 PM Amudhan P
glusterd wasn't able to resolve the brick(s) hosted on node2.
>
> Can you please pick up 0083ec0c-40bf-472a-a128-458924e56c96 file from
> /var/lib/glusterd/peers/ from node 4 and place it in the same location in
> node 3 and then restart glusterd service on node 3?
>
>
> On Thu, Ja
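For reference, the fix suggested above might look like this as shell (node
names are placeholders):

```
# Copy the missing peer file from node 4 to node 3, then restart glusterd
scp node4:/var/lib/glusterd/peers/0083ec0c-40bf-472a-a128-458924e56c96 \
    node3:/var/lib/glusterd/peers/
ssh node3 systemctl restart glusterd
```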
Is the bitrot feature in Glusterfs production-ready, or is it in a beta phase?
On Mon, Jan 14, 2019 at 12:46 PM Amudhan P wrote:
> Resending mail.
>
> I have a total of 50GB of files per node; 5 days have passed, but the bitrot
> signature process has still not completed. 20GB+ files are still pending
Ok, no problem.
On Sat 19 Jan, 2019, 7:55 AM Atin Mukherjee wrote:
> I have received them but haven't got a chance to look at them. I can only come
> back on this sometime early next week based on my schedule.
>
> On Fri, 18 Jan 2019 at 16:52, Amudhan P wrote:
>
>> Hi Atin,
>>
>
st you to share the content
> of /var/lib/glusterd from all the nodes. Also do mention on which particular
> node the glusterd service is unable to come up.
>
> On Thu, Jan 17, 2019 at 11:34 AM Amudhan P wrote:
>
>> I have created the folder in the path as said but still, servi
used as the mount path for the brick.
>
> On Wed, Jan 16, 2019 at 5:24 PM Amudhan P wrote:
>
>> Yes, I did mount the bricks, but the folder 'brick4' was still not created
>> inside the brick.
>> Do I need to create this folder? Because when I run replace-brick it wil
PM Atin Mukherjee wrote:
>
>
> On Wed, Jan 16, 2019 at 5:02 PM Amudhan P wrote:
>
>> Atin,
>> I have copied the content of 'gfs-tst' from the vol folder on another node.
>> Starting the service again fails with an error msg in the glusterd.log file.
>>
>> [20
connected.
>
> On Wed, Jan 16, 2019 at 3:49 PM Amudhan P wrote:
>
>> Hi,
>>
>> In short, when I started the glusterd service I got the following error
>> msg in the glusterd.log file on one server.
>> What needs to be done?
>>
>> error logged in gl
Hi,
In short, when I started the glusterd service I got the following error msg
in the glusterd.log file on one server.
What needs to be done?
Error logged in glusterd.log:
[2019-01-15 17:50:13.956053] I [MSGID: 100030] [glusterfsd.c:2741:main]
0-/usr/local/sbin/glusterd: Started running
Resending mail.
I have a total of 50GB of files per node; 5 days have passed, but the bitrot
signature process has still not completed. 20GB+ files are still pending
for completion.
On Fri, Jan 11, 2019 at 12:02 PM Amudhan P wrote:
> Hi,
>
> How do I increase the speed of bitrot file
Hi,
How do I increase the speed of the bitrot file signature process in
glusterfs 4.1.6?
Currently, it's processing at 250 KB/s. Is there any way to make the change
through the gluster CLI? (See the note below.)
regards
Amudhan
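For reference: as far as I know the signer rate is not directly tunable from
the CLI; the CLI tunables are on the scrubber side. A sketch with a
placeholder volume name:

```
# Scrubber tunables (these throttle scrubbing, not the signer)
gluster volume bitrot volname scrub-throttle aggressive
gluster volume bitrot volname scrub-frequency daily
```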
mount point where the
> old disk was mounted
> Step 4 :- run command "gluster v start volname force" >>>>>>>>>>>> This
> should be done using "reset-brick commit force" command. This will trigger
> the heal. Follow the link.
> Step 5 :
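The reset-brick sequence referred to above would look roughly like this
(volume, host, and brick path are placeholders; reset-brick is available from
Gluster 3.9 onward):

```
# Take the faulty brick offline
gluster volume reset-brick volname node1:/media/disk1/brick1 start
# ... replace the disk, recreate the filesystem, remount at the same path ...
# Bring the (now empty) brick back and trigger the heal
gluster volume reset-brick volname node1:/media/disk1/brick1 \
    node1:/media/disk1/brick1 commit force
```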
Hi,
I am working on Glusterfs 4.1.6 on a test machine. I am trying to replace a
faulty disk; below are the steps I took, but I wasn't successful with them.
3 Nodes, 2 disks per node, Disperse Volume 4+2 :-
Step 1 :- kill pid of the faulty brick in node
Step 2 :- running volume status, shows "N/A"
Can I ignore the warning message in the mail below and continue with the
installation?
On Thu, Dec 27, 2018 at 5:11 PM Amudhan P wrote:
> Thanks, Ravishankar, it worked.
> Also, I am getting the following warning message when running `make`; is it
> safe to ignore?
>
> dht-layout
^
../../contrib/fuse-lib/mount-common.c:64:25: warning: ignoring return value
of ‘setreuid’, declared with attribute warn_unused_result [-Wunused-result]
setreuid (ruid, -1);
^
CCLD fusermount-glusterfs
Amudhan
On Thu, Dec 27, 2018 at
Hi,
I am trying to compile & install Glusterfs-4.1.6 from the tar file, and I am
getting this error message when running `make`.
```
CC afr-self-heal-name.lo
CC afr.lo
In file included from afr.c:18:0:
afr-common.c: In function ‘afr_lookup_entry_heal’:
afr-common.c:2892:29: error:
```
>> Please upgrade it to the latest one. I am sure this
>> would have the fix.
>>
>>
>> Ashish
>>
>>
>>
>>
>> --
>> *From: *"Nithya Balachandran" <nbala...@redhat.com>
>> *To: *"Amudhan P" <amudha...@gmail.com>, "Ashish
Hi Nithya,
I have checked the gfid on all the bricks in the disperse set for the folder;
it is all the same, there is no difference.
regards
Amudhan P
On Fri, Nov 10, 2017 at 9:02 AM, Nithya Balachandran <nbala...@redhat.com>
wrote:
> Hi,
>
> Comments inline.
>
> Regards,
> Nithy
stale linkfile
/thi_fol1/thi_fol2/thi_fol3/thi_fol4/thi_fol5/thi_fol6/thi_file1 on
glustervol-disperse-77 (hashed subvol is glustervol-disperse-106)
regards
Amudhan P
Any update?
On Fri, Oct 13, 2017 at 1:14 PM, Amudhan P <amudha...@gmail.com> wrote:
> Any update?
>
> why is it marked bad?
>
> Any way to find out what happened to the file?
>
>
> On Tue, Oct 3, 2017 at 12:44 PM, Amudhan P <amudha...@gmail.com> wrote:
>
Does RIO improve folder listing and rebalance when compared to 3.x?
If yes, do you have any performance data comparing RIO and DHT?
On Thu, Nov 2, 2017 at 4:12 PM, Kaushal M <kshlms...@gmail.com> wrote:
> On Thu, Nov 2, 2017 at 4:00 PM, Amudhan P <amudha...@gmail.com> wrote:
If I upgrade from 3.10.1 to 4.0 or 4.1, will I be able to access the
volume without any issues?
I am asking because 4.0 comes with DHT2.
On Thu, Nov 2, 2017 at 2:26 PM, Kaushal M wrote:
> We're fast approaching the time for Gluster-4.0. And we would like to
>
Any update?
Why is it marked bad?
Is there any way to find out what happened to the file?
On Tue, Oct 3, 2017 at 12:44 PM, Amudhan P <amudha...@gmail.com> wrote:
>
> My volume is a distributed disperse volume, 8+2 EC.
> file1 and file2 are different files lying in the same brick. I
> Thanks and Regards,
> Kotresh HR
>
> On Mon, Sep 25, 2017 at 4:19 PM, Amudhan P <amudha...@gmail.com> wrote:
>
>> resending mail.
>>
>>
>> On Fri, Sep 22, 2017 at 5:30 PM, Amudhan P <amudha...@gmail.com> wrote:
>>
>>> ok, from bitrot code I figu
resending mail.
On Fri, Sep 22, 2017 at 5:30 PM, Amudhan P <amudha...@gmail.com> wrote:
> OK, from the bitrot code I figured out that gluster uses the sha256 hashing algo.
>
>
> Now coming to the problem: during a scrub run in my cluster some of my files
> were marked as bad in a few s
rusted.ec.version=0x1a981a99
trusted.gfid=0x10d8920e42cd42cf9448b8bf3941c192
Most of the bitrot bad files are in the set of new nodes, and the data was
uploaded using gluster 3.10.1. There are no drive issues or any kind of error
msgs in the logs.
What could have gone wrong?
regards
Amudhan
Hi,
I have a file in my brick which was signed by bitrot, and later, when
running scrub, it was marked as bad.
Now I want to verify the file again manually, just to clarify my doubt.
How can I do this? (A sketch follows below.)
regards
Amudhan
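One way to check manually, as a sketch (the brick path is a placeholder; the
bitrot xattrs live on the brick, and the signature is a sha256 of the file
content with a small header prepended):

```
# On the brick: dump the stored bitrot xattrs
# (trusted.bit-rot.signature, trusted.bit-rot.bad-file)
getfattr -m . -d -e hex /media/disk1/brick1/path/to/file

# Compute the file's sha256 and compare it against the hash portion
# of trusted.bit-rot.signature
sha256sum /media/disk1/brick1/path/to/file
```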
available
CPUs in the node.
4. Why did this crash happen?
regards
Amudhan P
Ashish, in which version has this issue been fixed?
On Tue, Aug 29, 2017 at 6:38 PM, Amudhan P <amudha...@gmail.com> wrote:
> I am using 3.10.1; from which version is this update available?
>
>
> On Tue, Aug 29, 2017 at 5:03 PM, Ashish Pandey <aspan...@redhat.com>
> wrote:
>
fixed it in latest release https://review.gluster.org/16468
> Now, we check if fop actually needs heal or not.
>
>
> Ashish
>
>
>
> *From: *"Amudhan P" <amudha...@gmail.com>
> *To: *"Gluster Users" <gluster-users@gluster.
Hi,
I need some clarification on the below error msg in the glustershd.log file.
What is this msg? Why is it showing up? I am currently using glusterfs 3.10.1.
Whenever I start a write process to the volume (mounted through fuse) I am
seeing this kind of error, and the glustershd process consumes some
Hi,
I am using the bitrot feature in 3.10.1 and I have a few issues to point out.
1. A disk failure situation, or any other reason to run the "volume start
force" command, triggers the bitrot crawl process on all the bricks in the
volume; it runs even on disks which have already completed the crawl process.
I feel it's
63 (
> https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1655842), seeing how
> you are running 4.4.0-62 I would suggest upgrading and see if the error
> persists.
>
> Edvin Ekström,
>
> On 2017-04-26 09:09, Amudhan P wrote:
>
> I did volume start force and now self-heal da
:01 PM, Amudhan P <amudha...@gmail.com> wrote:
> Yes, I have enabled the bitrot process and it's currently running the signer
> process on some nodes.
>
> Disabling and enabling bitrot doesn't make a difference; it will start the crawl
> process again, right?
>
>
> On Tuesday,
5, 2017 at 9:22 PM, Amudhan P <amudha...@gmail.com> wrote:
>>
>> Hi Pranith,
>> If I restart the glusterd service on that node alone, will it work? Because I
feel that doing a volume force start will trigger the bitrot process to crawl
disks on all nodes.
>
> Hav
leaks in this particular version that
> we might have missed in our testing?
>
> On Tue, Apr 25, 2017 at 8:37 PM, Amudhan P <amudha...@gmail.com> wrote:
>
>> Hi,
>>
>> On one of my nodes the glustershd process was killed due to OOM, and this
>> happened only on one nod
k there are 3
files missing, but they are KB-sized files; these files were deleted in 3.7
but are still in the bricks.
Why is there this size difference?
regards
Amudhan P
On Wed, Apr 19, 2017 at 4:05 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
> Some thoughts based on this mail thr
com> wrote:
>
> On Wed, 19 Apr 2017 at 4:58 PM, Amudhan P <amudha...@gmail.com> wrote:
>
>> Hi,
>>
>> Does rebalance fix-layout trigger automatically by any chance?
>>
>> Because my cluster is currently showing rebalance in progress and running
>>
old nodes.
Checking the rebalance log on the new nodes shows it was started on 12th April.
Strange; what would have triggered the rebalance process?
regards
Amudhan
On Thu, Apr 13, 2017 at 12:51 PM, Amudhan P <amudha...@gmail.com> wrote:
> I have another issue now after expanding cluster folder lis
the files were in the disperse set
where heal is going on. Reading that file would have also triggered heal,
but it was slow, and the read speed from the mount point was 400KB/s.
>
> Ashish
>
> --
> *From: *"Serkan Çoban" <cobanser...@gmail.com>
> *To: *"
est of the 9 disks.
>
> My use case is write-heavy; we barely read data, so I do not know if read
> speed degrades during heal. But I know write speed does not change during
> heal.
>
> How big is your files? How many files on average in each directory?
>
> On Tue, A
3.10.x and parallel disperse heal, will post the
> results here...
>
>
> On Tue, Apr 18, 2017 at 9:51 AM, Amudhan P <amudha...@gmail.com> wrote:
> > Serkan,
> >
> > I have initially changed shd-max-thread 1 to 2 saw a little difference
> and
> > changing i
Serkan,
I initially changed shd-max-thread from 1 to 2 and saw a little difference;
changing it to 4 & 8 doesn't make any difference.
Disk write speed was about <1MB/s, and the data passed in through the network
to the healing node from the other nodes was 4MB/s combined.
Also, I tried ls -l from the mount point to the
6 days have passed, and so far 89% (3.3 TB) of the data heal has completed.
The read speed of healed data is back to normal now.
The heal operation is still going on.
On Thu, Apr 13, 2017 at 12:13 PM, Amudhan P <amudha...@gmail.com> wrote:
> Hi,
>
> I am using 3.10.1 with EC disperse volum
irp: off
cluster.readdir-optimize: on
cluster.heal-timeout: 43200
cluster.data-self-heal: on
On Fri, Apr 7, 2017 at 7:35 PM, Amudhan P <amudha...@gmail.com> wrote:
> Volume type:
> Disperse Volume 8+2 = 1080 bricks
>
> First time added 8+2 * 3 sets and it started giving issue in listi
is available on the rest of the 9 bricks; from the fuse
mount I get 400KB/s as the read speed.
I have changed the following options (set as shown in the sketch below):
disperse.shd-max-threads - 8
disperse.shd-wait-qlength - 2048
disperse.background-heals 16
regards
Amudhan P
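For anyone reproducing this, the options above are set per volume, e.g.
(the volume name is a placeholder):

```
gluster volume set volname disperse.shd-max-threads 8
gluster volume set volname disperse.shd-wait-qlength 2048
gluster volume set volname disperse.background-heals 16
```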
wrote:
>
>
> On 6 April 2017 at 14:56, Amudhan P <amudha...@gmail.com> wrote:
>
>> Hi,
>>
>> I was able to add bricks to the volume successfully.
>> Client was reading, writing and listing data from mount point.
>> But after adding bricks I ha
Tue, Apr 4, 2017 at 4:31 PM, Amudhan P <amudha...@gmail.com> wrote:
> I mean the time it takes for listing folders and files, because "rebalance fix
> layout" was not done.
>
>
> On Tue, Apr 4, 2017 at 1:51 PM, Amudhan P <amudha...@gmail.com> wrote:
>
>>
I mean the time it takes for listing folders and files, because "rebalance fix
layout" was not done.
On Tue, Apr 4, 2017 at 1:51 PM, Amudhan P <amudha...@gmail.com> wrote:
> OK, good to hear.
>
> Will there be any impact on listing folders and files?
>
>
> On T
OK, good to hear.
Will there be any impact on listing folders and files?
On Tue, Apr 4, 2017 at 1:43 PM, Nithya Balachandran <nbala...@redhat.com>
wrote:
>
>
> On 4 April 2017 at 12:33, Amudhan P <amudha...@gmail.com> wrote:
>
>> Hi,
>>
>> I have a
Hi,
I have a query on rebalancing.
Let's consider the following as my folder hierarchy:
parent1-fol (parent folder)
|_ class-fol-1 (1st-level subfolder)
   |_ A (2nd-level subfolder)
Thanks Atin, after upgrading to 3.10.1 it's all working now.
On Sun, Apr 2, 2017 at 8:29 PM, Atin Mukherjee <amukh...@redhat.com> wrote:
>
> On Sun, 2 Apr 2017 at 17:53, Amudhan P <amudha...@gmail.com> wrote:
>
>> Hi Atin,
>>
>> Thanks for the quick reply.
3.8.3 to
3.10.0.
regards
Amudhan P
On Sun, Apr 2, 2017 at 5:45 PM, Atin Mukherjee <amukh...@redhat.com> wrote:
> Please upgrade to 3.10.1, we had a regression on 3.10.0.
>
> On Sun, 2 Apr 2017 at 17:43, Amudhan P <amudha...@gmail.com> wrote:
>
>> I Have upgrade
:30.083580] I [MSGID: 114049]
[client-handshake.c:1243:client_setvolume_cbk] 0-gfs-clusterdr-client-862:
sending AUTH_FAILED event
regards
Amudhan P
Hi All,
Using `configure --disable-events` fixes the above problem.
Thank you, Niklas for forwarding this info.
regards
Amudhan
On Sat, Dec 17, 2016 at 5:49 PM, Amudhan P <amudha...@gmail.com> wrote:
> Hi All,
>
> Did anyone face the below issue when installing 3.9.0 using
Hi Kotresh, any update on the status of this bug?
On Thu, Sep 22, 2016 at 6:57 PM, Amudhan P <amudha...@gmail.com> wrote:
> Hi Kotresh,
>
> I have raised bug.
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1378466
>
> Thanks
> Amudhan
>
> On Thu, Sep 22, 2016 at 2
Hi All,
Did anyone face the below issue when installing 3.9.0 from the source tar
file on Ubuntu?
On Thu, Dec 15, 2016 at 4:51 PM, Amudhan P <amudha...@gmail.com> wrote:
> Hi,
>
> I am trying to install Gluster 3.9.0 from tarball, downloaded from gluster
> site.
>
> configure
Hi,
I am trying to install Gluster 3.9.0 from the tarball downloaded from the
gluster site.
configure and make are completing successfully.
I get the below error message when running "make install":
/usr/bin/install: cannot stat 'glustereventsd-Debian': No such file or
directory
make[3]: *** [Debain]
>
> - Original Message -
> > From: "Amudhan P" <amudha...@gmail.com>
> > To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>
> > Cc: "Gluster Users" <gluster-users@gluster.org>
> > Sent: Thursday, Sept
t-rep
> volume,
> and see if you are observing similar behavior.
>
> In any case please raise a bug for the same with your observations. We
> will work
> on it.
>
> Thanks and Regards,
> Kotresh H R
>
> - Original Message -
> > From: "Amudhan P
figure out
why brick
> process is holding that fd for such a long time.
>
> Thanks and Regards,
> Kotresh H R
>
> - Original Message -
>> From: "Amudhan P" <amudha...@gmail.com>
>> To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com&g
le.
>
> Thanks and Regards,
> Kotresh H R
>
> - Original Message -
> > From: "Amudhan P" <amudha...@gmail.com>
> > To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>
> > Cc: "Gluster Users" <gluster-users@gl
alculation unless
> there is an fd open (might be internal). Could you please cross verify
> if there are any fd opened on that file by looking into /proc? I will
> also test it out in the mean time and get back to you.
>
> Thanks and Regards,
> Kotresh H R
>
> - Origina
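The /proc check suggested above could be done roughly like this (the brick
process pattern and filename are placeholders):

```
# Find the brick process for the brick hosting the file, then list its
# open fds and look for the file in question
BRICK_PID=$(pgrep -f 'glusterfsd.*brick1')
ls -l /proc/"$BRICK_PID"/fd | grep filename
```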
24288
Thanks
Amudhan P
On Mon, Sep 19, 2016 at 1:00 PM, Kotresh Hiremath Ravishankar <
khire...@redhat.com> wrote:
> Hi Amudhan,
>
> Thanks for testing out the bitrot feature and sorry for the delayed
> response.
> Please find the answers inline.
>
> T
Hi,
Can anyone reply to this mail?
On Tue, Sep 13, 2016 at 12:49 PM, Amudhan P <amudha...@gmail.com> wrote:
> Hi,
>
> I am testing bitrot feature in Gluster 3.8.3 with disperse EC volume 4+1.
>
> When i write single small file (< 10MB) after 2 seconds i can see bitr
Hi,
I am testing the bitrot feature in Gluster 3.8.3 with a disperse EC volume 4+1.
When I write a single small file (< 10MB), after 2 seconds I can see the bitrot
signature in the bricks for the file, but when I write multiple files with
different sizes (> 10MB) it takes a long time (> 24hrs) to see the bitrot
Sorry for the delayed reply. It works now.
On Wed, Aug 3, 2016 at 8:31 PM, Kaleb S. KEITHLEY <kkeit...@redhat.com>
wrote:
> On 08/03/2016 10:42 AM, Yannick Perret wrote:
> > On 03/08/2016 at 15:33, Amudhan P wrote:
> >> Hi,
> >>
> >> I am trying to in
Hi,
I am trying to install gluster 3.8.1 from the tarball on Ubuntu 14.04.
1. When I run "./configure --disable-tiering", at the end it shows the msgs:
configure: WARNING: cache variable ac_cv_build contains a newline
configure: WARNING: cache variable ac_cv_host contains a newline
2. When running "make"
Please change the ulimit and kernel parameters to above 1024 on the datanodes
and clients; I had a similar problem and it was solved by doing this (rough
sketch below).
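A rough sketch of those changes (values illustrative; persist them via
/etc/security/limits.conf and /etc/sysctl.conf):

```
# Per-process open-file limit for the current shell
ulimit -n 65536

# Kernel-wide file handle limit
sysctl -w fs.file-max=1000000
```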
On Wed, Mar 23, 2016 at 7:40 PM, Serkan Çoban wrote:
> For your information, I recreated the volume using IP addresses instead of
> host names and
hostUuid":
"9408184a-298e-4b6c-9273-d1a460ca139a"}, {"brickpath": "/media/disk6",
"brickaddress": "172.16.5.66", "hostUuid":
"9408184a-298e-4b6c-9273-d1a460ca139a"}, {"brickpath": "/media/disk7",
"b
the error below, it looks like the rpm has not updated the
> nrpe.cfg correctly. You may need to edit the spec file for the config file
> paths on Ubuntu and rebuild.
>
>
> On 10/01/2015 05:45 PM, Amudhan P wrote:
>
> OSError: [Errno 2] No such file or directory is now s
Hi,
I am getting an error when I run discovery.py.
discovery.py -c vm-gfs -H 192.168.1.11
Traceback (most recent call last):
  File "discovery.py", line 541, in <module>
    clusterdata = discoverCluster(args.hostip, args.cluster, args.timeout)
  File "discovery.py", line 90, in discoverCluster
> /usr/lib64/nagios/plugins/check_nrpe -H 192.168.1.11 -c
> discover_volume_list
>
>
>
>
> On 10/01/2015 04:10 PM, Amudhan P wrote:
>
> Hi,
>
> I am getting an error when I run discovery.py.
>
> discovery.py -c vm-gfs -H 192.168.1.11
>
> Traceback (most re