[Gluster-users] Need hardware suggestions for gluster + archival usecase

2021-07-15 Thread Pranith Kumar Karampuri
Hi, I am researching the kind of hardware that would be best for an archival use case. We probably need to keep the data anywhere between 20 and 40 years. Do let us know what you think would be best. Pranith

Re: [Gluster-users] write request hung in write-behind

2019-06-06 Thread Pranith Kumar Karampuri
On Tue, Jun 4, 2019 at 7:36 AM Xie Changlong wrote: > To me, all 'df' commands on specific(not all) nfs client hung forever. > The temporary solution is disable performance.nfs.write-behind and > cluster.eager-lock. > > I'll try to get more info back if encounter this problem again . > If you

Re: [Gluster-users] POSIX locks and disconnections between clients and bricks

2019-03-27 Thread Pranith Kumar Karampuri
On Wed, Mar 27, 2019 at 8:38 PM Xavi Hernandez wrote: > On Wed, Mar 27, 2019 at 2:20 PM Pranith Kumar Karampuri < > pkara...@redhat.com> wrote: > >> >> >> On Wed, Mar 27, 2019 at 6:38 PM Xavi Hernandez >> wrote: >> >>> On Wed, Mar 2

Re: [Gluster-users] POSIX locks and disconnections between clients and bricks

2019-03-27 Thread Pranith Kumar Karampuri
On Wed, Mar 27, 2019 at 6:38 PM Xavi Hernandez wrote: > On Wed, Mar 27, 2019 at 1:13 PM Pranith Kumar Karampuri < > pkara...@redhat.com> wrote: > >> >> >> On Wed, Mar 27, 2019 at 5:13 PM Xavi Hernandez >> wrote: >> >>> On Wed, Mar

Re: [Gluster-users] POSIX locks and disconnections between clients and bricks

2019-03-27 Thread Pranith Kumar Karampuri
On Wed, Mar 27, 2019 at 5:13 PM Xavi Hernandez wrote: > On Wed, Mar 27, 2019 at 11:52 AM Raghavendra Gowdappa > wrote: > >> >> >> On Wed, Mar 27, 2019 at 12:56 PM Xavi Hernandez >> wrote: >> >>> Hi Raghavendra, >>> >>> On Wed, Mar 27, 2019 at 2:49 AM Raghavendra Gowdappa < >>>

Re: [Gluster-users] Question about CVE-2018-10924

2018-10-30 Thread Pranith Kumar Karampuri
On Tue, Oct 30, 2018 at 2:51 PM Hongzhi, Song wrote: > Hi Pranith and other friends, > > Does this CVE apply to gluster-v3.11.1? > It was later found to be not a CVE, only a memory leak. No, this bug was introduced in the 3.12 branch and fixed in the 3.12 branch as well. Patch that introduced leak:

Re: [Gluster-users] [Gluster-devel] Crash in glusterfs!!!

2018-09-26 Thread Pranith Kumar Karampuri
andling at early start of gluster ? > As far as I understand there shouldn't be. But I would like to double check that indeed is the case if there are any steps to re-create the issue. > > > Regards, > > Abhishek > > On Tue, Sep 25, 2018 at 2:27 PM Pranith Kumar Karampuri

Re: [Gluster-users] [Gluster-devel] Crash in glusterfs!!!

2018-09-25 Thread Pranith Kumar Karampuri
t in between. > But the crash happened inside exit() code for which will be in libc which doesn't access any data structures in glusterfs. > > Regards, > Abhishek > > On Mon, Sep 24, 2018 at 9:11 PM Pranith Kumar Karampuri < > pkara...@redhat.com> wrote: > >>

Re: [Gluster-users] [Gluster-devel] Crash in glusterfs!!!

2018-09-24 Thread Pranith Kumar Karampuri
that the RC is correct and then I will send out the fix. > > Regards, > Abhishek > > On Mon, Sep 24, 2018 at 3:12 PM Pranith Kumar Karampuri < > pkara...@redhat.com> wrote: > >> >> >> On Mon, Sep 24, 2018 at 2:09 PM ABHISHEK PALIWAL >> wr

Re: [Gluster-users] sharding in glusterfs

2018-09-20 Thread Pranith Kumar Karampuri
the maintained releases and run the workloads you have for some time to test things out; once you feel confident, you can put it in production. HTH > > Thanks > Ashayam Gupta > > On Tue, Sep 18, 2018 at 11:00 AM Pranith Kumar Karampuri < > pkara...@redhat.com> wrote: > >&g

Re: [Gluster-users] Data on gluster volume gone

2018-09-20 Thread Pranith Kumar Karampuri
> 89: option count-fop-hits off > 90: subvolumes gvol0-md-cache > 91: end-volume > 92: > 93: volume meta-autoload > 94: type meta > 95: subvolumes gvol0 > 96: end-volume > 97: > > +------------

Re: [Gluster-users] Data on gluster volume gone

2018-09-20 Thread Pranith Kumar Karampuri
Please also attach the logs for the mount points and the glustershd.logs On Thu, Sep 20, 2018 at 11:41 AM Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > How did you do the upgrade? > > On Thu, Sep 20, 2018 at 11:01 AM Raghavendra Gowdappa > wrote: > >> >

Re: [Gluster-users] Data on gluster volume gone

2018-09-20 Thread Pranith Kumar Karampuri
How did you do the upgrade? On Thu, Sep 20, 2018 at 11:01 AM Raghavendra Gowdappa wrote: > > > On Thu, Sep 20, 2018 at 1:29 AM, Raghavendra Gowdappa > wrote: > >> Can you give volume info? Looks like you are using 2 way replica. >> > > Yes indeed. > gluster volume create gvol0 replica 2

Re: [Gluster-users] sharding in glusterfs

2018-09-17 Thread Pranith Kumar Karampuri
On Mon, Sep 17, 2018 at 4:14 AM Ashayam Gupta wrote: > Hi All, > > We are currently using glusterfs for storing large files with write-once > and multiple concurrent reads, and were interested in understanding one of > the features of glusterfs called sharding for our use case. > > So far from

Re: [Gluster-users] Kicking a stuck heal

2018-09-10 Thread Pranith Kumar Karampuri
On Fri, Sep 7, 2018 at 7:31 PM Dave Sherohman wrote: > On Fri, Sep 07, 2018 at 10:46:01AM +0530, Pranith Kumar Karampuri wrote: > > On Tue, Sep 4, 2018 at 6:06 PM Dave Sherohman > wrote: > > > > > On Tue, Sep 04, 2018 at 05:32:53AM -0500, Dave Sherohman wrote: > &g

Re: [Gluster-users] Kicking a stuck heal

2018-09-06 Thread Pranith Kumar Karampuri
On Tue, Sep 4, 2018 at 6:06 PM Dave Sherohman wrote: > On Tue, Sep 04, 2018 at 05:32:53AM -0500, Dave Sherohman wrote: > > Is there anything I can do to kick the self-heal back into action and > > get those final 59 entries cleaned up? > > In response to the request about what version of gluster

Re: [Gluster-users] Kicking a stuck heal

2018-09-04 Thread Pranith Kumar Karampuri
Which version of glusterfs are you using? On Tue, Sep 4, 2018 at 4:26 PM Dave Sherohman wrote: > Last Friday, I rebooted one of my gluster nodes and it didn't properly > mount the filesystem holding its brick (I had forgotten to add it to > fstab...), so, when I got back to work on Monday, its

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-09-02 Thread Pranith Kumar Karampuri
go up when more users will cause more traffic -> more > >>> work on servers), 'gluster volume heal shared info' shows no entries, > >>> status: > >>> > >>> Status of volume: shared > >>> Gluster process TCP Po

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-08-23 Thread Pranith Kumar Karampuri
dler] > 0-transport: EPOLLERR - disconnecting now > [2018-08-22 06:19:23.809366] I [input.c:31:cli_batch] 0-: Exiting with: 0 > > Just wondered if this could related anyhow. > > 2018-08-21 8:17 GMT+02:00 Pranith Kumar Karampuri : > > > > > > On Tue, Aug 21, 2018

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-08-21 Thread Pranith Kumar Karampuri
; > Or do you want to download the file /tmp/perf.gluster11.bricksdd1.out > and examine it yourself? If so i could send you a link. > Thank you! yes a link would be great. I am not as good with kernel side of things. So I will have to show this information to someone else who know

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-08-20 Thread Pranith Kumar Karampuri
On Tue, Aug 21, 2018 at 10:13 AM Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > > > On Mon, Aug 20, 2018 at 3:20 PM Hu Bert wrote: > >> Regarding hardware the machines are identical. Intel Xeon E5-1650 v3 >> Hexa-Core; 64 GB DDR4 ECC; Dell PERC H330 8 P

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-08-20 Thread Pranith Kumar Karampuri
teriotwr3[kernel.kallsyms] [k] > do_syscall_64 > > Do you need different or additional information? > This looks like there are a lot of readdirs going on, which is different from what we observed earlier. How many seconds did you run perf record for? Will it be possible for you to d

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-08-20 Thread Pranith Kumar Karampuri
>> i hope i did get it right. > >> > >> gluster volume profile shared start > >> wait 10 minutes > >> gluster volume profile shared info > >> gluster volume profile shared stop > >> > >> If that's ok, i've attached the output o
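For reference, the profiling sequence quoted above, written out as it would be run against the "shared" volume from this thread (substitute your own volume name; the output filename is arbitrary):

    gluster volume profile shared start
    # let it collect data for roughly 10 minutes while the high load is present
    gluster volume profile shared info > /tmp/profile.shared.txt
    gluster volume profile shared stop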

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-08-17 Thread Pranith Kumar Karampuri
ile shared info > gluster volume profile shared stop > > If that's ok, i've attached the output of the info command. > > > 2018-08-17 8:31 GMT+02:00 Pranith Kumar Karampuri : > > Please do volume profile also for around 10 minutes when CPU% is high. > > > > On Fr

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-08-17 Thread Pranith Kumar Karampuri
Please do volume profile also for around 10 minutes when CPU% is high. On Fri, Aug 17, 2018 at 11:56 AM Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > As per the output, all io-threads are using a lot of CPU. It is better to > check what the volume profile is to see wha

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-08-17 Thread Pranith Kumar Karampuri
: "Running GlusterFS Volume Profile Command"and attach output of "gluster volume profile info", On Fri, Aug 17, 2018 at 11:24 AM Hu Bert wrote: > Good morning, > > i ran the command during 100% CPU usage and attached the file. > Hopefully it helps. > > 2018-08-17

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-08-16 Thread Pranith Kumar Karampuri
Could you do the following on one of the nodes where you are observing high CPU usage and attach that file to this thread? We can find what threads/processes are leading to high usage. Do this for say 10 minutes when you see the ~100% CPU. top -bHd 5 > /tmp/top.${HOSTNAME}.txt On Wed, Aug 15,
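A sketch of that capture as it would be run on an affected node (the output path is the one suggested above; stop the capture after about 10 minutes):

    # batch mode, per-thread view (-H), one sample every 5 seconds during ~100% CPU
    top -bHd 5 > /tmp/top.${HOSTNAME}.txt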

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-07-27 Thread Pranith Kumar Karampuri
On Fri, Jul 27, 2018 at 1:32 PM, Hu Bert wrote: > 2018-07-27 9:22 GMT+02:00 Pranith Kumar Karampuri : > > > > > > On Fri, Jul 27, 2018 at 12:36 PM, Hu Bert > wrote: > >> > >> 2018-07-27 8:52 GMT+02:00 Pranith Kumar Karampuri >: > >> >

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-07-27 Thread Pranith Kumar Karampuri
On Fri, Jul 27, 2018 at 12:36 PM, Hu Bert wrote: > 2018-07-27 8:52 GMT+02:00 Pranith Kumar Karampuri : > > > > > > On Fri, Jul 27, 2018 at 11:53 AM, Hu Bert > wrote: > >> > >> > Do you already have all the 19 directories already created? If not

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-07-27 Thread Pranith Kumar Karampuri
On Fri, Jul 27, 2018 at 11:53 AM, Hu Bert wrote: > > Do you already have all the 19 directories already created? If not > could you find out which of the paths need it and do a stat directly > instead of find? > > Quite probable not all of them have been created (but counting how > much

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-07-26 Thread Pranith Kumar Karampuri
ted? If not could you find out which of the paths need it and do a stat directly instead of find? > > 2018-07-26 11:29 GMT+02:00 Pranith Kumar Karampuri : > > > > > > On Thu, Jul 26, 2018 at 2:41 PM, Hu Bert wrote: > >> > >> > Sorry, bad copy/paste :-(. &

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-07-26 Thread Pranith Kumar Karampuri
n the good bricks, so this is expected. > (sry, mail twice, didn't go to the list, but maybe others are > interested... :-) ) > > 2018-07-26 10:17 GMT+02:00 Pranith Kumar Karampuri : > > > > > > On Thu, Jul 26, 2018 at 12:59 PM, Hu Bert > wrote: > >&

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-07-26 Thread Pranith Kumar Karampuri
ble. You can follow that issue to see progress and when it is fixed etc. > > 2018-07-26 8:56 GMT+02:00 Pranith Kumar Karampuri : > > Thanks a lot for detailed write-up, this helps find the bottlenecks > easily. > > On a high level, to handle this directory hierarchy i.e. lots

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-07-26 Thread Pranith Kumar Karampuri
gits of > ID)/$ID/$misc_formats.jpg > > That's why we have that many (sub-)directories. Files are only stored > in the lowest directory hierarchy. I hope i could make our structure > at least a bit more transparent. > > i hope there's something we can do to raise perfor

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-07-24 Thread Pranith Kumar Karampuri
On Mon, Jul 23, 2018 at 4:16 PM, Hu Bert wrote: > Well, over the weekend about 200GB were copied, so now there are > ~400GB copied to the brick. That's far beyond a speed of 10GB per > hour. If I copied the 1.6 TB directly, that would be done within max 2 > days. But with the self heal this will

[Gluster-users] Subject: Help needed in improving monitoring in Gluster

2018-07-23 Thread Pranith Kumar Karampuri
Hi, We want gluster's monitoring/observability to be as easy as possible going forward. As part of reaching this goal we are starting this initiative to add improvements to existing apis/commands and create new apis/commands to gluster so that the admin can integrate it with whichever

Re: [Gluster-users] Any feature allow to add lock on a file between different apps?

2018-04-06 Thread Pranith Kumar Karampuri
You can use POSIX locks, i.e. fcntl-based advisory locks, on glusterfs just like on any other fs. On Wed, Apr 4, 2018 at 8:30 AM, Lei Gong wrote: > Hello there, > > > > I want to know if there is a feature that allows a user to add a lock on a file when > their app is modifying that file, so
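A minimal sketch of such an fcntl-based advisory lock from an application's point of view, assuming a hypothetical file on a glusterfs mount at /mnt/glustervol; note that advisory locks only coordinate processes that all take the lock:

    /* fcntl_lock_sketch.c - take an exclusive advisory lock on a file on a
     * glusterfs mount, do the modification, then release the lock. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/mnt/glustervol/shared.dat", O_RDWR);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        struct flock lk = {0};
        lk.l_type   = F_WRLCK;   /* exclusive (write) lock */
        lk.l_whence = SEEK_SET;
        lk.l_start  = 0;
        lk.l_len    = 0;         /* 0 means lock the whole file */

        if (fcntl(fd, F_SETLKW, &lk) < 0) {  /* blocks until the lock is granted */
            perror("fcntl(F_SETLKW)");
            close(fd);
            return 1;
        }

        /* ... modify the file while holding the lock ... */

        lk.l_type = F_UNLCK;                 /* release the lock */
        fcntl(fd, F_SETLK, &lk);
        close(fd);
        return 0;
    }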

Re: [Gluster-users] Expected performance for WORM scenario

2018-03-13 Thread Pranith Kumar Karampuri
ot need to scale out, stick with a single server > (+DRBD optionally for HA), it will give you the best performance > > > > Ondrej > > > > > > *From:* Pranith Kumar Karampuri [mailto:pkara...@redhat.com] > *Sent:* Tuesday, March 13, 2018 9:10 AM > > *To:* O

Re: [Gluster-users] Expected performance for WORM scenario

2018-03-13 Thread Pranith Kumar Karampuri
On Tue, Mar 13, 2018 at 12:58 PM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > > > On Mon, Mar 12, 2018 at 6:23 PM, Ondrej Valousek < > ondrej.valou...@s3group.com> wrote: > >> Hi, >> >> Gluster will never perform well for small files.

Re: [Gluster-users] [Gluster-devel] Removal of use-compound-fops option in afr

2018-03-04 Thread Pranith Kumar Karampuri
On Mon, Mar 5, 2018 at 9:19 AM, Amar Tumballi wrote: > Pranith, > > > >> We found that compound fops is not giving better performance in >> replicate and I am thinking of removing that code. Sent the patch at >> https://review.gluster.org/19655 >> >> > If I understand

Re: [Gluster-users] [Gluster-Maintainers] [Gluster-devel] Release 4.0: RC1 tagged

2018-02-28 Thread Pranith Kumar Karampuri
I found the following memory leak present in 3.13, 4.0 and master: https://bugzilla.redhat.com/show_bug.cgi?id=1550078 I will clone/port to 4.0 as soon as the patch is merged. On Wed, Feb 28, 2018 at 5:55 PM, Javier Romero wrote: > Hi all, > > Have tested on CentOS Linux

Re: [Gluster-users] Stale locks on shards

2018-01-29 Thread Pranith Kumar Karampuri
On Mon, Jan 29, 2018 at 1:26 PM, Samuli Heinonen <samp...@neutraali.net> wrote: > Pranith Kumar Karampuri kirjoitti 29.01.2018 07:32: > >> On 29 Jan 2018 10:50 am, "Samuli Heinonen" <samp...@neutraali.net> >> wrote: >> >> Hi! >

Re: [Gluster-users] Stale locks on shards

2018-01-28 Thread Pranith Kumar Karampuri
tell more about bricks crashing after releasing locks? Under what circumstances that does happen? Is it only process exporting the brick crashes or is there a possibility of data corruption? No data corruption. Brick process where you did clear-locks may crash. Best regards, Samuli Heinonen Pranith K

Re: [Gluster-users] Stale locks on shards

2018-01-28 Thread Pranith Kumar Karampuri
Hi, Did you find the command from strace? On 25 Jan 2018 1:52 pm, "Pranith Kumar Karampuri" <pkara...@redhat.com> wrote: > > > On Thu, Jan 25, 2018 at 1:49 PM, Samuli Heinonen <samp...@neutraali.net> > wrote: > >> Pranith Kumar Karampuri kirjoi

Re: [Gluster-users] parallel-readdir is not recognized in GlusterFS 3.12.4

2018-01-26 Thread Pranith Kumar Karampuri
Adding devs who work on it On 23 Jan 2018 10:40 pm, "Alan Orth" wrote: > Hello, > > I saw that parallel-readdir was an experimental feature in GlusterFS > version 3.10.0, became stable in version 3.11.0, and is now recommended for > small file workloads in the Red Hat
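As a hedged sketch of how the option is normally enabled (the volume name is hypothetical; on the 3.1x releases parallel-readdir builds on readdir-ahead, so that option is usually enabled alongside it):

    gluster volume set myvol performance.readdir-ahead on
    gluster volume set myvol performance.parallel-readdir on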

Re: [Gluster-users] Stale locks on shards

2018-01-25 Thread Pranith Kumar Karampuri
On Thu, Jan 25, 2018 at 1:49 PM, Samuli Heinonen <samp...@neutraali.net> wrote: > Pranith Kumar Karampuri kirjoitti 25.01.2018 07:09: > >> On Thu, Jan 25, 2018 at 2:27 AM, Samuli Heinonen >> <samp...@neutraali.net> wrote: >> >> Hi! >>> >

Re: [Gluster-users] Stale locks on shards

2018-01-24 Thread Pranith Kumar Karampuri
mand? > > Best regards, > Samuli Heinonen > > Pranith Kumar Karampuri <mailto:pkara...@redhat.com> >> 23 January 2018 at 10.30 >> >> >> On Tue, Jan 23, 2018 at 1:38 PM, Samuli Heinonen <samp...@neutraali.net >> <mailto:samp...@neutraali.net>>

Re: [Gluster-users] Stale locks on shards

2018-01-23 Thread Pranith Kumar Karampuri
On Tue, Jan 23, 2018 at 1:38 PM, Samuli Heinonen <samp...@neutraali.net> wrote: > Pranith Kumar Karampuri kirjoitti 23.01.2018 09:34: > >> On Mon, Jan 22, 2018 at 12:33 AM, Samuli Heinonen >> <samp...@neutraali.net> wrote: >> >> Hi again, >>> >

Re: [Gluster-users] Stale locks on shards

2018-01-22 Thread Pranith Kumar Karampuri
On Tue, Jan 23, 2018 at 1:04 PM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > > > On Mon, Jan 22, 2018 at 12:33 AM, Samuli Heinonen <samp...@neutraali.net> > wrote: > >> Hi again, >> >> here is more information regarding issue described earl

Re: [Gluster-users] [Gluster-devel] cluster/dht: restrict migration of opened files

2018-01-18 Thread Pranith Kumar Karampuri
On Tue, Jan 16, 2018 at 2:52 PM, Raghavendra Gowdappa wrote: > All, > > Patch [1] prevents migration of opened files during rebalance operation. > If patch [1] affects you, please voice out your concerns. [1] is a stop-gap > fix for the problem discussed in issues [2][3] >

Re: [Gluster-users] Integration of GPU with glusterfs

2018-01-11 Thread Pranith Kumar Karampuri
On Thu, Jan 11, 2018 at 10:44 PM, Darrell Budic wrote: > Sounds like a good option to look into, but I wouldn’t want it to take > time & resources away from other, non-GPU based, methods of improving this. > Mainly because I don’t have discrete GPUs in most of my systems.

Re: [Gluster-users] 3.10.5 vs 3.12.0 huge performance loss

2017-09-12 Thread Pranith Kumar Karampuri
Serkan, Will it be possible to provide gluster volume profile info output with 3.10.5 vs 3.12.0? That should give us clues about what could be happening. On Tue, Sep 12, 2017 at 1:51 PM, Serkan Çoban wrote: > Hi, > Servers are in production with 3.10.5, so I

Re: [Gluster-users] [Gluster-devel] Error while mounting gluster volume

2017-07-20 Thread Pranith Kumar Karampuri
The following generally means it is not able to connect to any of the glusterds in the cluster. [1970-01-02 10:54:04.420406] E [glusterfsd-mgmt.c:1818:mgmt_rpc_notify] 0-glusterfsd-mgmt: failed to connect with remote-host: 128.224.95.140 (Success) [1970-01-02 10:54:04.420422] I [MSGID: 101190]

Re: [Gluster-users] Replicated volume, one slow brick

2017-07-15 Thread Pranith Kumar Karampuri
Adding gluster-devel. Raghavendra, I remember us discussing handling these kinds of errors via ping-timer expiry? I may have missed the final decision on how this was to be handled, so I am asking you again ;-) On Thu, Jul 13, 2017 at 2:14 PM, Øyvind Krosby wrote:

Re: [Gluster-users] [Gluster-devel] gfid and volume-id extended attributes lost

2017-07-13 Thread Pranith Kumar Karampuri
this out and let you know. > > > > Thanks and Regards, > > Ram > > *From:* Pranith Kumar Karampuri [mailto:pkara...@redhat.com] > *Sent:* Monday, July 10, 2017 8:31 AM > *To:* Sanoj Unnikrishnan > *Cc:* Ankireddypalle Reddy; Gluster Devel (gluster-de...@gluster.org)

Re: [Gluster-users] Upgrading Gluster revision (3.8.12 to 3.8.13) caused underlying VM fs corruption

2017-07-11 Thread Pranith Kumar Karampuri
On Tue, Jul 11, 2017 at 5:12 PM, Diego Remolina wrote: > > > > You should first upgrade servers and then clients. New servers can > > understand old clients, but it is not easy for old servers to understand > new > > clients in case it started doing something new. > > But

Re: [Gluster-users] Upgrading Gluster revision (3.8.12 to 3.8.13) caused underlying VM fs corruption

2017-07-10 Thread Pranith Kumar Karampuri
On Mon, Jul 10, 2017 at 10:33 PM, Mahdi Adnan wrote: > I upgraded from 3.8.12 to 3.8.13 without issues. > > Two replicated volumes with online update, upgraded clients first and > followed by servers upgrade, "stop glusterd, pkill gluster*, update > gluster*, start

Re: [Gluster-users] [Gluster-devel] gfid and volume-id extended attributes lost

2017-07-10 Thread Pranith Kumar Karampuri
the pid on all removexattr call and also >> print the backtrace of the glusterfsd process when trigerring removing >> xattr. >> I will write the script and reply back. >> >> On Sat, Jul 8, 2017 at 7:06 AM, Pranith Kumar Karampuri < >> pkara...@redhat.com> w

Re: [Gluster-users] [Gluster-devel] gfid and volume-id extended attributes lost

2017-07-07 Thread Pranith Kumar Karampuri
glusterfs6sds.commvault.com:/ws/disk8/ws_brick > > Brick58: glusterfs4sds.commvault.com:/ws/disk9/ws_brick > > Brick59: glusterfs5sds.commvault.com:/ws/disk9/ws_brick > > Brick60: glusterfs6sds.commvault.com:/ws/disk9/ws_brick > > Options Reconfigured: > > perform

Re: [Gluster-users] [Gluster-devel] gfid and volume-id extended attributes lost

2017-07-07 Thread Pranith Kumar Karampuri
e issue. The bricks were > mounted after the reboot. One more thing that I noticed was when the > attributes were manually set when glusterd was up then on starting the > volume the attributes were again lost. Had to stop glusterd set attributes > and then start glusterd. After that the volume sta

Re: [Gluster-users] [Gluster-devel] gfid and volume-id extended attributes lost

2017-07-07 Thread Pranith Kumar Karampuri
when glusterd was up then on starting the > volume the attributes were again lost. Had to stop glusterd set attributes > and then start glusterd. After that the volume start succeeded. > Which version is this? > > > Thanks and Regards, > > Ram > > > > *From:* Pra

Re: [Gluster-users] [Gluster-devel] gfid and volume-id extended attributes lost

2017-07-07 Thread Pranith Kumar Karampuri
On Fri, Jul 7, 2017 at 9:15 PM, Pranith Kumar Karampuri <pkara...@redhat.com > wrote: > Did anything special happen on these two bricks? It can't happen in the > I/O path: > posix_removexattr() has: > 0 if (!strcmp (GFID_XATTR_KEY, name)) > { > > > 1

Re: [Gluster-users] [Gluster-devel] gfid and volume-id extended attributes lost

2017-07-07 Thread Pranith Kumar Karampuri
Did anything special happen on these two bricks? It can't happen in the I/O path: posix_removexattr() has: 0 if (!strcmp (GFID_XATTR_KEY, name)) { 1 gf_msg (this->name, GF_LOG_WARNING, 0, P_MSG_XATTR_NOT_REMOVED, 2 "Remove xattr called on gfid

Re: [Gluster-users] afr-self-heald.c:479:afr_shd_index_sweep

2017-06-29 Thread Pranith Kumar Karampuri
On Thu, Jun 29, 2017 at 8:12 PM, Paolo Margara <paolo.marg...@polito.it> wrote: > Il 29/06/2017 16:27, Pranith Kumar Karampuri ha scritto: > > > > On Thu, Jun 29, 2017 at 7:48 PM, Paolo Margara <paolo.marg...@polito.it> > wrote: > >> Hi Pranith, >&g

Re: [Gluster-users] afr-self-heald.c:479:afr_shd_index_sweep

2017-06-29 Thread Pranith Kumar Karampuri
w I'm restarting every brick process (and waiting for the heal to > complete), this is fixing my problem. > > Many thanks for the help. > > > Greetings, > > Paolo > > Il 29/06/2017 13:03, Pranith Kumar Karampuri ha scritto: > > Paolo, > Which document did

Re: [Gluster-users] afr-self-heald.c:479:afr_shd_index_sweep

2017-06-29 Thread Pranith Kumar Karampuri
but ensure there are no pending heals like Pranith mentioned. > https://gluster.readthedocs.io/en/latest/Upgrade-Guide/upgrade_to_3.7/ > lists the steps for upgrade to 3.7 but the steps mentioned there are > similar for any rolling upgrade. > > -Ravi > > > Greetings, > >

Re: [Gluster-users] afr-self-heald.c:479:afr_shd_index_sweep

2017-06-28 Thread Pranith Kumar Karampuri
On Wed, Jun 28, 2017 at 9:45 PM, Ravishankar N wrote: > On 06/28/2017 06:52 PM, Paolo Margara wrote: > >> Hi list, >> >> yesterday I noted the following lines into the glustershd.log log file: >> >> [2017-06-28 11:53:05.000890] W [MSGID: 108034] >>

Re: [Gluster-users] afr-self-heald.c:479:afr_shd_index_sweep

2017-06-28 Thread Pranith Kumar Karampuri
hi Paolo, I just checked code in v3.8.12 and it should have been created when the brick starts after you upgrade the node. How did you do the upgrade? On Wed, Jun 28, 2017 at 6:52 PM, Paolo Margara wrote: > Hi list, > > yesterday I noted the following lines into the

Re: [Gluster-users] Slow write times to gluster disk

2017-06-26 Thread Pranith Kumar Karampuri
e gluster mounted via NFS > to respect the group write permissions? > +Niels, +Jiffin I added 2 more guys who work on NFS to check why this problem happens in your environment. Let's see what information they may need to find the problem and solve this issue. > > Thanks > > Pat

Re: [Gluster-users] Slow write times to gluster disk

2017-06-23 Thread Pranith Kumar Karampuri
On Fri, Jun 23, 2017 at 9:10 AM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > > > On Fri, Jun 23, 2017 at 2:23 AM, Pat Haley <pha...@mit.edu> wrote: > >> >> Hi, >> >> Today we experimented with some of the FUSE options that we found

Re: [Gluster-users] Slow write times to gluster disk

2017-06-22 Thread Pranith Kumar Karampuri
t.com> <btur...@redhat.com> > Sent: Monday, June 12, 2017 4:54:00 PM > Subject: Re: [Gluster-users] Slow write times to gluster disk > > > Hi Ben, > > I guess I'm confused about what you mean by replication. If I look at > the underlying bricks I only ever have

Re: [Gluster-users] [Gluster-Maintainers] [Gluster-devel] Release 3.11.1: Scheduled for 20th of June

2017-06-22 Thread Pranith Kumar Karampuri
On Wed, Jun 21, 2017 at 9:12 PM, Shyam <srang...@redhat.com> wrote: > On 06/21/2017 11:37 AM, Pranith Kumar Karampuri wrote: > >> >> >> On Tue, Jun 20, 2017 at 7:37 PM, Shyam <srang...@redhat.com >> <mailto:srang...@redhat.com>> wrote: >> &

Re: [Gluster-users] [Gluster-Maintainers] [Gluster-devel] Release 3.11.1: Scheduled for 20th of June

2017-06-21 Thread Pranith Kumar Karampuri
On Tue, Jun 20, 2017 at 7:37 PM, Shyam wrote: > Hi, > > Release tagging has been postponed by a day to accommodate a fix for a > regression that has been introduced between 3.11.0 and 3.11.1 (see [1] for > details). > > As a result 3.11.1 will be tagged on the 21st June as

Re: [Gluster-users] Cloud storage with glusterfs

2017-06-20 Thread Pranith Kumar Karampuri
On Tue, Jun 20, 2017 at 10:49 AM, atris adam wrote: > Hello everybody > > I have 3 datacenters in different regions, Can I deploy my own cloud > storage with the help of glusterfs on the physical nodes?If I can, what are > the differences between cloud storage glusterfs and

Re: [Gluster-users] [Gluster-Maintainers] Release 3.11.1: Scheduled for 20th of June

2017-06-20 Thread Pranith Kumar Karampuri
On Tue, Jun 6, 2017 at 6:54 PM, Shyam wrote: > Hi, > > It's time to prepare the 3.11.1 release, which falls on the 20th of > each month [4], and hence would be June-20th-2017 this time around. > > This mail is to call out the following, > > 1) Are there any pending *blocker*

Re: [Gluster-users] How to remove dead peer, osrry urgent again :(

2017-06-12 Thread Pranith Kumar Karampuri
On Sun, Jun 11, 2017 at 2:12 PM, Atin Mukherjee wrote: > > On Sun, 11 Jun 2017 at 06:25, Lindsay Mathieson < > lindsay.mathie...@gmail.com> wrote: > >> On 11/06/2017 10:46 AM, WK wrote: >> > I thought you had removed vna as defective and then ADDED in vnh as >> > the

Re: [Gluster-users] Urgent :) Procedure for replacing Gluster Node on 3.8.12

2017-06-09 Thread Pranith Kumar Karampuri
On Sat, Jun 10, 2017 at 2:53 AM, Lindsay Mathieson < lindsay.mathie...@gmail.com> wrote: > On 9/06/2017 9:56 PM, Pranith Kumar Karampuri wrote: > > > gluster volume remove-brick datastore4 replica 2 >> > vna.proxmox.softlog:/tank/vmdata/datastore4 force >>

Re: [Gluster-users] Urgent :) Procedure for replacing Gluster Node on 3.8.12

2017-06-09 Thread Pranith Kumar Karampuri
On Fri, Jun 9, 2017 at 12:41 PM, wrote: > > I'm thinking the following: > > > > gluster volume remove-brick datastore4 replica 2 > > vna.proxmox.softlog:/tank/vmdata/datastore4 force > > > > gluster volume add-brick datastore4 replica 3 > >
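Spelled out, the sequence under discussion looks roughly like the following; the remove-brick command is quoted verbatim from the thread, while the add-brick target is hypothetical because the quoted message truncates before the new brick path:

    gluster volume remove-brick datastore4 replica 2 \
        vna.proxmox.softlog:/tank/vmdata/datastore4 force
    # hypothetical brick path on the replacement node
    gluster volume add-brick datastore4 replica 3 \
        vnh.proxmox.softlog:/tank/vmdata/datastore4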

Re: [Gluster-users] Files Missing on Client Side; Still available on bricks

2017-06-08 Thread Pranith Kumar Karampuri
+Raghavendra/Nithya On Tue, Jun 6, 2017 at 7:41 PM, Jarsulic, Michael [CRI] < mjarsu...@bsd.uchicago.edu> wrote: > Hello, > > I am still working at recovering from a few failed OS hard drives on my > gluster storage and have been removing, and re-adding bricks quite a bit. I > noticed yesterday

Re: [Gluster-users] ?==?utf-8?q? Heal operation detail of EC volumes

2017-06-08 Thread Pranith Kumar Karampuri
This mail was not in the same thread as the earlier one because the subject has an extra "?==?utf-8?q? ", so I thought it was not answered and answered it again. Sorry about that. On Sat, Jun 3, 2017 at 1:45 AM, Xavier Hernandez wrote: > Hi Serkan, > > On Thursday, June 01, 2017

Re: [Gluster-users] Heal operation detail of EC volumes

2017-06-08 Thread Pranith Kumar Karampuri
On Thu, Jun 8, 2017 at 12:49 PM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > > > On Fri, Jun 2, 2017 at 1:01 AM, Serkan Çoban <cobanser...@gmail.com> > wrote: > >> >Is it possible that this matches your observations ? >> Yes that matches wh

Re: [Gluster-users] Heal operation detail of EC volumes

2017-06-08 Thread Pranith Kumar Karampuri
On Fri, Jun 2, 2017 at 1:01 AM, Serkan Çoban wrote: > >Is it possible that this matches your observations ? > Yes that matches what I see. So 19 files is being in parallel by 19 > SHD processes. I thought only one file is being healed at a time. > Then what is the meaning

Re: [Gluster-users] Slow write times to gluster disk

2017-05-30 Thread Pranith Kumar Karampuri
t; dd tests are in > > http://mseas.mit.edu/download/phaley/GlusterUsers/TestVol/ > dd_testvol_gluster.txt > > Pat > > > On 05/30/2017 09:27 PM, Pranith Kumar Karampuri wrote: > > Pat, >What is the command you used? As per the following output, it se

Re: [Gluster-users] Slow write times to gluster disk

2017-05-30 Thread Pranith Kumar Karampuri
bricks (in .glusterfs): 1.4 GB/s > > The profile for the gluster test-volume is in > > http://mseas.mit.edu/download/phaley/GlusterUsers/TestVol/ > profile_testvol_gluster.txt > > Thanks > > Pat > > > > > On 05/30/2017 12:10 PM, Pranith Kumar Karampuri wrote

Re: [Gluster-users] Slow write times to gluster disk

2017-05-30 Thread Pranith Kumar Karampuri
, > > Thanks for the tip. We now have the gluster volume mounted under /home. > What tests do you recommend we run? > > Thanks > > Pat > > > > On 05/17/2017 05:01 AM, Pranith Kumar Karampuri wrote: > > > > On Tue, May 16, 2017 at 9:20 PM, Pat Hale

Re: [Gluster-users] Fwd: Re: GlusterFS removal from Openstack Cinder

2017-05-27 Thread Pranith Kumar Karampuri
On Wed, May 24, 2017 at 9:10 PM, Joe Julian wrote: > Forwarded for posterity and follow-up. > > Forwarded Message > Subject: Re: GlusterFS removal from Openstack Cinder > Date: Fri, 05 May 2017 21:07:27 + > From: Amye Scavarda

Re: [Gluster-users] gluster remove-brick problem

2017-05-19 Thread Pranith Kumar Karampuri
Adding gluster-users, developers who work on distribute module of gluster. On Fri, May 19, 2017 at 12:58 PM, 郭鸿岩(基础平台部) wrote: > Hello, > > I am a user of Gluster 3.8 from beijing, China. > I met a problem. I added a brick to a volume, but the

Re: [Gluster-users] GlusterFS+heketi+Kubernetes snapshots fail

2017-05-17 Thread Pranith Kumar Karampuri
+Snapshot maintainer. I think he is away for a week or so. You may have to wait a bit more. On Wed, May 10, 2017 at 2:39 AM, Chris Jones wrote: > Hi All, > > This was discussed briefly on IRC, but got no resolution. I have a > Kubernetes cluster running heketi and GlusterFS

Re: [Gluster-users] Bad perf for small files on large EC volume

2017-05-17 Thread Pranith Kumar Karampuri
On Tue, May 9, 2017 at 12:57 PM, Ingard Mevåg wrote: > You're not counting wrong. We won't necessarily transfer all of these > files to one volume though. It was more an example of the distribution of > file sizes. > But as you say healing might be a problem, but then again.

Re: [Gluster-users] GlusterFS Stuck Processes

2017-05-17 Thread Pranith Kumar Karampuri
Could you provide gluster volume info, gluster volume status and output of 'top' command so that we know which processes are acting up in the volume? On Mon, May 15, 2017 at 8:02 AM, Joshua Coyle wrote: > Hey Guys, > > > > I think I’ve got a couple of stuck processes on
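The requested information can be gathered on one of the affected nodes along these lines (output filenames are arbitrary):

    gluster volume info > /tmp/volume-info.txt
    gluster volume status > /tmp/volume-status.txt
    # one batch snapshot of top so the busy processes show up in the attachment
    top -bn 1 > /tmp/top.$(hostname).txt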

Re: [Gluster-users] Mounting GlusterFS volume in a client.

2017-05-17 Thread Pranith Kumar Karampuri
Volume size of the client doesn't matter for mounting volume from server. On Mon, May 15, 2017 at 8:22 PM, Dwijadas Dey wrote: > Hi >List users >I am trying to mount a GlusterFS server volume to a > Gluster Client in /var directory. My intention

Re: [Gluster-users] 3.9.1 in docker: problems when one of peers is unavailable.

2017-05-17 Thread Pranith Kumar Karampuri
Hey, 3.9.1 has reached its end of life; you can use either 3.8.x or 3.10.x, which are active at the moment. On Tue, May 16, 2017 at 11:03 AM, Rafał Radecki wrote: > Hi All. > > I have a 9-node dockerized glusterfs cluster and I am seeing a situation > that: > 1) docker

Re: [Gluster-users] Slow write times to gluster disk

2017-05-17 Thread Pranith Kumar Karampuri
On Wed, May 17, 2017 at 9:54 PM, Joe Julian <j...@julianfamily.org> wrote: > On 05/17/17 02:02, Pranith Kumar Karampuri wrote: > > On Tue, May 16, 2017 at 9:38 PM, Joe Julian <j...@julianfamily.org> wrote: > >> On 04/13/17 23:50, Pranith Kumar Karampuri wrote: &

Re: [Gluster-users] 120k context switches on GlsuterFS nodes

2017-05-17 Thread Pranith Kumar Karampuri
+ gluster-devel On Wed, May 17, 2017 at 10:50 PM, mabi wrote: > I don't know exactly what kind of context-switches it was but what I know > is that it is the "cs" number under "system" when you run vmstat. > > Also I use the percona linux monitoring template for cacti ( >
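For context, the counter being referred to is the "cs" column in vmstat's "system" group, e.g.:

    # ten 1-second samples; "cs" under "system" is the context-switch rate
    vmstat 1 10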

Re: [Gluster-users] 120k context switches on GlsuterFS nodes

2017-05-17 Thread Pranith Kumar Karampuri
+gluster-devel. I would expect it to be high because a context switch moves the CPU from one task to another, and syscalls do that, and all that bricks do in the end is syscalls. That said, I am not sure how to measure what is normal. Adding gluster-devel. On Tue, May 16, 2017 at 11:13 PM, mabi

Re: [Gluster-users] Slow write times to gluster disk

2017-05-17 Thread Pranith Kumar Karampuri
On Tue, May 16, 2017 at 9:38 PM, Joe Julian <j...@julianfamily.org> wrote: > On 04/13/17 23:50, Pranith Kumar Karampuri wrote: > > > > On Sat, Apr 8, 2017 at 10:28 AM, Ravishankar N <ravishan...@redhat.com> > wrote: > >> Hi Pat, >> >> I'

Re: [Gluster-users] Slow write times to gluster disk

2017-05-17 Thread Pranith Kumar Karampuri
one such engineer. Ben, Have any suggestions? > > Thanks > > Pat > > > > On 05/11/2017 12:06 PM, Pranith Kumar Karampuri wrote: > > > > On Thu, May 11, 2017 at 9:32 PM, Pat Haley <pha...@mit.edu> wrote: > >> >> Hi Pranith, >> >> Th

Re: [Gluster-users] Slow write times to gluster disk

2017-05-12 Thread Pranith Kumar Karampuri
On Sat, May 13, 2017 at 8:44 AM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > > > On Fri, May 12, 2017 at 8:04 PM, Pat Haley <pha...@mit.edu> wrote: > >> >> Hi Pranith, >> >> My question was about setting up a gluster volume on an ext4 p

Re: [Gluster-users] Slow write times to gluster disk

2017-05-12 Thread Pranith Kumar Karampuri
It works fine. > > Pat > > > > On 05/11/2017 12:06 PM, Pranith Kumar Karampuri wrote: > > > > On Thu, May 11, 2017 at 9:32 PM, Pat Haley <pha...@mit.edu> wrote: > >> >> Hi Pranith, >> >> The /home partition is mounted as ext4 >&

Re: [Gluster-users] Slow write times to gluster disk

2017-05-11 Thread Pranith Kumar Karampuri
sks, we can create plain distribute volume inside one of those directories. After we are done, we can remove the setup. What do you say? > > Pat > > > > > On 05/11/2017 07:05 AM, Pranith Kumar Karampuri wrote: > > > > On Thu, May 11, 2017 at 2:48 AM, Pat Haley <pha...@mit
