Re: [Gluster-devel] Removing problematic language in geo-replication

2020-07-22 Thread Kotresh Hiremath Ravishankar
+1 On Wed, Jul 22, 2020 at 2:34 PM Ravishankar N wrote: > Hi, > > The gluster code base has some words and terminology (blacklist, > whitelist, master, slave etc.) that can be considered hurtful/offensive > to people in a global open source setting. Some of the words can be fixed > trivially but

Re: [Gluster-devel] could you help to check about a glusterfs issue seems to be related to ctime

2020-03-17 Thread Kotresh Hiremath Ravishankar
ynthia > > > > *From:* Amar Tumballi > *Sent:* 2020年3月17日 13:18 > *To:* Zhou, Cynthia (NSB - CN/Hangzhou) > *Cc:* Kotresh Hiremath Ravishankar ; Gluster Devel < > gluster-devel@gluster.org> > *Subject:* Re: [Gluster-devel] could you help to check about a glusterfs

Re: [Gluster-devel] could you help to check about a glusterfs issue seems to be related to ctime

2020-03-12 Thread Kotresh Hiremath Ravishankar
t; > > *From:* Zhou, Cynthia (NSB - CN/Hangzhou) > *Sent:* 2020年3月12日 12:53 > *To:* 'Kotresh Hiremath Ravishankar' > *Cc:* 'Gluster Devel' > *Subject:* RE: could you help to check about a glusterfs issue seems to > be related to ctime > > > > From my local test onl

Re: [Gluster-devel] could you help to check about a glusterfs issue seems to be related to ctime

2020-03-11 Thread Kotresh Hiremath Ravishankar
greater but that would open up the race when two clients are updating the same file. This would result in keeping the older time than the latest. This requires code change and I don't think that should be done. Thanks, Kotresh On Wed, Mar 11, 2020 at 3:02 PM Kotresh Hiremath Ravishankar < kh

Re: [Gluster-devel] Solving Ctime Issue with legacy files [BUG 1593542]

2019-06-18 Thread Kotresh Hiremath Ravishankar
Hi Xavi, On Tue, Jun 18, 2019 at 12:28 PM Xavi Hernandez wrote: > Hi Kotresh, > > On Tue, Jun 18, 2019 at 8:33 AM Kotresh Hiremath Ravishankar < > khire...@redhat.com> wrote: > >> Hi Xavi, >> >> Reply inline. >> >> On Mon, Jun 17, 2019 at

Re: [Gluster-devel] Solving Ctime Issue with legacy files [BUG 1593542]

2019-06-18 Thread Kotresh Hiremath Ravishankar
Hi Xavi, Reply inline. On Mon, Jun 17, 2019 at 5:38 PM Xavi Hernandez wrote: > Hi Kotresh, > > On Mon, Jun 17, 2019 at 1:50 PM Kotresh Hiremath Ravishankar < > khire...@redhat.com> wrote: > >> Hi All, >> >> The ctime feature is enabled by default from

Re: [Gluster-devel] Bitrot: Time of signing depending on the file size???

2019-03-05 Thread Kotresh Hiremath Ravishankar
s and they already had a signature. I don't know the > reason for this. Maybe the client still keeps the fd open? I opened a bug for > this: > https://bugzilla.redhat.com/show_bug.cgi?id=1685023 > > Regards > David > > On Fri, 1 Mar 2019 at 18:29, Kotresh Hi

Re: [Gluster-devel] Bitrot: Time of signing depending on the file size???

2019-03-01 Thread Kotresh Hiremath Ravishankar
Interesting observation! But as discussed in the thread, bitrot signing depends on a 2 min timeout (by default) after the last fd closes. It doesn't have any correlation with the size of the file. Did you happen to verify that the fd was still open for large files for some reason? On Fri, Mar
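The question in the mail ("was the fd still open?") can be checked directly on the brick host. A minimal sketch, assuming a Linux node and that you already know the brick process PID (e.g. from `gluster volume status`); the helper name `fd_open_for` is hypothetical, not from the thread:

```shell
# Hypothetical helper: report whether any of the given PIDs still holds an
# open fd on FILE, by scanning /proc/<pid>/fd symlinks (Linux only).
fd_open_for() {
    local file pid link
    file=$(readlink -f "$1"); shift
    for pid in "$@"; do
        for link in /proc/"$pid"/fd/*; do
            if [ "$(readlink "$link" 2>/dev/null)" = "$file" ]; then
                return 0    # some process still has the file open
            fi
        done
    done
    return 1                # no open fd found on FILE
}
```

Usage would be `fd_open_for /bricks/b1/largefile <brick-pid>` on the brick node; a zero exit status would explain why the signer's 2-minute window has not started yet.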

Re: [Gluster-devel] Geo-rep tests failing on master Cent7-regressions

2018-12-04 Thread Kotresh Hiremath Ravishankar
On Tue, Dec 4, 2018 at 10:02 PM Amar Tumballi wrote: > Looks like that is correct, but that also is failing in another regression > shard/zero-flag.t > It's not related to this as it doesn't involve any code changes. Changes are restricted to tests. > On Tue, Dec 4, 2018 at 7:40 PM Shyam

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Branched and further dates

2018-10-08 Thread Kotresh Hiremath Ravishankar
Had forgotten to add Milind, CCing. On Mon, Oct 8, 2018 at 11:41 AM Kotresh Hiremath Ravishankar < khire...@redhat.com> wrote: > > > On Fri, Oct 5, 2018 at 10:31 PM Shyam Ranganathan > wrote: > >> On 10/05/2018 10:59 AM, Shyam Ranganathan wrote: >> > On 10/04/20

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Branched and further dates

2018-10-08 Thread Kotresh Hiremath Ravishankar
On Fri, Oct 5, 2018 at 10:31 PM Shyam Ranganathan wrote: > On 10/05/2018 10:59 AM, Shyam Ranganathan wrote: > > On 10/04/2018 11:33 AM, Shyam Ranganathan wrote: > >> On 09/13/2018 11:10 AM, Shyam Ranganathan wrote: > >>> RC1 would be around 24th of Sep. with final release tagging around 1st >

Re: [Gluster-devel] Release 5: Branched and further dates

2018-10-04 Thread Kotresh Hiremath Ravishankar
On Thu, Oct 4, 2018 at 9:03 PM Shyam Ranganathan wrote: > On 09/13/2018 11:10 AM, Shyam Ranganathan wrote: > > RC1 would be around 24th of Sep. with final release tagging around 1st > > of Oct. > > RC1 now stands to be tagged tomorrow, and patches that are being > targeted for a back port

Re: [Gluster-devel] Python3 build process

2018-09-27 Thread Kotresh Hiremath Ravishankar
On Thu, Sep 27, 2018 at 5:38 PM Kaleb S. KEITHLEY wrote: > On 9/26/18 8:28 PM, Shyam Ranganathan wrote: > > Hi, > > > > With the introduction of default python 3 shebangs and the change in > > configure.ac to correct these to py2 if the build is being attempted on > > a machine that does not

Re: [Gluster-devel] Clang-Formatter for GlusterFS.

2018-09-18 Thread Kotresh Hiremath Ravishankar
On Tue, Sep 18, 2018 at 2:44 PM, Amar Tumballi wrote: > > > On Tue, Sep 18, 2018 at 2:33 PM, Kotresh Hiremath Ravishankar < > khire...@redhat.com> wrote: > >> I have a different problem. clang is complaining on the 4.1 back port of >> a patch which is merged

Re: [Gluster-devel] Clang-Formatter for GlusterFS.

2018-09-18 Thread Kotresh Hiremath Ravishankar
I have a different problem. clang is complaining on the 4.1 back port of a patch which was merged in master before clang-format was brought in. Is there a way I can get smoke +1 for 4.1, as it won't be neat to have clang changes in 4.1 and not in master for the same patch. It might further affect the

Re: [Gluster-devel] Cloudsync with AFR

2018-09-16 Thread Kotresh Hiremath Ravishankar
Hi Anuradha, To enable the ctime (consistent time) feature, please enable the following two options: gluster vol set utime on gluster vol set ctime on Thanks, Kotresh HR On Fri, Sep 14, 2018 at 12:18 PM, Rafi Kavungal Chundattu Parambil < rkavu...@redhat.com> wrote: > Hi Anuradha, > > We have
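The two `gluster vol set` commands in the snippet appear to have lost their volume-name argument in the archive. A hedged sketch of the full form, using the option names exactly as the mail gives them; the volume name `myvol` and the `gluster volume get` verification step are assumptions, not from the thread:

```shell
# Placeholder volume name: myvol. Enables the consistent-time (ctime) feature:
gluster volume set myvol utime on
gluster volume set myvol ctime on

# Optional check (assumption: the option is queryable once set):
gluster volume get myvol ctime
```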

Re: [Gluster-devel] [Gluster-infra] Setting up machines from softserve in under 5 mins

2018-08-14 Thread Kotresh Hiremath Ravishankar
In the /etc/hosts, I think it is adding a different IP. On Mon, Aug 13, 2018 at 5:59 PM, Rafi Kavungal Chundattu Parambil < rkavu...@redhat.com> wrote: > This is so nice. I tried it and successfully created a test machine. It > would be great if there is a provision to extend the lifetime of vm's >

Re: [Gluster-devel] [Gluster-Maintainers] Master branch lock down status (Wed, August 08th)

2018-08-10 Thread Kotresh Hiremath Ravishankar
Hi Shyam/Atin, I have posted the patch[1] for geo-rep test cases failure: tests/00-geo-rep/georep-basic-dr-rsync.t tests/00-geo-rep/georep-basic-dr-tarssh.t tests/00-geo-rep/00-georep-verify-setup.t Please include patch [1] while triggering tests. The instrumentation patch [2] which

Re: [Gluster-devel] [Gluster-Maintainers] Master branch lock down status

2018-08-08 Thread Kotresh Hiremath Ravishankar
Hi Atin/Shyam, For the geo-rep test retries, could you take this instrumentation patch [1] and give a run? I have tried thrice on the patch, with brick mux enabled and without, but couldn't hit the geo-rep failure. Maybe it is some race and it's not happening with the instrumentation patch. [1]

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-03 Thread Kotresh Hiremath Ravishankar
Hi Du/Poornima, I was analysing bitrot and geo-rep failures and I suspect a bug in some perf xlator was one of the causes. I was seeing the following behaviour in a few runs. 1. Geo-rep synced data to slave. It creates an empty file and then rsync syncs data. But the test does "stat --format

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-02 Thread Kotresh Hiremath Ravishankar
Have attached it in the bug https://bugzilla.redhat.com/show_bug.cgi?id=1611635 On Thu, 2 Aug 2018, 22:21 Raghavendra Gowdappa, wrote: > > > On Thu, Aug 2, 2018 at 5:48 PM, Kotresh Hiremath Ravishankar < > khire...@redhat.com> wrote: > >> I am facing different

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-02 Thread Kotresh Hiremath Ravishankar
] E [fuse-bridge.c:4382:fuse_first_lookup] 0-fuse: first lookup on root failed (Transport endpoint is not connected) - On Thu, Aug 2, 2018 at 5:35 PM, Nigel Babu wrote: > On Thu, Aug 2, 2018 at 5:12 PM Kotresh Hiremath Ravishankar < > khire...@r

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-02 Thread Kotresh Hiremath Ravishankar
On Thu, Aug 2, 2018 at 5:05 PM, Atin Mukherjee wrote: > > > On Thu, Aug 2, 2018 at 4:37 PM Kotresh Hiremath Ravishankar < > khire...@redhat.com> wrote: > >> >> >> On Thu, Aug 2, 2018 at 3:49 PM, Xavi Hernandez >> wrote: >> >>&

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-02 Thread Kotresh Hiremath Ravishankar
On Thu, Aug 2, 2018 at 4:50 PM, Amar Tumballi wrote: > > > On Thu, Aug 2, 2018 at 4:37 PM, Kotresh Hiremath Ravishankar < > khire...@redhat.com> wrote: > >> >> >> On Thu, Aug 2, 2018 at 3:49 PM, Xavi Hernandez >> wrote: >> >>&

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-02 Thread Kotresh Hiremath Ravishankar
On Thu, Aug 2, 2018 at 11:43 AM, Xavi Hernandez wrote: > On Thu, Aug 2, 2018 at 6:14 AM Atin Mukherjee wrote: > >> >> >> On Tue, Jul 31, 2018 at 10:11 PM Atin Mukherjee >> wrote: >> >>> I just went through the nightly regression report of brick mux runs and >>> here's what I can summarize. >>>

Re: [Gluster-devel] [Gluster-Maintainers] Update: Gerrit review system has one more command now

2018-05-21 Thread Kotresh Hiremath Ravishankar
This will be very useful. Thank you. On Mon, May 21, 2018 at 11:45 PM, Vijay Bellur wrote: > > > On Mon, May 21, 2018 at 2:29 AM, Amar Tumballi > wrote: > >> Hi all, >> >> As a push towards more flexibility to our developers, and options to run >> more

Re: [Gluster-devel] Release 4.1: LTM release targeted for end of May

2018-03-21 Thread Kotresh Hiremath Ravishankar
Hi Shyam, Rafi and I are proposing the consistent time across replica feature for 4.1. https://github.com/gluster/glusterfs/issues/208 Thanks, Kotresh H R On Wed, Mar 21, 2018 at 2:05 PM, Ravishankar N wrote: > > > On 03/20/2018 07:07 PM, Shyam Ranganathan wrote: > >> On

Re: [Gluster-devel] [Gluster-infra] Infra-related Regression Failures and What We're Doing

2018-01-21 Thread Kotresh Hiremath Ravishankar
On Mon, Jan 22, 2018 at 12:21 PM, Nigel Babu wrote: > Hello folks, > > As you may have noticed, we've had a lot of centos6-regression failures > lately. The geo-replication failures are the new ones which particularly > concern me. These failures have nothing to do with the

Re: [Gluster-devel] [Gluster-infra] Recent regression failures

2018-01-12 Thread Kotresh Hiremath Ravishankar
Nigel, Could you give a machine where geo-rep is failing even with the bashrc fix, to debug? Thanks, Kotresh HR On Fri, Jan 12, 2018 at 3:54 PM, Amar Tumballi wrote: > Can we have a separate test case to validate for all the basic necessity > for whole test suite to pass?

Re: [Gluster-devel] [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-02 Thread Kotresh Hiremath Ravishankar
Hi Amudhan, Please go through the following, which should clarify upgrade concerns from DHT to RIO in 4.0. 1. RIO would not deprecate DHT. Both DHT and RIO would co-exist. 2. DHT volumes would not be migrated to RIO. DHT volumes would still be using DHT code. 3. The new volume

Re: [Gluster-devel] Release 3.12: Status of features (Require responses!)

2017-07-24 Thread Kotresh Hiremath Ravishankar
Answers inline. On Sat, Jul 22, 2017 at 1:36 AM, Shyam wrote: > Hi, > > Prepare for a lengthy mail, but needed for the 3.12 release branching, so > here is a key to aid the impatient, > > Key: > 1) If you asked for an exception to a feature (meaning delayed backport to >

[Gluster-devel] 3.12 Review Request

2017-07-24 Thread Kotresh Hiremath Ravishankar
Hi, The following patches are targeted for 3.12. They have undergone a few reviews and are yet to be merged. Please take some time to review, and merge if they look good. https://review.gluster.org/#/c/17744/ https://review.gluster.org/#/c/17785/ -- Thanks and Regards, Kotresh H R

[Gluster-devel] ./tests/bugs/distribute/bug-1389697.t generates a core file

2017-06-30 Thread Kotresh Hiremath Ravishankar
Hi, The above mentioned distribute test case generated a core which is not related to the patch. https://build.gluster.org/job/centos6-regression/5218/consoleFull Here is the backtrace. #0 0x7f3222fbaebf in dht_build_root_loc (inode=0xa800, loc=0x7f3220b10e50) at

Re: [Gluster-devel] Adding xxhash to gluster code base

2017-06-28 Thread Kotresh Hiremath Ravishankar
rnative available in the majority of > distributions, this is the only sensible approach. Much of the code in > contrib/ is not maintained at all. We should prevent this from happening > with new code and assigning an owner/maintainer and peer(s) just like > for ot

Re: [Gluster-devel] Adding xxhash to gluster code base

2017-06-27 Thread Kotresh Hiremath Ravishankar
Sure, I can do that. On Tue, Jun 27, 2017 at 12:28 PM, Amar Tumballi <atumb...@redhat.com> wrote: > > > On Tue, Jun 27, 2017 at 12:25 PM, Kotresh Hiremath Ravishankar < > khire...@redhat.com> wrote: > >> Hi, >> >> We were looking for faster non-crypt

[Gluster-devel] Adding xxhash to gluster code base

2017-06-27 Thread Kotresh Hiremath Ravishankar
Hi, We were looking for a faster non-cryptographic hash to be used for the gfid2path infra [1]. The initial testing was done with the md5 128-bit checksum, which is a slow, cryptographic hash, and using it makes the software not compliant with FIPS [2]. On searching online a bit we found that xxhash [3] seems to
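The distinction the mail draws can be seen from the command line. A sketch under the assumption that any fast non-cryptographic checksum illustrates the class of hash equally well: xxhash is not a standard coreutils tool, so `cksum` (a CRC) stands in for it here, while `md5sum` plays the slow cryptographic side.

```shell
# Compare a cryptographic digest against a non-cryptographic checksum of the
# same data; the former is what gfid2path wanted to move away from.
printf 'gfid2path key material' > /tmp/sample.$$

md5sum /tmp/sample.$$   # 128-bit cryptographic digest (FIPS-restricted class)
cksum  /tmp/sample.$$   # 32-bit CRC: fast, non-cryptographic stand-in

rm -f /tmp/sample.$$
```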

Re: [Gluster-devel] ./tests/encryption/crypt.t fails regression with core

2017-06-22 Thread Kotresh Hiremath Ravishankar
aintainers meeting also, no one volunteered to > fix encryption translator. So, I am fine with taking it out for now. Anyone > has objections? > > -Amar > > >> On Wed, Jun 21, 2017 at 10:41 PM, Kotresh Hiremath Ravishankar < >> khire...@redhat.com> wrote: >>

[Gluster-devel] ./tests/encryption/crypt.t fails regression with core

2017-06-21 Thread Kotresh Hiremath Ravishankar
Hi ./tests/encryption/crypt.t fails regression on https://build.gluster.org/job/centos6-regression/5112/consoleFull with a core. It doesn't seem to be related to the patch. Can somebody take a look at it? Following is the backtrace. Program terminated with signal SIGSEGV, Segmentation fault. #0

Re: [Gluster-devel] Announcing release 3.11 : Scope, schedule and feature tracking

2017-04-26 Thread Kotresh Hiremath Ravishankar
Hi Shyam, Following RFE is merged in master with github issue and would go in 3.11 GitHub issue: https://github.com/gluster/glusterfs/issues/191 Patch:https://review.gluster.org/#/c/15472/ Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1443373 Thanks and Regards, Kotresh H R

Re: [Gluster-devel] [Gluster-users] Announcing release 3.11 : Scope, schedule and feature tracking

2017-04-25 Thread Kotresh Hiremath Ravishankar
this is an RFE, it would be available from 3.11 and would not be back ported to 3.10.x Thanks and Regards, Kotresh H R - Original Message - > From: "Serkan Çoban" <cobanser...@gmail.com> > To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com> > Cc:

Re: [Gluster-devel] Announcing release 3.11 : Scope, schedule and feature tracking

2017-04-25 Thread Kotresh Hiremath Ravishankar
Hi https://github.com/gluster/glusterfs/issues/188 is merged in master and needs to go in 3.11 Thanks and Regards, Kotresh H R - Original Message - > From: "Kaushal M" > To: "Shyam" > Cc: gluster-us...@gluster.org, "Gluster Devel"

Re: [Gluster-devel] [Gluster-users] High load on glusterfsd process

2017-04-25 Thread Kotresh Hiremath Ravishankar
impossible to upgrade to the latest version, at least 3.7.20 would do. It has minimal conflicts. I can help you out with that. Thanks and Regards, Kotresh H R - Original Message - > From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com> > To: "Kotresh Hiremath Ravishankar"

Re: [Gluster-devel] [Gluster-users] High load on glusterfsd process

2017-04-24 Thread Kotresh Hiremath Ravishankar
..@redhat.com> > Cc: "Gluster Devel" <gluster-devel@gluster.org>, "gluster-users" > <gluster-us...@gluster.org>, "Kotresh Hiremath > Ravishankar" <khire...@redhat.com> > Sent: Monday, April 24, 2017 11:30:57 AM > Subject:

Re: [Gluster-devel] 3.9. feature freeze status check

2016-08-28 Thread Kotresh Hiremath Ravishankar
Hi Pranith, Please add the following feature of bitrot to 3.9 road map page. It is merged. feature/bitrot: Ondemand scrub option for bitrot (http://review.gluster.org/#/c/15111/) Thanks and Regards, Kotresh H R - Original Message - > From: "Poornima Gurusiddaiah"

Re: [Gluster-devel] CFP for Gluster Developer Summit

2016-08-23 Thread Kotresh Hiremath Ravishankar
Hi, We would like to propose the following talk. Title: Gluster Geo-replication Theme: Stability and Performance We plan to cover the following: - Introduction - New Features - Stability and Usability Improvements - Performance Improvements

[Gluster-devel] ./tests/basic/afr/granular-esh/add-brick.t suprious failure

2016-07-26 Thread Kotresh Hiremath Ravishankar
Hi, The above-mentioned AFR test has failed and is not related to the patch below. https://build.gluster.org/job/rackspace-regression-2GB-triggered/22485/consoleFull Can someone from the AFR team look into it? Thanks and Regards, Kotresh H R

[Gluster-devel] ./tests/basic/afr/entry-self-heal.t regression failure

2016-07-21 Thread Kotresh Hiremath Ravishankar
Hi, One more AFR test has failed for the patch http://review.gluster.org/14903/ and is not related to the patch. Can someone from the AFR team look into it? https://build.gluster.org/job/rackspace-regression-2GB-triggered/22357/console Thanks and Regards, Kotresh H R

Re: [Gluster-devel] Regression failures in last 3 days

2016-07-20 Thread Kotresh Hiremath Ravishankar
luster-devel@gluster.org>, "Kotresh Hiremath > Ravishankar" <khire...@redhat.com>, "Rajesh > Joseph" <rjos...@redhat.com>, "Ravishankar N" <ravishan...@redhat.com>, > "Ashish Pandey" <aspan...@redhat.com> > Sent: Wednesday

[Gluster-devel] ./tests/basic/afr/split-brain-favorite-child-policy.t regression failure on NetBSD

2016-07-18 Thread Kotresh Hiremath Ravishankar
Hi, The above-mentioned test has failed for the patch http://review.gluster.org/#/c/14927/1 and is not related to my patch. Can someone from the AFR team look into it? https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/18132/console Thanks and Regards, Kotresh H R

Re: [Gluster-devel] [Gluster-infra] Please test Gerrit 2.12.2

2016-05-31 Thread Kotresh Hiremath Ravishankar
Hi Prasanna, The 'Fix' button is visible. Maybe you are missing something; please check. Thanks and Regards, Kotresh H R - Original Message - > From: "Prasanna Kalever" > To: "Nigel Babu" > Cc: "gluster-infra" ,

[Gluster-devel] 3.8: Centos Regression Failure

2016-05-05 Thread Kotresh Hiremath Ravishankar
Hi, ./tests/bugs/replicate/bug-977797.t fails in the following run. https://build.gluster.org/job/rackspace-regression-2GB-triggered/20473/console It succeeds on my local machine, so it could be spurious. Could someone from the replication team look into it? Thanks and Regards, Kotresh H R

[Gluster-devel] Bitrot Review Request

2016-04-29 Thread Kotresh Hiremath Ravishankar
Hi Pranith, You had a concern about consuming I/O threads when bit-rot uses the rchecksum interface for signing, normal scrubbing and on-demand scrubbing with tiering. http://review.gluster.org/#/c/13833/5/xlators/storage/posix/src/posix.c As discussed over comments, the concern is valid and

Re: [Gluster-devel] [Gluster-Maintainers] Update on 3.7.10 - on schedule to be tagged at 2200PDT 30th March.

2016-03-31 Thread Kotresh Hiremath Ravishankar
Point noted, will keep you informed from next time! Thanks and Regards, Kotresh H R - Original Message - > From: "Kaushal M" <kshlms...@gmail.com> > To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com> > Cc: "Aravinda" <avish...@redh

Re: [Gluster-devel] [Gluster-Maintainers] Update on 3.7.10 - on schedule to be tagged at 2200PDT 30th March.

2016-03-31 Thread Kotresh Hiremath Ravishankar
m> > To: "Aravinda" <avish...@redhat.com> > Cc: "Gluster Devel" <gluster-devel@gluster.org>, maintain...@gluster.org, > "Kotresh Hiremath Ravishankar" > <khire...@redhat.com> > Sent: Thursday, March 31, 2016 6:56:18 PM > Subject:

Re: [Gluster-devel] [Gluster-Maintainers] Update on 3.7.10 - on schedule to be tagged at 2200PDT 30th March.

2016-03-31 Thread Kotresh Hiremath Ravishankar
Inline... - Original Message - > From: "Aravinda" <avish...@redhat.com> > To: "Kaushal M" <kshlms...@gmail.com>, "Gluster Devel" > <gluster-devel@gluster.org>, maintain...@gluster.org, "Kotresh > Hiremath Ravishankar"

[Gluster-devel] NetBSD Regression failure on 3.7: ./tests/features/trash.t

2016-03-14 Thread Kotresh Hiremath Ravishankar
Hi, trash.t is failing on 3.7 branch for below patch. https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/15153/console Could someone look into it? Thanks and Regards, Kotresh H R ___ Gluster-devel mailing list

Re: [Gluster-devel] [ANNOUNCE] Maintainer Update

2016-03-07 Thread Kotresh Hiremath Ravishankar
Congrats and all the best Aravinda! Thanks and Regards, Kotresh H R - Original Message - > From: "Venky Shankar" > To: "Gluster Devel" > Cc: maintain...@gluster.org > Sent: Tuesday, March 8, 2016 10:49:46 AM > Subject: [Gluster-devel]

[Gluster-devel] CentOS Regression generated core by .tests/basic/tier/tier-file-create.t

2016-03-07 Thread Kotresh Hiremath Ravishankar
Hi All, The regression run generated a core for the patch below. https://build.gluster.org/job/rackspace-regression-2GB-triggered/18859/console From the initial analysis, it's a tiered setup where the ec sub-volume is the cold tier and afr is the hot tier. The crash has happened

[Gluster-devel] Using geo-replication as backup solution using gluster volume snapshot!

2016-03-07 Thread Kotresh Hiremath Ravishankar
Hi All, Here is the idea: we can use geo-replication as a backup solution by taking gluster volume snapshots on the slave side. One of the drawbacks of geo-replication is that it's a continuous asynchronous replication and would not help in getting last week's or yesterday's data. So if we use
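The idea in the mail could be driven from cron on the slave cluster. A hedged sketch only: the volume name `slavevol` and the snapshot naming scheme are placeholders, and the mail does not prescribe any particular schedule.

```shell
# Periodic point-in-time backup on the geo-rep slave volume (placeholders:
# slavevol; run e.g. nightly from cron).
gluster snapshot create "backup_$(date +%Y%m%d)" slavevol

# List snapshots available for restore:
gluster snapshot list slavevol
```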

Re: [Gluster-devel] Cores generated with ./tests/geo-rep/georep-basic-dr-tarssh.t

2016-03-03 Thread Kotresh Hiremath Ravishankar
Hi, Yes, with this patch we need not set conn->trans to NULL in rpc_clnt_disable Thanks and Regards, Kotresh H R - Original Message - > From: "Soumya Koduri" <skod...@redhat.com> > To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>, &q

Re: [Gluster-devel] Cores generated with ./tests/geo-rep/georep-basic-dr-tarssh.t

2016-03-03 Thread Kotresh Hiremath Ravishankar
://review.gluster.org/#/c/13592/ Thanks and Regards, Kotresh H R - Original Message - > From: "Kotresh Hiremath Ravishankar" <khire...@redhat.com> > To: "Soumya Koduri" <skod...@redhat.com> > Cc: "Raghavendra G" <raghaven...@glust

Re: [Gluster-devel] Cores generated with ./tests/geo-rep/georep-basic-dr-tarssh.t

2016-03-01 Thread Kotresh Hiremath Ravishankar
ll. And also the cleanup happens in failure path. So the memory leak can happen, if there is no try for rpc invocation after DISCONNECT. It will be cleaned up otherwise. [1] http://review.gluster.org/#/c/13507/ Thanks and Regards, Kotresh H R - Original Message ----- > From: "Kotresh

Re: [Gluster-devel] Cores generated with ./tests/geo-rep/georep-basic-dr-tarssh.t

2016-02-29 Thread Kotresh Hiremath Ravishankar
Hi Soumya, I just tested that it is reproducible only with your patch, both on master and the 3.7.6 branch. The geo-rep test cases are marked bad in master, so it's not hit in master. rpc is introduced in the changelog xlator to communicate with applications via libgfchangelog. Venky/me will check why is

Re: [Gluster-devel] Bitrot stub forget()

2016-02-17 Thread Kotresh Hiremath Ravishankar
I will take care of putting up the patch upstream. Thanks and Regards, Kotresh H R - Original Message - > From: "Venky Shankar" <vshan...@redhat.com> > To: "FNU Raghavendra Manjunath" <rab...@redhat.com> > Cc: "Gluster Devel" <gluste

Re: [Gluster-devel] glusterfsd core on NetBSD (https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/14139/consoleFull)

2016-02-10 Thread Kotresh Hiremath Ravishankar
The crash reported on the above link is the same as bug 1221629. But the stack trace mentioned below looks to be from a different regression run? Can I get the link for the same? It is strange that the bt shows 'rpcsvc_record_build_header' calling 'gf_history_changelog', which it does not! Am I missing something?

Re: [Gluster-devel] changelog bug

2016-02-09 Thread Kotresh Hiremath Ravishankar
el...@redhat.com> > To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>, "Manikandan > Selvaganesh" <mselv...@redhat.com> > Cc: gluster-devel@gluster.org, "cyril peponnet" > <cyril.pepon...@alcatel-lucent.com> > Sent: Tuesday, February 9

Re: [Gluster-devel] changelog bug

2016-02-09 Thread Kotresh Hiremath Ravishankar
ique causes, > wouldn't it? > > On 02/09/2016 08:27 PM, Kotresh Hiremath Ravishankar wrote: > > Hi, > > > > This crash can't be same as BZ 1221629. The crash in the BZ 1221629 > > is with the rpc introduced in changelog in 3.7 along with bitrot. > > Co

Re: [Gluster-devel] changelog bug

2016-02-07 Thread Kotresh Hiremath Ravishankar
Hi, This bug is already tracked as BZ 1221629. I will start working on this and will update once it is fixed. Thanks and Regards, Kotresh H R - Original Message - > From: "Manikandan Selvaganesh" > To: "Emmanuel Dreyfus" > Cc:

Re: [Gluster-devel] Possible spurious test tests/bitrot/br-stub.t

2016-02-02 Thread Kotresh Hiremath Ravishankar
I will have a look at it! Thanks and Regards, Kotresh H R - Original Message - > From: "Venky Shankar" <vshan...@redhat.com> > To: "Sakshi Bansal" <saban...@redhat.com> > Cc: "Gluster Devel" <gluster-devel@gluster.org>, "

Re: [Gluster-devel] Possible spurious test tests/bitrot/br-stub.t

2016-02-02 Thread Kotresh Hiremath Ravishankar
] [client-rpc-fops.c:2664:client3_3_readdirp_cbk] 0-patchy-client-0: remote operation failed [Invalid argument] I talked to Soumya (nfs team) and she will be looking into it. Thanks and Regards, Kotresh H R - Original Message - > From: "Kotresh Hiremath Ravishankar" <khi

Re: [Gluster-devel] Glusterd crash in regression

2016-01-05 Thread Kotresh Hiremath Ravishankar
Hi Atin, The same test caused glusterd crash for my patch as well. https://build.gluster.org/job/rackspace-regression-2GB-triggered/17289/consoleFull Core was generated by `glusterd'. Program terminated with signal SIGSEGV, Segmentation fault. #0 0x7f8a77b7e223 in dict_lookup_common

Re: [Gluster-devel] Glusterd crash in regression

2016-01-05 Thread Kotresh Hiremath Ravishankar
Hi Atin, Here is the bug. https://bugzilla.redhat.com/show_bug.cgi?id=1296004 Thanks and Regards, Kotresh H R - Original Message - > From: "Atin Mukherjee" <amukh...@redhat.com> > To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com> > Cc: &qu

Re: [Gluster-devel] compound fop design first cut

2015-12-09 Thread Kotresh Hiremath Ravishankar
Geo-rep requirements inline. Thanks and Regards, Kotresh H R - Original Message - > From: "Pranith Kumar Karampuri" > To: "Vijay Bellur" , "Jeff Darcy" , > "Raghavendra Gowdappa" > , "Ira Cooper"

[Gluster-devel] NetBSD regression not kicking off!

2015-11-29 Thread Kotresh Hiremath Ravishankar
Hi, I am consistently getting the following errors for my patch. https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/12193/console Building remotely on nbslave7i.cloud.gluster.org (netbsd7_regression) in workspace

Re: [Gluster-devel] Need advice re some major issues with glusterfind

2015-10-23 Thread Kotresh Hiremath Ravishankar
said deleting changelogs is not recommended. If you don't have use cases of above kind, you can delete changelogs. Thanks and Regards, Kotresh H R - Original Message - > From: "John Sincock [FLCPTY]" <j.sinc...@fugro.com> > To: "Kotresh Hiremath Ravishankar&

Re: [Gluster-devel] Need advice re some major issues with glusterfind

2015-10-23 Thread Kotresh Hiremath Ravishankar
Hi John, The changelog files are generated every 15 secs, recording the changes that happened to the filesystem within that span. So every 15 sec, once the new changelog file is generated, it is ready to be consumed by glusterfind or any other consumer. The 15 sec time period is tunable. e.g.,
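The tunable the mail alludes to is the changelog xlator's rollover interval. A sketch of setting it, assuming the `changelog.rollover-time` volume option and using `myvol` as a placeholder volume name:

```shell
# Change the changelog generation interval (seconds); 15 is the default
# the mail describes. Placeholder volume name: myvol.
gluster volume set myvol changelog.rollover-time 15
```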

Re: [Gluster-devel] Spurious failures

2015-09-27 Thread Kotresh Hiremath Ravishankar
Thanks Michael! Thanks and Regards, Kotresh H R - Original Message - > From: "Michael Scherer" <msche...@redhat.com> > To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com> > Cc: "Krutika Dhananjay" <kdhan...@redhat.com>, "

Re: [Gluster-devel] Spurious failures

2015-09-24 Thread Kotresh Hiremath Ravishankar
If it takes some time, we should consider moving all geo-rep testcases under bad tests till then. Thanks and Regards, Kotresh H R - Original Message - > From: "Michael Scherer" <msche...@redhat.com> > To: "Kotresh Hiremath Ravishankar" <khire...@redh

Re: [Gluster-devel] Spurious failures

2015-09-24 Thread Kotresh Hiremath Ravishankar
Thank you :) Also, please check that the script I had given passes on all machines. Thanks and Regards, Kotresh H R - Original Message - > From: "Michael Scherer" <msche...@redhat.com> > To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com> > Cc:

Re: [Gluster-devel] Spurious failures

2015-09-24 Thread Kotresh Hiremath Ravishankar
l Message - > From: "Michael Scherer" <msche...@redhat.com> > To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com> > Cc: "Krutika Dhananjay" <kdhan...@redhat.com>, "Atin Mukherjee" > <amukh...@redhat.com>, "Gaurav Garg&

Re: [Gluster-devel] Spurious failures

2015-09-23 Thread Kotresh Hiremath Ravishankar
As of now it is supported only on Linux; it has known issues with other platforms such as NetBSD... Thanks and Regards, Kotresh H R - Original Message - > From: "Michael Scherer" <msche...@redhat.com> > To: "Kotresh Hiremath Ravishankar" <khire

Re: [Gluster-devel] Spurious failures

2015-09-23 Thread Kotresh Hiremath Ravishankar
"Krutika Dhananjay" <kdhan...@redhat.com> > To: "Atin Mukherjee" <amukh...@redhat.com> > Cc: "Gluster Devel" <gluster-devel@gluster.org>, "Gaurav Garg" > <gg...@redhat.com>, "Aravinda" <avish...@redhat.com>,

Re: [Gluster-devel] Spurious failures

2015-09-23 Thread Kotresh Hiremath Ravishankar
er version.
---
#!/bin/bash
function SSHM() {
    ssh -q \
        -oPasswordAuthentication=no \
        -oStrictHostKeyChecking=no \
        -oControlMaster=yes \
        "$@";
}
function cmd_slave() {
    local cmd_line;
    cmd_line=$(cat < From: "Kotresh

[Gluster-devel] Geo-rep: Solving changelog ordering problem!

2015-09-03 Thread Kotresh Hiremath Ravishankar
Hi DHT Team and Others, Changelog is a server-side translator that sits above POSIX and records FOPs. Hence, the order of operations is true only for that brick, and the order of operations is lost across bricks. e.g., (f1 hashes to brick1 and f2 to brick2) brick1 brick2
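The per-brick ordering point can be illustrated with two hypothetical changelog streams (timestamps, fops, and paths invented for illustration). Each brick's stream is internally ordered; a cross-brick order can only be reconstructed by merging on some global key, here an assumed timestamp:

```shell
# Hypothetical per-brick changelog entries: "timestamp fop file" lines.
# Each file is sorted on its own (per-brick order holds), but neither file
# alone gives the global order of operations.
printf '1 CREATE f1\n4 RENAME f1\n' > /tmp/brick1.$$
printf '2 CREATE f2\n3 UNLINK f2\n' > /tmp/brick2.$$

# sort -m merges already-sorted streams; -n -k1,1 orders by the timestamp key.
sort -m -n -k1,1 /tmp/brick1.$$ /tmp/brick2.$$

rm -f /tmp/brick1.$$ /tmp/brick2.$$
```

Without such a merge key, interleaving the two streams is ambiguous, which is exactly the ordering problem the mail raises.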

Re: [Gluster-devel] Introducing georepsetup - Gluster Geo-replication Setup Tool

2015-09-02 Thread Kotresh Hiremath Ravishankar
Hi Aravinda, I used it yesterday. It greatly simplifies the geo-rep setup. It would be great if it were enhanced to troubleshoot what's wrong in an already corrupted setup. Thanks and Regards, Kotresh H R - Original Message - > From: "Aravinda" > To: "Gluster Devel"

Re: [Gluster-devel] Geo-rep portability issues with NetBSD

2015-08-31 Thread Kotresh Hiremath Ravishankar
Original Message - > From: "Emmanuel Dreyfus" <m...@netbsd.org> > To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>, "Gluster Devel" > <gluster-devel@gluster.org> > Cc: "Aravinda" <avish...@redhat.com>

[Gluster-devel] Geo-rep portability issues with NetBSD

2015-08-28 Thread Kotresh Hiremath Ravishankar
Hi Emmanuel and others, Geo-rep has a few issues that need to be addressed to work with NetBSD. The following bug was raised to track them: https://bugzilla.redhat.com/show_bug.cgi?id=1257847 So till these issues are fixed, I think we should disable it only on NetBSD and let it run on Linux.

Re: [Gluster-devel] NetBSD regression failures

2015-08-19 Thread Kotresh Hiremath Ravishankar
Dreyfus m...@netbsd.org To: Kotresh Hiremath Ravishankar khire...@redhat.com, Avra Sengupta aseng...@redhat.com Cc: gluster-infra gluster-in...@gluster.org, Gluster Devel gluster-devel@gluster.org Sent: Wednesday, August 19, 2015 12:28:18 PM Subject: Re: [Gluster-devel] NetBSD regression

Re: [Gluster-devel] NetBSD regression failures

2015-08-19 Thread Kotresh Hiremath Ravishankar
- From: Kotresh Hiremath Ravishankar khire...@redhat.com To: Avra Sengupta aseng...@redhat.com Cc: gluster-infra gluster-in...@gluster.org, Gluster Devel gluster-devel@gluster.org Sent: Tuesday, August 18, 2015 11:50:00 AM Subject: Re: [Gluster-devel] NetBSD regression failures Yes

Re: [Gluster-devel] NetBSD regression failures

2015-08-18 Thread Kotresh Hiremath Ravishankar
Yes, it makes sense to move both geo-rep tests to bad tests for now till the issue gets fixed in NetBSD. I am looking into the NetBSD failures. Thanks and Regards, Kotresh H R - Original Message - From: Avra Sengupta aseng...@redhat.com To: Atin Mukherjee amukh...@redhat.com, Gluster

Re: [Gluster-devel] testcase ./tests/geo-rep/georep-basic-dr-rsync.t failure

2015-08-17 Thread Kotresh Hiremath Ravishankar
Thanks Emmanuel, I could not look into it as I was out of station. I will debug it today. Thanks and Regards, Kotresh H R - Original Message - From: Emmanuel Dreyfus m...@netbsd.org To: Kotresh Hiremath Ravishankar khire...@redhat.com Cc: Gluster Devel gluster-devel@gluster.org Sent

Re: [Gluster-devel] testcase ./tests/geo-rep/georep-basic-dr-rsync.t failure

2015-08-12 Thread Kotresh Hiremath Ravishankar
for other netbsd slave machines when those are brought up. Thanks and Regards, Kotresh H R - Original Message - From: Kotresh Hiremath Ravishankar khire...@redhat.com To: Susant Palai spa...@redhat.com, Emmanuel Dreyfus m...@netbsd.org Cc: Gluster Devel gluster-devel@gluster.org Sent

Re: [Gluster-devel] NetBSD regression tests not Initializing...

2015-07-05 Thread Kotresh Hiremath Ravishankar
Thanks Emmanuel. Thanks and Regards, Kotresh H R - Original Message - From: Emmanuel Dreyfus m...@netbsd.org To: Kotresh Hiremath Ravishankar khire...@redhat.com, Gluster Devel gluster-devel@gluster.org Sent: Sunday, July 5, 2015 12:52:23 AM Subject: Re: [Gluster-devel] NetBSD

[Gluster-devel] NetBSD regression tests not Initializing...

2015-07-03 Thread Kotresh Hiremath Ravishankar
Hi, NetBSD regressions are not initializing because of the following error, consistently across multiple re-triggers. I see the same error for quite a few patches. http://review.gluster.org/#/c/11443/ Building remotely on nbslave72.cloud.gluster.org (netbsd7_regression) in workspace

Re: [Gluster-devel] Regression Failure: ./tests/basic/quota.t

2015-07-02 Thread Kotresh Hiremath Ravishankar
Comments inline. Thanks and Regards, Kotresh H R - Original Message - From: Susant Palai spa...@redhat.com To: Sachin Pandit span...@redhat.com Cc: Kotresh Hiremath Ravishankar khire...@redhat.com, Gluster Devel gluster-devel@gluster.org Sent: Thursday, July 2, 2015 12:35:08 PM

Re: [Gluster-devel] Build and Regression failure in master branch!

2015-06-28 Thread Kotresh Hiremath Ravishankar
logs to new logging framework. Thanks and Regards, Kotresh H R - Original Message - From: Kotresh Hiremath Ravishankar khire...@redhat.com To: Gluster Devel gluster-devel@gluster.org Sent: Sunday, June 28, 2015 12:01:22 PM Subject: [Gluster-devel] Build and Regression failure in master

Re: [Gluster-devel] Build and Regression failure in master branch!

2015-06-28 Thread Kotresh Hiremath Ravishankar
Message - From: Atin Mukherjee atin.mukherje...@gmail.com To: Kotresh Hiremath Ravishankar khire...@redhat.com Cc: Gluster Devel gluster-devel@gluster.org Sent: Sunday, June 28, 2015 12:56:21 PM Subject: Re: [Gluster-devel] Build and Regression failure in master branch! -Atin Sent from one

Re: [Gluster-devel] Regresssion Failure (3.7 branch): afr-quota-xattr-mdata-heal.t

2015-06-25 Thread Kotresh Hiremath Ravishankar
Ok, Thanks. I have re-triggered it. Thanks and Regards, Kotresh H R - Original Message - From: Pranith Kumar Karampuri pkara...@redhat.com To: Kotresh Hiremath Ravishankar khire...@redhat.com, Gluster Devel gluster-devel@gluster.org Sent: Thursday, June 25, 2015 11:55:22 AM Subject

[Gluster-devel] Regression Failure: bug-1134822-read-only-default-in-graph.t

2015-06-24 Thread Kotresh Hiremath Ravishankar
Hi All, The above-mentioned test case failed for me and is not related to my patch. Could someone look into it? http://build.gluster.org/job/rackspace-regression-2GB-triggered/11267/consoleFull Thanks and Regards, Kotresh H R ___ Gluster-devel

[Gluster-devel] Regresssion Failure (3.7 branch): afr-quota-xattr-mdata-heal.t

2015-06-24 Thread Kotresh Hiremath Ravishankar
Hi, I see the above test case failing for my patch, which is unrelated to it. Could someone from the AFR team look into it? http://build.gluster.org/job/rackspace-regression-2GB-triggered/11332/consoleFull Thanks and Regards, Kotresh H R ___ Gluster-devel
