Re: [Gluster-devel] glusterfsd memory leak issue found after enable ssl

2019-05-08 Thread Milind Changire
awesome! well done! thank you for taking pains to fix the memory leak. On Wed, May 8, 2019 at 1:28 PM Zhou, Cynthia (NSB - CN/Hangzhou) < cynthia.z...@nokia-sbell.com> wrote: > Hi 'Milind Changire' , > > The leak is getting more and more clear to me now. the unsolved memory

Re: [Gluster-devel] Should we enable contention notification by default ?

2019-05-02 Thread Milind Changire
On Thu, May 2, 2019 at 6:44 PM Xavi Hernandez wrote: > Hi Ashish, > > On Thu, May 2, 2019 at 2:17 PM Ashish Pandey wrote: > >> Xavi, >> >> I would like to keep this option (features.lock-notify-contention) >> enabled by default. >> However, I can see that there is one more option which will

Re: [Gluster-devel] glusterfsd memory leak issue found after enable ssl

2019-04-22 Thread Milind Changire
According to BIO_new_socket() man page ... *If the close flag is set then the socket is shut down and closed when the BIO is freed.* For Gluster to have more control over the socket shutdown, the BIO_NOCLOSE flag is set. Otherwise, SSL takes control of socket shutdown whenever BIO is freed.

Re: [Gluster-devel] glusterfsd memory leak issue found after enable ssl

2019-04-22 Thread Milind Changire
...ULL)
+    BIO_free(priv->ssl_sbio);
+    priv->ssl_ssl = NULL;
+    priv->ssl_sbio = NULL;
+ }
  if (priv->ssl_private_key) {
      GF_FREE(priv->ssl_private_key);
*From:* Milind Changire

Re: [Gluster-devel] glusterfsd memory leak issue found after enable ssl

2019-04-21 Thread Milind Changire
This probably went unnoticed until now. On Mon, Apr 22, 2019 at 10:45 AM Zhou, Cynthia (NSB - CN/Hangzhou) < cynthia.z...@nokia-sbell.com> wrote: > Why there is no bio_free called in ssl_teardown_connection then? > > > > cynthia > > > > *From:* Milind Changire

Re: [Gluster-devel] [Gluster-users] Version uplift query

2019-02-27 Thread Milind Changire
you might want to check what build.log says, especially at the very bottom. Here's a hint from StackExchange. On Thu, Feb 28, 2019 at 12:42 PM ABHISHEK PALIWAL wrote: > I am trying to build Gluster 5.4 but getting below

Re: [Gluster-devel] [master][FAILED] brick-mux-regression

2018-12-02 Thread Milind Changire
On Mon, Dec 3, 2018 at 8:32 AM Raghavendra Gowdappa wrote: > On Mon, Dec 3, 2018 at 8:25 AM Raghavendra Gowdappa > wrote: > >> On Sat, Dec 1, 2018 at 11:02 AM Milind Changire >> wrote: >> >>> failed brick-mux-regression job: >>> https://build.glust

[Gluster-devel] [master][FAILED] brick-mux-regression

2018-11-30 Thread Milind Changire
failed brick-mux-regression job: https://build.gluster.org/job/regression-on-demand-multiplex/411/console patch: https://review.gluster.org/c/glusterfs/+/21719 stack trace: $ gdb -ex 'set sysroot ./' -ex 'core-file ./build/install/cores/glfs_epoll000-964.core' ./build/install/sbin/glusterfsd GNU

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-03 Thread Milind Changire
On Fri, Aug 3, 2018 at 11:04 AM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > On Thu, Aug 2, 2018 at 10:03 PM Pranith Kumar Karampuri < > pkara...@redhat.com> wrote: > >> On Thu, Aug 2, 2018 at 7:19 PM Atin Mukherjee >> wrote: >> >>> New addition - tests/basic/volume.t - failed twice

[Gluster-devel] [master][FAILED] ./tests/bugs/rpc/bug-954057.t

2018-06-15 Thread Milind Changire
Jenkins Job: https://build.gluster.org/job/centos7-regression/1421/console Patch: https://review.gluster.org/15811 -- Milind ___ Gluster-devel mailing list Gluster-devel@gluster.org http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] [master][FAILED] test ./tests/bugs/cli/bug-1169302.t

2018-06-10 Thread Milind Changire
The test fails for my patch https://review.gluster.org/15811. Could somebody take a look and see what the issue is? https://build.gluster.org/job/centos7-regression/1372/consoleFull I've tried to reproduce the issue on my CentOS 7.x VM but the test passes without problems. @Poornima Since

[Gluster-devel] [master] double-free corruption

2018-05-09 Thread Milind Changire
double-free corruption when running mutrace during a CREATE 5000 test run using small-file test script on upstream master following is the backtrace: # mutrace --max=30 -d /usr/local/sbin/glusterfsd -s testsystem1 --volfile-id testvol.testsystem1.gluster-brick1-testvol -p

Re: [Gluster-devel] Release 3.12.8: Scheduled for the 12th of April

2018-04-13 Thread Milind Changire
On Wed, Apr 11, 2018 at 8:46 AM, Jiffin Tony Thottan wrote: > Hi, > > It's time to prepare the 3.12.8 release, which falls on the 10th of > each month, and hence would be 12-04-2018 this time around. > > This mail is to call out the following, > > 1) Are there any pending

Re: [Gluster-devel] [release-4.0] FAILED ./tests/bugs/ec/bug-1236065.t

2018-03-28 Thread Milind Changire
On Tue, Mar 20, 2018 at 8:42 PM, Milind Changire <mchan...@redhat.com> > wrote: > >> Jenkins Job: https://build.gluster.org/job/centos7-regression/405/console >> Full >> >> -- >> Milind >> >> >> ___

[Gluster-devel] [release-4.0] FAILED ./tests/bugs/ec/bug-1236065.t

2018-03-20 Thread Milind Changire
Jenkins Job: https://build.gluster.org/job/centos7-regression/405/consoleFull -- Milind

Re: [Gluster-devel] tests/bugs/rpc/bug-921072.t - fails almost all the times in mainline

2018-02-20 Thread Milind Changire
wow! The very first test run on my CentOS 6.9 VM passed successfully within 1 minute. I'll now try this on a CentOS 7 VM. On Wed, Feb 21, 2018 at 9:59 AM, Nigel Babu wrote: > The immediate cause of this failure is that we merged the timeout patch > which gives each test 200

Re: [Gluster-devel] gluster volume stop and the regressions

2018-02-13 Thread Milind Changire
s and wait till > it finishes for 30 secs and still volume stop fails with rebalance session > in progress error, that means either (a) rebalance session took more than > the timeout which has been passed to EXPECT_WITHIN or (b) there's a bug in > the code. > > On Thu, Feb 1,

[Gluster-devel] gluster volume stop and the regressions

2018-01-31 Thread Milind Changire
If a *volume stop* fails at a user's production site with a reason like *rebalance session is active*, then the admin will wait for the session to complete and then reissue a *volume stop*. So, in essence, the failed volume stop is not fatal. For the regression tests, I would like to propose to

[Gluster-devel] [FAILED][master] tests/basic/afr/durability-off.t

2018-01-25 Thread Milind Changire
Could AFR engineers check why tests/basic/afr/durability-off.t fails in brick-mux mode? Here's the job URL: https://build.gluster.org/job/centos6-regression/8654/console -- Milind

Re: [Gluster-devel] Setting up dev environment

2018-01-19 Thread Milind Changire
If you are a Gluster contributor then you'll need to have a GitHub Account with your Public SSH Key uploaded at GitHub to use the ssh transport. If you are not a Gluster contributor, then you might just want to use the https transport instead of the ssh transport to clone the glusterfs repo off

Re: [Gluster-devel] cluster/dht: restrict migration of opened files

2018-01-18 Thread Milind Changire
On Tue, Jan 16, 2018 at 2:52 PM, Raghavendra Gowdappa wrote: > All, > > Patch [1] prevents migration of opened files during rebalance operation. > If patch [1] affects you, please voice out your concerns. [1] is a stop-gap > fix for the problem discussed in issues [2][3] > >

Re: [Gluster-devel] Integration of GPU with glusterfs

2018-01-10 Thread Milind Changire
Bit-rot detection is another feature that consumes a lot of CPU to calculate file content hashes. On Thu, Jan 11, 2018 at 11:42 AM, Ashish Pandey wrote: > Hi, > > We have been thinking of exploiting GPU capabilities to enhance > performance of glusterfs. We would like to know others
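As a rough illustration of the work involved (a sketch only; `file_signature`, the hash algorithm, and the chunk size are my assumptions, not the bit-rot daemon's actual implementation), streaming a file's content through SHA-256 looks like:

```python
import hashlib

def file_signature(path, chunk_size=128 * 1024):
    """Stream a file through SHA-256 in fixed-size chunks.

    Illustrative only: the real bit-rot daemon's hash choice and
    chunking strategy are glusterfs implementation details.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read until EOF; avoids loading large files into memory at once.
        for block in iter(lambda: f.read(chunk_size), b""):
            h.update(block)
    return h.hexdigest()
```

Hashing every byte of every file this way is exactly the kind of CPU-bound, data-parallel workload the GPU discussion is about.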

Re: [Gluster-devel] trash.t failure with brick multiplexing [Was Re: Build failed in Jenkins: regression-test-with-multiplex #574]

2018-01-02 Thread Milind Changire
On Tue, Jan 2, 2018 at 5:32 PM, Milind Changire <mchan...@redhat.com> wrote: > > On Tue, Jan 2, 2018 at 10:44 AM, Atin Mukherjee <amukh...@redhat.com> > wrote: > >> >> >> On Thu, Dec 21, 2017 at 7:27 PM, Atin Mukherjee <amukh...@redhat.com> >&g

Re: [Gluster-devel] trash.t failure with brick multiplexing [Was Re: Build failed in Jenkins: regression-test-with-multiplex #574]

2018-01-02 Thread Milind Changire
On Tue, Jan 2, 2018 at 10:44 AM, Atin Mukherjee wrote: > > > On Thu, Dec 21, 2017 at 7:27 PM, Atin Mukherjee > wrote: > >> >> >> On Wed, Dec 20, 2017 at 11:58 AM, Atin Mukherjee >> wrote: >> >>>

Re: [Gluster-devel] Need inputs on patch #17985

2017-12-06 Thread Milind Changire
With the tests conducted, I could not find any evidence of a performance regression in quick-read. On Thu, Nov 30, 2017 at 11:01 AM, Raghavendra G wrote: > I think this caused regression in quick-read. On going through code, I > realized Quick-read doesn't fetch

[Gluster-devel] [FAILED] [master] snapshot test failed and generated core on master

2017-12-01 Thread Milind Changire
Snapshot team, Please take a look at https://build.gluster.org/job/centos6-regression/7804/console to help me understand if there's anything amiss from my end. Job URL: https://build.gluster.org/job/centos6-regression/7804/console -- Milind

[Gluster-devel] [FAILED] [master] ./tests/basic/afr/split-brain-favorite-child-policy.t

2017-11-23 Thread Milind Changire
Request the AFR team to take a peek at: https://build.gluster.org/job/centos6-regression/7623/console FYI: My patch addresses changes related to SSL communication. -- Milind

Re: [Gluster-devel] [Gluster-Maintainers] Changing Submit Type on review.gluster.org

2017-09-07 Thread Milind Changire
*Squashed Patches* I believe individual engineers have to own the responsibility of maintaining the history of all appropriate Change-Ids in the commit message when multiple patches have been squashed/merged into one commit. On Thu, Sep 7, 2017 at 11:50 AM, Nigel Babu

Re: [Gluster-devel] Glusterd2 - Some anticipated changes to glusterfs source

2017-08-03 Thread Milind Changire
On Thu, Aug 3, 2017 at 12:56 PM, Kaushal M wrote: > On Thu, Aug 3, 2017 at 2:14 AM, Niels de Vos wrote: > > On Wed, Aug 02, 2017 at 05:03:35PM +0530, Prashanth Pai wrote: > >> Hi all, > >> > >> The ongoing work on glusterd2 necessitates following

Re: [Gluster-devel] brick multiplexing and memory consumption

2017-06-24 Thread Milind Changire
Could we build with GF_DISABLE_MEMPOOL and use -fsanitize=address, -fsanitize=thread, and -fsanitize=leak? Or have these been tried and tested, or are they implied by the other compiler options we already use?
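A build sketch along those lines (hypothetical flags, not a tested recipe; it assumes GF_DISABLE_MEMPOOL is honored as a compile-time define; note that ASan and TSan cannot be combined in one build, and LSan is bundled with ASan):

```shell
# Hypothetical: memory pools disabled, AddressSanitizer + LeakSanitizer on.
# A ThreadSanitizer run would be a separate build with -fsanitize=thread.
./configure CFLAGS="-g -O1 -fno-omit-frame-pointer -fsanitize=address -DGF_DISABLE_MEMPOOL" \
            LDFLAGS="-fsanitize=address"
make
```

Disabling the mempools matters because pooled allocations are recycled rather than freed, which hides use-after-free and leak reports from the sanitizers.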

[Gluster-devel] glusterfind: request for reviews

2017-06-08 Thread Milind Changire
tools/glusterfind: add --end-time option tools/glusterfind: add --field-separator option -- Milind

[Gluster-devel] [master] [FAILED] ./tests/bugs/core/bug-1432542-mpx-restart-crash.t: 12 new core files

2017-05-22 Thread Milind Changire
Job: https://build.gluster.org/job/centos6-regression/4731/console -- Milind

[Gluster-devel] [release-3.8] FAILED ./tests/bitrot/br-state-check.t: 1 new core files

2017-05-18 Thread Milind Changire
FYI: https://build.gluster.org/job/netbsd7-regression/4167/consoleFull -- Milind

[Gluster-devel] Gluster RPC Internals - Lecture #2 - recording

2017-03-07 Thread Milind Changire
https://bluejeans.com/s/G4Nx@/ To download the recording: hover the mouse over the thumbnail at the bottom and a download icon will appear at the thumbnail's bottom-right corner. The icon remains hidden until the mouse is over the thumbnail. -- Milind

[Gluster-devel] [REMINDER] Gluster RPC Internals - Lecture #2 - TODAY

2017-03-06 Thread Milind Changire
Blue Jeans Meeting ID: 1546612044 Start Time: 7:30pm India Time (UTC+0530) Duration: 2 hours https://www.bluejeans.com/ -- Milind

[Gluster-devel] Gluster RPC Internals - Lecture #2

2017-03-02 Thread Milind Changire via Blue Jeans Network
(Blue Jeans calendar invite attachment: VCALENDAR, timezone Asia/Kolkata)

[Gluster-devel] Gluster RPC Internals - Lecture #1 - recording

2017-03-01 Thread Milind Changire
https://bluejeans.com/s/e59Wh/ -- Milind

[Gluster-devel] [master] [FAILED] [centos6] tests/bitrot/bug-1373520.t

2017-02-28 Thread Milind Changire
2 of 2 runs failed: https://build.gluster.org/job/centos6-regression/3451/consoleFull https://build.gluster.org/job/centos6-regression/3458/consoleFull

[Gluster-devel] updatedb and gluster volumes

2017-02-25 Thread Milind Changire
Would it be wise to prevent updatedb from crawling ALL Gluster volumes? i.e., at the brick on servers as well as at the mount point on clients. The implementation would be to add glusterfs as a file-system type to the PRUNEFS variable setting in updatedb.conf. -- Milind
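The proposed change would amount to something like this in /etc/updatedb.conf (illustrative sketch; the exact type string to prune, e.g. fuse.glusterfs on FUSE-mounted clients versus the brick file system on servers, would need to be confirmed):

```
# /etc/updatedb.conf (sketch): keep updatedb off gluster mounts
PRUNEFS = "NFS nfs nfs4 afs autofs fuse.glusterfs glusterfs"
```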

[Gluster-devel] Gluster RPC Internals Lecture by Raghavendra G

2017-02-21 Thread Milind Changire via Blue Jeans Network
(Blue Jeans calendar invite attachment: VCALENDAR, timezone Asia/Kolkata)

Re: [Gluster-devel] patch for "limited performance for disperse volumes"

2017-02-10 Thread Milind Changire
, Raghavendra Gowdappa wrote: +gluster-devel - Original Message - From: "Milind Changire" <mchan...@redhat.com> To: "Raghavendra Gowdappa" <rgowd...@redhat.com> Cc: "rhs-zteam" <rhs-zt...@redhat.com> Sent: Thursday, February 9, 2017 11:00:18 PM Subj

Re: [Gluster-devel] decoupling network.ping-timeout and transport.tcp-user-timeout

2017-01-11 Thread Milind Changire
+gluster-users Milind On 01/11/2017 03:21 PM, Milind Changire wrote: The management connection uses network.ping-timeout to time out and retry connection to a different server if the existing connection end-point is unreachable from the client. Due to the nature of the parameters involved

[Gluster-devel] decoupling network.ping-timeout and transport.tcp-user-timeout

2017-01-11 Thread Milind Changire
The management connection uses network.ping-timeout to time out and retry connection to a different server if the existing connection end-point is unreachable from the client. Due to the nature of the parameters involved in the TCP/IP network stack, it becomes imperative to control the other
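For context (a sketch; the mapping from the Gluster option name to the kernel knob is my inference), transport.tcp-user-timeout corresponds to Linux's TCP_USER_TIMEOUT socket option, which bounds how long transmitted data may remain unacknowledged before the kernel drops the connection:

```python
import socket

# TCP_USER_TIMEOUT (Linux, tcp(7)): milliseconds that transmitted data may
# stay unacknowledged before the kernel forcibly closes the connection.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT, 20000)
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT))  # 20000
s.close()
```

This is the transport-level timer that interacts with network.ping-timeout, which is the reason the two need to be tuned in concert rather than independently.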

[Gluster-devel] Release 3.10 feature proposal: multi-threaded promotions and demotions in tiering

2016-12-09 Thread Milind Changire
Currently, there's a single promotion thread and a single demotion thread serving every tier volume. The individual threads iterate over the bricks every pass. Performance can be improved by assigning multiple promotion and demotion threads to a single brick. Different bricks can

Re: [Gluster-devel] tiering: emergency demotions

2016-10-13 Thread Milind Changire
ing the coldest file first 2. approximate: choosing this strategy could mean we choose the first available file from the database query and demote it even if it is hot and actively written to. Milind On 08/12/2016 08:25 PM, Milind Changire wrote: Patch for review: http://review.glust

Re: [Gluster-devel] tiering: emergency demotions

2016-08-12 Thread Milind Changire
Patch for review: http://review.gluster.org/15158 Milind On 08/12/2016 07:27 PM, Milind Changire wrote: On 08/10/2016 12:06 PM, Milind Changire wrote: Emergency demotions will be required whenever writes breach the hi-watermark. Emergency demotions are required to avoid ENOSPC in case

Re: [Gluster-devel] tiering: emergency demotions

2016-08-12 Thread Milind Changire
On 08/10/2016 12:06 PM, Milind Changire wrote: Emergency demotions will be required whenever writes breach the hi-watermark. Emergency demotions are required to avoid ENOSPC in case of continuous writes that originate on the hot tier. There are two concerns in this area: 1. enforcing max-cycle

[Gluster-devel] tiering: emergency demotions

2016-08-10 Thread Milind Changire
Emergency demotions will be required whenever writes breach the hi-watermark. Emergency demotions are required to avoid ENOSPC in case of continuous writes that originate on the hot tier. There are two concerns in this area: 1. enforcing max-cycle-time during emergency demotions

Re: [Gluster-devel] mount_dir value seems clobbered in all /var/lib/glusterd/vols//bricks/: files

2016-08-05 Thread Milind Changire
by snapshot, however I am just wondering how are we surviving this case. ~Atin On Thu, Aug 4, 2016 at 5:39 PM, Milind Changire <mchan...@redhat.com <mailto:mchan...@redhat.com>> wrote: here's one of the brick definition files for a volume named "twoXtwo" [root@f24node0

Re: [Gluster-devel] regression burn-in summary over the last 7 days

2016-08-04 Thread Milind Changire
On 08/04/2016 05:40 PM, Kaleb KEITHLEY wrote: On 08/04/2016 08:07 AM, Niels de Vos wrote: On Thu, Aug 04, 2016 at 12:00:53AM +0200, Niels de Vos wrote: On Wed, Aug 03, 2016 at 10:30:28AM -0400, Vijay Bellur wrote: ... ./tests/bugs/gfapi/bug-1093594.t ; Failed 1 times Regression

[Gluster-devel] mount_dir value seems clobbered in all /var/lib/glusterd/vols//bricks/: files

2016-08-04 Thread Milind Changire
here's one of the brick definition files for a volume named "twoXtwo" [root@f24node0 bricks]# cat f24node1\:-glustervols-twoXtwo-dir hostname=f24node1 path=/glustervols/twoXtwo/dir real_path=/glustervols/twoXtwo/dir listen-port=0 rdma.listen-port=0 decommissioned=0 brick-id=twoXtwo-client-1

Re: [Gluster-devel] tier: breaking down the monolith processing function

2016-07-21 Thread Milind Changire
On 07/21/2016 03:03 AM, Vijay Bellur wrote: On 07/19/2016 07:54 AM, Milind Changire wrote: I've attempted to break the tier_migrate_using_query_file() function into relatively smaller functions. The important one is tier_migrate_link(). Can tier_migrate_link() be broken down further? Having

Re: [Gluster-devel] tier: breaking down the monolith processing function for glusterfs-3.9.0

2016-07-19 Thread Milind Changire
I'm planning to get this into the upstream 3.9 release. Milind On 07/19/2016 05:24 PM, Milind Changire wrote: I've attempted to break the tier_migrate_using_query_file() function into relatively smaller functions. The important one is tier_migrate_link(). Please take a look at http

[Gluster-devel] tier: breaking down the monolith processing function

2016-07-19 Thread Milind Changire
I've attempted to break the tier_migrate_using_query_file() function into relatively smaller functions. The important one is tier_migrate_link(). Please take a look at http://review.gluster.org/14957 and voice your opinions. A prelude to this effort is similar work as part of

Re: [Gluster-devel] Reduce memcpy in glfs read and write

2016-06-21 Thread Milind Changire
Would https://bugzilla.redhat.com/show_bug.cgi?id=1233136 be related to Sachin's problem? Milind On 06/21/2016 06:28 PM, Pranith Kumar Karampuri wrote: Hey!! Hope you are doing good. I took a look at the bt. So when flush comes write-behind has to flush all the writes down. I see the

[Gluster-devel] [master] FAILED jobs: freebsd-smoke and glusterfs-devrpms

2016-05-27 Thread Milind Changire
These seem to be failing at the same points for the same patch: http://build.gluster.org/job/freebsd-smoke/15026/ http://build.gluster.org/job/freebsd-smoke/15024/ http://build.gluster.org/job/glusterfs-devrpms/16744/ http://build.gluster.org/job/glusterfs-devrpms/16742/ Any advice for

[Gluster-devel] [release-3.7] smoke build failed

2016-05-13 Thread Milind Changire
Job: https://build.gluster.org/job/smoke/27731/console Error: bin/mkdir: cannot create directory `/usr/lib/python2.6/site-packages/gluster': Permission denied Please advise. Do I just resubmit the job? Would restarting the VM be of help here? This is the second time the smoke test has failed

Re: [Gluster-devel] [release-3.8] Need update on the status of "Glusterfind and Bareos Integration"

2016-05-11 Thread Milind Changire
Looks like all relevant patches to glusterfind and libgfapi have made it to the release-3.8 branch. Once the official release has been done it can be communicated to Bareos and they can resume testing against the release. Milind On 05/11/2016 11:46 PM, Niels de Vos wrote: Hi Milind, could you

[Gluster-devel] MKDIR_P and mkdir_p

2016-05-09 Thread Milind Changire
On Mon, May 09, 2016 at 12:02:56PM +0530, Milind Changire wrote: > Niels, Kaleb, > With Niels' commit 4ac2ff18db62db192c49affd8591e846c810667a > reverting Manu's commit 1fbcecb72ef3525823536b640d244d1e5127a37f > on upstream master, and w.r.t. Kaleb's patch > http://review.gl

Re: [Gluster-devel] [master] FAILED: NetBSD regression for tests/performance/open-behind.t

2016-03-14 Thread Milind Changire
Thanks manu. -- Milind - Original Message - From: "Emmanuel Dreyfus" <m...@netbsd.org> To: "Milind Changire" <mchan...@redhat.com>, gluster-devel@gluster.org Sent: Tuesday, March 15, 2016 8:12:09 AM Subject: Re: [Gluster-devel] [master] FAILED: NetBSD r

[Gluster-devel] [master] FAILED: NetBSD regression for tests/performance/open-behind.t

2016-03-14 Thread Milind Changire
https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/15170/ [06:30:49] Running tests in file ./tests/performance/open-behind.t tar: Failed open to read/write on

[Gluster-devel] [master] FAILED: freebsd smoke

2016-03-11 Thread Milind Changire
https://build.gluster.org/job/freebsd-smoke/12914/ = Making install in nsr-server --- install-recursive --- Making install in src --- nsr-cg.c --- /usr/local/bin/python /usr/home/jenkins/root/workspace/freebsd-smoke/xlators/experimental/nsr-server/src/gen-fops.py

[Gluster-devel] [master] FAILED NetBSD regression: quota.t

2016-03-08 Thread Milind Changire
https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/14776/ == Running tests in file ./tests/basic/quota.t [06:08:03] ./tests/basic/quota.t .. not ok 75 not ok 76 Failed 2/76 subtests [06:08:03] Test Summary Report

Re: [Gluster-devel] [master] FAILED: bug-1303028-Rebalance-glusterd-rpc-connection-issue.t

2016-03-08 Thread Milind Changire
oops! how did I miss that :) https://build.gluster.org/job/rackspace-regression-2GB-triggered/18683/ -- Milind - Original Message - From: "Mohammed Rafi K C" <rkavu...@redhat.com> To: "Raghavendra Gowdappa" <rgowd...@redhat.com>, "Milind Changir

[Gluster-devel] [master] FAILED: bug-1303028-Rebalance-glusterd-rpc-connection-issue.t

2016-03-07 Thread Milind Changire
== Running tests in file ./tests/bugs/glusterd/bug-1303028-Rebalance-glusterd-rpc-connection-issue.t [07:27:48] ./tests/bugs/glusterd/bug-1303028-Rebalance-glusterd-rpc-connection-issue.t .. not ok 11 Got "1" instead of "0" not ok 14 Got "1"

[Gluster-devel] [FAILED] NetBSD-regression for ./tests/basic/quota-anon-fd-nfs.t, ./tests/basic/tier/fops-during-migration.t, ./tests/basic/tier/record-metadata-heat.t

2016-02-08 Thread Milind Changire
http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/14096/consoleFull [11:56:33] ./tests/basic/quota-anon-fd-nfs.t .. not ok 21 not ok 22 not ok 24 not ok 26 not ok 28 not ok 30 not ok 32 not ok 34 not ok 36 Failed 9/40 subtests [12:10:07]

[Gluster-devel] [FAILED] NetBSD-regression for ./tests/basic/afr/self-heald.t

2016-02-08 Thread Milind Changire
https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/14089/consoleFull [08:44:20] ./tests/basic/afr/self-heald.t .. not ok 37 Got "0" instead of "1" not ok 52 Got "0" instead of "1" not ok 67 Failed 4/83 subtests Please advise. -- Milind

[Gluster-devel] compare-bug-version-and-git-branch.sh FAILING

2016-01-06 Thread Milind Changire
for patch: http://review.gluster.org/13186 Jenkins failed job: https://build.gluster.org/job/compare-bug-version-and-git-branch/14201/ I had mistakenly entered a downstream BUG ID for rfc.sh and then later amended the commit message with the correct mainline BUG ID and resubmitted via rfc.sh. I

[Gluster-devel] RENAME syscall semantics

2015-12-11 Thread Milind Changire
Gluster uses changelogs to perform geo-replication. The changelogs record syscalls which are forwarded from the master cluster and replayed on the slave cluster to provide the geo-replication feature. If two hard links (h1 and h2) point to the same inode and a Python statement of os.rename(h1, h2) is
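The POSIX wrinkle at issue here, as a minimal demonstration (assuming a Linux host): when oldpath and newpath are hard links to the same inode, rename() is required to do nothing and return success, so neither link disappears — which a naive changelog replay on the slave has to account for:

```python
import os
import tempfile

d = tempfile.mkdtemp()
h1 = os.path.join(d, "h1")
h2 = os.path.join(d, "h2")
with open(h1, "w") as f:
    f.write("data")
os.link(h1, h2)    # h1 and h2 now share one inode (two hard links)
os.rename(h1, h2)  # POSIX: same inode -> no-op, returns success
print(os.path.exists(h1), os.path.exists(h2))  # True True
```

Both names survive on the master, so a replay that interprets the rename as "h1 disappears" would diverge from the master's state.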

Re: [Gluster-devel] compound fop design first cut

2015-12-11 Thread Milind Changire
On Wed, Dec 9, 2015 at 8:02 PM, Jeff Darcy wrote: > > > > On December 9, 2015 at 7:07:06 AM, Ira Cooper (i...@redhat.com) wrote: > > A simple "abort on failure" and let the higher levels clean it up is > > probably right for the type of compounding I propose. It is what SMB2 >

Re: [Gluster-devel] what rpm sub-package do /usr/{libexec, sbin}/gfind_missing_files belong to?

2015-11-11 Thread Milind Changire
They are indeed part of the geo-rep sub-package; the package listing (rpm -qlp) says so. But I guess if somebody attempts a server build without geo-replication, then gfind_missing_files will get appended to the %files ganesha section, which is defined just above the %files geo-replication

[Gluster-devel] RHEL-5 Client build failed

2015-10-16 Thread Milind Changire
The following commit to the release-3.7 branch causes the RHEL-5 client build to fail because the header it includes isn't available on RHEL-5: ca5b466d rpc/rpc-transport/socket/src/socket.h (Emmanuel Dreyfus 2015-07-30 14:02:43 +0200 22) #include This commit is also not available in upstream

[Gluster-devel] TEST FAILED ./tests/basic/mount-nfs-auth.t

2015-10-09 Thread Milind Changire
https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/10776/consoleFull says [06:18:00] ./tests/basic/mount-nfs-auth.t .. not ok 62 Got "N" instead of "Y" not ok 64 Got "N" instead of "Y" not ok 65 Got "N" instead of "Y" not ok 67 Got "N" instead of "Y" Failed 4/87 subtests

[Gluster-devel] BUILD FAILED while copying log tarball

2015-10-08 Thread Milind Changire
https://build.gluster.org/job/rackspace-regression-2GB-triggered/14782/consoleFull says Going to copy log tarball for processing on http://elk.cloud.gluster.org/ scp: /srv/jenkins-logs/upload/jenkins-rackspace-regression-2GB-triggered-14782.tgz: No space left on device Build step 'Execute

[Gluster-devel] [FAILED] patch verification builds for release-3.7 branch

2015-08-05 Thread Milind Changire
http://build.gluster.org/job/compare-bug-version-and-git-branch/10799/ http://build.gluster.org/job/rackspace-regression-2GB-triggered/13106/consoleFull Please advise. -- Milind

[Gluster-devel] [FAILED] regression tests: tests/bugs/distribute/bug-1066798.t, tests/basic/volume-snapshot.t

2015-07-20 Thread Milind Changire
http://build.gluster.org/job/rackspace-regression-2GB-triggered/12541/consoleFull http://build.gluster.org/job/rackspace-regression-2GB-triggered/12499/consoleFull Please advise. -- Milind

[Gluster-devel] [FAILED] /opt/qa/tools/posix-compliance/tests/chmod/00.t

2015-06-09 Thread Milind Changire
Job Console Output: http://build.gluster.org/job/smoke/18470/console My patch is Python code and does not change Gluster internals behavior. This test failure doesn't seem to be directly associated with my patch. Please look into the issue. - Test Summary Report

[Gluster-devel] [FAILED] tests/bugs/glusterd/bug-857330/xml.t

2015-06-02 Thread Milind Changire
Please see http://build.gluster.org/job/rackspace-regression-2GB-triggered/9994/consoleFull for details. Kindly advise regarding resolution. -- Milind