The below patch has been submitted upstream. This fixes the testcase
'./tests/basic/quota-nfs.t'
http://review.gluster.org/#/c/12075/
Thanks,
Vijay
On Tuesday 01 September 2015 11:38 AM, Vijaikumar M wrote:
We will look into this issue.
Thanks,
Vijay
On Tuesday 01 September 2015 11:03 AM, Atin
On Friday 21 August 2015 10:21 AM, Avra Sengupta wrote:
+ Adding Vijaikumar
On 08/20/2015 04:19 PM, Niels de Vos wrote:
On Thu, Aug 20, 2015 at 03:05:56AM -0400, Susant Palai wrote:
Hi,
I tried running the NetBSD regression twice on a patch, and twice it failed
at the same point. Here is
On Monday 17 August 2015 12:22 PM, Avra Sengupta wrote:
Hi,
The NetBSD regression tests are continuously failing with errors in
the following tests:
./tests/basic/mount-nfs-auth.t
./tests/basic/quota-anon-fd-nfs.t
quota-anon-fd-nfs.t has known issues with NFS client caching, so it is
marked
On Monday 13 July 2015 11:14 PM, Joseph Fernandes wrote:
Hi All,
These are some of the recent hit spurious failures on 3.7 branch
http://build.gluster.org/job/rackspace-regression-2GB-triggered/12356/consoleFull
./tests/bugs/snapshot/bug-1109889.t
is blocking
http://review.gluster.org/11649
Patch submitted upstream which fixes this issue:
http://review.gluster.org/#/c/11583/
Will submit the fix for 3.7 as well.
Thanks,
Vijay
On Friday 10 July 2015 01:19 PM, Joseph Fernandes wrote:
http://build.gluster.org/job/rackspace-regression-2GB-triggered/12204/consoleFull
NetBSD tests are failing again:
http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/8123/console
Triggered by Gerrit: http://review.gluster.org/11616 in silent mode.
Building remotely on nbslave74.cloud.gluster.org
http://build.gluster.org/computer/nbslave74.cloud.gluster.org
On Wednesday 08 July 2015 03:42 PM, Kaushal M wrote:
I've been hitting spurious failures in Linux regression runs for my change [1].
The following tests failed,
./tests/basic/afr/replace-brick-self-heal.t [2]
./tests/bugs/replicate/bug-1238508-self-heal.t [3]
On Wednesday 08 July 2015 03:53 PM, Vijaikumar M wrote:
On Wednesday 08 July 2015 03:42 PM, Kaushal M wrote:
I've been hitting spurious failures in Linux regression runs for my
change [1].
The following tests failed,
./tests/basic/afr/replace-brick-self-heal.t [2]
./tests/bugs/replicate
Karampuri pkara...@redhat.com
Cc: Vijay Bellur vbel...@redhat.com, Vijaikumar M
vmall...@redhat.com, Gluster Devel
gluster-devel@gluster.org, Raghavendra Gowdappa rgowd...@redhat.com,
Nagaprasad Sathyanarayana
nsath...@redhat.com
Sent: Thursday, July 2, 2015 10:54:44 AM
Subject: Re: Huge memory
We will look into this issue.
Thanks,
Vijay
On Thursday 02 July 2015 11:46 AM, Kotresh Hiremath Ravishankar wrote:
Hi,
I see quota.t regression failure for the following. The changes are related to
example programs in libgfchangelog.
Hi,
The new marker xlator uses the syncop framework to update quota-size in the
background; it uses one synctask per write FOP.
If there are 100 parallel writes to different inodes in the
same directory '/dir', there will be ~100 txns waiting in the queue to
acquire a lock on its parent.
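The queueing described above can be illustrated with a minimal, self-contained sketch in plain shell (not marker code; the lock-file and counter names are invented for the illustration): N parallel writers each take an exclusive per-directory lock before updating a shared size counter, so the updates serialize just like the quota txns contending on '/dir'.

```shell
#!/bin/sh
# Illustration only: serialize 20 parallel "size updates" on one parent
# lock, the way per-directory quota txns queue on their parent's lock.
dir=$(mktemp -d)
echo 0 > "$dir/size"
for i in $(seq 1 20); do
  (
    flock -x 9                        # wait in queue for the parent lock
    cur=$(cat "$dir/size")
    echo $((cur + 1)) > "$dir/size"   # read-modify-write is now safe
  ) 9> "$dir/.parent-lock" &
done
wait
cat "$dir/size"                       # 20: no update was lost
```

Without the flock the read-modify-write races and updates get lost, which is exactly why each txn must queue on the parent.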
On Friday 26 June 2015 12:59 PM, Susant Palai wrote:
Comment inline.
- Original Message -
From: christ1...@sina.com
To: gluster-devel gluster-devel@gluster.org
Sent: Thursday, 25 June, 2015 7:56:45 PM
Subject: [Gluster-devel] Three Issues Confused me recently
Hi, everyone!
Hi
Upstream regression failure with test-case ./tests/basic/tier/tier.t
My patch #11315 failed regression twice with
test-case ./tests/basic/tier/tier.t. Is anyone seeing this issue with other
patches?
http://build.gluster.org/job/rackspace-regression-2GB-triggered/11396/consoleFull
, Vijaikumar M wrote:
On Tuesday 23 June 2015 04:28 PM, Niels de Vos wrote:
On Tue, Jun 23, 2015 at 03:45:43PM +0530, Vijaikumar M wrote:
I have submitted the below patch which fixes this issue. I am handling memory
clean-up with a reference-count mechanism.
http://review.gluster.org/#/c/11361
...@redhat.com
To: Raghavendra Gowdappa rgowd...@redhat.com
Cc: Niels de Vos nde...@redhat.com, Vijaikumar M vmall...@redhat.com,
Raghavendra G
raghaven...@gluster.com, Gluster Devel gluster-devel@gluster.org
Sent: Wednesday, June 24, 2015 10:55:02 AM
Subject: Re: [Gluster-devel] /tests/bugs/quota
On Tuesday 23 June 2015 04:28 PM, Niels de Vos wrote:
On Tue, Jun 23, 2015 at 03:45:43PM +0530, Vijaikumar M wrote:
I have submitted the below patch which fixes this issue. I am handling memory
clean-up with a reference-count mechanism.
http://review.gluster.org/#/c/11361
Is there a reason you can
I have submitted the below patch which fixes this issue. I am handling
memory clean-up with a reference-count mechanism.
http://review.gluster.org/#/c/11361
Thanks,
Vijay
On Tuesday 23 June 2015 12:58 PM, Raghavendra G wrote:
Multiple replies to same query. Pick one ;).
On Tue, Jun 23, 2015 at
Here is the status on quota test-case spurious failure:
There were 3 issues
1) Quota exceeding the limit because of parallel writes - Merged
Upstream, patch submitted to release-3.7 #10910
./tests/bugs/quota/bug-1038598.t
./tests/bugs/distribute/bug-1161156.t
2) Quoting
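Issue 1 above (the limit being exceeded by parallel writes) is a check-then-write race. A deterministic sketch of it in plain shell (file path and sizes are invented for the illustration): every writer checks usage against the limit before any of them has written, so all of them pass the check.

```shell
#!/bin/sh
# Sketch of the overrun: four writers all check usage before any writes,
# so all four pass and the file ends up at four times the "limit".
limit=1024
f=$(mktemp)
for i in 1 2 3 4; do
  size=$(stat -c %s "$f")             # check phase: each writer sees 0 < 1024
done
for i in 1 2 3 4; do                  # write phase: all four already passed
  dd if=/dev/zero of="$f" bs=1024 count=1 oflag=append conv=notrunc 2>/dev/null
done
stat -c %s "$f"                       # 4096
```

The real fix has to make the check and the write atomic with respect to each other, rather than loosening the test.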
On Tuesday 19 May 2015 09:50 PM, Shyam wrote:
On 05/19/2015 11:23 AM, Vijaikumar M wrote:
On Tuesday 19 May 2015 08:36 PM, Shyam wrote:
On 05/19/2015 08:10 AM, Raghavendra G wrote:
After discussion with Vijaykumar mallikarjuna and other inputs in this
thread, we are proposing all quota
On Thursday 21 May 2015 06:48 PM, Shyam wrote:
On 05/21/2015 04:04 AM, Vijaikumar M wrote:
On Tuesday 19 May 2015 09:50 PM, Shyam wrote:
On 05/19/2015 11:23 AM, Vijaikumar M wrote:
Did that (in the attached script that I sent) and it still failed.
Please note:
- This dd command passes
On Tuesday 19 May 2015 09:50 PM, Shyam wrote:
On 05/19/2015 11:23 AM, Vijaikumar M wrote:
On Tuesday 19 May 2015 08:36 PM, Shyam wrote:
On 05/19/2015 08:10 AM, Raghavendra G wrote:
After discussion with Vijaykumar mallikarjuna and other inputs in this
thread, we are proposing all quota
On Tuesday 19 May 2015 06:13 AM, Shyam wrote:
On 05/18/2015 07:05 PM, Shyam wrote:
On 05/18/2015 03:49 PM, Shyam wrote:
On 05/18/2015 10:33 AM, Vijay Bellur wrote:
The etherpad did not call out ./tests/bugs/distribute/bug-1161156.t,
which did not have an owner, and so I took a stab at it
On Tuesday 19 May 2015 08:36 PM, Shyam wrote:
On 05/19/2015 08:10 AM, Raghavendra G wrote:
After discussion with Vijaykumar mallikarjuna and other inputs in this
thread, we are proposing all quota tests to comply to following
criteria:
* use dd always with oflag=append (to make sure there
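As a minimal illustration of the oflag=append criterion (the scratch file path is just an example), repeated dd runs then strictly grow the file instead of overwriting it, so the accounted usage only increases:

```shell
#!/bin/sh
f=$(mktemp)
# Without oflag=append a re-run would overwrite from offset 0; with it,
# every run appends, so two 1024-byte runs leave a 2048-byte file.
dd if=/dev/zero of="$f" bs=512 count=2 oflag=append conv=notrunc 2>/dev/null
dd if=/dev/zero of="$f" bs=512 count=2 oflag=append conv=notrunc 2>/dev/null
stat -c %s "$f"   # 2048
```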
Hi Emmanuel,
I have submitted another patch: http://review.gluster.org/#/c/9478/ for
addressing the spurious failure with quota-nfs.t
Thanks,
Vijay
On Wednesday 18 March 2015 07:40 PM, Emmanuel Dreyfus wrote:
On Wed, Mar 18, 2015 at 10:28:37AM +, Emmanuel Dreyfus wrote:
Indeed, the
Hi Justin,
I have submitted patch 'http://review.gluster.org/#/c/9703/', used a
different approach to generate a random string.
Thanks,
Vijay
On Thursday 19 February 2015 05:21 PM, Vijaikumar M wrote:
On Wednesday 18 February 2015 10:42 PM, Justin Clift wrote:
Hi Vijaikumar,
As part
Hi All,
This is regarding quota accounting for hard links. Currently, accounting
is done only once for links created within the same directory, and
accounting is done separately when links are created in a different directory.
With this approach, accounting may go wrong when a rename is performed on
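A small shell sketch of why this is tricky (directory and file names are invented): two directory entries share one inode, so the data must be charged only once, and a rename of one link moves a name between directories while the shared data stays put.

```shell
#!/bin/sh
d=$(mktemp -d)
mkdir "$d/a" "$d/b"
dd if=/dev/zero of="$d/a/file" bs=1024 count=4 2>/dev/null
ln "$d/a/file" "$d/b/link"      # second hard link, in a different directory
stat -c %h "$d/a/file"          # 2: both names point at the same inode
mv "$d/b/link" "$d/b/renamed"   # rename moves a name; the shared data stays
stat -c %h "$d/a/file"          # still 2
```

Per-directory accounting therefore has to decide which directory the inode's size is charged to, and renames of any one link can move that charge.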
Hi Kiran,
Testcase './tests/basic/quota-anon-fd-nfs.t' has been removed from the test
suite. There are some issues with this testcase; we are working on them. We
will add this test-case back once the issues are fixed.
Thanks,
Vijay
On Tuesday 27 January 2015 06:11 PM, Vijaikumar M wrote:
Hi Kiran
Hi Kiran,
quota.t failure issue has been fixed with patch
http://review.gluster.org/#/c/9410/. Can you please re-try the test with
this patch and see if it works?
Thanks,
Vijay
On Wednesday 19 November 2014 10:32 AM, Pranith Kumar Karampuri wrote:
On 11/19/2014 10:30 AM, Atin Mukherjee
Hi Raghuram,
Thanks for reporting the problem.
We will submit the fix upstream soon.
Thanks,
Vijay
On Wednesday 14 January 2015 01:50 PM, Raghuram BK wrote:
When I issue quota list command with the xml option, it seems to
return non-xml data :
[root@fractalio-66f2 fractalio]# gluster
I see the below error in the log file. I think somehow the old mount was not
cleaned up properly.
File: cli.log
[2014-12-30 11:23:19.553912] W
[cli-cmd-volume.c:886:gf_cli_create_auxiliary_mount] 0-cli: failed to
mount glusterfs client. Please check the log file
On Wednesday 24 December 2014 02:30 PM, Raghavendra Bhat wrote:
Hi,
I have a doubt. In user-serviceable snapshots, as of now the statfs call is
not implemented. There are 2 ways statfs can be handled.
1) Whenever the snapview-client xlator gets a statfs call on a path that
belongs to a snapshot
On Thursday 04 December 2014 08:32 PM, Niels de Vos wrote:
On Fri, Nov 28, 2014 at 01:08:29PM +0530, Vijay Bellur wrote:
Hi All,
To supplement our ongoing effort of better patch management, I am proposing
the addition of more sub-maintainers for various components. The rationale
behind this
On Monday 01 December 2014 05:36 PM, Raghavendra Bhat wrote:
On Monday 01 December 2014 04:51 PM, Raghavendra G wrote:
On Fri, Nov 28, 2014 at 6:48 PM, RAGHAVENDRA TALUR
raghavendra.ta...@gmail.com mailto:raghavendra.ta...@gmail.com wrote:
On Thu, Nov 27, 2014 at 2:59 PM, Raghavendra
Testcase /var/tmp/quota-xattr.txt
for file in `find $B0 -type d`; do echo $file; getfattr -d -m . -e hex
$file; echo; done > /var/tmp/quota-xattr.txt
echo /var/tmp/quota-xattr.txt
Thanks,
Vijay
On Wednesday 19 November 2014 01:17 PM, Vijaikumar M wrote:
Hi Kiran,
Can we get the brick
Hi Kiran,
Can we get the brick, client and quotad logs?
Thanks,
Vijay
On Tuesday 18 November 2014 10:35 PM, Pranith Kumar Karampuri wrote:
On 11/12/2014 04:52 PM, Kiran Patil wrote:
I have created zpools named 'd' and 'mnt', and they appear in the filesystem
as follows.
d on /d type zfs
Hi Jeff,
I missed adding this:
SSL_pending was 0 before calling SSL_read, and hence SSL_get_error returned
'SSL_ERROR_WANT_READ'
Thanks,
Vijay
On Tuesday 24 June 2014 05:15 PM, Vijaikumar M wrote:
Hi Jeff,
This is regarding the patch http://review.gluster.org/#/c/3842/
(epoll: edge triggered
return r;
pfd.fd = priv->sock;
if (poll (&pfd, 1, -1) < 0) {
Thanks,
Vijay
On Tuesday 24 June 2014 03:55 PM, Vijaikumar M wrote:
From the stack trace we found that function 'socket_submit_request' is
waiting
, Vijaikumar M wrote:
From the log:
http://build.gluster.org:443/logs/glusterfs-logs-20140520%3a17%3a10%3a51.tgz
it looks like glusterd was hung:
Glusterd log:
[2014-05-20 20:08:55.040665] E
[glusterd-snapshot.c:3805:glusterd_add_brick_to_snap_volume]
0-management: Unable to fetch snap
Hi Joseph,
In the log mentioned below, it says ping-time is set to the default value of
30 sec. I think the issue is different.
Can you please point me to the logs where you were able to re-create
the problem?
Thanks,
Vijay
On Monday 19 May 2014 09:39 AM, Pranith Kumar Karampuri wrote:
hi Vijai,