Re: [Gluster-devel] New NetBSD spurious failure in mount-auth-nfs.t

2015-04-09 Thread Emmanuel Dreyfus
Emmanuel Dreyfus  wrote:

> Perhaps you need an EXPECT_WITHIN $AUTH_REFRESH_INTERVAL "Y"
> check_mount_success $V0 at test 65? That seems to fix it.

Indeed that fixes the problem. Please review:
http://review.gluster.org/10182

-- 
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] New NetBSD spurious failure in mount-auth-nfs.t

2015-04-09 Thread Emmanuel Dreyfus
Hi

It seems this change introduced a spurious failure in the NetBSD regression
for /tests/basic/mount-nfs-auth.t:
  http://review.gluster.org/10047

It now frequently fails here:

TEST 65 (line 265): Y check_mount_success patchy
mount_nfs: can't access /patchy: Permission denied

not ok 65 Got "N" instead of "Y"
RESULT 65: 1
=
TEST 66 (line 266): Y small_write
[18:16:27] ./tests/basic/mount-nfs-auth.t .. 66/90 RESULT 66: 0
=
TEST 67 (line 267): Y umount_nfs /mnt/nfs/0
umount: /mnt/nfs/0: not currently mounted
umount: /mnt/nfs/0: not currently mounted
umount: /mnt/nfs/0: not currently mounted
umount: /mnt/nfs/0: not currently mounted
[18:16:27] ./tests/basic/mount-nfs-auth.t .. 67/90 not ok 67 Got "N"
instead of "Y"
RESULT 67: 1
=
TEST 68 (line 271): Y check_mount_success patchy/
RESULT 68: 0

Perhaps you need an EXPECT_WITHIN $AUTH_REFRESH_INTERVAL "Y"
check_mount_success $V0 at test 65? That seems to fix it.
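
For reference, a minimal sketch of the suggested change (assuming the current
check is a plain EXPECT, as the 'Got "N" instead of "Y"' output suggests;
EXPECT_WITHIN, check_mount_success and $AUTH_REFRESH_INTERVAL are the
framework helper, test function and variable referenced above):

  # Before: one-shot check that can race with the NFS auth cache refresh
  #EXPECT "Y" check_mount_success $V0

  # After: retry for up to $AUTH_REFRESH_INTERVAL seconds until the
  # refreshed auth rules allow the mount to succeed
  EXPECT_WITHIN $AUTH_REFRESH_INTERVAL "Y" check_mount_success $V0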



-- 
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] New Defects reported by Coverity Scan for gluster/glusterfs

2015-04-09 Thread scan-admin

Hi,

Please find the latest report on new defect(s) introduced to gluster/glusterfs 
found with Coverity Scan.

5 new defect(s) introduced to gluster/glusterfs found with Coverity Scan.
33 defect(s), reported by Coverity Scan earlier, were marked fixed in the 
recent build analyzed by Coverity Scan.

New defect(s) Reported-by: Coverity Scan
Showing 5 of 5 defect(s)


** CID 1293504:  (CHECKED_RETURN)
/xlators/mgmt/glusterd/src/glusterd-volgen.c: 3333 in volume_volgen_graph_build_clusters_tier()
/xlators/mgmt/glusterd/src/glusterd-volgen.c: 3334 in volume_volgen_graph_build_clusters_tier()



*** CID 1293504:  (CHECKED_RETURN)
/xlators/mgmt/glusterd/src/glusterd-volgen.c: 3333 in volume_volgen_graph_build_clusters_tier()
3327 hxl = first_of(graph);
3328 
3329 volinfo->type   = GF_CLUSTER_TYPE_TIER;
3330 xl = volgen_graph_add_nolink (graph, "cluster/tier", "%s",
3331   "tier-dht", 0);
3332 gf_asprintf(&rule, "%s-hot-dht", st_volname);
>>> CID 1293504:  (CHECKED_RETURN)
>>> Calling "xlator_set_option" without checking return value (as is done
>>> elsewhere 75 out of 81 times).
3333 xlator_set_option(xl, "rule", rule);
3334 xlator_set_option(xl, "xattr-name", "trusted.tier-gfid");
3335 
3336 ret = volgen_xlator_link (xl, cxl);
3337 ret = volgen_xlator_link (xl, hxl);
3338 
/xlators/mgmt/glusterd/src/glusterd-volgen.c: 3334 in volume_volgen_graph_build_clusters_tier()
3328 
3329 volinfo->type   = GF_CLUSTER_TYPE_TIER;
3330 xl = volgen_graph_add_nolink (graph, "cluster/tier", "%s",
3331   "tier-dht", 0);
3332 gf_asprintf(&rule, "%s-hot-dht", st_volname);
3333 xlator_set_option(xl, "rule", rule);
>>> CID 1293504:  (CHECKED_RETURN)
>>> Calling "xlator_set_option" without checking return value (as is done
>>> elsewhere 75 out of 81 times).
3334 xlator_set_option(xl, "xattr-name", "trusted.tier-gfid");
3335 
3336 ret = volgen_xlator_link (xl, cxl);
3337 ret = volgen_xlator_link (xl, hxl);
3338 
3339 st_type = GF_CLUSTER_TYPE_TIER;
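
A hedged sketch of one way to resolve this CHECKED_RETURN defect: propagate
failures from xlator_set_option through "ret", as the surrounding code does
for other calls (reusing the "out" error path visible at lines 3324-3325 is
an assumption):

        ret = xlator_set_option (xl, "rule", rule);
        if (ret)
                goto out;

        ret = xlator_set_option (xl, "xattr-name", "trusted.tier-gfid");
        if (ret)
                goto out;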

** CID 1293503:  Null pointer dereferences  (FORWARD_NULL)
/xlators/storage/posix/src/posix.c: 4137 in posix_fgetxattr()



*** CID 1293503:  Null pointer dereferences  (FORWARD_NULL)
/xlators/storage/posix/src/posix.c: 4137 in posix_fgetxattr()
4131 gf_log (this->name, GF_LOG_WARNING,
4132 "Failed to set dictionary value for 
%s",
4133 name);
4134 goto done;
4135 }
4136 
>>> CID 1293503:  Null pointer dereferences  (FORWARD_NULL)
>>> Passing null pointer "name" to "strncmp", which dereferences it.
4137 if (strncmp (name, GLUSTERFS_GET_OBJECT_SIGNATURE,
4138   strlen (GLUSTERFS_GET_OBJECT_SIGNATURE)) == 0) {
4139 op_ret = posix_fdget_objectsignature (_fd, dict);
4140 if (op_ret < 0) {
4141 op_errno = -op_ret;
4142 op_ret = -1;
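
A sketch of a guard for this FORWARD_NULL defect, assuming a NULL "name"
simply means the object-signature branch should be skipped (how a NULL name
should really be handled on this path is an assumption):

        /* Only compare when an xattr name was actually requested. */
        if (name && strncmp (name, GLUSTERFS_GET_OBJECT_SIGNATURE,
                     strlen (GLUSTERFS_GET_OBJECT_SIGNATURE)) == 0) {
                op_ret = posix_fdget_objectsignature (_fd, dict);
                if (op_ret < 0) {
                        op_errno = -op_ret;
                        op_ret = -1;
                }
        }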

** CID 1293502:  Null pointer dereferences  (NULL_RETURNS)
/xlators/mgmt/glusterd/src/glusterd-volgen.c: 3330 in volume_volgen_graph_build_clusters_tier()



*** CID 1293502:  Null pointer dereferences  (NULL_RETURNS)
/xlators/mgmt/glusterd/src/glusterd-volgen.c: 3330 in volume_volgen_graph_build_clusters_tier()
3324 if (ret == -1)
3325 goto out;
3326 
3327 hxl = first_of(graph);
3328 
3329 volinfo->type   = GF_CLUSTER_TYPE_TIER;
>>> CID 1293502:  Null pointer dereferences  (NULL_RETURNS)
>>> Assigning: "xl" = null return value from "volgen_graph_add_nolink".
3330 xl = volgen_graph_add_nolink (graph, "cluster/tier", "%s",
3331   "tier-dht", 0);
3332 gf_asprintf(&rule, "%s-hot-dht", st_volname);
3333 xlator_set_option(xl, "rule", rule);
3334 xlator_set_option(xl, "xattr-name", "trusted.tier-gfid");
3335 
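
A sketch for this NULL_RETURNS defect: check the return of
volgen_graph_add_nolink before passing "xl" on, following the error pattern
at lines 3324-3325 of the listing (treating the failure as fatal is an
assumption):

        xl = volgen_graph_add_nolink (graph, "cluster/tier", "%s",
                                      "tier-dht", 0);
        if (!xl) {
                ret = -1;
                goto out;
        }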

** CID 1293501:  Null pointer dereferences  (REVERSE_INULL)
/xlators/storage/posix/src/posix.c: 4148 in posix_fgetxattr()



*** CID 1293501:  Null pointer dereferences  (REVERSE_INULL)
/xlators/storage/posix/src/posix.c: 4148 in posix_fgetxattr()
4142

Re: [Gluster-devel] Rebalance improvement design

2015-04-09 Thread Susant Palai
Thanks Ben. An RPM is not available, and I am planning to refresh the patch in
two days with some more regression fixes. I think we can run the tests after
that. Any larger data set will be good (say 3 to 5 TB).

Thanks,
Susant

- Original Message -
From: "Benjamin Turner" 
To: "Vijay Bellur" 
Cc: "Susant Palai" , "Gluster Devel" 

Sent: Thursday, 9 April, 2015 2:10:30 AM
Subject: Re: [Gluster-devel] Rebalance improvement design


I have some rebalance perf regression stuff I have been working on. Is there an 
RPM with these patches anywhere so that I can try it on my systems? If not, I'll 
just build from: 


git fetch git://review.gluster.org/glusterfs refs/changes/57/9657/8 && git 
cherry-pick FETCH_HEAD 



I will have _at_least_ 10TB of storage; how many TBs of data should I run with? 


-b 


On Tue, Apr 7, 2015 at 9:07 AM, Vijay Bellur <vbel...@redhat.com> wrote: 




On 04/07/2015 03:08 PM, Susant Palai wrote: 


Here is one test performed on a 300GB data set, and around a 100% improvement 
(half the run time) was seen. 

[root@gprfs031 ~]# gluster v i 

Volume Name: rbperf 
Type: Distribute 
Volume ID: 35562662-337e-4923-b862-d0bbb0748003 
Status: Started 
Number of Bricks: 4 
Transport-type: tcp 
Bricks: 
Brick1: gprfs029-10ge:/bricks/gprfs029/brick1 
Brick2: gprfs030-10ge:/bricks/gprfs030/brick1 
Brick3: gprfs031-10ge:/bricks/gprfs031/brick1 
Brick4: gprfs032-10ge:/bricks/gprfs032/brick1 


Added server 32 and started rebalance force. 

Rebalance stat for new changes: 
[root@gprfs031 ~]# gluster v rebalance rbperf status 
Node             Rebalanced-files      size    scanned   failures   skipped      status   run time in secs
-------------    ----------------   -------   --------   --------   -------   ---------   ----------------
localhost                   74639    36.1GB     297319          0         0   completed            1743.00
172.17.40.30                67512    33.5GB     269187          0         0   completed            1395.00
gprfs029-10ge               79095    38.8GB     284105          0         0   completed            1559.00
gprfs032-10ge                   0    0Bytes          0          0         0   completed             402.00
volume rebalance: rbperf: success: 

Rebalance stat for old model: 
[root@gprfs031 ~]# gluster v rebalance rbperf status 
Node             Rebalanced-files      size    scanned   failures   skipped      status   run time in secs
-------------    ----------------   -------   --------   --------   -------   ---------   ----------------
localhost                   86493    42.0GB     634302          0         0   completed            3329.00
gprfs029-10ge               94115    46.2GB     687852          0         0   completed            3328.00
gprfs030-10ge               74314    35.9GB     651943          0         0   completed            3072.00
gprfs032-10ge                   0    0Bytes     594166          0         0   completed            1943.00
volume rebalance: rbperf: success: 
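
Reading the two status tables side by side: localhost dropped from 3329.00 s 
to 1743.00 s (~1.9x) and gprfs029-10ge from 3328.00 s to 1559.00 s (~2.1x), 
i.e. the new code finished in roughly half the wall-clock time, matching the 
~100% improvement quoted above. 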


This is interesting. Thanks for sharing & well done! Maybe we should attempt a 
much larger data set and see how we fare there :). 

Regards, 


Vijay 



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Possibly root cause for the Gluster regression test cores?

2015-04-09 Thread Pranith Kumar Karampuri


On 04/08/2015 07:08 PM, Justin Clift wrote:

On 8 Apr 2015, at 14:13, Pranith Kumar Karampuri  wrote:

On 04/08/2015 06:20 PM, Justin Clift wrote:



Hagarth mentioned in the weekly IRC meeting that you have an
idea what might be causing the regression tests to generate
cores?

Can you outline that quickly, as Jeff has some time and might
be able to help narrow it down further. :)

(and these core files are really annoying :/)

I feel it is a lot like https://bugzilla.redhat.com/show_bug.cgi?id=1184417: 
the clear-locks command is not handled properly after we did the client_t 
refactor. I believe that is the reason for the crashes, but I could be wrong; 
after looking at the code, I feel there is a high probability that this is the 
issue. I didn't find it easy to fix, as we would need to change the lock 
structure list maintenance heavily. The easier thing would be to disable the 
clear-locks tests in the regression runs, since it is not something that is 
used by users IMO, and see if it is indeed the same issue. There are 2 tests 
using this command:
18:34:00 :) ⚡ git grep clear-locks tests
tests/bugs/disperse/bug-1179050.t:TEST $CLI volume clear-locks $V0 / kind all inode
tests/bugs/glusterd/bug-824753-file-locker.c: "gluster volume clear-locks %s /%s kind all posix 0,7-1 |"
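
A minimal sketch of the stop-gap for the .t case (assuming that simply
commenting out the offending step is acceptable to the regression harness):

  # tests/bugs/disperse/bug-1179050.t -- temporarily skip the step
  # suspected of crashing the brick while confirming the root cause:
  #TEST $CLI volume clear-locks $V0 / kind all inode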

If it still fails even after disabling these two tests, then we will need to 
look again. I think Jeff's patch, which finds the test that triggered the 
core, should help here.

Thanks Pranith. :)

Is this other "problem when disconnecting" BZ possibly related, or is that a
different thing?

   https://bugzilla.redhat.com/show_bug.cgi?id=1195415

I feel 1195415 could be a duplicate of 1184417.

Pranith


+ Justin

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel