Re: [Gluster-devel] 3.5.1-beta2 Problems with suid and sgid bits on directories
On 2014-06-24 22:26, Shyamsundar Ranganathan wrote:

- Original Message -
From: Anders Blomdell anders.blomd...@control.lth.se
To: Niels de Vos nde...@redhat.com
Cc: Shyamsundar Ranganathan srang...@redhat.com, Gluster Devel gluster-devel@gluster.org, Susant Palai spa...@redhat.com
Sent: Tuesday, June 24, 2014 4:09:52 AM
Subject: Re: [Gluster-devel] 3.5.1-beta2 Problems with suid and sgid bits on directories

On 2014-06-23 12:03, Niels de Vos wrote:
On Tue, Jun 17, 2014 at 11:49:26AM -0400, Shyamsundar Ranganathan wrote:

You may be looking at the problem being fixed here, [1]. On a lookup, an attribute mismatch was not being healed across directories, and this patch attempts to address that. The current version of the patch does not heal the S_ISUID and S_ISGID bits; that is work in progress (but easy enough to incorporate and test based on the patch at [1]). On a separate note, add-brick just adds a brick to the cluster; the lookup is where the heal (or creation of the directory across all subvolumes in the DHT xlator) is done.

I assume that this is not a regression between 3.5.0 and 3.5.1? If that is the case, we can pull the fix into 3.5.2, because 3.5.1 really should not get delayed much longer.

No, it does not work in 3.5.0 either :-(

I ran these tests using your scripts and observed similar behavior, and need to dig into this a little further to understand how to make this work reliably.
This might be a root cause, and should probably be resolved first:
https://bugzilla.redhat.com/show_bug.cgi?id=1113050

The proposed patch does not work as intended. With the following hierarchy:

755  0:0        /mnt/gluster
2777 0:1000     /mnt/gluster/test
2755 1000:1000  /mnt/gluster/test/dir1
2755 1000:1000  /mnt/gluster/test/dir1/dir2

in the (approx. 25% of) cases where my test script does trigger a self-heal on disk2, 10% end up with (giving an access error on the client):

0    0:0        /data/disk2/gluster/test
755  1000:1000  /data/disk2/gluster/test/dir1
755  1000:1000  /data/disk2/gluster/test/dir1/dir2

or

2777 0:1000     /data/disk2/gluster/test
0    0:0        /data/disk2/gluster/test/dir1
755  1000:1000  /data/disk2/gluster/test/dir1/dir2

or

2777 0:1000     /data/disk2/gluster/test
2755 1000:1000  /data/disk2/gluster/test/dir1
0    0:0        /data/disk2/gluster/test/dir1/dir2

and 73% end up with either partially healed directories (/data/disk2/gluster/test/dir1/dir2 or /data/disk2/gluster/test/dir1 missing) or the sgid bit [randomly] set on some of the directories. Since I don't even understand how to reliably trigger a self-heal of the directories, I'm currently clueless about the reason for this behaviour.

So, I think that the comment from susant in http://review.gluster.org/#/c/6983/3/xlators/cluster/dht/src/dht-common.c:

  susant palai  Jun 13 9:04 AM
  I think we dont have to worry about that. Rebalance does not interfere
  with directory SUID/GID/STICKY bits.

unfortunately is wrong :-(, and I'm in too deep water to understand how to fix this at the moment.

Currently, in the test case, rebalance is not run, so the above comment in relation to rebalance is somewhat different from what is observed. Just a note.

I stand corrected :-) So far only self-heal has interfered.
N.B.: with 00777 flags on the /mnt/gluster/test directory I have not been able to trigger any unreadable directories.

/Anders

Thanks, Niels

Shyam

[1] http://review.gluster.org/#/c/6983/

- Original Message -
From: Anders Blomdell anders.blomd...@control.lth.se
To: Gluster Devel gluster-devel@gluster.org
Sent: Tuesday, June 17, 2014 10:53:52 AM
Subject: [Gluster-devel] 3.5.1-beta2 Problems with suid and sgid bits on directories

With a glusterfs-3.5.1-0.3.beta2.fc20.x86_64 with a reverted 3dc56cbd16b1074d7ca1a4fe4c5bf44400eb63ff (due to local lack of IPv4 addresses), I get weird behavior if I:

1. Create a directory with suid/sgid/sticky bit set (/mnt/gluster/test)
2. Make a subdirectory of #1 (/mnt/gluster/test/dir1)
3. Do an add-brick

Before add-brick:
755  /mnt/gluster
7775 /mnt/gluster/test
2755 /mnt/gluster/test/dir1

After add-brick:
755  /mnt/gluster
1775 /mnt/gluster/test
755  /mnt/gluster/test/dir1

On the server it looks like this:
7775 /data/disk1/gluster/test
2755 /data/disk1/gluster/test/dir1
1775 /data/disk2/gluster/test
755  /data/disk2/gluster/test/dir1

Filed as bug: https://bugzilla.redhat.com/show_bug.cgi?id=1110262

If somebody can point me to where the logic of add-brick is placed, I can give it a shot (a find/grep on mkdir didn't immediately point me to the right place).

/Anders

- --
Anders Blomdell                  Email: anders.blomd...@control.lth.se
Department of Automatic Control
Lund University                  Phone: +46 46 222 4625
P.O.
Re: [Gluster-devel] 3.5.1-beta2 Problems with suid and sgid bits on directories
Hi Anders,

There are multiple problems that I see in the test provided; here is an answer to one of them and the reason why it occurs. It gets into the code and functions a bit, but the bottom line is that on one code path the setattr that DHT does misses setting the SGID bit, causing the problem observed.

- When a directory is healed on a newly added brick, it loses the SGID mode bit.

This happens for two reasons. First, mkdir does not honor the SGID mode bit [1]. So when initially creating the directory when there is a single brick, an strace of the mkdir command shows an fchmod which actually changes the mode of the file to add the SGID bit to it.

In DHT we get into dht_lookup_dir_cbk as part of the lookup when creating the new directory .../dir2, since the graph has changed due to a brick addition (otherwise we would have gone into the revalidate path where the previous fix was made). Here we call the function dht_selfheal_directory, which creates the missing directories with the expected attributes. DHT winds a call to mkdir as part of dht_selfheal_directory (in dht_selfheal_dir_mkdir, where it winds a call to mkdir for all subvolumes that are missing the directory) with the right mode bits (in this case, with the SGID bit). As the POSIX layer on the brick calls mkdir, the SGID bit is not set for the newly created directory, due to [1].

After calling mkdir, DHT then winds a setattr to set the mode bits straight, but it ends up using the mode bits that are returned in the iatt (stat) information by the just-concluded mkdir wind, which has the SGID bit missing, since mkdir returns the stat information from posix_mkdir by doing a stat post mkdir. Hence we never end up setting the SGID bit in the setattr part of DHT.

Rectification of the problem would be in (need to close out some more analysis) dht_selfheal_dir_mkdir_cbk, where we need to pass to the subsequent dht_selfheal_dir_setattr the right mode bits to set on the directories.
I will provide a patch for the above issue, after testing it with the provided script, possibly tomorrow. This would make the directory equal on all the bricks, and further discrepancies from the mount point or on the backend should not be seen.

One of the other problems seems to stem from which stat information we pick in DHT to return for the mount. The above fix would take care of that issue as well, but it is still something that needs some understanding and possible correction.

[1] see NOTES in man 2 mkdir

Shyam

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
Re: [Gluster-devel] 3.5.1-beta2 Problems with suid and sgid bits on directories
On Tue, Jun 17, 2014 at 11:49:26AM -0400, Shyamsundar Ranganathan wrote:

You may be looking at the problem being fixed here, [1]. On a lookup, an attribute mismatch was not being healed across directories, and this patch attempts to address that. The current version of the patch does not heal the S_ISUID and S_ISGID bits; that is work in progress (but easy enough to incorporate and test based on the patch at [1]). On a separate note, add-brick just adds a brick to the cluster; the lookup is where the heal (or creation of the directory across all subvolumes in the DHT xlator) is done.

I assume that this is not a regression between 3.5.0 and 3.5.1? If that is the case, we can pull the fix into 3.5.2, because 3.5.1 really should not get delayed much longer.

Thanks, Niels

Shyam

[1] http://review.gluster.org/#/c/6983/

- Original Message -
From: Anders Blomdell anders.blomd...@control.lth.se
To: Gluster Devel gluster-devel@gluster.org
Sent: Tuesday, June 17, 2014 10:53:52 AM
Subject: [Gluster-devel] 3.5.1-beta2 Problems with suid and sgid bits on directories

With a glusterfs-3.5.1-0.3.beta2.fc20.x86_64 with a reverted 3dc56cbd16b1074d7ca1a4fe4c5bf44400eb63ff (due to local lack of IPv4 addresses), I get weird behavior if I:

1. Create a directory with suid/sgid/sticky bit set (/mnt/gluster/test)
2. Make a subdirectory of #1 (/mnt/gluster/test/dir1)
3. Do an add-brick

Before add-brick:
755  /mnt/gluster
7775 /mnt/gluster/test
2755 /mnt/gluster/test/dir1

After add-brick:
755  /mnt/gluster
1775 /mnt/gluster/test
755  /mnt/gluster/test/dir1

On the server it looks like this:
7775 /data/disk1/gluster/test
2755 /data/disk1/gluster/test/dir1
1775 /data/disk2/gluster/test
755  /data/disk2/gluster/test/dir1

Filed as bug: https://bugzilla.redhat.com/show_bug.cgi?id=1110262

If somebody can point me to where the logic of add-brick is placed, I can give it a shot (a find/grep on mkdir didn't immediately point me to the right place).
Regards,
Anders Blomdell

- --
Anders Blomdell                  Email: anders.blomd...@control.lth.se
Department of Automatic Control
Lund University                  Phone: +46 46 222 4625
P.O. Box 118                     Fax:   +46 46 138118
SE-221 00 Lund, Sweden
Re: [Gluster-devel] 3.5.1-beta2 Problems with suid and sgid bits on directories
On 2014-06-19 13:48, Susant Palai wrote:
Adding Susant

Unfortunately things don't go so well here. With --brick-log-level=DEBUG, I get very weird results (probably because the first brick is slower to respond while it's printing debug info); I suspect I trigger some timing-related bug. I attach my test script and a log of 20 runs (with 02777 flags). The really worrisome thing here is:

backing: 0 0:0 /data/disk2/gluster/test/dir1

which means that the backing store has an unreadable dir, which gets propagated to clients...

/Anders

- Original Message -
From: Anders Blomdell anders.blomd...@control.lth.se
To: Shyamsundar Ranganathan srang...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Wednesday, 18 June, 2014 9:33:04 PM
Subject: Re: [Gluster-devel] 3.5.1-beta2 Problems with suid and sgid bits on directories

On 2014-06-17 18:47, Anders Blomdell wrote:
On 2014-06-17 17:49, Shyamsundar Ranganathan wrote:

You may be looking at the problem being fixed here, [1]. On a lookup, an attribute mismatch was not being healed across directories, and this patch attempts to address that. The current version of the patch does not heal the S_ISUID and S_ISGID bits; that is work in progress (but easy enough to incorporate and test based on the patch at [1]).

Thanks, will look into it tomorrow.

On a separate note, add-brick just adds a brick to the cluster; the lookup is where the heal (or creation of the directory across all subvolumes in the DHT xlator) is done.

Thanks for the clarification (I guess that a rebalance would trigger it as well?)

The attached, slightly modified version of patch [1] seems to work correctly after a rebalance that is allowed to run to completion on its own; if directories are traversed during rebalance, some dirs show spurious 01777 / 0 modes and sometimes end up with the wrong permission. Continuing debugging tomorrow...
Shyam

[1] http://review.gluster.org/#/c/6983/

- Original Message -
From: Anders Blomdell anders.blomd...@control.lth.se
To: Gluster Devel gluster-devel@gluster.org
Sent: Tuesday, June 17, 2014 10:53:52 AM
Subject: [Gluster-devel] 3.5.1-beta2 Problems with suid and sgid bits on directories

With a glusterfs-3.5.1-0.3.beta2.fc20.x86_64 with a reverted 3dc56cbd16b1074d7ca1a4fe4c5bf44400eb63ff (due to local lack of IPv4 addresses), I get weird behavior if I:

1. Create a directory with suid/sgid/sticky bit set (/mnt/gluster/test)
2. Make a subdirectory of #1 (/mnt/gluster/test/dir1)
3. Do an add-brick

Before add-brick:
755  /mnt/gluster
7775 /mnt/gluster/test
2755 /mnt/gluster/test/dir1

After add-brick:
755  /mnt/gluster
1775 /mnt/gluster/test
755  /mnt/gluster/test/dir1

On the server it looks like this:
7775 /data/disk1/gluster/test
2755 /data/disk1/gluster/test/dir1
1775 /data/disk2/gluster/test
755  /data/disk2/gluster/test/dir1

Filed as bug: https://bugzilla.redhat.com/show_bug.cgi?id=1110262

If somebody can point me to where the logic of add-brick is placed, I can give it a shot (a find/grep on mkdir didn't immediately point me to the right place).

/Anders

--
Anders Blomdell                  Email: anders.blomd...@control.lth.se
Department of Automatic Control
Lund University                  Phone: +46 46 222 4625
P.O.
Box 118                          Fax:   +46 46 138118
SE-221 00 Lund, Sweden

Attachment: bug-add-brick.sh (application/shellscript)

volume create: testvol: success: please start the volume to access data
volume start: testvol: success
mounted: 755  0:0      /mnt/gluster
mounted: 2777 0:1600   /mnt/gluster/test
mounted: 2755 247:1600 /mnt/gluster/test/dir1
Before add-brick
755  /mnt/gluster
2777 /mnt/gluster/test
2755 /mnt/gluster/test/dir1
volume add-brick: success
volume set: success
Files /tmp/tmp.3lK6STezID and /tmp/tmp.Z2Pr46kVu1 differ
## Differ tor jun 19 15:30:01 CEST 2014
-mounted: 755  0:0      /mnt/gluster
-mounted: 2777 0:1600   /mnt/gluster/test
-mounted: 2755 247:1600 /mnt/gluster/test/dir1
-mounted: 2755 247:1600 /mnt/gluster/test/dir1/dir2
+755  0:0      /mnt/gluster
+2777 0:1600   /mnt/gluster/test
+2755 247:1600 /mnt/gluster/test/dir1
+2755 247:1600 /mnt/gluster/test/dir1/dir2
## TIMEOUT tor jun 19 15:30:06 CEST 2014
mounted: 755  0:0      /mnt/gluster
mounted: 2777 0:1600   /mnt/gluster/test
mounted: 2755 247:1600 /mnt/gluster/test/dir1
mounted: 2755 247:1600 /mnt/gluster/test/dir1/dir2
backing: 2777 0:1600   /data/disk1/gluster/test
backing: 2755 247:1600 /data/disk1/gluster/test/dir1
backing: 2755 247:1600 /data/disk1/gluster/test/dir1/dir2
volume create: testvol: success: please start the volume to access data
volume start: testvol: success
mounted: 755  0:0      /mnt/gluster
mounted: 2777 0:1600   /mnt/gluster/test
mounted: 2755 247:1600 /mnt/gluster/test/dir1
Before add-brick
755  /mnt/gluster
2777 /mnt/gluster/test
2755 /mnt/gluster/test/dir1
volume add-brick: success
volume set: success
Files /tmp/tmp.5DWFQY6fus and /tmp/tmp.p7BxWShXLg differ
Re: [Gluster-devel] 3.5.1-beta2 Problems with suid and sgid bits on directories
On 06/19/2014 03:39 PM, Anders Blomdell wrote:
On 2014-06-19 13:48, Susant Palai wrote:
Adding Susant

Unfortunately things don't go so well here. With --brick-log-level=DEBUG, I get very weird results (probably because the first brick is slower to respond while it's printing debug info); I suspect I trigger some timing-related bug. I attach my test script and a log of 20 runs (with 02777 flags). The really worrisome thing here is:

backing: 0 0:0 /data/disk2/gluster/test/dir1

which means that the backing store has an unreadable dir, which gets propagated to clients...

I have the embryo of a theory of what happens:

1. Directories are created on the first brick.
2. Fuse starts to read directories from the first brick.
3. getdents64 or fstatat64 to the first brick takes too long, and is redirected to the second brick.
4. Self-heal is initiated on the second brick.

On Monday, I will see if I can come up with some clever firewall tricks to trigger this behaviour in a reliable way.

/Anders

--
Anders Blomdell                  Email: anders.blomd...@control.lth.se
Department of Automatic Control
Lund University                  Phone: +46 46 222 4625
P.O. Box 118
SE-221 00 Lund, Sweden
Re: [Gluster-devel] 3.5.1-beta2 Problems with suid and sgid bits on directories
On 2014-06-17 18:47, Anders Blomdell wrote:
On 2014-06-17 17:49, Shyamsundar Ranganathan wrote:

You may be looking at the problem being fixed here, [1]. On a lookup, an attribute mismatch was not being healed across directories, and this patch attempts to address that. The current version of the patch does not heal the S_ISUID and S_ISGID bits; that is work in progress (but easy enough to incorporate and test based on the patch at [1]).

Thanks, will look into it tomorrow.

On a separate note, add-brick just adds a brick to the cluster; the lookup is where the heal (or creation of the directory across all subvolumes in the DHT xlator) is done.

Thanks for the clarification (I guess that a rebalance would trigger it as well?)

The attached, slightly modified version of patch [1] seems to work correctly after a rebalance that is allowed to run to completion on its own; if directories are traversed during rebalance, some dirs show spurious 01777 / 0 modes and sometimes end up with the wrong permission. Continuing debugging tomorrow...

Shyam

[1] http://review.gluster.org/#/c/6983/

- Original Message -
From: Anders Blomdell anders.blomd...@control.lth.se
To: Gluster Devel gluster-devel@gluster.org
Sent: Tuesday, June 17, 2014 10:53:52 AM
Subject: [Gluster-devel] 3.5.1-beta2 Problems with suid and sgid bits on directories

With a glusterfs-3.5.1-0.3.beta2.fc20.x86_64 with a reverted 3dc56cbd16b1074d7ca1a4fe4c5bf44400eb63ff (due to local lack of IPv4 addresses), I get weird behavior if I:

1. Create a directory with suid/sgid/sticky bit set (/mnt/gluster/test)
2. Make a subdirectory of #1 (/mnt/gluster/test/dir1)
3.
Do an add-brick

Before add-brick:
755  /mnt/gluster
7775 /mnt/gluster/test
2755 /mnt/gluster/test/dir1

After add-brick:
755  /mnt/gluster
1775 /mnt/gluster/test
755  /mnt/gluster/test/dir1

On the server it looks like this:
7775 /data/disk1/gluster/test
2755 /data/disk1/gluster/test/dir1
1775 /data/disk2/gluster/test
755  /data/disk2/gluster/test/dir1

Filed as bug: https://bugzilla.redhat.com/show_bug.cgi?id=1110262

If somebody can point me to where the logic of add-brick is placed, I can give it a shot (a find/grep on mkdir didn't immediately point me to the right place).

/Anders

--
Anders Blomdell                  Email: anders.blomd...@control.lth.se
Department of Automatic Control
Lund University                  Phone: +46 46 222 4625
P.O. Box 118                     Fax:   +46 46 138118
SE-221 00 Lund, Sweden

diff -urb glusterfs-3.5.1beta2/xlators/cluster/dht/src/dht-common.c glusterfs-3.5.1.orig/xlators/cluster/dht/src/dht-common.c
--- glusterfs-3.5.1beta2/xlators/cluster/dht/src/dht-common.c	2014-06-10 18:55:22.000000000 +0200
+++ glusterfs-3.5.1.orig/xlators/cluster/dht/src/dht-common.c	2014-06-17 22:46:28.710636632 +0200
@@ -523,6 +523,28 @@
 }
 
 int
+permission_changed (ia_prot_t *local, ia_prot_t *stbuf)
+{
+        if ((local->owner.read  != stbuf->owner.read)  ||
+            (local->owner.write != stbuf->owner.write) ||
+            (local->owner.exec  != stbuf->owner.exec)  ||
+            (local->group.read  != stbuf->group.read)  ||
+            (local->group.write != stbuf->group.write) ||
+            (local->group.exec  != stbuf->group.exec)  ||
+            (local->other.read  != stbuf->other.read)  ||
+            (local->other.write != stbuf->other.write) ||
+            (local->other.exec  != stbuf->other.exec)  ||
+            (local->suid        != stbuf->suid)        ||
+            (local->sgid        != stbuf->sgid)        ||
+            (local->sticky      != stbuf->sticky)) {
+                return 1;
+        } else {
+                return 0;
+        }
+}
+
+int
 dht_revalidate_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
                     int op_ret, int op_errno, inode_t *inode,
                     struct iatt *stbuf, dict_t *xattr,
@@ -617,12 +639,16 @@
                         stbuf->ia_ctime_nsec)) {
                         local->prebuf.ia_gid = stbuf->ia_gid;
                         local->prebuf.ia_uid = stbuf->ia_uid;
+                        local->prebuf.ia_prot = stbuf->ia_prot;
                 }
         }
         if (local->stbuf.ia_type != IA_INVAL) {
                 if ((local->stbuf.ia_gid != stbuf->ia_gid) ||
-                    (local->stbuf.ia_uid != stbuf->ia_uid)) {
+                    (local->stbuf.ia_uid != stbuf->ia_uid) ||
+                    (permission_changed (&(local->stbuf.ia_prot),
+                                         &(stbuf->ia_prot)))) {
                         local->need_selfheal = 1;
                 }
         }
@@ -669,6 +695,8 @@
                 uuid_copy (local->gfid, local->stbuf.ia_gfid);
                 local->stbuf.ia_gid =
Re: [Gluster-devel] 3.5.1-beta2 Problems with suid and sgid bits on directories
You may be looking at the problem being fixed here, [1]. On a lookup, an attribute mismatch was not being healed across directories, and this patch attempts to address that. The current version of the patch does not heal the S_ISUID and S_ISGID bits; that is work in progress (but easy enough to incorporate and test based on the patch at [1]). On a separate note, add-brick just adds a brick to the cluster; the lookup is where the heal (or creation of the directory across all subvolumes in the DHT xlator) is done.

Shyam

[1] http://review.gluster.org/#/c/6983/

- Original Message -
From: Anders Blomdell anders.blomd...@control.lth.se
To: Gluster Devel gluster-devel@gluster.org
Sent: Tuesday, June 17, 2014 10:53:52 AM
Subject: [Gluster-devel] 3.5.1-beta2 Problems with suid and sgid bits on directories

With a glusterfs-3.5.1-0.3.beta2.fc20.x86_64 with a reverted 3dc56cbd16b1074d7ca1a4fe4c5bf44400eb63ff (due to local lack of IPv4 addresses), I get weird behavior if I:

1. Create a directory with suid/sgid/sticky bit set (/mnt/gluster/test)
2. Make a subdirectory of #1 (/mnt/gluster/test/dir1)
3. Do an add-brick

Before add-brick:
755  /mnt/gluster
7775 /mnt/gluster/test
2755 /mnt/gluster/test/dir1

After add-brick:
755  /mnt/gluster
1775 /mnt/gluster/test
755  /mnt/gluster/test/dir1

On the server it looks like this:
7775 /data/disk1/gluster/test
2755 /data/disk1/gluster/test/dir1
1775 /data/disk2/gluster/test
755  /data/disk2/gluster/test/dir1

Filed as bug: https://bugzilla.redhat.com/show_bug.cgi?id=1110262

If somebody can point me to where the logic of add-brick is placed, I can give it a shot (a find/grep on mkdir didn't immediately point me to the right place).

Regards,
Anders Blomdell

- --
Anders Blomdell                  Email: anders.blomd...@control.lth.se
Department of Automatic Control
Lund University                  Phone: +46 46 222 4625
P.O.
Box 118                          Fax:   +46 46 138118
SE-221 00 Lund, Sweden
Re: [Gluster-devel] 3.5.1-beta2 Problems with suid and sgid bits on directories
On 2014-06-17 17:49, Shyamsundar Ranganathan wrote:

You may be looking at the problem being fixed here, [1]. On a lookup, an attribute mismatch was not being healed across directories, and this patch attempts to address that. The current version of the patch does not heal the S_ISUID and S_ISGID bits; that is work in progress (but easy enough to incorporate and test based on the patch at [1]).

Thanks, will look into it tomorrow.

On a separate note, add-brick just adds a brick to the cluster; the lookup is where the heal (or creation of the directory across all subvolumes in the DHT xlator) is done.

Thanks for the clarification (I guess that a rebalance would trigger it as well?)

Shyam

[1] http://review.gluster.org/#/c/6983/

- Original Message -
From: Anders Blomdell anders.blomd...@control.lth.se
To: Gluster Devel gluster-devel@gluster.org
Sent: Tuesday, June 17, 2014 10:53:52 AM
Subject: [Gluster-devel] 3.5.1-beta2 Problems with suid and sgid bits on directories

With a glusterfs-3.5.1-0.3.beta2.fc20.x86_64 with a reverted 3dc56cbd16b1074d7ca1a4fe4c5bf44400eb63ff (due to local lack of IPv4 addresses), I get weird behavior if I:

1. Create a directory with suid/sgid/sticky bit set (/mnt/gluster/test)
2. Make a subdirectory of #1 (/mnt/gluster/test/dir1)
3. Do an add-brick

Before add-brick:
755  /mnt/gluster
7775 /mnt/gluster/test
2755 /mnt/gluster/test/dir1

After add-brick:
755  /mnt/gluster
1775 /mnt/gluster/test
755  /mnt/gluster/test/dir1

On the server it looks like this:
7775 /data/disk1/gluster/test
2755 /data/disk1/gluster/test/dir1
1775 /data/disk2/gluster/test
755  /data/disk2/gluster/test/dir1

Filed as bug: https://bugzilla.redhat.com/show_bug.cgi?id=1110262

If somebody can point me to where the logic of add-brick is placed, I can give it a shot (a find/grep on mkdir didn't immediately point me to the right place).

/Anders

--
Anders Blomdell                  Email: anders.blomd...@control.lth.se
Department of Automatic Control
Lund University                  Phone: +46 46 222 4625
P.O.
Box 118                          Fax:   +46 46 138118
SE-221 00 Lund, Sweden