Re: [Gluster-devel] Spurious failure - ./tests/bugs/bug-859581.t

2014-06-17 Thread Pranith Kumar Karampuri


On 06/18/2014 10:11 AM, Atin Mukherjee wrote:


On 06/18/2014 10:04 AM, Pranith Kumar Karampuri wrote:

On 06/18/2014 09:39 AM, Atin Mukherjee wrote:

Pranith,

Regression test mentioned in $SUBJECT failed (test cases 14 & 16)

Console log can be found at
http://build.gluster.org/job/rackspace-regression-2GB/227/consoleFull

My initial suspicion is HEAL_TIMEOUT (set to 60 seconds): healing
might not have completed within this time frame, which is why
EXPECT_WITHIN fails.


I am not sure on what basis this HEAL_TIMEOUT value was derived.
Probably you would be the better person to analyse it. Would a larger timeout
value help here?

I don't think it is a spurious failure. There seems to be a bug in
afr-v2. I will have to fix that.

If it's not a spurious failure, why isn't it failing every time?
It depends on which subvolume AFR picks for the readdir. If it reads the one
with the directory, the test succeeds; otherwise it fails.
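
For context, the failing checks use the test framework's EXPECT_WITHIN/HEAL_TIMEOUT
pattern. A minimal sketch of that pattern (assuming the EXPECT_WITHIN helper and the
$V0 volume variable from tests/include.rc; pending_heal_count is a hypothetical
helper, and this is not the literal content of bug-859581.t):

HEAL_TIMEOUT=60   # seconds the framework keeps retrying before failing the check

function pending_heal_count {
        # hypothetical helper: count path lines in heal-info output,
        # a rough proxy for entries still waiting to be self-healed
        gluster volume heal "$1" info | grep -c '^/'
}

# re-run the check until it prints "0" or HEAL_TIMEOUT expires
EXPECT_WITHIN $HEAL_TIMEOUT "0" pending_heal_count $V0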


Pranith

Pranith

Cheers,
Atin






Re: [Gluster-devel] Spurious failure - ./tests/bugs/bug-859581.t

2014-06-17 Thread Atin Mukherjee


On 06/18/2014 10:04 AM, Pranith Kumar Karampuri wrote:
> 
> On 06/18/2014 09:39 AM, Atin Mukherjee wrote:
>> Pranith,
>>
>> Regression test mentioned in $SUBJECT failed (test cases 14 & 16)
>>
>> Console log can be found at
>> http://build.gluster.org/job/rackspace-regression-2GB/227/consoleFull
>>
>> My initial suspicion is HEAL_TIMEOUT (set to 60 seconds): healing
>> might not have completed within this time frame, which is why
>> EXPECT_WITHIN fails.
>>
>>
>> I am not sure on what basis this HEAL_TIMEOUT value was derived.
>> Probably you would be the better person to analyse it. Would a larger timeout
>> value help here?
> I don't think it is a spurious failure. There seems to be a bug in
> afr-v2. I will have to fix that.
If it's not a spurious failure, why isn't it failing every time?
> 
> Pranith
>>
>> Cheers,
>> Atin
>>
>>
> 


Re: [Gluster-devel] Spurious failure - ./tests/bugs/bug-859581.t

2014-06-17 Thread Pranith Kumar Karampuri


On 06/18/2014 09:39 AM, Atin Mukherjee wrote:

Pranith,

Regression test mentioned in $SUBJECT failed (test cases 14 & 16)

Console log can be found at
http://build.gluster.org/job/rackspace-regression-2GB/227/consoleFull

My initial suspicion is HEAL_TIMEOUT (set to 60 seconds): healing
might not have completed within this time frame, which is why
EXPECT_WITHIN fails.


I am not sure on what basis this HEAL_TIMEOUT value was derived.
Probably you would be the better person to analyse it. Would a larger timeout
value help here?
I don't think it is a spurious failure. There seems to be a bug in 
afr-v2. I will have to fix that.


Pranith


Cheers,
Atin






[Gluster-devel] Spurious failure - ./tests/bugs/bug-859581.t

2014-06-17 Thread Atin Mukherjee
Pranith,

Regression test mentioned in $SUBJECT failed (test cases 14 & 16)

Console log can be found at
http://build.gluster.org/job/rackspace-regression-2GB/227/consoleFull

My initial suspicion is HEAL_TIMEOUT (set to 60 seconds): healing
might not have completed within this time frame, which is why
EXPECT_WITHIN fails.


I am not sure on what basis this HEAL_TIMEOUT value was derived.
Probably you would be the better person to analyse it. Would a larger timeout
value help here?

Cheers,
Atin




Re: [Gluster-devel] 3.5.1 beta 2 Sanity tests

2014-06-17 Thread Justin Clift
On 17/06/2014, at 11:33 PM, Benjamin Turner wrote:
> Here are the tests that failed.  Note that n0 is a generated name, name255 is
> a 255-character string, and path1023 is a 1023-character path.
> 
> /opt/qa/tools/posix-testsuite/tests/link/02.t(Wstat: 0 Tests: 10 Failed: 
> 2)
>   Failed tests:  4, 6
> 
> expect 0 link ${n0} ${name255}   #4
> expect 0 unlink ${n0} #5   <- this passed
> expect 0 unlink ${name255}   #6 
> 
> /opt/qa/tools/posix-testsuite/tests/link/03.t(Wstat: 0 Tests: 16 Failed: 
> 2)
>   Failed tests:  8-9
> 
> expect 0 link ${n0} ${path1023}  #8
> expect 0 unlink ${path1023}   #9
> 
> I gotta go for the day, I'll try to repro outside the script tomorrow.

As a data point, people have occasionally mentioned to me in IRC
and via email that these "posix" tests fail for them... even when
run against a (non-glustered) ext4/xfs filesystem.  So, it _could_
be just some weird spurious thing.  If you figure out what it is, though,
that'd be cool. :)

+ Justin

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift



Re: [Gluster-devel] Bootstrapping glusterfsiostat

2014-06-17 Thread Vipul Nayyar
Hello,

Using my first patch to Gluster as a stepping stone, I've written a small
utility, glusterfsiostat, in Python, which you can find attached to this email.
The modifications made by my patch to io-stats (currently under review) dump
private information from the xlator object into the private file in the meta
directory. This includes total bytes read/written, along with read/write speed
over the previous 10 seconds. The speed at each 1-second interval is keyed by
its respective Unix timestamp and reported in bytes/second. These values at
discrete points in time can be used to generate a graph.

The Python tool first identifies all Gluster mounts on the system, then parses
the meta xlator output under each mount path to generate output similar to the
iostat tool. Passing the '-j' option gives you the information in a consumable
JSON format. By default, the tool pretty-prints the basic stats in a
human-readable form. The tool is meant to be a framework on which other
applications can be built. I'm putting this out for community feedback so as to
improve it further.
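
For anyone who wants to poke at the same data by hand, a rough sketch of what the
script walks (the paths mirror the attached script; the exact contents of the
private file depend on the pending io-stats patch, and the mount point is assumed):

MNT=/mnt/gluster                          # assumed FUSE mount point
cd "$MNT/.meta/graphs/active"

for x in *; do
    # one directory per xlator instance in the active graph
    if [ "$(cat "$x/type")" = "debug/io-stats" ]; then
        cat "$x/private"                  # counters dumped by the io-stats patch
    fi
done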

Do you think the stats, and the way they're generated as of now, are something
that might be usable? Are there any other implementation suggestions, or
additional stats that I can/should provide with the utility?

Note: In order to test this, you need to apply my
patch (http://review.gluster.org/#/c/8030/) to your repo first, build, and then
mount a volume. Preferably perform a big read/write operation on a file on
your Gluster mount before executing the Python script. Then run it as 'python
stat.py' or 'python stat.py -j'.

Regards
Vipul Nayyar 


On Sunday, 8 June 2014 8:51 PM, Vipul Nayyar  wrote:
 


Hi,

If I have a say in this decision, then I'd like to go with keeping the
io-stats xlator in its place and adding dumpops, or at least postpone
duplicating the functionality of io-stats in latency.c. I think focusing on my
framework, without the worry of a major upheaval in the xlator world, would
better help me achieve my goals.

Regards
Vipul Nayyar 



On Saturday, 7 June 2014 1:05 AM, Anand Avati  wrote:
 






On Fri, Jun 6, 2014 at 10:13 AM, Vipul Nayyar  wrote:

Hello,
>
>
>I'm Vipul and I'll be working on a tool called glusterfsiostat under GSOC this 
>summer with KP as my mentor. Based on our discussion, the plan for the future 
>is to build an initial working version of the tool in python and improve it 
>later based on feedback. This tool will display i/o information about every 
>glusterfs mount in the system.  The primary source for getting stats would be 
>the .meta folder accessible in every mount. As of now, the meta xlator is a 
>good resource to get the hierarchy and basic information about every 
>translator being used in a mount. But in terms of statistics, it only shows 
>latency of every FOP for each xlator. 
>
>
>Since our aim is to provide information similar to the nfsiostat tool, we're
>hungry for more information, which is available in the private data structures 
>maintained by io-stats. One way that I see to achieve this is to define a 
>dumpops structure in io-stats and a priv function in it, which can be used to 
>dump custom info into the private file in the .meta folder. If you feel that 
>there's a better way for doing this, please do guide me.

You could do that, or enhance the latency-capturing functionality with more
stats, essentially making the io-stats xlator redundant on the client side.
 
On a second note, the tool would provide stats in properly formatted manner by 
default for human consumption and also in json format if it is needed by any 
other application.

That sounds good.

Thanks

# glusterfsiostat - A client-side tool to gather I/O stats from every Gluster mount.
# Author : Vipul Nayyar 

import commands
import re
import os
import json
import sys
from optparse import OptionParser

status, output=commands.getstatusoutput('mount')
mountlines = output.split("\n")

if status != 0:
	print "Unable to gather mount statistics"
	exit(1)

mntarr = []

for i in mountlines:
	matchobj = re.search(r" type fuse.glusterfs \(.*\)$",i)
	if matchobj:
		i = i.replace(matchobj.group(),"")
		i = i.split(" on ")
		mntname = i[0]
		mntpath = i[1]
		temp = {}
		temp["mount_path"] = mntpath
		temp["name"] = mntname
		mntarr.append(temp)

for i in xrange(0,len(mntarr)):
	os.chdir(mntarr[i]["mount_path"])
	os.chdir(".meta")
	os.chdir("graphs/active")

	status, output=commands.getstatusoutput("ls")
	if status != 0:
		print mntarr[i]["mount_path"] + ": components not accessible"
		continue
	
	lsarr = output.split('\n')

	for j in lsarr:
		io_stats_path = ""
		status, output=commands.getstatusoutput("cat "+ j + "/type")
		if output == "debug/io-stats":
			os.chdir(j)
			io_stats_path = os.getcwd()
			break
	if io_stats_path == "": continue

	priv_content = commands.getstatusoutput("cat private")

	priv_content = priv_

Re: [Gluster-devel] 3.5.1 beta 2 Sanity tests

2014-06-17 Thread Benjamin Turner
Here are the tests that failed.  Note that n0 is a generated name, name255
is a 255-character string, and path1023 is a 1023-character path.

/opt/qa/tools/posix-testsuite/tests/link/02.t(Wstat: 0 Tests: 10
Failed: 2)
  Failed tests:  4, 6

expect 0 link ${n0} ${name255}   #4
expect 0 unlink ${n0} #5   <- this passed
expect 0 unlink ${name255}   #6

/opt/qa/tools/posix-testsuite/tests/link/03.t(Wstat: 0 Tests: 16
Failed: 2)
  Failed tests:  8-9

expect 0 link ${n0} ${path1023}  #8
expect 0 unlink ${path1023}   #9

I gotta go for the day, I'll try to repro outside the script tomorrow.

-b


On Tue, Jun 17, 2014 at 11:09 AM, Benjamin Turner 
wrote:

> I ran through fs sanity on the beta 2 bits:
>
>  final pass/fail report =
>Test Date: Mon Jun 16 23:41:51 EDT 2014
>Total : [44]
>Passed: [42]
>Failed: [2]
>Abort : [0]
>Crash : [0]
> -
>[   PASS   ]  FS Sanity Setup
>[   PASS   ]  Running tests.
>[   PASS   ]  FS SANITY TEST - arequal
>[   PASS   ]  FS SANITY LOG SCAN - arequal
>[   PASS   ]  FS SANITY TEST - bonnie
>[   PASS   ]  FS SANITY LOG SCAN - bonnie
>[   PASS   ]  FS SANITY TEST - glusterfs_build
>[   PASS   ]  FS SANITY LOG SCAN - glusterfs_build
>[   PASS   ]  FS SANITY TEST - compile_kernel
>[   PASS   ]  FS SANITY LOG SCAN - compile_kernel
>[   PASS   ]  FS SANITY TEST - dbench
>[   PASS   ]  FS SANITY LOG SCAN - dbench
>[   PASS   ]  FS SANITY TEST - dd
>[   PASS   ]  FS SANITY LOG SCAN - dd
>[   PASS   ]  FS SANITY TEST - ffsb
>[   PASS   ]  FS SANITY LOG SCAN - ffsb
>[   PASS   ]  FS SANITY TEST - fileop
>[   PASS   ]  FS SANITY LOG SCAN - fileop
>[   PASS   ]  FS SANITY TEST - fsx
>[   PASS   ]  FS SANITY LOG SCAN - fsx
>[   PASS   ]  FS SANITY TEST - fs_mark
>[   PASS   ]  FS SANITY LOG SCAN - fs_mark
>[   PASS   ]  FS SANITY TEST - iozone
>[   PASS   ]  FS SANITY LOG SCAN - iozone
>[   PASS   ]  FS SANITY TEST - locks
>[   PASS   ]  FS SANITY LOG SCAN - locks
>[   PASS   ]  FS SANITY TEST - ltp
>[   PASS   ]  FS SANITY LOG SCAN - ltp
>[   PASS   ]  FS SANITY TEST - multiple_files
>[   PASS   ]  FS SANITY LOG SCAN - multiple_files
>[   PASS   ]  FS SANITY LOG SCAN - posix_compliance
>[   PASS   ]  FS SANITY TEST - postmark
>[   PASS   ]  FS SANITY LOG SCAN - postmark
>[   PASS   ]  FS SANITY TEST - read_large
>[   PASS   ]  FS SANITY LOG SCAN - read_large
>[   PASS   ]  FS SANITY TEST - rpc
>[   PASS   ]  FS SANITY LOG SCAN - rpc
>[   PASS   ]  FS SANITY TEST - syscallbench
>[   PASS   ]  FS SANITY LOG SCAN - syscallbench
>[   PASS   ]  FS SANITY TEST - tiobench
>[   PASS   ]  FS SANITY LOG SCAN - tiobench
>[   PASS   ]  FS Sanity Cleanup
>
>[   FAIL   ]  FS SANITY TEST - posix_compliance
>[   FAIL   ]  
> /rhs-tests/beaker/rhs/auto-tests/components/sanity/fs-sanity-tests-v2
>
>
> The posix_compliance failures are:
>
> /opt/qa/tools/posix-testsuite/tests/link/02.t ..
>
> Failed 2/10 subtests
>
> /opt/qa/tools/posix-testsuite/tests/link/03.t ..
> Failed 2/16 subtests
>
> I am looking into the failures now, as well as running on NFS mounts; I will
> open a BZ if they are valid.
>
> -b
>
>
>
>


[Gluster-devel] Beta2 NFS sanity tests.

2014-06-17 Thread Benjamin Turner
I saw 1 failure on NFS mounts; I am investigating:

 final pass/fail report =
   Test Date: Tue Jun 17 16:15:38 EDT 2014
   Total : [43]
   Passed: [41]
   Failed: [2]
   Abort : [0]
   Crash : [0]
-
   [   PASS   ]  FS Sanity Setup
   [   PASS   ]  Running tests.
   [   PASS   ]  FS SANITY TEST - arequal
   [   PASS   ]  FS SANITY LOG SCAN - arequal
   [   PASS   ]  FS SANITY TEST - bonnie
   [   PASS   ]  FS SANITY LOG SCAN - bonnie
   [   PASS   ]  FS SANITY TEST - glusterfs_build
   [   PASS   ]  FS SANITY LOG SCAN - glusterfs_build
   [   PASS   ]  FS SANITY TEST - compile_kernel
   [   PASS   ]  FS SANITY LOG SCAN - compile_kernel
   [   PASS   ]  FS SANITY TEST - dbench
   [   PASS   ]  FS SANITY LOG SCAN - dbench
   [   PASS   ]  FS SANITY TEST - dd
   [   PASS   ]  FS SANITY LOG SCAN - dd
   [   PASS   ]  FS SANITY TEST - ffsb
   [   PASS   ]  FS SANITY LOG SCAN - ffsb
   [   PASS   ]  FS SANITY TEST - fileop
   [   PASS   ]  FS SANITY LOG SCAN - fileop
   [   PASS   ]  FS SANITY TEST - fsx
   [   PASS   ]  FS SANITY LOG SCAN - fsx
   [   PASS   ]  FS SANITY LOG SCAN - fs_mark
   [   PASS   ]  FS SANITY TEST - iozone
   [   PASS   ]  FS SANITY LOG SCAN - iozone
   [   PASS   ]  FS SANITY TEST - locks
   [   PASS   ]  FS SANITY LOG SCAN - locks
   [   PASS   ]  FS SANITY TEST - ltp
   [   PASS   ]  FS SANITY LOG SCAN - ltp
   [   PASS   ]  FS SANITY TEST - multiple_files
   [   PASS   ]  FS SANITY LOG SCAN - multiple_files
   [   PASS   ]  FS SANITY LOG SCAN - posix_compliance
   [   PASS   ]  FS SANITY TEST - postmark
   [   PASS   ]  FS SANITY LOG SCAN - postmark
   [   PASS   ]  FS SANITY TEST - read_large
   [   PASS   ]  FS SANITY LOG SCAN - read_large
   [   PASS   ]  FS SANITY TEST - rpc
   [   PASS   ]  FS SANITY LOG SCAN - rpc
   [   PASS   ]  FS SANITY TEST - syscallbench
   [   PASS   ]  FS SANITY LOG SCAN - syscallbench
   [   PASS   ]  FS SANITY TEST - tiobench
   [   PASS   ]  FS SANITY LOG SCAN - tiobench
   [   PASS   ]  FS Sanity Cleanup

   [   FAIL   ]  FS SANITY TEST - fs_mark
   [   FAIL   ]
/rhs-tests/beaker/rhs/auto-tests/components/sanity/fs-sanity-tests-v2

The failed test was:

#  fs_mark  -d  .  -D  4  -t  4  -S  1
#   Version 3.3, 4 thread(s) starting at Tue Jun 17 13:39:36 2014
#   Sync method: INBAND FSYNC: fsync() per file in write loop.
#   Directories:  Time based hash between directories across 4
subdirectories with 180 seconds per subdirectory.
#   File names: 40 bytes long, (16 initial bytes of time stamp with 24
random bytes at end of name)
#   Files info: size 51200 bytes, written with an IO size of 16384 bytes 
per write
#   App overhead is time in microseconds spent in the test not doing
file writing related system calls.

FSUse%        Count         Size    Files/sec     App Overhead
Error in unlink of ./00/53a07d587SRWZLFBMIUOEVGM4RY9F5P3 : No such file or directory
fopen failed to open: fs_log.txt.19509
fs-mark pass # 1 failed

I will investigate and open a BZ if this is reproducible.

-b


Re: [Gluster-devel] 3.5.1-beta2 Problems with suid and sgid bits on directories

2014-06-17 Thread Anders Blomdell
On 2014-06-17 17:49, Shyamsundar Ranganathan wrote:
> You may be looking at the problem being fixed here, [1].
> 
> On a lookup, an attribute mismatch was not being healed across
> directories, and this patch attempts to address the same. Currently
> the version of the patch does not heal the S_ISUID and S_ISGID bits,
> which is work in progress (but easy enough to incorporate and test
> based on the patch at [1]).
Thanks, will look into it tomorrow.

> On a separate note, add-brick just adds a brick to the cluster, the
> lookup is where the heal (or creation of the directory across all sub
> volumes in DHT xlator) is being done.
Thanks for the clarification (I guess that a rebalance would trigger it as 
well?)

> 
> Shyam
> 
> [1] http://review.gluster.org/#/c/6983/
> 
> - Original Message -
>> From: "Anders Blomdell"  To:
>> "Gluster Devel"  Sent: Tuesday, June 17,
>> 2014 10:53:52 AM Subject: [Gluster-devel] 3.5.1-beta2 Problems with
>> suid and sgid bits ondirectories
>> 
> With a glusterfs-3.5.1-0.3.beta2.fc20.x86_64 with a reverted 
> 3dc56cbd16b1074d7ca1a4fe4c5bf44400eb63ff (due to local lack of IPv4 
> addresses), I get weird behavior if I:
> 
> 1. Create a directory with suid/sgid/sticky bit set (/mnt/gluster/test)
> 2. Make a subdirectory of #1 (/mnt/gluster/test/dir1)
> 3. Do an add-brick
> 
> Before add-brick
> 
>  755 /mnt/gluster
> 7775 /mnt/gluster/test
> 2755 /mnt/gluster/test/dir1
> 
> After add-brick
> 
>  755 /mnt/gluster
> 1775 /mnt/gluster/test
>  755 /mnt/gluster/test/dir1
> 
> On the server it looks like this:
> 
> 7775 /data/disk1/gluster/test
> 2755 /data/disk1/gluster/test/dir1
> 1775 /data/disk2/gluster/test
>  755 /data/disk2/gluster/test/dir1
> 
> Filed as bug:
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=1110262
> 
> If somebody can point me to where the logic of add-brick is placed, I
> can give it a shot (a find/grep on mkdir didn't immediately point me
> to the right place).
> 
> 
/Anders

-- 
Anders Blomdell  Email: anders.blomd...@control.lth.se
Department of Automatic Control
Lund University  Phone:+46 46 222 4625
P.O. Box 118 Fax:  +46 46 138118
SE-221 00 Lund, Sweden



Re: [Gluster-devel] 3.5.1-beta2 Problems with suid and sgid bits on directories

2014-06-17 Thread Shyamsundar Ranganathan
You may be looking at the problem being fixed here, [1].

On a lookup, an attribute mismatch was not being healed across directories, and
this patch attempts to address the same. Currently the version of the patch 
does not heal the S_ISUID and S_ISGID bits, which is work in progress (but easy 
enough to incorporate and test based on the patch at [1]).

On a separate note, add-brick just adds a brick to the cluster, the lookup is 
where the heal (or creation of the directory across all sub volumes in DHT 
xlator) is being done.

Shyam

[1] http://review.gluster.org/#/c/6983/

- Original Message -
> From: "Anders Blomdell" 
> To: "Gluster Devel" 
> Sent: Tuesday, June 17, 2014 10:53:52 AM
> Subject: [Gluster-devel] 3.5.1-beta2 Problems with suid and sgid bits on  
> directories
> 
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
> 
> With a glusterfs-3.5.1-0.3.beta2.fc20.x86_64 with a reverted
> 3dc56cbd16b1074d7ca1a4fe4c5bf44400eb63ff (due to local lack of IPv4
> addresses), I get
> weird behavior if I:
> 
> 1. Create a directory with suid/sgid/sticky bit set (/mnt/gluster/test)
> 2. Make a subdirectory of #1 (/mnt/gluster/test/dir1)
> 3. Do an add-brick
> 
> Before add-brick
> 
>755 /mnt/gluster
>   7775 /mnt/gluster/test
>   2755 /mnt/gluster/test/dir1
> 
> After add-brick
> 
>755 /mnt/gluster
>   1775 /mnt/gluster/test
>755 /mnt/gluster/test/dir1
> 
> On the server it looks like this:
> 
>   7775 /data/disk1/gluster/test
>   2755 /data/disk1/gluster/test/dir1
>   1775 /data/disk2/gluster/test
>755 /data/disk2/gluster/test/dir1
> 
> Filed as bug:
> 
>   https://bugzilla.redhat.com/show_bug.cgi?id=1110262
> 
> If somebody can point me to where the logic of add-brick is placed, I can
> give
> it a shot (a find/grep on mkdir didn't immediately point me to the right
> place).
> 
> 
> Regards
> 
> Anders Blomdell
> 
> 
> 
> 
> - --
> Anders Blomdell  Email: anders.blomd...@control.lth.se
> Department of Automatic Control
> Lund University  Phone:+46 46 222 4625
> P.O. Box 118 Fax:  +46 46 138118
> SE-221 00 Lund, Sweden
> 
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v1
> Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/
> 
> iQEcBAEBAgAGBQJToFZ/AAoJENZYyvaDG8NcIVgH/0FnyTuB/yutrAdKhOCFTGGY
> fKqWEozJjiUB4TE8hvAnYw7DalT6jlPLUre6vGzUuioS6TQNn8emTFA7GN9Ghklv
> pc2I8NWtwju2iXqLO5ACjBDRuFcYaDLQRVzBFiQpOoOkwrly0uEvcSgUKFxrSuMx
> NrUZKgYTjZb+8kwnSsFv/QNlcPR7zWAiyqbu7rh2a2Q9ArwEsLyTi+se6z/T3PIH
> ASEIR86jWywaP/JDRoSIUX0PIIS8v7mciFtCVGmgIHfugmEwDH2ZxQtbrkxHOC3/
> UjOaGY0TYwPNRnlzk2qkk6Yo3bALGzHa4SUfdRf+gvNa0wZLQWFTdnhWP1dPMc0=
> =tMUX
> -END PGP SIGNATURE-


[Gluster-devel] 3.5.1 beta 2 Sanity tests

2014-06-17 Thread Benjamin Turner
I ran through fs sanity on the beta 2 bits:

 final pass/fail report =
   Test Date: Mon Jun 16 23:41:51 EDT 2014
   Total : [44]
   Passed: [42]
   Failed: [2]
   Abort : [0]
   Crash : [0]
-
   [   PASS   ]  FS Sanity Setup
   [   PASS   ]  Running tests.
   [   PASS   ]  FS SANITY TEST - arequal
   [   PASS   ]  FS SANITY LOG SCAN - arequal
   [   PASS   ]  FS SANITY TEST - bonnie
   [   PASS   ]  FS SANITY LOG SCAN - bonnie
   [   PASS   ]  FS SANITY TEST - glusterfs_build
   [   PASS   ]  FS SANITY LOG SCAN - glusterfs_build
   [   PASS   ]  FS SANITY TEST - compile_kernel
   [   PASS   ]  FS SANITY LOG SCAN - compile_kernel
   [   PASS   ]  FS SANITY TEST - dbench
   [   PASS   ]  FS SANITY LOG SCAN - dbench
   [   PASS   ]  FS SANITY TEST - dd
   [   PASS   ]  FS SANITY LOG SCAN - dd
   [   PASS   ]  FS SANITY TEST - ffsb
   [   PASS   ]  FS SANITY LOG SCAN - ffsb
   [   PASS   ]  FS SANITY TEST - fileop
   [   PASS   ]  FS SANITY LOG SCAN - fileop
   [   PASS   ]  FS SANITY TEST - fsx
   [   PASS   ]  FS SANITY LOG SCAN - fsx
   [   PASS   ]  FS SANITY TEST - fs_mark
   [   PASS   ]  FS SANITY LOG SCAN - fs_mark
   [   PASS   ]  FS SANITY TEST - iozone
   [   PASS   ]  FS SANITY LOG SCAN - iozone
   [   PASS   ]  FS SANITY TEST - locks
   [   PASS   ]  FS SANITY LOG SCAN - locks
   [   PASS   ]  FS SANITY TEST - ltp
   [   PASS   ]  FS SANITY LOG SCAN - ltp
   [   PASS   ]  FS SANITY TEST - multiple_files
   [   PASS   ]  FS SANITY LOG SCAN - multiple_files
   [   PASS   ]  FS SANITY LOG SCAN - posix_compliance
   [   PASS   ]  FS SANITY TEST - postmark
   [   PASS   ]  FS SANITY LOG SCAN - postmark
   [   PASS   ]  FS SANITY TEST - read_large
   [   PASS   ]  FS SANITY LOG SCAN - read_large
   [   PASS   ]  FS SANITY TEST - rpc
   [   PASS   ]  FS SANITY LOG SCAN - rpc
   [   PASS   ]  FS SANITY TEST - syscallbench
   [   PASS   ]  FS SANITY LOG SCAN - syscallbench
   [   PASS   ]  FS SANITY TEST - tiobench
   [   PASS   ]  FS SANITY LOG SCAN - tiobench
   [   PASS   ]  FS Sanity Cleanup

   [   FAIL   ]  FS SANITY TEST - posix_compliance
   [   FAIL   ]
/rhs-tests/beaker/rhs/auto-tests/components/sanity/fs-sanity-tests-v2


The posix_compliance failures are:

/opt/qa/tools/posix-testsuite/tests/link/02.t ..

Failed 2/10 subtests

/opt/qa/tools/posix-testsuite/tests/link/03.t ..
Failed 2/16 subtests

I am looking into the failures now, as well as running on NFS mounts; I
will open a BZ if they are valid.

-b


[Gluster-devel] 3.5.1-beta2 Problems with suid and sgid bits on directories

2014-06-17 Thread Anders Blomdell
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

With a glusterfs-3.5.1-0.3.beta2.fc20.x86_64 with a reverted 
3dc56cbd16b1074d7ca1a4fe4c5bf44400eb63ff (due to local lack of IPv4 addresses), 
I get
weird behavior if I:

1. Create a directory with suid/sgid/sticky bit set (/mnt/gluster/test)
2. Make a subdirectory of #1 (/mnt/gluster/test/dir1)
3. Do an add-brick

Before add-brick

   755 /mnt/gluster
  7775 /mnt/gluster/test
  2755 /mnt/gluster/test/dir1

After add-brick

   755 /mnt/gluster
  1775 /mnt/gluster/test
   755 /mnt/gluster/test/dir1

On the server it looks like this:

  7775 /data/disk1/gluster/test
  2755 /data/disk1/gluster/test/dir1
  1775 /data/disk2/gluster/test
   755 /data/disk2/gluster/test/dir1
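
A rough repro sketch of the steps above (the volume name and brick path are
illustrative; as noted in Shyam's reply above, a later lookup is where DHT
creates/heals the directory on the newly added brick):

mkdir /mnt/gluster/test      && chmod 7775 /mnt/gluster/test        # suid+sgid+sticky
mkdir /mnt/gluster/test/dir1 && chmod 2755 /mnt/gluster/test/dir1   # sgid
stat -c '%a %n' /mnt/gluster/test /mnt/gluster/test/dir1

gluster volume add-brick testvol server1:/data/disk2/gluster
ls -lR /mnt/gluster > /dev/null                  # force fresh lookups on the mount
stat -c '%a %n' /mnt/gluster/test /mnt/gluster/test/dir1            # bits change here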

Filed as bug:

  https://bugzilla.redhat.com/show_bug.cgi?id=1110262

If somebody can point me to where the logic of add-brick is placed, I can give
it a shot (a find/grep on mkdir didn't immediately point me to the right place).


Regards

Anders Blomdell




- -- 
Anders Blomdell  Email: anders.blomd...@control.lth.se
Department of Automatic Control
Lund University  Phone:+46 46 222 4625
P.O. Box 118 Fax:  +46 46 138118
SE-221 00 Lund, Sweden

-BEGIN PGP SIGNATURE-
Version: GnuPG v1
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBAgAGBQJToFZ/AAoJENZYyvaDG8NcIVgH/0FnyTuB/yutrAdKhOCFTGGY
fKqWEozJjiUB4TE8hvAnYw7DalT6jlPLUre6vGzUuioS6TQNn8emTFA7GN9Ghklv
pc2I8NWtwju2iXqLO5ACjBDRuFcYaDLQRVzBFiQpOoOkwrly0uEvcSgUKFxrSuMx
NrUZKgYTjZb+8kwnSsFv/QNlcPR7zWAiyqbu7rh2a2Q9ArwEsLyTi+se6z/T3PIH
ASEIR86jWywaP/JDRoSIUX0PIIS8v7mciFtCVGmgIHfugmEwDH2ZxQtbrkxHOC3/
UjOaGY0TYwPNRnlzk2qkk6Yo3bALGzHa4SUfdRf+gvNa0wZLQWFTdnhWP1dPMc0=
=tMUX
-END PGP SIGNATURE-


Re: [Gluster-devel] Better-SSL thought

2014-06-17 Thread James
On Tue, 2014-06-17 at 09:07 -0400, Jeff Darcy wrote:
> 
> - Original Message -
> > On Tue, Jun 17, 2014 at 12:39 AM, Jeff Darcy  wrote:
> > > Unfortunately, *distributing* those keys and
> > > certificates securely is always going to be a bit of a problem.
> > 
> > 
> > Well, as we had discussed, puppet-gluster could be an easy way to
> > solve this... 
> 
> How does puppet-gluster distribute those keys etc. *securely*?  Are
> there techniques we could borrow for those who run GlusterFS without
> puppet?

Good question. There are different options, depending on how much the
puppet module author cares about security (or about his/her module)... There
are a few possibilities:

* Use a similar technique as discussed here:
https://ttboj.wordpress.com/2014/06/06/securely-managing-secrets-for-freeipa-with-puppet/

Basically this amounts to local key generation on a server.

* Generate the private key yourself and store it in puppet. I think this is
sort of a bad practice, but it's extremely common. Since puppet has root
on your boxes anyway, you're already sort of p0wned, but I don't like
to make the situation worse.

* Combination of distributed local key generation, plus secure partner
exchange. Depending on your API, I'd probably go this route if it's
possible. Basically each member would locally generate a key pair and
exchange the public parts. They would then use this cryptography to
exchange the individual private chunks that make up the key. Alternatively,
you could elect one master to generate the key instead of generating it in a
distributed way. (A rough sketch of the local key-generation step follows
below.)
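
A rough sketch of that local key-generation step (the openssl invocations are
standard; the file names and validity period are illustrative, not a GlusterFS
convention):

# each peer generates its key pair locally; only the public half leaves the host
openssl genrsa -out /etc/ssl/glusterfs.key 2048
openssl req -new -x509 -key /etc/ssl/glusterfs.key \
        -subj "/CN=$(hostname -f)" -days 365 \
        -out /etc/ssl/glusterfs.pem
# the resulting certificate (.pem) can then be exchanged with peers or collected
# into a shared CA bundle; the private .key never has to leave the machine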

Which reminds me, what about your interface/API?

Cheers,
James




Re: [Gluster-devel] Better-SSL thought

2014-06-17 Thread Jeff Darcy


- Original Message -
> On Tue, Jun 17, 2014 at 12:39 AM, Jeff Darcy  wrote:
> > Unfortunately, *distributing* those keys and
> > certificates securely is always going to be a bit of a problem.
> 
> 
> Well, as we had discussed, puppet-gluster could be an easy way to
> solve this... 

How does puppet-gluster distribute those keys etc. *securely*?  Are
there techniques we could borrow for those who run GlusterFS without
puppet?


Re: [Gluster-devel] quota tests and usage of sleep

2014-06-17 Thread Pranith Kumar Karampuri


On 06/17/2014 11:14 AM, Varun Shastry wrote:



> - Original Message -
>> hi,
>>   Could you guys remove 'sleep' from the quota tests authored by you guys,
>> if it can be done? They are leading to spurious failures.



I don't get how sleep can cause the failures. But for the script
bug-1087198.t, which I authored, it is part of the testing. I can reduce it
to a smaller value, but we need the test to wait for a small amount of
time.
Sleeps are added to make sure a certain condition is met within that time.
Sometimes the condition takes longer than that to be met, causing a spurious
failure. So we are going through the tests and eliminating 'sleep's that are
not required, or using EXPECT_WITHIN statements to make sure the condition is
met via explicit checks.
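
A minimal sketch of the substitution (TEST, EXPECT_WITHIN and the $M0 mount
variable are assumed from the framework's include.rc; quota_usage is a
hypothetical helper and the values are made up):

# instead of sleeping a fixed time and then checking once ...
sleep 10
TEST [ "$(quota_usage $M0/dir)" = "10.0MB" ]

# ... poll the same condition until it holds or the timeout expires:
EXPECT_WITHIN 30 "10.0MB" quota_usage $M0/dir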


Pranith


- Varun Shastry


>> I will be sending out a patch removing 'sleep' in other tests.
>>
>> Pranith
>>







