Re: [Gluster-devel] GlusterD-2.0 - thread safety

2014-11-05 Thread Harshavardhana
On Wed, Nov 5, 2014 at 6:25 PM, Justin Clift jus...@gluster.org wrote:
 Forwarding this, as it's possibly very relevant for the
 GlusterD-2.0 stuff.  It looks like the threading behaviour of Go
 is very different to other languages (somewhat practical outline
 given below).

 (Start with the "Normally, in any mainstream language ..." bit)

 It's probably worth being aware of sooner rather than later, and
 thinking through potential ramifications before we decide to go
 down that route. ;)

 Regards and best wishes,

 Justin Clift
 


You didn't post the reply to that thread -
http://permalink.gmane.org/gmane.comp.db.sqlite.general/91532

-- 
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Please test GlusterFS 3.6.0 beta3 on OSX!

2014-10-25 Thread Harshavardhana
On Sat, Oct 25, 2014 at 1:16 AM, Dennis Schafroth den...@schafroth.dk wrote:
 Trying to test OS X against a Linux (Debian 7.7) server, but I seem to get the
 same error on both OS X and Linux when attempting to mount:

 Running beta3 on both.

 sudo mount -t glusterfs -o
 'log-file=/usr/local/var/log/glusterfs/dist-rep.log' bd:dist3 ~/dist-rep

 [2014-10-25 08:00:22.267617] I [MSGID: 100030] [glusterfsd.c:2018:main]
 0-/usr/local/sbin/glusterfs: Started running /usr/local/sbin/glusterfs
 version 3.6.0beta3 (args: /usr/local/sbin/glusterfs
 --log-file=/usr/local/var/log/glusterfs/dist-rep.log --volfile-server=bd
 --volfile-id=dist3 /Users/dennis/dist-rep)
 [2014-10-25 08:00:22.268115] I [options.c:1163:xlator_option_init_double]
 0-fuse: option attribute-timeout convertion failed value 1.0
 [2014-10-25 08:00:22.268132] E [xlator.c:425:xlator_init] 0-fuse:
 Initialization of volume 'fuse' failed, review your volfile again


I have seen this on a few customer environments, but have never been
able to reproduce it internally. Cloned an internal Bugzilla to upstream:
https://bugzilla.redhat.com/show_bug.cgi?id=1157107

The workaround is to pass 'entry-timeout=1,attribute-timeout=1' as
additional mount options; the failure is something related to
floating-point conversion.
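A sketch of the resulting mount invocation, using the server, volume, and log path from the report above; the command is only echoed here rather than executed, since it needs a live GlusterFS volume:

```shell
# Workaround sketch: pass integer entry/attribute timeouts so the
# failing floating-point option parsing is never hit. The server
# (bd), volume (dist3), and log path come from the report above;
# adjust them for your own setup.
OPTS='entry-timeout=1,attribute-timeout=1'
OPTS="$OPTS,log-file=/usr/local/var/log/glusterfs/dist-rep.log"
# Echoed rather than executed, since it needs a running gluster server:
echo sudo mount -t glusterfs -o "$OPTS" bd:dist3 "$HOME/dist-rep"
```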



Re: [Gluster-devel] [Gluster-users] Please test GlusterFS 3.6.0 beta3 on OSX!

2014-10-25 Thread Harshavardhana
On Sat, Oct 25, 2014 at 8:47 AM, Dennis Schafroth den...@schafroth.dk wrote:
 Mounting a distributed-replicated volume from a Linux server sometimes shows
 duplicate directory entries on the Mac.


Go ahead and open a bug :-)



Re: [Gluster-devel] [Gluster-users] Please test GlusterFS 3.6.0 beta3 on OSX!

2014-10-25 Thread Harshavardhana
On Sat, Oct 25, 2014 at 11:13 AM, Harshavardhana
har...@harshavardhana.net wrote:
 On Sat, Oct 25, 2014 at 8:47 AM, Dennis Schafroth den...@schafroth.dk wrote:
 Mounting a distributed-replicated volume from a Linux server sometimes shows
 duplicate directory entries on the Mac.


 Go ahead and open a bug :-)


bash-3.2# find etc/ -type f | wc -l
 254
bash-3.2# find /private/etc -type f | wc -l
 254
bash-3.2# ls
a   b   etc
bash-3.2#

I do see some issues when there are symlinks:

bash-3.2# ln -s b c
bash-3.2# ls
a   b   c   etc
bash-3.2# ls -l
ls: c: No such file or directory
total 16
-rw-r--r--@  1 root  wheel 0 Oct 25 12:56 a
-rw-r--r--@  1 root  wheel 0 Oct 25 12:56 b
drwxr-xr-x@ 57 root  wheel  4148 Oct 25 00:36 etc
bash-3.2#

bash-3.2# ls -l
total 24
-rw-r--r--@  1 root  wheel 0 Oct 25 12:56 a
-rw-r--r--@  1 root  wheel 0 Oct 25 12:56 b

ls: ./c: No such file or directory
lrwxrwxrwx@  0 root  wheel 1 Oct 25 13:00 c
drwxr-xr-x@ 57 root  wheel  4148 Oct 25 00:36 etc
bash-3.2#



Re: [Gluster-devel] [Gluster-users] Please test GlusterFS 3.6.0 beta3 on OSX!

2014-10-24 Thread Harshavardhana

 Sure, I wasn't questioning that. I was expecting (guessing) that compiling
 would need system fuse headers, but looking deeper I see that that seems not
 to be the case.


Ah no, not really; it is exactly like how we build the FUSE client for
Linux: there is no external dependency other than the kernel module.

 This is what configure reports, and the brew packages I have installed. How
 does this compare to what you get, is it the same?

 GlusterFS configure summary
 ===
 FUSE client  : yes
 Infiniband verbs : no
 epoll IO multiplex   : no
 argp-standalone  : yes
 fusermount   : no
 readline : no (present but missing undo)
 georeplication   : no
 Linux-AIO: no
 Enable Debug : no
 systemtap: yes
 Block Device xlator  : no
 glupy: no
 Use syslog   : yes
 XML output   : no
 QEMU Block formats   : no
 Encryption xlator: no
 Erasure Code xlator  : yes


Yes, this is what I get.


 [~/src/git/glusterfs-release-3.6] kaleb% brew list
 autoconfgdbmopenssl python
 automakelibtool osxfuse readline
 cmockery2   libxml2 pkg-config  sqlite


$ brew list
autoconfbash-completion cmockery2   gettext libtool
mobile-shellopenssl protobufwgetyasm
automakecmake   emacs   libeventlibxml2
nodepkg-config  tmuxxz   yuicompressor



Re: [Gluster-devel] [Gluster-users] Please test GlusterFS 3.6.0 beta3 on OSX!

2014-10-23 Thread Harshavardhana
 I have done a `brew fetch osxfuse`  and I've got
 /Library/Caches/Homebrew/osxfuse-2.7.2.yosemite.bottle.tar.gz.

 ??? Seems like I could just untar that in /usr/local/Cellar. Seem
 reasonable?

We don't necessarily need the OSXFUSE headers at all; all we need is
/dev/fuse with the kernel module loaded. So if the kext is installed
and loadable it should be fine :-).

You should have done

# brew remove osxfuse

And perhaps then installed through the dmg; note that the dmg doesn't
install into '/usr/local/Cellar', since it isn't part of the brew build.

@Ryan - can you send us the link so we can test it out locally?

Thanks


Re: [Gluster-devel] Another spurious failure

2014-09-23 Thread Harshavardhana
On Tue, Sep 23, 2014 at 11:10 AM, Justin Clift jus...@gluster.org wrote:
 On 23/09/2014, at 1:00 AM, Harshavardhana wrote:
 Test Summary Report
 ---
 ./tests/bugs/bug-1109770.t (Wstat: 0 Tests: 19
 Failed: 1)
  Failed test:  13
 Files=280, Tests=8149, 5589 wallclock secs ( 3.35 usr  1.97 sys +
 487.14 cusr 657.94 csys = 1150.40 CPU)
 snip

 Yeah, seen this several times as well.  We're being impacted by spurious
 failures pretty often again lately. :(

 git blame shows Raghavendra Bhat and Manu as the two authors for that
 test.  Maybe one of them would have time to look into it?

 + Justin


A patch was sent out, and it is apparently fixed in the latest master and release-3.6.



Re: [Gluster-devel] Regarding the coding guidelines script

2014-09-22 Thread Harshavardhana
On Mon, Sep 22, 2014 at 10:57 AM, Kaushal M kshlms...@gmail.com wrote:
 I'm using Perl 5.20.1. I was wondering how no one caught a mistake in
 the checker.

 ~kaushal


http://review.gluster.com/#/c/8811/ - here you go, please verify.



Re: [Gluster-devel] Patch merged in master branch for coding policy and patch submission

2014-09-19 Thread Harshavardhana
On Fri, Sep 19, 2014 at 6:06 PM, Emmanuel Dreyfus m...@netbsd.org wrote:
 Harshavardhana har...@harshavardhana.net wrote:

 This is to bring in adherence to coding policy, prior to patch
 submission for review.

  - no tabs
  - no whitespace
  - indentation (linux style) etc.

 What do we do when the code surrounding the change has tabs? Should we
 adapt to the local style, or change everything?

It ignores changes in certain types of files, but .c and .h files shouldn't have tabs.



Re: [Gluster-devel] Patch merged in master branch for coding policy and patch submission

2014-09-19 Thread Harshavardhana

 What do we do when the code surrounding the change has tabs? Should we
 adapt to the local style, or change everything?

 It ignores changes in certain types of files, but .c and .h shouldn't have 
 tabs


So the checker has two classes of results. The following:

- Whitespace errors
- Erroneous spaces between functions
- Malformed sign-off
- Wrong Jenkins URL

are hard errors, and you wouldn't be allowed to submit. NOTE: it makes
sure to ignore Markdown files, which require trailing whitespace as
their line-break marker.

The majority of the rest, like tabs, are treated as warnings, which
can be safely ignored or fixed; a choice is given (y/n).
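A minimal sketch of the hard-error/warning split described above; the real checker is a Perl script in the tree, so this shell version, the temp file, and its contents are illustrative only:

```shell
# Create a throwaway C-like file containing both a hard error
# (trailing whitespace) and a warning (a tab). File is hypothetical.
f=$(mktemp)
printf 'int main(void) {   \n\treturn 0;\n}\n' > "$f"

status=ok
# Trailing whitespace: treated as a hard error, blocks submission.
if grep -qE '[ ]+$' "$f"; then
    echo "hard error: trailing whitespace"
    status=fail
fi
# Tabs: only a warning; the submitter may choose to fix them.
if grep -q "$(printf '\t')" "$f"; then
    echo "warning: tab indentation"
fi
echo "$status"
rm -f "$f"
```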



Re: [Gluster-devel] Something wrong with release-3.6 branch

2014-09-10 Thread Harshavardhana
   
 /home/jenkins/root/workspace/rackspace-regression-2GB/xlators/cluster/dht/src/dht-common.c:
  In function ‘dht_lookup_everywhere_done’:
   
 /home/jenkins/root/workspace/rackspace-regression-2GB/xlators/cluster/dht/src/dht-common.c:1229:
  warning: implicit declaration of function 
 ‘dht_fill_dict_to_avoid_unlink_of_migrating_file’

 Guessing these are significant. :)


00:08:09: harsha@sysrq:~/repos/glusterfs(master)$ git grep
dht_fill_dict_to_avoid_unlink_of_migrating_file
xlators/cluster/dht/src/dht-common.c:dht_fill_dict_to_avoid_unlink_of_migrating_file
(dict_t *dict) {
xlators/cluster/dht/src/dht-common.c:ret =
dht_fill_dict_to_avoid_unlink_of_migrating_file
xlators/cluster/dht/src/dht-common.c:  ret =
dht_fill_dict_to_avoid_unlink_of_migrating_file
xlators/cluster/dht/src/dht-common.c:ret =
dht_fill_dict_to_avoid_unlink_of_migrating_file
xlators/cluster/dht/src/dht-common.h:dht_fill_dict_to_avoid_unlink_of_migrating_file
(dict_t *dict);
00:08:10: harsha@sysrq:~/repos/glusterfs(master)$


00:08:23: harsha@sysrq:~/repos/glusterfs(release-3.6)$ git grep
dht_fill_dict_to_avoid_unlink_of_migrating_file
xlators/cluster/dht/src/dht-common.c:  ret =
dht_fill_dict_to_avoid_unlink_of_migrating_file
00:08:24: harsha@sysrq:~/repos/glusterfs(release-3.6)$

Interestingly, in release-3.6 it is neither implemented nor declared; it is part of this patch:


commit 13a044ab4d643a39d8138ab33226162ef125dbd3
Author: Venkatesh Somyajulu vsomy...@redhat.com
Date:   Thu Sep 4 14:08:18 2014 -0400

cluster/dht: Added keys in dht_lookup_everywhere_done

Case where both cached  (C1)  and hashed file are found,
but hash does not point to above cached node (C1), then
dont unlink if either fd-is-open on hashed or
linkto-xattr is not found.

Change-Id: I7ef49b88d2c88bf9d25d3aa7893714e6c0766c67
BUG: 1138385
Signed-off-by: Venkatesh Somyajulu vsomy...@redhat.com

Change-Id: I86d0a21d4c0501c45d837101ced4f96d6fedc5b9
Signed-off-by: Venkatesh Somyajulu vsomy...@redhat.com
Reviewed-on-master: http://review.gluster.org/8429
Tested-by: Gluster Build System jenk...@build.gluster.com
Reviewed-by: susant palai spa...@redhat.com
Reviewed-by: Raghavendra G rgowd...@redhat.com
Reviewed-by: Vijay Bellur vbel...@redhat.com
Reviewed-on: http://review.gluster.org/8607
Reviewed-by: Jeff Darcy jda...@redhat.com
~~~

I would blame dumb GCC; clang would never have let this progress even
to a successful build.



Re: [Gluster-devel] basic/afr/gfid-self-heal.t on release-3.6/NetBSD

2014-09-06 Thread Harshavardhana

 Right, but that is a bug fixed in master but still present in
 release-3.6, isn't it? Why not backport that change to release-3.6?

 The patch will not apply cleanly, it requires previous changes, but
 perhaps it is worth working on it?



That is left to the changeset owner; perhaps it will be done in the
upcoming weeks.



Re: [Gluster-devel] NetBSD port passes POSIX test suite

2014-09-05 Thread Harshavardhana
On Fri, Sep 5, 2014 at 9:34 AM, Emmanuel Dreyfus m...@netbsd.org wrote:
 Hi

 netbsd0.cloud.gluster.org now passes the POSIX test suite:
 (...)
 All tests successful.
 Files=191, Tests=2069, 135 wallclock secs ( 1.03 usr  0.45 sys + 26.81 cusr 
 37.51 csys = 65.80 CPU)
 Result: PASS

 I have patches to pullup to netbsd-7 stable branch so that
 it will be available in next release, but everything is
 already installed on netbsd0.cloud.gluster.org.

 On the glusterFS side there are also a few fixes to merge
 before I can run the POSIX test suite in master autobuilds.
 See here:
 http://review.gluster.org/#/q/owner:%22Emmanuel+Dreyfus%22+status:open,n,z

This is great stuff, thank you for all the hard work.

release-3.6 was branched some time ago, so you will have to post all
the patches to the release-3.6 branch as well.



Re: [Gluster-devel] Rackspace regression slaves hung?

2014-08-28 Thread Harshavardhana

 There are a couple of patches [1] submitted by me that are resulting in a hang. I 
 think these slaves were spawned to test the patch [1] and its dependencies. 
 If yes, they can be killed.

 [1] http://review.gluster.com/#/c/8523/


One should be able to manually abort them in Jenkins.



Re: [Gluster-devel] rpc-coverage.t questions

2014-08-28 Thread Harshavardhana
 In test_fstat():
  msg=$(sh -c 'tail -f $PFX/dir/file --pid=$$ & sleep 1 && echo hooha >>
 $PFX/dir/file && sleep 1');

 NetBSD does not have the --pid option. I propose this change, which
 seems to obtain the same result with less complexity. Opinion?

 -msg=$(sh -c 'tail -f $PFX/dir/file --pid=$$ & sleep 1 && echo hooha >>
 $PFX/dir/file && sleep 1');
 +echo hooha >> $PFX/dir/file
 +sleep 1
 +msg=$(sh -c 'tail $PFX/dir/file')



These are the same changes I did for FreeBSD, and they are applicable to OSX too. Thanks, +1 :-)
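For reference, the proposed portable variant can be exercised on its own; PFX here is just a throwaway temp directory, not the test framework's prefix:

```shell
# Portable replacement: append first, then read back with a plain
# tail, avoiding GNU tail's --pid extension entirely.
PFX=$(mktemp -d)
mkdir -p "$PFX/dir"
echo hooha >> "$PFX/dir/file"
sleep 1
msg=$(sh -c "tail \"$PFX/dir/file\"")
echo "$msg"    # prints "hooha"
```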



Re: [Gluster-devel] Bug regarding quota.t

2014-08-26 Thread Harshavardhana
 Can you please file a bug on this? We'll try to address it.


 Sure submitting one right away.


https://bugzilla.redhat.com/show_bug.cgi?id=1133820 - here you go!



Re: [Gluster-devel] Spurious regression of tests/basic/mgmt_v3-locks.t

2014-08-23 Thread Harshavardhana
On Fri, Aug 22, 2014 at 10:23 PM, Atin Mukherjee amukh...@redhat.com wrote:
 IIRC, we were marking the verified as +1 in case of a known spurious
 failure, can't we continue to do the same for the known spurious
 failures just to unblock the patches getting merged till the problems
 are resolved?

While it is understood that such is the case, the premise is rather
wrong: we should re-run a spurious failure and get the +1, since we
know it only fails spuriously :-). If it fails consistently, then
there is something odd with the patch. All it requires is another
trigger in Jenkins.

There is a reason to slow down the merging of patches: in the long run
it increases the quality of the codebase, and indeed it has done that
for GlusterFS. Our master is readily usable for beta testing any day,
while historically we merged patches which generated segfaults upon
mount; we even had patches which failed to compile but were hastily
pushed by developers.

Yes, there is always a balance, so we should be careful about giving
+1 to patches on the premise that they require quick merging.



Re: [Gluster-devel] update-link-count-parent POSIX xlator parameter

2014-08-21 Thread Harshavardhana
On Wed, Aug 20, 2014 at 10:04 PM, Prashanth Pai p...@redhat.com wrote:
 Hi,

 I recently came across this when I was looking for various ways to convert 
 GFID to Path[1].
 I have tested build-pgfid option for basic sanity. It's documented in the 
 commit msg of change[2]. Found one small issue[3]. There could be others.
 Users should be aware that these xattrs are generated only for newly created 
 dirs/files and are not built for already existing dirs/files.

 [1]: 
 https://github.com/prashanthpai/sof-object-listing/blob/master/changelog/gfid-to-path.md
 [2]: http://review.gluster.org/5951
 [3]: http://review.gluster.org/8352


I explored quite a few options here:

- https://gist.github.com/harshavardhana/b5d3c69db5312b84022f - one
approach, worked on by Brad Hubbard, which makes use of the kernel
dentry caches.
- For ext4 you have
  # debugfs 'ncheck inode_num' device
- XFS provides something similar, but it requires an unmount
  # xfs_ncheck -i ino device
- Then I stumbled across this option, since it is used specifically by
quota; I do not know how hard it is to expose it, but Avati indicated
that it would involve additional overhead for every call().
- The best possible solution is to use a database for such a mapping,
which is quite tricky and requires a lot of internal work.

So in the end I haven't come across an easier way out of this :-)
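A userspace stand-in for those ncheck-style inode-to-path lookups is find -inum; the sketch below uses a throwaway temp directory and an invented file name, and on a real brick this is a full tree scan, which is exactly the expensive part being discussed:

```shell
# Emulate 'ncheck' from userspace: map an inode number back to a path.
d=$(mktemp -d)
echo data > "$d/somefile"                        # hypothetical target file
ino=$(ls -i "$d/somefile" | awk '{print $1}')    # its inode number
path=$(find "$d" -inum "$ino")                   # scan the tree for that inode
echo "$path"                                     # prints "$d/somefile"
```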



Re: [Gluster-devel] update-link-count-parent POSIX xlator parameter

2014-08-20 Thread Harshavardhana
On Wed, Aug 20, 2014 at 1:55 PM, Ben England bengl...@redhat.com wrote:
 What is update-link-count-parent POSIX xlator parameter for?  Is it ever set 
 by anyone and why?  It appears to be off by default.  Didn't see this 
 documented, but the log message in posix.c says:  update-link-count-parent 
 is enabled. Thus for each file an extended attribute representing the number 
 of hardlinks for that file within the same parent directory is set.  Why 
 would this be necessary?


It was supposed to be used by quota, or quota uses it internally (I am
not sure), for ancestry paths and gfid-to-path conversion, AFAIK.

# gluster volume set volname build-pgfid on

enables this feature. I have never tested it personally.



Re: [Gluster-devel] Change in glusterfs[master]: Regression test portability: mktemp

2014-08-20 Thread Harshavardhana
Again, the unpack issue with git:


 Build Failed

 http://build.gluster.org/job/glusterfs-rpms/556/ : SUCCESS

 http://build.gluster.org/job/glusterfs-rpms-el6/556/ : FAILURE


Receiving objects:   2% (2239/81742), 579.83 KiB | 854.00 KiB/s
Receiving objects:   3% (2453/81742), 579.83 KiB | 854.00 KiB/s
remote: internal server error
Receiving objects:   4% (3270/81742), 579.83 KiB | 854.00 KiB/s
fatal: early EOF
fatal: index-pack failed

at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:774)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.clone(CliGitAPIImpl.java:218)
... 11 more
Trying next repository
ERROR: Could not clone repository
...
...
...
Finished: FAILURE

If you remember, this bogged us down while setting up FreeBSD too;
something seems to be going wrong that we have to verify, as it may be
disastrous in the future.

Perhaps a 'git gc' is needed, and we have to check that the git packs
on git.gluster.org are not corrupted.


Re: [Gluster-devel] Fwd: Change in glusterfs[master]: Regression test portability: mktemp

2014-08-20 Thread Harshavardhana
On Wed, Aug 20, 2014 at 2:49 PM, Justin Clift jus...@gluster.org wrote:
 Any ideas?



Avati just fixed it.

We had to run 'git gc' directly on the Gerrit server; 'gerrit gc'
failed (could be a Gerrit bug).

I suspect the EOF unpack errors are a Gerrit bug as well; perhaps we
should update it to the latest release?

Thanks


Re: [Gluster-devel] Change in glusterfs[master]: Regression test portability: mktemp

2014-08-20 Thread Harshavardhana
 Using git over ssh often fails, is it the same problem? I never
 investigated, but using HTTP is reliable. I often have to run rfc.sh
 three or four times before it works.


It seems like a Gerrit bug at the moment; it is fixed with 'git gc' on
the server, and no corruption was reported :-).



Re: [Gluster-devel] Automated split-brain resolution

2014-08-14 Thread Harshavardhana
 Not sure. We can figure this out by traversing up the softlinks for
 directories. But for files there is no way to find the parent at the moment.


This email from Dan Mons puts some perspective on what actually people
expect - https://www.mail-archive.com/gluster-devel@gluster.org/msg00392.html



Re: [Gluster-devel] Automated split-brain resolution

2014-08-14 Thread Harshavardhana
On Thu, Aug 14, 2014 at 11:12 AM, Joe Julian j...@julianfamily.org wrote:
 Some people. Depends on use case. Dan's is pretty specific.

Those are the majority of the users/customers.



Re: [Gluster-devel] how does meta xlator work?

2014-08-13 Thread Harshavardhana
 I made a patch that lets meta.t pass on NetBSD. Unfortunately, the
 direct IO flag gets attached to the vnode and not to the file descriptor,
 which means it is not possible to have a fd with direct IO and another 
 without.

 But perhaps it is just good enough.

Can you point me to the change? I would like to test it out.

Thanks


Re: [Gluster-devel] how does meta xlator work?

2014-08-13 Thread Harshavardhana

 These are only NetBSD fixes:
 http://ftp.espci.fr/shadow/manu/iodirect2.patch

 netbsd0.cloud.gluster.org runs with them right now.

Looks like I can use this as a way to get direct I/O done for
FUSE4BSD; this is implemented exactly how I thought it would be for
FreeBSD. Let me see how much effort this might take.

Thanks
-- 
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] in dict.c, this gets replace by environment

2014-08-13 Thread Harshavardhana
On Wed, Aug 13, 2014 at 9:45 PM, Anand Avati av...@gluster.org wrote:
 Can you post a bt full?



Adding to that, is it a 32-bit node? It looks like it.



Re: [Gluster-devel] Automated split-brain resolution

2014-08-11 Thread Harshavardhana
 This is a standard problem where there are split-brains in distributed
 systems. For example even in git there are cases where it gives up asking
 users to fix the file i.e. merge conflicts. If the user doesn't want
 split-brains they should move to replica-3 and enable client-quorum. But if
 the user made a conscious decision to live with split-brain problems
 favouring availability/using replica-2, then split-brains do happen and it
 needs user intervention. All we are trying to do is to make this process a
 bit painless by coming up with meaningful policies.


Agreed, split-brains do require manual intervention; no one argues
about that. But it shouldn't be quite as tedious as GlusterFS makes it.

I do agree that it is way simpler than in perhaps some other
distributed filesystems, but the moment we ask someone to write a
script to fix our internal structure, that is not a feature, it's a
bug.

We all appreciate the effort, but my wish is that we incorporate the
pain points which we have seen personally over the years and fix them
right while we are at it.

 If the user knows his workload is append only and there are split-brains the
 only command he needs to execute is:
 'gluster volume heal volname split-brain bigger-file'
 no grep, no finding file paths, nothing.


Adding to this: we need to provide additional sanity checks that
split-brains were indeed fixed. Since this looks like quite a
destructive operation, are you planning a rollback at any point during
this process?

 There were also instances where the user knows the brick he/she would like
 to be the source but he/she is worried that old brick which comes back up
 would cause split-brains so he/she had to erase the whole brick which was
 down and bring it back up.
 Instead we can suggest him/her to use 'gluster volume heal VOLNAME
 split-brain source-brick brick_name' after bringing the brick back up so
 that not all the contents needs to be healed.
 1) gluster volume heal volname info split-brain should give output in some
 'format' giving stat/pending-matrix etc for all the files in split-brain.
   - Unfortunately we still don't have a way to provide with file paths
 without doing 'find' on the bricks.

Critical setups require fixing split-brain with a quick turnaround; no
one really has the luxury of running a find on a large volume. So I
still do not understand: if a 'find' can do the gfid -> inode -> path
translation, how hard is it for the Gluster management daemon to know
this, just to provide better tooling?
-- Harsha


Re: [Gluster-devel] how does meta xlator work?

2014-08-10 Thread Harshavardhana
 I am working on tests/basic/meta.t on NetBSD
 It fails because .meta/frames is empty, like all files in .meta. A quick
 investigation in source code shows that the function responsible for
 filling the code (frames_file_fill) is never called.


Same experience here.

It does work on OSX but does not work on FreeBSD, for similar reasons;
I haven't figured out yet what is causing the issue.



Re: [Gluster-devel] Automated split-brain resolution

2014-08-08 Thread Harshavardhana
On Thu, Aug 7, 2014 at 1:35 AM, Ravishankar N ravishan...@redhat.com wrote:

 Manual resolution of split-brains [1] has been a tedious task involving
 understanding and modifying AFR's changelog extended attributes. To simplify
 and to an extent automate this task, we are proposing a new CLI command with
 which the user can  specify  what the source brick/file is, and
 automatically heal the files in the appropriate direction.

 Command: gluster volume resolve-split-brain VOLNAME {bigger_file  |
 source-brick brick_name [file] }

 Breaking up the command into its possible options, we have:

 a) gluster volume resolve-split-brain VOLNAME bigger_file
 When this command is executed, AFR will consider the brick having the
 highest file size as the source and heal it to all other bricks (including
 all other sources and sinks) in that replica subvolume. If the file size is
 same in all the bricks, it does *not* heal the file.

 b) gluster volume resolve-split-brain VOLNAME  source-brick brick_name 
 [file]

 When this command is executed, if file is specified, AFR heals the file
 from the source-brick brick_name to all other bricks of that replica
 subvolume. For resolving multiple files, the command must be run
 iteratively, once per file.
 If file is not specified, AFR heals all the files that have an entry in
 .glusterfs/indices/xattrop *and* are in split-brain. As before, heals happen
 from source-brick brick_name to all other bricks.

 Future work could also include extending the command to add other policies
 like choosing the file having the latest mtime as the source, integration
 with trash xlator wherein the files deleted from the sink are moved to the
 trash dir etc.
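The bigger-file policy in (a) boils down to a size comparison across the replicas of a file; below is a toy sketch with two directories standing in for two bricks (all names invented), including the refuse-when-equal case:

```shell
# Two stand-in "bricks" holding divergent copies of the same file.
b1=$(mktemp -d); b2=$(mktemp -d)
printf 'short'           > "$b1/f"
printf 'longer contents' > "$b2/f"

# Pick the copy with the largest size as the heal source.
s1=$(wc -c < "$b1/f"); s2=$(wc -c < "$b2/f")
if   [ "$s1" -gt "$s2" ]; then src=$b1
elif [ "$s2" -gt "$s1" ]; then src=$b2
else src=''                      # equal sizes: policy refuses to heal
fi

if [ -n "$src" ]; then
    for b in "$b1" "$b2"; do     # heal source -> all other replicas
        [ "$b" = "$src" ] || cp "$src/f" "$b/f"
    done
    echo "healed from $src"
else
    echo "sizes equal, not healing"
fi
```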


I have a few queries regarding the overall design itself.

Here are the caveats:

   - It adds a new command rather than extending the existing
'gluster volume heal' option.
   - It asks the user to input the filename, which should not be
necessary by default, since such files are already available through
'gluster volume heal volname info split-brain'.

What would be ideal is the following, making it seamless and much more
user-friendly.

Extend the existing CLI as follows:

 - 'gluster volume heal volname split-brain'

Healing split-brained files is more palpable and has a rather more
convincing tone for a sysadmin, IMHO.

An example version of this extension would be:

'gluster volume heal volname split-brain [file|gfid in canonical form]'

In fact, since we already know the list of split-brained files, we can
just loop through them and ask interactive questions:

# gluster volume heal volname split-brain
WARNING: About to start fixing split brained files on an active
GlusterFS volume, do you wish to proceed? y

WARNING: files removed would be actively backed up in '.trash' under
your brick path for future recovery.
...
WARNING: Found 1000 files in split brain
...
File on pair 'host1:host2' is in split brain, file with latest
time-stamp found on host1 - Fix? y
File on pair 'host3:host5' is in split brain, file with biggest size
found on host5 - Fix? y




 Fixed (1000 split brain files) 

# gluster volume heal volname split-brain
INFO: no split brains present on this volume

The real pain point of fixing split brain is not taking getfattr
outputs and figuring out which file is under conflict; the real
pain point is doing the gfid-to-actual-file translation when there
are millions of files. Gathering this list takes more time than
actually fixing the split brain, and I have personally spent countless
hours doing this.

Now this list is easily available to GlusterFS, and so is its gfid to
path translation - so why isn't it simple enough for us to suggest to
the user what we think is the right choice? We certainly do know which
is the bigger file too.
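For illustration, the gfid-to-path translation described above can be sketched against a mock brick. The layout assumed here (labeled as an assumption): on a brick, `.glusterfs/<aa>/<bb>/<gfid>` is a hard link to the regular file, so the path can be recovered by matching inode numbers; real gfids are dashed UUIDs, shortened here for brevity, and scanning the whole brick is exactly the expensive step being complained about:

```python
# Hypothetical sketch of gfid-to-path translation on a brick, assuming the
# .glusterfs/<aa>/<bb>/<gfid> entry is a hard link to the data file.
# A tiny mock brick is built in a temp dir so the sketch is self-contained.
import os, tempfile

def gfid_to_path(brick, gfid):
    link = os.path.join(brick, ".glusterfs", gfid[:2], gfid[2:4], gfid)
    ino = os.stat(link).st_ino
    # Brute-force inode scan: this is the "millions of files" cost.
    for root, dirs, files in os.walk(brick):
        if ".glusterfs" in dirs:
            dirs.remove(".glusterfs")  # skip the gfid store itself
        for f in files:
            p = os.path.join(root, f)
            if os.stat(p).st_ino == ino:
                return os.path.relpath(p, brick)
    return None

# Build a mock brick and try it:
brick = tempfile.mkdtemp()
gfid = "ab12cd34ef56ab12cd34ef56ab12cd34"   # illustrative, undashed
os.makedirs(os.path.join(brick, "dir1"))
data = os.path.join(brick, "dir1", "file.txt")
with open(data, "w") as f:
    f.write("hello")
store = os.path.join(brick, ".glusterfs", gfid[:2], gfid[2:4])
os.makedirs(store)
os.link(data, os.path.join(store, gfid))
print(gfid_to_path(brick, gfid))   # dir1/file.txt
```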

My general contention is that when we know what is the right thing to do
under certain conditions, we should be making it easier. For example:
directory metadata split-brains - we just fix them automatically today,
but that certainly wasn't the case in the past. We learned from
experience to do the right thing when necessary.

A better UI experience makes it really 'automated', as you intend:
we make the larger decisions ourselves, and users are left with simple
choices so that it is not confusing.

-- 
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Automated split-brain resolution

2014-08-08 Thread Harshavardhana
 Wait, directories *are* supposed to automatically heal from split-brain?
 Guess I need to file a bug report. That doesn't happen. All the metadata and
 gfid can be the same, but since the trusted.afr are both dirty, it'll stay
 split-brain forever.

Conservative merge happens, but directories are not cleared of
their extended attributes, so you might see
messages in the logs AFAIK.
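The idea of a conservative merge on a directory in entry split-brain can be sketched in a few lines; this is a simplification of the behavior, not AFR's code:

```python
# Sketch of a "conservative merge" of a directory in entry split-brain:
# neither brick's listing is trusted over the other, so the merged
# directory is the union of both listings (a deletion made while the
# other brick was down is effectively undone rather than propagated).

def conservative_merge(listing_a, listing_b):
    return sorted(set(listing_a) | set(listing_b))

brick1 = ["a.txt", "b.txt"]   # 'c.txt' was deleted while brick2 was down
brick2 = ["a.txt", "c.txt"]   # 'b.txt' was created while brick1 was down
print(conservative_merge(brick1, brick2))  # ['a.txt', 'b.txt', 'c.txt']
```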

-- 
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Automated split-brain resolution

2014-08-08 Thread Harshavardhana
On Fri, Aug 8, 2014 at 12:53 PM, Joe Julian j...@julianfamily.org wrote:
 Thinking about it more, I'd still rather have this functionality exposed at
 the client through xattrs. For 5 years I've thought about this, and the more
 I encounter split-brain, the more I think this is the needed approach.

 getfattr -n trusted.glusterfs.stat returns
 xml/json/some_madeup_datastructure with the results of stat from each brick
 getfattr -n trusted.glusterfs.afr returns the afr matrix
 setfattr -n trusted.glusterfs.sb-pick -v server2:/srv/brick1

 That gives us the tools we need to choose what to do with any given
 split-brain. For large swaths of automated repair, we can use find.

 I suppose that last bit could still be implemented through that cli command.

Even this makes sense; my overall pain point was that the proposed CLI
isn't solving anything worthwhile.
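A rough sketch of what consuming Joe's proposed afr-matrix xattr could look like. The 12-byte value of three 32-bit big-endian counters (data, metadata, entry pending) follows the conventional trusted.afr changelog layout; the helper names and values here are illustrative, not read from a live volume:

```python
# Hypothetical sketch: decode AFR-style changelog values and flag a data
# split-brain when each brick holds a non-zero "data pending" count
# blaming the other. Values are fabricated, not fetched via getfattr.
import struct

def decode_changelog(raw):
    data, meta, entry = struct.unpack(">III", raw)  # 3 x big-endian uint32
    return {"data": data, "metadata": meta, "entry": entry}

def is_data_split_brain(b1_blames_b2, b2_blames_b1):
    return (decode_changelog(b1_blames_b2)["data"] > 0 and
            decode_changelog(b2_blames_b1)["data"] > 0)

# brick1's view of brick2, and brick2's view of brick1:
x1 = struct.pack(">III", 2, 0, 0)   # brick1: brick2 has 2 pending data ops
x2 = struct.pack(">III", 1, 0, 0)   # brick2: brick1 has 1 pending data op
print(is_data_split_brain(x1, x2))  # True
```

A `trusted.glusterfs.sb-pick`-style setxattr, as Joe proposes, would then simply record which brick's copy wins.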

-- 
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Cmockery2 -- patch-ready at FreeBSD ports for inclusion

2014-08-07 Thread Harshavardhana
On Thu, Aug 7, 2014 at 2:41 AM, Niels de Vos nde...@redhat.com wrote:
 On Wed, Aug 06, 2014 at 06:57:46PM -0700, Harshavardhana wrote:
 FYI

 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=192420

 Nice! Is there a plan to propose glusterfs as a FreeBSD port too?


Yes, that is currently being handled by the FreeNAS/FreeBSD folks; all our
FreeBSD support is going in for FreeNAS - building it as part of their
offering for NAS storage with a ZFS backend.  But we still need a release
to make that a reality; currently it is being used as an alpha package -
https://github.com/freenas/ports/tree/freenas/9-stable/sysutils/glusterfs

Thanks to Manu we are able to get some regression tests running.
-- 
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] FreeBSD vote enabled as part of smoke tests

2014-08-07 Thread Harshavardhana
FreeBSD can vote as part of smoke tests, just like NetBSD.

Happy Porting!
-- 
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Cmockery2 -- patch-ready at FreeBSD ports for inclusion

2014-08-06 Thread Harshavardhana
FYI

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=192420

Thanks
-- 
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Monotonically increasing memory

2014-07-31 Thread Harshavardhana
On Thu, Jul 31, 2014 at 11:31 AM, Anders Blomdell
anders.blomd...@control.lth.se wrote:
 During rsync of 35 files, memory consumption of glusterfs
 rose to 12 GB (after approx 14 hours), I take it that this is a
 bug I should try to track down?


Does it ever come down? What happens if you rsync the same files again
repeatedly? Does it OOM?
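To answer "does it ever come down", one rough approach is to sample the process's resident memory between runs. A Linux-only sketch reading /proc; the pid used here is our own purely for demonstration, and the thresholds are up to the tester:

```python
# Rough sketch for watching whether a glusterfs process's resident memory
# keeps growing across repeated rsync runs. Linux-only (reads /proc).
import os

def rss_kb(pid):
    with open("/proc/%d/status" % pid) as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])   # value is reported in kB
    return 0

samples = [rss_kb(os.getpid()) for _ in range(3)]
print(samples)   # steadily increasing numbers between runs suggest a leak
```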

-- 
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] NetBSD and FreeBSD smoke tests are enabled

2014-07-28 Thread Harshavardhana
On Mon, Jul 28, 2014 at 8:15 AM, Emmanuel Dreyfus m...@netbsd.org wrote:
 On Mon, Jul 28, 2014 at 01:13:42PM +0100, Justin Clift wrote:
 Just a heads up.  The NetBSD and FreeBSD smoke tests have been enabled
 for (at least) the release-3.6 and master branches.

 release-3.6 did not build for NetBSD. Someone please merge that one so that
 we can enable NetBSD voting for release-3.6: http://review.gluster.com/8381

FreeBSD can only be enabled after we merge -
http://review.gluster.com/#/c/8246/

-- 
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Jenkins success/failure with *BSD thought

2014-07-28 Thread Harshavardhana

 I've been meaning to ask (or suggest) this for some time

 To me a smoke test should be a simple compile and run the executable(s) to
 make sure they don't crash with the most basic configuration.

 I think the posix tests that are in smoke.sh today should really be part of
 the regression test, e.g. .../basic/posix.t


This is an issue only now that regression testing has been automated; we
could move all the tests together. All of them are run simultaneously
- a failure in any of them gets a NACK on Jenkins.  We have to test it
though; I do not know how much we can configure Jenkins this way.

-- 
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] NetBSD autobuild and cmockery2

2014-07-24 Thread Harshavardhana

 This is The Right Way in my opinion. I think the current implementation
 should not have been merged, but I do not track changes closely enough to
 have had the opportunity to cast a -2 code review in time.

 Note that a voting NetBSD build would have caught it. This change restores
 the build; we could re-enable the NetBSD autobuild vote once it is merged:
 http://review.gluster.org/#/c/8340/


+1 - lets vote it hard then!

-- 
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Cmockery2 in GlusterFS

2014-07-21 Thread Harshavardhana
Cmockery2 is now a hard dependency for compiling GlusterFS on
upstream master - could we make it conditional
and enable it only if necessary, since we know we do not have the
cmockery2 packages available on all systems?

On Mon, Jul 21, 2014 at 10:16 AM, Luis Pabon lpa...@redhat.com wrote:
 Niels you are correct. Let me take a look.

 Luis


 -Original Message-
 From: Niels de Vos [nde...@redhat.com]
 Received: Monday, 21 Jul 2014, 10:41AM
 To: Luis Pabon [lpa...@redhat.com]
 CC: Anders Blomdell [anders.blomd...@control.lth.se];
 gluster-devel@gluster.org
 Subject: Re: [Gluster-devel] Cmockery2 in GlusterFS


 On Mon, Jul 21, 2014 at 04:27:18PM +0200, Anders Blomdell wrote:
 On 2014-07-21 16:17, Anders Blomdell wrote:
  On 2014-07-20 16:01, Niels de Vos wrote:
  On Fri, Jul 18, 2014 at 02:52:18PM -0400, Luis Pabón wrote:
  Hi all,
  A few months ago, the unit test framework based on cmockery2 was
  in the repo for a little while, then removed while we improved the
  packaging method.  Now support for cmockery2 (
  http://review.gluster.org/#/c/7538/ ) has been merged into the repo
  again.  This will most likely require you to install cmockery2 on
  your development systems by doing the following:
 
  * Fedora/EPEL:
  $ sudo yum -y install cmockery2-devel
 
  * All other systems please visit the following page:
  https://github.com/lpabon/cmockery2/blob/master/doc/usage.md#installation
 
  Here is also some information about Cmockery2 and how to use it:
 
  * Introduction to Unit Tests in C Presentation:
  http://slides-lpabon.rhcloud.com/feb24_glusterfs_unittest.html#/
  * Cmockery2 Usage Guide:
  https://github.com/lpabon/cmockery2/blob/master/doc/usage.md
  * Using Cmockery2 with GlusterFS:
  https://github.com/gluster/glusterfs/blob/master/doc/hacker-guide/en-US/markdown/unittest.md
 
 
  When starting out writing unit tests, I would suggest writing unit
  tests for non-xlator interface files when you start.  Once you feel
  more comfortable writing unit tests, then move to writing them for
  the xlators interface files.
 
  Awesome, many thanks! I'd like to add some unittests for the RPC and
  NFS
  layer. Several functions (like ip-address/netmask matching for ACLs)
  look very suitable.
 
  Did you have any particular functions in mind that you would like to
  see
  unittests for? If so, maybe you can file some bugs for the different
  tests so that we won't forget about it? Depending on the tests, these
  bugs may get the EasyFix keyword if there is a clear description and
  some pointers to examples.
 
  Looks like parts of cmockery were forgotten in glusterfs.spec.in:
 
  # rpm -q -f  `which gluster`
  glusterfs-cli-3.7dev-0.9.git5b8de97.fc20.x86_64
  # ldd `which gluster`
   linux-vdso.so.1 =>  (0x74dfe000)
   libglusterfs.so.0 => /lib64/libglusterfs.so.0 (0x7fe034cc4000)
   libreadline.so.6 => /lib64/libreadline.so.6 (0x7fe034a7d000)
   libncurses.so.5 => /lib64/libncurses.so.5 (0x7fe034856000)
   libtinfo.so.5 => /lib64/libtinfo.so.5 (0x7fe03462c000)
   libgfxdr.so.0 => /lib64/libgfxdr.so.0 (0x7fe034414000)
   libgfrpc.so.0 => /lib64/libgfrpc.so.0 (0x7fe0341f8000)
   libxml2.so.2 => /lib64/libxml2.so.2 (0x7fe033e8f000)
   libz.so.1 => /lib64/libz.so.1 (0x7fe033c79000)
   libm.so.6 => /lib64/libm.so.6 (0x7fe033971000)
   libdl.so.2 => /lib64/libdl.so.2 (0x7fe03376d000)
   libcmockery.so.0 => not found
   libpthread.so.0 => /lib64/libpthread.so.0 (0x7fe03354f000)
   libcrypto.so.10 => /lib64/libcrypto.so.10 (0x7fe033168000)
   libc.so.6 => /lib64/libc.so.6 (0x7fe032da9000)
   libcmockery.so.0 => not found
   libcmockery.so.0 => not found
   libcmockery.so.0 => not found
   liblzma.so.5 => /lib64/liblzma.so.5 (0x7fe032b82000)
   /lib64/ld-linux-x86-64.so.2 (0x7fe0351f1000)
 
  Should I file a bug report or could someone on the fast-lane fix this?
 My bad (installation with --nodeps --force :-()

 Actually, I was not expecting a dependency on cmockery2. My
 understanding was that only some temporary test-applications would be
 linked with libcmockery and not any binaries that would get packaged in
 the RPMs.

 Luis, could you clarify that?

 Thanks,
 Niels

 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel




-- 
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] glustershd status

2014-07-17 Thread Harshavardhana
On a side note, while looking into this issue I uncovered a memory
leak too: after successful registration with glusterd, the self-heal
daemon and NFS server are killed by the FreeBSD memory manager. Have you
observed any memory leaks?
I have the valgrind output, and it clearly indicates large memory
leaks - perhaps it is just a FreeBSD thing!

On Wed, Jul 16, 2014 at 11:29 PM, Emmanuel Dreyfus m...@netbsd.org wrote:
 On Wed, Jul 16, 2014 at 09:54:49PM -0700, Harshavardhana wrote:
 In-fact this is true on Linux as well - there is smaller time window
 observe the below output , immediately run 'volume status' after a
 'volume start' event

 I observe the same lapse on NetBSD if the volume is created and started.
 If it is stopped and started again, glustershd will never report back.

 --
 Emmanuel Dreyfus
 m...@netbsd.org



-- 
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] glustershd status

2014-07-17 Thread Harshavardhana
KP,

I do have a 3.2Gigs worth of valgrind output which indicates this
issue, trying to reproduce this on Linux.

My hunch says that compiling with --disable-epoll might actually
trigger this issue on Linux too. Will update here
once I have done that testing.


On Wed, Jul 16, 2014 at 11:44 PM, Krishnan Parthasarathi
kpart...@redhat.com wrote:
 Emmanuel,

 Could you take statedump* of the glustershd process when it has leaked
 enough memory to be able to observe and share the output? This might
 give us an idea of what kind of objects we are allocating abnormally high.

 * statedump of a glusterfs process
 #kill -USR1 pid of process

 HTH,
 Krish


 - Original Message -
 On Wed, Jul 16, 2014 at 11:32:06PM -0700, Harshavardhana wrote:
  On a side note while looking into this issue  - I uncovered a memory
  leak too which after successful registration with glusterd, Self-heal
  daemon and NFS server are killed by FreeBSD memory manager. Have you
  observed any memory leaks?
  I have the valgrind output and it clearly indicates of large memory
  leaks - perhaps it could be just FreeBSD thing!

 I observed memory leaks on long term usage. My favourite test case
 is building NetBSD on a replicated/distributed volume, and I can see
 processes growing a lot during the build. I reported it some time ago,
 and some leaks were plugged, but obviously some remain.

 valgrind was never ported to NetBSD, hence I lack investigative tools,
 but I bet the leaks exist on FreeBSD and Linux as well.

 --
 Emmanuel Dreyfus
 m...@netbsd.org
 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel




-- 
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] glustershd status

2014-07-17 Thread Harshavardhana
This is a small-memory system (about 1024M) and the disk space for the
volume is 9 gigs; I do not think it has anything to do with AFR per se -
the same bug is also reproducible on the bricks and the NFS server.  Also it
might be that we aren't able to capture statedumps on non-Linux
platforms properly - one of the reasons I used the valgrind output.

In valgrind it indicates 'lost memory' blocks - you can see the
screenshots too, which show memory ramping up in seconds with no i/o,
in fact with no data on the volume.

The work-around I have found to contain this issue is to disable the
self-heal daemon and NFS - after that the memory remains stable. In an
interesting observation, after running the Gluster management daemon in
debugging mode, I can see that

RPC_CLNT_CONNECT events are constantly being triggered - which should
only occur once per process notification?
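The two statedump checks KP suggests in the reply quoted below (grepping pool-misses and hot-count) can be scripted. A minimal sketch; the statedump excerpt is invented to illustrate the key=value format and is not taken from the bug:

```python
# Sketch automating the two statedump checks: a non-zero pool-misses
# (a mem-pool was too small for its load) and an unusually large
# hot-count (many live objects of one type). Sample text is made up.

SAMPLE_DUMP = """\
pool-name=glusterfs:dict_t
hot-count=50
cold-count=4046
pool-misses=0
pool-name=glusterfs:data_t
hot-count=12
cold-count=1000
pool-misses=3
"""

def check_statedump(text, hot_limit=1000):
    findings = []
    name = None
    for line in text.splitlines():
        if line.startswith("pool-name="):
            name = line.split("=", 1)[1]
        elif line.startswith("pool-misses="):
            if int(line.split("=", 1)[1]) > 0:
                findings.append("%s: pool misses" % name)
        elif line.startswith("hot-count="):
            if int(line.split("=", 1)[1]) > hot_limit:
                findings.append("%s: high hot-count" % name)
    return findings

print(check_statedump(SAMPLE_DUMP))  # ['glusterfs:data_t: pool misses']
```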


On Thu, Jul 17, 2014 at 3:38 AM, Krishnan Parthasarathi
kpart...@redhat.com wrote:
 Harsha,

 I don't actively work on AFR, so I might have missed some things.
 I looked for the following things in the statedump for any memory allocation
 related oddities.
 1) grep pool-misses *dump*
 This tells us if there were any objects whose allocated mem-pool wasn't 
 sufficient
 for the load it was working under.
 I see that the pool-misses were zero, which means we are doing good with the 
 mem-pools we allocated.

 2) grep hot-count *dump*
 This tells us the no. of objects of any kind that is 'active' in the process 
 while the state-dump
 was taken. This should allow us to see if the numbers we see are explicable.
 I see the maximum hot-count across statedumps of processes is 50, which isn't
 alarming or pointing to any obvious memory leaks.

 The above observations indicate that some object that is not mem-pool 
 allocated is being leaked.

 Hope this helps,
 Krish

 - Original Message -
 Here you go KP - https://bugzilla.redhat.com/show_bug.cgi?id=1120570

 On Thu, Jul 17, 2014 at 12:37 AM, Krishnan Parthasarathi
 kpart...@redhat.com wrote:
  Harsha,
 
  In addition to the valgrind output, statedump output of glustershd process
  when the leak is observed would be really helpful.
 
  thanks,
  Krish
 
  - Original Message -
  Nope spoke too early, using poll() has no effect on the memory usage
  on Linux, so actually back to FreeBSD.
 
  On Thu, Jul 17, 2014 at 12:07 AM, Harshavardhana
  har...@harshavardhana.net wrote:
   KP,
  
   I do have a 3.2Gigs worth of valgrind output which indicates this
   issue, trying to reproduce this on Linux.
  
   My hunch says that 'compiling' with --disable-epoll might actually
   trigger this issue on Linux too. Will update here
   once i have done that testing.
  
  
   On Wed, Jul 16, 2014 at 11:44 PM, Krishnan Parthasarathi
   kpart...@redhat.com wrote:
   Emmanuel,
  
   Could you take statedump* of the glustershd process when it has leaked
   enough memory to be able to observe and share the output? This might
   give us what kind of objects are we allocating abnormally high.
  
   * statedump of a glusterfs process
   #kill -USR1 pid of process
  
   HTH,
   Krish
  
  
   - Original Message -
   On Wed, Jul 16, 2014 at 11:32:06PM -0700, Harshavardhana wrote:
On a side note while looking into this issue  - I uncovered a memory
leak too which after successful registration with glusterd,
Self-heal
daemon and NFS server are killed by FreeBSD memory manager. Have you
observed any memory leaks?
I have the valgrind output and it clearly indicates of large memory
leaks - perhaps it could be just FreeBSD thing!
  
   I observed memory leaks on long terme usage. My favourite test case
   is building NetBSD on a replicated/distributed volume, and I can see
   processes growing a lot during the build. I reported it some time ago,
   and some leaks were plugged, but obviosuly some remain.
  
   valgrind was never ported to NetBSD, hence I lack investigative tools,
   but I bet the leaks exist on FreeBSD and Linux as well.
  
   --
   Emmanuel Dreyfus
   m...@netbsd.org
   ___
   Gluster-devel mailing list
   Gluster-devel@gluster.org
   http://supercolony.gluster.org/mailman/listinfo/gluster-devel
  
  
  
  
   --
   Religious confuse piety with mere ritual, the virtuous confuse
   regulation with outcomes
 
 
 
  --
  Religious confuse piety with mere ritual, the virtuous confuse
  regulation with outcomes
 



 --
 Religious confuse piety with mere ritual, the virtuous confuse
 regulation with outcomes




-- 
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Change DEFAULT_WORKDIR from hard-coded value

2014-07-17 Thread Harshavardhana
Anyone?

On Mon, Jul 14, 2014 at 6:44 PM, Harshavardhana
har...@harshavardhana.net wrote:
 http://review.gluster.org/#/c/8246/

 Two important things it achieves

  - Break away from '/var/lib/glusterd', hard-coded previously;
    instead rely on the 'localstatedir' value from 'configure'
 - Provide 's/lib/db' as default working directory for gluster
   management daemon for BSD and Darwin based installations

 ${localstatedir}/db was used for FreeBSD (In fact FreeNAS) - where
 we are planning for a 3.6 release integration as an experimental
 option perhaps to begin with.

 Since ${localstatedir}/lib is non-existent on non-Linux platforms,
 it was decided as a natural choice.

 Future changes in the next set would be to migrate all the 'tests/*' to
 handle non-'/var/{lib,db}' directories.

 Need your reviews on the present patchset, please chime in - thank you!

 --
 Religious confuse piety with mere ritual, the virtuous confuse
 regulation with outcomes



-- 
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] glustershd status

2014-07-17 Thread Harshavardhana
Sure, will do that! If I get any clues I might send out a patch :-)

On Thu, Jul 17, 2014 at 9:05 PM, Krishnan Parthasarathi
kpart...@redhat.com wrote:
 Harsha,
 I haven't gotten around looking at the valgrind output. I am not sure if I 
 will be able to do it soon since I am travelling next week.
 Are you seeing an equal no. of disconnect messages in glusterd logs? What is 
 the ip:port you observe in the RPC_CLNT_CONNECT messages? Could you attach 
 the logs to the bug?

 thanks,
 Krish

 - Original Message -
 This is a small memory system like 1024M and a disk space for the
 volume is 9gig, i do not think it has anything to do with AFR per se -
 same bug is also reproducible on the bricks, nfs server too.  Also it
 might be that we aren't able to capture glusterdumps on non Linux
 platforms properly - one of reasons i used Valgrind output.

 In Valgrind it indicates about 'lost memory' blocks - You can see the
 screenshots too which indicate memory ramp ups in seconds with no i/o,
 in-fact no data on the volume.

 The work-around i have seen to contain this issue is to disable
 self-heal-deamon and NFS - after that the memory remains proper. On an
 interesting observation after running Gluster management daemon in
 debugging more - i can see that

 RPC_CLNT_CONNECT events are constantly being triggered - which should
 only occur once per process notification?


 On Thu, Jul 17, 2014 at 3:38 AM, Krishnan Parthasarathi
 kpart...@redhat.com wrote:
  Harsha,
 
  I don't actively work on AFR, so I might have missed some things.
  I looked for the following things in the statedump for any memory
  allocation
  related oddities.
  1) grep pool-misses *dump*
  This tells us if there were any objects whose allocated mem-pool wasn't
  sufficient
  for the load it was working under.
  I see that the pool-misses were zero, which means we are doing good with
  the mem-pools we allocated.
 
  2) grep hot-count *dump*
  This tells us the no. of objects of any kind that is 'active' in the
  process while the state-dump
  was taken. This should allow us to see if the numbers we see are
  explicable.
  I see the maximum hot-count across statedumps of processes is 50, which
  isn't alarming or pointing any obvious memory leaks.
 
  The above observations indicate that some object that is not mem-pool
  allocated is being leaked.
 
  Hope this helps,
  Krish
 
  - Original Message -
  Here you go KP - https://bugzilla.redhat.com/show_bug.cgi?id=1120570
 
  On Thu, Jul 17, 2014 at 12:37 AM, Krishnan Parthasarathi
  kpart...@redhat.com wrote:
   Harsha,
  
   In addition to the valgrind output, statedump output of glustershd
   process
   when the leak is observed would be really helpful.
  
   thanks,
   Krish
  
   - Original Message -
   Nope spoke too early, using poll() has no effect on the memory usage
   on Linux, so actually back to FreeBSD.
  
   On Thu, Jul 17, 2014 at 12:07 AM, Harshavardhana
   har...@harshavardhana.net wrote:
KP,
   
I do have a 3.2Gigs worth of valgrind output which indicates this
issue, trying to reproduce this on Linux.
   
My hunch says that 'compiling' with --disable-epoll might actually
trigger this issue on Linux too. Will update here
once i have done that testing.
   
   
On Wed, Jul 16, 2014 at 11:44 PM, Krishnan Parthasarathi
kpart...@redhat.com wrote:
Emmanuel,
   
Could you take statedump* of the glustershd process when it has
leaked
enough memory to be able to observe and share the output? This might
give us what kind of objects are we allocating abnormally high.
   
* statedump of a glusterfs process
#kill -USR1 pid of process
   
HTH,
Krish
   
   
- Original Message -
On Wed, Jul 16, 2014 at 11:32:06PM -0700, Harshavardhana wrote:
 On a side note while looking into this issue  - I uncovered a
 memory
 leak too which after successful registration with glusterd,
 Self-heal
 daemon and NFS server are killed by FreeBSD memory manager. Have
 you
 observed any memory leaks?
 I have the valgrind output and it clearly indicates of large
 memory
 leaks - perhaps it could be just FreeBSD thing!
   
I observed memory leaks on long terme usage. My favourite test case
is building NetBSD on a replicated/distributed volume, and I can
see
processes growing a lot during the build. I reported it some time
ago,
and some leaks were plugged, but obviosuly some remain.
   
valgrind was never ported to NetBSD, hence I lack investigative
tools,
but I bet the leaks exist on FreeBSD and Linux as well.
   
--
Emmanuel Dreyfus
m...@netbsd.org
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
   
   
   
   
--
Religious confuse piety with mere ritual, the virtuous confuse

Re: [Gluster-devel] glustershd status

2014-07-16 Thread Harshavardhana
So here is what I found - long email, please bear with me.

Looks like the management daemon and these other daemons,

e.g. the brick, NFS server and gluster self-heal daemon,

work in a non-blocking manner, notifying the Gluster
management daemon when they are available and when they are not. This
is done through a notify() callback mechanism.

A registered notify() handler is supposed to call setter() functions
which update the state of the notified instance within the gluster
management daemon.

Taking the self-heal daemon as an example:

conf->shd->online is the primary value which should be set
during this notify callback, where the self-heal daemon informs the
Gluster management daemon of its availability - this happens during an
RPC_CLNT_CONNECT event.

During this event glusterd_nodesvc_set_online_status() sets all the
necessary state online/offline.

What happens on FreeBSD/NetBSD is that this notify event doesn't occur
at all for some odd reason - there is in fact a first notify() event,
but it sets the value to offline, i.e. status == 0 (gf_boolean_t
== _gf_false).

In fact this is true on Linux as well - there is a smaller time window.
Observe the below output; immediately run 'volume status' after a
'volume start' event:

# gluster volume status
Status of volume: repl
Gluster process                     Port    Online  Pid
------------------------------------------------------------------------------
Brick 127.0.1.1:/d/backends/brick1  49152   Y   29082
Brick 127.0.1.1:/d/backends/brick2  49153   Y   29093
NFS Server on localhost N/A N   N/A
Self-heal Daemon on localhost   N/A N   N/A

Task Status of Volume repl
------------------------------------------------------------------------------
There are no active volume tasks

Both these commands are 1 sec apart

# gluster volume status
Status of volume: repl
Gluster process                     Port    Online  Pid
------------------------------------------------------------------------------
Brick 127.0.1.1:/d/backends/brick1  49152   Y   29082
Brick 127.0.1.1:/d/backends/brick2  49153   Y   29093
NFS Server on localhost 2049Y   29115
Self-heal Daemon on localhost   N/A Y   29110

Task Status of Volume repl
------------------------------------------------------------------------------
There are no active volume tasks

So the change happens, but sadly this doesn't happen on non-Linux
platforms; my general speculation is that this is related to
poll()/epoll() - I have to debug this further.

In fact, restarting the gluster management daemon fixes these issues,
which is understandable :-)
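The notify()-driven status mechanism described above can be simulated in miniature. All names here (NodeSvc, the event constants) are placeholders, not glusterd's real structures; the point is only the race between the connect notification and a status query:

```python
# Simulation of the notify()-based online-status mechanism described above:
# a daemon (e.g. the self-heal daemon) is only marked online once its
# connect notification arrives, so a 'volume status' issued in the window
# before that event sees it as offline. Names are placeholders.

RPC_CLNT_CONNECT, RPC_CLNT_DISCONNECT = 1, 2

class NodeSvc:
    def __init__(self, name):
        self.name, self.online = name, False
    def notify(self, event):
        # setter invoked from the registered notify() callback
        self.online = (event == RPC_CLNT_CONNECT)

shd = NodeSvc("Self-heal Daemon")
print(shd.online)             # False: status right after 'volume start'
shd.notify(RPC_CLNT_CONNECT)  # connect event finally delivered
print(shd.online)             # True: status now reports the daemon online
```

On the BSDs the bug described above amounts to the connect notification never arriving (or arriving as a disconnect), so the flag stays False forever.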

On Wed, Jul 16, 2014 at 9:41 AM, Emmanuel Dreyfus m...@netbsd.org wrote:
 Harshavardhana har...@harshavardhana.net wrote:

 It's pretty much the same on FreeBSD; I didn't spend much time debugging
 it. Let me do it right away and let you know what I find.

 Right. Once you have this one, I have Linux-specific truncate and
 md5csum replacements to contribute. I am not sending them now since I
 cannot test them.


 --
 Emmanuel Dreyfus
 http://hcpnet.free.fr/pubz
 m...@netbsd.org



-- 
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Patches to be merged before 3.6 branching

2014-07-15 Thread Harshavardhana
I need reviews here to get these changes in for 3.6, a la FreeBSD -
http://review.gluster.com/#/c/8246/

On Tue, Jul 15, 2014 at 1:32 AM, Soumya Koduri skod...@redhat.com wrote:
 Hi Vijay,

 I suggest the below patch be merged -
 http://review.gluster.org/#/c/7976/

 It's not a critical one, but it fixes an issue with gfid-healing in the
 libgfapi path.

 Thanks,
 Soumya


 On 07/14/2014 07:33 PM, Vijay Bellur wrote:

 Hi All,

 I intend creating the 3.6 branch tomorrow. After that, the branch will
 be restricted to bug fixes only. If you have any major patches to be
 reviewed and merged for release-3.6, please update this thread.

 Thanks,
 Vijay
 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel

 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel



-- 
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Patches to be merged before 3.6 branching

2014-07-15 Thread Harshavardhana
Also, for the fuse_readlink bug found in Mac OS X testing -
http://review.gluster.com/#/c/8300/

On Tue, Jul 15, 2014 at 1:46 AM, Harshavardhana
har...@harshavardhana.net wrote:
 I need reviews here to get these changes in for 3.6 , La FreeBSD -
 http://review.gluster.com/#/c/8246/

 On Tue, Jul 15, 2014 at 1:32 AM, Soumya Koduri skod...@redhat.com wrote:
 Hi Vijay,

 I suggest below patch to be merged -
 http://review.gluster.org/#/c/7976/

 Its not a critical one but fixes an issue with gfid-healing in the libgfapi
 path.

 Thanks,
 Soumya


 On 07/14/2014 07:33 PM, Vijay Bellur wrote:

 Hi All,

 I intend creating the 3.6 branch tomorrow. After that, the branch will
 be restricted to bug fixes only. If you have any major patches to be
 reviewed and merged for release-3.6, please update this thread.

 Thanks,
 Vijay
 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel

 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel



 --
 Religious confuse piety with mere ritual, the virtuous confuse
 regulation with outcomes





Re: [Gluster-devel] Is it OK to pick Code-Reviewer(s)

2014-07-14 Thread Harshavardhana
On Mon, Jul 14, 2014 at 9:31 AM, Anders Blomdell
anders.blomd...@control.lth.se wrote:
 When submitting patches where there is an/some obvious person(s) to blame,
 is it OK/desirable to request them as Code-Reviewers in gerrit?

The gist of adding Code-Reviewers is to find faults in oneself - not the
other way around :-)



[Gluster-devel] Change DEFAULT_WORKDIR from hard-coded value

2014-07-14 Thread Harshavardhana
http://review.gluster.org/#/c/8246/

Two important things it achieves:

- Breaks away from the previously hard-coded '/var/lib/glusterd',
  relying instead on the 'configure' value of 'localstatedir'
- Provides '${localstatedir}/db' (s/lib/db) as the default working
  directory for the gluster management daemon on BSD and Darwin based
  installations

${localstatedir}/db was chosen for FreeBSD (in fact FreeNAS) - where
we are planning a 3.6 release integration, perhaps as an experimental
option to begin with.

Since ${localstatedir}/lib is non-existent on non-Linux platforms,
${localstatedir}/db was the natural choice.
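The selection logic can be sketched roughly like this - a minimal illustration of the idea only, not the actual configure.ac change; the function and platform names here are mine:

```python
import sys

def default_workdir(localstatedir: str, platform: str = sys.platform) -> str:
    """Pick glusterd's working directory from configure's localstatedir.

    Linux installs have historically used ${localstatedir}/lib; BSD and
    Darwin ship no /var/lib, so ${localstatedir}/db is the natural home.
    """
    bsd_like = ("freebsd", "netbsd", "openbsd", "darwin")
    subdir = "db" if platform.startswith(bsd_like) else "lib"
    return "%s/%s/glusterd" % (localstatedir, subdir)

print(default_workdir("/var", "linux"))      # /var/lib/glusterd
print(default_workdir("/var", "freebsd10"))  # /var/db/glusterd
```

With that shape, tests only need to ask the one helper for the path instead of hard-coding '/var/lib' everywhere.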

Future changes in the next set would migrate all of 'tests/*' to
handle non-'/var/{lib,db}' directories.

Need your reviews on the present patchset, please chime in - thank you!



Re: [Gluster-devel] regarding warnings on master

2014-07-10 Thread Harshavardhana
Do not know - they do not show up locally on my laptop. Can you point
me to a build so that I can investigate?

On Thu, Jul 10, 2014 at 6:45 PM, Pranith Kumar Karampuri
pkara...@redhat.com wrote:
 hi Harsha,
  Know anything about the following warnings on latest master?
 In file included from msg-nfs3.h:20:0,
  from msg-nfs3.c:22:
 nlm4-xdr.h:6:14: warning: extra tokens at end of #ifndef directive [enabled
 by default]
  #ifndef _NLM4-XDR_H_RPCGEN
   ^
 nlm4-xdr.h:7:14: warning: missing whitespace after the macro name [enabled
 by default]
  #define _NLM4-XDR_H_RPCGEN

 Pranith





Re: [Gluster-devel] regarding warnings on master

2014-07-10 Thread Harshavardhana
On Thu, Jul 10, 2014 at 7:30 PM, Harshavardhana
har...@harshavardhana.net wrote:
 Do not know, they do not show up locally on my laptop, can you point
 me to a build so that i can investigate?


I think these are related to the C99 standard - are you using clang?
This must be an xdrgen bug: it doesn't produce proper #ifndef/#define
guards ('-' is not a valid character in a macro name).
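For reference, the fix amounts to sanitising the file name before it becomes a guard macro. A hedged sketch of what xdrgen/rpcgen ought to emit (the helper name is hypothetical, not actual xdrgen code):

```python
import re

def guard_macro(header_name: str) -> str:
    # 'nlm4-xdr.h' currently yields _NLM4-XDR_H_RPCGEN, and '-' is not a
    # valid macro-name character, hence the compiler warnings above.
    # Mapping every non-alphanumeric character to '_' gives a legal guard.
    stem = header_name.rsplit(".", 1)[0]
    return "_%s_H_RPCGEN" % re.sub(r"[^0-9A-Za-z]", "_", stem).upper()

print(guard_macro("nlm4-xdr.h"))  # _NLM4_XDR_H_RPCGEN
```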



Re: [Gluster-devel] v3.5qa2 tag name on master is annoying

2014-07-09 Thread Harshavardhana
I thought pkg-version in build-aux should have fixed this properly?

On Wed, Jul 9, 2014 at 1:33 PM, Justin Clift jus...@gluster.org wrote:
 That v3.5qa2 tag name on master is annoying, due to the RPM
 naming it causes when building on master.

 Did we figure out a solution?

 Maybe we should do a v3.6something tag at feature freeze
 time or something?

 + Justin

 --
 GlusterFS - http://www.gluster.org

 An open source, distributed file system scaling to several
 petabytes, and handling thousands of clients.

 My personal twitter: twitter.com/realjustinclift

 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel





Re: [Gluster-devel] bug-822830.t fails on release-3.5 branch

2014-07-04 Thread Harshavardhana
On Thu, Jul 3, 2014 at 11:30 PM, Santosh Pradhan sprad...@redhat.com wrote:
 Thanks guys for looking into this. I am just wondering how this passed the
 regression before Niels could merged this in? Good part is test case needs
 modification not code ;)

We need a single maintainer for the test cases alone to keep stability
across them; failures like this will occur as changes introduce races
while we add more and more test cases.

For example, chmod.t from posix-compliance fails once in a while, and
it is never maintained by us.


Re: [Gluster-devel] bug-822830.t fails on release-3.5 branch

2014-07-04 Thread Harshavardhana
 There seems to be some bug in our regression testing code. Even though the
 regression failed it gave the verdict as SUCCESS
 http://build.gluster.org/job/rackspace-regression-2GB-triggered/97/consoleFull


This was fixed by Justin Clift recently



Re: [Gluster-devel] Reviewing patches early

2014-07-02 Thread Harshavardhana
 Yeah, lets try this out.  We can add the checkpatch.pl script to the
 patch acceptance tests, and have an automatically triggered job that
 runs it on patch submission.  Should be pretty straightforward.

Let me work on the checkpatch script some more to clean it up and make
it report properly for important conditions:

   - Tabs
   - Whitespace
   - Unnecessary spaces
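A toy sketch of those three checks - nothing like the real checkpatch.pl, just the conditions listed above, with an illustrative function name:

```python
def check_line(line: str) -> list:
    """Report style complaints for one source line (toy checker)."""
    issues = []
    if "\t" in line:
        issues.append("tab")
    if line.rstrip("\n") != line.rstrip():
        issues.append("trailing whitespace")
    if "  " in line.strip():
        issues.append("unnecessary spaces")  # crude doubled-space heuristic
    return issues

print(check_line("int a;\t \n"))  # ['tab', 'trailing whitespace']
```

Hooking something of this shape into the smoke job would reject a patch before a full regression run is wasted on it.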


Re: [Gluster-devel] Feature review: Improved rebalance performance

2014-06-30 Thread Harshavardhana
 Besides bandwidth limits, there also needs to be monitors on brick latency.
 We don't want so many queued iops that operating performance is impacted.


AFAIK - rebalance and self-heal threads run in low-priority queue in
io-threads by default.
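The idea can be illustrated with a tiny priority queue; the class, names, and priority levels here are illustrative only, not gluster's actual io-threads implementation:

```python
import heapq

LOW, NORMAL, HIGH = 2, 1, 0  # smaller value = served first

class IoQueue:
    """Toy model: client I/O outranks rebalance/self-heal work."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker keeps FIFO order within one priority

    def put(self, prio, op):
        heapq.heappush(self._heap, (prio, self._seq, op))
        self._seq += 1

    def get(self):
        return heapq.heappop(self._heap)[2]

q = IoQueue()
q.put(LOW, "rebalance-migrate")
q.put(HIGH, "client-read")
print(q.get())  # client-read
```

So even without an explicit bandwidth limit, queued rebalance iops only run when no higher-priority client work is pending.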



Re: [Gluster-devel] regarding packaging issue in 3.5.1 for .deb

2014-06-29 Thread Harshavardhana
Surprisingly, why isn't this available in the 'master' branch? Isn't
it 'master' first and then backported to release-3.5?

On Sun, Jun 29, 2014 at 9:56 PM, Pranith Kumar Karampuri
pkara...@redhat.com wrote:
 hi Louis,
 It seems like 3.5.1 deb does not include the binary /usr/sbin/glfsheal
 which is used for doing gluster volume heal volname info. Could this be
 a packaging issue? patch - http://review.gluster.org/6511 introduces it. I
 see it has changes in glusterfs.spec.in to add the binary for packaging. Is
 this not good enough for .deb packaging? If it is not, what extra things
 need to be done to make sure this binary is present in deb packaging as
 well?

 Also CCed Peter who reported the issue:
 https://bugzilla.redhat.com/show_bug.cgi?id=1113778

 Pranith
 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel





Re: [Gluster-devel] Reviewing patches early

2014-06-26 Thread Harshavardhana
http://review.gluster.org/#/c/8181/ - posted a new change. Wouldn't it
be worth adding this to the smoke tests rather than to ./rfc.sh? We
can provide a detailed summary there, since we do not have
'commit/push' style patch submission.

We can leverage our smoke tests - thoughts?

On Wed, Jun 25, 2014 at 7:12 PM, James purplei...@gmail.com wrote:
 On Wed, Jun 25, 2014 at 9:35 PM, Jeff Darcy jda...@redhat.com wrote:
 While I agree with everything you said. Complaining about tabs/spaces
 should be done by a script. Something like
 http://review.gluster.com/#/c/5404
 Some one who knows perl should help us with rebasing it and getting it in

 Agreed.  If there are standards that are going to be enforced even if
 they're counterproductive (and I think several aspects of our coding
 standard fit that description) then we should at least enforce them
 quickly via a checkin script instead of making people run regressions
 twice.


 I too agree that using spaces instead of tabs is counterproductive. ;)

 It could be worse though: http://www.emacswiki.org/emacs/TabsSpacesBoth
 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel





Re: [Gluster-devel] epel-7 mock broken in rpm.t (due to ftp.redhat.com change?)

2014-06-13 Thread Harshavardhana
Interesting - looks like all the sources have been moved? Do we know where?

On Thu, Jun 12, 2014 at 10:48 PM, Justin Clift jus...@gluster.org wrote:
 Hi Kaleb,

 This just started showing up in rpm.t test output:

   ERROR: 
 Exception(/home/jenkins/root/workspace/rackspace-regression-2GB/rpmbuild-mock.d/glusterfs-3.5qa2-0.621.gita22a2f0.el6.src.rpm)
  Config(epel-7-x86_64) 0 minutes 2 seconds
   INFO: Results and/or logs in: 
 /home/jenkins/root/workspace/rackspace-regression-2GB/rpmbuild-mock.d/mock.d/epel-7-x86_64
   INFO: Cleaning up build root ('clean_on_failure=True')
   Start: lock buildroot
   Start: clean chroot
   INFO: chroot (/var/lib/mock/epel-7-x86_64) unlocked and deleted
   Finish: clean chroot
   Finish: lock buildroot
   ERROR: Command failed:
# ['/usr/bin/yum', '--installroot', '/var/lib/mock/epel-7-x86_64/root/', 
 'install', '@buildsys-build']

   http://ftp.redhat.com/pub/redhat/rhel/beta/7/x86_64/os/repodata/repomd.xml
   : [Errno 14] PYCURL ERROR 22 - The requested URL returned error: 404 Not 
 Found
   Trying other mirror.
   Error: Cannot retrieve repository metadata (repomd.xml) for repository: el. 
 Please verify its path and try again

 Seems to be due to ftp.redhat.com changing their layout or something, which
 seems to have broken mock.

 Guessing we'll need to disable epel-7 testing until this gets
 fixed.

 + Justin

 --
 GlusterFS - http://www.gluster.org

 An open source, distributed file system scaling to several
 petabytes, and handling thousands of clients.

 My personal twitter: twitter.com/realjustinclift

 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel





Re: [Gluster-devel] Shall we revert quota-anon-fd.t?

2014-06-10 Thread Harshavardhana
Agreed! +1

On Tue, Jun 10, 2014 at 7:51 PM, Pranith Kumar Karampuri
pkara...@redhat.com wrote:
 hi,
I see that quota-anon-fd.t is causing too many spurious failures. I think
 we should revert it and raise a bug so that it can be fixed and committed
 again along with the fix.

 Pranith
 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel





Re: [Gluster-devel] struct dirent in snapview-server.c

2014-06-01 Thread Harshavardhana
Emmanuel, I sent a patch to disable building snapview on OS X, since
it was of no use on 'darwin'.

You could see if snapshots and snapview would work on NetBSD (LVM
support); in that case one doesn't have to disable it for NetBSD.

d_off might be necessary internally; turning it off might keep
snapshots from working properly on BSD.

The snapshot team can explain better.

On the other hand, NetBSD has 'sys/compat/linux/*' - can we not
leverage it? Or is that a dependency which is not warranted?


On Sun, Jun 1, 2014 at 10:31 AM, Emmanuel Dreyfus m...@netbsd.org wrote:
 Emmanuel Dreyfus m...@netbsd.org wrote:

 Linux and NetBSD struct dirent do not have the same layout, and in fact
 the whole buffer returned by readdir() has a different layout and is not
 straightforward to convert.

 After reading further, there are struct dirent used in many other places
 without a hitch. The build breaks here because the d_off field is
 copied, and this field does not exist in NetBSD struct dirent.

 Is d_off used anywhere else? If not then I can fix the build with this:

 --- a/xlators/features/snapview-server/src/snapview-server.c
 +++ b/xlators/features/snapview-server/src/snapview-server.c
 @@ -1600,7 +1600,9 @@ svs_glfs_readdir (xlator_t *this, glfs_fd_t *glfd,
  strerror (errno));
  break;
  }
 +#ifdef linux
  entry-d_off = de.d_off;
 +#endif
  entry-d_ino = de.d_ino;
  entry-d_type = de.d_type;
  iatt_from_stat (buf, statbuf);

 --
 Emmanuel Dreyfus
 http://hcpnet.free.fr/pubz
 m...@netbsd.org
 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel





Re: [Gluster-devel] struct dirent in snapview-server.c

2014-06-01 Thread Harshavardhana

 It is always possible to translate structures, the question is whether
 it is useful of not. d_off is the offset of this struct dirent within
 the buffer for the whole directory returned by getdents(2) system call.

 Since we glusterfs does not use getdents(2) but upper level
 opendir(3)/readdir(3), which use getdents(2) themselves, it never has
 the whole buffer, and therefore I am not sure it can make any use of
 d_off.

Understood that makes sense.
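To make the layout mismatch concrete, a hedged sketch (field names follow the Linux struct dirent; the NetBSD side is reduced to the relevant members): any d_off a compatibility shim invents is a fabrication, since NetBSD's dirent has no such field at all.

```python
from dataclasses import dataclass

@dataclass
class LinuxDirent:
    d_ino: int
    d_off: int    # offset of the next entry in the getdents(2) buffer
    d_type: int
    d_name: str

@dataclass
class NetBSDDirent:
    d_fileno: int  # NetBSD's name for the inode number
    d_type: int
    d_name: str    # note: no d_off member

def to_linux(de: NetBSDDirent, index: int) -> LinuxDirent:
    # d_off has to be synthesised (here: a running index); anything that
    # relies on it as a real directory offset for resumption would break.
    return LinuxDirent(d_ino=de.d_fileno, d_off=index + 1,
                       d_type=de.d_type, d_name=de.d_name)
```

Which is why simply guarding the assignment with #ifdef linux, as in the patch above, is the honest fix.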



Re: [Gluster-devel] Gluster on OSX

2014-05-23 Thread Harshavardhana

 Do you reckon we should get that Mac Mini in the Westford
 lab set up to automatically test Gluster builds each
 night or something?

 If so, we should probably take/claim ownership of it,
 upgrade the memory in it, and (possibly) see if it can be
 put in the DMZ.

Up to you guys - it would be great. I am doing it manually for now,
once every 2 days :-)



Re: [Gluster-devel] regarding special treatment of ENOTSUP for setxattr

2014-05-22 Thread Harshavardhana
Here are the important locations in the XFS tree coming from 2.6.32 branch

STATIC int
xfs_set_acl(struct inode *inode, int type, struct posix_acl *acl)
{
        struct xfs_inode *ip = XFS_I(inode);
        unsigned char *ea_name;
        int error;

        if (S_ISLNK(inode->i_mode))    <---- I would generally think this is the issue.
                return -EOPNOTSUPP;

STATIC long
xfs_vn_fallocate(
        struct inode    *inode,
        int             mode,
        loff_t          offset,
        loff_t          len)
{
        long            error;
        loff_t          new_size = 0;
        xfs_flock64_t   bf;
        xfs_inode_t     *ip = XFS_I(inode);
        int             cmd = XFS_IOC_RESVSP;
        int             attr_flags = XFS_ATTR_NOLOCK;

        if (mode & ~(FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE))
                return -EOPNOTSUPP;

STATIC int
xfs_ioc_setxflags(
        xfs_inode_t     *ip,
        struct file     *filp,
        void            __user *arg)
{
        struct fsxattr  fa;
        unsigned int    flags;
        unsigned int    mask;
        int             error;

        if (copy_from_user(&flags, arg, sizeof(flags)))
                return -EFAULT;

        if (flags & ~(FS_IMMUTABLE_FL | FS_APPEND_FL | \
                      FS_NOATIME_FL | FS_NODUMP_FL | \
                      FS_SYNC_FL))
                return -EOPNOTSUPP;

Perhaps some sort of system-level ACLs are being propagated by us over
symlinks? Perhaps this is related to the same issue of following
symlinks?
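One way to act on the suggestion earlier in the thread (log setxattr failures conditionally instead of blanket-suppressing ENOTSUP), sketched as a hypothetical helper - these names are not gluster's actual code:

```python
import errno

# Errors XFS is known to return for ACL xattr operations on symlinks, as in
# the xfs_set_acl() excerpt above; expected, so not worth a log line each time.
EXPECTED_SYMLINK_ERRORS = {errno.EOPNOTSUPP}

def should_log_setxattr_failure(err: int, is_symlink: bool) -> bool:
    """Decide whether a setxattr failure deserves a log message."""
    if is_symlink and err in EXPECTED_SYMLINK_ERRORS:
        return False
    return True

print(should_log_setxattr_failure(errno.EOPNOTSUPP, is_symlink=True))  # False
```

The point being that the suppression is scoped to the one case the filesystem documents, rather than hiding every ENOTSUP everywhere.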

On Sun, May 18, 2014 at 10:48 AM, Pranith Kumar Karampuri
pkara...@redhat.com wrote:
 Sent the following patch to remove the special treatment of ENOTSUP here: 
 http://review.gluster.org/7788

 Pranith
 - Original Message -
 From: Kaleb KEITHLEY kkeit...@redhat.com
 To: gluster-devel@gluster.org
 Sent: Tuesday, May 13, 2014 8:01:53 PM
 Subject: Re: [Gluster-devel] regarding special treatment of ENOTSUP for  
  setxattr

 On 05/13/2014 08:00 AM, Nagaprasad Sathyanarayana wrote:
  On 05/07/2014 03:44 PM, Pranith Kumar Karampuri wrote:
 
  - Original Message -
  From: Raghavendra Gowdappa rgowd...@redhat.com
  To: Pranith Kumar Karampuri pkara...@redhat.com
  Cc: Vijay Bellur vbel...@redhat.com, gluster-devel@gluster.org,
  Anand Avati aav...@redhat.com
  Sent: Wednesday, May 7, 2014 3:42:16 PM
  Subject: Re: [Gluster-devel] regarding special treatment of ENOTSUP
  for setxattr
 
  I think with repetitive log message suppression patch being merged, we
  don't really need gf_log_occasionally (except if they are logged in
  DEBUG or
  TRACE levels).
  That definitely helps. But still, setxattr calls are not supposed to
  fail with ENOTSUP on FS where we support gluster. If there are special
  keys which fail with ENOTSUPP, we can conditionally log setxattr
  failures only when the key is something new?

 I know this is about EOPNOTSUPP (a.k.a. ENOTSUPP) returned by
 setxattr(2) for legitimate attrs.

 But I can't help but wondering if this isn't related to other bugs we've
 had with, e.g., lgetxattr(2) called on invalid xattrs?

 E.g. see https://bugzilla.redhat.com/show_bug.cgi?id=765202. We have a
 hack where xlators communicate with each other by getting (and setting?)
 invalid xattrs; the posix xlator has logic to filter out  invalid
 xattrs, but due to bugs this hasn't always worked perfectly.

 It would be interesting to know which xattrs are getting errors and on
 which fs types.

 FWIW, in a quick perusal of a fairly recent (3.14.3) kernel, in xfs
 there are only six places where EOPNOTSUPP is returned, none of them
 related to xattrs. In ext[34] EOPNOTSUPP can be returned if the
 user_xattr option is not enabled (enabled by default in ext4.) And in
 the higher level vfs xattr code there are many places where EOPNOTSUPP
 _might_ be returned, primarily only if subordinate function calls aren't
 invoked which would clear the default or return a different error.

 --

 Kaleb





 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel

 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel





Re: [Gluster-devel] regarding special treatment of ENOTSUP for setxattr

2014-05-22 Thread Harshavardhana
http://review.gluster.com/#/c/7823/ - the fix here

On Thu, May 22, 2014 at 1:41 PM, Harshavardhana
har...@harshavardhana.net wrote:
 Here are the important locations in the XFS tree coming from 2.6.32 branch





Re: [Gluster-devel] NetBSD status on master branch

2014-05-16 Thread Harshavardhana

 duplo# ls -li /mnt
 total 4
   4203451048 drwxr-xr-x  3 manu  wheel  1024 May 16 14:08 manu
   4203451048 drwxr-xr-x  3 manu  wheel  1024 May 16 14:08 manu
   3471060024 drwxrwxrwt  2 root  wheel  1024 May 16 18:40 tmp
   3471060024 drwxrwxrwt  2 root  wheel  1024 May 16 18:40 tmp


Haven't seen this on OS X - need to check the current master branch,
but I do have an infinite symlink loop issue which I still need to
debug.



Re: [Gluster-devel] OS X porting merged

2014-05-04 Thread Harshavardhana
 You could try using that (then update your PATH to find /usr/local
 stuff), too see if the problem goes away on OSX 10.8 with that.

 If it works, that's a useful data point... ;)



Using gcc on OS X will mask a lot of apparent warnings and
cross-platform issues. clang values conformance, and apps on OS X are
Xcode based. Having a '.dmg' package at a later point in the future
wouldn't be possible if we used gcc just to make our job easier :-)
