Re: [Gluster-devel] [Gluster-users] glusterfs-3.5.1beta released

2014-05-28 Thread Franco Broi

BTW, if I update gluster to 3.5.1-0.1.beta1.el6 using yum,
glusterfs-libs doesn't install properly.

  Cleanup: glusterfs-libs-3.5qa2-0.510.git5811f5c.el6.x86_64        11/11
Non-fatal POSTUN scriptlet failure in rpm package glusterfs-libs

# gluster --version
gluster: error while loading shared libraries: libglusterfs.so.0: cannot open 
shared object file: No such file or directory

Doing a 'yum reinstall glusterfs-libs' fixes it:

# gluster --version
glusterfs 3.5.1beta built on May 26 2014 18:38:25
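
(A quick way to confirm the loader can resolve the library again after the
reinstall -- a diagnostic sketch, assuming the library lands in the usual
/usr/lib64 location for an x86_64 RPM install:)

    # Check that the dynamic loader cache lists libglusterfs again
    ldconfig -p | grep libglusterfs
    # expect something along the lines of:
    #   libglusterfs.so.0 (libc6,x86-64) => /usr/lib64/libglusterfs.so.0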


On Thu, 2014-05-29 at 10:34 +0800, Franco Broi wrote: 
> 
> OK. I've just unmounted the data2 volume from a machine called tape1,
> and now try to remount - it's hanging.
> 
> /bin/sh /sbin/mount.glusterfs nas5-10g:/data2 /data2 -o 
> rw,log-level=info,log-file=/var/log/glusterfs/glusterfs_data2.log
> 
> on the server 
> # gluster vol status
> Status of volume: data2
> Gluster process   Port  Online  Pid
> --
> Brick nas5-10g:/data17/gvol   49152   Y   6553
> Brick nas5-10g:/data18/gvol   49153   Y   6564
> Brick nas5-10g:/data19/gvol   49154   Y   6575
> Brick nas5-10g:/data20/gvol   49155   Y   6586
> Brick nas6-10g:/data21/gvol   49160   Y   20608
> Brick nas6-10g:/data22/gvol   49161   Y   20613
> Brick nas6-10g:/data23/gvol   49162   Y   20614
> Brick nas6-10g:/data24/gvol   49163   Y   20621
>  
> Task Status of Volume data2
> --
> There are no active volume tasks
> 
> Sending the sigusr1 killed the mount processes and I don't see any state
> dumps. What path should they be in? I'm running Gluster installed via
> rpm and I don't see a /var/run/gluster directory.
> 
> On Wed, 2014-05-28 at 22:09 -0400, Krishnan Parthasarathi wrote: 
> > Franco,
> > 
> > When your clients perceive a hang, could you check the status of the bricks 
> > by running,
> > # gluster volume status VOLNAME  (run this on one of the 'server' machines 
> > in the cluster.)
> > 
> > Could you also provide the statedump of the client(s),
> > by issuing the following command.
> > 
> > # kill -SIGUSR1 pid-of-mount-process (run this on the 'client' machine.)
> > 
> > This would dump the state information of the client, like the file 
> > operations in progress,
> > memory consumed etc, onto a file under $INSTALL_PREFIX/var/run/gluster. 
> > Please attach this
> > file in your response.
> > 
> > thanks,
> > Krish
> > 
> > - Original Message -
> > > Hi
> > > 
> > > My clients are running 3.4.1, when I try to mount from lots of machine
> > > simultaneously, some of the mounts hang. Stopping and starting the
> > > volume clears the hung mounts.
> > > 
> > > Errors in the client logs
> > > 
> > > [2014-05-28 01:47:15.930866] E
> > > [client-handshake.c:1741:client_query_portmap_cbk] 0-data2-client-3: 
> > > failed
> > > to get the port number for remote subvolume. Please run 'gluster volume
> > > status' on server to see if brick process is running.
> > > 
> > > Let me know if you want more information.
> > > 
> > > Cheers,
> > > 
> > > On Sun, 2014-05-25 at 11:55 +0200, Niels de Vos wrote:
> > > > On Sat, 24 May, 2014 at 11:34:36PM -0700, Gluster Build System wrote:
> > > > > 
> > > > > SRC:
> > > > > http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.5.1beta.tar.gz
> > > > 
> > > > This beta release is intended to verify the changes that should resolve
> > > > the bugs listed below. We appreciate tests done by anyone. Please leave
> > > > a comment in the respective bugreport with a short description of the
> > > > success or failure. Visiting one of the bugreports is as easy as opening
> > > > the bugzilla.redhat.com/$BUG URL, for the first in the list, this
> > > > results in http://bugzilla.redhat.com/765202 .
> > > > 
> > > > Bugs expected to be fixed (31 in total since 3.5.0):
> > > > 
> > > >  #765202 - lgetxattr called with invalid keys on the bricks
> > > >  #833586 - inodelk hang from marker_rename_release_newp_lock
> > > >  #859581 - self-heal process can sometimes create directories instead of
> > > >  symlinks for the root gfid file in .glusterfs
> > > >  #986429 - Backupvolfile server option should work internal to GlusterFS
> > > >  framework
> > > > #1039544 - [FEAT] "gluster volume heal info" should list the entries 
> > > > that
> > > > actually required to be healed.
> > > > #1046624 - Unable to heal symbolic Links
> > > > #1046853 - AFR : For every file self-heal there are warning messages
> > > > reported in glustershd.log file
> > > > #1063190 - [RHEV-RHS] Volume was not accessible after server side quor

Re: [Gluster-devel] [Gluster-users] glusterfs-3.5.1beta released

2014-05-28 Thread Franco Broi

Also forgot to mention that it's impossible to change any volume
settings while 3.4 clients are attached, but I can unmount them all,
change the setting and then mount them all again.

gluster vol set data2 nfs.disable off
volume set: failed: Staging failed on nas5-10g. Error: One or more
connected clients cannot support the feature being set. These clients
need to be upgraded or disconnected before running this command again
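
(Aside: one way to find which machines still have 3.4 mounts before
unmounting everything -- 'gluster volume status VOLNAME clients' lists the
host:port of every client connected to each brick:)

    # List the clients connected to the bricks of data2; hosts still running
    # 3.4 mounts can then be unmounted selectively instead of all at once.
    gluster volume status data2 clients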


On Thu, 2014-05-29 at 10:34 +0800, Franco Broi wrote: 
> 
> OK. I've just unmounted the data2 volume from a machine called tape1,
> and now try to remount - it's hanging.
> 
> /bin/sh /sbin/mount.glusterfs nas5-10g:/data2 /data2 -o 
> rw,log-level=info,log-file=/var/log/glusterfs/glusterfs_data2.log
> 
> on the server 
> # gluster vol status
> Status of volume: data2
> Gluster process   Port  Online  Pid
> --
> Brick nas5-10g:/data17/gvol   49152   Y   6553
> Brick nas5-10g:/data18/gvol   49153   Y   6564
> Brick nas5-10g:/data19/gvol   49154   Y   6575
> Brick nas5-10g:/data20/gvol   49155   Y   6586
> Brick nas6-10g:/data21/gvol   49160   Y   20608
> Brick nas6-10g:/data22/gvol   49161   Y   20613
> Brick nas6-10g:/data23/gvol   49162   Y   20614
> Brick nas6-10g:/data24/gvol   49163   Y   20621
>  
> Task Status of Volume data2
> --
> There are no active volume tasks
> 
> Sending the sigusr1 killed the mount processes and I don't see any state
> dumps. What path should they be in? I'm running Gluster installed via
> rpm and I don't see a /var/run/gluster directory.
> 
> On Wed, 2014-05-28 at 22:09 -0400, Krishnan Parthasarathi wrote: 
> > Franco,
> > 
> > When your clients perceive a hang, could you check the status of the bricks 
> > by running,
> > # gluster volume status VOLNAME  (run this on one of the 'server' machines 
> > in the cluster.)
> > 
> > Could you also provide the statedump of the client(s),
> > by issuing the following command.
> > 
> > # kill -SIGUSR1 pid-of-mount-process (run this on the 'client' machine.)
> > 
> > This would dump the state information of the client, like the file 
> > operations in progress,
> > memory consumed etc, onto a file under $INSTALL_PREFIX/var/run/gluster. 
> > Please attach this
> > file in your response.
> > 
> > thanks,
> > Krish
> > 
> > - Original Message -
> > > Hi
> > > 
> > > My clients are running 3.4.1, when I try to mount from lots of machine
> > > simultaneously, some of the mounts hang. Stopping and starting the
> > > volume clears the hung mounts.
> > > 
> > > Errors in the client logs
> > > 
> > > [2014-05-28 01:47:15.930866] E
> > > [client-handshake.c:1741:client_query_portmap_cbk] 0-data2-client-3: 
> > > failed
> > > to get the port number for remote subvolume. Please run 'gluster volume
> > > status' on server to see if brick process is running.
> > > 
> > > Let me know if you want more information.
> > > 
> > > Cheers,
> > > 
> > > On Sun, 2014-05-25 at 11:55 +0200, Niels de Vos wrote:
> > > > On Sat, 24 May, 2014 at 11:34:36PM -0700, Gluster Build System wrote:
> > > > > 
> > > > > SRC:
> > > > > http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.5.1beta.tar.gz
> > > > 
> > > > This beta release is intended to verify the changes that should resolve
> > > > the bugs listed below. We appreciate tests done by anyone. Please leave
> > > > a comment in the respective bugreport with a short description of the
> > > > success or failure. Visiting one of the bugreports is as easy as opening
> > > > the bugzilla.redhat.com/$BUG URL, for the first in the list, this
> > > > results in http://bugzilla.redhat.com/765202 .
> > > > 
> > > > Bugs expected to be fixed (31 in total since 3.5.0):
> > > > 
> > > >  #765202 - lgetxattr called with invalid keys on the bricks
> > > >  #833586 - inodelk hang from marker_rename_release_newp_lock
> > > >  #859581 - self-heal process can sometimes create directories instead of
> > > >  symlinks for the root gfid file in .glusterfs
> > > >  #986429 - Backupvolfile server option should work internal to GlusterFS
> > > >  framework
> > > > #1039544 - [FEAT] "gluster volume heal info" should list the entries 
> > > > that
> > > > actually required to be healed.
> > > > #1046624 - Unable to heal symbolic Links
> > > > #1046853 - AFR : For every file self-heal there are warning messages
> > > > reported in glustershd.log file
> > > > #1063190 - [RHEV-RHS] Volume was not accessible after server side quorum
> > > > was met
> > > > #1064096 - The old Python Translator code (not Glupy) should be removed
> > > > #1066996 - Using sanlock on a gluster mount with re

[Gluster-devel] rackspace-regression job history disappeared?

2014-05-28 Thread Justin Clift
Hi Luis,

Any idea what causes the job history for a project
(eg rackspace-regression) to disappear?

Just checked the rackspace-regression project in
Jenkins, and all of the historical job history is
gone.  :(  Not sure why.

I created the rackspace-regression project by copying
the "regression" one, so I was expecting the job
history to be retained in the same way.

Any ideas?

Regards and best wishes,

Justin Clift

--
Open Source and Standards @ Red Hat

twitter.com/realjustinclift

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [wireshark] TODO features

2014-05-28 Thread Pranith Kumar Karampuri


- Original Message -
> From: "Vikhyat Umrao" 
> To: "Pranith Kumar Karampuri" 
> Cc: "Niels de Vos" , gluster-devel@gluster.org
> Sent: Thursday, May 29, 2014 9:04:38 AM
> Subject: Re: [Gluster-devel] [wireshark] TODO features
> 
> 
> 
> - Original Message -
> 
> From: "Pranith Kumar Karampuri" 
> To: "Vikhyat Umrao" 
> Cc: "Niels de Vos" , gluster-devel@gluster.org
> Sent: Wednesday, May 28, 2014 9:12:23 PM
> Subject: Re: [Gluster-devel] [wireshark] TODO features
> 
> 
> 
> - Original Message -
> > From: "Vikhyat Umrao" 
> > To: "Niels de Vos" 
> > Cc: gluster-devel@gluster.org
> > Sent: Wednesday, May 28, 2014 3:37:47 PM
> > Subject: Re: [Gluster-devel] [wireshark] TODO features
> > 
> > Hi Niels,
> > 
> > Thanks for all your inputs and help, I have submitted a patch:
> > 
> > 
> > https://code.wireshark.org/review/1833
> 
> >I have absolutely no idea how this is supposed to work, but just wanted to
> >ask what will the 'name' variable be if the file name is '' i.e.
> >RPC_STRING_EMPTY
> 
> Thanks Pranith for bringing it up, it is a good catch.
> 
> With this new patch it will solve this :
> http://ur1.ca/hegl3

Looks good to me. Approved :-)

Pranith
> 
> Thanks Niels for your inputs.
> 
> Regards,
> Vikhyat
> 
> >Pranith
> > 
> > 
> > 
> > 
> > 
> > glusterfs: show filenames in the summary for common procedures
> > 
> > With this patch we will have filename on the summary for procedures MKDIR,
> > CREATE and LOOKUP.
> > 
> > 
> > 
> > 
> > Example output:
> > 
> > 173 18.309307 192.168.100.3 -> 192.168.100.4 GlusterFS 224 MKDIR V330 MKDIR
> > Call, Filename: testdir
> > 2606 36.767766 192.168.100.3 -> 192.168.100.4 GlusterFS 376 LOOKUP V330
> > LOOKUP Call, Filename: 1.txt
> > 2612 36.768242 192.168.100.3 -> 192.168.100.4 GlusterFS 228 CREATE V330
> > CREATE Call, Filename: 1.txt
> 
> That looks good :-)
> 
> Pranith
> > 
> > Thanks,
> > Vikhyat
> > 
> > 
> > From: "Niels de Vos" 
> > To: "Vikhyat Umrao" 
> > Cc: gluster-devel@gluster.org
> > Sent: Tuesday, April 29, 2014 11:16:20 PM
> > Subject: Re: [Gluster-devel] [wireshark] TODO features
> > 
> > On Tue, Apr 29, 2014 at 06:25:15AM -0400, Vikhyat Umrao wrote:
> > > Hi,
> > > 
> > > I am interested in TODO wireshark features for GlusterFS :
> > > I can start from below given feature for one procedure:
> > > => display the filename or filehandle on the summary for common
> > > procedures
> > 
> > Things to get you and others prepared:
> > 
> > 1. go to https://forge.gluster.org/wireshark/pages/Todo
> > 2. login and edit the wiki page, add your name to the topic
> > 3. clone the wireshark repository:
> > $ git clone g...@forge.gluster.org:wireshark/wireshark.git
> > (you have been added to the 'wireshark' group, so you should have
> > push access over ssh)
> > 4. create a new branch for your testing
> > $ git checkout -t -b wip/master/visible-filenames upstream/master
> > 5. make sure you have all the dependencies for compiling Wireshark
> > (quite a lot are needed)
> > $ ./autogen.sh
> > $ ./configure --disable-wireshark
> > (I tend to build only the commandline tools like 'tshark')
> > $ make
> > 6. you should now have a ./tshark executable that you can use for
> > testing
> > 
> > 
> > The changes you want to make are in epan/dissectors/packet-glusterfs.c.
> > For example, start with adding the name of the file/dir that is passed
> > to LOOKUP. The work to dissect the data in the network packet is done in
> > glusterfs_gfs3_3_op_lookup_call(). It does not really matter on how that
> > function gets executed, that is more a thing for an other task (add
> > support for new procedures).
> > 
> > In the NFS-dissector, you can see how this is done. Check the
> > implementation of the dissect_nfs3_lookup_call() function in
> > epan/dissectors/packet-nfs.c. The col_append_fstr() function achieves
> > what you want to do.
> > 
> > Of course, you really should share your changes! Now, 'git commit' your
> > change with a suitable commit message and do
> > 
> > $ git push origin wip/master/visible-filenames
> > 
> > Your branch should now be visible under
> > https://forge.gluster.org/wireshark/wireshark. Let me know, and I'll
> > give it a whirl.
> > 
> > Now you've done the filename for LOOKUP, I'm sure you can think of other
> > things that make sense to get displayed.
> > 
> > Do ask questions and send corrections if something is missing, or not
> > working as explained here. This email should probably get included in
> > the projects wiki https://forge.gluster.org/wireshark/pages/Home some
> > where.
> > 
> > Good luck,
> > Niels
> > 
> > 
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > http://supercolony.gluster.org/mailman/listinfo/gluster-devel
> > 
> 
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [wireshark] TODO features

2014-05-28 Thread Vikhyat Umrao


- Original Message -

From: "Pranith Kumar Karampuri"  
To: "Vikhyat Umrao"  
Cc: "Niels de Vos" , gluster-devel@gluster.org 
Sent: Wednesday, May 28, 2014 9:12:23 PM 
Subject: Re: [Gluster-devel] [wireshark] TODO features 



- Original Message - 
> From: "Vikhyat Umrao"  
> To: "Niels de Vos"  
> Cc: gluster-devel@gluster.org 
> Sent: Wednesday, May 28, 2014 3:37:47 PM 
> Subject: Re: [Gluster-devel] [wireshark] TODO features 
> 
> Hi Niels, 
> 
> Thanks for all your inputs and help, I have submitted a patch: 
> 
> 
> https://code.wireshark.org/review/1833 

>I have absolutely no idea how this is supposed to work, but just wanted to ask 
>what will the 'name' variable be if the file name is '' i.e. 
>RPC_STRING_EMPTY 

Thanks Pranith for bringing it up, it is a good catch. 

With this new patch it will solve this : 
http://ur1.ca/hegl3 

Thanks Niels for your inputs. 

Regards, 
Vikhyat 

>Pranith 
> 
> 
> 
> 
> 
> glusterfs: show filenames in the summary for common procedures 
> 
> With this patch we will have filename on the summary for procedures MKDIR, 
> CREATE and LOOKUP. 
> 
> 
> 
> 
> Example output: 
> 
> 173 18.309307 192.168.100.3 -> 192.168.100.4 GlusterFS 224 MKDIR V330 MKDIR 
> Call, Filename: testdir 
> 2606 36.767766 192.168.100.3 -> 192.168.100.4 GlusterFS 376 LOOKUP V330 
> LOOKUP Call, Filename: 1.txt 
> 2612 36.768242 192.168.100.3 -> 192.168.100.4 GlusterFS 228 CREATE V330 
> CREATE Call, Filename: 1.txt 

That looks good :-) 

Pranith 
> 
> Thanks, 
> Vikhyat 
> 
> 
> From: "Niels de Vos"  
> To: "Vikhyat Umrao"  
> Cc: gluster-devel@gluster.org 
> Sent: Tuesday, April 29, 2014 11:16:20 PM 
> Subject: Re: [Gluster-devel] [wireshark] TODO features 
> 
> On Tue, Apr 29, 2014 at 06:25:15AM -0400, Vikhyat Umrao wrote: 
> > Hi, 
> > 
> > I am interested in TODO wireshark features for GlusterFS : 
> > I can start from below given feature for one procedure: 
> > => display the filename or filehandle on the summary for common procedures 
> 
> Things to get you and others prepared: 
> 
> 1. go to https://forge.gluster.org/wireshark/pages/Todo 
> 2. login and edit the wiki page, add your name to the topic 
> 3. clone the wireshark repository: 
> $ git clone g...@forge.gluster.org:wireshark/wireshark.git 
> (you have been added to the 'wireshark' group, so you should have 
> push access over ssh) 
> 4. create a new branch for your testing 
> $ git checkout -t -b wip/master/visible-filenames upstream/master 
> 5. make sure you have all the dependencies for compiling Wireshark 
> (quite a lot are needed) 
> $ ./autogen.sh 
> $ ./configure --disable-wireshark 
> (I tend to build only the commandline tools like 'tshark') 
> $ make 
> 6. you should now have a ./tshark executable that you can use for 
> testing 
> 
> 
> The changes you want to make are in epan/dissectors/packet-glusterfs.c. 
> For example, start with adding the name of the file/dir that is passed 
> to LOOKUP. The work to dissect the data in the network packet is done in 
> glusterfs_gfs3_3_op_lookup_call(). It does not really matter on how that 
> function gets executed, that is more a thing for an other task (add 
> support for new procedures). 
> 
> In the NFS-dissector, you can see how this is done. Check the 
> implementation of the dissect_nfs3_lookup_call() function in 
> epan/dissectors/packet-nfs.c. The col_append_fstr() function achieves 
> what you want to do. 
> 
> Of course, you really should share your changes! Now, 'git commit' your 
> change with a suitable commit message and do 
> 
> $ git push origin wip/master/visible-filenames 
> 
> Your branch should now be visible under 
> https://forge.gluster.org/wireshark/wireshark. Let me know, and I'll 
> give it a whirl. 
> 
> Now you've done the filename for LOOKUP, I'm sure you can think of other 
> things that make sense to get displayed. 
> 
> Do ask questions and send corrections if something is missing, or not 
> working as explained here. This email should probably get included in 
> the projects wiki https://forge.gluster.org/wireshark/pages/Home some 
> where. 
> 
> Good luck, 
> Niels 
> 
> 
> ___ 
> Gluster-devel mailing list 
> Gluster-devel@gluster.org 
> http://supercolony.gluster.org/mailman/listinfo/gluster-devel 
> 

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] glusterfs-3.5.1beta released

2014-05-28 Thread Franco Broi


OK. I've just unmounted the data2 volume from a machine called tape1,
and now try to remount - it's hanging.

/bin/sh /sbin/mount.glusterfs nas5-10g:/data2 /data2 -o 
rw,log-level=info,log-file=/var/log/glusterfs/glusterfs_data2.log

on the server 
# gluster vol status
Status of volume: data2
Gluster process   Port  Online  Pid
--
Brick nas5-10g:/data17/gvol 49152   Y   6553
Brick nas5-10g:/data18/gvol 49153   Y   6564
Brick nas5-10g:/data19/gvol 49154   Y   6575
Brick nas5-10g:/data20/gvol 49155   Y   6586
Brick nas6-10g:/data21/gvol 49160   Y   20608
Brick nas6-10g:/data22/gvol 49161   Y   20613
Brick nas6-10g:/data23/gvol 49162   Y   20614
Brick nas6-10g:/data24/gvol 49163   Y   20621
 
Task Status of Volume data2
--
There are no active volume tasks

Sending the sigusr1 killed the mount processes and I don't see any state
dumps. What path should they be in? I'm running Gluster installed via
rpm and I don't see a /var/run/gluster directory.

On Wed, 2014-05-28 at 22:09 -0400, Krishnan Parthasarathi wrote: 
> Franco,
> 
> When your clients perceive a hang, could you check the status of the bricks 
> by running,
> # gluster volume status VOLNAME  (run this on one of the 'server' machines in 
> the cluster.)
> 
> Could you also provide the statedump of the client(s),
> by issuing the following command.
> 
> # kill -SIGUSR1 pid-of-mount-process (run this on the 'client' machine.)
> 
> This would dump the state information of the client, like the file operations 
> in progress,
> memory consumed etc, onto a file under $INSTALL_PREFIX/var/run/gluster. 
> Please attach this
> file in your response.
> 
> thanks,
> Krish
> 
> - Original Message -
> > Hi
> > 
> > My clients are running 3.4.1, when I try to mount from lots of machine
> > simultaneously, some of the mounts hang. Stopping and starting the
> > volume clears the hung mounts.
> > 
> > Errors in the client logs
> > 
> > [2014-05-28 01:47:15.930866] E
> > [client-handshake.c:1741:client_query_portmap_cbk] 0-data2-client-3: failed
> > to get the port number for remote subvolume. Please run 'gluster volume
> > status' on server to see if brick process is running.
> > 
> > Let me know if you want more information.
> > 
> > Cheers,
> > 
> > On Sun, 2014-05-25 at 11:55 +0200, Niels de Vos wrote:
> > > On Sat, 24 May, 2014 at 11:34:36PM -0700, Gluster Build System wrote:
> > > > 
> > > > SRC:
> > > > http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.5.1beta.tar.gz
> > > 
> > > This beta release is intended to verify the changes that should resolve
> > > the bugs listed below. We appreciate tests done by anyone. Please leave
> > > a comment in the respective bugreport with a short description of the
> > > success or failure. Visiting one of the bugreports is as easy as opening
> > > the bugzilla.redhat.com/$BUG URL, for the first in the list, this
> > > results in http://bugzilla.redhat.com/765202 .
> > > 
> > > Bugs expected to be fixed (31 in total since 3.5.0):
> > > 
> > >  #765202 - lgetxattr called with invalid keys on the bricks
> > >  #833586 - inodelk hang from marker_rename_release_newp_lock
> > >  #859581 - self-heal process can sometimes create directories instead of
> > >  symlinks for the root gfid file in .glusterfs
> > >  #986429 - Backupvolfile server option should work internal to GlusterFS
> > >  framework
> > > #1039544 - [FEAT] "gluster volume heal info" should list the entries that
> > > actually required to be healed.
> > > #1046624 - Unable to heal symbolic Links
> > > #1046853 - AFR : For every file self-heal there are warning messages
> > > reported in glustershd.log file
> > > #1063190 - [RHEV-RHS] Volume was not accessible after server side quorum
> > > was met
> > > #1064096 - The old Python Translator code (not Glupy) should be removed
> > > #1066996 - Using sanlock on a gluster mount with replica 3 (quorum-type
> > > auto) leads to a split-brain
> > > #1071191 - [3.5.1] Sporadic SIGBUS with mmap() on a sparse file created
> > > with open(), seek(), write()
> > > #1078061 - Need ability to heal mismatching user extended attributes
> > > without any changelogs
> > > #1078365 - New xlators are linked as versioned .so files, creating
> > > .so.0.0.0
> > > #1086748 - Add documentation for the Feature: AFR CLI enhancements
> > > #1086750 - Add documentation for the Feature: File Snapshots in GlusterFS
> > > #1086752 - Add documentation for the Feature: On-Wire
> > > Compression/Decompression
> > > #1086756 - Add documentation for the Feature: zerofill API for GlusterFS
> > > #

[Gluster-devel] Rackspace regression testing: ready for initial usage

2014-05-28 Thread Justin Clift
Hi all,

There's a new "rackspace-regression" testing project on Jenkins:

  http://build.gluster.org/job/rackspace-regression/

This one runs regression testing jobs on specially set up Rackspace
VMs.  Two of them atm, but I'll set up more over the next few days as
weird kinks get worked out.

Please submit regression jobs to them.  They'll update Gerrit with
+1/-1/"MERGE CONFLICT" as appropriate, exactly the same as the normal
regression project.

Some differences from the normal project:

 * Gluster is built using --enable-debug on the ./configure line

   Hopefully helps with investigating problems. :)

 * The state dir is in /var instead of /build/install/var

   This helps with permissions of /build (jenkins:jenkins),
   as otherwise sudo needs to be embedded in scripts in weird
   ways.

 * Source for the build system is here:

   https://forge.gluster.org/gluster-patch-acceptance-tests

Feel free to send through merge requests for build system improvements
and stuff.  Also, if anyone needs access to the build VMs themselves,
just let me know.  Happy to provide root pw. :)

Given a week or two of usage and working-out-any-weirdness, hopefully
we can use this approach instead of the old regression project.

+ Justin

--
Open Source and Standards @ Red Hat

twitter.com/realjustinclift

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] glusterfs-3.5.1beta released

2014-05-28 Thread Krishnan Parthasarathi
Franco,

When your clients perceive a hang, could you check the status of the bricks by 
running,
# gluster volume status VOLNAME  (run this on one of the 'server' machines in 
the cluster.)

Could you also provide the statedump of the client(s),
by issuing the following command.

# kill -SIGUSR1 pid-of-mount-process (run this on the 'client' machine.)

This would dump the state information of the client, like the file operations 
in progress,
memory consumed etc, onto a file under $INSTALL_PREFIX/var/run/gluster. Please 
attach this
file in your response.
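
(A minimal sketch of the above, assuming a FUSE mount of the data2 volume and
a packaged install whose statedump directory is /var/run/gluster; source
builds use $INSTALL_PREFIX/var/run/gluster instead:)

    # Find the PID of the glusterfs client (FUSE mount) process for /data2
    MOUNT_PID=$(pgrep -f 'glusterfs.*data2' | head -n 1)

    # Ask the client to dump its state (in-flight file operations, memory usage, ...)
    kill -SIGUSR1 "$MOUNT_PID"

    # The dump file name normally contains the PID of the signalled process
    ls -l /var/run/gluster/ | grep "$MOUNT_PID"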

thanks,
Krish

- Original Message -
> Hi
> 
> My clients are running 3.4.1, when I try to mount from lots of machine
> simultaneously, some of the mounts hang. Stopping and starting the
> volume clears the hung mounts.
> 
> Errors in the client logs
> 
> [2014-05-28 01:47:15.930866] E
> [client-handshake.c:1741:client_query_portmap_cbk] 0-data2-client-3: failed
> to get the port number for remote subvolume. Please run 'gluster volume
> status' on server to see if brick process is running.
> 
> Let me know if you want more information.
> 
> Cheers,
> 
> On Sun, 2014-05-25 at 11:55 +0200, Niels de Vos wrote:
> > On Sat, 24 May, 2014 at 11:34:36PM -0700, Gluster Build System wrote:
> > > 
> > > SRC:
> > > http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.5.1beta.tar.gz
> > 
> > This beta release is intended to verify the changes that should resolve
> > the bugs listed below. We appreciate tests done by anyone. Please leave
> > a comment in the respective bugreport with a short description of the
> > success or failure. Visiting one of the bugreports is as easy as opening
> > the bugzilla.redhat.com/$BUG URL, for the first in the list, this
> > results in http://bugzilla.redhat.com/765202 .
> > 
> > Bugs expected to be fixed (31 in total since 3.5.0):
> > 
> >  #765202 - lgetxattr called with invalid keys on the bricks
> >  #833586 - inodelk hang from marker_rename_release_newp_lock
> >  #859581 - self-heal process can sometimes create directories instead of
> >  symlinks for the root gfid file in .glusterfs
> >  #986429 - Backupvolfile server option should work internal to GlusterFS
> >  framework
> > #1039544 - [FEAT] "gluster volume heal info" should list the entries that
> > actually required to be healed.
> > #1046624 - Unable to heal symbolic Links
> > #1046853 - AFR : For every file self-heal there are warning messages
> > reported in glustershd.log file
> > #1063190 - [RHEV-RHS] Volume was not accessible after server side quorum
> > was met
> > #1064096 - The old Python Translator code (not Glupy) should be removed
> > #1066996 - Using sanlock on a gluster mount with replica 3 (quorum-type
> > auto) leads to a split-brain
> > #1071191 - [3.5.1] Sporadic SIGBUS with mmap() on a sparse file created
> > with open(), seek(), write()
> > #1078061 - Need ability to heal mismatching user extended attributes
> > without any changelogs
> > #1078365 - New xlators are linked as versioned .so files, creating
> > .so.0.0.0
> > #1086748 - Add documentation for the Feature: AFR CLI enhancements
> > #1086750 - Add documentation for the Feature: File Snapshots in GlusterFS
> > #1086752 - Add documentation for the Feature: On-Wire
> > Compression/Decompression
> > #1086756 - Add documentation for the Feature: zerofill API for GlusterFS
> > #1086758 - Add documentation for the Feature: Changelog based parallel
> > geo-replication
> > #1086760 - Add documentation for the Feature: Write Once Read Many (WORM)
> > volume
> > #1086762 - Add documentation for the Feature: BD Xlator - Block Device
> > translator
> > #1088848 - Spelling errors in rpc/rpc-transport/rdma/src/rdma.c
> > #1089054 - gf-error-codes.h is missing from source tarball
> > #1089470 - SMB: Crash on brick process during compile kernel.
> > #1089934 - list dir with more than N files results in Input/output error
> > #1091340 - Doc: Add glfs_fini known issue to release notes 3.5
> > #1091392 - glusterfs.spec.in: minor/nit changes to sync with Fedora spec
> > #1095775 - Add support in libgfapi to fetch volume info from glusterd.
> > #1095971 - Stopping/Starting a Gluster volume resets ownership
> > #1096040 - AFR : self-heal-daemon not clearing the change-logs of all the
> > sources after self-heal
> > #1096425 - i/o error when one user tries to access RHS volume over NFS with
> > 100+ GIDs
> > #1099878 - Need support for handle based Ops to fetch/modify extended
> > attributes of a file
> > 
> > 
> > Before a final glusterfs-3.5.1 release is made, we hope to have all the
> > blocker bugs fixed. There are currently 13 bugs marked that still need
> > some work done:
> > 
> > #1081016 - glusterd needs xfsprogs and e2fsprogs packages
> > #1086743 - Add documentation for the Feature: RDMA-connection manager
> > (RDMA-CM)
> > #1086749 - Add documentation for the Feature: Exposing Volume Capabilities
> > #1086751 - Add documentation for the Feature: gfid-access
> > #1086754 - Add documentation 

[Gluster-devel] Spurious failure in ./tests/bugs/bug-1038598.t [28]

2014-05-28 Thread Pranith Kumar Karampuri
hi Anuradha,
  Please look into this.

Patch ==> http://review.gluster.com/#/c/7880/1
Author==>  Emmanuel Dreyfus m...@netbsd.org
Build triggered by==> kkeithle
Build-url ==> 
http://build.gluster.org/job/regression/4603/consoleFull
Download-log-at   ==> 
http://build.gluster.org:443/logs/regression/glusterfs-logs-20140528:18:25:12.tgz
Test written by   ==> Author: Anuradha 

./tests/bugs/bug-1038598.t [28]
0 #!/bin/bash
1 .  $(dirname $0)/../include.rc
2 .  $(dirname $0)/../volume.rc
3 
4 cleanup;
5 
6 TEST glusterd
7 TEST pidof glusterd
8 TEST $CLI volume info;
9 
   10 TEST $CLI volume create $V0 replica 2  $H0:$B0/${V0}{1,2};
   11 
   12 function hard_limit()
   13 {
   14 local QUOTA_PATH=$1;
   15 $CLI volume quota $V0 list $QUOTA_PATH | grep "$QUOTA_PATH" | awk 
'{print $2}'
   16 }
   17 
   18 function soft_limit()
   19 {
   20 local QUOTA_PATH=$1;
   21 $CLI volume quota $V0 list $QUOTA_PATH | grep "$QUOTA_PATH" | awk 
'{print $3}'
   22 }
   23 
   24 function usage()
   25 {
   26 local QUOTA_PATH=$1;
   27 $CLI volume quota $V0 list $QUOTA_PATH | grep "$QUOTA_PATH" | awk 
'{print $4}'
   28 }
   29 
   30 function sl_exceeded()
   31 {
   32 local QUOTA_PATH=$1;
   33 $CLI volume quota $V0 list $QUOTA_PATH | grep "$QUOTA_PATH" | awk 
'{print $6}'
   34 }
   35 
   36 function hl_exceeded()
   37 {
   38 local QUOTA_PATH=$1;
   39 $CLI volume quota $V0 list $QUOTA_PATH | grep "$QUOTA_PATH" | awk 
'{print $7}'
   40 
   41 }
   42 
   43 EXPECT "$V0" volinfo_field $V0 'Volume Name';
   44 EXPECT 'Created' volinfo_field $V0 'Status';
   45 EXPECT '2' brick_count $V0
   46 
   47 TEST $CLI volume start $V0;
   48 EXPECT 'Started' volinfo_field $V0 'Status';
   49 
   50 TEST $CLI volume quota $V0 enable
   51 sleep 5
   52 
   53 TEST glusterfs -s $H0 --volfile-id $V0 $M0;
   54 
   55 TEST mkdir -p $M0/test_dir
   56 TEST $CLI volume quota $V0 limit-usage /test_dir 10MB 50
   57 
   58 EXPECT "10.0MB" hard_limit "/test_dir";
   59 EXPECT "50%" soft_limit "/test_dir";
   60 
   61 TEST dd if=/dev/zero of=$M0/test_dir/file1.txt bs=1M count=4
   62 EXPECT "4.0MB" usage "/test_dir";
   63 EXPECT 'No' sl_exceeded "/test_dir";
   64 EXPECT 'No' hl_exceeded "/test_dir";
   65 
   66 TEST dd if=/dev/zero of=$M0/test_dir/file1.txt bs=1M count=6
   67 EXPECT "6.0MB" usage "/test_dir";
   68 EXPECT 'Yes' sl_exceeded "/test_dir";
   69 EXPECT 'No' hl_exceeded "/test_dir";
   70 
   71 #set timeout to 0 so that quota gets enforced without any lag
   72 TEST $CLI volume set $V0 features.hard-timeout 0
   73 TEST $CLI volume set $V0 features.soft-timeout 0
   74 
   75 TEST ! dd if=/dev/zero of=$M0/test_dir/file1.txt bs=1M count=15
   76 EXPECT 'Yes' sl_exceeded "/test_dir";
***77 EXPECT 'Yes' hl_exceeded "/test_dir";
   78 
   79 cleanup;

Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Spurious failure in ./tests/basic/volume.t [14, 15, 16]

2014-05-28 Thread Krishnan Parthasarathi
Anuradha,

I can help you debug this issue. But I won't be able to directly
involve myself in debugging the issue until I resolve another
regression failure, which I am looking into now.

thanks,
Krish

- Original Message -
> Kp,
>Could you help Anuradha debug this issue.
> 
> Patch ==> http://review.gluster.com/#/c/7870/1
> Author==>  Anuradha ata...@redhat.com
> Build triggered by==> kkeithle
> Build-url ==>
> http://build.gluster.org/job/regression/4600/consoleFull
> Download-log-at   ==>
> http://build.gluster.org:443/logs/regression/glusterfs-logs-20140528:13:56:30.tgz
> Test written by   ==> Author: Anand Avati 
> 
> ./tests/basic/volume.t [14, 15, 16]
> 0 #!/bin/bash
> 1
> 2 . $(dirname $0)/../include.rc
> 3 . $(dirname $0)/../volume.rc
> 4
> 5 cleanup;
> 6
> 7 TEST glusterd
> 8 TEST pidof glusterd
> 9 TEST $CLI volume info;
>10
>11 TEST $CLI volume create $V0 replica 2 stripe 2
>$H0:$B0/${V0}{1,2,3,4,5,6,7,8};
>12
>13
>14 EXPECT "$V0" volinfo_field $V0 'Volume Name';
>15 EXPECT 'Created' volinfo_field $V0 'Status';
>16 EXPECT '8' brick_count $V0
>17
>18 TEST $CLI volume start $V0;
>19 EXPECT 'Started' volinfo_field $V0 'Status';
>20
>21 TEST $CLI volume add-brick $V0 $H0:$B0/${V0}{9,10,11,12};
>22 EXPECT '12' brick_count $V0
>23
>24 TEST $CLI volume remove-brick $V0 $H0:$B0/${V0}{1,2,3,4} force;
>25 EXPECT '8' brick_count $V0
>26
> ***27 TEST $CLI volume stop $V0;
> ***28 EXPECT 'Stopped' volinfo_field $V0 'Status';
>29
> ***30 TEST $CLI volume delete $V0;
>31 TEST ! $CLI volume info $V0;
>32
>33 cleanup;
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Spurious failure ./tests/bugs/bug-1049834.t [16]

2014-05-28 Thread Pranith Kumar Karampuri
Avra,
Patch submitted by you http://review.gluster.com/#/c/7889 also failed this 
test today.

Pranith

- Original Message -
> From: "Avra Sengupta" 
> To: "Pranith Kumar Karampuri" 
> Cc: "Gluster Devel" 
> Sent: Wednesday, May 28, 2014 5:04:40 PM
> Subject: Re: [Gluster-devel] Spurious failure ./tests/bugs/bug-1049834.t [16]
> 
> Pranith, I am looking into a priority issue for
> snapshot (https://bugzilla.redhat.com/show_bug.cgi?id=1098045) right now,
> I will get started with this spurious failure as soon as I finish it,
> which should be max by eod tomorrow.
> 
> Regards,
> Avra
> 
> On 05/28/2014 06:46 AM, Pranith Kumar Karampuri wrote:
> > FYI, this test failed more than once yesterday. Same test failed both the
> > times.
> >
> > Pranith
> > - Original Message -
> >> From: "Pranith Kumar Karampuri" 
> >> To: "Avra Sengupta" 
> >> Cc: "Gluster Devel" 
> >> Sent: Wednesday, May 28, 2014 6:43:52 AM
> >> Subject: Re: [Gluster-devel] Spurious failure ./tests/bugs/bug-1049834.t
> >> [16]
> >>
> >>
> >> CC gluster-devel
> >>
> >> Pranith
> >> - Original Message -
> >>> From: "Pranith Kumar Karampuri" 
> >>> To: "Avra Sengupta" 
> >>> Sent: Wednesday, May 28, 2014 6:42:53 AM
> >>> Subject: Spurious failure ./tests/bugs/bug-1049834.t [16]
> >>>
> >>> hi Avra,
> >>> Could you look into it.
> >>>
> >>> Patch ==> http://review.gluster.com/7889/1
> >>> Author==>  Avra Sengupta aseng...@redhat.com
> >>> Build triggered by==> amarts
> >>> Build-url ==>
> >>> http://build.gluster.org/job/regression/4586/consoleFull
> >>> Download-log-at   ==>
> >>> http://build.gluster.org:443/logs/regression/glusterfs-logs-20140527:14:51:09.tgz
> >>> Test written by   ==> Author: Avra Sengupta 
> >>>
> >>> ./tests/bugs/bug-1049834.t [16]
> >>>#!/bin/bash
> >>>
> >>>. $(dirname $0)/../include.rc
> >>>. $(dirname $0)/../cluster.rc
> >>>. $(dirname $0)/../volume.rc
> >>>. $(dirname $0)/../snapshot.rc
> >>>
> >>>cleanup;
> >>>  1 TEST verify_lvm_version
> >>>  2 TEST launch_cluster 2
> >>>  3 TEST setup_lvm 2
> >>>
> >>>  4 TEST $CLI_1 peer probe $H2
> >>>  5 EXPECT_WITHIN $PROBE_TIMEOUT 1 peer_count
> >>>
> >>>  6 TEST $CLI_1 volume create $V0 $H1:$L1 $H2:$L2
> >>>  7 EXPECT 'Created' volinfo_field $V0 'Status'
> >>>
> >>>  8 TEST $CLI_1 volume start $V0
> >>>  9 EXPECT 'Started' volinfo_field $V0 'Status'
> >>>
> >>>#Setting the snap-max-hard-limit to 4
> >>> 10 TEST $CLI_1 snapshot config $V0 snap-max-hard-limit 4
> >>>PID_1=$!
> >>>wait $PID_1
> >>>
> >>>#Creating 3 snapshots on the volume (which is the soft-limit)
> >>> 11 TEST create_n_snapshots $V0 3 $V0_snap
> >>> 12 TEST snapshot_n_exists $V0 3 $V0_snap
> >>>
> >>>#Creating the 4th snapshot on the volume and expecting it to be
> >>>created
> >>># but with the deletion of the oldest snapshot i.e 1st snapshot
> >>> 13 TEST  $CLI_1 snapshot create ${V0}_snap4 ${V0}
> >>> 14 TEST  snapshot_exists 1 ${V0}_snap4
> >>> 15 TEST ! snapshot_exists 1 ${V0}_snap1
> >>> ***16 TEST $CLI_1 snapshot delete ${V0}_snap4
> >>> 17 TEST $CLI_1 snapshot create ${V0}_snap1 ${V0}
> >>> 18 TEST snapshot_exists 1 ${V0}_snap1
> >>>
> >>>#Deleting the 4 snaps
> >>>#TEST delete_n_snapshots $V0 4 $V0_snap
> >>>#TEST ! snapshot_n_exists $V0 4 $V0_snap
> >>>
> >>>cleanup;
> >>>
> >>> Pranith
> >> ___
> >> Gluster-devel mailing list
> >> Gluster-devel@gluster.org
> >> http://supercolony.gluster.org/mailman/listinfo/gluster-devel
> >>
> 
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Spurious failure in ./tests/basic/volume.t [14, 15, 16]

2014-05-28 Thread Pranith Kumar Karampuri
Kp,
   Could you help Anuradha debug this issue.

Patch ==> http://review.gluster.com/#/c/7870/1
Author==>  Anuradha ata...@redhat.com
Build triggered by==> kkeithle
Build-url ==> 
http://build.gluster.org/job/regression/4600/consoleFull
Download-log-at   ==> 
http://build.gluster.org:443/logs/regression/glusterfs-logs-20140528:13:56:30.tgz
Test written by   ==> Author: Anand Avati 

./tests/basic/volume.t [14, 15, 16]
0 #!/bin/bash
1 
2 . $(dirname $0)/../include.rc
3 . $(dirname $0)/../volume.rc
4 
5 cleanup;
6 
7 TEST glusterd
8 TEST pidof glusterd
9 TEST $CLI volume info;
   10 
   11 TEST $CLI volume create $V0 replica 2 stripe 2 
$H0:$B0/${V0}{1,2,3,4,5,6,7,8};
   12 
   13 
   14 EXPECT "$V0" volinfo_field $V0 'Volume Name';
   15 EXPECT 'Created' volinfo_field $V0 'Status';
   16 EXPECT '8' brick_count $V0
   17 
   18 TEST $CLI volume start $V0;
   19 EXPECT 'Started' volinfo_field $V0 'Status';
   20 
   21 TEST $CLI volume add-brick $V0 $H0:$B0/${V0}{9,10,11,12};
   22 EXPECT '12' brick_count $V0
   23 
   24 TEST $CLI volume remove-brick $V0 $H0:$B0/${V0}{1,2,3,4} force;
   25 EXPECT '8' brick_count $V0
   26 
***27 TEST $CLI volume stop $V0;
***28 EXPECT 'Stopped' volinfo_field $V0 'Status';
   29 
***30 TEST $CLI volume delete $V0;
   31 TEST ! $CLI volume info $V0;
   32 
   33 cleanup;
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Need testers for GlusterFS 3.4.4

2014-05-28 Thread James
On Wed, May 28, 2014 at 5:02 PM, Justin Clift  wrote:
> Hi all,
>
> Are there any Community members around who can test the GlusterFS 3.4.4
> beta (rpms are available)?

I've provided all the tools and a how-to to do this yourself. Should
probably take ~20 minutes.

Old example:

https://ttboj.wordpress.com/2014/01/16/testing-glusterfs-during-glusterfest/

Same process should work, except base your testing on the latest
vagrant article:

https://ttboj.wordpress.com/2014/05/13/vagrant-on-fedora-with-libvirt-reprise/

If you haven't set it up already.

Cheers,
James


>
> We're looking for success/failure reports before releasing 3.4.4 final. :)
>
> Regards and best wishes,
>
> Justin Clift
>
> --
> Open Source and Standards @ Red Hat
>
> twitter.com/realjustinclift
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Need testers for GlusterFS 3.4.4

2014-05-28 Thread Justin Clift
Hi all,

Are there any Community members around who can test the GlusterFS 3.4.4
beta (rpms are available)?

We're looking for success/failure reports before releasing 3.4.4 final. :)

Regards and best wishes,

Justin Clift

--
Open Source and Standards @ Red Hat

twitter.com/realjustinclift

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] IP addresses in peer status/pool list

2014-05-28 Thread Paul Cuzner
- Original Message - 

> From: "James" 
> To: "Paul Cuzner" 
> Sent: Thursday, 29 May, 2014 4:20:02 AM
> Subject: Re: [Gluster-devel] IP addresses in peer status/pool list

> On Tue, May 27, 2014 at 7:57 PM, Paul Cuzner  wrote:
> > Hi,
> >
> > Can anyone shed any light on why I see IP addresses in peer status or pool
> > list output instead of names?
> >
> > In clusters where the names were used in the probes, and volumes built with
> > node names I see IP's in the peer status output?
> >
> > This has been the case for a while - I just like to understand why?
> >
> > I've even seen some output where 2 nodes are listed by IP instead of
> > names..
> >
> > Cheers,
> >
> > Paul C
> >
> >
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > http://supercolony.gluster.org/mailman/listinfo/gluster-devel
> >

> Is your DNS setup correctly?
> Also cat /etc/hosts

I hope so - the issue isn't in my test environment - it's in Red Hat's 
production IT environment!

If you look at the vol file, all the remote host definitions are by fqdn, but a 
peer status shows 2 of the 8 nodes as IP addresses, not names. My expectation 
would be only one IP address in the peer status list - which is one of the 
things I wanted someone on the list to confirm... just in case I was having a 
bad day...it happens :)


> PS: you should try to send plain text email if possible :)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] mempool disabling for regression tests

2014-05-28 Thread Justin Clift
On 23/05/2014, at 6:26 PM, Kaushal M wrote:

>>  $SRC/configure --prefix=$P/install --with-mountutildir=$P/install/sbin 
>> --with-initdir=$P/install/etc --enable-bd-xlator=yes --silent
> 
> A --enable-debug flag to configure should enable the debug build.


Testing this now in one of the new Rackspace slave nodes (rackspace-slave1).

Without the --enable-debug flag, things are passing the regression test
fully.  Let's see what happens... ;)
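
(For reference, that makes the slave's configure invocation quoted above look
roughly like this, the only change being the extra flag:)

    $SRC/configure --prefix=$P/install \
        --with-mountutildir=$P/install/sbin \
        --with-initdir=$P/install/etc \
        --enable-bd-xlator=yes --enable-debug --silent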

+ Justin

--
Open Source and Standards @ Red Hat

twitter.com/realjustinclift

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Change in exit codes of #gluster volume heal command

2014-05-28 Thread James
On Wed, May 28, 2014 at 1:10 PM, Jeff Darcy  wrote:
>> In addition, I would recommend considering using --xml and that
>> picking out the field you want to look at with a quick xml parser, and
>> then going with that directly. More stable than watching for a
>> specific return code.
>
> Parsing XML to get one bit of information doesn't seem like a great
> idea.  If the script needs exact counts then sure, parse away, but if
> all it needs to know is whether any are non-zero then what's wrong
> with having that reflected in the exit code?  That's far easier for a
> script to interpret, and there's plenty of precedent for using exit
> codes this way.


Parsing an exit code for 2 vs. 4 instead of 2 vs. zero and then doing
something based on that is, IMO, a bit flimsy.
I agree that checking whether a code is non-zero is highly useful.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Change in exit codes of #gluster volume heal command

2014-05-28 Thread Jeff Darcy
> In addition, I would recommend considering using --xml and that
> picking out the field you want to look at with a quick xml parser, and
> then going with that directly. More stable than watching for a
> specific return code.

Parsing XML to get one bit of information doesn't seem like a great
idea.  If the script needs exact counts then sure, parse away, but if
all it needs to know is whether any are non-zero then what's wrong
with having that reflected in the exit code?  That's far easier for a
script to interpret, and there's plenty of precedent for using exit
codes this way.
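
(To illustrate: if the exit status did reflect pending heals, a monitoring
check would shrink to something like the sketch below -- hypothetical, since
the current command does not behave this way yet, which is what this thread
is asking for:)

    # Hypothetical monitor check, assuming a build where 'heal info' exits
    # non-zero when entries are pending heal, heal-failed or in split-brain.
    if ! gluster volume heal data2 info >/dev/null 2>&1; then
        echo "WARNING: data2 has entries pending heal or in split-brain"
        exit 1
    fi
    echo "OK: nothing to heal on data2"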
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Change in exit codes of #gluster volume heal command

2014-05-28 Thread James
On Wed, May 28, 2014 at 6:38 AM, Vikhyat Umrao  wrote:
> Hi,
>
> Can we change the exit codes of #gluster volume heal command if fields like
> `to be healed`, `heal-failed` and `split-brain` have non zero values.
> It will help in a monitoring script to capture the heal details.
>
> Please let me know your inputs.
I have the same question Pranith has.

In addition, I would recommend considering using --xml and that
picking out the field you want to look at with a quick xml parser, and
then going with that directly. More stable than watching for a
specific return code.

An example of how to parse the xml can be found in:
https://github.com/purpleidea/puppet-gluster/blob/master/files/xml.py
(although there are surely other better ways too!)

HTH
James
>
> Thanks,
> Vikhyat
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Reminder: Weekly Gluster Community meeting is in 30 mins

2014-05-28 Thread Justin Clift
Oops, forgot the meeting URLs. ;)

Meeting summary:

  
http://meetbot.fedoraproject.org/gluster-meeting/2014-05-28/gluster-meeting.2014-05-28-15.00.html

Full logs:

  
http://meetbot.fedoraproject.org/gluster-meeting/2014-05-28/gluster-meeting.2014-05-28-15.00.log.html

+ Justin

--
Open Source and Standards @ Red Hat

twitter.com/realjustinclift

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Reminder: Weekly Gluster Community meeting is in 30 mins

2014-05-28 Thread Justin Clift
On 28/05/2014, at 3:29 PM, Justin Clift wrote:
> Reminder!!!
> 
> The weekly Gluster Community meeting is in 30 mins, in
> #gluster-meeting on IRC.
> 
> This is a completely public meeting, everyone is encouraged
> to attend and be a part of it. :)


Thanks for attending everyone. :)

* Lots of action items from last week covered
* GlusterFS 3.4.4 beta and 3.5.1 beta need testers to report
  back
* We'll get automated nightly build tests happening for OSX
  in the next two weeks

Regards and best wishes,

Justin Clift

--
Open Source and Standards @ Red Hat

twitter.com/realjustinclift

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [wireshark] TODO features

2014-05-28 Thread Pranith Kumar Karampuri


- Original Message -
> From: "Vikhyat Umrao" 
> To: "Niels de Vos" 
> Cc: gluster-devel@gluster.org
> Sent: Wednesday, May 28, 2014 3:37:47 PM
> Subject: Re: [Gluster-devel] [wireshark] TODO features
> 
> Hi Niels,
> 
> Thanks for all your inputs and help, I have submitted a patch:
> 
> 
> https://code.wireshark.org/review/1833

I have absolutely no idea how this is supposed to work, but just wanted to ask 
what will the 'name' variable be if the file name is '' i.e. 
RPC_STRING_EMPTY

Pranith
> 
> 
> 
> 
> 
> glusterfs: show filenames in the summary for common procedures
> 
> With this patch we will have filename on the summary for procedures MKDIR,
> CREATE and LOOKUP.
> 
> 
> 
> 
> Example output:
> 
> 173 18.309307 192.168.100.3 -> 192.168.100.4 GlusterFS 224 MKDIR V330 MKDIR
> Call, Filename: testdir
> 2606 36.767766 192.168.100.3 -> 192.168.100.4 GlusterFS 376 LOOKUP V330
> LOOKUP Call, Filename: 1.txt
> 2612 36.768242 192.168.100.3 -> 192.168.100.4 GlusterFS 228 CREATE V330
> CREATE Call, Filename: 1.txt

That looks good :-)

Pranith
> 
> Thanks,
> Vikhyat
> 
> 
> From: "Niels de Vos" 
> To: "Vikhyat Umrao" 
> Cc: gluster-devel@gluster.org
> Sent: Tuesday, April 29, 2014 11:16:20 PM
> Subject: Re: [Gluster-devel] [wireshark] TODO features
> 
> On Tue, Apr 29, 2014 at 06:25:15AM -0400, Vikhyat Umrao wrote:
> > Hi,
> > 
> > I am interested in TODO wireshark features for GlusterFS :
> > I can start from below given feature for one procedure:
> > => display the filename or filehandle on the summary for common procedures
> 
> Things to get you and others prepared:
> 
> 1. go to https://forge.gluster.org/wireshark/pages/Todo
> 2. login and edit the wiki page, add your name to the topic
> 3. clone the wireshark repository:
> $ git clone g...@forge.gluster.org:wireshark/wireshark.git
> (you have been added to the 'wireshark' group, so you should have
> push access over ssh)
> 4. create a new branch for your testing
> $ git checkout -t -b wip/master/visible-filenames upstream/master
> 5. make sure you have all the dependencies for compiling Wireshark
> (quite a lot are needed)
> $ ./autogen.sh
> $ ./configure --disable-wireshark
> (I tend to build only the commandline tools like 'tshark')
> $ make
> 6. you should now have a ./tshark executable that you can use for
> testing
> 
> 
> The changes you want to make are in epan/dissectors/packet-glusterfs.c.
> For example, start with adding the name of the file/dir that is passed
> to LOOKUP. The work to dissect the data in the network packet is done in
> glusterfs_gfs3_3_op_lookup_call(). It does not really matter on how that
> function gets executed, that is more a thing for an other task (add
> support for new procedures).
> 
> In the NFS-dissector, you can see how this is done. Check the
> implementation of the dissect_nfs3_lookup_call() function in
> epan/dissectors/packet-nfs.c. The col_append_fstr() function achieves
> what you want to do.
> 
> Of course, you really should share your changes! Now, 'git commit' your
> change with a suitable commit message and do
> 
> $ git push origin wip/master/visible-filenames
> 
> Your branch should now be visible under
> https://forge.gluster.org/wireshark/wireshark. Let me know, and I'll
> give it a whirl.
> 
> Now you've done the filename for LOOKUP, I'm sure you can think of other
> things that make sense to get displayed.
> 
> Do ask questions and send corrections if something is missing, or not
> working as explained here. This email should probably get included in
> the projects wiki https://forge.gluster.org/wireshark/pages/Home some
> where.
> 
> Good luck,
> Niels
> 
> 
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-devel
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Reminder: Weekly Gluster Community meeting is in 30 mins

2014-05-28 Thread Justin Clift
Reminder!!!

The weekly Gluster Community meeting is in 30 mins, in
#gluster-meeting on IRC.

This is a completely public meeting, everyone is encouraged
to attend and be a part of it. :)

To add Agenda items
***

Just add them to the main text of the Etherpad, and be at
the meeting. :)

  https://public.pad.fsfe.org/p/gluster-community-meetings

Regards and best wishes,

Justin Clift

--
Open Source and Standards @ Red Hat

twitter.com/realjustinclift

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Change in exit codes of #gluster volume heal command

2014-05-28 Thread Pranith Kumar Karampuri


- Original Message -
> From: "Vikhyat Umrao" 
> To: gluster-devel@gluster.org
> Sent: Wednesday, May 28, 2014 4:08:14 PM
> Subject: [Gluster-devel] Change in exit codes of #gluster volume heal command
> 
> Hi,
> 
> Can we change the exit codes of #gluster volume heal command if fields like
> `to be healed`, `heal-failed` and `split-brain` have non zero values.
> It will help in a monitoring script to capture the heal details.
> 
> Please let me know your inputs.

Vikhyat,
 Could you please tell us the exact use case which has the problem?

Pranith
> 
> Thanks,
> Vikhyat
> 
> 
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-devel
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Change in exit codes of #gluster volume heal command

2014-05-28 Thread Vikhyat Umrao
Hi, 

Can we change the exit codes of #gluster volume heal command if fields like `to 
be healed`, `heal-failed` and `split-brain` have non zero values. 
It will help in a monitoring script to capture the heal details. 

Please let me know your inputs. 

Thanks, 
Vikhyat 

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [wireshark] TODO features

2014-05-28 Thread Vikhyat Umrao
Hi Niels, 

Thanks for all your inputs and help, I have submitted a patch: 


https://code.wireshark.org/review/1833 





glusterfs: show filenames in the summary for common procedures 

With this patch we will have filename on the summary for procedures MKDIR, 
CREATE and LOOKUP. 




Example output: 

173 18.309307 192.168.100.3 -> 192.168.100.4 GlusterFS 224 MKDIR V330 MKDIR 
Call, Filename: testdir 
2606 36.767766 192.168.100.3 -> 192.168.100.4 GlusterFS 376 LOOKUP V330 LOOKUP 
Call, Filename: 1.txt 
2612 36.768242 192.168.100.3 -> 192.168.100.4 GlusterFS 228 CREATE V330 CREATE 
Call, Filename: 1.txt 
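
(To reproduce locally -- a sketch assuming a capture file named gluster.pcap
containing GlusterFS traffic and a tshark built from the patched tree; -Y is
the display-filter option in recent tshark builds:)

    # Show only GlusterFS packets; with the patch the Info column carries the
    # filename, e.g. "LOOKUP Call, Filename: 1.txt"
    ./tshark -r gluster.pcap -Y glusterfs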

Thanks, 
Vikhyat 

- Original Message -

From: "Niels de Vos"  
To: "Vikhyat Umrao"  
Cc: gluster-devel@gluster.org 
Sent: Tuesday, April 29, 2014 11:16:20 PM 
Subject: Re: [Gluster-devel] [wireshark] TODO features 

On Tue, Apr 29, 2014 at 06:25:15AM -0400, Vikhyat Umrao wrote: 
> Hi, 
> 
> I am interested in TODO wireshark features for GlusterFS : 
> I can start from below given feature for one procedure: 
> => display the filename or filehandle on the summary for common procedures 

Things to get you and others prepared: 

1. go to https://forge.gluster.org/wireshark/pages/Todo 
2. login and edit the wiki page, add your name to the topic 
3. clone the wireshark repository: 
$ git clone g...@forge.gluster.org:wireshark/wireshark.git 
(you have been added to the 'wireshark' group, so you should have 
push access over ssh) 
4. create a new branch for your testing 
$ git checkout -t -b wip/master/visible-filenames upstream/master 
5. make sure you have all the dependencies for compiling Wireshark 
(quite a lot are needed) 
$ ./autogen.sh 
$ ./configure --disable-wireshark 
(I tend to build only the commandline tools like 'tshark') 
$ make 
6. you should now have a ./tshark executable that you can use for 
testing 


The changes you want to make are in epan/dissectors/packet-glusterfs.c. 
For example, start with adding the name of the file/dir that is passed 
to LOOKUP. The work to dissect the data in the network packet is done in 
glusterfs_gfs3_3_op_lookup_call(). It does not really matter how that 
function gets executed; that is more a topic for another task (adding 
support for new procedures). 

In the NFS-dissector, you can see how this is done. Check the 
implementation of the dissect_nfs3_lookup_call() function in 
epan/dissectors/packet-nfs.c. The col_append_fstr() function achieves 
what you want to do. 
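
A quick way to locate both spots from the top of the wireshark checkout (the 
function and file names are the ones mentioned above): 

  # the GlusterFS LOOKUP call dissector that needs extending
  grep -n 'glusterfs_gfs3_3_op_lookup_call' epan/dissectors/packet-glusterfs.c

  # the NFS equivalent, to copy the col_append_fstr() pattern from
  grep -n -A 30 'dissect_nfs3_lookup_call(' epan/dissectors/packet-nfs.c | less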

Of course, you really should share your changes! Now, 'git commit' your 
change with a suitable commit message and do 

$ git push origin wip/master/visible-filenames 

Your branch should now be visible under 
https://forge.gluster.org/wireshark/wireshark. Let me know, and I'll 
give it a whirl. 

Once you've done the filename for LOOKUP, I'm sure you can think of other 
things that would make sense to display. 

Do ask questions and send corrections if something is missing or not 
working as explained here. This email should probably be included somewhere 
in the project's wiki at https://forge.gluster.org/wireshark/pages/Home. 

Good luck, 
Niels 



Re: [Gluster-devel] [Gluster-users] glusterfs-3.5.1beta released

2014-05-28 Thread Franco Broi
Hi

My clients are running 3.4.1. When I try to mount from lots of machines
simultaneously, some of the mounts hang. Stopping and starting the
volume clears the hung mounts.

Errors in the client logs

[2014-05-28 01:47:15.930866] E 
[client-handshake.c:1741:client_query_portmap_cbk] 0-data2-client-3: failed to 
get the port number for remote subvolume. Please run 'gluster volume status' on 
server to see if brick process is running.

Let me know if you want more information.

Cheers,

On Sun, 2014-05-25 at 11:55 +0200, Niels de Vos wrote: 
> On Sat, 24 May, 2014 at 11:34:36PM -0700, Gluster Build System wrote:
> > 
> > SRC: 
> > http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.5.1beta.tar.gz
> 
> This beta release is intended to verify the changes that should resolve 
> the bugs listed below. We appreciate tests done by anyone. Please leave 
> a comment in the respective bugreport with a short description of the 
> success or failure. Visiting one of the bugreports is as easy as opening 
> the bugzilla.redhat.com/$BUG URL; for the first in the list, this 
> results in http://bugzilla.redhat.com/765202.
> 
> Bugs expected to be fixed (31 in total since 3.5.0):
> 
>  #765202 - lgetxattr called with invalid keys on the bricks
>  #833586 - inodelk hang from marker_rename_release_newp_lock
>  #859581 - self-heal process can sometimes create directories instead of 
> symlinks for the root gfid file in .glusterfs
>  #986429 - Backupvolfile server option should work internal to GlusterFS 
> framework
> #1039544 - [FEAT] "gluster volume heal info" should list the entries that 
> actually required to be healed.
> #1046624 - Unable to heal symbolic Links
> #1046853 - AFR : For every file self-heal there are warning messages reported 
> in glustershd.log file
> #1063190 - [RHEV-RHS] Volume was not accessible after server side quorum was 
> met
> #1064096 - The old Python Translator code (not Glupy) should be removed
> #1066996 - Using sanlock on a gluster mount with replica 3 (quorum-type auto) 
> leads to a split-brain
> #1071191 - [3.5.1] Sporadic SIGBUS with mmap() on a sparse file created with 
> open(), seek(), write()
> #1078061 - Need ability to heal mismatching user extended attributes without 
> any changelogs
> #1078365 - New xlators are linked as versioned .so files, creating 
> .so.0.0.0
> #1086748 - Add documentation for the Feature: AFR CLI enhancements
> #1086750 - Add documentation for the Feature: File Snapshots in GlusterFS
> #1086752 - Add documentation for the Feature: On-Wire 
> Compression/Decompression
> #1086756 - Add documentation for the Feature: zerofill API for GlusterFS
> #1086758 - Add documentation for the Feature: Changelog based parallel 
> geo-replication
> #1086760 - Add documentation for the Feature: Write Once Read Many (WORM) 
> volume
> #1086762 - Add documentation for the Feature: BD Xlator - Block Device 
> translator
> #1088848 - Spelling errors in rpc/rpc-transport/rdma/src/rdma.c
> #1089054 - gf-error-codes.h is missing from source tarball
> #1089470 - SMB: Crash on brick process during compile kernel.
> #1089934 - list dir with more than N files results in Input/output error
> #1091340 - Doc: Add glfs_fini known issue to release notes 3.5
> #1091392 - glusterfs.spec.in: minor/nit changes to sync with Fedora spec
> #1095775 - Add support in libgfapi to fetch volume info from glusterd.
> #1095971 - Stopping/Starting a Gluster volume resets ownership
> #1096040 - AFR : self-heal-daemon not clearing the change-logs of all the 
> sources after self-heal
> #1096425 - i/o error when one user tries to access RHS volume over NFS with 
> 100+ GIDs
> #1099878 - Need support for handle based Ops to fetch/modify extended 
> attributes of a file
> 
> 
> Before a final glusterfs-3.5.1 release is made, we hope to have all the 
> blocker bugs fixed. There are currently 13 bugs marked that still need 
> some work done:
> 
> #1081016 - glusterd needs xfsprogs and e2fsprogs packages
> #1086743 - Add documentation for the Feature: RDMA-connection manager 
> (RDMA-CM)
> #1086749 - Add documentation for the Feature: Exposing Volume Capabilities
> #1086751 - Add documentation for the Feature: gfid-access
> #1086754 - Add documentation for the Feature: Quota Scalability
> #1086755 - Add documentation for the Feature: readdir-ahead
> #1086759 - Add documentation for the Feature: Improved block device translator
> #1086766 - Add documentation for the Feature: Libgfapi
> #1086774 - Add documentation for the Feature: Access Control List - Version 3 
> support for Gluster NFS
> #1086781 - Add documentation for the Feature: Eager locking
> #1086782 - Add documentation for the Feature: glusterfs and  oVirt integration
> #1086783 - Add documentation for the Feature: qemu 1.3 - libgfapi integration
> #1095595 - Stick to IANA standard while allocating brick ports
> 
> A more detailed overview of the status of each of these bugs is here:
> - https://bugzilla.redhat.com/showdependencytree.cgi?id=

Re: [Gluster-devel] Spurious failure in ./tests/bugs/bug-948686.t [14, 15, 16]

2014-05-28 Thread Krishnan Parthasarathi
I am looking into this issue. I will update this email thread
once I have the root cause.

thanks,
Krish

- Original Message -
> hi kp,
>  Could you look into it.
> 
> Patch ==> http://review.gluster.com/7889/1
> Author==>  Avra Sengupta aseng...@redhat.com
> Build triggered by==> amarts
> Build-url ==>
> http://build.gluster.org/job/regression/4586/consoleFull
> Download-log-at   ==>
> http://build.gluster.org:443/logs/regression/glusterfs-logs-20140527:14:51:09.tgz
> Test written by   ==> Author: Krishnan Parthasarathi
> 
> 
> ./tests/bugs/bug-948686.t [14, 15, 16]
>   #!/bin/bash
>   
>   . $(dirname $0)/../include.rc
>   . $(dirname $0)/../volume.rc
>   . $(dirname $0)/../cluster.rc
>   
>   function check_peers {
>   $CLI_1 peer status | grep 'Peer in Cluster (Connected)' | wc -l
>   }
>   cleanup;
>   #setup cluster and test volume
> 1 TEST launch_cluster 3; # start 3-node virtual cluster
> 2 TEST $CLI_1 peer probe $H2; # peer probe server 2 from server 1 cli
> 3 TEST $CLI_1 peer probe $H3; # peer probe server 3 from server 1 cli
>   
> 4 EXPECT_WITHIN $PROBE_TIMEOUT 2 check_peers;
>   
> 5 TEST $CLI_1 volume create $V0 replica 2 $H1:$B1/$V0 $H1:$B1/${V0}_1
> $H2:$B2/$V0 $H3:$B3/$V0
> 6 TEST $CLI_1 volume start $V0
> 7 TEST glusterfs --volfile-server=$H1 --volfile-id=$V0 $M0
>   
>   #kill a node
> 8 TEST kill_node 3
>   
>   #modify volume config to see change in volume-sync
> 9 TEST $CLI_1 volume set $V0 write-behind off
>   #add some files to the volume to see effect of volume-heal cmd
>10 TEST touch $M0/{1..100};
>11 TEST $CLI_1 volume stop $V0;
>12 TEST $glusterd_3;
>13 EXPECT_WITHIN $PROBE_TIMEOUT 2 check_peers;
> ***14 TEST $CLI_3 volume start $V0;
> ***15 TEST $CLI_2 volume stop $V0;
> ***16 TEST $CLI_2 volume delete $V0;
>   
>   cleanup;
>   
>17 TEST glusterd;
>18 TEST $CLI volume create $V0 $H0:$B0/$V0
>19 TEST $CLI volume start $V0
>   pkill glusterd;
>   pkill glusterfsd;
>20 TEST glusterd
>21 TEST $CLI volume status $V0
>   
>   cleanup;
> 
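
For anyone trying to reproduce a spurious failure like this locally, a rough 
sketch (it assumes a built glusterfs source tree with the usual prove-based 
test harness, and that the regression tests are run as root on a scratch 
machine; the iteration count is arbitrary): 

  cd /path/to/glusterfs          # built source tree, path is illustrative
  for i in $(seq 1 20); do
      echo "=== run $i ==="
      prove -vf ./tests/bugs/bug-948686.t || break
  done
  # if a run fails, the glusterfs logs under /var/log/glusterfs are worth keeping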


Re: [Gluster-devel] Spurious failure ./tests/bugs/bug-1049834.t [16]

2014-05-28 Thread Pranith Kumar Karampuri


- Original Message -
> From: "Avra Sengupta" 
> To: "Pranith Kumar Karampuri" 
> Cc: "Gluster Devel" 
> Sent: Wednesday, May 28, 2014 5:04:40 PM
> Subject: Re: [Gluster-devel] Spurious failure ./tests/bugs/bug-1049834.t [16]
> 
> Pranith, I am looking into a priority issue for
> snapshot (https://bugzilla.redhat.com/show_bug.cgi?id=1098045) right now.
> I will get started with this spurious failure as soon as I finish it,
> which should be done by EOD tomorrow at the latest.

Thanks for the ack Avra.

Pranith

> 
> Regards,
> Avra
> 
> On 05/28/2014 06:46 AM, Pranith Kumar Karampuri wrote:
> > FYI, this test failed more than once yesterday. Same test failed both the
> > times.
> >
> > Pranith
> > - Original Message -
> >> From: "Pranith Kumar Karampuri" 
> >> To: "Avra Sengupta" 
> >> Cc: "Gluster Devel" 
> >> Sent: Wednesday, May 28, 2014 6:43:52 AM
> >> Subject: Re: [Gluster-devel] Spurious failure ./tests/bugs/bug-1049834.t
> >> [16]
> >>
> >>
> >> CC gluster-devel
> >>
> >> Pranith
> >> - Original Message -
> >>> From: "Pranith Kumar Karampuri" 
> >>> To: "Avra Sengupta" 
> >>> Sent: Wednesday, May 28, 2014 6:42:53 AM
> >>> Subject: Spurious failure ./tests/bugs/bug-1049834.t [16]
> >>>
> >>> hi Avra,
> >>> Could you look into it.
> >>>
> >>> Patch ==> http://review.gluster.com/7889/1
> >>> Author==>  Avra Sengupta aseng...@redhat.com
> >>> Build triggered by==> amarts
> >>> Build-url ==>
> >>> http://build.gluster.org/job/regression/4586/consoleFull
> >>> Download-log-at   ==>
> >>> http://build.gluster.org:443/logs/regression/glusterfs-logs-20140527:14:51:09.tgz
> >>> Test written by   ==> Author: Avra Sengupta 
> >>>
> >>> ./tests/bugs/bug-1049834.t [16]
> >>>#!/bin/bash
> >>>
> >>>. $(dirname $0)/../include.rc
> >>>. $(dirname $0)/../cluster.rc
> >>>. $(dirname $0)/../volume.rc
> >>>. $(dirname $0)/../snapshot.rc
> >>>
> >>>cleanup;
> >>>  1 TEST verify_lvm_version
> >>>  2 TEST launch_cluster 2
> >>>  3 TEST setup_lvm 2
> >>>
> >>>  4 TEST $CLI_1 peer probe $H2
> >>>  5 EXPECT_WITHIN $PROBE_TIMEOUT 1 peer_count
> >>>
> >>>  6 TEST $CLI_1 volume create $V0 $H1:$L1 $H2:$L2
> >>>  7 EXPECT 'Created' volinfo_field $V0 'Status'
> >>>
> >>>  8 TEST $CLI_1 volume start $V0
> >>>  9 EXPECT 'Started' volinfo_field $V0 'Status'
> >>>
> >>>#Setting the snap-max-hard-limit to 4
> >>> 10 TEST $CLI_1 snapshot config $V0 snap-max-hard-limit 4
> >>>PID_1=$!
> >>>wait $PID_1
> >>>
> >>>#Creating 3 snapshots on the volume (which is the soft-limit)
> >>> 11 TEST create_n_snapshots $V0 3 $V0_snap
> >>> 12 TEST snapshot_n_exists $V0 3 $V0_snap
> >>>
> >>>#Creating the 4th snapshot on the volume and expecting it to be
> >>>created
> >>># but with the deletion of the oldest snapshot i.e 1st snapshot
> >>> 13 TEST  $CLI_1 snapshot create ${V0}_snap4 ${V0}
> >>> 14 TEST  snapshot_exists 1 ${V0}_snap4
> >>> 15 TEST ! snapshot_exists 1 ${V0}_snap1
> >>> ***16 TEST $CLI_1 snapshot delete ${V0}_snap4
> >>> 17 TEST $CLI_1 snapshot create ${V0}_snap1 ${V0}
> >>> 18 TEST snapshot_exists 1 ${V0}_snap1
> >>>
> >>>#Deleting the 4 snaps
> >>>#TEST delete_n_snapshots $V0 4 $V0_snap
> >>>#TEST ! snapshot_n_exists $V0 4 $V0_snap
> >>>
> >>>cleanup;
> >>>
> >>> Pranith
> >> ___
> >> Gluster-devel mailing list
> >> Gluster-devel@gluster.org
> >> http://supercolony.gluster.org/mailman/listinfo/gluster-devel
> >>
> 
> 


Re: [Gluster-devel] Spurious failure ./tests/bugs/bug-1049834.t [16]

2014-05-28 Thread Avra Sengupta
Pranith, I am looking into a priority issue for 
snapshot (https://bugzilla.redhat.com/show_bug.cgi?id=1098045) right now. 
I will get started with this spurious failure as soon as I finish it, 
which should be done by EOD tomorrow at the latest.


Regards,
Avra

On 05/28/2014 06:46 AM, Pranith Kumar Karampuri wrote:

FYI, this test failed more than once yesterday. Same test failed both the times.

Pranith
- Original Message -

From: "Pranith Kumar Karampuri" 
To: "Avra Sengupta" 
Cc: "Gluster Devel" 
Sent: Wednesday, May 28, 2014 6:43:52 AM
Subject: Re: [Gluster-devel] Spurious failure ./tests/bugs/bug-1049834.t [16]


CC gluster-devel

Pranith
- Original Message -

From: "Pranith Kumar Karampuri" 
To: "Avra Sengupta" 
Sent: Wednesday, May 28, 2014 6:42:53 AM
Subject: Spurious failure ./tests/bugs/bug-1049834.t [16]

hi Avra,
Could you look into it.

Patch ==> http://review.gluster.com/7889/1
Author==>  Avra Sengupta aseng...@redhat.com
Build triggered by==> amarts
Build-url ==>
http://build.gluster.org/job/regression/4586/consoleFull
Download-log-at   ==>
http://build.gluster.org:443/logs/regression/glusterfs-logs-20140527:14:51:09.tgz
Test written by   ==> Author: Avra Sengupta 

./tests/bugs/bug-1049834.t [16]
   #!/bin/bash

   . $(dirname $0)/../include.rc
   . $(dirname $0)/../cluster.rc
   . $(dirname $0)/../volume.rc
   . $(dirname $0)/../snapshot.rc

   cleanup;
 1 TEST verify_lvm_version
 2 TEST launch_cluster 2
 3 TEST setup_lvm 2

 4 TEST $CLI_1 peer probe $H2
 5 EXPECT_WITHIN $PROBE_TIMEOUT 1 peer_count

 6 TEST $CLI_1 volume create $V0 $H1:$L1 $H2:$L2
 7 EXPECT 'Created' volinfo_field $V0 'Status'

 8 TEST $CLI_1 volume start $V0
 9 EXPECT 'Started' volinfo_field $V0 'Status'

   #Setting the snap-max-hard-limit to 4
10 TEST $CLI_1 snapshot config $V0 snap-max-hard-limit 4
   PID_1=$!
   wait $PID_1

   #Creating 3 snapshots on the volume (which is the soft-limit)
11 TEST create_n_snapshots $V0 3 $V0_snap
12 TEST snapshot_n_exists $V0 3 $V0_snap

   #Creating the 4th snapshot on the volume and expecting it to be created
   # but with the deletion of the oldest snapshot i.e 1st snapshot
13 TEST  $CLI_1 snapshot create ${V0}_snap4 ${V0}
14 TEST  snapshot_exists 1 ${V0}_snap4
15 TEST ! snapshot_exists 1 ${V0}_snap1
***16 TEST $CLI_1 snapshot delete ${V0}_snap4
17 TEST $CLI_1 snapshot create ${V0}_snap1 ${V0}
18 TEST snapshot_exists 1 ${V0}_snap1

   #Deleting the 4 snaps
   #TEST delete_n_snapshots $V0 4 $V0_snap
   #TEST ! snapshot_n_exists $V0 4 $V0_snap

   cleanup;


Pranith

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel





Re: [Gluster-devel] regarding special treatment of ENOTSUP for setxattr

2014-05-28 Thread Pranith Kumar Karampuri
Vijay,
   Could you please merge http://review.gluster.com/7788 if there are no more 
concerns.

Pranith
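
As an aside for anyone following the thread below: a quick sanity check that a 
brick filesystem accepts the trusted.* keys gluster relies on (a sketch; the 
brick path is illustrative and the commands need root): 

  BRICK=/bricks/brick1                 # illustrative brick path
  touch $BRICK/xattr-check
  setfattr -n trusted.glusterfs.test -v working $BRICK/xattr-check
  getfattr -n trusted.glusterfs.test $BRICK/xattr-check
  rm -f $BRICK/xattr-check

A failure here points at the backend filesystem or its mount options rather 
than at the posix xlator's logging.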
- Original Message -
> From: "Pranith Kumar Karampuri" 
> To: "Harshavardhana" 
> Cc: "Gluster Devel" 
> Sent: Monday, May 26, 2014 1:18:18 PM
> Subject: Re: [Gluster-devel] regarding special treatment of ENOTSUP for 
> setxattr
> 
> Please review http://review.gluster.com/7788 submitted to remove the
> filtering of that error.
> 
> Pranith
> - Original Message -
> > From: "Harshavardhana" 
> > To: "Pranith Kumar Karampuri" 
> > Cc: "Kaleb KEITHLEY" , "Gluster Devel"
> > 
> > Sent: Friday, May 23, 2014 2:12:02 AM
> > Subject: Re: [Gluster-devel] regarding special treatment of ENOTSUP for
> > setxattr
> > 
> > http://review.gluster.com/#/c/7823/ - the fix here
> > 
> > On Thu, May 22, 2014 at 1:41 PM, Harshavardhana
> >  wrote:
> > > Here are the important locations in the XFS tree coming from 2.6.32
> > > branch
> > >
> > > STATIC int
> > > xfs_set_acl(struct inode *inode, int type, struct posix_acl *acl)
> > > {
> > > struct xfs_inode *ip = XFS_I(inode);
> > > unsigned char *ea_name;
> > > int error;
> > >
> > > if (S_ISLNK(inode->i_mode))  <-- I would generally think this is the issue.
> > > return -EOPNOTSUPP;
> > >
> > > STATIC long
> > > xfs_vn_fallocate(
> > > struct inode*inode,
> > > int mode,
> > > loff_t  offset,
> > > loff_t  len)
> > > {
> > > longerror;
> > > loff_t  new_size = 0;
> > > xfs_flock64_t   bf;
> > > xfs_inode_t *ip = XFS_I(inode);
> > > int cmd = XFS_IOC_RESVSP;
> > > int attr_flags = XFS_ATTR_NOLOCK;
> > >
> > > if (mode & ~(FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE))
> > > return -EOPNOTSUPP;
> > >
> > > STATIC int
> > > xfs_ioc_setxflags(
> > > xfs_inode_t *ip,
> > > struct file *filp,
> > > void__user *arg)
> > > {
> > > struct fsxattr  fa;
> > > unsigned intflags;
> > > unsigned intmask;
> > > int error;
> > >
> > > if (copy_from_user(&flags, arg, sizeof(flags)))
> > > return -EFAULT;
> > >
> > > if (flags & ~(FS_IMMUTABLE_FL | FS_APPEND_FL | \
> > >   FS_NOATIME_FL | FS_NODUMP_FL | \
> > >   FS_SYNC_FL))
> > > return -EOPNOTSUPP;
> > >
> > > Perhaps some sort of system level acl's are being propagated by us
> > > over symlinks() ? - perhaps this is the related to the same issue of
> > > following symlinks?
> > >
> > > On Sun, May 18, 2014 at 10:48 AM, Pranith Kumar Karampuri
> > >  wrote:
> > >> Sent the following patch to remove the special treatment of ENOTSUP
> > >> here:
> > >> http://review.gluster.org/7788
> > >>
> > >> Pranith
> > >> - Original Message -
> > >>> From: "Kaleb KEITHLEY" 
> > >>> To: gluster-devel@gluster.org
> > >>> Sent: Tuesday, May 13, 2014 8:01:53 PM
> > >>> Subject: Re: [Gluster-devel] regarding special treatment of ENOTSUP for
> > >>> setxattr
> > >>>
> > >>> On 05/13/2014 08:00 AM, Nagaprasad Sathyanarayana wrote:
> > >>> > On 05/07/2014 03:44 PM, Pranith Kumar Karampuri wrote:
> > >>> >>
> > >>> >> - Original Message -
> > >>> >>> From: "Raghavendra Gowdappa" 
> > >>> >>> To: "Pranith Kumar Karampuri" 
> > >>> >>> Cc: "Vijay Bellur" , gluster-devel@gluster.org,
> > >>> >>> "Anand Avati" 
> > >>> >>> Sent: Wednesday, May 7, 2014 3:42:16 PM
> > >>> >>> Subject: Re: [Gluster-devel] regarding special treatment of ENOTSUP
> > >>> >>> for setxattr
> > >>> >>>
> > >>> >>> I think with "repetitive log message suppression" patch being
> > >>> >>> merged,
> > >>> >>> we
> > >>> >>> don't really need gf_log_occasionally (except if they are logged in
> > >>> >>> DEBUG or
> > >>> >>> TRACE levels).
> > >>> >> That definitely helps. But still, setxattr calls are not supposed to
> > >>> >> fail with ENOTSUP on FS where we support gluster. If there are
> > >>> >> special
> > >>> >> keys which fail with ENOTSUPP, we can conditionally log setxattr
> > >>> >> failures only when the key is something new?
> > >>>
> > >>> I know this is about EOPNOTSUPP (a.k.a. ENOTSUPP) returned by
> > >>> setxattr(2) for legitimate attrs.
> > >>>
> > >>> But I can't help but wondering if this isn't related to other bugs
> > >>> we've
> > >>> had with, e.g., lgetxattr(2) called on invalid xattrs?
> > >>>
> > >>> E.g. see https://bugzilla.redhat.com/show_bug.cgi?id=765202. We have a
> > >>> hack where xlators communicate with each other by getting (and
> > >>> setting?)
> > >>> invalid xattrs; the posix xlator has logic to filter out  invalid
> > >>> xattrs, but due to bugs this hasn't always worked perfectly.
> > >>>
> > >>> It would be interesting to know which xattrs are getting erro

Re: [Gluster-devel] IP addresses in peer status/pool list

2014-05-28 Thread Paul Cuzner
Hi, 

Thanks for confirming the presence of, and the reasons for, a single IP address - 
that's what I would expect to see. 

The system in question is an internal Red Hat cluster that we're trying to 
understand, since it's tripping up gstatus :) 

A gluster peer status on an 8-node cluster is returning **2** nodes as IP 
addresses. 

redhat-storage-server-2.1.1.0-6.el6rhs.noarch 
glusterfs-server-3.4.0.44rhs-1.el6rhs.x86_64 

What I wanted to understand was whether there is something 'obvious' to explain 
the IP addresses. 

Since there isn't - I'll get the sysadmin to raise a BZ, and post the BZ to the 
list. 
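
In the meantime, a sketch of what might be worth attaching to the BZ, plus the 
reverse probe Kaushal mentions below (hostnames and paths are illustrative 
defaults): 

  gluster --version
  gluster peer status
  gluster pool list
  cat /var/lib/glusterd/peers/*    # the uuid, state and hostname glusterd has stored per peer

  # reverse probe: from a node that lists (say) node1 by IP, re-probe it by
  # name; for an already-known peer this just records the hostname
  gluster peer probe node1.example.com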

Cheers, 

PC 

- Original Message -

> From: "Kaushal M" 
> To: "Paul Cuzner" 
> Cc: "Gluster Devel" 
> Sent: Wednesday, 28 May, 2014 5:49:10 PM
> Subject: Re: [Gluster-devel] IP addresses in peer status/pool list

> This is surprising. There hasn't been any change that could cause this
> kind of behaviour AFAICT.

> Normally, the peer from which the probes were performed would be shown
> using IPs on the other nodes. But a reverse probe with the hostname
> should fix this too.

> Could you share more information on your setup (version, logs, etc)?

> ~kaushal

> On Wed, May 28, 2014 at 5:27 AM, Paul Cuzner  wrote:
> > Hi,
> >
> > Can anyone shed any light on why I see IP addresses in peer status or pool
> > list output instead of names?
> >
> > In clusters where names were used for the probes, and the volumes were built
> > with node names, I still see IPs in the peer status output.
> >
> > This has been the case for a while - I'd just like to understand why.
> >
> > I've even seen output where 2 nodes are listed by IP instead of
> > names.
> >
> > Cheers,
> >
> > Paul C
> >
> >
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > http://supercolony.gluster.org/mailman/listinfo/gluster-devel
> >