Re: [Gluster-devel] glustershd status

2014-07-17 Thread Emmanuel Dreyfus
On Wed, Jul 16, 2014 at 09:54:49PM -0700, Harshavardhana wrote:
 In fact this is true on Linux as well - there is a smaller time window.
 Observe the output below: immediately run 'volume status' after a
 'volume start' event.

I observe the same lapse on NetBSD if the volume is created and started.
If it is stopped and started again, glustershd never reports back.

-- 
Emmanuel Dreyfus
m...@netbsd.org
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] glustershd status

2014-07-17 Thread Harshavardhana
On a side note: while looking into this issue I uncovered a memory leak
too - after successful registration with glusterd, the self-heal daemon
and NFS server are killed by the FreeBSD memory manager. Have you
observed any memory leaks?
I have the valgrind output and it clearly indicates large memory
leaks - perhaps it is just a FreeBSD thing!
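For reference, a sketch of how such a valgrind run can be set up. The glusterfs flags, volfile-id, and log paths below are illustrative assumptions, not the exact invocation used here:

```shell
# Sketch: run the self-heal daemon in the foreground under valgrind so the
# leak report is captured. The glusterfs arguments are illustrative
# assumptions; adjust the volfile-id and log paths for the actual install.
VG="valgrind --leak-check=full --show-reachable=yes --log-file=/tmp/shd-vg.log"
SHD="glusterfs -N --volfile-id=gluster/glustershd -l /tmp/glustershd.log"
# After stopping the running self-heal daemon, one would start it as:
#   $VG $SHD
printf '%s %s\n' "$VG" "$SHD"
```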




-- 
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes


Re: [Gluster-devel] glustershd status

2014-07-17 Thread Krishnan Parthasarathi
Emmanuel,

Could you take a statedump* of the glustershd process once it has leaked
enough memory to be observable, and share the output? This might tell
us what kind of objects we are allocating in abnormally high numbers.

* statedump of a glusterfs process:
# kill -USR1 <pid of process>

HTH,
Krish
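A small wrapper around that signal (a sketch; the pid-file and dump locations in the comments are the usual defaults, but may differ per install):

```shell
# Sketch: trigger a statedump by sending SIGUSR1, then wait briefly for the
# process's signal handler to write the dump. GlusterFS writes statedumps
# under /var/run/gluster by default; the pid-file path below is an assumption.
take_statedump() {
    pid=$1
    kill -USR1 "$pid" || return 1
    sleep 1   # give the signal handler time to write the dump file
}
# Example usage (paths are assumed defaults):
#   take_statedump "$(cat /var/lib/glusterd/glustershd/run/glustershd.pid)"
#   ls /var/run/gluster/glusterdump.*
```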


- Original Message -
 On Wed, Jul 16, 2014 at 11:32:06PM -0700, Harshavardhana wrote:
  On a side note while looking into this issue  - I uncovered a memory
  leak too which after successful registration with glusterd, Self-heal
  daemon and NFS server are killed by FreeBSD memory manager. Have you
  observed any memory leaks?
  I have the valgrind output and it clearly indicates of large memory
  leaks - perhaps it could be just FreeBSD thing!
 
 I observed memory leaks during long-term usage. My favourite test case
 is building NetBSD on a replicated/distributed volume, and I can see
 processes growing a lot during the build. I reported it some time ago,
 and some leaks were plugged, but obviously some remain.
 
 valgrind was never ported to NetBSD, hence I lack investigative tools,
 but I bet the leaks exist on FreeBSD and Linux as well.
 
 --
 Emmanuel Dreyfus
 m...@netbsd.org
 


Re: [Gluster-devel] glustershd status

2014-07-17 Thread Harshavardhana
KP,

I have 3.2 GiB worth of valgrind output which indicates this issue, and
I am trying to reproduce it on Linux.

My hunch is that compiling with --disable-epoll might trigger this
issue on Linux too. I will update here once I have done that testing.








Re: [Gluster-devel] glustershd status

2014-07-17 Thread Krishnan Parthasarathi
Harsha,

In addition to the valgrind output, a statedump of the glustershd process
when the leak is observed would be really helpful.

thanks,
Krish

- Original Message -
 Nope, spoke too early - using poll() has no effect on the memory usage
 on Linux, so it is actually back to FreeBSD.
 
 


Re: [Gluster-devel] Developer Documentation for datastructures in gluster

2014-07-17 Thread Ravishankar N

On 07/15/2014 04:39 PM, Pranith Kumar Karampuri wrote:

hi,
  Please respond if you guys volunteer to add documentation for 
any of the following things that are not already taken.


client_t - pranith
integration with statedump - pranith
mempool - Pranith

event-history + circ-buff - Raghavendra Bhat
inode - Raghavendra Bhat

call-stub
fd
iobuf
graph
xlator
option-framework
rbthash
runner-framework
stack/frame
strfd
timer
store
gid-cache(source is heavily documented)
dict
event-poll




I'll take up event-poll. I have created an etherpad link with the 
components and volunteers thus far:

https://etherpad.wikimedia.org/p/glusterdoc
Feel free to update this doc with your patch details, other components etc.

- Ravi


Pranith




[Gluster-devel] Unnecessary bugs entered in BZ, ...

2014-07-17 Thread Anders Blomdell
...and I'm sorry about that, the following documents are somewhat contradictory:


http://gluster.org/community/documentation/index.php/Simplified_dev_workflow

The script will ask you to enter a bugzilla bug id. Every 
change submitted to GlusterFS needs a bugzilla entry to be 
accepted. If you do not already have a bug id, file a new 
bug at Red Hat Bugzilla. If the patch is submitted for review, 
the rfc.sh script will return the gerrit url for the review request. 


www.gluster.org/community/documentation/index.php/Development_Work_Flow

Prompt for a Bug Id for each commit (if it was not already provded) and 
include it as a BUG: tag in the commit log. You can just hit enter at 
this prompt if your submission is purely for review purposes. 


/Anders
-- 
Anders Blomdell  Email: anders.blomd...@control.lth.se
Department of Automatic Control
Lund University  Phone:+46 46 222 4625
P.O. Box 118 Fax:  +46 46 138118
SE-221 00 Lund, Sweden



Re: [Gluster-devel] Unnecessary bugs entered in BZ, ...

2014-07-17 Thread Kaushal M
I don't understand your confusion, but maybe these docs need to be reworded.

What both of these want to say is that for a patch to be merged into
glusterfs, it needs to be associated with a bug-id. This association
is done by adding a 'BUG: <id>' line in the commit message. If you
haven't manually added a bug-id in the commit message, the rfc.sh
script will prompt you to enter one and add it to the commit message.
But it is possible to ignore this prompt and submit a patch for review.
A patch submitted for review in this manner will only be reviewed; it
will not be merged.
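Concretely, the association is just a line in the commit log. A sketch of the idea (the bug id is a made-up placeholder, and the extraction below is an illustration, not rfc.sh's actual code):

```shell
# Sketch: a commit message carrying the 'BUG: <id>' line that ties a patch to
# a bugzilla entry. 123456 is a made-up placeholder id, and this sed call
# illustrates the idea rather than reproducing rfc.sh's implementation.
msg="glusterd: fix frobnication race

BUG: 123456
Signed-off-by: Jane Doe <jane@example.org>"
bugid=$(printf '%s\n' "$msg" | sed -n 's/^BUG: \([0-9][0-9]*\)$/\1/p')
echo "associated bug: ${bugid:-none (review-only submission)}"
# prints: associated bug: 123456
```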

The simplified workflow document doesn't mention this because it was
targeted at new developers, and I felt those details were too much
information for them. But now that I rethink it, it's the seasoned
developers who are beginning to contribute to gluster who are more
likely to use that doc.

~kaushal



Re: [Gluster-devel] Unnecessary bugs entered in BZ, ...

2014-07-17 Thread Anders Blomdell
On 2014-07-17 15:44, Kaushal M wrote:
 I don't understand your confusion, but maybe these docs need to be reworded.
Not confused - I just observed that some of my patches regarding the
development workflow do not need to show up as bugs for gluster (at least
not until somebody has verified that they could be useful to somebody
other than me).

 What both of these want to say is that for a patch to be merged into
 glusterfs, it needs to be associated with a bug-id. This association
 is done by adding a 'BUG: id' line in the commit message. If you
 haven't manually added a bug-id in the commit message, the rfc.sh
 script will prompt you to enter one and add it to the commit-message.
 But, it is possible ignore this prompt and submit a patch for review.
 A patch submitted for review in this manner will only be reviewed. It
 will not be merged.
 
 The simplified workflow document doesn't mention this as it was
 targeted at new developers, and I felt having this details was TMI for
 them. But now when I rethink it, it's the seasoned developers who are
 beginning to contribute to gluster, who are more likely to use that
s/seasoned developers/ignorant old fools/g :-)

 doc.
 
 ~kaushal
 
/Anders


-- 
Anders Blomdell  Email: anders.blomd...@control.lth.se
Department of Automatic Control
Lund University  Phone:+46 46 222 4625
P.O. Box 118 Fax:  +46 46 138118
SE-221 00 Lund, Sweden



Re: [Gluster-devel] Unnecessary bugs entered in BZ, ...

2014-07-17 Thread Kaushal M
"Bugs"/"bug" in the documents are not really what you think they mean.

We use bugzilla to track glusterfs development. It is used to track
defects (bugs) in the software, enhancements and new features. What we
really mean by "a patch needs a bug id" is that we need it to have an
entry on bugzilla, so that its lifecycle can be tracked correctly. So
"bug" just means "bugzilla entry". You have bug bugs, enhancement
bugs, feature-request bugs and so on.

tl;dr: bug == bugzilla entry.






Re: [Gluster-devel] glustershd status

2014-07-17 Thread Harshavardhana
This is a small-memory system (about 1024M) and the disk space for the
volume is 9 GiB. I do not think it has anything to do with AFR per se -
the same bug is also reproducible on the bricks and the NFS server. It
might also be that we aren't able to capture glusterdumps properly on
non-Linux platforms - one of the reasons I used the valgrind output.

Valgrind reports 'lost memory' blocks - you can see the screenshots too,
which show memory ramping up within seconds with no I/O, in fact with no
data on the volume at all.

The workaround I have found to contain this issue is to disable the
self-heal daemon and NFS - after that the memory usage remains proper.
An interesting observation after running the Gluster management daemon
in debug mode: I can see that the RPC_CLNT_CONNECT event is constantly
being triggered - shouldn't that occur only once per process
notification?


On Thu, Jul 17, 2014 at 3:38 AM, Krishnan Parthasarathi
kpart...@redhat.com wrote:
 Harsha,

 I don't actively work on AFR, so I might have missed some things.
 I looked for the following things in the statedump for any
 memory-allocation oddities.

 1) grep pool-misses *dump*
 This tells us whether there were any objects whose allocated mem-pool
 wasn't sufficient for the load it was working under. I see that the
 pool-misses were zero, which means we are doing well with the mem-pools
 we allocated.

 2) grep hot-count *dump*
 This tells us the number of objects of any kind that were 'active' in
 the process when the statedump was taken. This should allow us to see
 whether the numbers are explicable. I see the maximum hot-count across
 statedumps of processes is 50, which isn't alarming or pointing at any
 obvious memory leak.

 The above observations indicate that some object that is not
 mem-pool-allocated is being leaked.

 Hope this helps,
 Krish
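The two greps above can be combined into a small scan (a sketch; the glusterdump.* file naming and key=value layout follow the usual statedump conventions):

```shell
# Sketch: scan statedump files for mem-pool oddities. Non-zero pool-misses
# or an unusually large hot-count would point at the leaking object type.
scan_dumps() {
    dir=${1:-.}
    grep -H 'pool-misses=' "$dir"/glusterdump.* 2>/dev/null
    # largest 'active' object counts at the time the dump was taken
    grep -H 'hot-count=' "$dir"/glusterdump.* 2>/dev/null |
        sort -t= -k2 -rn | head -5
}
```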

 - Original Message -
 Here you go KP - https://bugzilla.redhat.com/show_bug.cgi?id=1120570










[Gluster-devel] release-3.6 branch created

2014-07-17 Thread Vijay Bellur

Hi All,

A new branch, 'release-3.6', has been branched from this commit in master:

commit 950f9d8abe714708ca62b86f304e7417127e1132
Author: Jeff Darcy jda...@redhat.com
Date:   Tue Jul 8 21:56:04 2014 -0400

dht: fix rename race


You can checkout this branch through:

$ git checkout -b release-3.6 origin/release-3.6

rfc.sh is being updated to send patches to the appropriate branch. The 
plan is to have all 3.6.x releases happen off this branch. If you need 
any fix to be part of a 3.4.x release, please send out a backport of the 
same from master to release-3.4 after it has been accepted in master. 
More notes on backporting are available at [1].
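The backport flow described above amounts to a cherry-pick onto the release branch. A sketch in a throwaway repository (in gluster you would then run ./rfc.sh to send the backport for review; the commit and repo here are stand-ins):

```shell
# Sketch of a backport: the fix lands on master first, then is cherry-picked
# onto the release branch with -x so the original commit id is recorded.
# The repository and commits here are throwaway stand-ins.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.org
git config user.name "Dev"
echo base > file && git add file && git commit -qm "initial"
git branch release-3.6                 # branch point, as in the announcement
echo fix >> file && git commit -qam "dht: fix rename race"
fix=$(git rev-parse HEAD)              # the accepted master commit
git checkout -q release-3.6
git cherry-pick -x "$fix"              # the backport; then: ./rfc.sh
```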


The branch can be tracked from git.gluster.org and github. 
forge.gluster.org will be updated to mirror this branch in a bit.


Thanks,
Vijay

[1] 
http://www.gluster.org/community/documentation/index.php/Backport_Guidelines






[Gluster-devel] What's the impact of enabling the profiler?

2014-07-17 Thread Joe Julian
What impact, if any, does starting profiling (gluster volume profile 
$vol start) have on performance?



Re: [Gluster-devel] Change DEFAULT_WORKDIR from hard-coded value

2014-07-17 Thread Harshavardhana
Anyone?

On Mon, Jul 14, 2014 at 6:44 PM, Harshavardhana
har...@harshavardhana.net wrote:
 http://review.gluster.org/#/c/8246/

 Two important things it achieves:

 - Break away from the previously hard-coded '/var/lib/glusterd';
   instead rely on the 'configure' value of 'localstatedir'
 - Provide 's/lib/db' as the default working directory for the gluster
   management daemon on BSD- and Darwin-based installations

 ${localstatedir}/db was used for FreeBSD (in fact FreeNAS) - where we
 are planning a 3.6 release integration, perhaps as an experimental
 option to begin with.

 Since ${localstatedir}/lib is non-existent on non-Linux platforms,
 ${localstatedir}/db was the natural choice.

 The next set of changes would migrate all of 'tests/*' to handle
 non-'/var/{lib,db}' directories.

 Need your reviews on the present patchset - please chime in, thank you!
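A sketch of the behaviour the patch aims for - the exact paths and platform check below are assumptions drawn from this description, not the patch's actual code:

```shell
# Sketch: derive glusterd's working directory from configure's localstatedir
# instead of hard-coding /var/lib/glusterd. The paths and platform check are
# assumptions based on the description above, not the actual patch.
localstatedir=/var    # i.e. the value given to ./configure --localstatedir=...
case "$(uname -s)" in
    FreeBSD|NetBSD|Darwin) workdir="$localstatedir/db/glusterd" ;;
    *)                     workdir="$localstatedir/lib/glusterd" ;;
esac
echo "GLUSTERD_WORKDIR=$workdir"
```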






Re: [Gluster-devel] Updating blog entry list on www.gluster.org?

2014-07-17 Thread Humble Chirammal


- Original Message -
| From: Eco Willson ewill...@redhat.com
| To: Justin Clift jus...@gluster.org
| Cc: Gluster Devel gluster-devel@gluster.org, Humble Chirammal 
hchir...@redhat.com
| Sent: Thursday, July 17, 2014 5:32:13 AM
| Subject: Re: [Gluster-devel] Updating blog entry list on www.gluster.org?
| 
| This was supposed to be happening automatically; I pinged Garrett LeSage, who
| implemented the function, to look into the issue.
| 

Thanks Eco! 

Afaict, the same goes for Forge Activity as well. 

--Humble


| 
| - Original Message -
| From: Justin Clift jus...@gluster.org
| To: Eco Willson ewill...@redhat.com
| Cc: Gluster Devel gluster-devel@gluster.org, Humble Chirammal
| hchir...@redhat.com
| Sent: Monday, July 14, 2014 6:03:48 AM
| Subject: [Gluster-devel] Updating blog entry list on www.gluster.org?
| 
| Hi Eco,
| 
| Humble just published a blog post about the new GlusterFS 3.4.5
| beta2 RPMs:
| 
|   http://blog.gluster.org/2014/07/glusterfs-3-4-5beta2-rpms-are-available-now/
| 
| How do we get the www.gluster.org website updated, so new blog
| posts appear automagically?
| 
| Regards and best wishes,
| 
| Justin Clift
| 
| --
| GlusterFS - http://www.gluster.org
| 
| An open source, distributed file system scaling to several
| petabytes, and handling thousands of clients.
| 
| My personal twitter: twitter.com/realjustinclift
| 
| 


Re: [Gluster-devel] [Gluster-users] What's the impact of enabling the profiler?

2014-07-17 Thread Pranith Kumar Karampuri


On 07/18/2014 03:05 AM, Joe Julian wrote:
What impact, if any, does starting profiling (gluster volume profile 
$vol start) have on performance?

Joe,
According to the code, the only extra work it does is calling
gettimeofday() at the beginning and end of each FOP to calculate
latency, and incrementing some counters. So I guess not much?
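The mechanism described - a timestamp at FOP entry and exit - looks roughly like this (an illustration of the idea in shell, not GlusterFS's io-stats code):

```shell
# Illustration only: latency accounting by timestamping an operation at start
# and end, the same idea io-stats uses with gettimeofday() around each FOP.
start=$(date +%s%N)                       # GNU date, nanosecond resolution
dd if=/dev/zero of=/tmp/profile-demo bs=1M count=1 2>/dev/null
end=$(date +%s%N)
echo "write latency: $(( (end - start) / 1000 )) us"
rm -f /tmp/profile-demo
```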


Pranith

___
Gluster-users mailing list
gluster-us...@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users




Re: [Gluster-devel] [Gluster-users] Gluster Community meeting starting in 15 minutes

2014-07-17 Thread Justin Clift
On 17/07/2014, at 7:38 PM, Vijay Bellur wrote:
snip
 Had a discussion with Pranith and we felt that 3.5.2 beta is of
 more importance than 3.6 community test days.


Definitely agree personally.  People have 3.5.x in production, so
to my thinking issues with that should receive priority.  3.6 will
arrive whenever it's ready.

+ Justin

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift



Re: [Gluster-devel] glustershd status

2014-07-17 Thread Krishnan Parthasarathi
Harsha,
I haven't gotten around to looking at the valgrind output. I am not sure
whether I will be able to do it soon, since I am travelling next week.
Are you seeing an equal number of disconnect messages in the glusterd
logs? What is the ip:port you observe in the RPC_CLNT_CONNECT messages?
Could you attach the logs to the bug?

thanks,
Krish

- Original Message -
 This is a small memory system like 1024M and a disk space for the
 volume is 9gig, i do not think it has anything to do with AFR per se -
 same bug is also reproducible on the bricks, nfs server too.  Also it
 might be that we aren't able to capture glusterdumps on non Linux
 platforms properly - one of reasons i used Valgrind output.
 
 In Valgrind it indicates about 'lost memory' blocks - You can see the
 screenshots too which indicate memory ramp ups in seconds with no i/o,
 in-fact no data on the volume.
 
 The work-around i have seen to contain this issue is to disable
 self-heal-deamon and NFS - after that the memory remains proper. On an
 interesting observation after running Gluster management daemon in
 debugging more - i can see that
 
 RPCLNT_CONNECT event() is constantly being triggered - which should
 only occur once?? per process notification?
 
 
 On Thu, Jul 17, 2014 at 3:38 AM, Krishnan Parthasarathi
 kpart...@redhat.com wrote:
  Harsha,
 
  I don't actively work on AFR, so I might have missed some things.
  I looked for the following things in the statedump for any memory
  allocation
  related oddities.
  1) grep pool-misses *dump*
  This tells us if there were any objects whose allocated mem-pool wasn't
  sufficient
  for the load it was working under.
  I see that the pool-misses were zero, which means we are doing good with
  the mem-pools we allocated.
 
  2) grep hot-count *dump*
  This tells us the no. of objects of any kind that is 'active' in the
  process while the state-dump
  was taken. This should allow us to see if the numbers we see are
  explicable.
  I see the maximum hot-count across statedumps of processes is 50, which
  isn't alarming or pointing any obvious memory leaks.
 
  The above observations indicate that some object that is not mem-pool
  allocated is being leaked.
 
  Hope this helps,
  Krish
 
  - Original Message -
  Here you go KP - https://bugzilla.redhat.com/show_bug.cgi?id=1120570
 
  On Thu, Jul 17, 2014 at 12:37 AM, Krishnan Parthasarathi
  kpart...@redhat.com wrote:
   Harsha,
  
   In addition to the valgrind output, statedump output of glustershd
   process
   when the leak is observed would be really helpful.
  
   thanks,
   Krish
  
   - Original Message -
   Nope spoke too early, using poll() has no effect on the memory usage
   on Linux, so actually back to FreeBSD.
  
   On Thu, Jul 17, 2014 at 12:07 AM, Harshavardhana
   har...@harshavardhana.net wrote:
KP,
   
I do have a 3.2Gigs worth of valgrind output which indicates this
issue, trying to reproduce this on Linux.
   
My hunch says that 'compiling' with --disable-epoll might actually
trigger this issue on Linux too. Will update here
once i have done that testing.
   
   
On Wed, Jul 16, 2014 at 11:44 PM, Krishnan Parthasarathi
kpart...@redhat.com wrote:
Emmanuel,
   
Could you take statedump* of the glustershd process when it has
leaked
enough memory to be able to observe and share the output? This might
give us what kind of objects are we allocating abnormally high.
   
* statedump of a glusterfs process
#kill -USR1 pid of process
   
HTH,
Krish
   
   
- Original Message -
On Wed, Jul 16, 2014 at 11:32:06PM -0700, Harshavardhana wrote:
 On a side note while looking into this issue  - I uncovered a
 memory
 leak too which after successful registration with glusterd,
 Self-heal
 daemon and NFS server are killed by FreeBSD memory manager. Have
 you
 observed any memory leaks?
 I have the valgrind output and it clearly indicates of large
 memory
 leaks - perhaps it could be just FreeBSD thing!
   
I observed memory leaks during long-term usage. My favourite test case
is building NetBSD on a replicated/distributed volume, and I can see
processes growing a lot during the build. I reported it some time ago,
and some leaks were plugged, but obviously some remain.
   
valgrind was never ported to NetBSD, hence I lack investigative
tools,
but I bet the leaks exist on FreeBSD and Linux as well.
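Growth like this can be watched without valgrind by sampling RSS periodically. A minimal sketch, using this shell's own pid as a stand-in for the glustershd pid:

```shell
# Sample the resident set size (KB) of a process a few times; steady
# growth with no I/O on the volume points at a leak.  $$ (this shell)
# stands in for the glustershd pid in this sketch.
PID=$$
for i in 1 2 3; do
    rss=$(ps -o rss= -p "$PID" | tr -d ' ')
    echo "sample $i: rss=${rss} KB"
    sleep 1
done
```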
   
--
Emmanuel Dreyfus
m...@netbsd.org
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
   
   
   
   
--
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes
  
  
  
   --
   Religious confuse piety with mere ritual, the virtuous confuse
   regulation with outcomes
  
 
 
 
  --
  

Re: [Gluster-devel] glustershd status

2014-07-17 Thread Harshavardhana
Sure, will do that! If I get any clues I might send out a patch :-)

On Thu, Jul 17, 2014 at 9:05 PM, Krishnan Parthasarathi
kpart...@redhat.com wrote:
 Harsha,
 I haven't gotten around to looking at the valgrind output. I am not sure if I 
 will be able to do it soon since I am travelling next week.
 Are you seeing an equal no. of disconnect messages in glusterd logs? What is 
 the ip:port you observe in the RPC_CLNT_CONNECT messages? Could you attach 
 the logs to the bug?

 thanks,
 Krish
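The log check suggested above can be sketched like this. The sample log lines are made up for illustration; match whatever your glusterd log (usually under /var/log/glusterfs/) actually prints for the RPC_CLNT_CONNECT / RPC_CLNT_DISCONNECT events:

```shell
# Compare connect vs. disconnect events in the glusterd log.
# The sample file below stands in for the real log, with made-up lines.
LOG=/tmp/glusterd-sample.log
cat > "$LOG" <<'EOF'
D [glusterd-handler.c] 0-management: got RPC_CLNT_CONNECT from 127.0.0.1:1021
D [glusterd-handler.c] 0-management: got RPC_CLNT_CONNECT from 127.0.0.1:1021
D [glusterd-handler.c] 0-management: got RPC_CLNT_DISCONNECT from 127.0.0.1:1021
EOF

connects=$(grep -c 'RPC_CLNT_CONNECT' "$LOG")
disconnects=$(grep -c 'RPC_CLNT_DISCONNECT' "$LOG")
# More connects than disconnects for the same ip:port suggests the event
# is being re-triggered rather than re-established after genuine drops.
echo "connects=$connects disconnects=$disconnects"
```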

 - Original Message -
 This is a small-memory system (1024M) and the disk space for the volume
 is 9 gigs. I do not think it has anything to do with AFR per se - the
 same bug is also reproducible on the bricks and the NFS server. Also, it
 might be that we aren't able to capture glusterdumps properly on
 non-Linux platforms - one of the reasons I used the valgrind output.

 Valgrind reports 'lost memory' blocks - you can see the screenshots too,
 which show memory ramping up within seconds with no I/O - in fact, no
 data on the volume at all.
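A typical valgrind invocation for this kind of leak hunt looks like the sketch below; the glusterfs arguments are placeholders for whatever glusterd normally spawns for glustershd, so the command is only echoed here:

```shell
# Sketch only: requires valgrind plus a gluster install, so the command
# is echoed rather than executed.  %p in --log-file expands to the pid,
# and -N keeps glusterfs in the foreground for the whole run.
cmd="valgrind --leak-check=full --show-reachable=yes \
--log-file=/tmp/glustershd-valgrind.%p.log \
glusterfs -N -s localhost --volfile-id gluster/glustershd"
echo "$cmd"
```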

 The work-around I have found to contain this issue is to disable the
 self-heal daemon and NFS - after that, memory usage stays sane. One
 interesting observation after running the Gluster management daemon in
 debug mode:

 the RPC_CLNT_CONNECT event is constantly being triggered - shouldn't it
 fire only once per process notification?


 On Thu, Jul 17, 2014 at 3:38 AM, Krishnan Parthasarathi
 kpart...@redhat.com wrote:
  Harsha,
 
  I don't actively work on AFR, so I might have missed some things.
  I looked for the following things in the statedump for any memory
  allocation
  related oddities.
  1) grep pool-misses *dump*
  This tells us if there were any objects whose allocated mem-pool wasn't
  sufficient
  for the load it was working under.
  I see that the pool-misses were zero, which means we are doing good with
  the mem-pools we allocated.
 
  2) grep hot-count *dump*
   This tells us the no. of objects of each kind that are 'active' in the
   process at the time the statedump was taken. This should allow us to see
   whether the numbers are explicable.
   I see the maximum hot-count across statedumps of processes is 50, which
   isn't alarming or pointing to any obvious memory leak.
 
  The above observations indicate that some object that is not mem-pool
  allocated is being leaked.
 
  Hope this helps,
  Krish
 
  - Original Message -
  Here you go KP - https://bugzilla.redhat.com/show_bug.cgi?id=1120570
 
  On Thu, Jul 17, 2014 at 12:37 AM, Krishnan Parthasarathi
  kpart...@redhat.com wrote:
   Harsha,
  
   In addition to the valgrind output, statedump output of glustershd
   process
   when the leak is observed would be really helpful.
  
   thanks,
   Krish
  
   - Original Message -
   Nope spoke too early, using poll() has no effect on the memory usage
   on Linux, so actually back to FreeBSD.
  
   On Thu, Jul 17, 2014 at 12:07 AM, Harshavardhana
   har...@harshavardhana.net wrote:
KP,
   
I do have a 3.2Gigs worth of valgrind output which indicates this
issue, trying to reproduce this on Linux.
   
My hunch says that 'compiling' with --disable-epoll might actually
trigger this issue on Linux too. Will update here
once i have done that testing.
   
   
On Wed, Jul 16, 2014 at 11:44 PM, Krishnan Parthasarathi
kpart...@redhat.com wrote:
Emmanuel,
   
Could you take statedump* of the glustershd process when it has
leaked
enough memory to be able to observe and share the output? This might
give us what kind of objects are we allocating abnormally high.
   
* statedump of a glusterfs process
#kill -USR1 pid of process
   
HTH,
Krish
   
   
- Original Message -
On Wed, Jul 16, 2014 at 11:32:06PM -0700, Harshavardhana wrote:
 On a side note while looking into this issue  - I uncovered a
 memory
 leak too which after successful registration with glusterd,
 Self-heal
 daemon and NFS server are killed by FreeBSD memory manager. Have
 you
 observed any memory leaks?
 I have the valgrind output and it clearly indicates of large
 memory
 leaks - perhaps it could be just FreeBSD thing!
   
I observed memory leaks during long-term usage. My favourite test case
is building NetBSD on a replicated/distributed volume, and I can see
processes growing a lot during the build. I reported it some time ago,
and some leaks were plugged, but obviously some remain.
   
valgrind was never ported to NetBSD, hence I lack investigative
tools,
but I bet the leaks exist on FreeBSD and Linux as well.
   
--
Emmanuel Dreyfus
m...@netbsd.org
   
   
   
   
--
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes

Re: [Gluster-devel] Inspiration for improving our contributor documentation

2014-07-17 Thread Pranith Kumar Karampuri


On 07/17/2014 07:25 PM, Kaushal M wrote:

I came across mediawiki's developer documentation and guides when
browsing. These docs felt really good to me, and easy to approach.
I feel that we should take inspiration from them and start enhancing
our docs. (Outright copying with modifications as necessary, could
work too. But that just doesn't feel right)

Any volunteers?
(I'll start as soon as I finish with the developer documentation for
data structures for the components I volunteered earlier)

~kaushal

[0] - https://www.mediawiki.org/wiki/Developer_hub
I love the idea but am not sure about the implementation, i.e. considering 
we already started with .md pages, why not have the same kind of pages as 
.md files in /doc of gluster? We can modify the README in our project so 
that people can browse all the details on GitHub. Please let me know 
your thoughts.


Pranith

[1] - https://www.mediawiki.org/wiki/Category:New_contributors
[2] - https://www.mediawiki.org/wiki/Gerrit/Code_review
[3] - https://www.mediawiki.org/wiki/Gerrit
[4] - https://www.mediawiki.org/wiki/Gerrit/Tutorial
[5] - https://www.mediawiki.org/wiki/Gerrit/Getting_started
[6] - https://www.mediawiki.org/wiki/Gerrit/Advanced_usage
... and lots more.


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Inspiration for improving our contributor documentation

2014-07-17 Thread Kaushal M
On Fri, Jul 18, 2014 at 11:11 AM, Pranith Kumar Karampuri
pkara...@redhat.com wrote:

 On 07/17/2014 07:25 PM, Kaushal M wrote:

 I came across mediawiki's developer documentation and guides when
 browsing. These docs felt really good to me, and easy to approach.
 I feel that we should take inspiration from them and start enhancing
 our docs. (Outright copying with modifications as necessary, could
 work too. But that just doesn't feel right)

 Any volunteers?
 (I'll start as soon as I finish with the developer documentation for
 data structures for the components I volunteered earlier)

 ~kaushal

 [0] - https://www.mediawiki.org/wiki/Developer_hub

 I love the idea but am not sure about the implementation, i.e. considering we
 already started with .md pages, why not have the same kind of pages as .md files
 in /doc of gluster? We can modify the README in our project so that people
 can browse all the details on GitHub. Please let me know your thoughts.

These kinds of docs need to be indexable and searchable by search
engines; only then will they be useful. I don't think markdown files
in the source tree would be a good place for these.
The other docs, related to source/code documentation, can be provided in
the source as we are attempting to do now. Those need to be directly
accessible for devs while developing, so having them in the git repo
is good.

 Pranith

 [1] - https://www.mediawiki.org/wiki/Category:New_contributors
 [2] - https://www.mediawiki.org/wiki/Gerrit/Code_review
 [3] - https://www.mediawiki.org/wiki/Gerrit
 [4] - https://www.mediawiki.org/wiki/Gerrit/Tutorial
 [5] - https://www.mediawiki.org/wiki/Gerrit/Getting_started
 [6] - https://www.mediawiki.org/wiki/Gerrit/Advanced_usage
 ... and lots more.


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel