Re: [Gluster-devel] CFP for Gluster Developer Summit
Here's a proposal ... Title: State of Gluster Performance Theme: Stability and Performance I hope to achieve the following in this talk: * present a brief overview of current performance for the broad workload classes: large-file sequential and random workloads, small-file and metadata-intensive workloads. * highlight some use-cases where we are seeing really good performance. * highlight some of the areas of concern, covering in some detail the state of analysis and work in progress. Regards, Manoj - Original Message - > Hey All, > > Gluster Developer Summit 2016 is fast approaching [1]. We are > looking to have talks and discussions related to the following themes in > the summit: > > 1. Gluster.Next - focusing on features shaping the future of Gluster > > 2. Experience - Description of real world experience and feedback from: > a> Devops and Users deploying Gluster in production > b> Developers integrating Gluster with other ecosystems > > 3. Use cases - focusing on key use cases that drive Gluster.today and > Gluster.Next > > 4. Stability & Performance - focusing on current improvements to reduce > our technical debt backlog > > 5. Process & infrastructure - focusing on improving current workflow, > infrastructure to make life easier for all of us! > > If you have a talk/discussion proposal that can be part of these themes, > please send out your proposal(s) by replying to this thread. Please > clearly mention the theme for which your proposal is relevant when you > do so. We will be ending the CFP by 12 midnight PDT on August 31st, 2016. > > If you have other topics that do not fit in the themes listed, please > feel free to propose them and we might be able to accommodate some of them as > lightning talks or something similar. > > Please do reach out to me or Amye if you have any questions. > > Thanks! 
> Vijay > > [1] https://www.gluster.org/events/summit2016/ > ___ > Gluster-devel mailing list > Gluster-devel@gluster.org > http://www.gluster.org/mailman/listinfo/gluster-devel
Re: [Gluster-devel] CFP Gluster Developer Summit
On 17/08/16 19:26, Kaleb S. KEITHLEY wrote: I propose to present on one or more of the following topics: * NFS-Ganesha Architecture, Roadmap, and Status Sorry for the late notice. I am willing to be a co-presenter for the above topic. -- Jiffin * Architecture of the High Availability Solution for Ganesha and Samba - detailed walk through and demo of current implementation - difference between the current and storhaug implementations * High Level Overview of autoconf/automake/libtool configuration (I gave a presentation in BLR in 2015, so this is perhaps less interesting?) * Packaging Howto — RPMs and .debs (maybe a breakout session or a BOF. Would like to (re)enlist volunteers to help build packages.)
Re: [Gluster-devel] [Gluster-users] release-3.6 end of life
On Fri, Aug 19, 2016 at 3:03 AM, Lindsay Mathieson < lindsay.mathie...@gmail.com> wrote: > On 19/08/2016 3:45 AM, Diego Remolina wrote: > >> The one thing that still remains a mystery to me is how to downgrade >> glusterfs packages in Ubuntu. I have never been able to do that. There >> was also a post from someone about it recently on the list and I do >> not think it got any replies. >> > > I would have assumed something like: > > > 1. stop volume(s) > > 2. if needed: reset gluster options not available in older versions > > 3. if needed: downgrade op-version > Downgrading the op-version is not possible unless changes are made on the back end. So my suggestion is to swap steps 3 and 4: we need to first stop all the gluster services, change the op-version in the glusterd.info file on all the nodes, and then proceed to step 5. > > 4. stop all gluster daemons :) > > 5. sudo apt-get purge gluster* > > 6. sudo ppa-purge > > 7. Install older ppa. > > 8. install older gluster service > > 9. Start services > > 10. check peers and status > > 11. start volume > > 12. test > > > if 2) and 3) can't be done then I presume you can't downgrade. > > -- > Lindsay Mathieson > > > ___ > Gluster-users mailing list > gluster-us...@gluster.org > http://www.gluster.org/mailman/listinfo/gluster-users > -- --Atin
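[Editor's sketch] Atin's point about editing the op-version by hand can be illustrated in code. The following Python sketch is illustrative only, not an official tool: it assumes the key=value layout used by 3.x /var/lib/glusterd/glusterd.info files, the function and demo file names are made up, and it operates on a throwaway copy rather than a live node. If adapted for real use, it would have to run with all gluster daemons stopped, on every node.

```python
import os
import tempfile

def set_op_version(glusterd_info_path, new_op_version):
    """Rewrite the operating-version line of a glusterd.info-style file.

    Mirrors the manual downgrade step described above: stop all gluster
    daemons first, then change the op-version on every node. The
    key=value format is an assumption based on 3.x glusterd.info files.
    """
    with open(glusterd_info_path) as f:
        lines = f.read().splitlines()
    rewritten = []
    for line in lines:
        if line.startswith("operating-version="):
            rewritten.append("operating-version=%d" % new_op_version)
        else:
            rewritten.append(line)
    with open(glusterd_info_path, "w") as f:
        f.write("\n".join(rewritten) + "\n")

# Demo on a throwaway copy, never the live /var/lib/glusterd/glusterd.info:
demo = os.path.join(tempfile.mkdtemp(), "glusterd.info")
with open(demo, "w") as f:
    f.write("UUID=0d59d1ca-0000-0000-0000-000000000000\n"
            "operating-version=30800\n")
set_op_version(demo, 30600)  # e.g. stepping back from a 3.8 to a 3.6 op-version
print(open(demo).read())
```

The op-version numbers (30800, 30600) follow the numbering convention of 3.x releases, but should be verified against the actual installed versions before being relied on.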
[Gluster-devel] [gluster-devel] New Publishing Guidelines for Blog.Gluster.org
In keeping with Fedora, oVirt and some of our other communities with vibrant content, I'm putting forward a new proposal for how we can create more content that gets published on blog.gluster.org. Here are some new guidelines patterned after theirs: ## Make Your Proposal When you are ready to create a blog article for Gluster.org, you can send a proposal for your article to social-me...@gluster.org. Include: * Proposed title * 1-3 sentence summary of the article * Any relevant date information (Should this be tied to a release?) When writing articles for the Gluster blog, the goal is to keep your articles short, sweet, and to the point. News from all different parts of the community will be posted here, so not everyone will read longer articles. However, you are encouraged to link to more detailed blog posts, articles, or other news sources to help those who are interested learn more about what you are working on! Ideally, your Gluster blog article should be a quick “headliner” pointing out the “big stuff” to grab peoples’ attention. We're focusing on readability - ideally, this should be at a 4th grade reading level. All of our readers are going to suffer from too much information, constantly. If you want to check your readability score, try here first: https://readability-score.com/ Once your article is accepted, it will be noted as "Assigned" within the Gluster blog's WordPress platform. If you have access to the platform, you can begin the writing process. If you need access, the editorial team will assign you a username and password to access WordPress. ## Basic Requirements: Categories and Tags As you are writing, there are some housekeeping issues to keep in mind. Every article will need a few things to be published. First, your article must be added to categories relevant to your team. If you are writing an article about new software being worked on, your post would be categorized as “Development”. 
If your post is about a Gluster meetup, you would use the “Events” category. And so on. Additionally, add a few tags to your article to help categorize posts and make them easily searchable. You are free to add any tags you wish, but a good number is 3-5 per article. ## Email for Final Approval Once you have finished writing your article, you should change it from “Draft” status to “Review”. Afterwards, send an email to social-media@gluster.org with a link to your article, announcing that it is ready for review. List members will review your article and give you a +1 or -1 on whether to publish. After meeting a threshold of at least two positive votes, your article will be marked as "published" or scheduled for the future. Once your article is published, it will be automatically shared to the @Gluster Twitter account, as well as other social media channels, to help bring greater exposure to your article. -- Amye Scavarda | a...@redhat.com | Gluster Community Lead
Re: [Gluster-devel] [Gluster-users] release-3.6 end of life
On 19/08/2016 3:45 AM, Diego Remolina wrote: The one thing that still remains a mystery to me is how to downgrade glusterfs packages in Ubuntu. I have never been able to do that. There was also a post from someone about it recently on the list and I do not think it got any replies. I would have assumed something like: 1. stop volume(s) 2. if needed: reset gluster options not available in older versions 3. if needed: downgrade op-version 4. stop all gluster daemons :) 5. sudo apt-get purge gluster* 6. sudo ppa-purge 7. Install older ppa. 8. install older gluster service 9. Start services 10. check peers and status 11. start volume 12. test if 2) and 3) can't be done then I presume you can't downgrade. -- Lindsay Mathieson
Re: [Gluster-devel] [Gluster-users] release-3.6 end of life
The one thing that still remains a mystery to me is how to downgrade glusterfs packages in Ubuntu. I have never been able to do that. There was also a post from someone about it recently on the list and I do not think it got any replies. Diego On Thu, Aug 18, 2016 at 1:39 PM, Joe Julian wrote: > Ok, scratch this entire email. I never noticed that the deficiency in Trusty > had been worked around to build the later packages. > > > > On 08/18/2016 10:23 AM, Kaleb S. KEITHLEY wrote: >> >> On 08/18/2016 01:22 PM, Kaleb S. KEITHLEY wrote: >>> >>> On 08/18/2016 01:10 PM, Joe Julian wrote: I'd like to plead with the community to continue to support 3.6 as a "lts" release. It's the last release version that can be used on Ubuntu 14.04 (Trusty Tahr) LTS which many users may be stuck using for quite some time (eol of April 2019). >>> >>> >>> What's wrong with 3.7 on Trusty? >>> >>> https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.7/+packages >> >> >> Or 3.8? >> >> https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.8/+packages >> >> -- >> >> Kaleb
Re: [Gluster-devel] [Gluster-users] release-3.6 end of life
Ok, scratch this entire email. I never noticed that the deficiency in Trusty had been worked around to build the later packages. On 08/18/2016 10:23 AM, Kaleb S. KEITHLEY wrote: On 08/18/2016 01:22 PM, Kaleb S. KEITHLEY wrote: On 08/18/2016 01:10 PM, Joe Julian wrote: I'd like to plead with the community to continue to support 3.6 as a "lts" release. It's the last release version that can be used on Ubuntu 14.04 (Trusty Tahr) LTS which many users may be stuck using for quite some time (eol of April 2019). What's wrong with 3.7 on Trusty? https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.7/+packages Or 3.8? https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.8/+packages -- Kaleb
Re: [Gluster-devel] [Gluster-users] release-3.6 end of life
On 08/18/2016 01:22 PM, Kaleb S. KEITHLEY wrote: On 08/18/2016 01:10 PM, Joe Julian wrote: I'd like to plead with the community to continue to support 3.6 as a "lts" release. It's the last release version that can be used on Ubuntu 14.04 (Trusty Tahr) LTS which many users may be stuck using for quite some time (eol of April 2019). What's wrong with 3.7 on Trusty? https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.7/+packages Or 3.8? https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.8/+packages -- Kaleb
Re: [Gluster-devel] release-3.6 end of life
On 08/18/2016 01:10 PM, Joe Julian wrote: I'd like to plead with the community to continue to support 3.6 as a "lts" release. It's the last release version that can be used on Ubuntu 14.04 (Trusty Tahr) LTS which many users may be stuck using for quite some time (eol of April 2019). What's wrong with 3.7 on Trusty? https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.7/+packages -- Kaleb
[Gluster-devel] release-3.6 end of life
I'd like to plead with the community to continue to support 3.6 as a "lts" release. It's the last release version that can be used on Ubuntu 14.04 (Trusty Tahr) LTS which many users may be stuck using for quite some time (eol of April 2019).
Re: [Gluster-devel] md-cache improvements
- Original Message - > From: "Niels de Vos" > To: "Vijay Bellur" > Cc: "Poornima Gurusiddaiah" , "Dan Lambright" > , "Nithya Balachandran" > , "Raghavendra Gowdappa" , "Soumya > Koduri" , "Pranith > Kumar Karampuri" , "Gluster Devel" > > Sent: Thursday, August 18, 2016 9:32:34 AM > Subject: Re: [Gluster-devel] md-cache improvements > > On Mon, Aug 15, 2016 at 10:39:40PM -0400, Vijay Bellur wrote: > > Hi Poornima, Dan - > > > > Let us have a hangout/bluejeans session this week to discuss the planned > > md-cache improvements, proposed timelines and sort out open questions if > > any. > > > > Would 11:00 UTC on Wednesday work for everyone in the To: list? > > I'd appreciate it if someone could send the meeting minutes. It'll make > it easier to follow up and we can provide better status details on the > progress. Adding to this thread the tracking bug for the feature - 1211863 > > In any case, one of the points that Poornima mentioned was that upcall > events (when enabled) get cached in gfapi until the application handles > them. NFS-Ganesha is the only application that (currently) is interested > in these events. Other use-cases (like md-cache invalidation) would > enable upcalls too, and then cause event caching even when not needed. > > This change should address that, and I'm waiting for feedback on it. > There should be a bug report about these unneeded and uncleared caches, > but I could not find one... > > gfapi: do not cache upcalls if the application is not interested > http://review.gluster.org/15191 > > Thanks, > Niels > > > > > > Thanks, > > Vijay > > > > > > > > On 08/11/2016 01:04 AM, Poornima Gurusiddaiah wrote: > > > > > > My comments inline. 
> > > > > > Regards, > > > Poornima > > > > > > - Original Message - > > > > From: "Dan Lambright" > > > > To: "Gluster Devel" > > > > Sent: Wednesday, August 10, 2016 10:35:58 PM > > > > Subject: [Gluster-devel] md-cache improvements > > > > > > > > > > > > There have been recurring discussions within the gluster community to > > > > build > > > > on existing support for md-cache and upcalls to help performance for > > > > small > > > > file workloads. In certain cases, "lookup amplification" dominates data > > > > transfers, i.e. the cumulative round trip times of multiple LOOKUPs > > > > from the > > > > client mitigates benefits from faster backend storage. > > > > > > > > To tackle this problem, one suggestion is to more aggressively utilize > > > > md-cache to cache inodes on the client than is currently done. The > > > > inodes > > > > would be cached until they are invalidated by the server. > > > > > > > > Several gluster development engineers within the DHT, NFS, and Samba > > > > teams > > > > have been involved with related efforts, which have been underway for > > > > some > > > > time now. At this juncture, comments are requested from gluster > > > > developers. > > > > > > > > (1) .. help call out where additional upcalls would be needed to > > > > invalidate > > > > stale client cache entries (in particular, need feedback from DHT/AFR > > > > areas), > > > > > > > > (2) .. identify failure cases, when we cannot trust the contents of > > > > md-cache, > > > > e.g. when an upcall may have been dropped by the network > > > > > > Yes, this needs to be handled. > > > It can happen only when there is a one way disconnect, where the server > > > cannot > > > reach client and notify fails. We can have a retry for the same until the > > > cache > > > expiry time. > > > > > > > > > > > (3) .. point out additional improvements which md-cache needs. For > > > > example, > > > > it cannot be allowed to grow unbounded. 
> > > > > > This is being worked on, and will be targeted for 3.9 > > > > > > > > > > > Dan > > > > > > > > - Original Message - > > > > > From: "Raghavendra Gowdappa" > > > > > > > > > > List of areas where we need invalidation notification: > > > > > 1. Any changes to xattrs used by xlators to store metadata (like dht > > > > > layout > > > > > xattr, afr xattrs etc). > > > > > > Currently, md-cache will negotiate (using ipc) with the brick, a list of > > > xattrs > > > that it needs invalidation for. Other xlators can add the xattrs they are > > > interested > > > in to the ipc. But then these xlators need to manage their own caching > > > and processing > > > the invalidation request, as md-cache will be above all cluster xlators. > > > reference: http://review.gluster.org/#/c/15002/ > > > > > > > > 2. Scenarios where individual xlator feels like it needs a lookup. > > > > > For > > > > > example failed directory creation on non-hashed subvol in dht during > > > > > mkdir. > > > > > Though dht succeeds mkdir, it would be better to not cache this inode > > > > > as a > > > > > subsequent lookup will heal the directory and make things better. > > > > > > For this, these xlators can specify an indicator in the dict of > > > the fop cbk, to not cache. This should be fairly simple to implement.
Re: [Gluster-devel] md-cache improvements
On Mon, Aug 15, 2016 at 10:39:40PM -0400, Vijay Bellur wrote: > Hi Poornima, Dan - > > Let us have a hangout/bluejeans session this week to discuss the planned > md-cache improvements, proposed timelines and sort out open questions if > any. > > Would 11:00 UTC on Wednesday work for everyone in the To: list? I'd appreciate it if someone could send the meeting minutes. It'll make it easier to follow up and we can provide better status details on the progress. In any case, one of the points that Poornima mentioned was that upcall events (when enabled) get cached in gfapi until the application handles them. NFS-Ganesha is the only application that (currently) is interested in these events. Other use-cases (like md-cache invalidation) would enable upcalls too, and then cause event caching even when not needed. This change should address that, and I'm waiting for feedback on it. There should be a bug report about these unneeded and uncleared caches, but I could not find one... gfapi: do not cache upcalls if the application is not interested http://review.gluster.org/15191 Thanks, Niels > > Thanks, > Vijay > > > > On 08/11/2016 01:04 AM, Poornima Gurusiddaiah wrote: > > > > My comments inline. > > > > Regards, > > Poornima > > > > - Original Message - > > > From: "Dan Lambright" > > > To: "Gluster Devel" > > > Sent: Wednesday, August 10, 2016 10:35:58 PM > > > Subject: [Gluster-devel] md-cache improvements > > > > > > > > > There have been recurring discussions within the gluster community to > > > build > > > on existing support for md-cache and upcalls to help performance for small > > > file workloads. In certain cases, "lookup amplification" dominates data > > > transfers, i.e. the cumulative round trip times of multiple LOOKUPs from > > > the > > > client mitigates benefits from faster backend storage. > > > > > > To tackle this problem, one suggestion is to more aggressively utilize > > > md-cache to cache inodes on the client than is currently done. 
The inodes > > > would be cached until they are invalidated by the server. > > > > > > Several gluster development engineers within the DHT, NFS, and Samba teams > > > have been involved with related efforts, which have been underway for some > > > time now. At this juncture, comments are requested from gluster > > > developers. > > > > > > (1) .. help call out where additional upcalls would be needed to > > > invalidate > > > stale client cache entries (in particular, need feedback from DHT/AFR > > > areas), > > > > > > (2) .. identify failure cases, when we cannot trust the contents of > > > md-cache, > > > e.g. when an upcall may have been dropped by the network > > > > Yes, this needs to be handled. > > It can happen only when there is a one way disconnect, where the server > > cannot > > reach client and notify fails. We can have a retry for the same until the > > cache > > expiry time. > > > > > > > > (3) .. point out additional improvements which md-cache needs. For > > > example, > > > it cannot be allowed to grow unbounded. > > > > This is being worked on, and will be targeted for 3.9 > > > > > > > > Dan > > > > > > - Original Message - > > > > From: "Raghavendra Gowdappa" > > > > > > > > List of areas where we need invalidation notification: > > > > 1. Any changes to xattrs used by xlators to store metadata (like dht > > > > layout > > > > xattr, afr xattrs etc). > > > > Currently, md-cache will negotiate (using ipc) with the brick, a list of > > xattrs > > that it needs invalidation for. Other xlators can add the xattrs they are > > interested > > in to the ipc. But then these xlators need to manage their own caching and > > processing > > the invalidation request, as md-cache will be above all cluster xlators. > > reference: http://review.gluster.org/#/c/15002/ > > > > > > 2. Scenarios where individual xlator feels like it needs a lookup. For > > > > example failed directory creation on non-hashed subvol in dht during > > > > mkdir. 
> > > > Though dht succeeds mkdir, it would be better to not cache this inode > > > > as a > > > > subsequent lookup will heal the directory and make things better. > > > > For this, these xlators can specify an indicator in the dict of > > the fop cbk, to not cache. This should be fairly simple to implement. > > > > > > 3. removing of files > > > > When an unlink is issued from the mount point, the cache is invalidated. > > > > > > 4. writev on brick (to invalidate read cache on client) > > > > writev on brick from any other client will invalidate the metadata cache on > > all > > the other clients. > > > > > > > > > > Other questions: > > > > 5. Does md-cache has cache management? like lru or an upper limit for > > > > cache. > > > > Currently md-cache doesn't have any cache-management, we will be targeting > > this > > for 3.9 > > > > > > 6. Network disconnects and invalidating cache. When a network disconnect > > > > happens we need to i
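[Editor's sketch] The caching scheme discussed in this thread — keep client-side metadata until a server upcall invalidates it, with a cache-expiry timeout as the safety net Poornima mentions for upcalls lost to a one-way disconnect, plus a "do not cache" flag for cases like DHT's failed mkdir on the non-hashed subvol — could be sketched roughly as follows. All class and method names are illustrative, not the real md-cache/gfapi API.

```python
import time

class MdCache:
    """Toy client-side metadata cache illustrating the proposed design:
    entries live until the server sends an invalidation upcall, or until
    a timeout expires (covering upcalls dropped by the network)."""

    def __init__(self, timeout=600.0):
        self.timeout = timeout
        self.entries = {}  # gfid -> (metadata, cached_at)

    def store(self, gfid, metadata, cacheable=True):
        # Xlators can flag a reply as not cacheable (e.g. after a failed
        # mkdir on a non-hashed subvol) so a later lookup heals things.
        if cacheable:
            self.entries[gfid] = (metadata, time.monotonic())

    def lookup(self, gfid):
        hit = self.entries.get(gfid)
        if hit is None:
            return None  # miss: the client must send a LOOKUP on the wire
        metadata, cached_at = hit
        if time.monotonic() - cached_at > self.timeout:
            del self.entries[gfid]  # expired: an upcall may have been lost
            return None
        return metadata

    def upcall_invalidate(self, gfid):
        # Server-driven invalidation, e.g. after a writev or unlink
        # performed by another client.
        self.entries.pop(gfid, None)

cache = MdCache(timeout=600.0)
cache.store("gfid-1", {"size": 4096})
assert cache.lookup("gfid-1") == {"size": 4096}   # served from cache, no LOOKUP
cache.upcall_invalidate("gfid-1")                 # another client wrote the file
assert cache.lookup("gfid-1") is None             # next access goes to the wire
```

Cache-size management (an LRU bound, mentioned as a 3.9 target above) would slot into store(), evicting the oldest entries once a limit is reached.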
Re: [Gluster-devel] FYI: change in bugzilla version(s) starting with the 3.9 release
On Wed, Aug 17, 2016 at 09:41:22PM +0530, Atin Mukherjee wrote: > On Wednesday 17 August 2016, Kaleb S. KEITHLEY wrote: > > > Hi, > > > > In today's Gluster Community Meeting it was tentatively agreed that we > > will change how we report the GlusterFS version for glusterfs bugs. > > > > Starting with the 3.9 release there will only be a "3.9" version in > > bugzilla; compared to the current scheme where there are, e.g., 3.8.0, > > 3.8.1, ..., 3.8.x versions in bugzilla. > > > May I ask what is the benefit we are going to get from this change? > Personally I am more inclined towards the existing option as users can > select the version and do not have to mention it in the comment. Sometimes > users may forget to report the version in the comment and we need to do > back and forth on this, instead having a specific release version on which > the bug can be filed looks a better option IMO. On the other hand, reporters pick a random version, but have the exact package names+version in the #0 comment (which does not always match). The template also asks for the version and other required points; these should not be seen as optional. For bug triaging, it makes querying and sorting through bugs easier. Doing a regular expression search/match on versions is the only way we can currently construct complete lists of 3.7.x bugs. Being able to query on version=3.7 should help all the maintainers in constructing their searches, rss-feeds and email filters. People should not report bugs against 3.7.13 and earlier 3.7.x versions anymore; those will not get updated in any case. The current notation suggests to users that they can stay on a 3.7.x version and get updates for it. I would like to encourage users to treat 3.7 as the version, and simply update automatically to the latest 3.7.x. Niels > > > > > When filing a new bug report, the exact version can — and should — be > > entered in the comments section of the report. 
> > > > For 3.8 and earlier we will retain the old scheme for the lifetime of > > that release. > > > > If you have any questions or comments about this change you can reply to > > this email or raise them in IRC #gluster-dev (on freenode) > > > > > > -- > > > > Kaleb > > > > > -- > --Atin
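[Editor's sketch] The querying difference Niels describes can be illustrated with a small sketch (the sample version strings below are made up): under the old scheme, gathering all 3.7.x bugs requires a pattern match over micro versions, while the proposed scheme allows an exact match on a single version value.

```python
import re

# Old scheme: bugs carry micro versions (3.7.0, 3.7.13, ...), so a
# complete list of 3.7.x bugs needs a regular-expression match.
# Proposed scheme: version is just "3.9", so an exact match suffices.
bug_versions = ["3.7.0", "3.7.13", "3.8.1", "3.7.5", "3.9", "mainline"]

old_scheme = [v for v in bug_versions if re.fullmatch(r"3\.7(\.\d+)?", v)]
new_scheme = [v for v in bug_versions if v == "3.9"]

print(old_scheme)  # ['3.7.0', '3.7.13', '3.7.5']
print(new_scheme)  # ['3.9']
```

The same contrast applies to saved bugzilla searches, rss-feeds and email filters: an exact version field is much easier to express there than a pattern.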
Re: [Gluster-devel] NetBSD regression is now netbsd6-regression
Er, typo. That was supposed to say netbsd7-regression :) On Thu, Aug 18, 2016 at 12:32 PM, Emmanuel Dreyfus wrote: > On Thu, Aug 18, 2016 at 12:07:08PM +0530, Nigel Babu wrote: > > As in the case of CentOS yesterday, the NetBSD job is now > netbsd6-regression. > > But we run regressions on the netbsd-7 branch. Smoke tests are run on > netbsd-6. > > -- > Emmanuel Dreyfus > m...@netbsd.org > -- nigelb
Re: [Gluster-devel] NetBSD regression is now netbsd6-regression
On Thu, Aug 18, 2016 at 12:07:08PM +0530, Nigel Babu wrote: > As in the case of CentOS yesterday, the NetBSD job is now netbsd6-regression. But we run regressions on the netbsd-7 branch. Smoke tests are run on netbsd-6. -- Emmanuel Dreyfus m...@netbsd.org