Re: [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2019-03-19 Thread Vijay Bellur
I tried this configuration on my local setup and the test passed fine.

Adding the fuse and write-behind maintainers in Gluster to check if they
are aware of any oddities with using mmap & fuse.
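
In the meantime, one isolation step worth trying -- my suggestion, not a
confirmed diagnosis -- is to turn off the write-behind translator on the
affected volume and rerun the reproducer. A minimal sketch, assuming the
gluster CLI is on PATH and using the 'home' volume from the output below:

import subprocess

# performance.write-behind is the standard volume option for the
# write-behind translator; "home" is the volume from Jim's
# `gluster volume info` output quoted below.
subprocess.run(["gluster", "volume", "set", "home",
                "performance.write-behind", "off"], check=True)

If the mmap write then succeeds, the oddity is likely in write-behind
rather than in FUSE itself.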

Thanks,
Vijay

On Tue, Mar 19, 2019 at 2:21 PM Jim Kinney  wrote:

> Volume Name: home
> Type: Replicate
> Volume ID: 5367adb1-99fc-44c3-98c4-71f7a41e628a
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp,rdma
> Bricks:
> Brick1: bmidata1:/data/glusterfs/home/brick/brick
> Brick2: bmidata2:/data/glusterfs/home/brick/brick
> Options Reconfigured:
> performance.client-io-threads: off
> storage.build-pgfid: on
> cluster.self-heal-daemon: enable
> performance.readdir-ahead: off
> nfs.disable: off
>
>
> There are 11 other volumes and all are similar.
>
>
> On Tue, 2019-03-19 at 13:59 -0700, Vijay Bellur wrote:
>
> Thank you for the reproducer! Can you please let us know the output of
> `gluster volume info`?
>
> Regards,
> Vijay
>
> On Tue, Mar 19, 2019 at 12:53 PM Jim Kinney  wrote:
>
> This python will fail when writing to a file in a glusterfs fuse mounted
> directory.
>
> import mmap
>
> # write a simple example file
> with open("hello.txt", "wb") as f:
>     f.write("Hello Python!\n")
>
> with open("hello.txt", "r+b") as f:
>     # memory-map the file, size 0 means whole file
>     mm = mmap.mmap(f.fileno(), 0)
>     # read content via standard file methods
>     print mm.readline()  # prints "Hello Python!"
>     # read content via slice notation
>     print mm[:5]  # prints "Hello"
>     # update content using slice notation;
>     # note that new content must have same size
>     mm[6:] = " world!\n"
>     # ... and read again using standard file methods
>     mm.seek(0)
>     print mm.readline()  # prints "Hello  world!"
>     # close the map
>     mm.close()
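
The reproducer above is Python 2 (print statements, and str written in
binary mode). For anyone retesting on Python 3, a sketch of the equivalent
-- assuming only that hello.txt sits on the FUSE mount under test -- needs
bytes literals and print():

import mmap

# write a small test file (binary mode needs bytes in Python 3)
with open("hello.txt", "wb") as f:
    f.write(b"Hello Python!\n")

with open("hello.txt", "r+b") as f:
    # memory-map the whole file (length 0 maps everything)
    mm = mmap.mmap(f.fileno(), 0)
    print(mm.readline())   # b'Hello Python!\n'
    print(mm[:5])          # b'Hello'
    # update in place; the replacement must match the mapped size,
    # since an mmap'd region cannot grow or shrink
    mm[6:] = b" world!\n"
    mm.seek(0)
    print(mm.readline())   # b'Hello  world!\n'
    mm.close()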
>
>
>
>
>
>
>
> On Tue, 2019-03-19 at 12:06 -0400, Jim Kinney wrote:
>
> Native mount issue with multiple clients (centos7 glusterfs 3.12).
>
> Seems to hit python 2.7 and 3+. Users try to open file(s) for write in a
> long-running process and the system eventually times out.
>
> Switching to NFS stops the error.
>
> No bug notice yet. Too many pans on the fire :-(
>
> On Tue, 2019-03-19 at 18:42 +0530, Amar Tumballi Suryanarayan wrote:
>
> Hi Jim,
>
> On Tue, Mar 19, 2019 at 6:21 PM Jim Kinney  wrote:
>
>
> Issues with glusterfs fuse mounts cause issues with python file open for
> write. We have to use nfs to avoid this.
>
> Really want to see better back-end tools to facilitate cleaning up of
> glusterfs failures. If the system is going to use hard-linked IDs, we need a
> mapping of ID to file to fix things. That option is now on for all exports;
> it should be the default. If a host is down and users delete files by the
> thousands, gluster _never_ catches up. Finding path names for IDs across
> even a 40TB mount, much less the 200+TB one, is a slow process. A network
> outage of 2 minutes and one system didn't get the call to recursively
> delete several dozen directories, each with several thousand files.
>
>
>
> Are you talking about some issues in geo-replication module or some other
> application using native mount? Happy to take the discussion forward about
> these issues.
>
> Are there any bugs open on this?
>
> Thanks,
> Amar
>
>
>
>
> On March 19, 2019 8:09:01 AM EDT, Hans Henrik Happe  wrote:
>
> Hi,
>
> Looking into something else I fell over this proposal. Being a shop that
> is going into "Leaving GlusterFS" mode, I thought I would give my two
> cents.
>
> While being partially an HPC shop with a few Lustre filesystems,  we chose
> GlusterFS for an archiving solution (2-3 PB), because we could find files
> in the underlying ZFS filesystems if GlusterFS went sour.
>
> We have used the access to the underlying files plenty, because of the
> continuous instability of GlusterFS. Meanwhile, Lustre has been almost
> effortless to run, and mainly for that reason we are planning to move away
> from GlusterFS.
>
> Reading this proposal kind of underlined that "Leaving GlusterFS" is the
> right thing to do. While I never understood why GlusterFS has been in
> feature-crazy mode instead of stabilizing mode, taking away crucial
> features I don't get. With RoCE, RDMA is getting mainstream. Quotas are
> very useful, even though the current implementation is not perfect.
> Tiering also makes so much sense, but, for large files, not on a per-file
> level.
>
> To be honest we only use quotas. We got scared of trying out new
> performance features that potentially would open up a new batch of issues.
>
> Sorry for being such a buzzkill. I really wanted it to be different.
>
> Cheers,
> Hans Henrik
> On 19/07/2018 08.56, Amar Tumballi wrote:
>
>
> Hi all, Over the last 12 years of Gluster, we have developed many features,
> and continue to support most of them till now. But along the way, we have
> figured out better methods of doing things. Also, we are not actively
> maintaining some of these features. We are now thinking of cleaning

Re: [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2019-03-19 Thread Jim Kinney
Volume Name: home
Type: Replicate
Volume ID: 5367adb1-99fc-44c3-98c4-71f7a41e628a
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp,rdma
Bricks:
Brick1: bmidata1:/data/glusterfs/home/brick/brick
Brick2: bmidata2:/data/glusterfs/home/brick/brick
Options Reconfigured:
performance.client-io-threads: off
storage.build-pgfid: on
cluster.self-heal-daemon: enable
performance.readdir-ahead: off
nfs.disable: off


There are 11 other volumes and all are similar.


On Tue, 2019-03-19 at 13:59 -0700, Vijay Bellur wrote:
> Thank you for the reproducer! Can you please let us know the output
> of `gluster volume info`?
> Regards,
> Vijay
> 
> On Tue, Mar 19, 2019 at 12:53 PM Jim Kinney 
> wrote:
> > This python will fail when writing to a file in a glusterfs fuse
> > mounted directory.
> > 
> > import mmap
> >
> > # write a simple example file
> > with open("hello.txt", "wb") as f:
> >     f.write("Hello Python!\n")
> >
> > with open("hello.txt", "r+b") as f:
> >     # memory-map the file, size 0 means whole file
> >     mm = mmap.mmap(f.fileno(), 0)
> >     # read content via standard file methods
> >     print mm.readline()  # prints "Hello Python!"
> >     # read content via slice notation
> >     print mm[:5]  # prints "Hello"
> >     # update content using slice notation;
> >     # note that new content must have same size
> >     mm[6:] = " world!\n"
> >     # ... and read again using standard file methods
> >     mm.seek(0)
> >     print mm.readline()  # prints "Hello  world!"
> >     # close the map
> >     mm.close()
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > On Tue, 2019-03-19 at 12:06 -0400, Jim Kinney wrote:
> > > Native mount issue with multiple clients (centos7 glusterfs
> > > 3.12).
> > > Seems to hit python 2.7 and 3+. Users try to open file(s) for
> > > write in a long-running process and the system eventually times out.
> > > Switching to NFS stops the error.
> > > No bug notice yet. Too many pans on the fire :-(
> > > On Tue, 2019-03-19 at 18:42 +0530, Amar Tumballi Suryanarayan
> > > wrote:
> > > > Hi Jim,
> > > > 
> > > > On Tue, Mar 19, 2019 at 6:21 PM Jim Kinney <
> > > > jim.kin...@gmail.com> wrote:
> > > > > 
> > > > >   
> > > > >   
> > > > > Issues with glusterfs fuse mounts cause issues with python
> > > > > file open for write. We have to use nfs to avoid this.
> > > > >
> > > > > Really want to see better back-end tools to facilitate
> > > > > cleaning up of glusterfs failures. If the system is going to use
> > > > > hard-linked IDs, we need a mapping of ID to file to fix things.
> > > > > That option is now on for all exports; it should be the
> > > > > default. If a host is down and users delete files by the
> > > > > thousands, gluster _never_ catches up. Finding path names for
> > > > > IDs across even a 40TB mount, much less the 200+TB one, is a
> > > > > slow process. A network outage of 2 minutes and one system
> > > > > didn't get the call to recursively delete several dozen
> > > > > directories, each with several thousand files.
> > > > > 
> > > > > 
> > > > 
> > > > Are you talking about some issues in geo-replication module or
> > > > some other application using native mount? Happy to take the
> > > > discussion forward about these issues. 
> > > > Are there any bugs open on this?
> > > > Thanks,
> > > > Amar
> > > > > On March 19, 2019 8:09:01 AM EDT, Hans Henrik Happe <
> > > > > ha...@nbi.dk> wrote:
> > > > > > Hi,
> > > > > >
> > > > > > Looking into something else I fell over this proposal. Being a
> > > > > > shop that is going into "Leaving GlusterFS" mode, I thought I
> > > > > > would give my two cents.
> > > > > >
> > > > > > While being partially an HPC shop with a few Lustre filesystems,
> > > > > > we chose GlusterFS for an archiving solution (2-3 PB), because we
> > > > > > could find files in the underlying ZFS filesystems if GlusterFS
> > > > > > went sour.
> > > > > >
> > > > > > We have used the access to the underlying files plenty, because
> > > > > > of the continuous instability of GlusterFS. Meanwhile, Lustre
> > > > > > has been almost effortless to run, and mainly for that reason we
> > > > > > are planning to move away from GlusterFS.
> > > > > >
> > > > > > Reading this proposal kind of underlined that "Leaving GlusterFS"
> > > > > > is the right thing to do. While I never understood why GlusterFS
> > > > > > has been in feature-crazy mode instead of stabilizing mode, taking
> > > > > > away crucial features I don't get. With RoCE, RDMA is getting
> > > > > > mainstream. Quotas are very useful, even though the current
> > > > > > implementation is not perfect. Tiering also makes so much sense,
> > > > > > but, for large files, not on a per-file level.
> > > > > >
> > > > > > To be honest

Re: [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2019-03-19 Thread Vijay Bellur
Thank you for the reproducer! Can you please let us know the output of
`gluster volume info`?

Regards,
Vijay

On Tue, Mar 19, 2019 at 12:53 PM Jim Kinney  wrote:

> This python will fail when writing to a file in a glusterfs fuse mounted
> directory.
>
> import mmap
>
> # write a simple example file
> with open("hello.txt", "wb") as f:
>     f.write("Hello Python!\n")
>
> with open("hello.txt", "r+b") as f:
>     # memory-map the file, size 0 means whole file
>     mm = mmap.mmap(f.fileno(), 0)
>     # read content via standard file methods
>     print mm.readline()  # prints "Hello Python!"
>     # read content via slice notation
>     print mm[:5]  # prints "Hello"
>     # update content using slice notation;
>     # note that new content must have same size
>     mm[6:] = " world!\n"
>     # ... and read again using standard file methods
>     mm.seek(0)
>     print mm.readline()  # prints "Hello  world!"
>     # close the map
>     mm.close()
>
>
>
>
>
>
>
> On Tue, 2019-03-19 at 12:06 -0400, Jim Kinney wrote:
>
> Native mount issue with multiple clients (centos7 glusterfs 3.12).
>
> Seems to hit python 2.7 and 3+. Users try to open file(s) for write in a
> long-running process and the system eventually times out.
>
> Switching to NFS stops the error.
>
> No bug notice yet. Too many pans on the fire :-(
>
> On Tue, 2019-03-19 at 18:42 +0530, Amar Tumballi Suryanarayan wrote:
>
> Hi Jim,
>
> On Tue, Mar 19, 2019 at 6:21 PM Jim Kinney  wrote:
>
>
> Issues with glusterfs fuse mounts cause issues with python file open for
> write. We have to use nfs to avoid this.
>
> Really want to see better back-end tools to facilitate cleaning up of
> glusterfs failures. If the system is going to use hard-linked IDs, we need a
> mapping of ID to file to fix things. That option is now on for all exports;
> it should be the default. If a host is down and users delete files by the
> thousands, gluster _never_ catches up. Finding path names for IDs across
> even a 40TB mount, much less the 200+TB one, is a slow process. A network
> outage of 2 minutes and one system didn't get the call to recursively
> delete several dozen directories, each with several thousand files.
>
>
>
> Are you talking about some issues in geo-replication module or some other
> application using native mount? Happy to take the discussion forward about
> these issues.
>
> Are there any bugs open on this?
>
> Thanks,
> Amar
>
>
>
>
> On March 19, 2019 8:09:01 AM EDT, Hans Henrik Happe  wrote:
>
> Hi,
>
> Looking into something else I fell over this proposal. Being a shop that
> is going into "Leaving GlusterFS" mode, I thought I would give my two
> cents.
>
> While being partially an HPC shop with a few Lustre filesystems,  we chose
> GlusterFS for an archiving solution (2-3 PB), because we could find files
> in the underlying ZFS filesystems if GlusterFS went sour.
>
> We have used the access to the underlying files plenty, because of the
> continuous instability of GlusterFS. Meanwhile, Lustre has been almost
> effortless to run, and mainly for that reason we are planning to move away
> from GlusterFS.
>
> Reading this proposal kind of underlined that "Leaving GlusterFS" is the
> right thing to do. While I never understood why GlusterFS has been in
> feature-crazy mode instead of stabilizing mode, taking away crucial
> features I don't get. With RoCE, RDMA is getting mainstream. Quotas are
> very useful, even though the current implementation is not perfect.
> Tiering also makes so much sense, but, for large files, not on a per-file
> level.
>
> To be honest we only use quotas. We got scared of trying out new
> performance features that potentially would open up a new batch of issues.
>
> Sorry for being such a buzzkill. I really wanted it to be different.
>
> Cheers,
> Hans Henrik
> On 19/07/2018 08.56, Amar Tumballi wrote:
>
>
> Hi all,
>
> Over the last 12 years of Gluster, we have developed many features, and
> continue to support most of them till now. But along the way, we have
> figured out better methods of doing things. Also, we are not actively
> maintaining some of these features.
>
> We are now thinking of cleaning up some of these ‘unsupported’ features,
> and marking them as ‘SunSet’ (i.e., they would be totally taken out of the
> codebase in following releases) in the next upcoming release, v5.0. The
> release notes will provide options for smoothly migrating to the supported
> configurations. If you are using any of these features, do let us know, so
> that we can help you with ‘migration’. Also, we are happy to guide new
> developers to work on those components which are not actively being
> maintained by the current set of developers.
>
> List of features hitting sunset:
>
> ‘cluster/stripe’ translator:
>
> This translator was developed very early in the evolution of GlusterFS,
> and addressed one of the very common questions of a distributed FS, which
> is “What happens if one of my files is bigger than the available brick?
> Say, I have a 2 TB hard drive, exported in glusterfs, and my file is

Re: [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2019-03-19 Thread Jim Kinney
This python will fail when writing to a file in a glusterfs fuse
mounted directory.
import mmap

# write a simple example file
with open("hello.txt", "wb") as f:
    f.write("Hello Python!\n")

with open("hello.txt", "r+b") as f:
    # memory-map the file, size 0 means whole file
    mm = mmap.mmap(f.fileno(), 0)
    # read content via standard file methods
    print mm.readline()  # prints "Hello Python!"
    # read content via slice notation
    print mm[:5]  # prints "Hello"
    # update content using slice notation;
    # note that new content must have same size
    mm[6:] = " world!\n"
    # ... and read again using standard file methods
    mm.seek(0)
    print mm.readline()  # prints "Hello  world!"
    # close the map
    mm.close()





On Tue, 2019-03-19 at 12:06 -0400, Jim Kinney wrote:
> Native mount issue with multiple clients (centos7 glusterfs 3.12).
> Seems to hit python 2.7 and 3+. Users try to open file(s) for write
> in a long-running process and the system eventually times out.
> Switching to NFS stops the error.
> No bug notice yet. Too many pans on the fire :-(
> On Tue, 2019-03-19 at 18:42 +0530, Amar Tumballi Suryanarayan wrote:
> > Hi Jim,
> > 
> > On Tue, Mar 19, 2019 at 6:21 PM Jim Kinney 
> > wrote:
> > > 
> > >   
> > >   
> > > Issues with glusterfs fuse mounts cause issues with python file
> > > open for write. We have to use nfs to avoid this.
> > >
> > > Really want to see better back-end tools to facilitate cleaning
> > > up of glusterfs failures. If the system is going to use hard-linked
> > > IDs, we need a mapping of ID to file to fix things. That option is
> > > now on for all exports; it should be the default. If a host is
> > > down and users delete files by the thousands, gluster _never_
> > > catches up. Finding path names for IDs across even a 40TB mount,
> > > much less the 200+TB one, is a slow process. A network outage of
> > > 2 minutes and one system didn't get the call to recursively
> > > delete several dozen directories, each with several thousand
> > > files.
> > > 
> > > 
> > 
> > Are you talking about some issues in geo-replication module or some
> > other application using native mount? Happy to take the discussion
> > forward about these issues. 
> > Are there any bugs open on this?
> > Thanks,
> > Amar
> > > On March 19, 2019 8:09:01 AM EDT, Hans Henrik Happe <
> > > ha...@nbi.dk> wrote:
> > > > Hi,
> > > >
> > > > Looking into something else I fell over this proposal. Being a
> > > > shop that is going into "Leaving GlusterFS" mode, I thought I
> > > > would give my two cents.
> > > >
> > > > While being partially an HPC shop with a few Lustre filesystems,
> > > > we chose GlusterFS for an archiving solution (2-3 PB), because we
> > > > could find files in the underlying ZFS filesystems if GlusterFS
> > > > went sour.
> > > >
> > > > We have used the access to the underlying files plenty, because
> > > > of the continuous instability of GlusterFS. Meanwhile, Lustre has
> > > > been almost effortless to run, and mainly for that reason we are
> > > > planning to move away from GlusterFS.
> > > >
> > > > Reading this proposal kind of underlined that "Leaving GlusterFS"
> > > > is the right thing to do. While I never understood why GlusterFS
> > > > has been in feature-crazy mode instead of stabilizing mode, taking
> > > > away crucial features I don't get. With RoCE, RDMA is getting
> > > > mainstream. Quotas are very useful, even though the current
> > > > implementation is not perfect. Tiering also makes so much sense,
> > > > but, for large files, not on a per-file level.
> > > >
> > > > To be honest we only use quotas. We got scared of trying out new
> > > > performance features that potentially would open up a new batch
> > > > of issues.
> > > >
> > > > Sorry for being such a buzzkill. I really wanted it to be
> > > > different.
> > > >
> > > > Cheers,
> > > > Hans Henrik
> > > > 
> > > > 
> > > > On 19/07/2018 08.56, Amar Tumballi wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > Over the last 12 years of Gluster, we have developed many
> > > > > features, and continue to support most of them till now. But
> > > > > along the way, we have figured out better methods of doing
> > > > > things. Also, we are not actively maintaining some of these
> > > > > features.
> > > > >
> > > > > We are now thinking of cleaning up some of these ‘unsupported’
> > > > > features, and marking them as ‘SunSet’ (i.e., they would be
> > > > > totally taken out of the codebase in following releases) in the
> > > > > next upcoming release, v5.0. The release notes will provide
> > > > > options for smoothly migrating to the supported configurations.
> > > 

Re: [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2019-03-19 Thread Jim Kinney
Native mount issue with multiple clients (centos7 glusterfs 3.12).
Seems to hit python 2.7 and 3+. Users try to open file(s) for write in a
long-running process and the system eventually times out.
Switching to NFS stops the error.
No bug notice yet. Too many pans on the fire :-(
On Tue, 2019-03-19 at 18:42 +0530, Amar Tumballi Suryanarayan wrote:
> Hi Jim,
> 
> On Tue, Mar 19, 2019 at 6:21 PM Jim Kinney 
> wrote:
> > 
> >   
> >   
> > Issues with glusterfs fuse mounts cause issues with python file
> > open for write. We have to use nfs to avoid this.
> >
> > Really want to see better back-end tools to facilitate cleaning up
> > of glusterfs failures. If the system is going to use hard-linked IDs,
> > we need a mapping of ID to file to fix things. That option is now on
> > for all exports; it should be the default. If a host is down and
> > users delete files by the thousands, gluster _never_ catches up.
> > Finding path names for IDs across even a 40TB mount, much less the
> > 200+TB one, is a slow process. A network outage of 2 minutes and
> > one system didn't get the call to recursively delete several dozen
> > directories, each with several thousand files.
> > 
> > 
> 
> Are you talking about some issues in geo-replication module or some
> other application using native mount? Happy to take the discussion
> forward about these issues. 
> Are there any bugs open on this?
> Thanks,
> Amar
> > On March 19, 2019 8:09:01 AM EDT, Hans Henrik Happe <
> > ha...@nbi.dk> wrote:
> > > Hi,
> > >
> > > Looking into something else I fell over this proposal. Being a
> > > shop that is going into "Leaving GlusterFS" mode, I thought I
> > > would give my two cents.
> > >
> > > While being partially an HPC shop with a few Lustre filesystems,
> > > we chose GlusterFS for an archiving solution (2-3 PB), because we
> > > could find files in the underlying ZFS filesystems if GlusterFS
> > > went sour.
> > >
> > > We have used the access to the underlying files plenty, because
> > > of the continuous instability of GlusterFS. Meanwhile, Lustre has
> > > been almost effortless to run, and mainly for that reason we are
> > > planning to move away from GlusterFS.
> > >
> > > Reading this proposal kind of underlined that "Leaving GlusterFS"
> > > is the right thing to do. While I never understood why GlusterFS
> > > has been in feature-crazy mode instead of stabilizing mode, taking
> > > away crucial features I don't get. With RoCE, RDMA is getting
> > > mainstream. Quotas are very useful, even though the current
> > > implementation is not perfect. Tiering also makes so much sense,
> > > but, for large files, not on a per-file level.
> > >
> > > To be honest we only use quotas. We got scared of trying out new
> > > performance features that potentially would open up a new batch
> > > of issues.
> > >
> > > Sorry for being such a buzzkill. I really wanted it to be
> > > different.
> > >
> > > Cheers,
> > > Hans Henrik
> > > 
> > > 
> > > On 19/07/2018 08.56, Amar Tumballi wrote:
> > >
> > > > Hi all,
> > > >
> > > > Over the last 12 years of Gluster, we have developed many
> > > > features, and continue to support most of them till now. But
> > > > along the way, we have figured out better methods of doing
> > > > things. Also, we are not actively maintaining some of these
> > > > features.
> > > >
> > > > We are now thinking of cleaning up some of these ‘unsupported’
> > > > features, and marking them as ‘SunSet’ (i.e., they would be
> > > > totally taken out of the codebase in following releases) in the
> > > > next upcoming release, v5.0. The release notes will provide
> > > > options for smoothly migrating to the supported configurations.
> > > >
> > > > If you are using any of these features, do let us know, so that
> > > > we can help you with ‘migration’. Also, we are happy to guide new
> > > > developers to work on those components which are not actively
> > > > being maintained by the current set of developers.
> > > >
> > > > List of features hitting sunset:
> > > >
> > > > ‘cluster/stripe’ translator:
> > > >
> > > > This translator was developed very early in the evolution of
> > > > GlusterFS, and addressed one of the very common questions of a
> > > > distributed FS, which is “What happens if one of my files is
> > > > bigger than the available brick? Say, I have a 2 TB hard drive,
> > > > exported in glusterfs, and my file is 3 TB”. While it solved the
> > > > purpose, it was very hard to handle failure scenarios and give a
> > > > really good experience to our users with this feature. Over
> > > > time, Gluster solved the problem with its ‘Shard’ feature, which
> > > > solves the problem in a much better way, and
> > > > provides much

Re: [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2019-03-19 Thread Hans Henrik Happe

On 19/03/2019 14.10, Amar Tumballi Suryanarayan wrote:
> Hi Hans,
>
> Thanks for the honest feedback. Appreciate this.
>
> On Tue, Mar 19, 2019 at 5:39 PM Hans Henrik Happe  wrote:
>
> Hi,
>
> Looking into something else I fell over this proposal. Being a
> shop that is going into "Leaving GlusterFS" mode, I thought I
> would give my two cents.
>
> While being partially an HPC shop with a few Lustre filesystems, 
> we chose GlusterFS for an archiving solution (2-3 PB), because we
> could find files in the underlying ZFS filesystems if GlusterFS
> went sour.
>
> We have used the access to the underlying files plenty, because of
> the continuous instability of GlusterFS. Meanwhile, Lustre has
> been almost effortless to run, and mainly for that reason we are
> planning to move away from GlusterFS.
>
> Reading this proposal kind of underlined that "Leaving GlusterFS"
> is the right thing to do. While I never understood why GlusterFS
> has been in feature-crazy mode instead of stabilizing mode, taking
> away crucial features I don't get. With RoCE, RDMA is getting
> mainstream. Quotas are very useful, even though the current
> implementation is not perfect. Tiering also makes so much sense,
> but, for large files, not on a per-file level.
>
>
> It is a fair concern to raise, and removing existing features is
> not a good thing most of the time. But one thing we have noticed over
> the years is that the features which we develop but do not take to
> completion cause the major heartburn. People think a feature is
> present, and it has already been a few years since it was introduced,
> but if the developers are not working on it, users will always feel
> that the product doesn't work, because that one feature didn't work.
>
> Other than Quota in the proposal email, for all other features, even
> though we have *some* users, we are inclined towards deprecating them,
> considering the project's overall goal of stability in the longer run.
>
>
> To be honest we only use quotas. We got scared of trying out new
> performance features that potentially would open up a new batch of
> issues.
>
> About Quota, we heard enough voices, so we will make sure we keep it.
> The original email was a 'Proposal', and hence these opinions matter
> for the decision.
>
> Sorry for being such a buzzkill. I really wanted it to be different.
>
> We hear you. Please let us know one thing: which versions did you try?
>
We started at 3.6, 4 years ago. Now we are at 3.12.15, working towards
moving to 4.1.latest.
> We hope that in the coming months, our recent focus on Stability and
> Technical debt reduction will help you to re-look at Gluster after
> some time.
That's great to hear.
>
> Cheers,
> Hans Henrik
>
> On 19/07/2018 08.56, Amar Tumballi wrote:
>>
>> Hi all,
>>
>> Over the last 12 years of Gluster, we have developed many features,
>> and continue to support most of them till now. But along the way,
>> we have figured out better methods of doing things. Also, we are
>> not actively maintaining some of these features.
>>
>> We are now thinking of cleaning up some of these ‘unsupported’
>> features, and marking them as ‘SunSet’ (i.e., they would be totally
>> taken out of the codebase in following releases) in the next
>> upcoming release, v5.0. The release notes will provide options for
>> smoothly migrating to the supported configurations.
>>
>> If you are using any of these features, do let us know, so that
>> we can help you with ‘migration’. Also, we are happy to guide
>> new developers to work on those components which are not actively
>> being maintained by the current set of developers.
>>
>>
>>   List of features hitting sunset:
>>
>>
>> ‘cluster/stripe’ translator:
>>
>> This translator was developed very early in the evolution of
>> GlusterFS, and addressed one of the very common questions of a
>> distributed FS, which is “What happens if one of my files is
>> bigger than the available brick? Say, I have a 2 TB hard drive,
>> exported in glusterfs, and my file is 3 TB”. While it solved the
>> purpose, it was very hard to handle failure scenarios and give a
>> really good experience to our users with this feature. Over
>> time, Gluster solved the problem with its ‘Shard’ feature, which
>> solves the problem in a much better way, and provides a much
>> better solution with the existing well-supported stack. Hence the
>> proposal for deprecation.
>>
>> If you are using this feature, then do write to us, as it needs a
>> proper migration from the existing volume to a new fully supported
>> volume type before you upgrade.
>>
>>
>> ‘storage/bd’ translator:
>>
>> This feature got into the code base 5 years back with this patch
>> [1]. The plan was to use a block device
>> directly as a

Re: [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2019-03-19 Thread Amar Tumballi Suryanarayan
Hi Jim,

On Tue, Mar 19, 2019 at 6:21 PM Jim Kinney  wrote:

>
> Issues with glusterfs fuse mounts cause issues with python file open for
> write. We have to use nfs to avoid this.
>
> Really want to see better back-end tools to facilitate cleaning up of
> glusterfs failures. If the system is going to use hard-linked IDs, we need a
> mapping of ID to file to fix things. That option is now on for all exports;
> it should be the default. If a host is down and users delete files by the
> thousands, gluster _never_ catches up. Finding path names for IDs across
> even a 40TB mount, much less the 200+TB one, is a slow process. A network
> outage of 2 minutes and one system didn't get the call to recursively
> delete several dozen directories, each with several thousand files.
>
>
Are you talking about some issues in geo-replication module or some other
application using native mount? Happy to take the discussion forward about
these issues.

Are there any bugs open on this?

Thanks,
Amar


>
>
> On March 19, 2019 8:09:01 AM EDT, Hans Henrik Happe  wrote:
>>
>> Hi,
>>
>> Looking into something else I fell over this proposal. Being a shop that
>> is going into "Leaving GlusterFS" mode, I thought I would give my two
>> cents.
>>
>> While being partially an HPC shop with a few Lustre filesystems,  we
>> chose GlusterFS for an archiving solution (2-3 PB), because we could find
>> files in the underlying ZFS filesystems if GlusterFS went sour.
>>
>> We have used the access to the underlying files plenty, because of the
>> continuous instability of GlusterFS. Meanwhile, Lustre has been almost
>> effortless to run, and mainly for that reason we are planning to move away
>> from GlusterFS.
>>
>> Reading this proposal kind of underlined that "Leaving GlusterFS" is the
>> right thing to do. While I never understood why GlusterFS has been in
>> feature-crazy mode instead of stabilizing mode, taking away crucial
>> features I don't get. With RoCE, RDMA is getting mainstream. Quotas are
>> very useful, even though the current implementation is not perfect.
>> Tiering also makes so much sense, but, for large files, not on a per-file
>> level.
>>
>> To be honest we only use quotas. We got scared of trying out new
>> performance features that potentially would open up a new batch of issues.
>>
>> Sorry for being such a buzzkill. I really wanted it to be different.
>>
>> Cheers,
>> Hans Henrik
>> On 19/07/2018 08.56, Amar Tumballi wrote:
>>
>>
>> Hi all,
>>
>> Over the last 12 years of Gluster, we have developed many features,
>> and continue to support most of them till now. But along the way, we have
>> figured out better methods of doing things. Also, we are not actively
>> maintaining some of these features.
>>
>> We are now thinking of cleaning up some of these ‘unsupported’ features,
>> and marking them as ‘SunSet’ (i.e., they would be totally taken out of the
>> codebase in following releases) in the next upcoming release, v5.0. The
>> release notes will provide options for smoothly migrating to the supported
>> configurations. If you are using any of these features, do let us know, so
>> that we can help you with ‘migration’. Also, we are happy to guide new
>> developers to work on those components which are not actively being
>> maintained by the current set of developers.
>>
>> List of features hitting sunset:
>>
>> ‘cluster/stripe’ translator:
>>
>> This translator was developed very early in the evolution of GlusterFS,
>> and addressed one of the very common questions of a distributed FS, which
>> is “What happens if one of my files is bigger than the available brick?
>> Say, I have a 2 TB hard drive, exported in glusterfs, and my file is 3 TB”.
>> While it solved the purpose, it was very hard to handle failure scenarios
>> and give a really good experience to our users with this feature. Over
>> time, Gluster solved the problem with its ‘Shard’ feature, which solves the
>> problem in a much better way, and provides a much better solution with the
>> existing well-supported stack. Hence the proposal for deprecation. If you
>> are using this feature, then do write to us, as it needs a proper migration
>> from the existing volume to a new fully supported volume type before you
>> upgrade.
>>
>> ‘storage/bd’ translator:
>>
>> This feature got into the code base 5 years back with this patch [1]. The
>> plan was to use a block device directly as a brick, which would help to
>> handle disk-image storage much more easily in glusterfs. As the feature is
>> not getting more contributions, and we are not seeing any user traction on
>> it, we would like to propose it for deprecation. If you are using the
>> feature, plan to move to a supported gluster volume configuration, and have
>> your setup ‘supported’ before upgrading to your new gluster version.
>>
>> ‘RDMA’ transport support:
>>
>> Gluster started supporting RDMA while ib-verbs was still new, and the very
>> high-end infra around that time was using Infiniband. Engineers did work
>> with Mellanox, and got the technology into
>> GlusterFS for

Re: [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2019-03-19 Thread Amar Tumballi Suryanarayan
Hi Hans,

Thanks for the honest feedback. Appreciate this.

On Tue, Mar 19, 2019 at 5:39 PM Hans Henrik Happe  wrote:

> Hi,
>
> Looking into something else I fell over this proposal. Being a shop that
> is going into "Leaving GlusterFS" mode, I thought I would give my two
> cents.
>
> While being partially an HPC shop with a few Lustre filesystems,  we chose
> GlusterFS for an archiving solution (2-3 PB), because we could find files
> in the underlying ZFS filesystems if GlusterFS went sour.
>
> We have used the access to the underlying files plenty, because of the
> continuous instability of GlusterFS. Meanwhile, Lustre has been almost
> effortless to run, and mainly for that reason we are planning to move away
> from GlusterFS.
>
> Reading this proposal kind of underlined that "Leaving GlusterFS" is the
> right thing to do. While I never understood why GlusterFS has been in
> feature-crazy mode instead of stabilizing mode, taking away crucial
> features I don't get. With RoCE, RDMA is getting mainstream. Quotas are
> very useful, even though the current implementation is not perfect.
> Tiering also makes so much sense, but, for large files, not on a per-file
> level.
>
>
It is a fair concern to raise, and removing existing features is not a
good thing most of the time. But one thing we have noticed over the years is
that the features which we develop but do not take to completion cause the
major heartburn. People think a feature is present, and it has already been a
few years since it was introduced, but if the developers are not working on
it, users will always feel that the product doesn't work, because that one
feature didn't work.

Other than Quota in the proposal email, for all other features, even though
we have *some* users, we are inclined towards deprecating them, considering
the project's overall goal of stability in the longer run.


> To be honest we only use quotas. We got scared of trying out new
> performance features that potentially would open up a new batch of issues.
>
About Quota, we heard enough voices, so we will make sure we keep it. The
original email was a 'Proposal', and hence these opinions matter for the
decision.

> Sorry for being such a buzzkill. I really wanted it to be different.
>
We hear you. Please let us know one thing: which versions did you try?

We hope that in the coming months, our recent focus on Stability and
Technical debt reduction will help you to re-look at Gluster after some time.


> Cheers,
> Hans Henrik
> On 19/07/2018 08.56, Amar Tumballi wrote:
>
>
> Hi all,
>
> Over the last 12 years of Gluster, we have developed many features,
> and continue to support most of them till now. But along the way, we have
> figured out better methods of doing things. Also, we are not actively
> maintaining some of these features.
>
> We are now thinking of cleaning up some of these ‘unsupported’ features,
> and marking them as ‘SunSet’ (i.e., they would be totally taken out of the
> codebase in following releases) in the next upcoming release, v5.0. The
> release notes will provide options for smoothly migrating to the supported
> configurations. If you are using any of these features, do let us know, so
> that we can help you with ‘migration’. Also, we are happy to guide new
> developers to work on those components which are not actively being
> maintained by the current set of developers.
>
> List of features hitting sunset:
>
> ‘cluster/stripe’ translator:
>
> This translator was developed very early in the evolution of GlusterFS,
> and addressed one of the very common questions of a distributed FS, which
> is “What happens if one of my files is bigger than the available brick?
> Say, I have a 2 TB hard drive, exported in glusterfs, and my file is 3 TB”.
> While it solved the purpose, it was very hard to handle failure scenarios
> and give a really good experience to our users with this feature. Over
> time, Gluster solved the problem with its ‘Shard’ feature, which solves the
> problem in a much better way, and provides a much better solution with the
> existing well-supported stack. Hence the proposal for deprecation. If you
> are using this feature, then do write to us, as it needs a proper migration
> from the existing volume to a new fully supported volume type before you
> upgrade.
>
> ‘storage/bd’ translator:
>
> This feature got into the code base 5 years back with this patch [1]. The
> plan was to use a block device directly as a brick, which would help to
> handle disk-image storage much more easily in glusterfs. As the feature is
> not getting more contributions, and we are not seeing any user traction on
> it, we would like to propose it for deprecation. If you are using the
> feature, plan to move to a supported gluster volume configuration, and have
> your setup ‘supported’ before upgrading to your new gluster version.
>
> ‘RDMA’ transport support:
>
> Gluster started supporting RDMA while ib-verbs was still new, and the very
> high-end infra around that time was using Infiniband. Engineers did work
> with Mellanox, and got the

Re: [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2019-03-19 Thread Jim Kinney
0">For my uses, the RDMA transport is essential. Much of my storage is used for 
HPC systems and IB is the network layer. We still use v3.12.

Issues with glusterfs fuse mounts cause issues with python file open for write. 
We have to use nfs to avoid this. 

Really want to see better back-end tools to facilitate cleaning up of glusterfs
failures. If the system is going to use hard-linked IDs, we need a mapping of ID
to file to fix things. That option is now on for all exports; it should be the
default. If a host is down and users delete files by the thousands, gluster
_never_ catches up. Finding path names for IDs across even a 40TB mount, much
less the 200+TB one, is a slow process. A network outage of 2 minutes and one
system didn't get the call to recursively delete several dozen directories each
with several thousand files.
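
For what it's worth, the ID-to-file mapping can be brute-forced on the brick
today: every regular file's GFID is kept as a hard link under .glusterfs, so
matching inode numbers recovers the named path. A minimal sketch -- the brick
path is from the volume info earlier in this thread, and the GFID is a
made-up placeholder -- and this full-tree walk is exactly the slow scan
described above, which the parent-GFID xattrs written by storage.build-pgfid
are meant to avoid:

import os

def gfid_to_paths(brick, gfid):
    """Resolve a GFID to paths on one brick by matching inode numbers.

    Regular files are hard-linked at <brick>/.glusterfs/aa/bb/<gfid>,
    so any file sharing that device and inode is the same file.
    """
    link = os.path.join(brick, ".glusterfs", gfid[:2], gfid[2:4], gfid)
    target = os.stat(link)
    hits = []
    for root, dirs, files in os.walk(brick):
        if root == brick:
            # don't descend into the GFID tree itself
            dirs[:] = [d for d in dirs if d != ".glusterfs"]
        for name in files:
            path = os.path.join(root, name)
            try:
                st = os.stat(path)
            except OSError:
                continue
            if (st.st_dev, st.st_ino) == (target.st_dev, target.st_ino):
                hits.append(path)
    return hits

# hypothetical GFID on the brick from the volume info in this thread
print(gfid_to_paths("/data/glusterfs/home/brick/brick",
                    "d3c1e9a0-1b2c-4d5e-8f90-1234567890ab"))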



On March 19, 2019 8:09:01 AM EDT, Hans Henrik Happe  wrote:
>Hi,
>
>Looking into something else I fell over this proposal. Being a shop that
>is going into "Leaving GlusterFS" mode, I thought I would give my two
>cents.
>
>While being partially an HPC shop with a few Lustre filesystems,  we
>chose GlusterFS for an archiving solution (2-3 PB), because we could
>find files in the underlying ZFS filesystems if GlusterFS went sour.
>
>We have used the access to the underlying files plenty, because of the
>continuous instability of GlusterFS. Meanwhile, Lustre has been almost
>effortless to run, and mainly for that reason we are planning to move
>away from GlusterFS.
>
>Reading this proposal kind of underlined that "Leaving GlusterFS" is the
>right thing to do. While I never understood why GlusterFS has been in
>feature-crazy mode instead of stabilizing mode, taking away crucial
>features I don't get. With RoCE, RDMA is getting mainstream. Quotas are
>very useful, even though the current implementation is not perfect.
>Tiering also makes so much sense, but, for large files, not on a
>per-file level.
>
>To be honest we only use quotas. We got scared of trying out new
>performance features that potentially would open up a new batch of
>issues.
>
>Sorry for being such a buzzkill. I really wanted it to be different.
>
>Cheers,
>Hans Henrik
>
>On 19/07/2018 08.56, Amar Tumballi wrote:
>>
>> Hi all,
>>
>> Over the last 12 years of Gluster, we have developed many features, and
>> continue to support most of them till now. But along the way, we have
>> figured out better methods of doing things. Also, we are not actively
>> maintaining some of these features.
>>
>> We are now thinking of cleaning up some of these ‘unsupported’
>> features, and marking them as ‘SunSet’ (i.e., they would be totally taken
>> out of the codebase in following releases) in the next upcoming release,
>> v5.0. The release notes will provide options for smoothly migrating to the
>> supported configurations.
>>
>> If you are using any of these features, do let us know, so that we can
>> help you with ‘migration’. Also, we are happy to guide new developers
>> to work on those components which are not actively being maintained by the
>> current set of developers.
>>
>>
>>   List of features hitting sunset:
>>
>>
>> ‘cluster/stripe’ translator:
>>
>> This translator was developed very early in the evolution of
>> GlusterFS, and addressed one of the very common questions of a
>> distributed FS, which is “What happens if one of my files is bigger
>> than the available brick? Say, I have a 2 TB hard drive, exported in
>> glusterfs, and my file is 3 TB”. While it solved the purpose, it was very
>> hard to handle failure scenarios and give a really good experience to
>> our users with this feature. Over time, Gluster solved the problem
>> with its ‘Shard’ feature, which solves the problem in a much better
>> way, and provides a much better solution with the existing well-supported
>> stack. Hence the proposal for deprecation.
>>
>> If you are using this feature, then do write to us, as it needs a
>> proper migration from the existing volume to a new fully supported volume
>> type before you upgrade.
>>
>>
>> ‘storage/bd’ translator:
>>
>> This feature got into the code base 5 years back with this patch
>> [1]. The plan was to use a block device
>> directly as a brick, which would help to handle disk-image storage
>> much more easily in glusterfs.
>>
>> As the feature is not getting more contributions, and we are not seeing
>> any user traction on it, we would like to propose it for deprecation.
>>
>> If you are using the feature, plan to move to a supported gluster
>> volume configuration, and have your setup ‘supported’ before upgrading
>> to your new gluster version.
>>
>>
>> ‘RDMA’ transport support:
>>
>> Gluster started supporting RDMA while ib-verbs was still new, and the very
>> high-end infra around that time was using Infiniband. Engineers did
>> work with Mellanox, and got the technology into GlusterFS for better
>> data migration and data copy. While current-day kernels support very good
>> speed with

Re: [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2019-03-19 Thread Hans Henrik Happe
Hi,

Looking into something else I fell over this proposal. Being a shop that
is going into "Leaving GlusterFS" mode, I thought I would give my two
cents.

While being partially an HPC shop with a few Lustre filesystems,  we
chose GlusterFS for an archiving solution (2-3 PB), because we could
find files in the underlying ZFS filesystems if GlusterFS went sour.

We have used the access to the underlying files plenty, because of the
continuous instability of GlusterFS. Meanwhile, Lustre has been almost
effortless to run, and mainly for that reason we are planning to move
away from GlusterFS.

Reading this proposal kind of underlined that "Leaving GlusterFS" is the
right thing to do. While I never understood why GlusterFS has been in
feature-crazy mode instead of stabilizing mode, taking away crucial
features I don't get. With RoCE, RDMA is getting mainstream. Quotas are
very useful, even though the current implementation is not perfect.
Tiering also makes so much sense, but, for large files, not on a
per-file level.

To be honest we only use quotas. We got scared of trying out new
performance features that potentially would open up a new batch of issues.

Sorry for being such a buzzkill. I really wanted it to be different.

Cheers,
Hans Henrik

On 19/07/2018 08.56, Amar Tumballi wrote:
>
> Hi all,
>
> Over the last 12 years of Gluster, we have developed many features, and
> continue to support most of them till now. But along the way, we have
> figured out better methods of doing things. Also, we are not actively
> maintaining some of these features.
>
> We are now thinking of cleaning up some of these ‘unsupported’
> features, and marking them as ‘SunSet’ (i.e., they would be totally taken
> out of the codebase in following releases) in the next upcoming release,
> v5.0. The release notes will provide options for smoothly migrating to the
> supported configurations.
>
> If you are using any of these features, do let us know, so that we can
> help you with ‘migration’. Also, we are happy to guide new developers
> to work on those components which are not actively being maintained by
> the current set of developers.
>
>
>   List of features hitting sunset:
>
>
> ‘cluster/stripe’ translator:
>
> This translator was developed very early in the evolution of
> GlusterFS, and addressed one of the very common questions of a
> distributed FS, which is “What happens if one of my files is bigger
> than the available brick? Say, I have a 2 TB hard drive, exported in
> glusterfs, and my file is 3 TB”. While it solved the purpose, it was
> very hard to handle failure scenarios and give a really good
> experience to our users with this feature. Over time, Gluster solved
> the problem with its ‘Shard’ feature, which solves the problem in a
> much better way, and provides a much better solution with the
> existing well-supported stack. Hence the proposal for deprecation.
>
> If you are using this feature, then do write to us, as it needs a
> proper migration from the existing volume to a new fully supported
> volume type before you upgrade.
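
(For anyone planning that stripe-to-shard migration: sharding is a
per-volume option, and it only affects files created after it is enabled.
A minimal sketch, with a hypothetical volume name and the gluster CLI
assumed on PATH:)

import subprocess

VOL = "myvol"  # hypothetical replacement volume for a striped one
# enable sharding; new files are stored as fixed-size shard pieces
subprocess.run(["gluster", "volume", "set", VOL,
                "features.shard", "on"], check=True)
# 64MB is the usual default shard size; set explicitly here just to
# make the tunable visible
subprocess.run(["gluster", "volume", "set", VOL,
                "features.shard-block-size", "64MB"], check=True)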
>
>
> ‘storage/bd’ translator:
>
> This feature got into the code base 5 years back with this patch
> [1]. The plan was to use a block device
> directly as a brick, which would help to handle disk-image storage
> much more easily in glusterfs.
>
> As the feature is not getting more contributions, and we are not seeing
> any user traction on it, we would like to propose it for deprecation.
>
> If you are using the feature, plan to move to a supported gluster
> volume configuration, and have your setup ‘supported’ before upgrading
> to your new gluster version.
>
>
> ‘RDMA’ transport support:
>
> Gluster started supporting RDMA while ib-verbs was still new, and the very
> high-end infra around that time was using Infiniband. Engineers did
> work with Mellanox, and got the technology into GlusterFS for better
> data migration and data copy. While current-day kernels support very good
> speed with the IPoIB module itself, and there is no more bandwidth for
> experts in this area to maintain the feature, we recommend migrating
> over to a TCP (IP-based) network for your volume.
>
> If you are successfully using RDMA transport, do get in touch with us
> to prioritize the migration plan for your volume. The plan is to work on
> this after the release, so by version 6.0, we will have a cleaner
> transport code, which just needs to support one type.
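
(The transport switch itself does not require recreating the volume. A
sketch with a hypothetical volume name, using the config.transport volume
option; the volume has to be offline while the transport changes:)

import subprocess

VOL = "myvol"  # hypothetical volume currently on tcp,rdma
# stop prompts for confirmation; --mode=script answers it
subprocess.run(["gluster", "--mode=script", "volume", "stop", VOL],
               check=True)
subprocess.run(["gluster", "volume", "set", VOL,
                "config.transport", "tcp"], check=True)
subprocess.run(["gluster", "volume", "start", VOL], check=True)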
>
>
> ‘Tiering’ feature
>
> Gluster’s tiering feature was planned to provide an option
> to keep your ‘hot’ data in a different location than your cold data, so
> one can get better performance. While we saw some users for the
> feature, it needs much more attention to be completely bug-free. At
> this time, we do not have any active maintainers for the feature,
> and hence suggest taking it out of the ‘supported’ tag.
>
> If you are willing to take it up and maintain it, do let us know, and
> we are happy to assist you.
>
> If you 

Re: [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2018-07-23 Thread Amar Tumballi
On Mon, Jul 23, 2018 at 8:21 PM, Gudrun Mareike Amedick <
g.amed...@uni-luebeck.de> wrote:

> Hi,
>
> we're planning a dispersed volume with at least 50 project directories.
> Each of those has its own quota ranging between 0.1TB and 200TB. Comparing
> XFS project quotas over several servers and bricks to make sure their total
> matches the desired value doesn't really sound practical. It would probably
> be possible to create and maintain 50 volumes and more, but it doesn't seem
> to be a desirable solution. The quotas aren't fixed and resizing a volume is
> not as trivial as changing the quota.
>
> Quota was in the past and still is a very comfortable way to solve this.
>
> But what is the new recommended way for such a setting when the quota is
> going to be deprecated?
>
>
Thanks for the feedback. Helps us to prioritize. Will get back on this.

-Amar



> Kind regards
>
> Gudrun
> On Thursday, 19.07.2018 at 12:26 +0530, Amar Tumballi wrote:
> > Hi all,
> >
> > Over the last 12 years of Gluster, we have developed many features, and
> > continue to support most of them till now. But along the way, we have
> > figured out better methods of doing things. Also, we are not actively
> > maintaining some of these features.
> >
> > We are now thinking of cleaning up some of these ‘unsupported’ features,
> > and marking them as ‘SunSet’ (i.e., they would be totally taken out of the
> > codebase in following releases) in the next upcoming release, v5.0. The
> > release notes will provide options for smoothly migrating to the supported
> > configurations.
> >
> > If you are using any of these features, do let us know, so that we can
> > help you with ‘migration’. Also, we are happy to guide new developers to
> > work on those components which are not actively being maintained by the
> > current set of developers.
> >
> > List of features hitting sunset:
> >
> > ‘cluster/stripe’ translator:
> >
> > This translator was developed very early in the evolution of GlusterFS,
> > and addressed one of the very common questions of a distributed FS, which
> > is “What happens if one of my files is bigger than the available brick?
> > Say, I have a 2 TB hard drive, exported in glusterfs, and my file is 3
> > TB”. While it solved the purpose, it was very hard to handle failure
> > scenarios and give a really good experience to our users with this
> > feature. Over time, Gluster solved the problem with its ‘Shard’ feature,
> > which solves the problem in a much better way, and provides a much better
> > solution with the existing well-supported stack. Hence the proposal for
> > deprecation.
> >
> > If you are using this feature, then do write to us, as it needs a proper
> > migration from the existing volume to a new fully supported volume type
> > before you upgrade.
> >
> > ‘storage/bd’ translator:
> >
> > This feature got into the code base 5 years back with this patch[1].
> > The plan was to use a block device directly as a brick, which would help
> > to handle disk-image storage much more easily in glusterfs.
> >
> > As the feature is not getting more contributions, and we are not seeing
> > any user traction on it, we would like to propose it for deprecation.
> >
> > If you are using the feature, plan to move to a supported gluster volume
> > configuration, and have your setup ‘supported’ before upgrading to your
> > new gluster version.
> >
> > ‘RDMA’ transport support:
> >
> > Gluster started supporting RDMA while ib-verbs was still new, and the
> > very high-end infra around that time was using Infiniband. Engineers did
> > work with Mellanox, and got the technology into GlusterFS for better data
> > migration and data copy. While current-day kernels support very good
> > speed with the IPoIB module itself, and there is no more bandwidth for
> > experts in this area to maintain the feature, we recommend migrating over
> > to a TCP (IP-based) network for your volume.
> >
> > If you are successfully using RDMA transport, do get in touch with us to
> > prioritize the migration plan for your volume. The plan is to work on
> > this after the release, so by version 6.0, we will have a cleaner
> > transport code, which just needs to support one type.
> >
> > ‘Tiering’ feature
> >
> > Gluster’s tiering feature was planned to provide an option to keep your
> > ‘hot’ data in a different location than your cold data, so one can get
> > better performance. While we saw some users for the feature, it needs
> > much more attention to be completely bug-free. At this time, we do not
> > have any active maintainers for the feature, and hence suggest taking it
> > out of the ‘supported’ tag.
> >
> > If you are willing to take it up and maintain it, do let us know, and we
> > are happy to assist you.
> >
> > If you are already using the tiering feature, before upgrading, make sure
> > to do gluster volume tier detach on all the bricks before upgrading to
> > the next release. Also, we recommend you use features like dmcache on
> > your LVM setup to get the best performance from bricks.
> >
> > ‘Quota’
> >
> > This is a call-out for the ‘Quota’ feature,

Re: [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2018-07-23 Thread Gudrun Mareike Amedick
Hi,

we're planning a dispersed volume with at least 50 project directories. Each of 
those has its own quota ranging between 0.1TB and 200TB. Comparing XFS
project quotas over several servers and bricks to make sure their total matches 
the desired value doesn't really sound practical. It would probably be
possible to create and maintain 50 volumes and more, but it doesn't seem to be 
a desirable solution. The quotas aren't fixed and resizing a volume is
not as trivial as changing the quota. 

Quota was in the past and still is a very comfortable way to solve this.
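
For readers skimming the thread, the workflow being defended here is
gluster's directory-level quota: one volume, per-directory limits. A
minimal sketch, with a hypothetical volume name and project paths
(paths are relative to the volume root):

import subprocess

VOL = "projects"  # hypothetical volume name
subprocess.run(["gluster", "volume", "quota", VOL, "enable"], check=True)
# hard limits per project directory inside the single volume
for path, limit in [("/alpha", "200TB"), ("/beta", "100GB")]:  # 100GB = 0.1TB
    subprocess.run(["gluster", "volume", "quota", VOL,
                    "limit-usage", path, limit], check=True)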

But what is the new recommended way for such a setting when the quota is going 
to be deprecated?

Kind regards

Gudrun
On Thursday, 19.07.2018 at 12:26 +0530, Amar Tumballi wrote:
> Hi all,
>
> Over the last 12 years of Gluster, we have developed many features, and
> continue to support most of them till now. But along the way, we have
> figured out better methods of doing things. Also, we are not actively
> maintaining some of these features.
>
> We are now thinking of cleaning up some of these ‘unsupported’ features, and
> marking them as ‘SunSet’ (i.e., they would be totally taken out of the
> codebase in following releases) in the next upcoming release, v5.0. The
> release notes will provide options for smoothly migrating to the supported
> configurations.
>
> If you are using any of these features, do let us know, so that we can help
> you with ‘migration’. Also, we are happy to guide new developers to work on
> those components which are not actively being maintained by the current set
> of developers.
>
> List of features hitting sunset:
>
> ‘cluster/stripe’ translator:
>
> This translator was developed very early in the evolution of GlusterFS, and
> addressed one of the very common questions of a distributed FS, which is
> “What happens if one of my files is bigger than the available brick? Say, I
> have a 2 TB hard drive, exported in glusterfs, and my file is 3 TB”. While it
> solved the purpose, it was very hard to handle failure scenarios and give a
> really good experience to our users with this feature. Over time, Gluster
> solved the problem with its ‘Shard’ feature, which solves the problem in a
> much better way, and provides a much better solution with the existing
> well-supported stack. Hence the proposal for deprecation.
>
> If you are using this feature, then do write to us, as it needs a proper
> migration from the existing volume to a new fully supported volume type
> before you upgrade.
>
> ‘storage/bd’ translator:
>
> This feature got into the code base 5 years back with this patch[1]. The
> plan was to use a block device directly as a brick, which would help to
> handle disk-image storage much more easily in glusterfs.
>
> As the feature is not getting more contributions, and we are not seeing any
> user traction on it, we would like to propose it for deprecation.
>
> If you are using the feature, plan to move to a supported gluster volume
> configuration, and have your setup ‘supported’ before upgrading to your new
> gluster version.
>
> ‘RDMA’ transport support:
>
> Gluster started supporting RDMA while ib-verbs was still new, and the very
> high-end infra around that time was using Infiniband. Engineers did work
> with Mellanox, and got the technology into GlusterFS for better data
> migration and data copy. While current-day kernels support very good speed
> with the IPoIB module itself, and there is no more bandwidth for experts in
> this area to maintain the feature, we recommend migrating over to a TCP
> (IP-based) network for your volume.
>
> If you are successfully using RDMA transport, do get in touch with us to
> prioritize the migration plan for your volume. The plan is to work on this
> after the release, so by version 6.0, we will have a cleaner transport code,
> which just needs to support one type.
>
> ‘Tiering’ feature
>
> Gluster’s tiering feature was planned to provide an option to keep your
> ‘hot’ data in a different location than your cold data, so one can get
> better performance. While we saw some users for the feature, it needs much
> more attention to be completely bug-free. At this time, we do not have any
> active maintainers for the feature, and hence suggest taking it out of the
> ‘supported’ tag.
>
> If you are willing to take it up and maintain it, do let us know, and we are
> happy to assist you.
>
> If you are already using the tiering feature, before upgrading, make sure to
> do gluster volume tier detach on all the bricks before upgrading to the next
> release. Also, we recommend you use features like dmcache on your LVM setup
> to get the best performance from bricks.
> 
> ‘Quota’
> 
> This is a call out for ‘Quota’ feature, to let you all know that it will be 
> ‘no new development’ state. While this feature is ‘actively’ in use by
> many people, the challenges we have in accounting mechanisms involved, has 
> made it hard to achieve good performance with the feature. Also, the
> amount of extended 

Re: [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2018-07-20 Thread Amar Tumballi
On Thu, Jul 19, 2018 at 6:46 PM, mabi  wrote:

> Hi Amar,
>
> Just wanted to say that I think the quota feature in GlusterFS is really
> useful. In my case I use it on one volume where I have many cloud
> installations (mostly files) for different people, and all of these need a
> different quota set on a specific directory. [...]
>
Thanks for the feedback. We will consider this use-case.


> Best regards,
> M.

Re: [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2018-07-19 Thread mabi
Hi Amar,

Just wanted to say that I think the quota feature in GlusterFS is really useful. In my case I use it on one volume where I have many cloud installations (mostly files) for different people, and all of these need a different quota set on a specific directory. The GlusterFS quota allows me to manage that nicely, which would not be possible in the application directly. It would really be an overhead for me to have, for example, one volume per installation just to set the max size like that.
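A sketch of that per-directory management, with placeholder volume and directory names:

    # one limit per installation directory on a single shared volume
    gluster volume quota cloudvol limit-usage /customerA 50GB
    gluster volume quota cloudvol limit-usage /customerB 100GB
    # review all configured limits and current usage in one place
    gluster volume quota cloudvol list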

I hope that this feature can continue to exist.

Best regards,
M.

‐‐‐ Original Message ‐‐‐
On July 19, 2018 8:56 AM, Amar Tumballi  wrote:

> [...]

Re: [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2018-07-19 Thread Amar Tumballi
On Thu, Jul 19, 2018 at 6:06 PM, Jim Kinney  wrote:

> Too bad the RDMA will be abandoned. It's the perfect transport for
> intranode processing and data sync.
>
>


> I currently use RDMA on a computational cluster between nodes and gluster
> storage. The older IB cards will support 10G IP and 40G IB. I've had some
> success with connectivity but am still faltering with fuse performance. As
> soon as some retired gear is reconnected I'll have a test bed for HA NFS
> over RDMA to computational cluster and 10G IP to non-cluster systems.
>
> But it looks like Gluster 6 is a ways away, so maybe I'll get more hardware
> or time to pitch in some code after grokking enough IB.
>
>
We are happy to continue making releases with RDMA for some more time if there are users. The "proposal" is to make sure we give enough of a heads-up that the experts in that area don't have the cycles to make any more enhancements to the feature.



> Thanks for the heads up and all the work to date.
>

Glad to hear back from you! It makes us realize there are things we haven't touched in some time that people are still using.

Thanks,
Amar


>
> On July 19, 2018 2:56:35 AM EDT, Amar Tumballi wrote:
>> [...]

Re: [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2018-07-19 Thread Jim Kinney
Too bad the RDMA will be abandoned. It's the perfect transport for intranode 
processing and data sync.

I currently use RDMA on a computational cluster between nodes and gluster 
storage. The older IB cards will support 10G IP and 40G IB. I've had some 
success with connectivity but am still faltering with fuse performance. As soon 
as some retired gear is reconnected I'll have a test bed for HA NFS over RDMA 
to computational cluster and 10G IP to non-cluster systems.
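For reference, the transport a client uses is chosen at mount time; a minimal sketch with placeholder names, assuming the volume was created with transport tcp,rdma:

    # mount over RDMA for the cluster nodes
    mount -t glusterfs -o transport=rdma server1:/clustervol /mnt/clustervol
    # the same volume over plain TCP/IP for the non-cluster systems
    mount -t glusterfs server1:/clustervol /mnt/clustervol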

But it looks like Gluster 6 is a ways away, so maybe I'll get more hardware or time to pitch in some code after grokking enough IB.

Thanks for the heads up and all the work to date. 

On July 19, 2018 2:56:35 AM EDT, Amar Tumballi  wrote:
> [...]

[Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2018-07-19 Thread Amar Tumballi
Hi all,

Over the last 12 years of Gluster, we have developed many features, and we continue to support most of them till now. But along the way, we have figured out better methods of doing things, and we are no longer actively maintaining some of these features.

We are now thinking of cleaning up some of these ‘unsupported’ features and marking them as ‘SunSet’ (i.e., to be taken out of the codebase entirely in following releases) in the next upcoming release, v5.0. The release notes will provide options for migrating smoothly to the supported configurations.

If you are using any of these features, do let us know, so that we can help you with the migration. Also, we are happy to guide new developers to work on those components which are not actively maintained by the current set of developers.

List of features hitting sunset:

‘cluster/stripe’ translator:

This translator was developed very early in the evolution of GlusterFS, and addressed one of the most common questions about distributed filesystems: “What happens if one of my files is bigger than the available brick? Say I have a 2 TB hard drive exported in glusterfs, and my file is 3 TB.” While it served that purpose, it was very hard to handle failure scenarios and give a really good experience to our users with this feature. Over time, Gluster solved the problem with its ‘Shard’ feature, which addresses the same problem in a much better way and provides a much better solution on the existing, well-supported stack. Hence the proposal for deprecation.

If you are using this feature, then do write to us, as it needs a proper migration from the existing volume to a new, fully supported volume type before you upgrade.
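For anyone migrating off stripe, a rough sketch of enabling sharding on the replacement volume (‘myvol’ and the 64MB shard size are just illustrative, and the data itself still has to be copied over from the old striped volume):

    # enable sharding on the new, supported volume
    gluster volume set myvol features.shard on
    # optionally choose a shard size
    gluster volume set myvol features.shard-block-size 64MB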
‘storage/bd’ translator:

This feature got into the code base 5 years back with this patch (http://review.gluster.org/4809) [1]. The plan was to use a block device directly as a brick, which would make it much easier to handle disk-image storage in glusterfs.

As the feature is not getting more contributions, and we are not seeing any user traction on it, we would like to propose it for deprecation.

If you are using the feature, plan to move to a supported gluster volume configuration, and have your setup ‘supported’ before upgrading to your new gluster version.

‘RDMA’ transport support:

Gluster started supporting RDMA while ib-verbs was still new, and the very high-end infrastructure of that time was using Infiniband. Engineers did work with Mellanox and got the technology into GlusterFS for better data migration and data copy. Current-day kernels support very good speed with the IPoIB module itself, and there is no more bandwidth for the experts in this area to maintain the feature, so we recommend migrating your volume over to a TCP (IP-based) network.

If you are successfully using the RDMA transport, do get in touch with us to prioritize the migration plan for your volume. The plan is to work on this after the release, so that by version 6.0 we will have cleaner transport code which needs to support just one type.
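For planning that migration ahead of time, a minimal sketch of switching an existing volume from RDMA to plain TCP (‘myvol’ is a placeholder, the volume has to be stopped first, and the exact option handling may vary between releases, so check the documentation for your version):

    # stop the volume before changing its transport type
    gluster volume stop myvol
    # switch the transport from rdma (or tcp,rdma) to tcp only
    gluster volume set myvol config.transport tcp
    gluster volume start myvol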
‘Tiering’ feature

Gluster’s tiering feature was planned to provide an option to keep your ‘hot’ data in a different location than your cold data, so one can get better performance. While we saw some users for the feature, it needs much more attention to be completely bug-free. At this time, we do not have any active maintainers for the feature, and hence we suggest taking it out of the ‘supported’ tag.

If you are willing to take it up and maintain it, do let us know, and we are happy to assist you.

If you are already using the tiering feature, make sure to do a gluster volume tier detach of all the bricks before upgrading to the next release. Also, we recommend using features like dm-cache on your LVM setup to get the best performance from the bricks.
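As a rough sketch, that detach flow on a hypothetical tiered volume ‘myvol’ (verify the exact syntax against your installed release):

    # start migrating data off the hot tier
    gluster volume tier myvol detach start
    # poll until the migration completes
    gluster volume tier myvol detach status
    # then remove the hot-tier bricks from the volume
    gluster volume tier myvol detach commit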
‘Quota’

This is a callout for the ‘Quota’ feature, to let you all know that it will move to a ‘no new development’ state. While this feature is actively in use by many people, the challenges in the accounting mechanisms involved have made it hard to achieve good performance with the feature. Also, the number of extended-attribute get/set operations performed while using the feature is not ideal. Hence we recommend our users to move towards setting quota on the backend bricks directly (i.e., XFS project quota), or to use different volumes for different directories, etc.
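For reference, a minimal sketch of what an XFS project quota looks like on a single brick (the project id/name, paths, and the 200g limit are illustrative, and the brick filesystem must be mounted with the prjquota option):

    # map a project id and name to a directory on the brick
    echo "42:/bricks/brick1/projA" >> /etc/projects
    echo "projA:42" >> /etc/projid
    # initialize the project and set a hard block limit
    xfs_quota -x -c 'project -s projA' /bricks/brick1
    xfs_quota -x -c 'limit -p bhard=200g projA' /bricks/brick1
    # report usage per project
    xfs_quota -x -c 'report -p' /bricks/brick1

Note that, as mentioned elsewhere in this thread, keeping such per-brick limits consistent across many servers is exactly the part gluster’s quota did for you.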
As the feature won’t be deprecated immediately, it doesn’t need a migration plan when you upgrade to a newer version, but if you are a new user, we wouldn’t recommend enabling the quota feature. By the release dates, we will be publishing our guide to the best alternatives to gluster’s current quota feature.

Note that if you want to contribute to the feature, we have a project-quota-based issue open [2]. Happy to get contributions, and help in getting a newer approach to Quota.

--

These are our initial set of features which we propose to take out of ‘fully supported’ features. While we are in the process of making the user/developer experience of the