On 3/14/2013 7:11 AM, Andy Malato wrote:
> ! Date: Wed, 13 Mar 2013 22:57:02 -0400
> ! From: Jeffrey Altman <[email protected]>
> ! To: [email protected]
> ! Cc: [email protected]
> ! Subject: Re: [OpenAFS] Re: Change in volume status during vos dump in
> !    OpenAFS 1.6.x
> !
> ! Matt:
> !
> ! Have you looked at the man pages?
> !
> !   http://docs.openafs.org/Reference/index
> !
> ! The pages for "fs listquota", "fs quota", "fs setquota", "fs setvol",
> ! "vos clone", "vos copy", "vos create", "vos examine", "vos move", "vos
> ! partinfo", and "vos shadow" include the following text:
> !
> ! "Currently, the maximum quota for a volume is 2 terabytes (2^41 bytes).
> ! Note that this only affects the volume's quota; a volume may grow much
> ! larger if the volume quota is disabled. However, volumes over 2
> ! terabytes in size may be impractical to move, and may have their size
> ! incorrectly reported by some tools, such as fs_listquota(1)."
>
> So while a volume can grow to more than 2 terabytes in size, the various
> tools may not work correctly with volumes this large?
The only restrictions are:

 * Once you want a volume to be larger than 2 TB, you cannot restrict
   its size (the quota must be disabled).

 * Applications that query the amount of free space will be told 2 TB
   until such time as the actual amount of free space drops below 2 TB.

> Are there any plans to address these issues and to allow quotas larger
> than 2 TB?

Protocol changes are required to report larger sizes, larger volume-id
ranges, larger vnode-id ranges, 64-bit timestamps, etc.  The process of
upgrading all of these items has been referred to as "RPC Refresh".

  http://gerrit.openafs.org/#change,4573

Your File System, Inc., at the 2012 European AFS and Kerberos Conference,
announced its plans for YFS 1.0, which will address all of the name space
issues along with the performance issues necessary to handle data sets
that large.

  http://conferences.inf.ed.ac.uk/eakc2012/slides/Announcing-YFS-1.0-Euro-AFS-2012.pdf
  http://conferences.inf.ed.ac.uk/eakc2012/videos/eakc2012-yfs-altman.mp4

> As more and more researchers continue to work with big data, the request
> for multi-terabyte volumes is becoming more frequent.  We are starting
> to get requests from researchers for volumes in the range of 5 to 7 TB
> and we expect that future requests will probably be even larger.
>
> From what I read here it appears that AFS may be "impractical" to
> support large datasets of this size?  Can you (or anyone else) confirm
> other sites that are using AFS to support the use of big data?

There are certainly sites that are using volumes larger than 8 TB.  What
becomes impractical at that size is moving the volumes between servers in
any reasonable period of time.  Depending upon the number of files and
directories that make up the data, eventually you will begin to exhaust
the per-volume vnode pool.

If you need to apply limits on the size of a volume, you can do so by
allocating partitions of the size you want and storing one AFS volume on
each.
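To illustrate the free-space behavior described above: because the wire
protocol cannot express a value larger than 2 TB (2^41 bytes), larger
free-space figures are effectively clamped.  This is an illustrative
sketch only, not OpenAFS code; the constant and function names are
hypothetical.

```python
# Sketch of the 2 TB free-space clamp (hypothetical names, not OpenAFS code).
MAX_REPORTABLE = 2 ** 41  # 2 TB, the largest value the protocol can express


def reported_free_space(actual_free_bytes: int) -> int:
    """Return the free-space figure an application querying AFS would see."""
    return min(actual_free_bytes, MAX_REPORTABLE)


# A partition with 6 TB free still reports only 2 TB free:
print(reported_free_space(6 * 2 ** 40))  # 2199023255552 bytes (2 TB)
# Once free space falls below 2 TB, the true value is reported:
print(reported_free_space(1 * 2 ** 40))  # 1099511627776 bytes (1 TB)
```

So tools such as df-style queries against AFS will show a 2 TB ceiling
on large partitions until usage brings free space under that threshold.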
In the end, the question is why the data needs to live in a single
volume, as opposed to being split across multiple volumes that are
mounted into the name space.

Jeffrey Altman
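As back-of-the-envelope arithmetic for that splitting approach: if each
volume is capped at the 2 TB quota maximum, a multi-terabyte data set
maps to a small number of mounted volumes.  A hypothetical sketch (the
function name is invented for illustration):

```python
# How many 2 TB-capped volumes would a data set need? (illustrative only)
import math

TWO_TB = 2 ** 41  # maximum volume quota, per the man pages quoted above


def volumes_needed(dataset_bytes: int, per_volume_cap: int = TWO_TB) -> int:
    """Number of volumes of per_volume_cap bytes needed to hold the data set."""
    return math.ceil(dataset_bytes / per_volume_cap)


# A 7 TB data set (the upper end of the requests mentioned above) fits in
# four 2 TB volumes mounted side by side:
print(volumes_needed(7 * 2 ** 40))  # 4
```

Each resulting volume stays small enough to move between servers in a
reasonable amount of time, which is the practical limit discussed above.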
