might be a better idea than the one I have been considering.

On 2013/06/23, at 23:08, erik quanstrom <[email protected]> wrote:

> On Sun Jun 23 09:38:01 EDT 2013, [email protected] wrote:
>> Thank you cinap,
>> 
>> I tried to copy all my Dropbox data to cwfs.
>> The number of files that exceeded the 144B name limit was only 3 out of 40000 files.
>> I would be happy if cwfs64x natively accepted longer names, but the requirement
>> is almost endless: for example, OS X supports 1024B names.
>> I wonder if making NAMELEN larger is the only way to handle the problem.
> 
> without a different structure, it is the only way to handle the problem.
> 
> a few things to keep in mind about file names.  file names when they
> appear in 9p messages can't be split between messages.  this applies
> to walk, create, stat or read (of parent directory).  i think this places
> the restriction that maxnamelen <= IOUNIT - 43 bytes.  the distribution
> limits IOUNIT through the mnt driver to 8192+24.  (9atom uses
> 6*8k+24)
> 
> there are two basic ways to change the format to deal with this
> 1.  provide an escape to point to auxiliary storage.  this is kind to
> existing storage.
> 2.  make the name (and thus the directory entry) variable length.
> 
> on our fs (which has python and some other nasties), the average
> file name length is 11.  making the blocks variable length could save 25%
> (62 directory entries per buffer).  but it might be annoying to have
> to migrate the whole fs.
> 
> so since there are so few long names, why not waste a whole block
> on them?  if using the "standard" (ish) 8k raw block size (8180 for
> data), the expansion of the header could be nil (through creative
> encoding) and there would be 3 extra blocks taken for indirect names.
> for your case, the cost for 144-byte file names would be that DIRPERBUF
> goes from 47 to 31.  so most directories > 31 entries will take 
> 1.5 x (in the big-O sense) their original space even if there are
> no long names.
> 
> - erik
> 

