Moinak Ghosh wrote:
> Frank Hofmann wrote:
>> Sorry, wrong mail domain :)
>> FrankH.
>>
>>
>> ---------- Forwarded message ----------
>> Date: Fri, 28 Sep 2007 14:21:54 +0100 (BST)
>> From: Frank Hofmann <Frank.Hofmann at Sun.COM>
>> To: Moinak Ghosh <Moinak.Ghosh at Sun.COM>
>> Cc: hsfs-iteam at Sun.COM, caiman-discuss at Sun.COM
>> Subject: Re: Final webrevs updated for HSFS readahead changes
>>
>>
>> Hi Moinak,
>>
>> to me, the code looks great now. Except for one thing:
>>
>> As far as the kstats go, what does the system do when multiple kstats
>> are created with the same name? You're doing the kstat_create once
>> per mount, but re-using the same name/ID fields. I think that might
>> cause memory leaks; has the hsfs kstat code been tested with 2+
>> concurrently mounted/accessed HSFS filesystems?
>>
>> If you take e.g. UFS as a precedent (inode / logging / directio
>> kstats), all of these are created at module initialization, not at
>> mount. Same for the ZFS ARC kstats.
>> There's another precedent (fssnap) for per-mountpoint kstats, but if
>> you do that you'd have to:
>>      - make sure you pass unique names / IDs
>>      - delete previously-existing kstats of the same name / ID
>> when you call kstat_create(). It's more complicated and leaves room
>> for errors.
>>
>> The kstats are nice to have right there; but if it gets too
>> complicated to make them per-mountpoint, then either make them
>> global, or simply purge the kstats, get a few more SDT probes in, and
>> let DTrace do the sum() instead of the kstat framework. It's probably
>> not worth spending another week on this; it's a side aspect of the
>> change.
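>> For example, something along these lines (the probe name here is
>> hypothetical; use whatever SDT probe the readahead code actually
>> fires, with the page count in arg0):
>>
>>      dtrace -n 'sdt:hsfs::readahead-pages { @pages = sum(arg0); }'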
>>   
>
>  Thanks for the comments. I think I can easily change the
>  kstats to be global and also run tests with multiple mounts.
>  The kstats are useful as they capture information during
>  normal operation and provide snapshots of behavior to date,
>  whereas DTrace captures information only while tracing is
>  active.

   Well, I was mistaken. It is not trivial to change to global kstats;
   it would need a fair bit of code change. The various counters are
   updated at points where the existing mountpoint-specific locking
   semantics already ensure correctness, so no additional locking is
   needed there. Accurate global kstats, shared across all mounts,
   would mean additional locking, atomic functions, etc.
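   To illustrate the difference (a rough sketch only; fsp, npages,
   hs_stats, and hsfs_global_stats are made-up names, not the actual
   hsfs code):

        /* Per-mount: the caller already holds this mount's locks at
         * this point, so a plain increment is safe. */
        fsp->hs_stats.readahead_pages += npages;

        /* Global: no single lock is shared across mounts, so every
         * update would need an atomic op from <sys/atomic.h>. */
        atomic_add_64(&hsfs_global_stats.readahead_pages, npages);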

   Also, I am using the fsid for the instance field in kstat_create(),
   so even though multiple kstats are created with the same name, each
   is a unique instance. Here is the kstat -m hsfs_fs output on a T2000
   with two hsfs filesystems mounted, one a real DVD and one a
   lofi-mounted ISO image:

   /dev/dsk/c0t0d0s2      3.0G   3.0G     0K   100%    /mnt/cdrom
   /dev/lofi/1            3.3G   3.3G     0K   100%    /mnt/cdrom1

   bash-3.00# kstat -m hsfs_fs
   module: hsfs_fs                         instance: 161404109
   name:   hsfs_read_stats                 class:    hsfs
        cache_read_pages                0
        coalesced_pages                 0
        crtime                          792184.007607208
        pages_lost                      0
        physical_read_pages             0
        readahead_pages                 0
        snaptime                        792201.042498716
        total_pages_requested           0

   module: hsfs_fs                         instance: 343967265
   name:   hsfs_read_stats                 class:    hsfs
        cache_read_pages                2936
        coalesced_pages                 2938
        crtime                          179.142698436
        pages_lost                      0
        physical_read_pages             5119
        readahead_pages                 2919
        snaptime                        792201.04417398
        total_pages_requested           5098

   So there is no memory leak in this case.
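   For reference, the creation call looks roughly like this (a sketch;
   fsp, fsid, and hsfs_read_stats_t are illustrative names, but the
   module, name, and class strings match the output above, and the
   fsid is what gets passed as the instance):

        kstat_t *ksp;

        /* Same module/name for every mount; the fsid makes each
         * instance unique, so the kstats do not collide. */
        ksp = kstat_create("hsfs_fs", fsid, "hsfs_read_stats", "hsfs",
            KSTAT_TYPE_NAMED,
            sizeof (hsfs_read_stats_t) / sizeof (kstat_named_t),
            KSTAT_FLAG_VIRTUAL);
        if (ksp != NULL) {
                ksp->ks_data = &fsp->hs_read_stats;
                kstat_install(ksp);
        }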

Regards,
Moinak.

>
> Regards,
> Moinak.
>
>> Thanks for the great work on this!
>> FrankH.
>>
>>
>> On Thu, 20 Sep 2007, Moinak Ghosh wrote:
>>
>>  
>>> I have incorporated all the updates after the last review comments
>>> and completed all my testing. I have done manual tests and Filebench
>>> tests on SPARC and x86. The Filebench tests were also repeated with
>>> kmem_flags set to 0x1f and checked for correctness and leaks. The
>>> fsfuzzer test was run for 10 days without issues. The bug is updated.
>>>
>>> The updated webrevs are available at:
>>> http://www.genunix.org/distributions/belenix_site//binfiles/hsfswebrev/index.htm
>>>  
>>>
>>>
>>> Updated binaries available from:
>>> http://www.genunix.org/distributions/belenix_site//binfiles/hsfs.amd64
>>> http://www.genunix.org/distributions/belenix_site//binfiles/hsfs.sparcv9 
>>>
>>>
>>> This update adds the following changes:
>>>
>>> - Incorporated all the review comments.
>>> - Added kstats and SDT probes for debugging purposes. You can copy
>>> the updated module, mount a DVD, and see:
>>>     kstat -m hsfs_fs
>>>     dtrace -l -P sdt | grep hsfs
>>> - SPARC testing was done on a T2000, and it threw up a surprising
>>> perf issue in the single-stream sequential read and random read
>>> tests. It was found that using the standard kmem_alloc for thousands
>>> of small 96-byte allocations was inefficient, so a kmem_cache was
>>> used instead, which made a dramatic difference (see the sketch after
>>> this list).
>>> - Removed one unnecessary page_lookup in the readahead path.
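>>>
>>> Roughly, the kmem_cache change looks like this (a sketch with
>>> made-up names, not the actual webrev code):
>>>
>>>     /* Once, at module init: a cache of 96-byte buffers. */
>>>     hsfs_cache = kmem_cache_create("hsfs_readahead_cache", 96, 0,
>>>         NULL, NULL, NULL, NULL, NULL, 0);
>>>
>>>     /* Per I/O: fast alloc/free from the cache, instead of a
>>>      * general-purpose kmem_alloc(96, KM_SLEEP) each time. */
>>>     buf = kmem_cache_alloc(hsfs_cache, KM_SLEEP);
>>>     ...
>>>     kmem_cache_free(hsfs_cache, buf);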
>>>
>>> As it stands now, there is an across-the-board improvement on all
>>> Filebench tests, including the random read tests, on both x86 and
>>> SPARC.
>>>
>>> Please send your replies to caiman-discuss where this email was 
>>> posted as
>>> well.
>>>
>>> Regards,
>>> Moinak.
>>>
>>>     
>> _______________________________________________
>> caiman-discuss mailing list
>> caiman-discuss at opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/caiman-discuss
>>   
>
>

