Sorry I'm late to this vote; I've been reviewing and testing the patch.
Preliminary testing shows good stability and a marked improvement in
performance.

+1 to commit and include in the 2.0 release, provided the cache gets proper
regression tests. The code base sorely needs to catch up with the last decade
of advances in storage.

regards,

-George


--- On Wed, 12/9/09, John Plevyak <jplev...@acm.org> wrote:

> From: John Plevyak <jplev...@acm.org>
> Subject: cache partition size patch commit vote request
> To: trafficserver-dev@incubator.apache.org
> Date: Wednesday, December 9, 2009, 4:15 PM
> 
> I would like to ask for a vote on whether or not to commit
> the cache partition size patch (+1, -1, 0).
> 
> Background:
> 
> In the current cache, the disk is broken up into 8GB partitions where
>   - objects are hashed to partitions
>   - an object must fit entirely in a partition, effectively limiting
>     object size and introducing potential competition
>   - each partition has its own write pointer, resulting in lots of
>     seeks on large disks to write in different places
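
(Inline note from me, George: a minimal sketch of the layout described
above, to show why per-partition write pointers seek so much on a large
disk. The names and types here are my own, not from the Traffic Server
source or the patch.)

    #include <cstdint>
    #include <vector>

    // Each partition is a fixed 8GB slice of the disk with its own
    // write pointer (per the background above).
    struct Partition {
      uint64_t start_offset;  // where this slice begins on disk
      uint64_t write_pos;     // per-partition write pointer
    };

    // An object hashes to exactly one partition, so it must fit inside
    // that slice, and concurrent writes land on widely separated write
    // pointers, which means lots of seeking on a large disk.
    Partition &pick_partition(std::vector<Partition> &parts,
                              uint64_t key_hash) {
      return parts[key_hash % parts.size()];
    }
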
> 
> The patch adds:
>   - support for partitions up to 0.5PB (500TB) with the default
>     512-byte block size
>   - a larger aggregation buffer: 2MB (since we have fewer partitions
>     we can have larger ones)
>   - a larger top fast IOBuffer size (64KB); this should probably be
>     increased further, but this patch fixes bugs which prevented
>     increasing it in the current code
>   - internal cache support for large objects (still requires
>     VIO/IOBuffer changes + HTTP changes)
>   - experimental support for do_io_pread
>   - a new on-disk format which will support do_io_pread as well as
>     non-HTTP header access
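
(Again just my illustration, not code from the patch: the 0.5PB figure
presumably falls out of storing disk offsets as block indices, and
do_io_pread boils down to positioned reads. The 40-bit index, the
names, and the use of plain POSIX pread() below are my assumptions.)

    #include <cstddef>
    #include <cstdint>
    #include <unistd.h>  // pread

    // With 512-byte blocks, a 40-bit block index covers
    // 2^40 * 512 bytes = 512TB, roughly the 0.5PB quoted above.
    const uint64_t kBlockSize = 512;

    inline uint64_t block_to_byte(uint64_t block_index) {
      return block_index * kBlockSize;
    }

    // A positioned read in the spirit of do_io_pread: read at an
    // absolute offset without touching a shared file position, so
    // concurrent readers don't serialize around lseek().
    ssize_t read_fragment(int fd, void *buf, size_t len,
                          uint64_t block_index) {
      return pread(fd, buf, len,
                   static_cast<off_t>(block_to_byte(block_index)));
    }
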
> 
> Potential downside:
>   This patch changes the on-disk format, which will require a cache
>   wipe. I have tried to include all the changes necessary to
>   implement large objects, pread, and pluggable-protocol header
>   usage, but one can never be sure.
> 
> The patch is available under TS-46 in JIRA.
> 
> Thank you,
> john
>
