On Jan 9, 2012, at 5:44 AM, Edward Ned Harvey wrote:

>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
>> To put things in proper perspective, with 128K filesystem blocks, the
>> worst case file fragmentation as a percentage is 0.39%
>> (100*1/((128*1024)/512)).  On a Microsoft Windows system, the
>> defragger might suggest that defragmentation is not warranted for this
>> percentage level.
> I don't think that's correct...
> Suppose you write a 1G file to disk.  It is a database store.  Now you start
> running your db server.  It starts performing transactions all over the
> place.  It overwrites the middle 4k of the file, and it overwrites 512b
> somewhere else, and so on.  

It depends on the database, but many (e.g. Oracle Database) are COW and
write fixed block sizes, so your example does not apply.
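For reference, Bob's 0.39% figure above falls out of simple arithmetic: one fragment per filesystem block, expressed as a percentage of the 512-byte sectors read. A quick sketch (the function name is mine, purely illustrative):

```python
def worst_case_frag_pct(recordsize: int, sector: int = 512) -> float:
    """Worst case: one seek per filesystem block, as a % of sectors read."""
    return 100.0 * 1 / (recordsize / sector)

# 128K records over 512B sectors: 100 * 1 / 256
print(worst_case_frag_pct(128 * 1024))  # -> 0.390625
```

which rounds to the 0.39% quoted above.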

> Since this is COW, each one of these little
> writes in the middle of the file will actually get mapped to unused sectors
> of disk.  Depending on how quickly they're happening, they may be aggregated
> as writes...  But that's not going to help the sequential read speed of the
> file, later when you stop your db server and try to sequentially copy your
> file for backup purposes.
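The remapping Ned describes can be sketched with a toy block map (this is an illustration of COW allocation in general, not actual ZFS code): each small overwrite in the middle of the file allocates a fresh physical block, so a file that started out contiguous ends up scattered.

```python
class CowFile:
    """Toy copy-on-write file: logical block -> physical block."""

    def __init__(self, nblocks: int):
        self.next_free = 0
        # Initial sequential write: logical block i lands at physical block i.
        self.map = {i: self._alloc() for i in range(nblocks)}

    def _alloc(self) -> int:
        p = self.next_free
        self.next_free += 1
        return p

    def overwrite(self, logical: int):
        # COW: never update in place; point the logical block somewhere new.
        self.map[logical] = self._alloc()

f = CowFile(8)        # freshly written file: physically contiguous
f.overwrite(3)        # db-style updates in the middle of the file...
f.overwrite(5)
print([f.map[i] for i in range(8)])  # -> [0, 1, 2, 8, 4, 9, 6, 7]
```

The physical layout is no longer monotonic, which is exactly why a later sequential copy of the file incurs extra seeks on an HDD.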

Those who expect to get sequential performance out of HDDs usually end up
being sad :-( Interestingly, if you run Oracle Database on top of ZFS on top of
SSDs, then you have COW over COW over COW. Now all we need is a bull! :-)
 -- richard


ZFS and performance consulting
illumos meetup, Jan 10, 2012, Menlo Park, CA
