On 10/3/07, Sergej Pupykin <[EMAIL PROTECTED]> wrote:
>
> >DP> the problem is: pacman uses lots of small files that over time get
> >DP> spread over the whole partition. I am not aware of any filesystem
> >DP> (except maybe a database engine) that can keep frequently used
> >DP> small files from getting fragmented among other data over time. If
> >DP> you can name a (modern) filesystem that actually does this, I'd be
> >DP> glad to hear it.
>
> Most files in the pacman dir are smaller than one block, so there is no
> fragmentation. Only 312 of the 28087 files on my machine are larger than
> 4K. All of them (except two big .install files) are file lists located in
> the pacman/local directory.
>
> I wondered how pacman-optimize could optimize anything at all :O I think
> any improvement it gives is mostly luck. :)
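
(For reference, a count like the one quoted above can be reproduced with
something along these lines. This is only a sketch; the /var/lib/pacman
path and the 4096-byte block size are assumptions, not taken from the
message.)

#!/usr/bin/env python3
# Count how many files under the pacman database directory are larger
# than one filesystem block (path and block size are assumptions).
import os

DB_DIR = "/var/lib/pacman"   # typical pacman db location (assumption)
BLOCK = 4096                 # common filesystem block size (assumption)

total = big = 0
for root, dirs, files in os.walk(DB_DIR):
    for name in files:
        try:
            size = os.path.getsize(os.path.join(root, name))
        except OSError:
            continue  # skip entries that vanish or are unreadable
        total += 1
        if size > BLOCK:
            big += 1

print(f"{big} of {total} files are larger than {BLOCK} bytes")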

http://en.wikipedia.org/wiki/Fragmentation_%28computer%29

You are referring to data fragmentation within individual files. The
problem here is actually external fragmentation: it is not that the files
themselves are split into multiple pieces on the disk, but that the
directory hierarchy as a whole is scattered across it. When readdir is
called, the disk has to do far more seeking than it would if the whole
tree were stored in one place.
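
A rough way to see the effect is to time a cold-cache walk of the
database tree: the traversal is dominated by exactly those readdir/stat
seeks when the tree is scattered. This is only a sketch, assuming the
usual /var/lib/pacman/local layout; for a truly cold cache you would
first drop the page cache as root (echo 3 > /proc/sys/vm/drop_caches).

#!/usr/bin/env python3
# Time a traversal of the pacman database tree. Every entry costs a
# readdir and a stat, so on a cold cache the wall time reflects how much
# seeking the scattered tree forces the disk to do.
import os
import time

DB_DIR = "/var/lib/pacman/local"  # per-package metadata dirs (assumption)

start = time.monotonic()
entries = 0
for root, dirs, files in os.walk(DB_DIR):
    for name in files:
        try:
            os.stat(os.path.join(root, name))  # similar to what a db read does
        except OSError:
            pass
        entries += 1
elapsed = time.monotonic() - start

print(f"walked {entries} files in {elapsed:.2f}s")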

-Dan

