On Fri, 2006-06-30 at 00:22 +0400, Vladimir V. Saveliev wrote:
> > This seems strange, because to me this type of workload would lend
> > itself to being less fragmented than most workloads. All the box does
> > is record TV programs, so over the course of 30-60 min periods I
> > would guess 95+% of the writes are sequential.
> > 
> 
> do you ever remove files?

Yes, files are deleted when the drive starts to fill up, which is how I
discovered this issue in the first place. I always kept a minimum of
10GB free, and when I got close to that limit is when the load would
spike. I have since set the limit to 40GB and I haven't seen the
problem since, but that also means I can't use that 40GB of space.

> 
> > Why would the fragmentation be so bad? Is there a way to tell what
> > the fragmentation rate is?
> > 
> 
> can you please run debugreiserfs -m /dev/hda1 > bitmap and send me
> that file?
> bitmap should contain a dump of free and used blocks. If most of the
> bitmap blocks contain a lot of interleaving free/used sections, free
> space is highly fragmented and allocating new free blocks can be CPU
> expensive.
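
For anyone curious what "interleaving free/used sections" means in
practice, here is a rough sketch (not the actual debugreiserfs dump
format, which is filesystem-specific) of counting how many contiguous
runs of free and used blocks a bitmap contains. The more runs for the
same number of free blocks, the more fragmented the free space:

```python
# Illustrative only: operates on a synthetic list of bits, where
# 1 = used block and 0 = free block. A real analysis would first
# parse the bitmap blocks out of the debugreiserfs -m output.

def count_runs(bits):
    """Count contiguous runs of identical bits (free or used)."""
    runs = 0
    prev = None
    for b in bits:
        if b != prev:
            runs += 1
            prev = b
    return runs

# Same number of free blocks in both cases, very different layouts:
heavily_fragmented = [1, 0, 1, 0, 1, 0, 1, 0]
mostly_contiguous  = [1, 1, 1, 1, 0, 0, 0, 0]

print(count_runs(heavily_fragmented))  # 8 runs: every block alternates
print(count_runs(mostly_contiguous))   # 2 runs: one used, one free
```

With many tiny free runs, the allocator has to scan past lots of used
sections to find space for a new extent, which is presumably where the
CPU cost comes from.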

I do record two programs at once from time to time, so I can understand
how that would cause fragmentation. However, after each program I also
transcode them to a different format, one at a time, so I would expect
that to reduce any fragmentation caused by recording two programs at
once. Although I suppose if I were transcoding and recording at the
same time, it would just make things worse.

I will send Vladimir the debugreiserfs output privately. 

-- 
Mike Benoit <[EMAIL PROTECTED]>
