
> On Jun 22, 2018, at 5:03 PM, Carl Zwanzig <[email protected]> wrote:
> 
>> On 6/22/2018 1:37 PM, Ronak wrote:
>> We have audio files that are more than 100 hours long, and we need them
>> to be fragmented quickly. It certainly looks like an I/O problem, because
>> fragmentation gets faster as we increase the fragment size.
> Sounds like it's time to break out iostat or sar (or even dtrace) and do some 
> snooping.
> 
> Are you reading and writing to different drives?  (The usual metric for a 
> spinning drive is 150 IOPS for random access; RAID doesn't necessarily speed 
> that up.)
> 
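
On the snooping: I’ll run something like this in another terminal while the 
fragmenting job is going (assuming sysstat’s iostat; exact column names vary 
by version):

    iostat -x 1

and watch w/s, wkB/s, and %util on the target disk to see whether we’re 
saturating it with lots of small writes.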

No, it’s the same disk we’re reading from and writing to. I’m really curious 
why we can’t just buffer fragments in RAM and write them out in larger bursts. 
Why not make this an option we can set?

Buffering, say, five 2-second fragments and flushing them in one write would 
give us the I/O pattern of a 10-second fragment size while still producing 
small fragments.
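
As a crude way to test the idea outside ffmpeg, I could write the fragments to 
tmpfs first and then move them to disk in one burst (hypothetical paths, and 
the segment muxer here is just a stand-in for our real command):

    mkdir -p /dev/shm/frags
    ffmpeg -i input.wav -f segment -segment_time 2 -c copy /dev/shm/frags/out_%05d.wav
    mv /dev/shm/frags/*.wav /data/frags/

If that runs much faster, the bottleneck really is the many small writes 
rather than the fragmenting itself.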

I’d imagine the read buffer is the same regardless of the fragment size, so the 
difference has to be on the write side.

I’m thinking of running this on even more powerful machines to see if I can get 
the times down. But would it be possible to add an option to tweak the output 
buffer size?
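
For a baseline before throwing hardware at it, I’ll time a couple of fragment 
sizes on the same input (again a sketch with the segment muxer; our real 
command differs):

    time ffmpeg -i input.wav -f segment -segment_time 2 -c copy out_%05d.wav
    time ffmpeg -i input.wav -f segment -segment_time 10 -c copy out_%05d.wav

If the 10-second run is much faster per hour of audio, that points at write 
batching rather than CPU.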

> Later,
> 
> z!
