Jared,

I'm curious: how do you deal with files up to 1 GB? Does MogileFS handle
files this big better than it used to, or have you implemented splitting
large files into several chunks? I read somewhere on the lists a while
back that anything over 100 MB often caused the storage daemon to crash,
and that files that big were generally frowned upon.

On Tue, Jul 1, 2008 at 1:17 PM, Jared Klett <[EMAIL PROTECTED]> wrote:

> hi mike,
>
>        We've been using MogileFS for about a year and a half now at
> blip.tv.
>
>        We have nearly 20 storage nodes with 130 TB of storage (we use a
> device count of two since we're dealing with files anywhere up to 1 GB),
> and 2.81 million files.
>
>        I'd be happy to provide more detail if you wish.
>
> cheers,
>
> - Jared
>
> --
> Jared Klett
> Co-founder, blip.tv
> office: 917.546.6989 x2002
> mobile: 646.526.8948
> aol im: JaredAtWrok
> http://blog.blip.tv
>
