mike wrote:
How is MogileFS with millions of smaller files (graphics, etc.)
It's fine, though billions are debatable. If your hardware's too slow, get better hardware. The DB will be the sticking point there, and the schema's minuscule.
How is it for large files (600+ meg) - is there a rough limit as to file sizes before it becomes too segmented or whatever?
Might need a little tuning, but it seems to do okay. You have to ensure min_free_space is bigger than the largest file you expect to have... unless that bug has been fixed? (it only checks that min_free_space is available before storing a file, not the actual length of the incoming file).
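To make the caveat concrete, here's a hedged sketch of what that setting looks like in a mogilefsd config; the value is illustrative, and it assumes min_free_space is expressed in megabytes as in stock setups:

```
# mogilefsd.conf (illustrative; assumes min_free_space is in MB)
# Keep this larger than the biggest file you expect to store
# (e.g. those 600+ MB files), so a device that passes the
# free-space check can still hold the whole upload:
min_free_space = 1024
```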
Does it support seeking in the middle of files, and resuming where it left off? i.e. if I did a wget -c on a file being served by MogileFS, and I got the first 1 meg, will the second request resume with a standard HTTP style offset?
For downloads, yeah. That's a straight-up Perlbal feature. There was someone messing with resumable uploads, but I dunno if it worked? :)
-Dormando
