I'm using mogstored and storing huge files (up to 11 GB) on a machine that has 512 MB of RAM without a problem.

I'm using the PHP API for MogileFS from http://www.capoune.net/mogilefs/ with some modifications for storing the file. The problem I had was that the client cached the whole file in RAM and so killed my application.
(Those modifications will make it into the next release, I guess.)
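The RAM blowup Jürgen describes comes from slurping the whole file into memory before sending it. A minimal sketch of the fix, reading in fixed-size chunks so memory use stays flat regardless of file size (this is illustrative Python, not the Capoune PHP client's actual code; `send` is a hypothetical stand-in for whatever writes to the storage node):

```python
import os
import tempfile

CHUNK_SIZE = 1024 * 1024  # 1 MB per read keeps memory usage flat


def stream_file(path, send):
    """Read `path` in CHUNK_SIZE pieces and pass each piece to `send`,
    so at most one chunk is ever held in memory at a time."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            send(chunk)


# Demo: a 3.5 MB file streams as four bounded chunks.
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * (3 * CHUNK_SIZE + CHUNK_SIZE // 2))
os.close(fd)
chunks = []
stream_file(path, chunks.append)
os.unlink(path)
```

The same pattern applies in any language: the peak memory is one chunk, not one file.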

Best regards,
Jürgen



Andy Lo A Foe wrote:
Hi,

In my experience, if you want to store large files in MogileFS (>
100MB) you should definitely be using WebDAV on the storage nodes
(Apache, lighttpd or nginx).
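The point of a WebDAV backend is that the web server streams the PUT body to disk instead of buffering it. A small self-contained sketch of the client side (the throwaway server and the `.fid`-style path here are hypothetical stand-ins for a real storage node, purely to show that `http.client` streams a file-like body in small blocks):

```python
import http.client
import io
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

received = {}


class PutHandler(BaseHTTPRequestHandler):
    """Throwaway stand-in for a WebDAV storage node that accepts PUT."""

    def do_PUT(self):
        length = int(self.headers["Content-Length"])
        received["body"] = self.rfile.read(length)
        self.send_response(201)
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass


server = HTTPServer(("127.0.0.1", 0), PutHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

body = io.BytesIO(b"A" * 4096)  # stands in for an open file handle
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
# Given a file-like body, http.client reads and sends it in small
# blocks rather than loading the whole payload into memory.
conn.request("PUT", "/dev1/0/000/000/0000000001.fid",
             body=body, headers={"Content-Length": "4096"})
resp = conn.getresponse()
server.shutdown()
```

With a real WebDAV module (Apache's mod_dav, lighttpd's mod_webdav, nginx's dav module) the same PUT lands on disk without the storage daemon holding the file in RAM.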

Gr,
Andy

On Wed, Apr 23, 2008 at 11:32 PM, Benjamin James Meynell
<[EMAIL PROTECTED]> wrote:
I am experiencing the same problem. The only difference for me is that mogstored crashes
for files ~180MB and above, but I get the same error(s), so I'm assuming the crash
threshold is due to memory differences (the machine I'm testing on has ~650MB of free
RAM). When running mogstored in the foreground it quits with an "Out of
Memory!" error. I've also noticed that upon a fresh restart of mogstored it is more
forgiving and can accept files it previously ran out of memory on. Was a solution ever
found?

 >   crash log: Negative length at
 >   /usr/share/perl5/Danga/Socket.pm line 1133.
 >
 >   Socket.pm states (lines 1127-1142):
 >       # if this is too high, perl quits(!!).  reports on mailing lists
 >       # don't seem to point to a universal answer.  5MB worked for some,
 >       # crashed for others.  1MB works for more people.  let's go with 1MB
 >       # for now.  :/
 >       my $req_bytes = $bytes > 1048576 ? 1048576 : $bytes;
 >
 >       my $res = sysread($sock, $buf, $req_bytes, 0);
 >       DebugLevel >= 2 && $self->debugmsg("sysread = %d; \$! = %d", $res, $!);
 >
 >       if (! $res && $! != EWOULDBLOCK) {
 >           # catches 0=conn closed or undef=error
 >           DebugLevel >= 2 && $self->debugmsg("Fd \#%d read hit the end of the road.", $self->{fd});
 >           return undef;
 >       }
 >
 >       return \$buf;
 >
 >   dormando wrote:
 >
 >     Can you get the crashlog from mogstored?
 >
 >     Run it in the foreground or watch syslog/etc.
 >
 >     -Dormando
 >
 >     Jaco wrote:
 >
 >       Hi,
 >
 >       MogileFS is running fine now, but there are
 >       still some major problems when I try to upload
 >       files bigger than 1 GB. With such large files I
 >       get errors like "Send failure: Connection reset
 >       by peer" and "select/poll returned error" and
 >       the mogstored daemon is suddenly killed. Is this
 >       normal?
 >
 >       If it is normal, is there a way to automatically
 >       split large files into smaller chunks during
 >       upload using mogilefsd? I know this can be done
 >       with mogtool, but I'm looking for a more
 >       transparent method within mogilefsd.
 >
 >       Thanks in advance.
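The client-side chunking Jaco asks about, which mogtool performs, can be sketched like this. The `store_chunk` callback and the `"<key>,<n>"` naming scheme below are hypothetical illustrations, not mogilefsd's or mogtool's actual API:

```python
import io


def store_chunked(data, key, store_chunk, chunk_size=64 * 1024 * 1024):
    """Split `data` (a file-like object) into chunk_size pieces and store
    each piece under a derived key like "<key>,<n>".  Returns the number
    of chunks stored; retrieval reverses the process by key order."""
    n = 0
    while True:
        piece = data.read(chunk_size)
        if not piece:
            break
        store_chunk("%s,%d" % (key, n), piece)
        n += 1
    return n


# Demo: a 10-byte "file" split into 4-byte chunks.
stored = {}
count = store_chunked(io.BytesIO(b"0123456789"), "bigfile",
                      lambda k, v: stored.__setitem__(k, v), chunk_size=4)
```

Since each chunk is an ordinary small MogileFS file, the storage nodes never see anything near the problematic sizes; the cost is that the client (or a wrapper library) must reassemble the chunks on retrieval.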

