Matthew Toseland wrote:

> On Sat, Feb 22, 2003 at 11:29:56PM -0500, Gianni Johansson wrote:
> > Matthew Toseland wrote:
> >
> > > On Sat, Feb 22, 2003 at 11:03:38PM -0500, Gianni Johansson wrote:
> > > > Matthew:
> > > >
> > > > With the current default datastore size setting, downloading of large
> > > > splitfiles is effectively disabled by your tempbucket accounting limits,
> > > > because you can't get enough temp space.
> > > >
> > > > We should just warn users that they need lots of temp space.
> > > > Arbitrarily failing is rude.
> > >
> > > We can determine whether there is enough space in the splitfile download
> > > dialog, and warn the user if there isn't. Fortunately the MIME utils
> > > problem that causes fproxy to need twice the file size in temp space
> > > only applies to uploads.
> > > >
> > > > Can't we turn it off and deal with this issue after the release?
> > > No. If we turn it off, the default configuration will result in serious
> > > problems if you download a large splitfile whose size is close to the
> > > store size.
> > >
> > > One remaining issue: currently storeMaxTempFraction defaults to 1/3.
> > > Maybe we should increase this. We can't make it 1.0 because the node
> > > needs some space for various things (like handling requests).
> >
> > I think the accounting is missing some releases.
> >
> > http://server:8888/servlet/nodeinfo/internal/env
> > reports: Space used by temp files in the data store: 37,228 KiB
> >
> > but du -h in my <blahblah>/store/temp dir only reports 740K.
>
> Maybe... or maybe it is simply being careful. Like the store, it
> allocates space for the file _before_ writing it. With the new
> anti-lock-contention code, every time it runs out of space it increases
> the allocated size by a factor of 1.5 or a minimum increment of 1kB,
> whichever is larger (with a minimum allocation size of 1kB; all of these
> are configurable on a per-tempfilebucket basis). The temp store is used,
> for example, for keys that are currently being downloaded and haven't
> been committed yet, as well as for client-side stuff.
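For clarity, the growth policy Matthew describes can be sketched as follows. This is a hypothetical illustration with made-up names (`TempBucketAlloc`, `grow`), not the actual fred code; only the constants (1kB minimum allocation, 1.5x growth, 1kB minimum increment) come from his description.

```java
// Sketch of the described tempfile allocation growth: allocation starts at a
// 1kB minimum and, whenever the bucket runs out of space, grows by a factor
// of 1.5 or a 1kB increment, whichever is larger.
public class TempBucketAlloc {
    static final long MIN_ALLOC = 1024;      // minimum allocation size (1kB)
    static final long MIN_INCREMENT = 1024;  // minimum growth step (1kB)

    /** Next allocation size once the current one is exhausted. */
    static long grow(long current) {
        long byFactor = current + current / 2;       // grow by factor of 1.5
        long byIncrement = current + MIN_INCREMENT;  // grow by at least 1kB
        return Math.max(MIN_ALLOC, Math.max(byFactor, byIncrement));
    }

    public static void main(String[] args) {
        long alloc = MIN_ALLOC;
        for (int i = 0; i < 5; i++) {
            System.out.println(alloc);
            alloc = grow(alloc);
        }
        // prints 1024, 2048, 3072, 4608, 6912: the 1kB increment dominates
        // for small buckets, the 1.5 factor once the bucket passes 2kB
    }
}
```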

Step back and look at what we had before the temp bucket accounting and what
we have now.

There are really two issues:
1) The hard accounting limit.
2) The fact that fred is being forced to compete with client apps (e.g. SFRS,
IS) for temp space.

Before:
1) Out-of-disk errors cause *client* operations to fail, but don't cause fred
to fail, because fred's temp space comes out of the data store, which is
pre-allocated. (Do I have this right?)
2) As long as there is space available on the drive, the user experiences no
problems.
3) Out-of-disk errors cause unpredictable behavior in *client apps*.

Now:
1) Out-of-temp-space errors *caused by clients* can starve fred of temp space.
2) Client operations that require a lot of temp space arbitrarily fail, no
matter how much space is available on the disk. I.e. I can have 10GB free on
my drive and the default install (1/3 of a 250MB datastore?) will fail to
download a 100MB splitfile.
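A back-of-envelope check of that example (numbers taken from the message, not measured; the class and variable names are illustrative only):

```java
// With storeMaxTempFraction = 1/3, a default 250MB datastore caps temp
// space at roughly 83MB, so a 100MB splitfile download cannot get enough
// temp space regardless of free disk.
public class TempSpaceCheck {
    public static void main(String[] args) {
        long storeBytes = 250L * 1024 * 1024;  // default 250MB datastore
        long tempLimit = storeBytes / 3;       // storeMaxTempFraction = 1/3
        long splitfile = 100L * 1024 * 1024;   // 100MB splitfile

        System.out.println(tempLimit / (1024 * 1024) + "MB temp limit");
        System.out.println("download exceeds limit: " + (splitfile > tempLimit));
        // prints "83MB temp limit" then "download exceeds limit: true"
    }
}
```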


I appreciate the issues you are attempting to address, but this needs a rethink.

--gj

I am actually having trouble downloading ~20MB splitfiles with my 200MB
datastore node.

> --
> Matthew Toseland
> [EMAIL PROTECTED]/[EMAIL PROTECTED]
> Full time freenet hacker.
> http://freenetproject.org/
> Freenet Distribution Node (temporary) at 
> http://80-192-4-23.cable.ubr09.na.blueyonder.co.uk:8889/GptQvHy-Ap8/
> ICTHUS.
>

