On Friday 24 Jan 2003 12:01 am, Matthew Toseland wrote:

> > > Fproxy uses 256kB to 1MB chunks, but other clients could use other
> > > sizes. That is however the recommended range for most uses.
> >
> > ...
> >
> > > Splitfiles therefore can fail if too many of the chunks are no longer
> > > fetchable.
> >
> > Is it possible to use smaller chunks? Can you give me a link to a
> > document that explains how to control the use of FEC via fproxy? For
> > example, can I force the use of FEC for files smaller than 1 MB?
>
> No. Your application would not use Fproxy anyway, it would probably use
> a library to talk directly to the node using FCP. What language are you
> considering?

Perl for maintenance utilities and browser-side JavaScript for client-side 
operation.
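
For example, I imagine the Perl side talking FCP would look something 
like the sketch below. The wire details (port 8481, the 4-byte session 
header, and the ClientHello/EndMessage framing) are my reading of the 
FCP docs from memory, so treat them as assumptions and check them 
against the current spec:

    #!/usr/bin/perl -w
    use strict;
    use IO::Socket::INET;

    # Connect to the local node's FCP port (8481 assumed).
    my $sock = IO::Socket::INET->new(
        PeerAddr => 'localhost',
        PeerPort => 8481,
        Proto    => 'tcp',
    ) or die "Cannot connect to node: $!";

    # FCP sessions reportedly open with the bytes 00 00 00 02,
    # followed by newline-terminated messages ending in EndMessage.
    print $sock "\x00\x00\x00\x02";
    print $sock "ClientHello\nEndMessage\n";

    # Read the node's reply (NodeHello) line by line.
    while (my $line = <$sock>) {
        print $line;
        last if $line =~ /^EndMessage/;
    }
    close $sock;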

And yes, I know that JS will make the anonymity filter complain, but 
ultimately, JS in the browser is no more of an anonymity issue than any other 
way of creating a Freenet application.

Once I have a prototype read-write implementation written in JavaScript, I 
will probably port it to Perl DBI.

With the browser-based approach, fproxy is just about all I can do, as 
everything has to be done using HTTP requests.

In the case of a Perl DBI library, there are more options, but Perl's HTTP 
access libraries (LWP and friends) are already there to make things a bit 
easier.
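
For example, fetching a key through fproxy from Perl is just an HTTP 
GET with LWP (the port 8888 and the key-in-the-path URL layout are 
assumptions based on my local setup, so adjust to taste):

    #!/usr/bin/perl -w
    use strict;
    use LWP::UserAgent;

    my $ua  = LWP::UserAgent->new;
    my $key = 'KSK@my-test-key';    # hypothetical key

    # Assuming fproxy listens on localhost:8888 and accepts the key
    # directly in the request path.
    my $res = $ua->get("http://localhost:8888/$key");

    if ($res->is_success) {
        print $res->content;
    } else {
        warn "Fetch failed: " . $res->status_line . "\n";
    }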

> > > You do know that Freenet is lossy, right? Content which is not accessed
> > > very much will eventually expire.
> >
> > Yes, this is why I am thinking about using DBR. There would potentially
> > be a number of nodes that would once per day retrieve the data, compact
> > it into bigger files, and re-insert it for the next day. This would be
> > equivalent to vacuum (PostgreSQL) and optimize (MySQL) commands.
> >
> > The daily operation would involve inserting many small files (one file
> > per record in a table, one file per delete flag, etc.)
> >
> > This would all be gathered, compacted, and re-inserted. Any indices would
> > also get re-generated in the same way.
>
> Hmm. Interesting.

No idea if it will work, but I don't think there's any way to find out other 
than to write a prototype. :-)
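
To make the idea concrete, the daily "vacuum" pass I have in mind 
would look roughly like this. The fetch_key/insert_key wrappers are 
hypothetical stand-ins for whatever transport ends up being used 
(fproxy HTTP or FCP), and the key naming scheme is made up:

    #!/usr/bin/perl -w
    use strict;

    # Hypothetical transport wrappers, stubbed with an in-memory
    # hash so the control flow can be run stand-alone.
    my %store;
    sub fetch_key  { my ($key) = @_; return $store{$key}; }
    sub insert_key { my ($key, $data) = @_; $store{$key} = $data; }

    # During the day, one small file per record is inserted under
    # per-record keys (naming is illustrative only).
    my $table = 'mytable';
    my @todays_records = map { "KSK\@db-$table-record-$_" } 1 .. 10;
    insert_key($_, "data for $_\n") for @todays_records;

    # The nightly pass gathers everything inserted today...
    my $compacted = '';
    for my $key (@todays_records) {
        my $data = fetch_key($key);
        $compacted .= $data if defined $data;
    }

    # ...and re-inserts it as one big file under the next day's DBR
    # target, letting the small per-record files expire.
    insert_key("KSK\@db-$table-compacted", $compacted);
    print "Compacted " . scalar(@todays_records) . " records\n";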

> > > FEC divides the file into segments of up to 128 chunks (I think).
> > > It then creates 64 check blocks for the 128 chunks (obviously fewer if
> > > fewer original chunks), and inserts the lot, along with a file
> > > specifying the CHKs of all the different chunks inserted for each
> > > segment.
> >
> > Doesn't that mean that with a maximum chunk size of 1 MB, this limits the
> > file size to 128 MB? Or did I misunderstand the maximum chunk size, and
> > it is purely a matter of caching as a factor of the store size?
>
> No. After 128 MB, we use more than one segment. Within each segment, we
> need any 128 of the 192 chunks to reconstruct the file.

Ah, OK, I understand now.
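
Just to check my arithmetic with a made-up example: a 300 MB file at 
1 MB per chunk would split into three segments, assuming the check 
block ratio scales for the final partial segment the way you describe:

    #!/usr/bin/perl -w
    use strict;
    use POSIX qw(ceil);

    my $file_mb    = 300;    # made-up file size
    my $chunk_mb   = 1;      # chunk size
    my $seg_chunks = 128;    # max data chunks per segment

    my $chunks   = ceil($file_mb / $chunk_mb);    # 300 chunks
    my $segments = ceil($chunks / $seg_chunks);   # 3 segments

    for my $s (1 .. $segments) {
        my $data   = $s < $segments ? $seg_chunks
                   : $chunks - $seg_chunks * ($segments - 1);
        my $checks = ceil($data / 2);             # 50% redundancy
        printf "segment %d: %d data + %d check (any %d of %d)\n",
               $s, $data, $checks, $data, $data + $checks;
    }

That prints 128+64 (any 128 of 192) for the first two segments and 
44+22 (any 44 of 66) for the last one.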

> > Is FEC fixed at 50% redundancy, or can the amount of redundancy be
> > controlled (e.g. reduced to 25%, if requested)? Or has it been tried and
> > tested that around 50% gives best results?
>
> Hmm. At the moment it is hard-coded. At some point we may change this.
> The original libraries support other amounts.

OK.

Thank you for your help. I think I have a better idea of how to go about 
writing my app now. :-)

Gordan
