On Tue, Nov 01, 2005 at 11:11:52AM -0500, dead wave wrote:
> 
> While thinking about Darknets and their use, I thought
> it would be great to add a background download manager built into Freenet.

We will have a download manager backend in Freenet. It will be
accessible via FCP, fproxy will be able to add files to it, and it will
be able to retry indefinitely, even across reboots if the user wants
the request persisted. It will be able to save directly to disk, or to
cache the data until the user or a client specifies where they want to
put it.
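
To make that concrete, queueing a persistent, infinitely retrying fetch
over FCP might look something like this. This is only a sketch of what
I have in mind; none of the field names below are a finalized spec:

    ClientGet
    URI=freenet:CHK@.../something.html
    Identifier=background-fetch-1
    Persistence=forever
    MaxRetries=-1
    ReturnType=disk
    Filename=/home/user/downloads/something.html
    EndMessage

Persistence=forever would keep the request across node restarts,
MaxRetries=-1 would retry indefinitely, and ReturnType could instead
tell the node to hold onto the data until the client says where to put
it.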

Now, as far as using it for hard-to-find non-splitfile content... Hmmm!
> 
> It would be useful for large or hard to find containers.
> 
> Also, using subscription, a darknet could automatically mirror/update certain 
> files or chunks (spread across the darknet) for faster access.

Hmmm!

Okay, the basic idea here is:
- You start a fetch for a file that you can't get fproxy to fetch.
- The background download manager would then retry indefinitely, along
  with all the other queued content, rotating through the queue and
  subject to rate limiting of some sort. (A rough sketch of such a queue
  follows this list.)
- We could include the next editions of updatable sites in this system,
  automatically.
- When it eventually fetched the file, it would notify you somehow: an
  RSS feed, an email, running a supplied script, anything.
- Once it has been fetched, you could browse it in fproxy, if it is
  content which is capable of being browsed in fproxy. This implies it
  must be integrated with fproxy's client cache somehow, or simply that
  it remains in the datastore after it has been fetched.
- The overheads of the above can be reduced, and functionality can be
  improved, by using passive requests.
- Some form of prefetch may be useful.
- This is in fact the beginnings of non-real-time support. It would make
  Freenet far more useful in a whole range of ways:
  - Some files will be "rare". Either they haven't been inserted yet, or
    they are stuck in the wrong part of the network, or whatever. People
    have traditionally used FUQID, with its eternal polling, to find
    such files. This may be seen as anti-social, but it is going to
    happen, so we may as well regulate it and do it as efficiently as
    possible. Apart from the download manager, per-node failure tables
    will help.
  - The next edition of a freesite will not be available until it is
    inserted. It is perfectly valid to put in a request to be notified
    when a file which doesn't exist yet is finally found. Passive
    requests would be really cool for waiting for the next edition of a
    freesite. This can be done by exactly the same mechanism as the above.
  - Hostile-environment transports are likely to be high-latency
    transports. They won't be always on, and they may not even be
    available on demand. But there is still a massive amount you can do
    with them. Good user interfaces may well make the difference between
    a viable darknet and a non-viable one, at least in testing.
  - Nodes with not-always-on connections, which aren't necessarily
    darknet, would also benefit significantly. For instance, if you run
    a node on one PC, and connect to it from another, which has the
    firepower to decode splitfiles, and both PCs are down sometimes...
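
To make the retry/rotation behaviour concrete, here is a very rough
Java sketch of the kind of queue loop I have in mind. Every class and
method name below is invented for illustration; none of this is
existing Fred code:

    import java.util.ArrayDeque;
    import java.util.Queue;

    // Very rough sketch; all names are invented, nothing here is Fred code.
    class BackgroundDownloadManager {
        static class QueuedFetch {
            final String uri; // the freenet: URI we keep trying to fetch
            int attempts;
            QueuedFetch(String uri) { this.uri = uri; }
        }

        private final Queue<QueuedFetch> queue =
            new ArrayDeque<QueuedFetch>();
        private static final long RATE_LIMIT_MS = 60 * 1000L;

        public synchronized void enqueue(String uri) {
            queue.add(new QueuedFetch(uri));
        }

        // Rotate through the queue forever: try each request once, put
        // failures at the back, notify on success.
        public void run() throws InterruptedException {
            while (true) {
                QueuedFetch f;
                synchronized (this) { f = queue.poll(); }
                if (f != null) {
                    f.attempts++;
                    if (tryFetch(f.uri)) {
                        notifyUser(f.uri); // RSS item, email, script...
                    } else {
                        synchronized (this) { queue.add(f); }
                    }
                }
                Thread.sleep(RATE_LIMIT_MS); // crude rate limiting
            }
        }

        private boolean tryFetch(String uri) {
            return false; // placeholder: hand the request to the node
        }
        private void notifyUser(String uri) {
            // placeholder: notification hook
        }
    }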

Overall, I am very enthusiastic about the above idea.
Now, what needs to change, relative to current plans?
- The background download manager needs to be able to fetch *any* file,
  even if it isn't a splitfile. So far so good, that was always the plan.
- It needs to be able to retry indefinitely. That was also the plan; we
  have to provide it, or people will just reimplement FUQID.
- It has to be able to notify the user when a file has been fetched.
  This was vaguely in the plan.
- It can be integrated with the new updating scheme. I was planning to
  have a new updating system, which would let Fred take responsibility
  for updating functionality, meaning we could implement the details
  however we wanted to (editions, TUKs, DBRs, passive requests...). The
  plan was to have an RSS feed of regularly visited sites which have
  been updated. This can be integrated with the download manager, yay!
  (A sketch of the subscription flow follows this list.)
- We may need to integrate the download manager slightly more with
  fproxy, to the extent that it will need to:
  a) Provide a link to the downloaded resource in its original URI, not
     just the file fetched. No big deal - one line of code!
  b) Ensure that that link is fetchable. This shouldn't be a big deal
     either, as the data should be cached initially in the datastore. If
     we later implement various anonymity schemes that require a
     client-cache and locally requested content not to be cached in the
     datastore, it becomes somewhat more complex, especially if we want
     to keep the data across restarts.
- We have a clearer rationale for using passive requests in the download
  manager. It was always a good idea; it's a really good idea in the
  light of the above.
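
To illustrate how the updating scheme and the download manager might
mesh, here is a sketch of the subscription flow in the same spirit;
again, all names are invented for illustration:

    // Sketch of tying update subscriptions to the download queue.
    class UpdateWatcher {
        interface DownloadQueue { void enqueue(String uri); }
        interface RssFeed { void addItem(String title, String uri); }

        private final DownloadQueue queue;
        private final RssFeed feed;

        UpdateWatcher(DownloadQueue queue, RssFeed feed) {
            this.queue = queue;
            this.feed = feed;
        }

        // Called when the node learns that a new edition of a watched
        // site exists (via polling, TUKs, DBRs or a passive request).
        void onNewEdition(String siteName, String newEditionUri) {
            queue.enqueue(newEditionUri); // fetch it in the background
            feed.addItem(siteName + " has updated", newEditionUri);
        }
    }
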
> 
> Implementing a distributed database and web crawler with fully anonymous 
> searches for version 1.0 would make Freenet
> more user-friendly.

Anonymous search would be very useful, agreed, even if it is slow.
There are two EASY ways to do anonymous search. Neither of them is
fully distributed, and both export the trust/spam issue (you have to
trust whoever maintains the index), but they should do fine for now.
Both of them involve a central spider spidering sites and creating an
index:
1. The spider publishes the index, and the search client fetches it in
order to do searches. Obviously there are scalability issues here: once
the index is large enough that clients can't prefetch the whole thing,
searching becomes rather slow. This can be integrated into the
high-latency handling, though.
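
As a sketch of how option 1 could avoid fetching the whole index,
assume one possible layout (not a spec): the spider publishes the index
as per-prefix files, e.g. .../index/aa, .../index/ab, and the client
fetches only the file matching its term's prefix:

    import java.util.ArrayList;
    import java.util.List;

    // Sketch only; Fetcher is an invented front-end to the node.
    class PublishedIndexSearch {
        interface Fetcher { String fetch(String freenetUri); }

        static List<String> search(Fetcher node, String indexUri,
                                   String term) {
            String prefix = term.substring(0, Math.min(2, term.length()));
            String chunk = node.fetch(indexUri + "/index/" + prefix);
            List<String> hits = new ArrayList<String>();
            // One entry per line: "term<TAB>uri<TAB>title"
            for (String line : chunk.split("\n")) {
                if (line.startsWith(term + "\t")) hits.add(line);
            }
            return hits;
        }
    }
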
2. The client sends a message to the server, and the server replies.
There are various means of doing this; the efficient, fast ones would
require new functionality, as they rely on "rendezvous at a key".
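
And a hypothetical sketch of "rendezvous at a key"; none of this is an
existing Fred API. The idea is that an insert and a (passive) request
for the same key meet at the node closest to that key, so neither side
has to poll. Collision handling and key derivation are waved away here:

    import java.util.function.Consumer;

    // Hypothetical API; Node and Index are invented for illustration.
    class RendezvousSearch {
        interface Node {
            void subscribe(String key, Consumer<String> onData);
            void insert(String key, String data);
        }
        interface Index { String search(String terms); }

        static void server(Node node, String inboxKey, Index index) {
            // Park a long-lived passive request at the server's inbox key.
            node.subscribe(inboxKey, query -> {
                String[] parts = query.split("\n", 2); // "replyKey\nterms"
                node.insert(parts[0], index.search(parts[1]));
            });
        }

        static void client(Node node, String inboxKey, String terms) {
            String replyKey = "one-time-reply-key"; // derivation waved away
            node.subscribe(replyKey, System.out::println);
            node.insert(inboxKey, replyKey + "\n" + terms);
        }
    }
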
-- 
Matthew J Toseland - toad at amphibian.dyndns.org
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.