Note: I've written this in plain text but with some HTML markup.
I hope that doesn't cause grievous problems for anyone.

[EMAIL PROTECTED] ([EMAIL PROTECTED]) wrote:

> That is exactly what I meant.  It would not query the encrypted files, but
> rather it would determine which directories Freenet is sharing, and then
> index those files.

I think you're still suffering from a fundamental misunderstanding
of what Freenet is, and isn't.

I *used* to (ahem) have a nice FAQ on the web page which addressed
precisely this confusion.  It's gone now; there's
<a href="http://freenetproject.org/index.php?page=whatis";>What is
Freenet?</a>, and that may be helpful, but it's probably not
going to help you unlearn your false assumptions.  So here goes:

What is Freenet?
----------------

First, let's start by saying what it <i>isn't</i>.  Freenet does
not work like a traditional "peer-to-peer" file sharing client.
You don't point Freenet at a directory on your disk and say "share
all of the files in that directory".  You can't search for files
on other people's Freenet nodes, or even on your own.

So, what is Freenet then?  The technical answer is that it's a
distributed, anonymous, encrypted publishing system.  It's kind of
like the World Wide Web, because you can use a web browser to
retrieve "pages" from it, and you can hyperlink to other pages.
But it's not exactly like the WWW, because there are no web servers;
the pages you're fetching are pulled from the Freenet storage space
by completely different methods than the WWW uses.  Also,
it's a bit like Usenet, in that stuff is "inserted" (or "posted")
into Freenet from a single point, and then propagated to multiple
places so that people can read it even if the point of origin is
no longer on-line.  (People have claimed to publish widely popular
"Freesites" from dial-up connections.)  But it's not exactly like
Usenet, because the pages you insert are not copied to every single
Freenet node; instead, they only go to a tiny fraction of the nodes.

The basic Freenet package has two parts: the node ("Fred") and
fproxy.  The node, Fred (which stands for "Freenet Reference Daemon"),
is a background process that runs continuously on your computer.
It answers requests for data from other Freenet nodes, and it issues
requests of its own.  When you install Freenet, you dedicate a part
of your local hard disk to become the "data store".  Files that Fred
has received from other nodes are put into the data store, and when
it's full, the least recently used files are deleted.  All of the
files in the data store are encrypted; you can't read them, and
Fred can't read them, unless you have the decryption key.  This is
done on purpose, because it allows you to plausibly deny having
asked for, or inserted, a specific file if someone should discover
that you have it in your data store.
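
(If it helps to see the idea in code, here's a rough Python sketch
of a fixed-size store with least-recently-used eviction.  This is
purely illustrative -- it is not Fred's actual code, and all the
names in it are made up.)

    # Illustrative sketch only, not Fred's implementation: a store
    # with a fixed byte budget that evicts least-recently-used
    # entries.  Contents stay encrypted; the store never sees
    # plaintext.
    from collections import OrderedDict

    class DataStore:
        def __init__(self, capacity_bytes):
            self.capacity = capacity_bytes
            self.used = 0
            self.entries = OrderedDict()    # key -> encrypted bytes

        def put(self, key, encrypted_blob):
            if len(encrypted_blob) > self.capacity:
                return                      # too big to store at all
            if key in self.entries:
                self.used -= len(self.entries.pop(key))
            # Evict from the least-recently-used end until it fits.
            while self.used + len(encrypted_blob) > self.capacity:
                _, evicted = self.entries.popitem(last=False)
                self.used -= len(evicted)
            self.entries[key] = encrypted_blob
            self.used += len(encrypted_blob)

        def get(self, key):
            blob = self.entries.get(key)
            if blob is not None:
                self.entries.move_to_end(key)  # mark as recently used
            return blob     # still encrypted; useless without the key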

Fproxy is the component that makes Freenet act like a web server,
so that you can "surf" it using a normal web browser.  It's currently
bundled together with Fred, running as part of the same process,
but it's possible that it may be separated in the future.  (Once,
long ago, some people had written separate standalone fproxy-like
programs, but to the best of my knowledge none of them have survived.)
You talk to fproxy by pointing a web browser at
<a href="http://127.0.0.1:8888/";>http://127.0.0.1:8888/</a>.
From here, you'll see a set of "bookmarks" to Freesites, which you
can also customize.  You can click on these sites, and eventually,
if everything's working right, you'll get the pages that they link
to.  The default bookmark sites contain huge lists of links to other
Freenet sites, so that you can quickly find information on a wide
variety of topics.
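
(Because fproxy is just a local web server, anything that speaks
HTTP can talk to it, not only a browser.  As a quick illustration --
assuming a node is running with fproxy on the default port 8888 --
this Python snippet fetches the gateway's front page, the bookmark
list described above:)

    # Fetch the fproxy gateway page over plain local HTTP.
    import urllib.request

    FPROXY = "http://127.0.0.1:8888/"

    with urllib.request.urlopen(FPROXY) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    print(html[:500])       # the start of the bookmarks page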

Navigation of Freenet is very much like "surfing" the WWW in the
old days before search engines like AltaVista and Google came
along.  If you wanted to find information, you'd have to start
with a site that was "pretty close" to what you wanted, and hope
that it had links to sites which had more details.  It's possible
to write a "spider" which does what Google does -- start with a
set of known Freesites, and follow the hyperlinks to other sites,
recursively, thereby mapping out the entire Freesite space.  A
few people have even done this over the years, and some of these
search engines may still be online.  But since these search engines
would operate over the normal WWW, they suffer from the WWW's
limitations: mainly, that you submit requests in the clear, not
encrypted (the search engine knows what you asked for); there's a
central point of failure (if the site the search engine is on goes
down, it's unusable); and your requests are not anonymous (the
search engine knows what IP address submitted the requests).
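
(To make the spider idea concrete, here's a toy Python sketch.  It
starts from a seed list of Freesite paths, fetches each page through
the local fproxy -- so the requests themselves travel over Freenet --
and follows site-relative links, breadth-first.  A real spider would
need proper HTML parsing, politeness, and much longer timeouts;
everything below is a simplified stand-in.)

    # Toy breadth-first crawl of Freesites via the local fproxy.
    import re
    import urllib.request
    from collections import deque

    FPROXY = "http://127.0.0.1:8888"
    LINK_RE = re.compile(r'href="(/[^"]+)"')  # site-relative links only

    def crawl(seed_paths, max_pages=100):
        seen = set(seed_paths)
        queue = deque(seed_paths)
        index = {}                    # path -> outgoing links found
        while queue and len(index) < max_pages:
            path = queue.popleft()
            try:
                with urllib.request.urlopen(FPROXY + path, timeout=300) as r:
                    page = r.read().decode("utf-8", errors="replace")
            except OSError:
                continue              # key unreachable right now; skip
            links = LINK_RE.findall(page)
            index[path] = links
            for link in links:
                if link not in seen:
                    seen.add(link)
                    queue.append(link)
        return index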

(We'd then move on to more detailed descriptions, with a list of
the types of keys, pictures of A requesting files from B, and so
on.  Hopefully most of <i>that</i> has survived....)

-- 
Greg Wooledge                  |   "Truth belongs to everybody."
[EMAIL PROTECTED]              |    - The Red Hot Chili Peppers
http://wooledge.org/~greg/     |
