Re: Some ideas on Freenet architecture

2017-11-01 Thread Arne Babenhauserheide

Matthew John Toseland  writes:

> On 16/09/17 08:31, Arne Babenhauserheide wrote:
>> 
>> Matthew John Toseland  writes:
>>> The above is slightly inaccurate. For a genuine user, the third step
>>> starts off as a bundle, then later becomes a broadcast. We enforce the
>>> scarcity and popularity requirements in both stages.
>> 
>> Doesn’t enforcing scarcity at the bundle level create information
>> leakage about the length of the bundle-tunnel?
>
> I don't know?

That’s something we’ll have to figure out; otherwise this could
seriously decrease anonymity.

Best wishes,
Arne
-- 
Unpolitisch sein
heißt politisch sein
ohne es zu merken




Re: Some ideas on Freenet architecture

2017-09-16 Thread Matthew John Toseland
On 16/09/17 08:31, Arne Babenhauserheide wrote:
> 
> Matthew John Toseland  writes:
>> The above is slightly inaccurate. For a genuine user, the third step
>> starts off as a bundle, then later becomes a broadcast. We enforce the
>> scarcity and popularity requirements in both stages.
> 
> Doesn’t enforcing scarcity at the bundle level create information
> leakage about the length of the bundle-tunnel?

I don't know?
> 
>>> However, a solution which preserves the resilience of WoT could be
>>> built on top of what you describe by not having one global queue but
>>> rather N queues with each ID trusting one of the queues chosen at
>>> random. Every day you choose another section to watch.
>>>
>>> N could be chosen by taking the size of the WoT into account.
>>
>> For Scarce KSKs to work, as currently designed, the queue needs to be
>> *very* popular. So maybe have everyone subscribe to the global queue,
>> but new identities only accepted automatically by some randomised subset
>> of the graph. Then you need to get some likes or whatever before you can
>> be seen more widely.
> 
> That should work, too, yes.
> 
> There could also be multiple different applications listening to the
> same queue. It only says "there’s a new key".
> 
>>> This would enable use-cases like automatic newsbots (something I’m
>>> sorely missing for babcom_cli).
>>
>> I don't understand this use-case.
> 
> I’d like to provide users with some autonomous systems which, for
> example, aggregate posts about a given topic, something like mailing
> lists. These must work without user interaction, but without opening
> an avenue for spamming.
> 
> Best wishes,
> Arne
> 





Re: Some ideas on Freenet architecture

2017-09-16 Thread Arne Babenhauserheide

Matthew John Toseland  writes:
> The above is slightly inaccurate. For a genuine user, the third step
> starts off as a bundle, then later becomes a broadcast. We enforce the
> scarcity and popularity requirements in both stages.

Doesn’t enforcing scarcity at the bundle level create information
leakage about the length of the bundle-tunnel?

>> However, a solution which preserves the resilience of WoT could be
>> built on top of what you describe by not having one global queue but
>> rather N queues with each ID trusting one of the queues chosen at
>> random. Every day you choose another section to watch.
>> 
>> N could be chosen by taking the size of the WoT into account.
>
> For Scarce KSKs to work, as currently designed, the queue needs to be
> *very* popular. So maybe have everyone subscribe to the global queue,
> but new identities only accepted automatically by some randomised subset
> of the graph. Then you need to get some likes or whatever before you can
> be seen more widely.

That should work, too, yes.

There could also be multiple different applications listening to the
same queue. It only says "there’s a new key".

>> This would enable use-cases like automatic newsbots (something I’m
>> sorely missing for babcom_cli).
>
> I don't understand this use-case.

I’d like to provide users with some autonomous systems which, for
example, aggregate posts about a given topic, something like mailing
lists. These must work without user interaction, but without opening
an avenue for spamming.

Best wishes,
Arne
-- 
Unpolitisch sein
heißt politisch sein
ohne es zu merken




Re: Some ideas on Freenet architecture

2017-09-15 Thread Matthew John Toseland
On 15/09/17 20:51, Arne Babenhauserheide wrote:
> 
> Matthew John Toseland  writes:
>> Specifically, the process for a scarce *insert* will be:
>> 1) Is the key popular? We can tell this from how many peers are
>> subscribed to it in the ULPR table. If not, kill the request; we only
>> provide protection for popular applications.
>> 2) Has there been another scarce SSK insert from the same peer recently?
>> If so, kill the request; they've used their quota. (More complex
>> conditions are possible; quotas, per-key etc)
>> 3) Commit the data to the datastore and propagate it via ULPRs.
>>
>> So indeed, there is no routing involved here.
> 
> This sounds good.

The above is slightly inaccurate. For a genuine user, the third step
starts off as a bundle, then later becomes a broadcast. We enforce the
scarcity and popularity requirements in both stages.

Spammers would of course go straight into the broadcast stage.

The reason for not involving routing is that I can't see a way to involve
routing that actually works. Perfect scalability is not necessary, only
good-enough scalability.
> 
>> All we do is have a single global Scarce KSK queue for announcements.
>> Every WoT instance subscribes to it. The key is generated from the date
>> and changes e.g. once a day. All scarce inserts are propagated to all
>> nodes - but the scarcity mechanism restricts the number of inserts.
> …
>> Every WoT instance adds every identity announced via the queue. Then we
>> rely on the usual mechanisms to weed out spammers after the fact - some
>> combination of positive and negative trust, backed by the fact that it's
>> relatively costly to create new identities.
> 
> This misses part of the point of WoT: WoT only lets you be seen by a
> small group at first and when you get positive trust you get seen by a
> wider group.
> 
> However a solation which keeps the resilience of WoT active could be
> built on top of what you describe by not having one global queue but
> rather N queues with each ID trusting one of the queues chosen at
> random. Every day you choose another section to watch.
> 
> N could be chosen by taking the size of the WoT into account.

For Scarce KSKs to work, as currently designed, the queue needs to be
*very* popular. So maybe have everyone subscribe to the global queue,
but new identities only accepted automatically by some randomised subset
of the graph. Then you need to get some likes or whatever before you can
be seen more widely.
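
For illustration, one way a WoT instance might decide whether it is in
that randomised subset for a given newcomer is to derive the decision
deterministically from its own identity and the announced key. A minimal
sketch (hypothetical names, assumed 5% fraction; not actual WoT code):

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    public class AnnouncementFilter {
        // Assumed tuning knob: fraction of the graph that auto-accepts a newcomer.
        private static final double ACCEPT_FRACTION = 0.05;

        public static boolean autoAccept(String ownIdentityId, String announcedKey)
                throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(ownIdentityId.getBytes(StandardCharsets.UTF_8));
            md.update(announcedKey.getBytes(StandardCharsets.UTF_8));
            byte[] h = md.digest();
            long v = 0;
            for (int i = 0; i < 8; i++) v = (v << 8) | (h[i] & 0xFF);
            double fraction = (v >>> 11) / (double) (1L << 53); // uniform in [0,1)
            return fraction < ACCEPT_FRACTION;
        }

        public static void main(String[] args) throws Exception {
            System.out.println(autoAccept("myWoTIdentity", "SSK@announcedIdentity"));
        }
    }

Everyone still sees the announcement via the queue; only the automatic
trust decision is restricted to the subset.
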
> 
> This would enable use-cases like automatic newsbots (something I’m
> sorely missing for babcom_cli).

I don't understand this use-case.
> 
> Kudos!
> 
> Best wishes,
> Arne
>





Re: Some ideas on Freenet architecture

2017-09-15 Thread Arne Babenhauserheide

Matthew John Toseland  writes:
> Specifically, the process for a scarce *insert* will be:
> 1) Is the key popular? We can tell this from how many peers are
> subscribed to it in the ULPR table. If not, kill the request; we only
> provide protection for popular applications.
> 2) Has there been another scarce SSK insert from the same peer recently?
> If so, kill the request; they've used their quota. (More complex
> conditions are possible; quotas, per-key etc)
> 3) Commit the data to the datastore and propagate it via ULPRs.
>
> So indeed, there is no routing involved here.

This sounds good.

> All we do is have a single global Scarce KSK queue for announcements.
> Every WoT instance subscribes to it. The key is generated from the date
> and changes e.g. once a day. All scarce inserts are propagated to all
> nodes - but the scarcity mechanism restricts the number of inserts.
…
> Every WoT instance adds every identity announced via the queue. Then we
> rely on the usual mechanisms to weed out spammers after the fact - some
> combination of positive and negative trust, backed by the fact that it's
> relatively costly to create new identities.

This misses part of the point of WoT: WoT only lets you be seen by a
small group at first and when you get positive trust you get seen by a
wider group.

However, a solution which preserves the resilience of WoT could be
built on top of what you describe by not having one global queue but
rather N queues with each ID trusting one of the queues chosen at
random. Every day you choose another section to watch.

N could be chosen by taking the size of the WoT into account.
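
As a rough sketch, an identity could derive which of the N queues to
watch on a given day from its own ID and the date, with N scaled to the
WoT size (hypothetical names; the per-queue watcher target is an assumed
tuning value):

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.time.LocalDate;

    public class QueuePicker {
        // Assumed target: each queue should stay popular enough for ULPRs.
        private static final int TARGET_WATCHERS_PER_QUEUE = 2000;

        public static int queueCount(int wotSize) {
            return Math.max(1, wotSize / TARGET_WATCHERS_PER_QUEUE);
        }

        // Which announcement queue this identity watches today.
        public static int queueToWatch(String identityId, LocalDate today, int wotSize)
                throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(identityId.getBytes(StandardCharsets.UTF_8));
            md.update(today.toString().getBytes(StandardCharsets.UTF_8));
            byte[] h = md.digest();
            int v = ((h[0] & 0xFF) << 24) | ((h[1] & 0xFF) << 16)
                  | ((h[2] & 0xFF) << 8) | (h[3] & 0xFF);
            return Math.floorMod(v, queueCount(wotSize));
        }

        public static void main(String[] args) throws Exception {
            System.out.println(queueToWatch("myIdentity", LocalDate.now(), 20000));
        }
    }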

This would enable use-cases like automatic newsbots (something I’m
sorely missing for babcom_cli).

Kudos!

Best wishes,
Arne
-- 
Unpolitisch sein
heißt politisch sein
ohne es zu merken




Re: Some ideas on Freenet architecture

2017-09-15 Thread Matthew John Toseland
On 14/09/17 22:18, Arne Babenhauserheide wrote:
> 
> Matthew John Toseland  writes:
>> On 14/09/17 17:33, Arne Babenhauserheide wrote:
>>> I think the core problem is that I don’t quite understand what you’re
>>> proposing. Bundles I understand, but I don’t understand how the scarce
>>> SSKs change routing.
>>
>> They don't. Pre-routing (bundles) applies to scarce SSKs as well as to
>> ordinary keys. What changes (in proposal 1 and later) is that during the
>> pre-routing phase we impose per-connection limits.
>>
>> With the updates I've posted, scarce SSKs do not get routed at all. I
>> don't think it makes sense to try to impose that sort of limits on
>> routed inserts, because it would tend to distort routing.
>>
>> Instead they only work if a key is popular. For popular keys we
>> generally don't route much either: the first few requests get routed,
>> but we end up forming a web of ULPRs, so that when the data is inserted,
>> it rapidly gets propagated everywhere.
> 
> It first needs to be in the store of a node at the fitting location,
> right? Then it gets requested and the ULPRs catch the requests.

No.
> 
> Or do the ULPRs actually see the inserts directly?

ULPRs are Ultra-Lightweight Passive Requests. What that means is that
they are *waiting for the data to be inserted/found*.

Once the data is found, we rapidly propagate it to the entire network.
So if the data has been inserted already, there is no need for ULPRs,
and it can be found within a hop or two of just about any node.

The other side of this mechanism is "RecentlyFailed": if we get a
request for a key, and the same node has recently requested it (with
some other conditions I don't remember), we terminate the request,
because we are effectively already looking for it. Because of ULPRs, if
we see the data, it will be forwarded to the node that requested it.

Put together, we radically reduce traffic generated by things like WoT,
allowing them to subscribe to keys but only actively poll them every 10
minutes or so, while propagating popular data so quickly that FLIP has
sub-minute latencies.
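
For illustration, the node-local bookkeeping this implies might look
roughly like the following sketch (hypothetical names, assumed
RecentlyFailed window; not fred's actual classes):

    import java.util.*;

    public class UlprTable {
        private static final long RECENTLY_FAILED_MS = 10 * 60 * 1000; // assumed ~10 minutes
        // key -> (peer -> time of its last request); the peers are the ULPR waiters.
        private final Map<String, Map<String, Long>> table = new HashMap<>();

        // Returns true if we should forward the request onwards, false if we
        // answer RecentlyFailed (we are effectively already looking for it)
        // while keeping the peer registered as a waiter.
        public synchronized boolean onRequest(String key, String fromPeer, long now) {
            Map<String, Long> waiters = table.computeIfAbsent(key, k -> new HashMap<>());
            Long last = waiters.put(fromPeer, now);
            return last == null || now - last >= RECENTLY_FAILED_MS;
        }

        // When the data is found or inserted, propagate it to everyone waiting.
        // Afterwards the key is no longer "popular" here: everyone got served.
        public synchronized Set<String> onDataFound(String key) {
            Map<String, Long> waiters = table.remove(key);
            return waiters == null ? Collections.<String>emptySet() : waiters.keySet();
        }

        // How many peers are subscribed, i.e. how "popular" the key looks locally.
        public synchronized int popularity(String key) {
            Map<String, Long> waiters = table.get(key);
            return waiters == null ? 0 : waiters.size();
        }
    }
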
> 
>> Hence my proposal is that we only allow "scarce SSKs" for keys so
>> popular that most of the network is subscribing to them via ULPRs. This
>> only means 5% or so of the network is actively requesting them
>> occasionally. But it covers the key use cases such as WoT announcement.
> 
> So you mean when there’s a ULPR pointing to a KSK, it wouldn’t be routed?

Popular keys don't get routed. The first few requests get routed but
pretty soon we are rejecting requests with RecentlyFailed. Periodically
a few more requests are let through.

Inserts *do* currently get routed, even for popular keys: we forward
them as normal, and then only start triggering the ULPRs once the insert
has run out of HTL and been stored. But this is a security tradeoff. And
ULPRs are not just useful for globally important keys; they can be a
useful optimisation if only part of the network is polling for the key,
forming a tree rooted where the data should be stored.

But for scarce SSK inserts, we are assuming that only popular keys
matter. This mechanism is purely for things like WoT announcements,
where there's a good chance that 1/HTL of the network is polling for them.

Specifically, the process for a scarce *insert* will be:
1) Is the key popular? We can tell this from how many peers are
subscribed to it in the ULPR table. If not, kill the request; we only
provide protection for popular applications.
2) Has there been another scarce SSK insert from the same peer recently?
If so, kill the request; they've used their quota. (More complex
conditions are possible; quotas, per-key etc)
3) Commit the data to the datastore and propagate it via ULPRs.

So indeed, there is no routing involved here.
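
As a sketch, that endpoint handling could look something like this
(hypothetical names; the popularity threshold and per-peer quota window
are assumed values, not decided ones):

    import java.util.*;

    public class ScarceInsertHandler {
        private static final int POPULARITY_THRESHOLD = 5;                  // assumed
        private static final long PER_PEER_QUOTA_MS = 24L * 60 * 60 * 1000; // assumed: ~1/day

        // key -> peers currently waiting for it via ULPR subscriptions
        private final Map<String, Set<String>> ulprSubscribers = new HashMap<>();
        private final Map<String, Long> lastScarceInsertFromPeer = new HashMap<>();

        // Returns the set of peers to broadcast the data to; empty = insert killed.
        public Set<String> handleScarceInsert(String key, String fromPeer, long now) {
            Set<String> waiting = ulprSubscribers.getOrDefault(key, Collections.emptySet());
            // 1) Only popular keys get the scarce treatment.
            if (waiting.size() < POPULARITY_THRESHOLD) return Collections.emptySet();
            // 2) One scarce insert per peer per quota window.
            Long last = lastScarceInsertFromPeer.get(fromPeer);
            if (last != null && now - last < PER_PEER_QUOTA_MS) return Collections.emptySet();
            lastScarceInsertFromPeer.put(fromPeer, now);
            // 3) Commit to the datastore (omitted here) and propagate via the ULPRs.
            ulprSubscribers.remove(key);
            return waiting;
        }
    }
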
> 
>> IMHO we need a degree of fairness to provide effective spam defence.
>> Hence the multiple versions proposal: the spammer can insert a limited
>> number of keys, but he can't prevent the other legitimate inserts.
> 
> This sounds like it would create much additional complexity. Do we need
> globally consistent CAPTCHA solutions?

We're trying to eliminate the need for CAPTCHAs here, because they have
poor security *and* poor usability, and are basically unsustainable. We
could use them as well, but any serious attacker will certainly be able
to solve CAPTCHAs cheaply, while they are a PITA for many users,
especially disabled users.

All we do is have a single global Scarce KSK queue for announcements.
Every WoT instance subscribes to it. The key is generated from the date
and changes e.g. once a day. All scarce inserts are propagated to all
nodes - but the scarcity mechanism restricts the number of inserts.
Spammers tend to be contained in a small area of the network, though
they may cause some local damage, but if that becomes a problem there
are probably solutions we could think about.
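
The queue key itself could then be as trivial as deriving a KSK name
from the current date; the naming scheme below is purely illustrative,
not an agreed convention:

    import java.time.LocalDate;
    import java.time.format.DateTimeFormatter;

    public class DailyQueueKey {
        // Today's global announcement-queue key; it changes once a day.
        public static String queueKeyFor(LocalDate date) {
            return "KSK@wot-announce-" + date.format(DateTimeFormatter.ISO_LOCAL_DATE);
        }

        public static void main(String[] args) {
            System.out.println(queueKeyFor(LocalDate.of(2017, 9, 15)));
            // -> KSK@wot-announce-2017-09-15
        }
    }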

Every WoT instance adds every identity announced via the queue. Then we
rely on the usual mechanisms to weed out spammers after the fact - some
combination of positive and negative trust, backed by the fact that it's
relatively costly to create new identities.

Re: Some ideas on Freenet architecture

2017-09-14 Thread Arne Babenhauserheide

Matthew John Toseland  writes:
> On 14/09/17 17:33, Arne Babenhauserheide wrote:
>> I think the core problem is that I don’t quite understand what you’re
>> proposing. Bundles I understand, but I don’t understand how the scarce
>> SSKs change routing.
>
> They don't. Pre-routing (bundles) applies to scarce SSKs as well as to
> ordinary keys. What changes (in proposal 1 and later) is that during the
> pre-routing phase we impose per-connection limits.
>
> With the updates I've posted, scarce SSKs do not get routed at all. I
> don't think it makes sense to try to impose that sort of limits on
> routed inserts, because it would tend to distort routing.
>
> Instead they only work if a key is popular. For popular keys we
> generally don't route much either: the first few requests get routed,
> but we end up forming a web of ULPRs, so that when the data is inserted,
> it rapidly gets propagated everywhere.

It first needs to be in the store of a node at the fitting location,
right? Then it gets requested and the ULPRs catch the requests.

Or do the ULPRs actually see the inserts directly?

> Hence my proposal is that we only allow "scarce SSKs" for keys so
> popular that most of the network is subscribing to them via ULPRs. This
> only means 5% or so of the network is actively requesting them
> occasionally. But it covers the key use cases such as WoT announcement.

So you mean when there’s a ULPR pointing to a KSK, it wouldn’t be routed?

> IMHO we need a degree of fairness to provide effective spam defence.
> Hence the multiple versions proposal: the spammer can insert a limited
> number of keys, but he can't prevent the other legitimate inserts.

This sounds like it would create much additional complexity. Do we need
globally consistent CAPTCHA solutions?

>> Or even deeper into my missing understanding: How should multiple SSKs
>> compete at all, given that only one person has the private key?
>
> Announcements typically use KSKs, known private keys, or similar
> spammable mechanisms. That's what we're trying to replace here.

That’s great!

> The only points at which this escalates to the client layer are:
> 1) For proposal 2, clients will get multiple responses to a
> request/subscription, and will need to decide what to do with them.
> 2) For proposal 3, clients additionally should register a vote by
> reinserting the SSK they prefer.




-- 
Unpolitisch sein
heißt politisch sein
ohne es zu merken




Re: Some ideas on Freenet architecture

2017-09-14 Thread Matthew John Toseland


On 14/09/17 17:33, Arne Babenhauserheide wrote:
> 
> Matthew John Toseland  writes:
>> On 13/09/17 21:23, Arne Babenhauserheide wrote:
>>> Matthew John Toseland  writes:
 Proposal 0: Bundles
 ---
>>> Do I understand it correctly that this means that all inserts from a
>>> given source initially travel along the same path?
>> Right, this is the poor man's tunneling scheme. Although one day perhaps
>> we could have something like PISCES.
> 
> Poor man's tunneling (i.e. the simplest tunneling which could work) sounds
> like something pretty useful.
> 
>>> It would be great to have this for requests, too, since this would
>>> clearly stop most correlation attacks which are used right now.
> …
 Proposal 1: Scarce SSKs with global (per-node) scarcity
 ---
>>> …
 Proposal 2: Scarce SSKs with per-key scarcity and multiple versions
 ---
>>> …
 Proposal 3: Scarce SSKs with voting
 ---
>>> …
>>> I don’t really understand these three. They all sound like they could
>>> leak information from the anonymous ID layer to the networking layer.
>>
>> I don't see why. Maybe you can make an argument re voting schemes. But
>> what is wrong with proposal 1 that makes it any different security-wise
>> to what happens now?
> 
> I think the core problem is that I don’t quite understand what you’re
> proposing. Bundles I understand, but I don’t understand how the scarce
> SSKs change routing.

They don't. Pre-routing (bundles) applies to scarce SSKs as well as to
ordinary keys. What changes (in proposal 1 and later) is that during the
pre-routing phase we impose per-connection limits.

With the updates I've posted, scarce SSKs do not get routed at all. I
don't think it makes sense to try to impose that sort of limits on
routed inserts, because it would tend to distort routing.

Instead they only work if a key is popular. For popular keys we
generally don't route much either: the first few requests get routed,
but we end up forming a web of ULPRs, so that when the data is inserted,
it rapidly gets propagated everywhere.

Hence my proposal is that we only allow "scarce SSKs" for keys so
popular that most of the network is subscribing to them via ULPRs. This
only means 5% or so of the network is actively requesting them
occasionally. But it covers the key use cases such as WoT announcement.

For new apps which might take time to get that popular, we will need to
provide an API which will smoothly fall back from one to the other.

> My reason to worry is that when something on the content layer (for
> example pseudonyms) affects routing, it can be detected from routing
> performance. Do you want to provide spam defense, or do you want to
> provide some kind of fairness?

IMHO we need a degree of fairness to provide effective spam defence.
Hence the multiple versions proposal: the spammer can insert a limited
number of keys, but he can't prevent the other legitimate inserts.
> 
> Or even deeper into my missing understanding: How should multiple SSKs
> compete at all, given that only one person has the private key?

Announcements typically use KSKs, known private keys, or similar
spammable mechanisms. That's what we're trying to replace here.

Although it would equally work with PSKs... but maybe we don't need the
complexity of PSKs with the voting proposal.

The only points at which this escalates to the client layer are:
1) For proposal 2, clients will get multiple responses to a
request/subscription, and will need to decide what to do with them.
2) For proposal 3, clients additionally should register a vote by
reinserting the SSK they prefer.
> 
> Best wishes,
> Arne





Re: Some ideas on Freenet architecture

2017-09-14 Thread Arne Babenhauserheide

Matthew John Toseland  writes:
> On 13/09/17 21:23, Arne Babenhauserheide wrote:
>> Matthew John Toseland  writes:
>>> Proposal 0: Bundles
>>> ---
>> Do I understand it correctly that this means that all inserts from a
>> given source initially travel along the same path?
> Right, this is the poor man's tunneling scheme. Although one day perhaps
> we could have something like PISCES.

Poor man's tunneling (i.e. the simplest tunneling which could work) sounds
like something pretty useful.

>> It would be great to have this for requests, too, since this would
>> clearly stop most correlation attacks which are used right now.
…
>>> Proposal 1: Scarce SSKs with global (per-node) scarcity
>>> ---
>> …
>>> Proposal 2: Scarce SSKs with per-key scarcity and multiple versions
>>> ---
>> …
>>> Proposal 3: Scarce SSKs with voting
>>> ---
>> …
>> I don’t really understand these three. They all sound like they could
>> leak information from the anonymous ID layer to the networking layer.
>
> I don't see why. Maybe you can make an argument re voting schemes. But
> what is wrong with proposal 1 that makes it any different security-wise
> to what happens now?

I think the core problem is that I don’t quite understand what you’re
proposing. Bundles I understand, but I don’t understand how the scarce
SSKs change routing.

My reason to worry is that when something on the content layer (for
example pseudonyms) affects routing, it can be detected from routing
performance. Do you want to provide spam defense, or do you want to
provide some kind of fairness?

Or even deeper into my missing understanding: How should multiple SSKs
compete at all, given that only one person has the private key?

Best wishes,
Arne
-- 
Unpolitisch sein
heißt politisch sein
ohne es zu merken




Re: Some ideas on Freenet architecture

2017-09-13 Thread Matthew John Toseland
On 13/09/17 21:23, Arne Babenhauserheide wrote:
> 
> Matthew John Toseland  writes:
> 
>> On 12/09/17 22:34, Arne Babenhauserheide wrote:
>>>
>>> Matthew John Toseland  writes:
 But we need *something* in this approximate area.
>>>
>>> I fully agree with that. I think however that we need to be careful not
>>> to require user interaction for that. Users already added the friend, we
>>> can’t require more than maintaining the friend connections. Otherwise
>>> that would be the next bottleneck.
>>
>> What makes you think we'd need user interaction?
> 
> That’s how I understood it.
> 
>> If there is any deconfliction it's all at the client level, not the
>> node. The client can enforce any appropriate rules and decide to cast a
>> vote by reinserting the block it most prefers. The user might get
>> involved if there's a dispute about e.g. eliminating spammers.
> 
> I don’t understand how the client casts votes without exposing the
> data it uses to decide how to vote. How do you avoid coupling private
> data with the insert decisions?

For the one proposal that involves votes, a vote is effectively an insert.
> 
>> Proposal 0: Bundles
>> ---
>>
>> If an SSK insert has maximum HTL, we pre-route it along a fixed
>> pseudo-random path, which ideally doesn't change for a given source
>> node. It only uses friend connections, and stays at max HTL until a
>> pseudo-random termination point (always the same node on a static
>> network). After that it turns into an ordinary insert.
> 
> Do I understand it correctly that this means that all inserts from a
> given source initially travel along the same path?
> 
> If yes, then it sounds good.

Right, this is the poor man's tunneling scheme. Although one day perhaps
we could have something like PISCES.
> 
>> There are tricks we can use to compensate for churn by using shadow
>> nodes so that even if we can't route it to fixed peer A, we route it to
>> fixed peer B instead, but the following hop is the same (possibly using
>> FOAF connections and proof of previous arrangements and whatever).
> 
> That sounds like quite a bit of additional complexity but a nice
> longterm goal.
> 
> It would be great to have this for requests, too, since this would
> clearly stop most correlation attacks which are used right now.
> 
>> Proposal 1: Scarce SSKs with global (per-node) scarcity
>> ---
> …
>> Proposal 2: Scarce SSKs with per-key scarcity and multiple versions
>> ---
> …
>> Proposal 3: Scarce SSKs with voting
>> ---
> …
> I don’t really understand these three. They all sound like they could
> leak information from the anonymous ID layer to the networking layer.

I don't see why. Maybe you can make an argument re voting schemes. But
what is wrong with proposal 1 that makes it any different security-wise
to what happens now?





Re: Some ideas on Freenet architecture

2017-09-13 Thread Arne Babenhauserheide

Matthew John Toseland  writes:

> On 12/09/17 22:34, Arne Babenhauserheide wrote:
>> 
>> Matthew John Toseland  writes:
>>> But we need *something* in this approximate area.
>> 
>> I fully agree with that. I think however that we need to be careful not
>> to require user interaction for that. Users already added the friend, we
>> can’t require more than maintaining the friend connections. Otherwise
>> that would be the next bottleneck.
>
> What makes you think we'd need user interaction?

That’s how I understood it.

> If there is any deconfliction it's all at the client level, not the
> node. The client can enforce any appropriate rules and decide to cast a
> vote by reinserting the block it most prefers. The user might get
> involved if there's a dispute about e.g. eliminating spammers.

I don’t understand how the client casts votes without exposing the
data it uses to decide how to vote. How do you avoid coupling private
data with the insert decisions?

> Proposal 0: Bundles
> ---
>
> If an SSK insert has maximum HTL, we pre-route it along a fixed
> pseudo-random path, which ideally doesn't change for a given source
> node. It only uses friend connections, and stays at max HTL until a
> pseudo-random termination point (always the same node on a static
> network). After that it turns into an ordinary insert.

Do I understand it correctly that this means that all inserts from a
given source initially travel along the same path?

If yes, then it sounds good.

> There are tricks we can use to compensate for churn by using shadow
> nodes so that even if we can't route it to fixed peer A, we route it to
> fixed peer B instead, but the following hop is the same (possibly using
> FOAF connections and proof of previous arrangements and whatever).

That sounds like quite a bit of additional complexity but a nice
longterm goal.

It would be great to have this for requests, too, since this would
clearly stop most correlation attacks which are used right now.

> Proposal 1: Scarce SSKs with global (per-node) scarcity
> ---
…
> Proposal 2: Scarce SSKs with per-key scarcity and multiple versions
> ---
…
> Proposal 3: Scarce SSKs with voting
> ---
…
I don’t really understand these three. They all sound like they could
leak information from the anonymous ID layer to the networking layer.

Best wishes,
Arne
-- 
Unpolitisch sein
heißt politisch sein
ohne es zu merken




Re: Some ideas on Freenet architecture

2017-09-13 Thread Matthew John Toseland


On 13/09/17 19:56, Matthew John Toseland wrote:
> 
> 
> On 13/09/17 19:51, Matthew John Toseland wrote:
>> On 12/09/17 22:34, Arne Babenhauserheide wrote:
>>>
>>> Matthew John Toseland  writes:
 But we need *something* in this approximate area.
>>>
>>> I fully agree with that. I think however that we need to be careful not
>>> to require user interaction for that. Users already added the friend, we
>>> can’t require more than maintaining the friend connections. Otherwise
>>> that would be the next bottleneck.
>>
>> What makes you think we'd need user interaction?
>>
>> If there is any deconfliction it's all at the client level, not the
>> node. The client can enforce any appropriate rules and decide to cast a
>> vote by reinserting the block it most prefers. The user might get
>> involved if there's a dispute about e.g. eliminating spammers.
>>
>> Let's try to flesh out a proposal for scarce SSKs of some kind...
>>
>> Proposal 0: Bundles
>> ---
>>
>> If an SSK insert has maximum HTL, we pre-route it along a fixed
>> pseudo-random path, which ideally doesn't change for a given source
>> node. It only uses friend connections, and stays at max HTL until a
>> pseudo-random termination point (always the same node on a static
>> network). After that it turns into an ordinary insert.
>>
>> The point here is that we provide as few samples as possible for a
>> correlation attack. If we receive a request, we know there is a fairly
>> high chance (say 10%) that it came from the peer, but receiving more
>> requests tells us nothing more. At least in the absence of churn.
>>
>> There are tricks we can use to compensate for churn by using shadow
>> nodes so that even if we can't route it to fixed peer A, we route it to
>> fixed peer B instead, but the following hop is the same (possibly using
>> FOAF connections and proof of previous arrangements and whatever).
>>
>> We might want to use this for CHKs too, but it would have negligible
>> performance impact for SSKs, and SSKs are more predictable. For CHKs
>> performance matters much more, although perhaps given the downstream
>> relaying it's not that bad.
>>
>> Proposal 1: Scarce SSKs with global (per-node) scarcity
>> ---
>>
>> SSK inserts can have a "scarce" flag.
>>
>> For SSK inserts which have this flag:
>>
>> On each hop during the pre-routing phase, we check whether we have
>> recently relayed a scarce request from that peer. If so, we terminate
>> the insert; they'll have to wait. We can make the condition more
>> generous to allow announcing multiple identities, be more flexible with
>> relaying when friend degree varies, etc.
>>
>> Once we exit the pre-routing phase, we check whether the data is
>> popular. Popular means that lots of our peers are waiting for it. If it
>> is not popular, we handle it as a normal insert, returning the existing
>> data etc. So if nobody is listening you can insert however many keys you
>> want. This implies scarcity needs to be enforced at a fairly coarse
>> granularity.
>>
>> If we have the data, then the key will not be popular, as we've already
>> served the data to everyone waiting.
>>
>> If the key is popular (and we don't have the data), we broadcast it to
>> everyone waiting. They will continue the broadcast, provided that the
>> key is popular from their point of view too.
>>
>> We should probably use a different keyspace for scarce SSKs. But the
>> basic property here is that IF the key is popular, THEN the bad guys
>> only get N shots at it, where N is the number of friends they have.
>>
>> This is adequate for announcement of new WoT identities, provided there
>> is a single global queue. For per-user announcement queues it would not
>> work well since they would not be popular enough.
>>
>> Proposal 2: Scarce SSKs with per-key scarcity and multiple versions
>> ---
>>
>> "Popular" is redefined to include the key has recently been in demand,
>> even though we have the data now.
>>
>> If the key is popular, even if we have the data, we broadcast it.
>>
>> Hence we can have many versions of the key, and they all get propagated
>> to sufficiently interested clients. Clients then need to decide what to
>> do with all the different versions themselves. There will be a time
>> limit after which the key is no longer regarded as popular, and we just
>> reject new inserts as usual, to simplify implementation and reduce the
>> amount of sensitive data we need to keep.
>>
>> The pre-routing scarce relay check is changed to be per-{peer,key}, so
>> that if we have recently relayed an insert for that specific key from
>> that specific peer, then we terminate it. Hence each key makes up a
>> separate scarcity area. This arguably has advantages for anonymity when
>> using multiple scarcity-reliant services.
>>
>> This proposal also only amplifies the attacker's efforts by a factor of
>> N, where N is the number of friends he has.

Re: Some ideas on Freenet architecture

2017-09-13 Thread Matthew John Toseland


On 13/09/17 19:51, Matthew John Toseland wrote:
> On 12/09/17 22:34, Arne Babenhauserheide wrote:
>>
>> Matthew John Toseland  writes:
>>> But we need *something* in this approximate area.
>>
>> I fully agree with that. I think however that we need to be careful not
>> to require user interaction for that. Users already added the friend, we
>> can’t require more than maintaining the friend connections. Otherwise
>> that would be the next bottleneck.
> 
> What makes you think we'd need user interaction?
> 
> If there is any deconfliction it's all at the client level, not the
> node. The client can enforce any appropriate rules and decide to cast a
> vote by reinserting the block it most prefers. The user might get
> involved if there's a dispute about e.g. eliminating spammers.
> 
> Let's try to flesh out a proposal for scarce SSKs of some kind...
> 
> Proposal 0: Bundles
> ---
> 
> If an SSK insert has maximum HTL, we pre-route it along a fixed
> pseudo-random path, which ideally doesn't change for a given source
> node. It only uses friend connections, and stays at max HTL until a
> pseudo-random termination point (always the same node on a static
> network). After that it turns into an ordinary insert.
> 
> The point here is that we provide as few samples as possible for a
> correlation attack. If we receive a request, we know there is a fairly
> high chance (say 10%) that it came from the peer, but receiving more
> requests tells us nothing more. At least in the absence of churn.
> 
> There are tricks we can use to compensate for churn by using shadow
> nodes so that even if we can't route it to fixed peer A, we route it to
> fixed peer B instead, but the following hop is the same (possibly using
> FOAF connections and proof of previous arrangements and whatever).
> 
> We might want to use this for CHKs too, but it would have negligible
> performance impact for SSKs, and SSKs are more predictable. For CHKs
> performance matters much more, although perhaps given the downstream
> relaying it's not that bad.
> 
> Proposal 1: Scarce SSKs with global (per-node) scarcity
> ---
> 
> SSK inserts can have a "scarce" flag.
> 
> For SSK inserts which have this flag:
> 
> On each hop during the pre-routing phase, we check whether we have
> recently relayed a scarce request from that peer. If so, we terminate
> the insert; they'll have to wait. We can make the condition more
> generous to allow announcing multiple identities, be more flexible with
> relaying when friend degree varies, etc.
> 
> Once we exit the pre-routing phase, we check whether the data is
> popular. Popular means that lots of our peers are waiting for it. If it
> is not popular, we handle it as a normal insert, returning the existing
> data etc. So if nobody is listening you can insert however many keys you
> want. This implies scarcity needs to be enforced at a fairly coarse
> granularity.
> 
> If we have the data, then the key will not be popular, as we've already
> served the data to everyone waiting.
> 
> If the key is popular (and we don't have the data), we broadcast it to
> everyone waiting. They will continue the broadcast, provided that the
> key is popular from their point of view too.
> 
> We should probably use a different keyspace for scarce SSKs. But the
> basic property here is that IF the key is popular, THEN the bad guys
> only get N shots at it, where N is the number of friends they have.
> 
> This is adequate for announcement of new WoT identities, provided there
> is a single global queue. For per-user announcement queues it would not
> work well since they would not be popular enough.
> 
> Proposal 2: Scarce SSKs with per-key scarcity and multiple versions
> ---
> 
> "Popular" is redefined to include the key has recently been in demand,
> even though we have the data now.
> 
> If the key is popular, even if we have the data, we broadcast it.
> 
> Hence we can have many versions of the key, and they all get propagated
> to sufficiently interested clients. Clients then need to decide what to
> do with all the different versions themselves. There will be a time
> limit after which the key is no longer regarded as popular, and we just
> reject new inserts as usual, to simplify implementation and reduce the
> amount of sensitive data we need to keep.
> 
> The pre-routing scarce relay check is changed to be per-{peer,key}, so
> that if we have recently relayed an insert for that specific key from
> that specific peer, then we terminate it. Hence each key makes up a
> separate scarcity area. This arguably has advantages for anonymity when
> using multiple scarcity-reliant services.
> 
> This proposal also only amplifies the attacker's efforts by a factor of
> N, where N is the number of friends he has.
> 
> I don't think we can separate the per-key-scarcity property here from
> the multiple-versions property. But both are potentially useful.

Re: Some ideas on Freenet architecture

2017-09-13 Thread Matthew John Toseland
On 12/09/17 22:34, Arne Babenhauserheide wrote:
> 
> Matthew John Toseland  writes:
>> But we need *something* in this approximate area.
> 
> I fully agree with that. I think however that we need to be careful not
> to require user interaction for that. Users already added the friend, we
> can’t require more than maintaining the friend connections. Otherwise
> that would be the next bottleneck.

What makes you think we'd need user interaction?

If there is any deconfliction it's all at the client level, not the
node. The client can enforce any appropriate rules and decide to cast a
vote by reinserting the block it most prefers. The user might get
involved if there's a dispute about e.g. eliminating spammers.

Let's try to flesh out a proposal for scarce SSKs of some kind...

Proposal 0: Bundles
---

If an SSK insert has maximum HTL, we pre-route it along a fixed
pseudo-random path, which ideally doesn't change for a given source
node. It only uses friend connections, and stays at max HTL until a
pseudo-random termination point (always the same node on a static
network). After that it turns into an ordinary insert.
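
A minimal sketch of how a node could make those choices deterministically,
so that the path stays fixed as long as its peer list does, using a
node-local secret (hypothetical scheme; shadow nodes and churn handling
are omitted, and the termination probability is an assumed value):

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.List;

    public class BundlePreRouter {
        private final byte[] localSecret;    // persistent node-local secret, assumed
        private final List<String> friends;  // darknet peers, in a stable order

        public BundlePreRouter(byte[] localSecret, List<String> friends) {
            this.localSecret = localSecret;
            this.friends = friends;
        }

        private long mix(String input) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(localSecret);
            md.update(input.getBytes(StandardCharsets.UTF_8));
            byte[] h = md.digest();
            long v = 0;
            for (int i = 0; i < 8; i++) v = (v << 8) | (h[i] & 0xFF);
            return v;
        }

        // Always the same next hop for the same previous hop (while peers are stable).
        public String nextHop(String previousHop) throws Exception {
            return friends.get((int) Math.floorMod(mix(previousHop), (long) friends.size()));
        }

        // Pseudo-random but fixed termination point: ~1 in 8 per hop, assumed.
        public boolean terminateHere(String previousHop) throws Exception {
            return Math.floorMod(mix("terminate:" + previousHop), 8L) == 0;
        }
    }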

The point here is that we provide as few samples as possible for a
correlation attack. If we receive a request, we know there is a fairly
high chance (say 10%) that it came from the peer, but receiving more
requests tells us nothing more. At least in the absence of churn.

There are tricks we can use to compensate for churn by using shadow
nodes so that even if we can't route it to fixed peer A, we route it to
fixed peer B instead, but the following hop is the same (possibly using
FOAF connections and proof of previous arrangements and whatever).

We might want to use this for CHKs too, but it would have negligible
performance impact for SSKs, and SSKs are more predictable. For CHKs
performance matters much more, although perhaps given the downstream
relaying it's not that bad.

Proposal 1: Scarce SSKs with global (per-node) scarcity
---

SSK inserts can have a "scarce" flag.

For SSK inserts which have this flag:

On each hop during the pre-routing phase, we check whether we have
recently relayed a scarce request from that peer. If so, we terminate
the insert; they'll have to wait. We can make the condition more
generous to allow announcing multiple identities, be more flexible with
relaying when friend degree varies, etc.
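
As a sketch, that per-hop check could be as simple as the following
(hypothetical names; the window is an assumed value). For Proposal 2
below, the map would be keyed by the pair (peer, key) rather than by
the peer alone:

    import java.util.HashMap;
    import java.util.Map;

    public class ScarceRelayCheck {
        // Assumed: relay at most one scarce insert per peer per hour.
        private static final long WINDOW_MS = 60L * 60 * 1000;
        private final Map<String, Long> lastScarceRelayFromPeer = new HashMap<>();

        // Returns true if we may relay this scarce insert, false to terminate it.
        public synchronized boolean mayRelay(String fromPeer, long now) {
            Long last = lastScarceRelayFromPeer.get(fromPeer);
            if (last != null && now - last < WINDOW_MS) return false; // they'll have to wait
            lastScarceRelayFromPeer.put(fromPeer, now);
            return true;
        }
    }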

Once we exit the pre-routing phase, we check whether the data is
popular. Popular means that lots of our peers are waiting for it. If it
is not popular, we handle it as a normal insert, returning the existing
data etc. So if nobody is listening you can insert however many keys you
want. This implies scarcity needs to be enforced at a fairly coarse
granularity.

If we have the data, then the key will not be popular, as we've already
served the data to everyone waiting.

If the key is popular (and we don't have the data), we broadcast it to
everyone waiting. They will continue the broadcast, provided that the
key is popular from their point of view too.

We should probably use a different keyspace for scarce SSKs. But the
basic property here is that IF the key is popular, THEN the bad guys
only get N shots at it, where N is the number of friends they have.

This is adequate for announcement of new WoT identities, provided there
is a single global queue. For per-user announcement queues it would not
work well since they would not be popular enough.

Proposal 2: Scarce SSKs with per-key scarcity and multiple versions
---

"Popular" is redefined to include the key has recently been in demand,
even though we have the data now.

If the key is popular, even if we have the data, we broadcast it.

Hence we can have many versions of the key, and they all get propagated
to sufficiently interested clients. Clients then need to decide what to
do with all the different versions themselves. There will be a time
limit after which the key is no longer regarded as popular, and we just
reject new inserts as usual, to simplify implementation and reduce the
amount of sensitive data we need to keep.

The pre-routing scarce relay check is changed to be per-{peer,key}, so
that if we have recently relayed an insert for that specific key from
that specific peer, then we terminate it. Hence each key makes up a
separate scarcity area. This arguably has advantages for anonymity when
using multiple scarcity-reliant services.

This proposal also only amplifies the attacker's efforts by a factor of
N, where N is the number of friends he has.

I don't think we can separate the per-key-scarcity property here from
the multiple-versions property. But both are potentially useful.

Proposal 3: Scarce SSKs with voting
---

We keep track of how many times each version of the popular scarce key
has been inserted (that is, how many peers have 

Re: Some ideas on Freenet architecture

2017-09-12 Thread Arne Babenhauserheide

Matthew John Toseland  writes:
> But we need *something* in this approximate area.

I fully agree with that. I think however that we need to be careful not
to require user interaction for that. Users already added the friend, we
can’t require more than maintaining the friend connections. Otherwise
that would be the next bottleneck.

Best wishes,
Arne
-- 
Unpolitisch sein
heißt politisch sein
ohne es zu merken




Re: Some ideas on Freenet architecture

2017-09-12 Thread Matthew John Toseland
On 11/09/17 22:48, Arne Babenhauserheide wrote:
> 
> Matthew John Toseland  writes:
>> Applied to spam, for example, we could justify banning somebody by
>> showing some of his messages.
>>
>> Does it still allow for spam amplification? Probably, if we immediately
>> propagate inserts to everywhere. But maybe we can resolve the fight
>> within a few small random parts of the network. And the fact that you
>> can only vote once per darknet connection on any given key severely
>> limits the mischief you can do... so maybe it's manageable even with
>> full propagation.
> 
> How would we avoid having to interact with all inserts?

Simple answer: we don't. We propagate every insert to everyone
listening, for a sufficiently popular key.

This is viable on a darknet, because we assume the attacker has a
limited number of edges.

Possibly better, more complex answer: Each insert is routed along a
fixed pseudo-random route. If there is a conflict, we make the newly
inserted data available within a limited range of where it ends up. If
the users/clients like the data, they reinsert it, and it propagates
further. Once there are enough inserts the winning data goes everywhere.
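
A sketch of the tallying this implies: count at most one (re)insert per
darknet edge per version, and widen propagation once a version has
enough support (the structure and threshold are assumed, not worked out):

    import java.util.HashMap;
    import java.util.Map;

    public class DisputeTally {
        private static final int BROADCAST_THRESHOLD = 10; // assumed
        // versionHash -> number of distinct darknet edges a (re)insert arrived over;
        // the caller must make sure each edge is only counted once per version.
        private final Map<String, Integer> insertsSeen = new HashMap<>();

        // Record one insert of this version; true once it has enough support
        // to be propagated everywhere.
        public boolean recordInsertAndShouldBroadcast(String versionHash) {
            int count = insertsSeen.merge(versionHash, 1, Integer::sum);
            return count >= BROADCAST_THRESHOLD;
        }
    }
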
> 
>>> Because it's our only source of scarcity. The whole objective of this
>>> part of the proposal is to create spam-proof, adequately-scalable
>>> distributed keyword search. Or distributed data structures of whatever
>>> other kind, where we can maintain the structure in a collaborative
>>> manner, obtaining a consensus, without having to poll every outbox and
>>> every fork.
> 
> I agree that our darknet structure is our only real source of scarcity
> (but only in the immediate region: One malicious darknet peer can
> introduce an arbitrary number of additional distant peers).
> 
> But I’m wary of mixing the darknet structure too much with content. For
> scalable keyword search we could already use the WoT and merge
> information from identities — with efficient transfer, because the data
> will be widely cached.

I'm skeptical that this can work well:
1) It may or may not be possible to make it scale adequately.
2) It's hard to maintain efficient distributed data structures such as
search indexes.
> 
> What I’d be more interested in is to see whether we can use darknet
> connections with something like blinded tokens to allow introducing WoT
> IDs without CAPTCHAs while keeping the WoT IDs separate from the darknet
> structure. I’d like to be able to offer a friend who installs Freenet
> something which allows him or her to introduce a few WoT IDs.

Exactly. Even if WoT works, it depends on some external source of
scarcity. So we need some way to use darknet scarcity for introductions
(to prevent DoS/spam), without giving away too much information. But as
far as I remember there hasn't been an implementable scarce keys
proposal, just a lot of hand waving. If you have one then by all means
make it.

The above is a slightly different approach, which may have some
advantages for particular applications. But we need *something* in this
approximate area.
> 
> Best wishes,
> Arne





Re: Some ideas on Freenet architecture

2017-09-11 Thread Arne Babenhauserheide

Matthew John Toseland  writes:
> Applied to spam, for example, we could justify banning somebody by
> showing some of his messages.
>
> Does it still allow for spam amplification? Probably, if we immediately
> propagate inserts to everywhere. But maybe we can resolve the fight
> within a few small random parts of the network. And the fact that you
> can only vote once per darknet connection on any given key severely
> limits the mischief you can do... so maybe it's manageable even with
> full propagation.

How would we avoid having to interact with all inserts?

>> Because it's our only source of scarcity. The whole objective of this
>> part of the proposal is to create spam-proof, adequately-scalable
>> distributed keyword search. Or distributed data structures of whatever
>> other kind, where we can maintain the structure in a collaborative
>> manner, obtaining a consensus, without having to poll every outbox and
>> every fork.

I agree that our darknet structure is our only real source of scarcity
(but only in the immediate region: One malicious darknet peer can
introduce an arbitrary number of additional distant peers).

But I’m wary of mixing the darknet structure too much with content. For
scalable keyword search we could already use the WoT and merge
information from identities — with efficient transfer, because the data
will be widely cached.

What I’d be more interested in is to see whether we can use darknet
connections with something like blinded tokens to allow introducing WoT
IDs without CAPTCHAs while keeping the WoT IDs separate from the darknet
structure. I’d like to be able to offer a friend who installs Freenet
something which allows him or her to introduce a few WoT IDs.
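
For what it's worth, the textbook building block for such blinded tokens
is Chaum's RSA blind signature: the friend's node signs a blinded hash of
the new identity without ever learning which identity it vouched for, and
the unblinded signature can later be presented as an introduction token.
A self-contained sketch (illustrative only, no padding or other production
hardening, and not a proposed Freenet/WoT design):

    import java.math.BigInteger;
    import java.nio.charset.StandardCharsets;
    import java.security.*;
    import java.security.interfaces.RSAPrivateKey;
    import java.security.interfaces.RSAPublicKey;

    public class BlindToken {
        public static void main(String[] args) throws Exception {
            // Issuer (the friend's node) has an ordinary RSA key pair.
            KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
            gen.initialize(2048);
            KeyPair kp = gen.generateKeyPair();
            RSAPublicKey pub = (RSAPublicKey) kp.getPublic();
            RSAPrivateKey priv = (RSAPrivateKey) kp.getPrivate();
            BigInteger n = pub.getModulus();
            BigInteger e = pub.getPublicExponent();
            BigInteger d = priv.getPrivateExponent();

            // Requester: hash the new WoT identity and blind it with a random r.
            byte[] idHash = MessageDigest.getInstance("SHA-256")
                    .digest("SSK@newWoTIdentity".getBytes(StandardCharsets.UTF_8));
            BigInteger m = new BigInteger(1, idHash);
            SecureRandom rnd = new SecureRandom();
            BigInteger r;
            do { r = new BigInteger(n.bitLength() - 1, rnd); } while (!r.gcd(n).equals(BigInteger.ONE));
            BigInteger blinded = m.multiply(r.modPow(e, n)).mod(n);

            // Issuer: signs the blinded value without seeing m itself.
            BigInteger blindSig = blinded.modPow(d, n);

            // Requester: unblind, yielding an ordinary RSA signature on m.
            BigInteger sig = blindSig.multiply(r.modInverse(n)).mod(n);

            // Anyone can verify the token against the issuer's public key.
            System.out.println("token valid: " + sig.modPow(e, n).equals(m));
        }
    }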

Best wishes,
Arne
-- 
Unpolitisch sein
heißt politisch sein
ohne es zu merken




Re: Some ideas on Freenet architecture

2017-09-03 Thread Matthew John Toseland
On 03/09/17 21:01, Matthew John Toseland wrote:
> On 03/09/17 20:53, Arne Babenhauserheide wrote:
>> Hi toad,
>>
>> Matthew John Toseland  writes:
>>> Thoughts? I'm not a dev any more, but thinking about Bitcoin and other
>>> stuff maybe there are some ways forward that we've missed...
>>>
>>> Dispute resolution, spam and distributed data structures
>>> 

I'm actually more interested in your view on the second part, but this
is worth discussing too...
>>>
>>> A key is disputable if:
>> …
>>> For any disputable key, any darknet node can vote at most once.
>>
>> Can you explain why we would want to bind darknet nodes (=real life
>> identities) to voting about content? Currently this is done in the truly
>> pseudonymous inner layer.

To be clear, a disputable SSK vote is pretty much the same as an insert:

It is routed from node to node according to a probabilistically
decremented HTL. It doesn't take effect (it doesn't check the datastore)
until it has travelled a reasonable number of hops, for security reasons.
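
For readers who don't remember the details, probabilistic HTL decrementing
looks roughly like the sketch below (the probabilities are assumed for
illustration, not fred's exact values): at maximum HTL we sometimes don't
decrement, so a peer cannot tell whether its neighbour originated the
request, and at HTL 1 we sometimes keep forwarding, so the last hop is
also uncertain.

    import java.util.Random;

    public class HopsToLive {
        public static final int MAX_HTL = 18;
        private static final double DECREMENT_AT_MAX = 0.5;  // assumed
        private static final double TERMINATE_AT_MIN = 0.25; // assumed
        private final Random random = new Random();

        // Returns the HTL to forward with; 0 means stop routing.
        public int nextHtl(int htl) {
            if (htl >= MAX_HTL) {
                return random.nextDouble() < DECREMENT_AT_MAX ? MAX_HTL - 1 : MAX_HTL;
            }
            if (htl <= 1) {
                return random.nextDouble() < TERMINATE_AT_MIN ? 0 : 1;
            }
            return htl - 1;
        }
    }
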

In fact, it might *be* an insert. Then the client fetches the SSK, and
when it sees something it approves of, it inserts the blob. If it finds
something it doesn't approve of (e.g. violation of the consistency
rules), it doesn't insert, or it creates an alternative and inserts that.

But we probably want to change the request/ULPR mechanic too, so that
the client will see both versions if there is a dispute going on nearby.
>>
>> What is the gain from disputable keys?

Answered below. But this also gets us some of the advantages of PSKs,
without necessarily needing PSKs. Or if we do have PSKs, it allows us
much more flexible verification. So we can maintain arbitrary data
structures while keeping DoS attacks manageable. In particular when
managing a shared tree structure everyone needs to keep to the rules.

Applied to spam, for example, we could justify banning somebody by
showing some of his messages.

Does it still allow for spam amplification? Probably, if we immediately
propagate inserts to everywhere. But maybe we can resolve the fight
within a few small random parts of the network. And the fact that you
can only vote once per darknet connection on any given key severely
limits the mischief you can do... so maybe it's manageable even with
full propagation.
> 
> Because it's our only source of scarcity. The whole objective of this
> part of the proposal is to create spam-proof, adequately-scalable
> distributed keyword search. Or distributed data structures of whatever
> other kind, where we can maintain the structure in a collaborative
> manner, obtaining a consensus, without having to poll every outbox and
> every fork.
> 
> IMHO this is one of the remaining fundamental research questions for
> Freenet.
> 
> It's that or a block chain. And we don't want a block chain.
> 
> Or stick to the current approach of hoping that outbox polling scales
> well enough. But even if it works for chat I doubt it will work well for
> search. I believe my proposal will scale further even for chat.
> 
> And I'm not binding them - it's no more traceable than any other key AFAICS?
>>
>> Best wishes,
>> Arne
>>
> 





Re: Some ideas on Freenet architecture

2017-09-03 Thread Matthew John Toseland
On 03/09/17 20:53, Arne Babenhauserheide wrote:
> Hi toad,
> 
> Matthew John Toseland  writes:
>> Thoughts? I'm not a dev any more, but thinking about Bitcoin and other
>> stuff maybe there are some ways forward that we've missed...
>>
>> Dispute resolution, spam and distributed data structures
>> 
>>
>> A key is disputable if:
> …
>> For any disputable key, any darknet node can vote at most once.
> 
> Can you explain why we would want to bind darknet nodes (=real life
> identities) to voting about content? Currently this is done in the truly
> pseudonymous inner layer.
> 
> What is the gain from disputable keys?

Because it's our only source of scarcity. The whole objective of this
part of the proposal is to create spam-proof, adequately-scalable
distributed keyword search. Or distributed data structures of whatever
other kind, where we can maintain the structure in a collaborative
manner, obtaining a consensus, without having to poll every outbox and
every fork.

IMHO this is one of the remaining fundamental research questions for
Freenet.

It's that or a block chain. And we don't want a block chain.

Or stick to the current approach of hoping that outbox polling scales
well enough. But even if it works for chat I doubt it will work well for
search. I believe my proposal will scale further even for chat.

And I'm not binding them - it's no more traceable than any other key AFAICS?
> 
> Best wishes,
> Arne
>





Re: Some ideas on Freenet architecture

2017-09-03 Thread Arne Babenhauserheide
Hi toad,

Matthew John Toseland  writes:
> Thoughts? I'm not a dev any more, but thinking about Bitcoin and other
> stuff maybe there are some ways forward that we've missed...
>
> Dispute resolution, spam and distributed data structures
> 
>
> A key is disputable if:
…
> For any disputable key, any darknet node can vote at most once.

Can you explain why we would want to bind darknet nodes (=real life
identities) to voting about content? Currently this is done in the truly
pseudonymous inner layer.

What is the gain from disputable keys?

Best wishes,
Arne
-- 
Unpolitisch sein
heißt politisch sein
ohne es zu merken




Re: Some ideas on Freenet architecture

2017-09-03 Thread Matthew John Toseland
On 03/09/17 18:31, Matthew John Toseland wrote:
> Thoughts? I'm not a dev any more, but thinking about Bitcoin and other
> stuff maybe there are some ways forward that we've missed...
> 
> Dispute resolution, spam and distributed data structures
> 
> 
> A key is disputable if:
> 1) It is an SSK or similar signature-based type where multiple valid
> values are possible, and
> 2) It is popular enough for ULPRs to kick in, i.e. most nodes have
> copies of and ULPR subscriptions to this key.
> 
> For any disputable key, any darknet node can vote at most once. Votes
> are relayed in a fixed path across the darknet, so that only one vote
> passes a given directed edge for a given key. Ideally, votes are blind
> while they are being routed; the previous iteration of the key publishes
> a public key and then later broadcasts the private key.
> 
> It follows that votes are pseudonymous (i.e. can be correlated), and
> heterogeneous node degree may reduce some people's voting power. However
> overall this should be Good Enough (tm).
> 
> Disputable keys are used for the root of big distributed data structures,
> such as the set of all valid posters on WoT, a keyword search index, a
> PKI indexed by any suitable identifier etc. (A block chain is of course
> another way to do this)
> 
> We can enforce any set of invariants on such data structures via the
> voting system, as well as incorporating explicit user input. For example:
> 
> The agreed root is the root of a hash tree. But we don't need to
> traverse the tree:
> - The index is divided by hash length into blocks of a large enough size
> that the individual blocks won't fall out.
> - The root includes the minimum and maximum USK offsets for all blocks
> in the index. It includes the base hashes for this version and the last
> few versions of the root.
> - As long as this range is small, we can fetch a block directly by its
> hash. The block will contain proof that it is included in the hash tree.
> This proof includes the actual CHK nodes as blobs, which will be
> randomly inserted/healed.
> - Each node also includes version range information.
> - All updates to the tree must maintain the invariant that the gap
> between the minimum and maximum value is reasonable. We can enforce this
> using the voting system above. In the absence of detectable attacks,
> hashing should tend to ensure this anyway.
> 
> Ideally there would be only one such tree, so that we could use classic
> timestamping service tricks such as publishing the current hash etc.
> 
> Thoughts on network architecture
> 
> 
> We can provide strong anonymity, and reasonable robustness guarantees on
> a darknet. We can't achieve this on an opennet. We still need some sort
> of guest mode, and we need strong incentives to move to the proper
> darknet. So what should Freenet look like?
> 
> Core darknet
> - High uptime, reasonable bandwidth.
> - Many F2F links.
> - Eligible to participate in dispute resolution as above.
> - Stores data permanently, i.e. has a store not just a cache.
> - Some tunneling system similar to PISCES for privacy.
> - Single coherent network.
> 
> Non-core darknet
> - Similar to darknet, but lower performance.
> - May be penalised by load management.
> - May have less access to dispute resolution because of having very few
> links.
> - May still use tunneling.
> - In extreme cases (e.g. 1 link), lower security / need to trust peers more.
> 
> Gateway nodes
> - Hybrid, connect to both opennet and darknet nodes.
> - Eligible to become a seednode; responds to announcements.
> - May also qualify as core darknet nodes.
> - Must be able to prove connectivity to the Core Darknet.
> - Requests entering from guests are treated as lower priority than
> intra-darknet traffic, and reset HTL when they become true darknet traffic.
> - Requests are never routed from darknet to opennet.
> 
> Guest/opennet nodes
> - Cache only. No store.
> - User requests a set of gateways from a seed service.
> - If true transient, entirely reliant on gateway. Requests are sent
> directly to the gateway, possibly using tunneling. Tunneling requires
> trusting the gateway, although if there are multiple gateways we can
> compare them, especially if we keep a versioned network map using the
> dispute resolution mechanism.

Another detail: A network map is not used at all in darknet tunnel
setup. It's not updated every time a link goes down. It may not even be
updated when a node is added to the core darknet. It's just a trick to
make it easier to detect when your gateway nodes are on different
networks i.e. one of them is dishonest.

> - If semi-permanent, can announce to gateways and get a bunch of opennet
> nodes to talk to. Requests will be routed through local opennet for some
> limited HTL, and then sent to gateway.
> - Limited to no access to dispute resolution.
> 
> Seeds:
> - Any gateway may be a seed, and gateways' identities