On 2019-07-21 08:11, Arne Babenhauserheide wrote:
> yanma...@cock.li writes:
>> On 2019-07-20 07:59, Arne Babenhauserheide wrote:
>>> yanma...@cock.li writes:
>>>> Frontends would be disposable and dime-a-dozen
>>> To get to this situation, you must make it very, very easy to host
>>> them. This might be a major endeavor (but one which would benefit
>>> Freenet a lot).
>>> If you want to make it dime-a-dozen, you need to make it easy to
>>> install Freenet with FMS already set up.
Aren't Freenet and FMS already trivial to install programmatically?
> If you have actual IDs, you must provide a secure way to log in — not
> secure against the server, but secure against other users
> impersonating you.
> Visibility is also based on the ID; otherwise you don’t get real spam
> defense (you’d have to rely on the site hoster to manage spam for
> you).
Well, doesn't the cookie mechanism I described do this? Everyone is
anonymous, but personally I think that goes under "it's not a bug, it's
a feature". There's no need for an actual login field, although you
could give them a link that sets the cookie, which they can keep as a
bookmark if they want.
If you need reliable names, you could implement tripcode functionality.
For a bit more security, secure tripcodes. This also simplifies
implementation. On the other hand, it's ugly. But then again, Worse is
Better, and if you want your software to spread you should make it as
simple as possible, like a virus.
https://en.wikipedia.org/wiki/Imageboard#Tripcodes
Recap: when posting, you enter a name and a secret, separated by one or
two pound symbols. For instance, yanmaani#NDl5tlZY. The server then
hashes whatever comes after the pound symbol(s) and replaces it with
its (possibly truncated) hash, so an example output would be
yanmaani#Ri2dTa5T. If two pound symbols are used, a secure tripcode is
computed: the server appends a secret salt before hashing, which makes
offline brute-force impossible. That has the downside of not being
portable, of course.
Maybe you'd want to replace the pound symbol by something else, but
that's an implementation detail.
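To make the recap concrete, here is a minimal sketch of that scheme. It is not the classic imageboard DES-crypt algorithm; it just uses truncated SHA-256, and SERVER_SALT is a hypothetical server-side secret:

```python
import base64
import hashlib

SERVER_SALT = b"example-secret-salt"  # hypothetical server-side secret

def tripcode(field: str) -> str:
    """Split 'name#secret' or 'name##secret' and replace the secret with
    a truncated hash. Plain SHA-256 here, not the classic DES-crypt
    scheme used by imageboards; purely illustrative."""
    if "##" in field:                   # secure tripcode: server salt mixed in
        name, secret = field.split("##", 1)
        digest = hashlib.sha256(secret.encode() + SERVER_SALT).digest()
    elif "#" in field:                  # ordinary tripcode: computable offline
        name, secret = field.split("#", 1)
        digest = hashlib.sha256(secret.encode()).digest()
    else:
        return field                    # no tripcode requested
    # Truncate to 8 characters, like the "yanmaani#Ri2dTa5T" example.
    return name + "#" + base64.b64encode(digest)[:8].decode()
```

The same secret always maps to the same code, so the displayed name is stable without the server storing anything per user; the salted ("##") variant can't be brute-forced offline but isn't portable across servers.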
A user who is really serious about becoming famous should either
GPG-sign their messages or download Freenet. I don't think "secure
identities" are useful for much more than spam countermeasures, and for
that purpose tripcodes are more than enough.
Anyway, the upside of predictable names is that you could separate the
"announce" and "post" parts. Then posting could require for instance one
captcha, and announcing ten. And there would be no need for the server
to do any book-keeping.
So the workflow for a new user would be
a1) visit gateway.com/announce
a2) type in their name (optional) and tripcode
a3) solve 10 captchas
b1) visit gateway.com/board/thread
b2) make their post, type in tripcode
b3) solve 1 captcha, for rudimentary flood/tripcode bruteforce control
Steps B can be done before steps A, but a minor UX issue is users doing
steps B and only B. There could be a warning if the server believes that
to be the case, though.
Kind of like on this mailing list: first you solve a captcha to get your
e-mail added to the list, and then you can post. Except for the part
where you can't first post and then solve captchas.
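The point about no book-keeping can be sketched as two stateless handlers: because the identity is derived as a pure function of name and tripcode secret, announce and post need no shared database, and steps B can happen before steps A. All names and the captcha counts are taken from the workflow above; identity_seed is a hypothetical derivation, not the real FMS/WoT one:

```python
import hashlib

ANNOUNCE_CAPTCHAS = 10  # from step a3 above
POST_CAPTCHAS = 1       # from step b3 above

def identity_seed(name: str, secret: str) -> str:
    """Hypothetical stand-in for deriving an FMS/WoT identity from the
    name and tripcode secret. A pure function, so the announce and post
    endpoints need no shared read-write storage."""
    return hashlib.sha256(f"{name}:{secret}".encode()).hexdigest()

def handle_announce(name, secret, captchas_solved):
    """gateway.com/announce: heavier captcha cost, introduces the identity."""
    if captchas_solved < ANNOUNCE_CAPTCHAS:
        raise PermissionError("announcing requires 10 captchas")
    return ("announce", identity_seed(name, secret))

def handle_post(name, secret, body, captchas_solved):
    """gateway.com/board/thread: one captcha for flood/bruteforce control."""
    if captchas_solved < POST_CAPTCHAS:
        raise PermissionError("posting requires 1 captcha")
    return ("post", identity_seed(name, secret), body)
```

Since both handlers recompute the same seed from the same (name, secret) pair, a post made before the announcement still ends up attached to the right identity once the announcement happens.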
Super ugly but probably decently functioning hybrid approach: posting,
if done from a clean IP, gets a very low trust from a seed node,
provided the IP hasn't been used in the past week or whatever. This
would make it possible for "normal users" (e.g. casuals dropping by who
just want to try it out) to post without much hassle, while still making
it possible for Tor users to post. And the network could handle
blacklisting that seed node if botnets are raising too much hell.
Of course now the implementation is much harder, since you need to pass
the IP on to the onion service without terminating SSL. So I would
rather not go down that route.
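For concreteness only (since I'd rather not go down that route), the gating logic of that hybrid might look like this sketch; the window and trust value are made-up tuning knobs, and the real decision would live in a seed node:

```python
import time

SEEN_WINDOW = 7 * 24 * 3600  # "the past week or whatever"
INITIAL_TRUST = 5            # "very low trust"; value is a guess

class CleanIPSeeder:
    """Hybrid approach sketched above: a seed node grants a very low
    initial trust to posts from IPs it has not seen within the window.
    Tor users get no IP-based trust and use the captcha path instead."""
    def __init__(self):
        self.last_seen = {}

    def trust_for(self, ip: str, is_tor_exit: bool, now: float = None) -> int:
        now = time.time() if now is None else now
        if is_tor_exit:
            return 0                 # no freebie for Tor; solve captchas
        prev = self.last_seen.get(ip)
        self.last_seen[ip] = now
        if prev is not None and now - prev < SEEN_WINDOW:
            return 0                 # IP reused too recently
        return INITIAL_TRUST
```

The network-level safety valve is as described: if botnets abuse a seed node running this, the rest of the network blacklists that node.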
The network could handle blacklisting too-old identities that suddenly
resurface - an attacker stands to gain nothing more from solving 20
captchas and getting a new identity from the service than from solving
20 captchas and getting a new identity with its own key, so I wouldn't
imagine it being that big a problem.
Another way could be for posts without well-announced identities to go
into some kind of "pool" where you can vote on them. Or you could just
have low standards for initial identity verification.
Probably, this can be fine-tuned on the fly as long as you get the
concept working. So I wouldn't bike-shed it too much.
>> What I'm curious about is how the identity generation should
>> proceed. In particular, can the WoT have multiple identities sharing
>> the same key?
> No, and that wouldn’t be a good idea, since they could switch to the
> other ID if they’d manage to trick the server into using another
> public name.
But there are no issues if the server is well-coded? Or does WoT point
blank not have the ability to deal with multiple identities sharing the
same key?
If that's the case, could you change the public key exponent for the
same result? That's how the custom .onion name brute-forcers do it. Then
you don't need any persistent read-write storage, since you can just
pick the exponent from an array of primes keyed on hash(name + secret
salt).
Downside is of course the limited range of public key exponents, and the
extreme complexity it brings, so I would rather not do it.
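Despite the complexity, the keying idea itself is simple. A sketch, with a hypothetical salt and a toy exponent table (a real deployment would need a much larger table and exponents coprime to phi(n), which is one reason I'd rather avoid it):

```python
import hashlib

SECRET_SALT = b"hypothetical-server-salt"

# Toy table of odd primes usable as RSA public exponents. Real RSA
# requires gcd(e, phi(n)) == 1, and the usable range is limited, which
# is the downside mentioned above. Purely illustrative.
EXPONENTS = [3, 5, 17, 257, 65537, 65539, 65543, 65551, 65557, 65563]

def exponent_for(name: str) -> int:
    """Pick a public exponent deterministically from hash(name + salt),
    the way custom .onion brute-forcers vary e over a fixed key pair.
    No persistent read-write storage is needed: the same name always
    maps to the same exponent."""
    digest = hashlib.sha256(name.encode() + SECRET_SALT).digest()
    index = int.from_bytes(digest, "big") % len(EXPONENTS)
    return EXPONENTS[index]
```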
What limit in WoT prevents supporting multiple identities with
different human-readable names but the same key?
> If you block open proxies, then you exclude all tor users, but you
> don’t get real security, because botnets are horribly cheap.
Not block them, but have them solve captchas. In this regard, "real
security" is impossible, since you can likewise pay to get captchas
solved.
How much does a botnet cost, then? A captcha costs $0.001 to solve, so
if you pay $0.02/IP that's more or less equivalent to solving 20
captchas.
Same goes for spam filtering - for sure it can be evaded, but it
filters out some types of junk, and as long as it doesn't have too high
a false-positive rate, it reduces the total amount of junk, even if not
by 100%. The more layers, the better, in theory.
>> Doesn't FMS already limit posting rate on the client side?
> Not that I know of. It delays messages to provide more anonymity.
Then it could be done externally: any user that posts too many messages
too rapidly gets a poor rating.
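Such an external rate watcher could be as simple as a sliding window per identity; the window length, post limit, and rating values here are guesses, not anything FMS defines:

```python
import time
from collections import defaultdict, deque

WINDOW = 3600    # seconds; assumed tuning value
MAX_POSTS = 20   # posts allowed per window before the rating drops

class RateRater:
    """External rate watcher: FMS itself doesn't limit posting rate (it
    only delays messages for anonymity), so a moderator bot can give a
    poor rating to identities posting too many messages too rapidly."""
    def __init__(self):
        self.posts = defaultdict(deque)

    def record(self, identity: str, now: float = None) -> int:
        """Record a post; return a rating (100 = fine, 0 = flooding)."""
        now = time.time() if now is None else now
        q = self.posts[identity]
        q.append(now)
        while q and q[0] < now - WINDOW:   # drop posts outside the window
            q.popleft()
        return 100 if len(q) <= MAX_POSTS else 0
```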
> The server always has non-public information about the users. The
> question is just how to represent it.
Not in my approach, as far as I can see. Well, it has the secret salts,
the exact posting times, and the tripcode secrets, but nothing more.
None of those should be considered very secret, unlike the long-term
linkability of a user who thought he or she was anonymous.
>> Do you reckon they would blacklist the main ID's trust list, because
>> it has too many children which are rotten apples?
> Yes. It would then be the same as those public IDs (where the secret
> key was published intentionally) which get blocked after abuse.
So how should this be dealt with? If the identity cost is the same as
for the new seed nodes, would it still have this issue?
Extremely hacky solution, which I'd want your input on before even
thinking about implementing: the announcement page just passes captchas
through to the seed nodes, without ever holding anything like a master
seed key. Then it's not the front-end's problem anymore. It would also
make front-ends easier to start, since they wouldn't need to first get
a trusted identity for bootstrapping purposes.
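The pass-through shape would be roughly this; the three callables are hypothetical stand-ins for the actual transports (the real ones would speak FCP to the seed nodes and HTTP to the user), and the front-end holds no key material of its own:

```python
def relay_announcement(seed_nodes, fetch_puzzle, ask_user, forward_solution):
    """Pass captchas straight through: fetch a puzzle from each seed
    node, show it to the user unchanged, and forward the user's answer
    back. The front-end never holds a master seed key, so announcement
    abuse is the seed nodes' problem, not the front-end's.

    fetch_puzzle, ask_user, and forward_solution are caller-supplied
    transports; hypothetical, for illustration only."""
    results = {}
    for node in seed_nodes:
        puzzle = fetch_puzzle(node)           # from the seed node, as-is
        solution = ask_user(puzzle)           # user solves it in the browser
        results[node] = forward_solution(node, solution)
    return results
```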
>> Bots which "jump the gun" would get blacklisted by the other bots
>> programmatically. Bots which censor messages would get blacklisted,
>> provided they didn't block all messages sent within a certain
>> timeframe.
> You’d likely have to block them via FMS and only consider bots in the
> distributed algorithm which are not blacklisted by given moderator
> IDs.
That's what I'm thinking. It's easier to just use one trust list for
everything and assume the consensus is right though, so there's no need
to directly designate specific moderator IDs.
Each bot can tell if one has deviated from the protocol, so it wouldn't
need to be that centralized.
>> Another question is if FCP already supports a "stripped-down mode",
>> where it doesn't expose internal material, only stuff that's on the
>> network. I know SIGAINT ran a Freenet <-> Tor proxy, do you know how
>> they did it?
> There is public gateway mode, but I would not vouch for its security —
> it might have deteriorated over the past years of little usage.
What would be its vulnerabilities then? Accidentally exposing internal
pages?
Cheers