On Wed, May 27, 2009 at 1:29 PM, Thomas Sachau <m...@tommyserver.de> wrote:
>> A small number could still be rather large.  Having thousands see it
>> ought to suffice.  For the current network, I see no reason not to
>> have the (default) limits such that basically everyone sees it.
>
> If your small number is that big, you should say so, because for me, "small"
> is not around "thousands". Additionally, if you allow them to reach thousands
> (will a Freenet-based message system ever reach more people?), is there any
> value in restricting this anyway?

Currently, the total number of people using Freenet is small.
Hopefully that will not always be the case.  Designing a new system
that assumes it will always be the case seems like a rather bad idea
to me.

In this context, I would say "small" means growth that is sublinear in
the size of the entire network.  Having the new-identity spam reach
thousands of recipients is far better than having it reach tens of
thousands or millions.
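
To make "small" concrete, here's a toy sketch (my own illustration; the
thread doesn't specify any particular function): a reach cap that grows
with the square root of the network size stays in the thousands even for
a network of millions of users.

    import math

    def default_reach_limit(network_size: int) -> int:
        # Cap on how many readers a captcha-only identity can reach;
        # isqrt makes the cap grow sublinearly with the network size.
        return max(1, math.isqrt(network_size))

    for n in (1_000, 100_000, 10_000_000):
        print(n, default_reach_limit(n))  # -> 31, 316, 3162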

>
>> If the post is really that valuable, some people will mark the poster
>> as trusted.  Then everyone will see it.
>
> Why should they? People are lazy, so most, if not all, will just read it,
> maybe answer it, but who thinks about rating someone because of a single
> post? People are and will always be lazy.

If the post is only somewhat valuable, it might take a few posts.  If
it's a provocative photo that escaped from an oppressive regime, I
suspect it wouldn't.

Granting trust automatically on replies is an idea that has been
discussed before.  It has a great deal of merit.  I'm in favor of it.
I just don't think that should be the highest level of trust.
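
For concreteness, a toy ordering of the tiers I have in mind (the names
and values are illustrative only, not a settled design):

    from enum import IntEnum

    class TrustTier(IntEnum):
        CAPTCHA = 1   # solved an introduction puzzle
        REPLY   = 2   # granted automatically when someone replies
        MANUAL  = 3   # explicitly marked trusted by another user

    assert TrustTier.REPLY < TrustTier.MANUAL  # replies never confer top trust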

>
>> You may think that everyone should be equal; I don't.  If newbies are
>> posting stuff that isn't spam (be it one message or many), I'm willing
>> to believe someone my web can reach will mark them trusted.  You
>> obviously aren't; that's fine too.  Fortunately, there is no
>> requirement we use the same capacity limiting functions -- that should
>> be configurable for each user.  If you want to make the default
>> function fairly permissive, that's fine.  I think you'd be making the
>> wrong choice, but personally I wouldn't care that much because I'd
>> just change it away from the default if new-identity spam was a
>> problem.
>
> So you want the default to be more censoring. And you trust people not to be
> lazy. I oppose both. First, if you really want to implement such censorship,
> make the default open, with thousands of trusted users; it won't make a
> difference anyway. Second, why should people mark new identities as trusted?
> I use FMS and I don't change the trust of every identity I see there. And I
> do somehow manage a trustlist there. If someone is lazy (and the majority
> is), they will do nothing.

If one of your design requirements is that new identities can post and
be seen by everyone, you have made the spam problem unsolvable BY
DEFINITION.  That is bad.

The whole point of Advogato or other web of trust systems is that you
don't have to mark everyone you see as trusted, only some of them.  As
long as a reasonable number of people do the same thing, so that the
whole graph is well connected, that will suffice.
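
As a minimal sketch of that property (a deliberate simplification of mine;
Advogato's actual metric is a network-flow computation with per-node
capacities, which this BFS ignores): an identity is visible to a reader as
long as some chain of trust marks connects them, so nobody has to mark
everyone directly.

    from collections import deque

    def reachable_identities(trust, seed, max_depth=4):
        # BFS over trust edges; 'trust' maps an identity to the
        # identities it has marked as trusted.
        seen = {seed}
        frontier = deque([(seed, 0)])
        while frontier:
            node, depth = frontier.popleft()
            if depth == max_depth:
                continue
            for nxt in trust.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, depth + 1))
        return seen

    # Alice never marked "newbie", but Bob and Carol bridge the gap.
    graph = {"alice": ["bob"], "bob": ["carol"], "carol": ["newbie"]}
    print("newbie" in reachable_identities(graph, "alice"))  # True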

>
>> Also, you seem to be mistaken about what I mean by limiting CAPTCHA
>> identity capacity.  Limiting it to 1 means it's nonzero.  That means
>> the identity can receive trust and be accepted, so the message will be
>> read.  All it means is that they can't grant trust to anyone else.  It
>> says nothing about their own ability to post messages.  They wouldn't
>> need to solve lots of CAPTCHAs any more than they would under e.g. FMS.
>> A few should suffice, for redundancy against collisions and the poster
>> having gone offline.
>
> ???
>
> Who told you that someone would have to solve many captchas, and forever at
> that? You only need to solve one captcha that is not already solved and
> which comes from a trusted person who publishes their trustlist.
> And I don't think he is mistaken. You still require people to mark
> identities as trusted to make them visible and keep them visible to others.
> This won't happen, so people will lose their captcha trust and will have to
> solve more captchas. Annoying for everyone, and most annoying for the
> lazy majority.

The captcha problem is exactly the same as with FMS or WoT.  You could
implement it exactly as either of those does with Advogato.  How many
and how often a new user must solve captchas is only peripherally
related to which algorithm you run on the trust graph.  IIRC, trust in
FMS does not propagate very far at all, which means that for more than a
few people to see you, you need to be on many trust lists.  That means
solving many captchas or getting lots of manual ratings.  Advogato or
WoT (AIUI, anyway) both improve on this.
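
Using the reachable_identities sketch above (again my simplification, with
the depth limit standing in for propagation distance): under shallow
propagation, a newcomer on a single trust list stays invisible beyond the
people who rated him directly.

    chain = {"a": ["b"], "b": ["c"], "c": ["d"], "d": ["newbie"]}
    print("newbie" in reachable_identities(chain, "a", max_depth=1))  # False
    print("newbie" in reachable_identities(chain, "a", max_depth=4))  # True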

I am proposing an improved solution.  Currently, in FMS or WoT, Sam
can solve a captcha Alice published.  Since he then has trust from
Alice, he can mark a large number of fake identities as trusted.  With
my proposal, where captcha-based trust has tighter limitations than
manual trust, he has to solve a captcha for each fake identity.  This
proposal is not a required part of Advogato; it is my own suggestion.
It could be applied to WoT as well, I believe.
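
As a sketch of the arithmetic (the capacity values are placeholders of
mine, not agreed numbers): with captcha capacity capped at 1, trust grants
from a captcha-only identity are ignored, so every fake identity costs the
spammer another captcha.

    CAPTCHA_CAPACITY = 1    # can receive trust and be read, cannot vouch
    MANUAL_CAPACITY = 10    # placeholder for a manually trusted identity

    def captchas_needed(num_fakes, captcha_capacity):
        # Capacity > 1 (FMS/WoT today): one captcha, then Sam vouches
        # for every fake himself.  Capacity == 1 (this proposal): Sam
        # plus each fake identity needs its own captcha.
        return 1 if captcha_capacity > 1 else 1 + num_fakes

    print(captchas_needed(1000, MANUAL_CAPACITY))   # 1
    print(captchas_needed(1000, CAPTCHA_CAPACITY))  # 1001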

If you assume that people will not maintain trust lists, then it
doesn't matter what algorithm you run on the trust graph.  There won't
be one.  FMS, WoT, and Advogato all fail completely under that
assumption.

>
>> Fundamentally, it's a question of whether you believe CAPTCHAs work.
>> I don't.  If you start with an assumption that CAPTCHAs are a minor
>> hindrance at most, then if you require that everyone sees messages
>> sent by identities that have only solved CAPTCHAs and not gained
>> manual trust, then you've made it a design criterion to permit
>> unlimited amounts of spam.  (That's bad.)  If you believe CAPTCHAs
>> work, then things are a bit easier...  but I think the balance of the
>> evidence is against that belief.
>
> Captchas may not be the ultimate solution. But they are one way to let
> people in while proving that they are human. And you will need this limit
> (proof of humanity), so you will always need some sort of captcha or a
> real-friends trust network.

Captchas do not prove someone is human.  They prove that someone
solved a problem.  If your captchas are good, that means they are more
likely to be human.  I work from an assumption that captchas are
marginally effective at best.  If you think I am mistaken in that,
please explain why.  From that assumption, I conclude that we need a
system that is reasonably effective against a spammer who can solve
significant numbers of captchas, but is still capable of making use of
the information that solving a captcha does provide.

Evan Daniel