[freenet-dev] a social problem with Wot (was: Hashcash introduction, was: Question about WoT )

2009-05-14 Thread Evan Daniel
On Thu, May 14, 2009 at 4:22 AM, xor  wrote:
> On Wednesday 13 May 2009 22:48:53 Evan Daniel wrote:
>> On Wed, May 13, 2009 at 4:28 PM, xor  wrote:
>> > On Wednesday 13 May 2009 10:01:31 Luke771 wrote:
>> >> Thomas Sachau wrote:
>> >> > Luke771 schrieb:
>> >> >> I can't comment on the technical part because I wouldn't know what I'm
>> >> >> talking about.
>> >> >> However, I do like the 'social' part (being able to see an identity
>> >> >> even if the censors mark it down right away as it's created)
>> >> >
>> >> > "The censors"? There is no central authority to censor people.
>> >> > "Censors" can only censor the web-of-trust for those people who trust
>> >> > them and who want to see a censored net. You can't and should not
>> >> > prevent them from this, if they want it.
>> >>
>> >> This has been discussed a lot.
>> >> The fact that the censorship isn't done by a central authority but by mob
>> >> rule is irrelevant.
>> >> Censorship in this context is "blocking users based on the content of
>> >> their messages".
>> >>
>> >> The whole point is basically this: "A tool created to block flood
>> >> attacks is being used to discriminate against a group of users."
>> >>
>> >> Now, it is true that they can't really censor anything because users can
>> >> decide what trust lists to use, but it is also true that this abuse of
>> >> the WoT does create problems. They are social problems and not
>> >> technical ones, but still 'Freenet problems'.
>> >>
>> >> If we see the experience with FMS as a test for the Web of Trust, the
>> >> result of that test is in my opinion somewhere between a miserable
>> >> failure and a catastrophe.
>> >>
>> >> The WoT never got to prove itself against a real flood attack. We have
>> >> no idea what would happen if someone decided to attack FMS, not even
>> >> whether the WoT would stop the attempted attack at all, let alone
>> >> how fast and/or how well it would do it.
>> >>
>> >> In other words, for all we know, the WoT may very well be completely
>> >> ineffective against a DoS attack.
>> >> All we know about it is that the WoT can be used to discriminate against
>> >> people, we know that it WILL be used in that way, and we know that
>> >> because of a proven fact: it's being used to discriminate against people
>> >> right now, on FMS.
>> >>
>> >> That's all we know.
>> >> We know that some people will abuse the WoT, but we don't really know
>> >> if it would be effective at stopping DoS attacks.
>> >> Yes, it "should" work, but we don't 'know'.
>> >>
>> >> The WoT has never been tested to actually do the job it's designed to do,
>> >> yet the Freenet 'decision makers' are acting as if the WoT had proven
>> >> its validity beyond any reasonable doubt, and at the same time they
>> >> decide to ignore the only proven fact that we have.
>> >>
>> >> This whole situation is ridiculous. I don't know if it's more funny or
>> >> sad... it's grotesque. It reminds me of our beloved politicians, always
>> >> knowing the right thing to do, except that it never works as
>> >> expected.
>> >
>> > No, it is not ridiculous; you are just taking a point of view which is
>> > not abstract enough:
>> >
>> > If there is a shared medium (= Freenet, Freetalk, etc.) which is writable
>> > by EVERYONE, it is absolutely IMPOSSIBLE to *automatically* (as in "by
>> > writing intelligent software") distinguish spam from useful uploads,
>> > because "EVERYONE" can be evil.
>> >
>> > EITHER you manually view every single piece of information which is
>> > uploaded and decide yourself whether you consider it spam or not, OR
>> > you adopt the ratings of other people so that each person only has to rate
>> > a small subset of the uploaded data. There are no other options.
>> >
>> > And what the web of trust does is exactly the second option: it "load
>> > balances" the content rating equally among all users.
>>
>> While your statement is trivially true (assuming we ignore some fairly
>> potent techniques like Bayesian classifiers, which rely neither on
>> additional work by the user nor on the opinions of others...),
>
> Bayesian filters DO need input: you need to give them "old" spam and non-spam
> messages so that they can classify new messages.
>
> But they cannot help Freetalk, because they cannot prevent "identity spam",
> i.e. the creation of very large numbers of identities.

They do not require input from *other people*.

>
>> it misses the real point: the fact that WoT spreads the work around
>> does not mean it does so efficiently or effectively, or that the
>> choices it makes wrt various design tradeoffs are actually the choices
>> that we, as its users, would make if we considered those choices
>> carefully.
>>
>> A web of trust is a complex system, the entire purpose of which is to
>> create useful emergent behaviors.  Too much focus on the micro-level
>> behavior of the parts of such a system, instead of the emergent
>> properties of the system as a whole, means that you won't get the
>> emergent properties you wanted.

[freenet-dev] a social problem with Wot (was: Hashcash introduction, was: Question about WoT )

2009-05-14 Thread xor
On Wednesday 13 May 2009 22:48:53 Evan Daniel wrote:
> On Wed, May 13, 2009 at 4:28 PM, xor  wrote:
> > On Wednesday 13 May 2009 10:01:31 Luke771 wrote:
> >> Thomas Sachau wrote:
> >> > Luke771 schrieb:
> >> >> I can't comment on the technical part because I wouldn't know what I'm
> >> >> talking about.
> >> >> However, I do like the 'social' part (being able to see an identity
> >> >> even if the censors mark it down right away as it's created)
> >> >
> >> > "The censors"? There is no central authority to censor people.
> >> > "Censors" can only censor the web-of-trust for those people who trust
> >> > them and who want to see a censored net. You can't and should not
> >> > prevent them from this, if they want it.
> >>
> >> This has been discussed a lot.
> >> The fact that the censorship isn't done by a central authority but by mob
> >> rule is irrelevant.
> >> Censorship in this context is "blocking users based on the content of
> >> their messages".
> >>
> >> The whole point is basically this: "A tool created to block flood
> >> attacks is being used to discriminate against a group of users."
> >>
> >> Now, it is true that they can't really censor anything because users can
> >> decide what trust lists to use, but it is also true that this abuse of
> >> the WoT does create problems. They are social problems and not
> >> technical ones, but still 'Freenet problems'.
> >>
> >> If we see the experience with FMS as a test for the Web of Trust, the
> >> result of that test is in my opinion somewhere between a miserable
> >> failure and a catastrophe.
> >>
> >> The WoT never got to prove itself against a real flood attack. We have
> >> no idea what would happen if someone decided to attack FMS, not even
> >> whether the WoT would stop the attempted attack at all, let alone
> >> how fast and/or how well it would do it.
> >>
> >> In other words, for all we know, the WoT may very well be completely
> >> ineffective against a DoS attack.
> >> All we know about it is that the WoT can be used to discriminate against
> >> people, we know that it WILL be used in that way, and we know that
> >> because of a proven fact: it's being used to discriminate against people
> >> right now, on FMS.
> >>
> >> That's all we know.
> >> We know that some people will abuse the WoT, but we don't really know
> >> if it would be effective at stopping DoS attacks.
> >> Yes, it "should" work, but we don't 'know'.
> >>
> >> The WoT has never been tested to actually do the job it's designed to do,
> >> yet the Freenet 'decision makers' are acting as if the WoT had proven
> >> its validity beyond any reasonable doubt, and at the same time they
> >> decide to ignore the only proven fact that we have.
> >>
> >> This whole situation is ridiculous. I don't know if it's more funny or
> >> sad... it's grotesque. It reminds me of our beloved politicians, always
> >> knowing the right thing to do, except that it never works as
> >> expected.
> >
> > No, it is not ridiculous; you are just taking a point of view which is
> > not abstract enough:
> >
> > If there is a shared medium (= Freenet, Freetalk, etc.) which is writable
> > by EVERYONE, it is absolutely IMPOSSIBLE to *automatically* (as in "by
> > writing intelligent software") distinguish spam from useful uploads,
> > because "EVERYONE" can be evil.
> >
> > EITHER you manually view every single piece of information which is
> > uploaded and decide yourself whether you consider it spam or not, OR
> > you adopt the ratings of other people so that each person only has to rate
> > a small subset of the uploaded data. There are no other options.
> >
> > And what the web of trust does is exactly the second option: it "load
> > balances" the content rating equally among all users.
>
> While your statement is trivially true (assuming we ignore some fairly
> potent techniques like Bayesian classifiers, which rely neither on
> additional work by the user nor on the opinions of others...),

Bayesian filters DO need input: you need to give them "old" spam and non-spam
messages so that they can classify new messages.

But they cannot help Freetalk, because they cannot prevent "identity spam",
i.e. the creation of very large numbers of identities.
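
To make the comparison concrete, here is a minimal sketch of the kind of
per-user Bayesian filter being discussed: it is trained on "old" messages that
the local user has labelled spam or non-spam, it needs no ratings from anyone
else, and it has no notion of how many identities a sender has created. This is
an illustrative Java sketch only; the class and method names are invented, and
it is not Freetalk or WoT code.

// Illustrative sketch of a per-user naive Bayes spam filter. It learns only
// from messages the local user has labelled by hand; it never consults other
// people's opinions. Invented names, not Freetalk or WoT code.
import java.util.HashMap;
import java.util.Map;

public class NaiveBayesSpamFilter {
    private final Map<String, Integer> spamCounts = new HashMap<>();
    private final Map<String, Integer> hamCounts = new HashMap<>();
    private int spamMessages = 0;
    private int hamMessages = 0;

    /** Train on an "old" message the local user has already classified. */
    public void train(String text, boolean isSpam) {
        Map<String, Integer> counts = isSpam ? spamCounts : hamCounts;
        for (String word : text.toLowerCase().split("\\W+")) {
            if (!word.isEmpty()) counts.merge(word, 1, Integer::sum);
        }
        if (isSpam) spamMessages++; else hamMessages++;
    }

    /** Estimated probability that a new message is spam (Laplace-smoothed). */
    public double spamProbability(String text) {
        double logSpam = Math.log((spamMessages + 1.0) / (spamMessages + hamMessages + 2.0));
        double logHam  = Math.log((hamMessages + 1.0) / (spamMessages + hamMessages + 2.0));
        int spamTotal = spamCounts.values().stream().mapToInt(Integer::intValue).sum();
        int hamTotal  = hamCounts.values().stream().mapToInt(Integer::intValue).sum();
        int vocabulary = spamCounts.size() + hamCounts.size() + 1;
        for (String word : text.toLowerCase().split("\\W+")) {
            if (word.isEmpty()) continue;
            logSpam += Math.log((spamCounts.getOrDefault(word, 0) + 1.0) / (spamTotal + vocabulary));
            logHam  += Math.log((hamCounts.getOrDefault(word, 0) + 1.0) / (hamTotal + vocabulary));
        }
        return 1.0 / (1.0 + Math.exp(logHam - logSpam));   // convert log-odds to probability
    }
}

Training it still means someone locally labels messages by hand, which is
exactly the input requirement pointed out above; what it avoids is depending on
other people's trust lists, and what it cannot do is stop a flood of freshly
created identities.
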

> it misses the real point: the fact that WoT spreads the work around
> does not mean it does so efficiently or effectively, or that the
> choices it makes wrt various design tradeoffs are actually the choices
> that we, as its users, would make if we considered those choices
> carefully.
>
> A web of trust is a complex system, the entire purpose of which is to
> create useful emergent behaviors.  Too much focus on the micro-level
> behavior of the parts of such a system, instead of the emergent
> properties of the system as a whole, means that you won't get the
> emergent properties you wanted.
>

Yes, the current web of trust implementation might not be perfect. But it is
one of the only solutions to the spam problem, if not the only one.

So the question is not whether to use a WoT, but rather how to program the WoT
to fit our purposes.

Well anyway, if someone has an alternative to WoT, please tell us; but you
cannot say "do not use it" if you have none.


[freenet-dev] a social problem with Wot (was: Hashcash introduction, was: Question about WoT )

2009-05-13 Thread xor
On Wednesday 13 May 2009 10:01:31 Luke771 wrote:
> Thomas Sachau wrote:
> > Luke771 schrieb:
> >> I can't comment on the technical part because I wouldn't know what I'm
> >> talking about.
> >> However, I do like the 'social' part (being able to see an identity even
> >> if the censors mark it down right away as it's created)
> >
> > "The censors"? There is no central authority to censor people. "Censors"
> > can only censor the web-of-trust for those people who trust them and
> > who want to see a censored net. You can't and should not prevent them
> > from this, if they want it.
>
> This has been discussed a lot.
> The fact that the censorship isn't done by a central authority but by mob
> rule is irrelevant.
> Censorship in this context is "blocking users based on the content of
> their messages".
>
> The whole point is basically this: "A tool created to block flood
> attacks is being used to discriminate against a group of users."
>
> Now, it is true that they can't really censor anything because users can
> decide what trust lists to use, but it is also true that this abuse of
> the WoT does create problems. They are social problems and not
> technical ones, but still 'Freenet problems'.
>
> If we see the experience with FMS as a test for the Web of Trust, the
> result of that test is in my opinion somewhere between a miserable
> failure and a catastrophe.
>
> The WoT never got to prove itself against a real flood attack. We have
> no idea what would happen if someone decided to attack FMS, not even
> whether the WoT would stop the attempted attack at all, let alone
> how fast and/or how well it would do it.
>
> In other words, for all we know, the WoT may very well be completely
> ineffective against a DoS attack.
> All we know about it is that the WoT can be used to discriminate against
> people, we know that it WILL be used in that way, and we know that
> because of a proven fact: it's being used to discriminate against people
> right now, on FMS.
>
> That's all we know.
> We know that some people will abuse the WoT, but we don't really know
> if it would be effective at stopping DoS attacks.
> Yes, it "should" work, but we don't 'know'.
>
> The WoT has never been tested to actually do the job it's designed to do,
> yet the Freenet 'decision makers' are acting as if the WoT had proven
> its validity beyond any reasonable doubt, and at the same time they
> decide to ignore the only proven fact that we have.
>
> This whole situation is ridiculous. I don't know if it's more funny or
> sad... it's grotesque. It reminds me of our beloved politicians, always
> knowing the right thing to do, except that it never works as
> expected.
>

No, it is not ridiculous; you are just taking a point of view which is not
abstract enough:

If there is a shared medium (= Freenet, Freetalk, etc.) which is writable by
EVERYONE, it is absolutely IMPOSSIBLE to *automatically* (as in "by writing
intelligent software") distinguish spam from useful uploads, because
"EVERYONE" can be evil.

EITHER you manually view every single piece of information which is uploaded
and decide yourself whether you consider it spam or not, OR you adopt the
ratings of other people so that each person only has to rate a small subset
of the uploaded data. There are no other options.

And what the web of trust does is exactly the second option: it "load
balances" the content rating equally among all users.
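
As an illustration of that second option, the sketch below shows one way
"adopting the ratings of other people" can look: an identity's effective rating
is the trust-weighted average of the ratings given by raters the local user
trusts, and the identity is hidden if that average falls below a reader-chosen
threshold. The data model, value ranges and threshold are assumptions made for
illustration; this is not the actual algorithm of the WoT plugin or of FMS.

// Sketch of "adopting the ratings of others": show an identity only if the
// trust-weighted average of the ratings from people the local user trusts is
// above a reader-chosen threshold. Invented data model, not real WoT code.
import java.util.Map;

public class RatingAdoption {

    /**
     * @param localTrust   trust the local user assigns to each rater (0..100)
     * @param raterRatings rating each rater gave the identity in question (-100..100)
     * @return trust-weighted average rating, or 0 if nobody we trust has rated it
     */
    public static double adoptedRating(Map<String, Integer> localTrust,
                                       Map<String, Integer> raterRatings) {
        double weightedSum = 0, totalWeight = 0;
        for (Map.Entry<String, Integer> entry : raterRatings.entrySet()) {
            Integer trust = localTrust.get(entry.getKey());
            if (trust == null || trust <= 0) continue;   // ignore raters we do not trust
            weightedSum += trust * entry.getValue();
            totalWeight += trust;
        }
        return totalWeight == 0 ? 0 : weightedSum / totalWeight;
    }

    /** Each reader picks their own threshold, so any "censorship" stays local. */
    public static boolean shownToReader(double adoptedRating, double threshold) {
        return adoptedRating >= threshold;
    }
}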




[freenet-dev] a social problem with Wot (was: Hashcash introduction, was: Question about WoT )

2009-05-13 Thread Evan Daniel
On Wed, May 13, 2009 at 4:28 PM, xor  wrote:
> On Wednesday 13 May 2009 10:01:31 Luke771 wrote:
>> Thomas Sachau wrote:
>> > Luke771 schrieb:
>> >> I can't comment on the technical part because I wouldn't know what I'm
>> >> talking about.
>> >> However, I do like the 'social' part (being able to see an identity even
>> >> if the censors mark it down right away as it's created)
>> >
>> > "The censors"? There is no central authority to censor people. "Censors"
>> > can only censor the web-of-trust for those people who trust them and
>> > who want to see a censored net. You can't and should not prevent them
>> > from this, if they want it.
>>
>> This has been discussed a lot.
>> The fact that the censorship isn't done by a central authority but by mob
>> rule is irrelevant.
>> Censorship in this context is "blocking users based on the content of
>> their messages".
>>
>> The whole point is basically this: "A tool created to block flood
>> attacks is being used to discriminate against a group of users."
>>
>> Now, it is true that they can't really censor anything because users can
>> decide what trust lists to use, but it is also true that this abuse of
>> the WoT does create problems. They are social problems and not
>> technical ones, but still 'Freenet problems'.
>>
>> If we see the experience with FMS as a test for the Web of Trust, the
>> result of that test is in my opinion somewhere between a miserable
>> failure and a catastrophe.
>>
>> The WoT never got to prove itself against a real flood attack. We have
>> no idea what would happen if someone decided to attack FMS, not even
>> whether the WoT would stop the attempted attack at all, let alone
>> how fast and/or how well it would do it.
>>
>> In other words, for all we know, the WoT may very well be completely
>> ineffective against a DoS attack.
>> All we know about it is that the WoT can be used to discriminate against
>> people, we know that it WILL be used in that way, and we know that
>> because of a proven fact: it's being used to discriminate against people
>> right now, on FMS.
>>
>> That's all we know.
>> We know that some people will abuse the WoT, but we don't really know
>> if it would be effective at stopping DoS attacks.
>> Yes, it "should" work, but we don't 'know'.
>>
>> The WoT has never been tested to actually do the job it's designed to do,
>> yet the Freenet 'decision makers' are acting as if the WoT had proven
>> its validity beyond any reasonable doubt, and at the same time they
>> decide to ignore the only proven fact that we have.
>>
>> This whole situation is ridiculous. I don't know if it's more funny or
>> sad... it's grotesque. It reminds me of our beloved politicians, always
>> knowing the right thing to do, except that it never works as
>> expected.
>>
>
> No, it is not ridiculous; you are just taking a point of view which is not
> abstract enough:
>
> If there is a shared medium (= Freenet, Freetalk, etc.) which is writable by
> EVERYONE, it is absolutely IMPOSSIBLE to *automatically* (as in "by writing
> intelligent software") distinguish spam from useful uploads, because
> "EVERYONE" can be evil.
>
> EITHER you manually view every single piece of information which is uploaded
> and decide yourself whether you consider it spam or not, OR you adopt the
> ratings of other people so that each person only has to rate a small subset
> of the uploaded data. There are no other options.
>
> And what the web of trust does is exactly the second option: it "load
> balances" the content rating equally among all users.

While your statement is trivially true (assuming we ignore some fairly
potent techniques like Bayesian classifiers, which rely neither on
additional work by the user nor on the opinions of others...),
it misses the real point: the fact that WoT spreads the work around
does not mean it does so efficiently or effectively, or that the
choices it makes wrt various design tradeoffs are actually the choices
that we, as its users, would make if we considered those choices
carefully.

A web of trust is a complex system, the entire purpose of which is to
create useful emergent behaviors.  Too much focus on the micro-level
behavior of the parts of such a system, instead of the emergent
properties of the system as a whole, means that you won't get the
emergent properties you wanted.

Evan Daniel



[freenet-dev] a social problem with Wot (was: Hashcash introduction, was: Question about WoT )

2009-05-13 Thread Daniel Cheng
On Wed, May 13, 2009 at 4:01 PM, Luke771  wrote:
> Thomas Sachau wrote:
>> Luke771 schrieb:
>>
>>> I can't comment on the technical part because I wouldn't know what I'm
>>> talking about.
>>> However, I do like the 'social' part (being able to see an identity even
>>> if the censors mark it down right away as it's created)
>>>
>>
>> "The censors"? There is no central authority to censor people. "Censors"
>> can only censor the web-of-trust for those people who trust them and who
>> want to see a censored net. You can't and should not prevent them from
>> this, if they want it.
>>
>>
> This has been discussed a lot.
> The fact that the censorship isn't done by a central authority but by mob
> rule is irrelevant.
> Censorship in this context is "blocking users based on the content of
> their messages".
>
> The whole point is basically this: "A tool created to block flood
> attacks is being used to discriminate against a group of users."
> [pedophiles / gays / terrorist / dissidents / ...]

You don't have to repeat this again and again.
We *are* aware of this problem.
We need a solution, not a restatement of the problem.

Don't tell me Frost is the solution -- it is being DoS'ed again.

In FMS, you can always adjust MinLocalMessageTrust to read whatever
messages you please -- yeah, you may call it censorship,
but it is censorship that every reader can opt out of with two clicks. Even
if the majority abuses the system, the poster can always post, and the reader
may know who is being censored and adjust accordingly.

In Frost, when somebody DoSes the system, the poster cannot post anything;
there is nothing a reader can do.

Now, tell me, which one is better?
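
Schematically, the reader-side opt-out described above can be as small as the
check below. MinLocalMessageTrust is the FMS option named in this message; the
class around it and the rule that a locally assigned trust value overrides the
trust lists are assumptions made for illustration, not code taken from FMS.

// Schematic reader-side filter: a trust value the reader set personally, if
// present, overrides whatever the trust lists say, so each reader decides for
// themselves what gets hidden. Only the option name comes from the message
// above; everything else is assumed for illustration.
public class LocalMessageFilter {
    private final int minLocalMessageTrust;   // reader-configurable threshold

    public LocalMessageFilter(int minLocalMessageTrust) {
        this.minLocalMessageTrust = minLocalMessageTrust;
    }

    /**
     * @param localTrust trust the reader personally assigned to the poster, or null if unset
     * @param listTrust  trust derived from other people's trust lists
     */
    public boolean showMessage(Integer localTrust, int listTrust) {
        int effective = (localTrust != null) ? localTrust : listTrust;
        return effective >= minLocalMessageTrust;
    }
}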

[...]

>> Why use this sort of announcement, if it takes several days? Announcement
>> over captchas takes only around 24 hours, which is faster and needs fewer
>> resources. So I don't see any real reason for hashcash introductions.
>>
[...]
> On the other hand, a malicious user who is able to create new identities
> quickly enough (slave labor would do the trick) would still be capable
> of sending 75 messages per announced ID... so the 'grace period' should be
> as small as possible to minimize this problem. Maybe 25 or 30 messages?

As long as creating a new identity is free,
one message is enough to flood the whole system.

Not only that:
even ZERO messages are enough to flood the whole system
-- if you can introduce thousands of identities in a few days,
everybody will be busy polling the "fake" identities.
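
For reference, the "hashcash introduction" in the thread subject means attaching
a computational cost to announcing an identity, so that identities are no longer
free to create. Below is a generic partial-preimage proof-of-work sketch of that
technique; it is not Freenet's actual announcement mechanism, and as noted
earlier in the thread the difficulty would have to keep rising as computers get
faster for the cost to stay meaningful.

// Generic hashcash-style proof of work: an identity announcement only counts
// if it carries a nonce whose SHA-256 hash has a given number of leading zero
// bits. General technique only, not the actual Freenet/WoT announcement code.
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class IdentityProofOfWork {

    public static long solve(String identityId, int difficultyBits) throws Exception {
        for (long nonce = 0; ; nonce++) {
            if (verify(identityId, nonce, difficultyBits)) return nonce;
        }
    }

    public static boolean verify(String identityId, long nonce, int difficultyBits) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        byte[] hash = sha.digest((identityId + ":" + nonce).getBytes(StandardCharsets.UTF_8));
        return leadingZeroBits(hash) >= difficultyBits;
    }

    private static int leadingZeroBits(byte[] hash) {
        int bits = 0;
        for (byte b : hash) {
            if (b == 0) { bits += 8; continue; }
            bits += Integer.numberOfLeadingZeros(b & 0xff) - 24;  // zero bits inside this byte
            break;
        }
        return bits;
    }
}

Raising difficultyBits makes each identity more expensive to announce, but it
cannot by itself distinguish a patient attacker from a legitimate newcomer.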

--



[freenet-dev] a social problem with Wot (was: Hashcash introduction, was: Question about WoT )

2009-05-13 Thread Arne Babenhauserheide
On Wednesday, 13. May 2009 10:24:52 Daniel Cheng wrote:
> In FMS, you can always adjust MinLocalMessageTrust to read whatever
> messages you please -- yeah, you may call it censorship,
> but it is censorship that every reader can opt out of with two clicks. Even
> if the majority abuses the system, the poster can always post, and the reader
> may know who is being censored and adjust accordingly.

As long as I can just disable the censorship (and I'm aware that it exists), I
don't care about it. No one has the right to make me listen, but I also don't
have the right to prevent someone from speaking.

Luckily the internet allows us to combine these two goals: you can speak, but
maybe no one will hear you.

The important thing here is that there must not be a way to check whether I
join in the censorship, or else people can create social pressure.

I don't really use FMS yet, so I need to ask: is there a way to check that? If
yes, how can we get rid of it?

Best wishes, 
Arne

--- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- 
   - singing a part of the history of free software -
  http://infinite-hands.draketo.de



[freenet-dev] a social problem with Wot (was: Hashcash introduction, was: Question about WoT )

2009-05-13 Thread Luke771
Thomas Sachau wrote:
> Luke771 schrieb:
>   
>> I can't comment on the technical part because I wouldn't know what I'm
>> talking about.
>> However, I do like the 'social' part (being able to see an identity even
>> if the censors mark it down right away as it's created)
>>
>
> "The censors"? There is no central authority to censor people. "Censors"
> can only censor the web-of-trust for those people who trust them and who
> want to see a censored net. You can't and should not prevent them from
> this, if they want it.
>
>   
This has been discussed a lot.
The fact that the censorship isn't done by a central authority but by mob
rule is irrelevant.
Censorship in this context is "blocking users based on the content of
their messages".

The whole point is basically this: "A tool created to block flood
attacks is being used to discriminate against a group of users."

Now, it is true that they can't really censor anything because users can
decide what trust lists to use, but it is also true that this abuse of
the WoT does create problems. They are social problems and not
technical ones, but still 'Freenet problems'.

If we see the experience with FMS as a test for the Web of Trust, the
result of that test is in my opinion somewhere between a miserable
failure and a catastrophe.

The WoT never got to prove itself against a real flood attack. We have
no idea what would happen if someone decided to attack FMS, not even
whether the WoT would stop the attempted attack at all, let alone
how fast and/or how well it would do it.

In other words, for all we know, the WoT may very well be completely
ineffective against a DoS attack.
All we know about it is that the WoT can be used to discriminate against
people, we know that it WILL be used in that way, and we know that
because of a proven fact: it's being used to discriminate against people
right now, on FMS.

That's all we know.
We know that some people will abuse the WoT, but we don't really know
if it would be effective at stopping DoS attacks.
Yes, it "should" work, but we don't 'know'.

The WoT has never been tested to actually do the job it's designed to do,
yet the Freenet 'decision makers' are acting as if the WoT had proven
its validity beyond any reasonable doubt, and at the same time they
decide to ignore the only proven fact that we have.

This whole situation is ridiculous. I don't know if it's more funny or
sad... it's grotesque. It reminds me of our beloved politicians, always
knowing the right thing to do, except that it never works as
expected.


Quickly back to our 'social problem': we have seen on FMS that as soon
as a bunch of idiots figured out that they had an instrument of power in
their hands, they decided to use it to play "holier than thou" and
discriminate against people deemed "immoral", namely pedophiles and/or
kiddie porn users.

Now, I'm not justifying pedophilia, kiddie porn or anything. In fact,
I'm not even discussing it. What I'm doing is pointing out that it is
extremely easy to single out pedophiles as "bad guys" who "should" be
discriminated against.
It's like asking people "would you discriminate against unrepentant
sadistic serial killers?"
Hell yeah.
Anyone would.
Same thing with pedophiles: they're so "bad" that our hate towards their
acts takes all of our attention, making us miss the really important stuff.

In this case, the problem isn't about discriminating against pedophiles
(false target); the problem is about setting a precedent, making us accept
that "discriminating against $group is OK as long as the group in
question is 'bad' enough".
THIS IS DANGEROUS!
Today pedophiles, tomorrow gays.
Today terrorists, tomorrow dissidents.

I hope I made it clear enough this time, because I don't think I can
explain it any better than that. And by the way, if I still can't get my
point across, I'll probably give up.

>> On the other hand though, if a user knows that it will take his system
>> three days or a week to finish the job, he may decide to do it anyway.
>> I mean the real problem is 'not knowing' that it may take a long time.
>> A user that starts a process and doesn't see any noticeable progress
>> would probably abort, but the same user would let it run to completion
>> if he expects it to take several days.
>> 
>
> Why use this sort of announcement, if it takes several days? Announcement
> over captchas takes only around 24 hours, which is faster and needs fewer
> resources. So I don't see any real reason for hashcash introductions.
>
>   
The long calculation thing wouldn't work after all; as has been
pointed out, computer power increases too fast for this kind of solution
to be effective.

The other idea was good: a 'grace period' of, say, 75 "free" messages for
every new identity before the WoT kicks in would definitely be a good
idea, because it would greatly reduce the power in the hands of the "WoT
abusers" (if you don't like the term 'censors' we can agree to call them
that).

On the other hand, a malicious user who is able to create new identities
quickly enough (slave labor would do the trick) would still be capable
of sending 75 messages per announced ID... so the 'grace period' should be
as small as possible to minimize this problem. Maybe 25 or 30 messages?
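
To make the proposal concrete, a minimal sketch of such a grace period is shown
below: each newly announced identity gets a fixed allowance of messages that are
accepted before its web-of-trust score is consulted at all. The class, the field
names and the allowance value are illustrative assumptions, not an existing
Freetalk/WoT feature.

// Minimal sketch of the proposed grace period: a new identity may post a fixed
// number of messages before its web-of-trust score is consulted. Names and the
// example allowance are illustrative only.
import java.util.HashMap;
import java.util.Map;

public class GracePeriodPolicy {
    private final int freeMessages;                        // e.g. 25-75, as debated above
    private final Map<String, Integer> messagesUsed = new HashMap<>();

    public GracePeriodPolicy(int freeMessages) {
        this.freeMessages = freeMessages;
    }

    /** Accept a message either on the remaining grace allowance or on a positive WoT score. */
    public boolean acceptMessage(String identityId, int wotScore) {
        int used = messagesUsed.getOrDefault(identityId, 0);
        if (used < freeMessages) {
            messagesUsed.put(identityId, used + 1);        // still inside the grace period
            return true;
        }
        return wotScore > 0;                               // afterwards the WoT decides
    }
}

Note that, as Daniel Cheng points out elsewhere in the thread, this only limits
the damage per identity; if creating identities stays free, an attacker can
simply announce more of them.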
