On 22/09/13 20:51, Jonathan Wilkes wrote:
> Goodwill is a pre-internet concept that is predicated on things
> like short human memories, and it wholesale ignores all the moral
> hazards that come from being able to install a splitter on a
> single line and copy all data everywhere, private or not, for
> nearly nothing. Not to mention store it forever. Not to mention
> retrieve it for next to nothing...

Either you think the internet has destroyed any meaningful notion of
friendship, or we're talking at cross purposes.

Let's imagine that you and I are friends who belong to a P2P social
network. We make an encrypted connection between ourselves, over which
we exchange various confidential information, including lists of our
friends. My contention is that the goodwill between us as friends will
tend to prevent either of us from sharing that information against the
other's will. Thus your list of friends won't end up in the hands of
some marketing company, and neither will mine. Nevertheless, equipped
with lists of our friends' friends, we can search for acquaintances
old and new, just like on Facebook.
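
(To make that concrete, here's a rough Python sketch of the
friend-of-friend search I have in mind. Everything in it is
illustrative - the identifiers and the find_acquaintances function are
made up, and I'm assuming the lists have already been exchanged over
the encrypted connection:)

    # My contact list, keyed by a stable identifier such as a public key
    # fingerprint; the values are just display names in this toy example.
    my_friends = {
        "fp_alice": "Alice",
        "fp_bob": "Bob",
    }

    # Your contact list, received from you over our encrypted connection.
    # I keep it to myself and never pass it on to anyone else.
    your_friends = {
        "fp_bob": "Bob",
        "fp_carol": "Carol",
    }

    def find_acquaintances(mine, theirs):
        """Return friends-of-friends: people on your list who aren't on mine."""
        return {fp: name for fp, name in theirs.items() if fp not in mine}

    # Carol turns up as a potential new acquaintance; Bob doesn't,
    # because we both already know him.
    print(find_acquaintances(my_friends, your_friends))

The search runs entirely on my own machine, against data you chose to
give me - no central directory ever sees either list.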

This doesn't require goodwill from Facebook or AT&T or the NSA or any
other centralised entity - it only requires goodwill between friends.
I don't believe that's an obsolete concept.

> Unfortunately the p2p model seems to be to make a prototype, or
> even a working system, and stop at the point where it's minimally
> functional, usually because funds are scarce. That's unfortunate,
> because for anything like an updated concept of goodwill to really
> function in such a system, the user _must_ know what the current
> threats are and have a way to accurately glean the mid-term and
> maybe long-term threats.

There are plenty of failed P2P systems, just like there are plenty of
failed centralised systems. The floor of Silicon Valley is littered
with dead and dying Facebook clones. So I wouldn't dismiss the P2P
approach just because it's produced some failures.

> Otherwise you create a social network that looks like it has
> checks and balances built-in, but, e.g., no one really understands
> _why_ sharing beyond the first node is a danger and no one cares
> about honoring the premise (including the friend sharing the list
> in the first place).

I think those concepts are easier to grasp in the P2P setting than in
the centralised setting, because they map to existing norms concerning
personal relationships and privacy - for example, if a friend sends
you a private message, there's a well-established norm that you don't
share that message. As long as the software implements normative
behaviour as the default, users will only break the norms if they're
determined to do so - and such violations will always be local in scope.
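
(Just to illustrate what I mean by normative defaults, here's a
made-up snippet - none of this comes from a real implementation; the
point is only that passing a private message on should require going
out of your way:)

    from dataclasses import dataclass

    @dataclass
    class PrivateMessage:
        sender: str
        body: str
        # Private messages are not shareable by default; the sender
        # would have to mark one as shareable explicitly.
        shareable: bool = False

    def forward(message, recipient):
        """Refuse to pass a message on unless the sender allowed it."""
        if not message.shareable:
            raise PermissionError("Sent privately - forwarding not supported.")
        print("Forwarding to %s: %s" % (recipient, message.body))

The default does the work here: the software simply never offers a
one-click way to breach the norm.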

> Nearly every social network UX is designed to hide such risks, and 
> I don't see any examples of an alternative.  Does yours offer one?

There's no reason for an app to provide a user interface for acts that
harm other users. If someone wants to create their own "sell out your
friends" fork of the software then of course nobody can stop them. But
that's a small-scale violation of trust between that person and their
friends - it isn't really comparable to someone at Facebook or AT&T
copying strangers' data en masse to the NSA.

Cheers,
Michael
