Re: nettime The Society of the Unspectacular

2007-06-11 Thread Felix Stalder
On Sunday, 10. June 2007 19:42, Morlock Elloi wrote:

 If empowerment of the public by cheap self-publishing has demonstrated
 anything, it is that a vast majority has nothing to say, lacks any
 detectable talent and mimicks TV in publishing the void of own life (but
 unlike TV they derive no income from commercials.)

If media are made by, and for, one's own community (which might be very
small), then talent and excitement are measured very differently. The
material on YouTube etc. is boring mainly, I guess, because it was not
made for you. Most of us produce lots of stuff that is boring to all but a
handful of people. But to them, it's great. It's the stuff that used to
be called private, but is now online because that's the easiest way to get
it to the intended audience of 5 (or 500, or 5000).

 So I wouldn't say that the classical notion of public has changed in
 the sense that it got fragmented around new media. It's new media
 giving content-free personal smalltalk the ability to be globally
 visible (not that anyone looks at it in practice, but they could, in
 theory.)

The technical possibility that everyone can watch it points in entirely
the wrong direction. It doesn't mean that everyone should watch it; it
only means that the size of the audience is not determined at the level
of the technical protocol but can scale freely up or down.

This does, in some form, lead to a fragmentation of the public, not
least because the public in modern democracies was constituted through
the narrow bandwidth of mass media. Though I'm not sure if this is the
reason, as Eric suspects, for the very manifest trend of governments
withdrawing from public discourse. Yet, for whatever reason, there seems
to be an inverse relationship between the degree of privacy of ordinary
people and the secrecy of governments.

Felix

--- http://felix.openflows.com - out now:
*|Manuel Castells and the Theory of the Network Society. Polity, 2006 
*|Open Cultures and the Nature of Networks. Ed. Futura/Revolver, 2005 


#  distributed via nettime: no commercial use without permission
#  nettime is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and info nettime-l in the msg body
#  archive: http://www.nettime.org contact: [EMAIL PROTECTED]


Re: nettime sad news

2007-04-25 Thread Felix Stalder

This is very sad news, indeed. As Trebor Scholz wrote, Ricardo Rosas
saw and established connections where few people could perceive them,
let alone make them work. Yet, once he pointed them out and set
out to bring them into the world, they were natural. He introduced a
lot of people, including myself, to Brazil and to a world of ideas,
cosmopolitan and uniquely personal at the same time. He did so in the
most humane way possible, by having long conversations, zigzagging
through São Paulo, disappearing and turning up again with more people,
more connections, more things to do. I was always convinced our paths
would cross again, that there would be plenty of time for more drinks,
walks, and conversations. It would have been the most natural thing in
the world. Now it won't be.

Felix





--- http://felix.openflows.com - out now:
*|Manuel Castells and the Theory of the Network Society. Polity, 2006 
*|Open Cultures and the Nature of Networks. Ed. Futura/Revolver, 2005 



#  distributed via nettime: no commercial use without permission
#  nettime is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and info nettime-l in the msg body
#  archive: http://www.nettime.org contact: [EMAIL PROTECTED]


Re: nettime shocklogs wikipedia entry

2007-02-08 Thread Felix Stalder
On Wednesday, 7. February 2007 16:33, Geert Lovink wrote:

 What is kind of amazing is the Anglo-Saxon language policing, which
 term is and is not 'proper' English. An (English) wikipedia entry
 cannot be valid if it based on 'foreign language' sources now about
 that? Wikipedia is not a dictionary and in fact there are many
 Englishes so it makes you wonder why in particular 'neologisms' are
 targetted. and not names of (famous) persons, as Pit Schulz mentioned.

I think the case against neologisms is pretty strong in an
encyclopedia which aims to document the state of established factual
knowledge, rather than advance it.

The case against using exclusively non-English sources is pretty
strong, too. Wikipedia, as a whole, is a global project, whereas the
English Wikipedia is, well, an English-language project. Sure, there
are many Englishes these days, but they are, still, Englishes. I
don't think anyone at en.wikipedia would object to a source, or even
an article, written in Indian English, or Jamaican English, or in a
global ESL English. For the sake of transparency, en.wikipedia relies
on English-language sources. I mean, I don't read Dutch, so there is
no way for me to check whether these sources are relevant. I would not
call this 'language policing'.

I'm pretty sure there is a Dutch-language version of Wikipedia (I
never checked and I'm offline right now) where using Dutch-only
sources is perfectly valid (I guess).

Felix





--- http://felix.openflows.com - out now:
*|Manuel Castells and the Theory of the Network Society. Polity, 2006 
*|Open Cultures and the Nature of Networks. Ed. Futura/Revolver, 2005 




#  distributed via nettime: no commercial use without permission
#  nettime is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and info nettime-l in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net


nettime Machine Writing / Machine Reading

2007-01-17 Thread Felix Stalder

[This is how my script-based reading system (SpamAssassin) interprets
Alan's script-based writing. The point is not that it (mis)qualifies as
spam, but the interpretative rules that come into effect. Felix]
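
A note on how that verdict comes about: SpamAssassin adds up the weights
of whichever rules fire on a message and compares the total against a
configured threshold (3.5 in the report below). The following is only a
minimal sketch of that additive logic in Python -- not SpamAssassin's
actual implementation -- with the rule names and weights copied from the
X-Spam-Report quoted underneath, and the pattern matching itself reduced
to pre-computed booleans:

  THRESHOLD = 3.5  # the 'required' score in the report

  rules = [  # (rule name, weight, did it fire for this message?)
      ("DATE_IN_PAST_24_48",  0.9, True),
      ("UNIQUE_WORDS",        2.3, True),
      ("BAYES_50",            0.0, True),
      ("DRUGS_PAIN",          0.0, True),
      ("LONGWORDS",           3.8, True),
      ("AWL",                -3.1, True),  # auto white-list pulls the score down
  ]

  score = sum(weight for _, weight, fired in rules if fired)
  print("score=%.1f required=%.1f -> %s"
        % (score, THRESHOLD, "spam" if score >= THRESHOLD else "ham"))

This prints "score=3.9 required=3.5 -> spam", matching the report: what
tips the balance are stylistic features -- many words used only once, long
strings of long words -- which is exactly the sense in which the
'interpretative rules' read the writing.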

 - Forwarded message from Alan Sondheim [EMAIL PROTECTED] -

 X-Spam-Flag: YES
 X-Spam-Checker-Version: SpamAssassin 3.1.7 (2006-10-05) on
  chavez.mayfirst.org
 X-Spam-Level: ***
 X-Spam-Status: Yes, score=3.9 required=3.5 tests=AWL,BAYES_50,
   DATE_IN_PAST_24_48,DRUGS_PAIN,LONGWORDS,UNIQUE_WORDS autolearn=no
   version=3.1.7
 X-Spam-Report:
   *  0.9 DATE_IN_PAST_24_48 Date: is 24 to 48 hours before Received: date
   *  2.3 UNIQUE_WORDS BODY: Message body has many words used only once
   *  0.0 BAYES_50 BODY: Bayesian spam probability is 40 to 60%
   *  [score: 0.5000]
   *  0.0 DRUGS_PAIN Refers to a pain relief drug
   *  3.8 LONGWORDS Long string of long words
   * -3.1 AWL AWL: From: address is in the auto white-list
 X-Virus-Scanned: Debian amavisd-new at chavez.mayfirst.org
 From: Alan Sondheim [EMAIL PROTECTED]
 Subject: *SPAM* nettime my life collapsed
 To: nettime-l@bbs.thing.net
 Reply-To: Alan Sondheim [EMAIL PROTECTED]
 X-Virus-Scanned: by amavisd-new-20030616-p10 (Debian) at openflows.org
 X-Spam-Prev-Subject: nettime my life collapsed

 this was culled/scraped from http://www.asondheim.org/biog.txt
 i have been working on an autobiography stemming from a simple perl
 program
 #!/usr/local/bin/perl -w
 # biography
 $| = 1;
 `cp .bio .bio.old`;
 print "Would you like to add to bio information? If so, type y.\n";
 chop($str=<STDIN>);
 if ($str eq "y") {print "Begin with date.\n";
 print "Write single line, use ^d to end.\n";
 open(APPEND, ">>.bio");
 @text=<STDIN>;
 print APPEND @text;
 close APPEND;}
 `sort -o .bio .bio`;
 exit(0);
 that gave me the opportunity for sorting and organizing memories; the
 program was then abandoned as the entries were flushed out; the prog-
 ram was then re-employed for new memories, and so forth. the result is
 unique among autobiogs, as is the entangled compression below:


 ...





#  distributed via nettime: no commercial use without permission
#  nettime is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and info nettime-l in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net


Re: nettime Iraq: The Way Forward

2007-01-10 Thread Felix Stalder
On Friday, 5. January 2007 20:36, Michael H Goldhaber wrote:

 We have reached a crucial turning point in American history. The 
 November elections and current polls have made clear that Americans 
 have soured on the Iraq war, and want the troops to be withdrawn
 rapidly.

I'm not a close observer of American politics (how come Lieberman was
re-elected?), but what strikes me as the really remarkable outcome of this
election is that it revealed the total bankruptcy of the ideologies that
have been dominant since the end of the Cold War: neo-liberalism (with its
emphasis on freedom) and neo-conservatism (with its emphasis on
security), which have produced not freedom and security but abandonment
and fear. Neoliberalism had to declare bankruptcy a while ago, but 9/11
provided the opportunity to swiftly replace it with its darker cousin, so
the void was less obvious.

Now, we are in a situation where nobody has any good idea what to 
do. Bringing the troops home now is as unrealistic as fighting for 
victory. What comes next? Nobody seems to know beyond short-term 
political tactical games. 

But while such disorientation might provide room for creative thinking, I'm
not optimistic. The social conditions which have provided the mass basis
for the acceptance of faith-based politics are still here. It's just that
the war in Iraq is too manifestly disastrous to wish away.

Salon Magazine recently featured an interesting interview with Chris 
Hedges, NYT reporter (Bosnia, Middle East), and author of a new book on 
the US Christian right, American Fascists, that seems directly relevant 
here.

http://salon.com/books/feature/2007/01/08/fascism/

 Since the midterm election, many have suggested that the Christian
 right has peaked, and the movement has in fact suffered quite a few
 severe blows since both of our books came out

It's suffered severe blows in the past too. It depends on how you view
the engine of the movement. For me, the engine of the movement is deep
economic and personal despair. A terrible distortion and deformation of
American society, where tens of millions of people in this country feel
completely disenfranchised, where their physical communities have been
obliterated, whether that's in the Rust Belt in Ohio or these monstrous
exurbs like Orange County, where there is no community. There are no
community rituals, no community centers, often there are no sidewalks.
People live in empty soulless houses and drive big empty cars on
freeways to Los Angeles and sit in vast offices and then come home
again. You can't deform your society to that extent, and you can't shunt
people aside and rip away any kind of safety net, any kind of program
that gives them hope, and not expect political consequences.

Democracies function because the vast majority live relatively stable
lives with a degree of hope, and, if not economic prosperity, at least
enough of an income to free them from severe want or instability.
Whatever the Democrats say now about the war, they're not addressing the
fundamental issues that have given rise to this movement.

 But isn't there a change in the Democratic Party, now that it's
 talking about class issues and economic issues more so than in the
 past?

Yes, but how far are they willing to go? The corporations that fund the
Republican Party fund them. I don't hear anybody talking about repealing
the bankruptcy bill, just like I don't hear them talking about torture.
The Democrats recognize the problem, but I don't see anyone offering any
kind of solutions that will begin to re-enfranchise people into American
society. The fact that they can't even get healthcare through is
pretty depressing.

 The argument you're now making sounds in some ways like Tom Frank's,
 which is basically that support for the religious right represents a
 kind of misdirected class warfare. But your book struck me differently
 -- it seemed to be much more about what this movement offers people
 psychologically.

Yeah, the economic is part of it, but you have large sections of the
middle class that are bulwarks within this movement, so obviously the
economic part isn't enough. The reason the catastrophic loss of
manufacturing jobs is important is not so much the economic deprivation
but the social consequences of that deprivation. The breakdown of
community is really at the core here. When people lose job stability,
when they work for $16 an hour and don't have health insurance, and
nobody funds their public schools and nobody fixes their infrastructure,
that has direct consequences into how the life of their community is
led.

I know firsthand because my family comes from a working-class town in
Maine that has suffered exactly this kind of deterioration. You pick up
the local paper and the weekly police blotter is just DWIs and domestic
violence. We've shattered these lives, and it isn't always economic.
That's where I guess I would differ with Frank. It's really the
destruction of the 

Re: nettime Copyright, Copyleft and the Creative Anti-Commons

2006-12-14 Thread Felix Stalder
I'm not sure I understand the main thrust of the argument. 

On the one hand, GPL-type copyleft is criticized for not preventing the
appropriation (or, more precisely, use) of code by commercial, capitalist
interests. These still manage to move profits from labor (employees /
contractors who are paid less than the value their labor produces) into
the hands of capital (shareholders, I guess).

On the other hand, Creative Commons, which precisely enables the author
to prevent this through the non-commercial clause, is criticized for
perpetuating the proprietary logic embodied in the author function
controlling the use of the work.

Which seems to leave as the conclusion that, within capitalism, the structure
of copyright, or IP more generally, doesn't really matter, because it
either directly supports fundamentally flawed notions of property (à la
CC), or it does not prevent the common resource from being used in support
of capitalist ends (à la GPL). In this view, copyfights appear to articulate
a secondary contradiction within capitalism, which cannot be solved as long
as the main contradiction, that between labor and capital, is not
redressed.

Is that it?



#  distributed via nettime: no commercial use without permission
#  nettime is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and info nettime-l in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net


nettime Sweden could scrap file-sharing ban

2006-06-13 Thread Felix Stalder

[It would be ironic if the raid on piratebay.org turned out to be the
trigger to create an 'alternative compensation system' (a levy on broadband
to compensate rights holders in order to legalize p2p file sharing). The
guys from piratebay have been among the most vocal (and astute) critics of
such an idea [1], [2].
[1] http://www.nettime.org/Lists-Archives/nettime-l-0407/msg00020.html
[2] http://www.nettime.org/Lists-Archives/nettime-l-0407/msg00032.html
Felix]


http://www.thelocal.se/article.php?ID=4024date=20060609

The Local: Sweden's news in English
Sweden could scrap file-sharing ban

Published: 9th June 2006 10:36 CET

Sweden could introduce a charge on all broadband subscriptions to
compensate music and film companies for the downloading of their
work, while legalizing the downloading of copyright-protected
material, justice minister Thomas Bodstrom has said.

Bodstrom told Sydsvenskan that he could consider tearing up
legislation passed last year that made it illegal to download
copyrighted material. He said that a broadband charge was discussed
by Swedish political parties last year, but the Moderates and Left
Party rejected it. If they have changed their minds, he is willing to
discuss any new proposals they might have, he said.

The Left Party said yesterday that they wanted to scrap the current
law because it had not reduced illegal file sharing. The Moderate
Party has said that the whole area of copyright law should be
overhauled to make it clearer, more effective and adapted to
technological developments.

"The most important thing for me is that authors and artists get paid
and I will never retreat from that," he told the paper.

"I have not changed my position, I still think that [the current law]
is the best option for two reasons: first, it would be unfair on
those who have subscribed to broadband and don't want to download,
secondly because it would mean that the government was setting the
price for goods, which I don't think we should do, whether those
goods are in a shop or on the net," he told TT.

"But if the Moderates and Left Party have made a 180 degree turn and
changed their minds completely, of course they can come and tell us
about it. But we had this discussion last year. If they now want to
find a completely new solution and have new proposals or ideas we
will naturally discuss them."

But he emphasized that he favoured the current rules, which he said
"has created a market, which would not have happened if we hadn't had
this law. It is now possible to buy a song for ten kronor, and that
is thanks to the new law."

Bodstrom said he had not been approached directly by the Left Party
or Moderates, and had only read about their proposals in the media.

TT/The Local




http://felix.openflows.org -- out now:
*|Manuel Castells and the Theory of the Network Society. Polity, 2006 
*|Open Cultures and the Nature of Networks. Ed. Futura/Revolver, 2005 


- End forwarded message -

#  distributed via nettime: no commercial use without permission
#  nettime is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and info nettime-l in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net


nettime nettime as practice

2006-06-12 Thread Felix Stalder
On Sunday, 11. June 2006 02:21, John Hopkins wrote:
 In this Light, I would challenge Felix and Ted (and any others
 feeling qualified) to write a brief task description of the
 (different) roles/positions necessary to run nettime as it is today.
 Put it out here.  I certainly have some interest, but would need to
 know the scalability and absolute size of what tasks are necessary,
 and how they are (technically and socially) accomplished...

There is not much of a challenge here. All that you really need is
some long-term dedication to contributing to the nettime project on a
very regular basis. It helps to like it, be somewhat familiar with it,
and feel comfortable with its style.

Technically, in order to start moderating you need to be able to deal
with email on a *nix shell (via ssh). There is no web interface. It's
good to know the mail program 'mutt' (because we have some custom
settings that save serious time), but if you don't, it's something that
can be learned like other semi-technical stuff, and we can help -- and,
indeed, will help.

In the medium term, you should familiarize yourself with 'procmail'
and 'SpamAssassin', otherwise lots of time is spent going through
spam and/or finding false positives. This is a real hassle.
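
For the curious, here is a rough illustration of what that sorting step
amounts to. This is not the actual nettime setup -- just a Python sketch
with hypothetical paths -- and it assumes SpamAssassin has already tagged
each incoming message with an X-Spam-Flag header, as in the report quoted
in the 'Machine Writing / Machine Reading' post above:

  import mailbox

  # Hypothetical maildir locations; adjust to taste.
  inbox = mailbox.Maildir("/home/moderator/Maildir", factory=None)
  spam = mailbox.Maildir("/home/moderator/Maildir/.Spam",
                         factory=None, create=True)

  # Collect keys first so we don't modify the mailbox while iterating.
  flagged = [key for key, msg in inbox.iteritems()
             if msg.get("X-Spam-Flag", "").strip().upper() == "YES"]

  for key in flagged:
      spam.add(inbox[key])   # keep it around so false positives can be rescued
      inbox.remove(key)      # and keep the moderation queue readable

Whatever the actual recipes look like, the principle is the same: let the
scoring happen before a human reads anything, but keep what gets flagged
around so that false positives can still be fished out.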

In the long term, you might need to look for a new host for the
mailing list (which has up to 50'000 outgoing mails a day), and for a
new admin for the web server. Right now, all of this is provided by
other people on a goodwill basis. As goodwill is usually personal,
rather than institutional, we have no idea how that transfers.

Also, did I mention? You need a high tolerance for personal abuse, by
people who don't know you, and, what can be more annoying, by people
who do. It seems unavoidable because of the architecture of mailing
lists, and some people really make a big deal out of it.

Now, this might sound like endless, thankless drudgery, but it's not,
or not only. Being deeply involved in nettime is a good reason, and a
motivator, to pay close attention to the discussions and the people,
which is very worthwhile. You learn a lot, and a lot of people learn
about you. Though the abuse part sucks, no matter what.

Felix








http://felix.openflows.org -- out now:
*|Manuel Castells and the Theory of the Network Society. Polity, 2006 
*|Open Cultures and the Nature of Networks. Ed. Futura/Revolver, 2005 



#  distributed via nettime: no commercial use without permission
#  nettime is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and info nettime-l in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net


RE: nettime nettime as idea

2006-06-09 Thread Felix Stalder

Hi everyone,

sorry for my previous post, it went out without being finished. What
I wanted to say was that many of the themes that critical net.culture
talked about 10 years ago are now mainstream. They are now playing
themselves out on a scale far beyond 'net.culture'; indeed, they have
become culture, without any prefix.

Whether that amounts to winning or losing is beside the point. In some
ways, it reminds me a bit of the 1968 movements, which also transformed
daily life (at least in the West), but, as the world around them
shifted, with consequences very different from what they intended.
Again, whether they lost or won does not really matter. The world is a
different place now.

For most of the actors of early net.culture, this meant either late
professionalization or early retirement. Nettime as a project did
not so much professionalize as specialize. It exchanged scope for
focus, which has moved it a bit closer to academic culture, which is
also characterized by that trade-off. But anyone who really knows
academia, and the texts it produces (which I personally appreciate),
will also recognize how far nettime still is from that. Its scope is
broader, its style sharper.

Caroline Nevejan [EMAIL PROTECTED]:
 Critiqing others for having done 'stuff', aging and moving on in  
 life, I find rather uninteresting. I get interested when I hear what  
 you like to do yourself.

I agree. On many levels, nettime works quite well, so there is no
urgent need to change anything. But this does not mean it cannot be
improved. Sure it can. To do that, though, we need concrete ideas: what
would you, personally, individually, like to see in nettime, and how
do you put up the resources to do it? The easiest thing is to do it
yourself. Silvan Zurbueck did that when he wanted an RSS feed for
nettime: he took the feed, pumped it into a blog, and now there is an
RSS feed. [1] Tobias van Veen did that when he wanted to hold a nettime
meeting in NA, and now we have had it. Great. They had an idea, they
figured out a way of doing it (by doing it themselves and roping in
others to contribute). This is how things work, not by telling others
what they should or should not do. The same goes for the various
nettime lists in other languages. People came up with the idea of
doing something, and they are doing it. Most of the people on this
list are not aware of that, because these lists are in languages few
of us speak.

[1] http://nettime.freeflux.net, http://nettime-ann.freeflux.net/

Andreas Broeckmann [EMAIL PROTECTED]:
 finally, if you are unhappy with the list, be aware that 'the list',
 i.e. nettime, is what gets posted. of course, moderation plays a
 role in this. but the greater role is played by the things that get
 written and sent, or not. if certain discussions are not happening,
 it is because people are not writing their opinions.

Again, I agree. Moderation is a non-issue, a red herring -- even if
the technical set-up of an email list (conceived at a time when
ICT had much less social intelligence built in than it sometimes has
today) lends itself to believing otherwise. And it's not that Ted
and I are turning away the masses who want to do this kind of work.
In fact, nobody ever volunteers. N0b0dy, that's with two zeros. We
occasionally ask people who are contributing interesting material to
the list if they want to moderate, and the answer has always been
'Thank you for asking, but I really do not have the time.'

There is one exception: nettime-ann. Here, four people -- Mason
Dixon, Tulpje Tulp, Tsila Hassine, and Hannah Davenport -- responded
to an open call about what to do with the announcements, and are now
running this as their own project, connected to the main list by
name and by loose but friendly cooperation. They are doing a great, if
unglamorous, job.

Over the years, we experimented with various set-ups, most importantly
dividing the list into two feeds, the standard moderated one and a
non-moderated one, called nettime-bold. Interest in the second
channel was small from the beginning, and waned entirely shortly
after. The levels of spam and self-promotion seem to be tiring for
everyone but the self-promoters. After we had to start manually
removing posts from the nettime-bold archive -- because people entirely
unrelated to the list were accused, with their names and telephone
numbers, of being pedophiles, and sent us harrowing stories about how
this ruined their lives, since googling their names brought up these
posts (Google loves nettime and often ranks its posts very high up) -- we
decided that this was not the resource we wanted to provide. When we
shut down the list, nobody seemed to notice.

So, if anyone feels like moderating -- near-daily work, over a long period
of time -- and knows how to use an email program on a unix shell
(preferably mutt), 

nettime Coalition of Canadian Art Professionals Releases Open Letter on Copyright

2006-06-08 Thread Felix Stalder
[The voices of artists against the expansion of copyrights are getting
stronger. Stuff like that will make it harder for the industry to
claim to represent the interests of creators. Very good. Felix]


Media Release:

Coalition of Canadian Art Professionals Releases Open Letter on
Copyright

Tuesday, June 6, 2006

http://www.appropriationart.ca/

Over 500 Art Professionals Call for Balanced Copyright Laws

Ottawa, ON -- June 6, 2006 -- Over 500 members of Canada's art
community have today released an open letter to the Ministers of
Canadian Heritage and Industry calling on the Canadian government to
adopt balanced copyright laws that respect the reality of contemporary
art practice. Appropriation Art, A Coalition of Art Professionals,
comprises artists, curators, arts organizations and art institutions
who share a deep concern over Canada's copyright policies and the
impact these policies have on the creation and dissemination of
contemporary art. The Coalition argues that Canada's current copyright
laws put at particular risk those artworks using appropriation, such
as conceptual art, art video & film, sound art and collage.

The Coalition offers three principles that it argues must ground
Canada's copyright policy:

FAIR ACCESS TO COPYRIGHTED MATERIAL LIES AT THE HEART OF COPYRIGHT.
Creators need access to the works of others to create. Legislative
changes premised on the need to give copyright owners more control
over their works must be rejected.

ARTISTS AND OTHER CREATORS REQUIRE CERTAINTY OF ACCESS. The time has
come for the Canadian government to consider replacing fair dealing
with a broader defense, such as fair use, that will offer artists the
certainty they require to create.

ANTI-CIRCUMVENTION LAWS SHOULD NOT OUTLAW CREATIVE ACCESS. Laws that
privilege technical measures that protect access to digital works must
be rejected. The law should not outlaw otherwise legal dealings with
copyrighted works merely because a digital lock has been used. Artists
work with a contemporary palette, using new technology. They work from
within popular culture, using material from movies and popular music.
Contemporary culture should not be immune to critical commentary.

"Artworks that use appropriation have a long and well documented place
in the history of art," notes Sarah Joyce, a signatory to the Open
Letter. "These works are collected and exhibited in major cultural
institutions across Canada and throughout the world and yet artists
express this form of creativity under threat of the law. To silence
this valid form of creativity is tragic. That Canada's laws do so is
simply wrong."

"Canada's art community has not been consulted on the implications
of possible copyright reforms," states Gordon Duggan, another of the
Open Letter's signatories. "We are creators, and we rely on copyright
laws for our livelihood. Yet, to my knowledge, the needs of Canadian
artists have never been a consideration in copyright policy debates.
It is time that changed. The sheer size and makeup of this coalition
reflects the level of dissatisfaction within the art community. These
changes are set to lock Canadian art into a very narrow idea of what
the Government wants art to be rather than reflecting the reality of
contemporary Canadian art."

The open letter has been posted at the Coalition's website at
www.appropriationart.ca.

About The Coalition of Art Professionals: The signatories to the Open
Letter span the full range of Canada's art community, and include
artists, galleries, art institutions, and curators. A full list of the
over 500 individuals and organizations lending their name to the Open
Letter may be found at www.appropriationart.ca.

For further information, contact:

Sarah Joyce or Gordon Duggan
[EMAIL PROTECTED]








http://felix.openflows.org -- out now:
*|Manuel Castells and the Theory of the Network Society. Polity, 2006 
*|Open Cultures and the Nature of Networks. Ed. Futura/Revolver, 2005 


- End forwarded message -

#  distributed via nettime: no commercial use without permission
#  nettime is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and info nettime-l in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net


Re: nettime Organised Networks: Transdisciplinarity and New Institutional Forms

2006-04-11 Thread Felix Stalder
Perhaps I'm missing something obvious here, but I always thought that networks
are a basic type of organization (as are hierarchies, markets, and communes;
in fact, standard theory assumes that there are only these four basic forms).
So to speak of an 'organized network' makes no sense to me. All networks are
organized, by definition.

 The social-technical dynamics of ICT-based networks constitute organisation
 in ways substantively different from networked organisations (unions,
 state, firms, universities).

Again, this makes no sense to me. All large-scale contemporary networks are
ICT-based. In fact, ICT is what allows them to scale and hence have a chance
to successfully compete with vertically integrated hierarchies (until very
recently the only organizational form that scaled well).

Also, I always thought that unions, the state and its bureaucracies,
universities, and old-school firms were prime examples of hierarchical
organizations. If they are not, what is? And in what sense is a union a
networked organization?

I'm getting a headache, and this is only the first sentence.

Felix






http://felix.openflows.org -- out now:
*|Manuel Castells and the Theory of the Network Society. Polity, 2006 
*|Open Cultures and the Nature of Networks. Ed. Futura/Revolver, 2005 


#  distributed via nettime: no commercial use without permission
#  nettime is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and info nettime-l in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net


Re: nettime let's go negative and join snubster

2006-04-06 Thread Felix Stalder
Negativity has its charms, but the fate of being positive and community-minded
is inescapable on the Internet. In order to do anything collaboratively, that
is, anything at all beyond pure consumption, the 'community' has to minimize
internal dissent and make people feel good about contributing. This does not
mean squashing internal critique entirely, otherwise things get boring, but
stabilizing it at a level where it is productive, and thus, dare I say, turns
positive. After all, if you really don't like it, why don't you just go
somewhere else? Anyone who has administered an online collaborative project
has uttered this sentence more than once.

So, we have special interest communities, dog-lovers, negativity-lovers and so
on. Endless solipsistic niches of like-minded people. Snubster just has cool
branding. The days of critique and negativity are over. Rather, we have
interesting discussions on minor points, enabled by the fact that we basically
agree with one another. There are too many options to waste your time really
disagreeing with people.

Felix



#  distributed via nettime: no commercial use without permission
#  nettime is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and info nettime-l in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net


nettime TRIPS was a mistake

2006-04-03 Thread Felix Stalder
[It's amazing to see that the treaty which many identify as the cornerstone
of information feudalism (Peter Drahos) is judged a failure by one of
its main designers. From Ian Brown's blog, via the always excellent EDRI
newsletter [2]. Felix]


Lehman: TRIPS was a mistake
http://dooom.blogspot.com/2006/03/lehman-trips-was-mistake.html

I'm attending a great meeting in Brussels on The Politics and Ideology of 
Intellectual Property [1]. We just had quite a newsflash from Bruce Lehman, 
President Clinton's head of intellectual property policy who was largely 
responsible for the Agreement on Trade-Related Aspects of Intellectual 
Property Rights (TRIPS).

Lehman now believes TRIPS has been a failure for the United States, because 
the WTO agreement in which it is included opened US markets to overseas 
manufactured goods and destroyed the US manufacturing industry. He feels that 
the US has kept its part of the TRIPS bargain, but that with 90% piracy in 
China, higher-end developing nations have not. In retrospect, he feels the US 
should instead have introduced labour and environmental standards into the 
WTO agreement so that jobs would not be lost in the US manufacturing sector 
to countries with few environmental standards and weak unions.

How exhilarating that Mr Lehman agrees with civil society IP experts across
the developed and developing world!


[1] http://www.tacd.org/docs/?id=286
[2] http://www.edri.org/






http://felix.openflows.org -- out now:
*|Manuel Castells and the Theory of the Network Society. Polity, 2006 
*|Open Cultures and the Nature of Networks. Ed. Futura/Revolver, 2005 



#  distributed via nettime: no commercial use without permission
#  nettime is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and info nettime-l in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net


nettime Open Source Projects as Voluntary Hierarchies

2006-03-17 Thread Felix Stalder

Open Source Projects as Voluntary Hierarchies

Weber, Steven (2004) The Success of Open Source. Cambridge, MA, Harvard UP
ISBN: 0-674-01292-5, pp. 311

Over the last half-decade, free and open source software (FOSS) has moved from 
the hacker margins to the mainstream. Corporations, large and small, have 
invested in it, some governments are actively supporting it and it is 
becoming an increasingly important tool for the building of an international 
civil society. In the social sciences, the field is receiving a growing share 
of attention, evidenced by a widening stream of research output. The central 
repository for relevant papers, opensource.mit.edu, lists some 250 
researchers with a self-declared interest in all things FOSS and almost as 
many scholarly papers, contributed in just five years. Additionally, there 
are several volumes written by activists, book-length treatments by 
journalists, plus biographies of the two most prominent figures, Richard 
Stallman and Linus Torvalds.

To this burgeoning literature, the most ambitious contribution is Steven 
Weber's The Success of Open Source. Weber, a political scientist at the
University of California, Berkeley, focuses on a political economy approach
which he understands as 'a system of sustainable value creation and a set of 
governance mechanisms' (p.1). His main interest lies in the social formations 
built around FOSS's particular mode of production. What differentiates this 
mode from other systems of immaterial production is its approach to property. 
Whereas the conventional notions of property are based on unambiguous 
ownership and the associated right of excluding others, in the context of 
FOSS, property is organized around the right to distribute. Its key concerns 
are not how to ascribe ownership and manage exclusion, but to develop the 
best strategies to maximize access and collaboration. This is a profound 
change, and Weber puts it rightly at the beginning of his analysis. In this 
perspective FOSS is a public good, a resource that, once produced, everyone 
can use, akin to a street light. Standard political theory assumes there are 
no incentives for private entities to produce public goods, because the 
non-excludability invites free-riding, impeding markets built on scarcity. 
Thus, the provision of such goods is usually in the hands of governments who 
invest in them for the benefit of society as a whole.

FOSS is a counter-intuitive example where a large number of entities, 
individuals as well as organizations, produce highly complex products as 
public goods with no, or very little, involvement of the state. Early 
analysts recognized that, given mainstream theoretical assumptions, FOSS is 
an 'impossible public good' (p.1). Clearly, however, it is not a fluke. Many
of the core projects -- such as the Linux kernel, the Apache webserver, the
GNU software -- are by now more than a decade old and are still growing, so
it is no longer in doubt whether FOSS represents a 'system of
sustainable value creation'. But what kind of system? The two core chapters
of the book (which also contains a thorough history of FOSS and somewhat less 
thorough sections on business and law) focus on the 'microfoundations', the 
individual motivations to contribute to FOSS projects, and on the 
'macro-organizations' involved. Weber aims to show that contributors are not 
altruistic, but guided by a range of incentives, from seeking aesthetic
pleasures to reputation and identity building. The question of incentives is 
probably the best researched of all aspects of the conundrum that FOSS poses 
to conventional political and economic theory and Weber does a very good job 
of systematizing and summarizing the state of the discussion, even if he adds 
little new.

More interesting and original is the chapter on the macro-organization of FOSS 
projects. Here, Weber shows that these projects are not chaotic at all, but 
tend to have explicit formal structures (release schedules, project leaders, 
official repositories, etc) and that notions of self-organization do not 
really clarify much. To bring together the two basic observations that all 
contributions are voluntary, and that projects are hierarchically structured, 
Weber develops the notion of a voluntary hierarchy (though he never quite
calls it that). In such a governance system, individuals voluntarily accept
their position in a hierarchy because they realize that doing so is
beneficial to them. Their own contributions get recognized and the overall
project develops in a direction that they like. In such a system, contrary
to what we usually think of hierarchies, power flows from the bottom to the
top, 'because the leader depends on the followers more than the other way
around [...] Asymmetrical interdependencies favor the potential followers, who
will make a free and voluntary choice where to invest their work' (p.160).

The freedom of choice if and where to contribute is not 

nettime Netbase (1995-2006)

2006-02-18 Thread Felix Stalder

Yesterday, there was a party in Vienna. It was a small, at times sombre, at
times exuberant affair, fitting for the occasion: the final call for netbase,
the institute for cultural technologies. Today, the doors remained closed and
the website turned static.

After more than a decade of sailing hard against the currents, suffering
countless near-death experiences, it's hard to believe that the fall of the
curtain is now final. No more publicity stunts.

With netbase, one of the last 'free radicals' of early internet culture
disappears, an institution which understood art as necessarily critical, both
of the commercial hype and of the old and new centers of power.

Insisting on the freedom of art, defining its value as cultural intelligence,
probing alternative futures, netbase refused to play along with the
neo-liberal redefinition of culture into 'services' to be measured by tourism
boards, economic development agencies, or ministries of education.

Rather, what characterized netbase was an insistence on acting in public,
engaging the public directly and on its own terms. That such an approach is
ultimately doomed, particularly in a country like Austria, is hardly a
surprise. Like a crash in a Formula One race, it's easy to say 'I saw it
coming.'

Even if the real surprise is probably that netbase lasted that long,
witnessing its closure is a sad affair nevertheless. Particularly for the
many nettimers who enjoyed, at one time or another, its particular kind of
hospitality here in Vienna.


Felix

 
+---+-+---
http://felix.openflows.org


#  distributed via nettime: no commercial use without permission
#  nettime is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and info nettime-l in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net


nettime The crisis of democracy and referenda

2006-01-13 Thread Felix Stalder
 On 11/01/06, Prem Chandavarkar [EMAIL PROTECTED] wrote:
  A referendum helps to resolve impasses reached when you have polarised
  opinions on critical single cause issues.  It cannot be a substitute for
  the day to day negotiations of representative politics.

 True.  On the other hand, elected representatives may well be less
 likely to pass unpopular laws, and more likely to take the views of
 the majority into account when carrying on those day-to-day
 negotiations, if they know that citizens can easily arrange a
 referendum on any issue in order to reverse their decisions.  The
 existence of an easy referendum mechanism, even if it is rarely used,
 may thus make politicians more sensitive to public opinion.

In Switzerland, the place with the most extensive experience of
referenda, this is exactly what happens. Politically speaking, the most
important thing about a referendum is not calling one, but being able to
credibly claim that one could do so. This buys you a ticket to the
negotiation table.

Before any law passes, there are extensive rounds of negotiations (called
Vernehmlassung in Swiss German), in which all the groups that can call a
referendum are asked to provide feedback on the proposed law, making sure
that all the powerful groups in the country agree on the law, or can at least
live with it. Nobody wants to work for years on a law and then have it
subjected to the vagaries of a public vote (which is always unpredictable,
since one never knows the context in which the vote is actually held).
In practice, this slows down everything and gives a lot of influence to
unelected representatives of powerful groups, which may, depending on the
issue, include unions and environmental groups.

In effect, the power of elected politicians is severely curtailed; after
all, the representative aspects are only one part of this particular Swiss
brand of democracy.

Because the mechanisms of Swiss democracy are rather different from others,
the crisis it faces is also very different. Yes, of course, there's also
a lot of lobbying, but given the curtailed power of politicians, buying them
off only gets you so far. The actual crisis is twofold: first, given the
need to consult and include ever more diverging interests, the system slows
down to a crawl, as, in the end, it's always safer to do nothing than to risk
losing face in a referendum. Second, more and more stuff gets decided on an
international basis, with the national parliaments only responsible for
converting international treaties (or EU directives) into national law. Yet
the fiction that direct democracy is the ultimate source of power needs to
be maintained, as it is so crucial to Swiss identity. So how do you square
this? By inventing a construction called autonomer Nachvollzug, which can be
translated as autonomous conformation. If that sounds like a paradox, it
is. The key idea is that Switzerland is autonomous in choosing to conform to
international agreements. In fact, of course, it is not, but given its deep
interlinkages with the EU and other countries, it simply has to take over
what is decided there.


+---+-+---
http://felix.openflows.org


#  distributed via nettime: no commercial use without permission
#  nettime is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and info nettime-l in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net


Re: nettime Benjamin Mako Hill on Creative Commons

2005-08-01 Thread Felix Stalder
  The CC licenses, however, try to provide some protections for the
  producers of content by providing non-commercial clauses.

 Which is a bogus advantage. We had this discussion in Nettime before,
 and the common sense was that the concept of commerce implied in those
 clauses is neither defined nor clear at all. If our exchange would be
 printed in a Nettime book, and the book was for sale even if it made no
 profit or even losses for the publishers, it would be still a
 commercial distribution and hence not allow the inclusion of material
 licensed with this clause. This would even be the case if it were
 published on a CD-ROM sold for 50 cents, or in exchange for a blank CD
 medium.

The non-commercial clause is, indeed, deeply problematic. It is virtually
impossible to define what 'commercial' means; to my knowledge, it is not a
legal concept. Is everything that is sold a commercial transaction? Or only
things that are sold with the intention of making a profit? Then again, how
would one define intention? Or is it success that makes a venture commercial?
Assume that the nettime reader, as it was printed and distributed, did not
constitute a commercial venture. But what if it had been a runaway success,
with four reprints? That would have made it profitable, for sure. Where would
one draw the line? After the first reprint? Or the second?

In the end, the non-commercial clause restricts the creative commons to
consumption, hobbyism, and -- how convenient for its academic sponsors --
teaching.

While I see the point of, say, musicians not wanting to have their works misused
in advertisement, the share-alike clause of the GPL would already have prevented
this from ever happening. There is no way in hell that any brand would allow its
ads to be released under the GPL. You cannot put a trademark under the GPL.

The only example I can think of where the GPL would not protect a musician
against crass, unwanted commercial exploitation is if a GPLed song were
included in a commercial compilation of songs. This would not affect the
closed licenses of the other songs, or of the compilation as a whole (similar
to including free software on a CD with other programs).

In the end, while I do not agree with Florian that the differences between
works that are necessarily collaborative and temporary (say, software) and
those that can be individually produced and finished (say, a novel) are
negligible, on the level of the license the share-alike aspect works well for
both as a protection against commercial rip-offs, without producing the
problems of a non-commercial clause.


Felix



+---+-+---
http://felix.openflows.org


#  distributed via nettime: no commercial use without permission
#  nettime is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and info nettime-l in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net


nettime A pragmatic response to a Critique of the Commons without Commonalty

2005-07-08 Thread Felix Stalder
Forwarded with the permission of the author. Felix

--  Forwarded Message  --

Subject: [ipr] A pragmatic response to a Critique of the Commons without 
Commonalty
Date: Thursday, 7. July 2005 19:58
From: Andrew Rens [EMAIL PROTECTED]
To: iprpublicdomain [EMAIL PROTECTED]

...

The article certainly provides a stimulating
theoretical critique of Creative Commons. There are a
number of places where the critique elides the complex
nature of the issues, most notably the characterisation of
Res Communes and the analysis of precisely what
copyright constitutes as private and public. I am,
however, not going to address the theoretical critique
of Creative Commons at this point. There are a few
matters of pragmatic import which the analysis seeks
to occlude. I address these as matters of strategy, as
someone who has experienced political struggle and has
also used law to effect rhizomatic change.

No-one at Creative Commons, certainly not Professor
Lessig, is suggesting that Creative Commons is the only
or even the primary model for creativity, nor that it
represents the ideal state of copyright. Rather, it is
one working model among many. For those who believe
that Intellectual Property can be balanced, a healthy
creative ecosystem will include a multitude of
different models. For those who don't believe in
Intellectual Property at all, Creative Commons is a
practical strategy for working towards an open culture
given real-world conditions. It has already opened up
space for voices that would not otherwise have been
heard. The energy and excitement surrounding Creative
Commons stem in part from the fact that many people are
for the first time able to see what open culture looks
like, and so to imagine what it could be in the future.
As such, it is a greater stimulant to libre culture than
academic critiques of late capitalist cultural production.

It is also a project which people and organisations of
all theoretical stripes and ideological flavours can
co-operate in, which is why both Jack Valenti and John
Perry Barlow spoke at the opening. One of the greatest
strengths of Creative Commons is this aspect of an
open project; people who may differ on many other
issues can all contribute to the common good.

A common project such as this can easily be
stigmatized as co-option, but only by the same logic
that human rights lawyers, using what law there is
within a repressive state to secure the freedom or
safety of prisoners, can be regarded as co-opted.
Participation in Creative Commons is not an exhaustive
index of who a person is. A person can support
Creative Commons and be committed to the long term
abolition of all Intellectual Property or work full
time for a corporate law firm or any of the whole
range of options in between.

The analysis rejects copyright law as an organising
strategy for creativity, yet does not develop an
alternative vision of a commons, specifying only what
it is not; it is thus difficult to imagine this
theoretical commons at all.

Andrew Rens
Legal Lead
Creative Commons South Africa



+---+-+---
http://felix.openflows.org


#  distributed via nettime: no commercial use without permission
#  nettime is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and info nettime-l in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net


nettime Using copyright to stop the publication of 'mein kampf'

2005-06-23 Thread Felix Stalder

Here's an interesting use of copyright for all of those who track its (ab)use
for political reasons. In Poland, the publisher Marek Skierkowski is being
investigated on behalf of the state of Bavaria for infringing on its
copyright in the works of Adolf Hitler.

The case is the following: the publisher, who has no history of neo-Nazi
sympathizing, decided to publish Mein Kampf purely for commercial reasons.
Now, Poland -- like many other countries in Europe -- has a law criminalizing
the distribution of fascist propaganda (§246 of the Polish criminal code).
However, the publisher could convince the state attorney that Mein Kampf does
not constitute current political propaganda, but has to be viewed as a
historical document, and that making it accessible would serve historical and
scientific purposes, not least because he is clearly not politically
motivated. Since he does not try to convince anyone of any political views,
his publication does not constitute propaganda -- so goes the reasoning of
the attorney.

What does this have to do with copyright and Bavaria? After WWII, the Allies
gave the state of Bavaria all the copyrights and author's rights belonging to
Hitler, because he was officially registered as a Munich resident at the end
of the war. And now Bavaria is trying to use its copyrights to stop the
publication in Poland, after the application of national criminal law failed
to do so. Bavaria holds the copyrights for another ten years (70 years after
the death of the author), after which the work falls into the public domain.

Source: http://www.spiegel.de/politik/ausland/0,1518,361691,00.html


+---+-+---
http://felix.openflows.org




#  distributed via nettime: no commercial use without permission
#  nettime is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and info nettime-l in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net


Re: nettime The Ghost in the Network

2005-05-17 Thread Felix Stalder
On Monday, 16. May 2005 12:56, Alexander Galloway and Eugene Thacker wrote:

 We suggest that this opposition between closed and open is flawed. It
 unwittingly perpetuates one of today's most insidious political myths,
 that the state and capital are the two sole instigators of control.
 Instead of the open/closed opposition we suggest the pairing
 physical/social. The so-called open logics of control, those associated
 with (non proprietary) computer code or with the Internet protocols,
 operate primarily using a physical model of control. For example,
 protocols interact with each other by physically altering and amending
 lower protocological objects (IP prefixes its header onto a TCP data
 object, which prefixes its header onto an HTTP object, and so on). But
 on the other hand, the so-called closed logics of state and commercial
 control operate primarily using a social model of control. For, example,
 Microsoft's commercial prowess is renewed via the social activity of
 market exchange. Or, using another example, Digital Rights Management
 licenses establish a social relationship between producers and
 consumers, a social relationship backed up by specific legal realities
 (DMCA). Viewed in this way, we find it self evident that physical
 control (i.e. protocol) is equally powerful if not more so than social
 control. Thus, we hope to show that if the topic at hand is one of
 control, then the monikers of open and closed simply further confuse
 the issue. Instead we would like to speak in terms of alternatives of
 control whereby the controlling logic of both open and closed
 systems is brought out into the light of day.


I think this equation of protocol = control, which is also at the core of
Galloway's stimulating book [1], is fundamentally flawed, because it mixes
terms in ways that are not helpful to a critical political analysis.

A protocol, technical or social, is a set of standards which regulates
how different entities can interact without the establishment of a formal
hierarchy. Remember, the term originated in the context of exchanges
between the king and foreign diplomats. The key feature of this relationship
was that the diplomats were not the king's subjects, yet neither were they
simply the king's equals. They were different. The purpose of a protocol
was to allow them to interact without the establishment of a formal
hierarchy.

To argue that the protocol now, somehow, controlled the king and the 
diplomats seems strange. The same problem occurs when arguing that the 
Internet Protocol is somehow the ultimate controlling mechanism of the 
Internet. The fact that communication takes place within certain 
constraints, which enable communication in the first place, does not 
equate to control. Rather, constraints on one level (the protocol of 
communication) can provide the grounds for freedom on another level 
(the content of communication). This is social theory 101.
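
[For what it's worth, the 'physical' layering the quoted passage invokes is
easy to make concrete without any metaphysics of control: each layer simply
prefixes its own header onto the payload handed down from the layer above. A
minimal sketch in Python; the header formats and field names are illustrative
stand-ins, not the actual wire formats of HTTP, TCP or IP.]

# A sketch of protocol encapsulation: each layer wraps the object it
# receives by prefixing its own (purely illustrative) header.

def http_message(body: str) -> bytes:
    # application layer: an HTTP-like request carrying the body
    return b"GET / HTTP/1.1\r\n\r\n" + body.encode()

def tcp_segment(payload: bytes, src_port: int, dst_port: int) -> bytes:
    # transport layer: a TCP-like header prefixed onto the HTTP object
    return f"TCP {src_port}->{dst_port} len={len(payload)}|".encode() + payload

def ip_packet(payload: bytes, src: str, dst: str) -> bytes:
    # network layer: an IP-like header prefixed onto the TCP object
    return f"IP {src}->{dst} len={len(payload)}|".encode() + payload

packet = ip_packet(tcp_segment(http_message("hello"), 49152, 80),
                   "192.0.2.1", "198.51.100.7")
print(packet)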

The whole argument of protocol = control seems to rest on a somewhat 
unimaginative reading of Foucault's micro-physics of power, in which he 
argued that language itself is a main source of power and that the 
establishment of categories (e.g. madness) was itself a supreme act of 
power. To transfer this one-to-one to the protocols of communication 
networks yields yet another control fantasy (or nightmare, depending on 
your agenda). The only choice it leaves you is to jump into some sort of 
'pre-social' state. And this is precisely what Galloway & Thacker offer 
us:

 Unplug from the grid. Plug into your friends. Adhocracy will rule.
 Autonomy and security will only happen when telecommunications operate
 around ad hoc networking. Syndicate yourself to the locality.

What we have here is the 'social' vs. the 'technical', and the 'unplanned' 
vs. the 'planned'. Why this should lead to more freedom is dubious, unless 
we understand freedom as the absence of rules and control as the presence of 
rules. This, however, is a very misleading understanding of these 
concepts, as has been argued often, not the least in the feminist 
critique of the anti-authoritarian social movements of the late 1960s. [2]


PS: I am not arguing that protocols cannot be used as mechanisms of social 
control. Rather, this has to be established on a case-by-case basis, 
instead of pronouncing protocols to be means of control per se.


[1] Galloway, Alexander R. (2004). Protocol: How Control Exists After 
Decentralization. Cambridge, MA, MIT Press

[2] Freeman, Jo (1972). The Tyranny of Structurelessness. The Second Wave. 
Vol. 2 No. 1 http://www.jofreeman.com/joreen/tyranny.htm

 





+---+-+---
http://felix.openflows.org


#  distributed via nettime: no commercial use without permission
#  nettime is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and info nettime-l in the msg body
#  archive: http://www.nettime.org contact: 

nettime I.B.M. to Give Free Access to 500 Patents

2005-01-11 Thread Felix Stalder

[As the article points out at the end, 500 patents is a relatively small 
number for IBM (which holds more than 10,000 software patents). 
Nevertheless, it represents a significant policy change in how patents are 
managed by the world's leading holder of patents. It is also very 
different from Microsoft's current approach of seeking cross-licensing 
deals among holders of large patent portfolios.

IBM Press Release: http://www.ibm.com/news/us/en/2005/01/patents.html
Linux World Story: http://www.linuxworld.com/story/47749_p.htm

Felix]

NYT January 11, 2005
I.B.M. to Give Free Access to 500 Patents
By STEVE LOHR
http://www.nytimes.com/2005/01/11/technology/11soft.html

I.B.M. plans to announce today that it is making 500 of its software patents
freely available to anyone working on open-source projects, like the popular
Linux operating system, on which programmers collaborate and share code.

The new model for I.B.M., analysts say, represents a shift away from the
traditional corporate approach to protecting ownership of ideas through
patents, copyrights, trademark and trade-secret laws. The conventional
practice is to amass as many patents as possible and then charge anyone who
wants access to them. I.B.M. has long been the champion of that formula. The
company, analysts estimate, collected $1 billion or more last year from
licensing its inventions.

The move comes after a lengthy internal review by I.B.M., the world's largest
patent holder, of its strategy toward intellectual property. I.B.M.
executives said the patent donation today would be the first of several such
steps.

John Kelly, the senior vice president for technology and intellectual
property, called the patent contribution the beginning of a new era in how
I.B.M. will manage intellectual property.

I.B.M. may be redefining its intellectual property strategy, but it apparently
has no intention of slowing the pace of its patent activity. I.B.M. was
granted 3,248 patents in 2004, far more than any other company, according to
the United States Patent and Trademark Office. The patent office is
announcing today its yearly ranking of the top 10 private-sector patent
recipients.

I.B.M. collected 1,300 more patents last year than the second-ranked company,
Matsushita Electric Industrial of Japan. The other American companies among
the top 10 patent recipients were Hewlett-Packard, Micron Technology and
Intel.

I.B.M. executives say the company's new approach to intellectual property
represents more than a rethinking of where the company's self-interest lies.
In recent speeches, for example, Samuel J. Palmisano, I.B.M.'s chief
executive, has emphasized the need for more open technology standards and
collaboration as a way to stimulate economic growth and job creation.

On this issue, I.B.M. appears to be siding with a growing number of academics
and industry analysts who regard open-source software projects as early
evidence of the wide collaboration and innovation made possible by the
Internet, providing opportunities for economies, companies and individuals
who can exploit the new model.

"This is exciting," said Lawrence Lessig, a professor at Stanford Law School
and founder of the school's Center for Internet and Society. "It is I.B.M.
making good on its commitment to encourage a different kind of software
development and recognizing the burden that patents can impose."

I.B.M. has already made substantial contributions to open-source software
projects in the last few years. The company has been the leading corporate
supporter of Linux. It donated computer code worth more than $40 million to
an open-source group, Eclipse, which offers software tools for building
programs. Last year, I.B.M. gave to an open-source group a database program
called Cloudscape, which cost the company $85 million to develop.

Those past contributions, however, have gone mainly to projects that serve to
make Linux - fast becoming a viable alternative to the operating systems
Windows from Microsoft and Solaris from Sun Microsystems - more attractive to
corporate customers. In that respect, supporting Linux helps to undermine
I.B.M.'s rivals and can be seen as a smart tactic for I.B.M. The company's
commercial software strategy is focused largely on its WebSphere software,
which runs on top of operating systems.

Today's move by I.B.M. is not aimed at a specific project, but opens access to
14 categories of technology, including those that manage electronic commerce,
storage, image processing, data handling and Internet communications.

"This is much broader than the contributions we've made in the past," said Jim
Stallings, vice president for standards and intellectual property at I.B.M.
"These patents are for technologies that are deeply embedded in many industry
uses, and they will be available to anyone working on open-source projects
including small companies and individual entrepreneurs."

I.B.M. executives said they hoped the company's initial contribution of 500
patents 

Re: nettime A 'licensing fee' for GNU/Linux?

2004-08-09 Thread Felix Stalder

OK, let me try to restate my argument somewhat differently so as to take into
consideration a) the fact that software being proprietary _per se_ does
not indemnify the user (Florian's point), b) that SW patents create a
mess for all programmers (Scott's point) and c) that none of us is a
patent lawyer, hence we don't know when patent infringement creates
liability for the author and when for the user (Novica's point).

The key point here is b). SW patents make the publishing of software code
more difficult because they create uncertainty over IP rights. This
uncertainty can be limited, but never completely eliminated, by extensive
and expensive patent research.

Users, for understandable reasons, don't want to be exposed to this kind
of risk, so they will demand, if that is not already a standard clause in
contracts, that the provider of software guarantees that he has all the
rights to the software licensed. So the party which issues the license of
the software will have to assume this risk.

Small companies have a hard time doing this because they can neither
afford to do the necessary research to be able to assess the risk
realistically, nor can they afford to pay possible settlements in case
they get sued successfully. After all, how many companies could pony up
more than $520 million as the result of an infringement suit?

Large companies can deal with this risk for a variety of reasons. They
hold many of the patents themselves; they are in cross-licensing
agreements with other companies with large patent pools; they have the
lawyers necessary to fight the cases and they have the reserves to pay the
occasional fine as a general costs of doing business.

Small companies have none of that and, this is the key point, neither do
various foundations and authors of FOSS.  Consequently, neither small
proprietary software companies nor FOSS communities can issue such
guarantees, and hence the users of their software will have to assume the
risk.

For users of FOSS unwilling to accept such risk -- mainly large
institutional users -- there are two possibilities. One is to buy their
FOSS solution from a major vendor that offers indemnification as part of
the service contract (similar to a provider of proprietary software). The
other is to purchase insurance (like the one offered by OSRM). Both create
costs not entirely dissimilar to a licensing fee.

In addition, I would speculate, that such indemnification clauses and
insurances will limit the freedom of development in the future and could
lead to a concentration in the SW industry, proprietary _and_ FOSS. The
difference is that the proprietary SW industry is already highly
concentrated, whereas the FOSS industry is usually thought of as more
decentralized.

In this sense, SW patents will not kill FOSS, but they will give large
companies much more leeway in determining its future, substantially
hollowing out the 'freedom' in free software.

Felix



+---+-+--- 
http://felix.openflows.org



#  distributed via nettime: no commercial use without permission
#  nettime is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and info nettime-l in the msg body
#  archive: http://www.nettime.org contact: [EMAIL PROTECTED]


Re: nettime A 'licensing fee' for GNU/Linux?

2004-08-08 Thread Felix Stalder
 Felix, sorry if I sound rude, but this is not true, and you
 unintentionally spread FUD here!

 Proprietary licensing does _not_ protect customers from patent litigation,
 unless the license contract explicitly states so. Software patents can be
 and have been enforced against users/licensees of proprietary software,
 too. Unisys' enforcement of the LZW/GIF patent, with its legal action
 against websites that used GIF images in 1999 (see
 http://lpf.ai.mit.edu/Patents/Gif/Gif.html) is a prominent example.

Well, actually, the story of the GIF patent controversy is exactly the
other way around and fits perfectly into my argument about the differences
between proprietary and FOSS in terms of risk exposure in the coming
patent mess.

Yes, Unisys did sue some people over their use of .gif files on their
webpages. But the details are important here. As Mark Starr, General
Patent and Technology Counsel for Unisys at the time, explained it to
Slashdot: if the GIFs on your Web site were created with software that is
licensed by Unisys, you are fine. Nobody at Unisys is going to try to get
$5000 or even $0.50 out of you. Period. [1]

As he continued to explain, all of the major proprietary packages (Adobe,
Corel etc.) had licensed the patented technology, and hence users were
entitled to make as many .gif images as they wanted for whatever purpose.
What they were after were people who used programs that had not licensed
the patents, which were mainly freeware (though sometimes this freeware
was distributed as part of commercial software) and FOSS programs (though
they played a minor role back then in the field of graphic design).

 The suspension of the Munich Linux project, which was made to alarm the
 public about future risks for free software through software patenting
 in the EU, was therefore dangerously dumb shoot-yourself-into-the-foot
 PR which did nothing but play into the hands of the proprietary
 software industry.

Independent of how you think about the timing and its strategic value, the
problem is real and it's not going to go away by not talking about it. It
seems pretty clear to me that patents will be a major weapon against FOSS,
and the more this becomes public knowledge, the better it is for the fight
against software patents. Contrary to what Moglen preaches so eloquently,
the development of technology is never straight and the FSF does not have
it all figured out.

Recently, a two-year-old memo written by someone at HP arguing that
patents are the Achilles heel of FOSS has surfaced [2]. He points in
particular to section 7 of the GPL [3], which explicitly forbids
distributing GPL'ed software that contains patents requiring license
fees. Asked to respond to it, Eben Moglen copped out, saying that the
filing of a lawsuit alleging patent infringement would not be enough to
activate section 7. What he did not say was that a positive court decision
would!

Now, is this going to 'shut down' FOSS? I doubt it, because major
companies such as IBM and HP have invested massively in FOSS and
Microsoft and others have little interest in alienating them. But it could
substantially transform the social dynamics around FOSS. After all, one of
the not so unintended consequences of the patent system is that it allows
companies to form cartels without running into anti-trust issues.


[1] http://slashdot.org/article.pl?sid=99/08/31/0143246
[2] http://www.newsforge.com/article.pl?sid=04/07/19/2315200
[3] http://www.gnu.org/copyleft/gpl.html


+---+-+---
http://felix.openflows.org




#  distributed via nettime: no commercial use without permission
#  nettime is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and info nettime-l in the msg body
#  archive: http://www.nettime.org contact: [EMAIL PROTECTED]


nettime A 'licensing fee' for GNU/Linux?

2004-08-07 Thread Felix Stalder
It seems like the real battle over the future of Free and Open Source Software 
is being fought in the area of patents, not copyright.

Copyright, which protects a particular expression, is very hard to infringe 
upon involuntarily. Even if two people happen to have the same idea, chances 
are they will express it differently. From the point of view of copyright, 
no harm done. Patents are different: they protect an idea, independent of 
its expression. If you have an idea that someone else has already patented, 
tough luck. It's not your idea anymore. The history of technology is full of 
cases where two people came up with the same invention, but one was faster. 
Famous is the case of Alexander Graham Bell and Elisha Gray. Both filed their 
patents for the telephone on February 14, 1876. Bell's was the fifth entry of 
that day, while Gray's was the 39th.

Fast forward to today. Patent offices everywhere are drowning in applications 
and are chronically understaffed. Once a patent has been submitted, it can 
take more than a year before it is reviewed, but once it has been approved, 
it becomes valid retroactively. In an area like software development, where 
product cycles are counted in months rather than years, this introduces 
irreducible uncertainty. There is no way of knowing what patents are in the 
pipeline. Combine that with the fact that complex software packages include a 
potentially large number of ideas that might, or might not, be patentable, and 
it becomes evident that it's essentially unknowable whether there might be a 
patent issue hidden somewhere.

This applies to all kinds of software, proprietary as well as free/open 
source. From a user's point of view, there is, however, a crucial 
difference. With proprietary software, the company from which the software 
is licensed assumes all responsibility and the user has no worries beyond the 
licensing fees. So, when last August a court ruled in an exceptional case that 
Internet Explorer improperly contained patented technology, it was Microsoft 
that had to pay up $520,600,000.00 [1]. For the users, the verdict had no 
relevance whatsoever. The case was exceptional because usually, large 
corporations can settle their patent disputes by cross-licensing their patent 
portfolios. That makes things easy and has the nice effect of keeping others 
out.

The case is different with Free/Open Source Software. Here, the users 
are at real risk. The city of Munich realized this and, in early August, 
postponed its high-profile switch to Linux to assess the patent risk. For 
the moment, they remain committed to the migration project [2]. They were 
afraid of suddenly receiving an injunction and having to stop using their Linux 
machines. The chances, one might guess, are remote, but even this is unacceptable 
to a public administration.

A few days earlier, a company called 'Open Source Risk Management' [3], which 
has Bruce Perens as one of its board members, issued a report warning that 
the Linux kernel potentially infringes on 283 patents. Of these, only 98 are 
owned by companies currently friendly to Linux, including 60 from IBM, 20 
from Hewlett-Packard and 11 from Intel. This warning was not entirely 
disinterested, since OSRM will soon begin to sell insurance. The prices, as 
announced so far, are $150,000.00 per year, and this protects against 
settlement costs of up to $5,000,000 [4].

In a similar vein, large Linux sellers such as IBM and HP offer indemnity 
clauses as part of their Linux deals (in the context of the SCO case).

It's not a big leap of imagination to see the explicit costs of insurance, 
or the implicit costs of an indemnification clause as part of a service 
contract, as a kind of 'licensing fee' for Linux. And like other licensing 
contracts, they could introduce serious restrictions that work perfectly well 
on top of GPL code. In HP's case, for example, the indemnification only 
applies to Linux run on HP hardware.

In the case of OSRM, one must assume that there will be limits to the kinds of 
modifications one is allowed to make to the software. Perhaps there will be a 
list of approved modules one may compile into the kernel under the terms 
of the insurance. In some way or another, OSRM will have to define exactly 
what code the insurance covers.

While this kind of patent risk is unlikely to hit the end user directly, it 
might turn into a major issue for institutional users who are vital in 
helping Linux break out of its current niche.

If anything, this problem is going to get worse. At the end of July, Microsoft 
announced that it plans to file 3000 patents this year. This would be a 
significant increase over the 2000 patents it filed last year and the 1000 
patents filed just a few years ago. No wonder Bill Gates says that this is 
something that we are pretty excited (about).[5] 


[1] http://www.ucop.edu/news/archives/2003/aug11art1.htm
[2] http://www.muenchen.de/Rathaus/bb_dir/presse/2004/08/

Re: nettime The Art of Sweatshop

2004-08-03 Thread Felix Stalder
Andrew, Rana,

I know nothing about this particular outfit other than its email 
advertisement, so calling it a 'sweatshop' was more an act of parody a la 
'spam kr!it!k' than one of analysis. The subject line 'business' seemed 
rather bland. Yet, it was also not random, as the message struck me for 
several reasons. 

First, paintings are treated like any other commodity whose costs can be 
lowered by outsourcing production to a low-wage country. So also for art, 
Southern China becomes the 'low cost manufacturing base.' Second, like many 
other low-end businesses, this proposition is spewed about randomly as spam. 
In fact, nettime got it several times (that's why I noticed it). Third, it 
contains some rather untrustworthy claims, such as the paintings being done by 
'famous artists', though they remain unspecified.

Most importantly, though, it introduces an extreme separation -- extreme in 
the context of Western art, more common in the textile industry -- between 
ordering and producing. While made-to-order art never entirely went out of 
fashion once the artist became an autonomous subject (so the story line goes), 
it has been transformed into an intimate process (as in having your portrait 
painted). As such, it's based on a supposedly deep relationship between the 
person doing the ordering and the one doing the execution.

Now, this email indicates that two things are happening. The made-to-order 
relationship is reappearing, with all the loss of status that entails for the 
artists (a 'famous artist' yet anonymous, like the great medieval 
artists/artisans). Yet, at the same time, this relationship has been broken 
under the cost-imperative. This allows one to enjoy the product which, like a 
brand, has a status value much higher than its use value, without any regard 
for the context of its production. While this is not a sufficient cause to 
assume sweatshop production conditions, it's a necessary step towards 
establishing them for the production of high-value objects.


Felix

On Sunday 01 August 2004 18:03, Andrew Ross wrote:

 Re: the subject line. Just a matter of interest, why do you assume this is
 a sweatshop operation? Simply because it is in China?  Or is it impossible
 to imagine the condition of Chinese artisans as comparing favorably with
 their Western counterparts?
 ...

-- 
+---+-+---
http://felix.openflows.org

#  distributed via nettime: no commercial use without permission
#  nettime is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and info nettime-l in the msg body
#  archive: http://www.nettime.org contact: [EMAIL PROTECTED]


Re: nettime Content Flatrate and the Social Democracy of the Digital Commons

2004-07-17 Thread Felix Stalder
This is a pretty good, if partisan, summary of the discussion and it 
highlights one of the most fundamental, and I would agree troubling, 
differences between the Creative Commons and the 'flatrate' approach. CC 
relies on a bottom-up strategy that can start right here and right now. No 
need to wait for 'them' to do something before 'you' can get going.

The flatrate proposal has, in its implementation, strong top-down aspects. You 
cannot start it small and you cannot start on your own. This is where the 
EFF's voluntary proposal is fundamentally flawed.

In terms of process, this is a problem, and process matters a lot if you don't 
know where you are headed -- and I think it's pretty safe to say that 
nobody really knows how things will shape up in this area. It's all trial 
and error.

However, the critique is based on three very questionable assumptions.

First. In terms of network design, Rasmus, again and again, equates the system 
architecture with the application that will run on the system. The system has 
centralized aspects, hence the application and the effects of these 
applications will be centralizing. Yet, if you look at it, there is no direct 
relationship between network design and application effects. Take, as an 
example, the railroad network. It's highly centralized, yet its social 
effects were decentralizing. Take electronic networks. Their architecture is 
decentralized on some levels, centralized on others, and the effects are 
centralizing control and decentralizing execution, at least in the 
economy [1]. Now what does that have to do with Rasmus' argument, one might 
ask.

Rasmus argues that because of the top-down aspects of the proposal, the effect 
will be equally top-down: it will only make the mega-corps richer. Well, it 
won't. Take the situation today. What does an artist really need a label 
for? Distributing CDs and collecting money. And for this, the bigger, the 
better. Small labels would like to do that, but there are structural reasons 
that favor economies of scale, not the least because you need a large 
apparatus to distribute stuff and collect money. The p2p networks are great 
at doing the former, but lousy at doing the latter. Hence, the majors lost 
control only over distribution, not over compensation, and as long as this 
situation persists, they hold some important cards.

Now, when it comes to compensation, they do a really poor job, but the key is, 
they are still better than anyone else. And this is the reason why they still 
exist and why few musicians are outright fans of p2p. If you can wrest 
control over the other half away from the majors as well, their lease on life 
has expired for good. Then we will have a situation where smaller labels will 
prosper because they can concentrate on doing what they do best -- support 
talent -- while not being structurally disadvantaged when it comes to 
compensating talent. In this perspective, a network architecture that has 
top-down aspects can have a decentralizing effect.



Second. Most artists don't make any money today, so why should they make any 
tomorrow?

 Cultural producers are making their living in a true multitude of ways.
 The sale of reproductions is just one. People have other jobs part- or
 full-time, they have subsidies of different kinds, some are students,
 many get money by performing live and giving lessons. In general,
 workfare-type political measures on the labor market [22] is a far
 bigger threat against most artists than any new reproduction technique.

Great, working 8 hours at McDonald's so you can produce free culture in your 
spare time. Or perhaps free culture is only for those lucky enough to have 
high-paying jobs that give them free time (like high-end programmers?).

I personally don't like the situations -- and I'm sure most of us know them -- 
where everyone gets paid except the artists. How many artists show their 
stuff without compensation in museums and kunsthallen? How many curators work 
there for free? How many printers print the fancy catalogues for free? How 
many janitors do? 

You get the drift. There is a clear imbalance, and one that gets legitimized 
with some outmoded mystique about creative work being rewarding in and of 
itself. OK, artists don't get paid in cash, but, hey, they are showered with 
symbolic capital!

It is not that a 'new reproduction technique' is threatening the artists. 
What's happening is that despite deep technological change, the situation is 
not changing. All that empowering and, yes, decentralizing potential of new 
media has stopped just where the money would have started. Hm.

Also, demanding that the welfare state cross-subsidize the production of 
culture through a generous system of unemployment insurance is not only not 
particularly realistic, but also strange in a text that uses 'social 
democracy' with such pejorative undertones. I find it hard to tell where 
social democracy ends and the welfare state starts and it

nettime FBI ABDUCTS ARTIST, SEIZES ART

2004-05-27 Thread Felix Stalder
--- Forwarded message follows ---

May 25, 2004
FOR IMMEDIATE RELEASE

FBI ABDUCTS ARTIST, SEIZES ART
Feds Unable to Distinguish Art from Bioterrorism
Grieving Artist Denied Access to Deceased Wife's Body

 DEFENSE  FUND ESTABLISHED - HELP URGENTLY NEEDED

Steve Kurtz was already suffering from one tragedy when he called 911
early in the morning to tell them his wife had suffered a cardiac arrest
and died in her sleep.  The police arrived and, cranked up on the rhetoric
of the War on Terror, decided Kurtz's art supplies were actually
bioterrorism weapons.

Thus began an Orwellian stream of events in which FBI agents abducted
Kurtz without charges, sealed off his entire block, and confiscated his
computers, manuscripts, art supplies... and even his wife's body.

Like the case of Brandon Mayfield, the Muslim lawyer from Portland
imprisoned for two weeks on the flimsiest of false evidence, Kurtz's case
amply demonstrates the dangers posed by the USA PATRIOT Act coupled with
government-nurtured terrorism hysteria.

Kurtz's case is ongoing, and, on top of everything else, Kurtz is facing a
mountain of legal fees. Donations to his legal defense can be made at
http://www.rtmark.com/CAEdefense/

FEAR RUN AMOK

Steve Kurtz is Associate Professor in the Department of Art at the State
University of New York's University at Buffalo, and a member of the
internationally-acclaimed Critical Art Ensemble.

Kurtz's wife, Hope Kurtz, died in her sleep of cardiac arrest in the early
morning hours of May 11. Police arrived, became suspicious of Kurtz's art
supplies and called the FBI.

Within hours, FBI agents had detained Kurtz as a suspected bioterrorist
and cordoned off the entire block around his house. (Kurtz walked away the
next day on the advice of a lawyer, his detention having proved to be
illegal.) Over the next few days, dozens of agents in hazmat suits, from a
number of law enforcement agencies, sifted through Kurtz's work, analyzing
it on-site and impounding computers, manuscripts, books, equipment, and
even his wife's body for further analysis. Meanwhile, the Buffalo Health
Department condemned his house as a health risk.

Kurtz, a member of the Critical Art Ensemble, makes art which addresses
the politics of biotechnology. Free Range Grains, CAE's latest project,
included a mobile DNA extraction laboratory for testing food products for
possible transgenic contamination. It was this equipment which triggered
the Kafkaesque chain of events.

FBI field and laboratory tests have shown that Kurtz's equipment was not
used for any illegal purpose. In fact, it is not even _possible_ to use
this equipment for the production or weaponization of dangerous germs.
Furthermore, any person in the US may legally obtain and possess such
equipment.

"Today, there is no legal way to stop huge corporations from putting
genetically altered material in our food," said Defense Fund spokeswoman
Carla Mendes. "Yet owning the equipment required to test for the presence
of 'Frankenfood' will get you accused of 'terrorism.' You can be illegally
detained by shadowy government agents, lose access to your home, work, and
belongings, and find that your recently deceased spouse's body has been
taken away for 'analysis.'"

Though Kurtz has finally been able to return to his home and recover his
wife's body, the FBI has still not returned any of his equipment,
computers or manuscripts, nor given any indication of when they will. The
case remains open.

HELP URGENTLY NEEDED

A small fortune has already been spent on lawyers for Kurtz and other
Critical Art Ensemble members. A defense fund has been established at
http://www.rtmark.com/CAEdefense/ to help defray the legal costs which
will continue to mount so long as the investigation continues. Donations
go directly to the legal defense of Kurtz and other Critical Art Ensemble
members. Should the funds raised exceed the cost of the legal defense, any
remaining money will be used to help other artists in need.

To make a donation, please visit http://www.rtmark.com/CAEdefense/

For more information on the Critical Art Ensemble, please visit
http://www.critical-art.net/

Articles about the case:
http://www.rtmark.com/CAEdefense/news-WKBW-2.html
http://www.rtmark.com/CAEdefense/news-WKBW.html

On advice of counsel, Steve Kurtz is unable to answer questions regarding
his case. Please direct questions or comments to Carla Mendes
[EMAIL PROTECTED].




#  distributed via nettime: no commercial use without permission
#  nettime is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and info nettime-l in the msg body
#  archive: http://www.nettime.org contact: [EMAIL PROTECTED]


nettime Transeuropean Picnic

2004-05-03 Thread Felix Stalder

Transeuropean Picnic

Historic events are odd things, mostly disappointing. They feel either like 
empty routines of calendrical arbitrariness (200 years of the French Revolution, 
the millennium) or utterly imposed (9/11, war in Iraq). Either way, they usually 
render one passive, through boredom or powerlessness. History, it seems, is 
always made by others. The EU enlargement, somehow, doesn't really fit this 
pattern, even though it had plenty of both in it.

Yet, it is also, or perhaps primarily, an unfinished event, one whose actual 
meaning goes far beyond overcoming the divisions of the cold war or any 
other of the standard themes trotted out by celebratory speakers on market 
squares across the continent. Its meaning, really, will only slowly emerge, 
through the accumulation of everyday practice. The EU, after all, famously 
lacks a vision.

What could such a practice look like from the point of view of open media 
cultures? To think about this, kuda.org, together with v2, issued an 
invitation to gather in Novi Sad, Serbia, for a transeuropean picnic on the 
weekend of the enlargement [1].

Of course, being in Serbia, one cannot help but be reminded that this great 
process of unification is also a process of creating new boundaries, of 
establishing new visa regimes, border controls and barriers to mobilities 
(which my spell checker insists on rendering as 'nobilities'). Yet, bringing 
together a hundred people from some 20 countries between the Netherlands and 
Georgia on a shoe-string budget and having them picnic on the porch of Tito's 
hunting cabin in the midst of a pristine national park, one felt equally that 
new possibilities were opening up, in the cracks of the major narrative.

This, as became clearer to me during the discussions, has to do with the 
particular character of this thing, the EU, that is growing before our eyes. 
Most importantly, the EU is not a state. It doesn't raise taxes, doesn't have 
a military or a police force, doesn't create laws (only directives to be made 
into laws at the national level), or issue passports. It doesn't even have a 
sports team. Yet, it is also not a meaningless exercise of an out-of-control 
bureaucracy issuing 'symbols' and creating well-intentioned but free-floating 
'discourses'. Rather, the best way to think of the EU, it seems to me, is as 
a gigantic coordination mechanism. It has a relatively small hub 
('Brussels') trying to get other nodes in a network -- some bigger, others 
smaller than itself -- to behave in a way that things can flow between them 
more easily. The enlargement just added a lot of nodes to this network. The 
coordinating hub's main function is to issue pointers that help to direct 
these massive material and immaterial flows.

The strange thing about these pointers is their consistency. They are hard and 
soft at the same time. By directing flows, they create new pools of 
opportunities, while draining others of their resources. For example, many 
educational institutions in Europe are going through painful restructuring 
processes at the moment, not just because of funding problems, but because of 
attempts to reorient themselves according to EU pointers ('Bologna reform'), 
hoping to then profit from the new opportunities created by the flows of 
people, projects and money being pumped through a somewhat more standardized 
European educational landscape. Of course, no institution is forced to do 
that -- that's the soft part. However, not doing it will amount to a 
self-marginalization virtually nobody is willing to accept -- that's the hard 
part.

The EU, then, is a myriad of such circulation systems whose main power rests 
on its ability to include or exclude nodes. The main difference between 
inside and outside of a network is that opportunities are created exclusively 
inside the network (through the circulation of flows of all kinds) whereas 
outside, marginality is structurally re-enforced all the time (by being 
bypassed).

The important thing is that the EU is not one but a myriad of circulation 
systems. Many overlap and reinforce one another -- the enlargement is also a 
process of accelerating such consolidation -- but the degree of overlap is 
much smaller than in a traditional nation state (say, the US). And this, it 
seems to me, is where independent cultural practices come in. They can help 
ensure that this consolidation of the patterns of inclusion / exclusion 
does not become absolute. They can extend the networks to include nodes other 
than the officially sanctioned ones, thus making sure not only that 
opportunities flow beyond the borders (if there is one aspect of the EU that 
is state-like, then it's the Schengen Treaty), but that new opportunities are 
created precisely because the cultural micro-networks are different from the 
official ones.

This is not an 'Anti-EU' strategy, which, as was made clear by many picnickers, 
is a luxury that only those inside the EU can afford. Rather,

nettime Community Radio in Venzuela

2004-03-14 Thread Felix Stalder
[I have no direct knowledge of the complex situation in Venezuela. Yet, I
found this article on community radios/TVs to be very interesting. As far
as I can tell, Chavez, though attacking the oligarchy (see Brian Holmes'
post a few days ago), has not been shutting down, or taking over, their
media. Rather, he is building up his own institutions / power bases that
run parallel to it. This might help explain why there are two highly
energized groups confronting each other, both being able to organize
protests with huge turn-outs. This would not be possible without access to
mass media.

It probably depends on one's point of view whether this is to be called a new
type of 'participatory democracy' or 'run-away populism'. It doesn't strike me,
though, as dictatorial or authoritarian. You certainly don't have hundreds
of community radios/TVs in Cuba.]


March 8, 2004
CARACAS JOURNAL
Pirate Radio as Public Radio, in the President's Corner
By JUAN FORERO

http://www.iht.com/ihtsearch.php?id=509215

CARACAS, Venezuela, March 7 -- The sound room of Radio Perola, a small
community station on the poor edge of this city, is papered with posters
celebrating Latin American revolutionaries like Fidel Castro and offering
a stern warning to the behemoth to the north: "Death to the Yankee
Invader."

The setting seems fitting for José Ovalles's politically charged Saturday
radio program. Gripping a microphone and waving reports from a government
news agency, the white-haired retired computer teacher charges that a
far-flung opposition movement arrayed against President Hugo Chávez is
part of an American-led conspiracy. He ridicules the president's foes as
criminals with scant backing.

He urges listeners to defend what Mr. Chávez calls his Bolivarian
Revolution, which is under international pressure to allow a recall vote
on the president's tumultuous five-year rule. "We have to fight for a free
country," he said recently, "one with no international interference."

The message, beamed from a 13-kilowatt station in what was once the
storeroom of a housing project, reaches at most a few hundred homes. But
Radio Perola is part of a mushrooming chain of small government-supported
radio and television stations that are central to Mr. Chávez's efforts to
counter the four big private television networks, which paint him as an
unstable dictator.

With Venezuela on edge, stations like Radio Perola are poised to play an
even bigger role in this oil-rich nation's political battle.

Instead of shutting down his news media tormenters, Mr. Chávez's tactic
appears to be to ignore them as much as possible while relying on former
ham radio operators and low-budget television stations to get the
government's message across.

Although the stations say they are independent and autonomous, Mr. Chávez
has announced that $2.6 million would be funneled to them this year. They
also will receive technical assistance and advertising from state-owned
companies.

"This year, we will not only legalize and enable approximately 200 more
communitarian radios and televisions with equipment, but we will also
promote them," the communication and information minister, Jesse Chacón,
said in an interview posted on a pro-Chávez Web site.

The stations have been important to Mr. Chavez's government during the
current turmoil, in which the opposition has accused the government of
fraudulently disqualifying hundreds of thousands of signatures for a
recall referendum.

Through it all, the private television and radio stations and the nation's
largest newspapers have stepped up their pressure, presenting a parade of
antigovernment analysts and opposition figures.

Mr. Ovalles, though, calls the opposition gangsters and accuses private
news organizations of faking the sizes of antigovernment marches.

At first glance, the community stations and their largely volunteer staffs
hardly seem political, nor do they offer the wallop of the big news
organizations. Programming often deals with mundane matters like trash
pickups or road conditions. The stations are staffed by volunteers, from
teenagers eager for the chance to play Venezuelan hip-hop or salsa to
homemakers who want to tell listeners how to stretch earnings in tough
times.

The main objective, say those who work at the stations, is to show there
is another side to neighborhoods that, in the popular press, are presented
as crime-ridden ghettos.

"The image of the barrios is one of criminals, violence, prostitution,
where kids are abandoned," said Gabriel Gil, a producer at Catia TV, a
three-year-old station that recently moved into a vast building belonging
to the Ministry of Justice. "We say we are television of the poor."

Radio Un Nuevo Día, in a poor neighborhood, is much like the rest. Its
small transmitter has been set up in the corner of a bedroom in a two-room
cinder block house belonging to a cleaning woman, Zulay Zerpa.

Bedsheets separate the bare-bones operation from the cots where her two
children sleep.

I cook, I clean, I 

nettime Music Labels Tap Downloading Networks

2003-11-18 Thread Felix Stalder
It was long suspected that p2p usage stats could reveal more accurate user
preferences than traditional charts and 'hit parades'. Sad to see it
implemented like this.

 "Our hope was that we could take the technology revolution that 
 Napster made popular and create tools for the benefit of copyright 
 holders," said Eric Garland, BigChampagne's chief executive.

Felix



Music Labels Tap Downloading Networks
Mon Nov 17,10:17 AM ET
By ALEX VEIGA, AP Business Writer
http://news.yahoo.com/news?tmpl=story2cid=487u=/ap/file_swapping_intelligence

LOS ANGELES - The recording industry, it seems, doesn't hate absolutely
everything about illicit music downloading. Despite their legal blitzkrieg
to stop online song-swapping, many music labels are benefiting from --
and paying for -- intelligence on the latest trends in Internet trading.

It's a rich digital trove these recording executives are mining. By
following the buzz online, they can determine where geographically to
market specific artists for maximum profitability.

"The record industry has always been more about vibe and hype," said
Jeremy Welt, head of new media for Maverick Records in Los Angeles. "For
the first time, we're making decisions based on what consumers are doing
and saying as opposed to just looking at radio charts."

One company, Beverly Hills-based BigChampagne, began mining such data from
popular peer-to-peer networks in 2000 and has built a thriving business
selling it to recording labels.

The company -- which takes its name from the Peter Tosh song lyric, "You
drink your big champagne and laugh" -- taps directly into file-sharing
networks like Kazaa's FastTrack. It checks on how often its clients'
artists show up in searches or how frequently their songs are downloaded.
The data can be sorted by market or geographical region.

BigChampagne also has a TopSwaps chart that ranks the most shared songs.
Rapper Eminem was first in a recent scan, his songs downloaded more than
8.6 million times in one day.

"Our hope was that we could take the technology revolution that Napster
made popular and create tools for the benefit of copyright holders," said
Eric Garland, BigChampagne's chief executive.

The bountiful market research is gleaned from behavior for which the music
industry otherwise shows no tolerance. Hurt by a three-year decline in
music sales, the industry has sued the major file-sharing networks, along
with individuals who have used them.

"It wouldn't be very smart if we weren't looking at what they're doing,"
Welt said.

The file-sharing companies are also taking notice. This week, Altnet
threatened legal action against nine companies, including BigChampagne,
that it accused of violating patents on file-identifying technology.
BigChampagne denies using the Altnet technology or playing any role in
helping recording companies identify users for lawsuits.

BigChampagne has certainly done well by file-swapping. It formed in July
2000, just as the Internet boom was beginning to bust, and now counts
Maverick, DreamWorks, Warner Bros., Disney and Atlantic Records among its
clients. "All the major labels have worked with BigChampagne in one
capacity or another," Garland said.

Traditionally, labels had relied for market research largely on commercial
radio, MTV and music store sales.

Label executives waited weeks to get feedback based on limited audience
sampling -- typically by randomly calling listeners and asking if they
recognized a song after hearing a snippet.

Only after several weeks would they begin to get a picture of whether a
single was getting heard. And until Soundscan began electronically
tracking album sales in the 1990s, the industry relied only on a survey of
music retailers to gauge fan interest.

The emergence of free online trading, beginning in the late 1990s with
MP3.com and the original Napster, suddenly made it technologically
feasible to track music consumption in a whole new way.

"It's the most vast and scaleable sample audience that the world has ever
seen," Garland said.

BigChampagne data are essentially a tally of what millions of music fans
are doing every hour.

Peer-to-peer systems function by sending search queries and file transfers
across a network of several computer users. Every time someone searches
Kazaa for a song, that query is passed along the network. BigChampagne
taps in as if it were a regular user and compiles the traffic flows in a
database it later sorts.

"What we do in effect is act like a superuser who demands access to the
network in its entirety," Garland said.

BigChampagne doesn't identify individuals or gather usernames, Garland
said.  But by analyzing users' numeric Internet addresses, BigChampagne
can still pinpoint location and give clients a sense of where an artist is
most popular.
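
[The mechanics described here amount to little more than counting: a
listening node tallies the search terms that pass through it and buckets
them by the rough location of the querying address. A minimal sketch, with
a hypothetical query stream and a stub in place of a real GeoIP lookup --
nothing below reflects BigChampagne's actual software.]

from collections import Counter, defaultdict

def region_of(ip: str) -> str:
    # hypothetical stand-in for a GeoIP database lookup
    return "west" if ip.startswith("203.") else "east"

def tally(queries):
    # queries: iterable of (source_ip, search_term) pairs observed on the network
    by_term = Counter()
    by_region = defaultdict(Counter)
    for ip, term in queries:
        by_term[term] += 1
        by_region[region_of(ip)][term] += 1
    return by_term, by_region

sample = [("203.0.113.5", "eminem"),
          ("198.51.100.9", "eminem"),
          ("198.51.100.9", "outkast")]
totals, regional = tally(sample)
print(totals.most_common(2))   # an overall 'top swaps' chart
print(dict(regional))          # the same chart, broken down by region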

By using BigChampagne, labels can release a song to radio and, if there
are signs demand is brewing on the song-swapping networks, immediately
make the single 

nettime Are all codes code?

2003-11-04 Thread Felix Stalder
[This is unlikely to be a legal case, though from a semiotic point of view, 
it's nevertheless puzzling. Is using images that are released under the GPL 
the same as using source code released under the GPL? Is including existing 
images in new images, in this case a screenshot of a KDE desktop in a TV 
series, the same as including existing source code in new source code? 
Felix]


Posted by Jonathan Riddell on Friday 31/Oct/2003, @17:09
from the 24h-to-3.2beta dept.
http://dot.kde.org/1067616574/

The third series of television show 24 started in the US last week. In the aim 
to improve security, The Counter Terrorist Unit seem to have switched 
operating system from MacOS to KDE [1]. Interestingly they used a 3-year-old 
KDE 1.x desktop. These older icons are made available under a public domain 
licence. If a GPL'd set of icons had been used, would we now be legally able 
to modify, sell and distribute the episode under the terms of the GPL over 
the internet? 

[1] http://jriddell.org/24-kde.html


+---+-+---
http://felix.openflows.org

#  distributed via nettime: no commercial use without permission
#  nettime is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and info nettime-l in the msg body
#  archive: http://www.nettime.org contact: [EMAIL PROTECTED]


nettime WSJ: Can Copyright Be Saved?

2003-10-21 Thread Felix Stalder
[It's quite amazing: not too long ago, an outfit like the WSJ would have
slandered any questioning of the absolute enforcement of copyrights the
way Forbes slandered the FSF recently. Now, suddenly, even the WSJ admits
that things are up for grabs and that there are several valid options.
Now, you might not agree with their portrayal of DRM as a middle-of-the-road
solution, but just putting it out as one of several options,
including a tax!, rather than the only one, is quite a significant change
in itself. Felix]


Can Copyright Be Saved?
New ideas to make intellectual property work in the digital age

By ETHAN SMITH
Staff Reporter of THE WALL STREET JOURNAL
October 20, 2003

For some people, the future of copyright law is here, and it looks a lot
like Gilberto Gil.

The Brazilian singer-songwriter plans to release a groundbreaking CD this
winter, which will include three of his biggest hits from the 1970s. It
isn't the content of the disc that makes it so novel, though -- it's the
copyright notice that will accompany it.

Instead of the standard "all rights reserved," the notice will explicitly
allow users of the CD to work the music into their own material. "You are
free ... to make derivative works," the notice will state in part. That's a
significant departure from the standard copyright notice, which forbids such
use of creative material and requires a legal agreement to be worked out for
any exceptions.

Is this the future of copyright? Perhaps. But a better way to think of it is
that it's one of the possible futures of copyright. Because right now, it's
all pretty much up for grabs.

Blame it all on the Digital Age. As any digital downloader can tell you,
technology and the Internet have made it simple for almost anyone to make
virtually unlimited copies of music, videos and other creative works. With
so many people doing just that, artists and entertainment companies
sometimes appear helpless to prevent illegal copying, and their halting
legal efforts so far have antagonized customers while hardly putting a dent
in piracy.

The challenge is finding a way out of this mess. Efforts fall broadly into
two camps. On one side, generally speaking, are those who revel in the
freedom that technology has brought to the distribution of creative
material, and who believe that copyright law should reflect this newfound
freedom.

On the other side are those who believe that the digital age hasn't changed
anything in terms of the rights of artists and entertainment companies to
control the distribution of their creations and to be paid for them -- the
essence of copyright law. For them, the answer is to leave copyright law
intact, and to use technology to make it harder for people to make digital
copies.

Here's a closer look at some of the competing visions.

IN THIS TOGETHER

The copyright notice for Mr. Gil's coming CD is being crafted by Creative
Commons, a nonprofit organization that seeks to redraw the copyright
landscape. Believing traditional copyrights are too restrictive, it aims to
create plain-language copyright notices that explicitly offer a greater
degree of freedom to those who would reshape or redistribute the copyrighted
material.

Traditional copyright law gives owners of creative material -- and them
alone -- the right to copy or distribute their works. Although they can
waive all or part of those rights, the process isn't easy and usually occurs
in response to a particular request. Those hurdles, critics say, can hinder
the open and freewheeling sharing of material the digital age makes
possible.

Creative Commons seeks to make the system more flexible by spelling out
which rights the copyright holder wishes to reserve and which are being
waived without waiting for a request. Artists can mix and match from among
four basic licensing agreements: They can decide whether they simply want
attribution anytime their work is used by someone else; whether they want to
deny others use of the work for profit without permission; whether they want
to prevent others from altering the material; and whether they want to
permit the use of material only if the new work is offered to the public
under the same terms. An underlying layer of digital code enforces the
rights laid out by the owner, telling computers how a given work can be
used.
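
[The "mix and match" idea is simple enough to spell out in a few lines of
code: a license is a handful of yes/no choices, and a machine-readable
record of those choices travels with the work. A minimal sketch; the field
names and notice wording are illustrative only, not Creative Commons'
actual license texts or metadata format.]

from dataclasses import dataclass

@dataclass
class LicenseChoices:
    attribution: bool = True      # credit the author on reuse
    non_commercial: bool = False  # forbid for-profit reuse without permission
    no_derivatives: bool = False  # forbid altering the work
    share_alike: bool = False     # reuse only under the same terms

    def notice(self) -> str:
        terms = ["You are free to copy and redistribute this work."]
        if self.attribution:
            terms.append("You must give the original author credit.")
        if self.non_commercial:
            terms.append("You may not use this work for commercial purposes.")
        if self.no_derivatives:
            terms.append("You may not alter, transform, or build upon this work.")
        if self.share_alike:
            terms.append("Derivative works may only be shared under these same terms.")
        return " ".join(terms)

# in the spirit of the notice described above: derivative works allowed,
# attribution required
print(LicenseChoices(attribution=True).notice())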

A Creative Commons license isn't for everyone. It might appeal to
independent artists for whom free samples, distributed online, might
represent an attractive marketing option, or for someone like Mr. Gil, who
believes that making it easier to share and reshape his music can be an
important part of the creative process. But it's unlikely to appeal to the
big media companies, for which copyrighted material is what they sell.

Still, Mr. Gil, who is also Brazil's culture minister, sees Creative Commons
as a way to unlock the creative potential of digital technology. "I'm doing
it as an artist," he says. "But our ministry has been following the process
and getting interested in

nettime European Parliament Decision against Software Patentability

2003-09-25 Thread Felix Stalder
The discussion on software patents in the EU parliament in Strasbourg has
triggered one of the most substantive political manifestations of the Open
Source / Free Software communities in Europe to date. In Vienna, for
example, there was a demonstration in front of the patent office, with a
surprisingly large turnout, 300 people [1] (very few software artists,
though). In other cities the story was similar [2].

These, and many other, initiatives had some success, and positive
last-minute amendments were introduced. Apparently, most members of
parliament were rather surprised by the level of public response, as they
had thought this to be an uncontroversial technicality, which was how it was
presented to them by the industry.

Below is an evaluation of the new patent directive in Europe. As usual,
there is quite a bit of uncertainty as to how it is going to be
implemented.

Felix


[1] http://wiki.ael.be/index.php/InfoStandVienna
[2] http://wiki.ael.be/index.php/InfoStands


--  Forwarded Message  --

Subject: [ffii] EP Decision against Software Patentability
Date: Thursday 25 September 2003 09:05
From: Hartmut Pilch [EMAIL PROTECTED]
To: [EMAIL PROTECTED]

FFII News -- For Immediate Release -- Please Redistribute
+++ +++ +++ +++ +++ +++ +++ +++ +++ +++ +++ +++ +++ +++
EU Parliament Votes for Real Limits on Patentability
Strasburg 2003/09/24
   For immediate Release

   In its plenary vote on the 24th of September, the European Parliament
   approved the proposed directive on patentability of
   computer-implemented inventions with amendments that clearly restate
   the non-patentability of programming and business logic, and uphold
   freedom of publication and interoperation.

 * [9]Backgrounds
 * [10]Media Contacts
 * [11]About the FFII -- www.ffii.org
 * [12]About the Eurolinux Alliance -- www.eurolinux.org
 * [13]Permanent URL of this Press Release
 * [14]Annotated Links

Backgrounds

   The day before the vote, CEC Commissioner Bolkestein had
   [15]threatened that the Commission and the Council would withdraw the
   directive proposal and hand the questions back to the national patent
   administrators on the board of the European Patent Office (EPO),
   should the Parliament vote for the amendments which it supported
   today. "It remains to be seen whether the European Commission is
   committed to harmonisation and clarification or only to patent owner
   interests," says Hartmut Pilch, president of FFII. "This is now our
   directive too. We must help the European Parliament defend it."

   "The directive text as amended by the European Parliament is
   unbelievably good! I couldn't believe it as I was posting it article
   by article to the Slashdot story. It just gets better and better, and
   it hangs together incredibly cohesively. I think we have done
   something amazing this week," exclaimed James Heald, a member of the
   FFII/Eurolinux software patent working group, as he put together the
   voted amendments into a [16]consolidated version.

   "With the new provisions of article 2, a computer-implemented
   invention is no longer a trojan horse, but a washing machine,"
   explains Erik Josefsson from SSLUG and FFII, who has been advising
   Swedish MEPs on the directive in recent weeks. That the voted
   amendments found majorities drawn from very different political groups
   reflects the arduous political discussion that had led to two
   postponements before.

   "However, when 78 amendments are voted in 40 minutes some glitches are
   bound to happen: the recitals were not amended thoroughly. One of
   them still claims algorithms to be patentable when they solve a
   technical problem," says Jonas Maebe, Belgian FFII representative
   currently working in the European Parliament. "But we have all the
   ingredients for a good directive. We've been able to do the rough
   sculpting work. Now the patching work can begin. The spirit of the
   European Patent Convention is 80% reaffirmed, and the Parliament is in
   a good position to remove the remaining inconsistencies in the second
   reading."

   The directive will have to withstand further consultation with the
   Council of Ministers, a process that is more informal and hence less
   public than parliamentary procedure. In the past, the Council of Ministers has
   left patent policy decisions to its patent policy working party,
   which consists of patent law experts who are also sitting on the
   administrative council of the European Patent Office (EPO). This group
   has been one of the most determined promoters of unlimited
   patentability, including program claims, in Europe.

   Says Laura Creighton, software entrepreneur and venture capitalist,
   who has supported the FFII/Eurolinux campaign with donations and
   travelled from Sweden to Brussels several times to attend conferences
   and meetings with MEPs:

 Now those 

nettime basic terms in the IP discusssion

2003-08-28 Thread Felix Stalder

I'm writing a little glossary for a newspaper [1] we are putting together
on IP issues. The newspaper will be distributed at WSIS [2]. Better
definitions are welcome.


Public Good:
------------
Goods whose use is non-rivalrous, i.e. using the good does not deplete it,
and non-excludable, i.e. once it is produced people cannot be excluded
from using it. The lighthouse at the coast, alerting ships of potential
peril, is an example of a public good. Without intellectual property law,
particularly copyrights and patents, all digital information would be a
public good.


Private Property:
-----------------
Information owned by a private legal entity (a corporation or an individual
person). The owner has exclusive rights to the property as defined by IP
law and can do as s/he pleases with it. Most importantly, the owner can
freely set the conditions under which it can be accessed and used by third
parties.


Public Property:
----------------
Information owned by the state. Within the bounds of the law and what is
politically acceptable, the state can do with the information as it sees
fit. Example: census data.


Public Domain:
--------------
Information that has no legal protection, either because copyrights/patents
have expired, or because it has been released into the public domain by the
owner. Example: the works of William Shakespeare.


Commons:
--------
A pool of information that is managed by a community of users. Acceptable use
policies are set by the community. Usually, access to the resource is granted
on a non-discriminatory basis and at no or low cost. Example: scientific
information, open source software.


[1] http://www.world-information.org/wio/wsis
[2] http://www.itu.int/wsis/
 




+---+-+---
http://felix.openflows.org

#  distributed via nettime: no commercial use without permission
#  nettime is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and info nettime-l in the msg body
#  archive: http://www.nettime.org contact: [EMAIL PROTECTED]


Re: nettime [Fwd: Re: [ox-en] Felix Stalder: Six Limitations to the Current Open Source Development Methodology]

2003-08-26 Thread Felix Stalder
 Date: Mon, 25 Aug 2003 19:46:55 +0200
 From: Stefan Merten [EMAIL PROTECTED]

 Last week (9 days ago) geert lovink wrote:
  Six Limitations to the Current Open Source Development Methodology

 I'm not always sure in which way or to what areas the following points
 are limitations.

These limitations refer to the kinds of problems that can be addressed
through the current form of social organization developed in the Open
Source Movement. The way Open Source projects are organized reflects the
specifics of the problem -- developing software -- and thus they cannot serve
as a model for addressing problems with very different characteristics.

This does not mean that other problems, for example, the development of
drugs, cannot be organized in an open way, but this 'open way' will have
to look very different from the way Open Source Software projects are
organized because the problem of creating drugs is very different from the
problem of creating software. In other words, there is an intimate
relationship between the characteristics of the problem and the social
organization of its solution.


  However, particularly outside the software domain, the Open Source
  projects remain relatively marginal. Why? Some of it can be explained by
  the relative newness of the approach. It takes time for new ideas to take
  hold and to be transferred successfully from one context to another.

 I'd like to underline this point. Free Software took 15-20 years to
 reach the public space. If I consider that period I find it promising
 that similar approaches are far more known today than Free Software
 was in the late 80's.

I agree. This is why I'm very hopeful. But in order to continue the social
innovation, we need to address these problems, rather than hoping they
will be solved somehow by the magic of 'openness'.

 Let's check this.

  1) Producers are not sellers
 
  The majority professional, i.e. highly-skilled, programmers do not draw
  their
  economic livelihood from directly selling the code they write. Many work
  for organizations that use software but do not sell it, for example as
  system administrators.

 So they sell at least the kind of workforce they use when producing
 Free Software.

Yes, but the difference between using and selling software is important,
even within the commercial sector. If you're simply using the software,
you don't care if it's available to others as well (since it is anyway).
If you're selling it, you need to control it.


 I'm not sure about the first example but for IBM workers and students
 Free Software is then at least to some degree an alienated thing: They
 don't program because of the program but because of the money they may
 sell their services for (IBM) or the reputation they get for it
 (students).

I think it is too simplistic to say that all work that is paid, or has other
utilitarian motives, is alienated.

 As a result this software is not Double Free Software as I called it
 on the German list some time ago because the software is written for a
 purpose outside the software and its concrete use value. I'm arguing
 that this degrades the quality of the software because of the
 alienation.

Any evidence for this? I would be interested in seeing it. Again, I think
this is simplistic. Many students love what they do. The fact that they
get a degree for it is an additional motivation, not a detraction.


 Unfortunately AFAIK there is no study yet which tries to answer the
 question which amount of Free Software is written under alienated
 conditions and which amount is Double Free Software.

I guess one of the reasons why this hasn't happened is that it's simply
impossible to define what 'alienated' means with any degree of empirical
relevance in this context. We are speaking of highly-skilled,
self-motivated professionals, and not about people on the assembly line.
The contexts are different and the differences matter.


 However, I can't see where the limitation is here. Oekonux argues that
 it is one of the basic *strengths* of Free Software that it is not
 sold by those who create it. This way the creators can focus on the
 use value of the software alone and are not obstructed by marketing
 needs. Exactly this is one of the reasons why Free Software is so
 successful.

I'm not saying that the limitations make Open Source Software bad, but
that they limit its social model in terms of the problems to which it can
be applied.

 So I'd argue that this is not a limitation to spreading the principles
 of Free Software to other areas but a precondition.

 Felix' argument makes sense only if you assume that each little piece
 of work / effort needs to be sold. However, this is not true for
 *lots* of areas in human life. One instance close to software is the
 hobby sector where people spend lots of efforts including spending
 money. The only reward they get is the Selbstentfaltung they
 experience while doing their hobby.

This reminds me of a discussion I had about a year ago 

nettime RIP: Walter Ong

2003-08-18 Thread Felix Stalder
Rev. Walter J. Ong; traced the history of communication

By Mary Rourke, Los Angeles Times, 8/16/2003
http://www.boston.com/news/education/k_12/articles/2003/08/16/rev_walter_j_ong_traced_the_history_of_communication

LOS ANGELES -- The Rev. Walter J. Ong, a Jesuit priest and a leading scholar 
in the field of language and culture who traced the transition from oral to 
written communication in his more than 20 books, died Tuesday at St. Mary's 
Health Center in Richmond Heights, Mo., a suburb of St. Louis. He was 90.

In his writings and lectures, Father Ong explored the development of 
communication from its preliterate beginnings to its current reliance on 
radio, television, and the Internet. He was fascinated by the transition from 
one form of communication to another. He used ancient stories such as Homer's 
Odyssey to demonstrate that preliterate cultures relied on oral thought, 
in which the storyteller might contradict himself and the story itself might 
change over time until it was written down.

He contrasted oral tradition with the written, using the works of Greek 
philosophers Plato and Aristotle to illustrate the change. A written text 
relies on a set of ground rules for logical reasoning, as well as a 
consistent use of terms, to communicate information.

The two traditions influenced cultural values, Father Ong pointed out. 
Although an oral society places a high value on communal memory and the 
elders who are the main link to history, a literate one focuses on individual 
reasoning and introspection.

The rise of technology introduced other changes. In a high-tech culture, a 
person reads a novel and imagines a movie in his mind. The Internet blurs 
people's exterior and interior worlds. Virtual reality is no longer a private 
matter.

Father Ong's meticulous research on those developments helped lay the 
foundation for an understanding of modern media culture.

Some of his research corresponded with the work of his famous teacher, 
Marshall McLuhan, whose interest in the history of the verbal arts in Western 
culture inspired Father Ong to pursue his own studies.

He was McLuhan's student in graduate school when he completed a master's 
degree in English at St. Louis University. McLuhan was a faculty member from 
1937 to 1944. (Father Ong went on from there to earn a doctorate at Harvard 
University.)

Although McLuhan became a pop-culture guru in the 1960s -- "global village,"
his term for the interconnectedness of the world by mass media, is now
included in Webster's Dictionary -- Father Ong remained a scholar's scholar.
His writing style was dense and complex, not easy to grasp. His ideas were 
subtle and cumulative, not catchy. His most highly regarded book, for
example, is titled Orality and Literacy: The Technologizing of the Word
(1982).

"Ong is the sort of guy the experts read," said Thomas J. Farrell, whose book
Walter Ong's Contributions to Cultural Studies (Hampton Press, 2000) has
helped make the priest's work more accessible.

Born in Kansas City, Mo., on Nov. 30, 1912, Father Ong said he knew he wanted 
to be a priest from the time he was in high school. He entered the Society of 
Jesus in 1935 and was ordained in 1946.

He spent most of his teaching career in the English department at St. Louis 
University. He taught courses in Renaissance literature, his specialty, along 
with a range of others. He also lectured at Oxford University, Yale Divinity 
School, and a number of other top schools around the world until he retired 
in 1991.

Despite Father Ong's academic achievements, "he was first and foremost a
priest," Farrell said. He said daily Mass at 5:30 a.m., regularly heard
confessions, and wore cleric's garb wherever he went.

Father Ong's academic work only strengthened his belief in God. "God created
the evolving world, and it's still evolving," he told the St. Louis
Post-Dispatch in March 2002.


 
+---+-+---
http://felix.openflows.org

#  distributed via nettime: no commercial use without permission
#  nettime is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and info nettime-l in the msg body
#  archive: http://www.nettime.org contact: [EMAIL PROTECTED]


Re: nettime Six Limitations to the Current Open Source Development Methodology

2003-08-16 Thread Felix Stalder

Hi Ben,

 I would be hesitant to define the open source approach
 solely or even primarily in terms of the characteristics you mention.

Perhaps I did not put it as clearly as I should have. I did not mean to 
characterize the open source approach in terms of its internal 
organization. Rather, my focus was on the characteristics of the problems to 
which it has been so far applied successfully.

I totally agree that, from an organizational point of view, the points you list,
such as open participation, are very important. Your list is fully consistent
with my elaborations. The fact that software, or an encyclopedia, does not come
with any product liability *does* facilitate open collaboration. If you
could sue, say, the Apache Software Foundation for a server crash, or 
Wikipedia for erroneous information, I'm sure their development model would 
look different.

 The Open Organizations project (http://www.open-organizations.org) is an
 attempt to synthesize these principles, and some others, into a workable,
 general-purpose model.

I'm skeptical about the possibility of a workable, general-purpose
model. My post was about the fact that the type of problem affects the
social organization through which the solution is being developed.
Different types of problems demand different types of organizations to
address them. You cannot organize the development of drugs the same way
you organize the development of software. For one, very few people would
be willing to be beta-testers.

There are certain aspects that will be universal to all open development
processes, such as common ownership of knowledge. However, the type of
social organization in which commonly owned knowledge can be created will
be vastly different depending on the type of knowledge.

So far, we have learned how to create commonly owned knowledge as long as
the type of knowledge exhibits, among others, the six characteristics I
listed. The next round of social innovation is about finding ways to
create free knowledge / information in other areas as well.




+---+-+---
http://felix.openflows.org

#  distributed via nettime: no commercial use without permission
#  nettime is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and info nettime-l in the msg body
#  archive: http://www.nettime.org contact: [EMAIL PROTECTED]


nettime Six Limitations to the Current Open Source Development Methodology

2003-08-14 Thread Felix Stalder

Six Limitations to the Current Open Source Development Methodology

The Open Source approach to developing informational goods has been
spectacularly successful, particularly in the area for which it was
developed, software. Beyond software, too, there are important, successful
Open Source projects such as the free encyclopedia Wikipedia; collaborative
writing/publishing sites such as kuro5hin.org; and the Distributed
Proofreading Project, attached to the Gutenberg Project.

However, particularly outside the software domain, the Open Source projects
remain relatively marginal. Why? Some of it can be explained by the relative
newness of the approach. It takes time for new ideas to take hold and to be
transferred successfully from one context to another. But this is only part
of the story. The other part is that the current development model is based
on a number of specific, yet unacknowledged conditions that limit its
applicability to more diverse contexts, say music distribution or drug
research.

The boundaries to the open production model as it has been established in the
last decade are set by six conditions characterizing virtually all of the
success stories of what Benkler called commons-based peer production. The
following list is a conceptual abstraction, a kind of ideal-type. The actual
configuration and relative importance of each condition varies from project
to project, but taken together they indicate the boundaries of the current
model. In this elaboration, I draw from examples of free and open source
software, but it would be simple to illustrate these limitations with open
content projects.


1) Producers are not sellers

The majority of professional, i.e. highly-skilled, programmers do not draw their
economic livelihood from directly selling the code they write. Many work for
organizations that use software but do not sell it, for example as system
administrators. For them the efficient solution of particular problems is of
interest, and if that solution can be found and maintained by collaborating
with others, the sharing of code is not an issue. For others employed in
private sector companies, for example at IBM, the development of free
software is the basis for selling services based on that code. The fact that
some people can use that code without purchasing the services is more than
off-set by being able to base the service on the collective creativity of the
developer community at large. From IBM's point of view, the costs of
participating in open software development can be regarded as 'capital
investment' necessary for the selling of the resulting product: services.

For members of academia (faculty and students), writing code, but not selling
it (which is often explicitly prohibited), contributes to their professional
goals, be it as part of their education, be it as part of their professional
reputation-building. For them, sharing code is not only part of their
professional advancement, but an integral part of the professional culture
that also sustains them economically, in the form of salaries for the faculty
and stipends for the (graduate) students.

Last but not least are all those who use their professional skills outside
 the professional setting, for example at home on evenings and weekends.
 Having already secured their financial stability, they can now pursue other
 interests using the same skill set.

2) Limited capital investment

Particularly the last, and very important, group of people, those who work
outside the institutional framework on projects based on their own
idiosyncratic interests, can only exist due to the fact that the means of
production are extraordinarily inexpensive and accessible. Materially, all
that is needed is a standard computer (often even a substandard one would
already suffice) and a fast, reliable connection to the communication forums
of the community. Of course, the computer and the network rely on a level of
infrastructure that cannot be taken for granted in large parts of the world,
but for most people in the centers of development, they are within relatively
easy reach.

Once this access to the means of communication is secured, the skills
necessary to participate in the development of code can also be acquired
collaboratively, free of charge. The number of self-taught programmers is
significant. Since no expensive diplomas are necessary to become active, the
financial hurdle is, indeed, extraordinarily low.


3) High number of potential contributors

Programming knowledge is becoming relatively common knowledge, no longer
restricted to an engineering elite, but widely distributed throughout
society. Of course, truly great programmers are rare as truly great artists
are, but average professional knowledge is widely available. This has a
quantitative and a qualitative dimension. Quantitatively, the number of able
programmers is in the millions, and rising. Qualitatively, the range of
people capable of programming is also unusually wide, 

nettime Open Source translation of Harry Potter

2003-07-06 Thread Felix Stalder
After distributed proofreading [1], now distributed translating.

According to this website [2] more than 1000 people contributed to the 
translation into German of Harry Potter 4. Now, they are translating volume 
5, which has been released in English but not yet in German. 

The way it works: volunteers sign up, then they are assigned 5 pages to
translate within 4 weeks (they have to procure the English original
themselves). If the translation meets the required standards, the
contributor will receive access to the other translated pages. To ensure a
certain consistency, there is a special HP dictionary [3]. In addition, this
project also translates additional chapters written by HP fans,
English-German and German-English.

The translated texts are not available to the public at large. They only 
circulate within the community of translators and others who contribute to 
the project. This, it seems, helps to keep the publisher from getting 
nervous. It's a typical fan project, as they encourage people to translate
a set of pages on their own, even if a translation already exists, and the 
stated motivation is a) fun and b) the act of translating leads to a deeper 
understanding of the text.
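
For readers who want to picture the mechanics, here is a minimal sketch, in
Python, of the workflow as described above: 5 pages per volunteer, a 4-week
deadline, and access to the shared pages only after an accepted contribution.
All names and structures are hypothetical assumptions on my part; the actual
site's implementation is not described in the post.

from dataclasses import dataclass, field
from datetime import date, timedelta
from typing import Dict, Optional, Tuple

PAGES_PER_ASSIGNMENT = 5          # as described in the post
DEADLINE = timedelta(weeks=4)     # as described in the post

@dataclass
class Volunteer:
    name: str
    approved: bool = False                               # True once a submission meets the standards
    assignment: Optional[Tuple[int, int, date]] = None   # (first page, last page, due date)

@dataclass
class Project:
    next_page: int = 1
    translations: Dict[int, str] = field(default_factory=dict)  # page -> translated text

    def assign(self, volunteer: Volunteer) -> None:
        """Hand out the next block of pages with a deadline."""
        first = self.next_page
        last = first + PAGES_PER_ASSIGNMENT - 1
        volunteer.assignment = (first, last, date.today() + DEADLINE)
        self.next_page = last + 1

    def submit(self, volunteer: Volunteer, pages: Dict[int, str],
               meets_standards: bool) -> None:
        """Accept a submission; approval unlocks the shared pages."""
        if meets_standards:
            self.translations.update(pages)
            volunteer.approved = True

    def read(self, volunteer: Volunteer, page: int) -> str:
        """Translations circulate only among approved contributors."""
        if not volunteer.approved:
            raise PermissionError("only approved contributors may read the shared pages")
        return self.translations[page]

# Example: a volunteer is assigned pages 1-5 and gains access to the rest
# only after an accepted submission.
project = Project()
anna = Volunteer("Anna")
project.assign(anna)
project.submit(anna, {1: "Seite 1 ...", 2: "Seite 2 ..."}, meets_standards=True)
print(anna.assignment, project.read(anna, 1))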



[1] http://www.pgdp.net/c/default.php
[2] http://www.harry-auf-deutsch.de/
[3] http://www.hp-fc.de

 
+---+-+---
http://felix.openflows.org

#  distributed via nettime: no commercial use without permission
#  nettime is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and info nettime-l in the msg body
#  archive: http://www.nettime.org contact: [EMAIL PROTECTED]


nettime nettime 3000?

2003-03-04 Thread Felix Stalder

Right at the time when nettime reached the arbitrary yet symbolic number
of 3000 subscribers, the number of error messages flooding the nettime
system reached such proportions (several hundreds a day) that we were
finally forced to go through the boring process of unsubscribing those
addresses that were clearly broken. Within days, nettime got purged of 10%
of its subscribers.

All in all, this was an utterly unspectacular process, spring cleaning if
you will, but it makes me wonder, nevertheless, what kind of community
this is in which 10% of the 'members' are dead, so to speak.

So, what kind of community is it? Clearly, it's no longer the hybrid
structured by the two intersecting vectors of online exchanges and
off-line events, back-packing on the European media festival circuit.
These ain't the 90s anymore. Rather, the last time (as far as I know) a
significant number of 'nettimers' were physically in the same place -- at
the WOS II in late 2001 in Berlin -- was a non-event. Just a bunch of
people happening to be together drinking beer in clubs where one could not
communicate with anyone who was more than 1 meter away. There was no sense
of being a group; rather, communication unfolded as a series of friendly,
or disinterested, individual encounters. Physically, there was no
many-to-many communication, just one-to-one.

At the same time, nettime in terms of its online exchanges is doing quite
well. It's a stable, reliable, perhaps a bit predictable (the flip side of
reliable), long-term project. I personally don't know of another list that
is comparable in terms of breadth and quality of content.

It seems that, as a community, nettime has been moving in the opposite
direction of what is usually understood as the normal 'maturing' process
of a virtual community, namely, that on-line exchanges sooner or later
create the desire for off-line meetings. For nettime, off-line events --
meetings, paper publications -- were crucially important initially but
steadily declined to the point that when the last nettime publication
appeared (as part of Vuk Cosic's Biennale catalogue) only a fraction of
list subscribers (perhaps not even all of those whose texts were
reprinted) even noticed.

A lot of this has to do with the subscriber base becoming more diverse
(geographically, socially, intellectually), the early enthusiasm wearing
off and the distributed, non-ownership, volunteer model showing its
conservative tendencies. This needs to be qualified. Ownership here is not
understood in the sense of being the property of someone, but in the
sense of 'taking ownership' and assuming responsibility.

Who is responsible for nettime? Of course, there are some
responsibilities.  If the email server goes down, the phone at The Thing
will ring. If something on the web server needs to be changed, the action
is in Amsterdam. And the moderation does daily maintenance work.

But no one is responsible in the sense of being able to make decisions beyond
minor tinkering. So, things stay the same as far as the technical side is
concerned. Nevertheless, socially, things have changed quite a bit, the
community has become more virtual in all senses.

Perhaps, this has to do with the relative maturing of other networks, say
social forums, art festivals or conferences, which are more efficient at
providing real meeting places for more narrowly defined (but more
populous)  groups whose sense of community is more comprehensive. In a
way, nettime has always defined itself negatively. Being sponsored by art
institutions, but not being an art project itself. Having lots of
intellectuals on board, but being non-academic. Having a strong political
slant, but not being affiliated with any particular segment of the
multitude.

In a time when institutions enjoy new-found respect, nettime, once
again, goes against the trend, becoming more virtual, more distributed,
more ephemeral. This process is not explicit, but it's clearly felt, as
could be witnessed in the last major discussion on the list which, by no
means a coincidence, was about the institutionalization of another once
hybrid project: rhizome. A discussion that, from the outside, was supremely
absurd -- after all, how important is a $5 membership fee, really -- but
that, from the inside, seemed to touch a strange chord, one that indicates
that nettime still has a 'sense of self', which, not surprisingly, is
still defined negatively.






+---+-+--- 
http://felix.openflows.org

#  distributed via nettime: no commercial use without permission
#  nettime is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and info nettime-l in the msg body
#  archive: http://www.nettime.org contact: [EMAIL PROTECTED]


Re: nettime anti-piracy goons considered harmful

2003-02-04 Thread Felix Stalder
At 03.02.03 19:14, Morlock Elloi wrote:

The only way to benefit from openness is to use it and verify yourself,
instead of deluding yourself that someone out there will spend days
doing that for ... what?

There are certainly advantages to doing things yourself (just ask all the 
guys hanging around 'home depot'), but there are also clear limitations to 
it. In how many areas can one be truly proficient? In very few, at best. I 
think it was said of Goethe that he was the last person to be able to 
command the entire (scientific) knowledge available at the time. The
Germans even have an expression for this: Universalgelehrter. This,
unfortunately, was nearly 200 years ago, and the amount of knowledge
available has exploded many times over, to the degree that there is probably
nobody around who fully understands even a clearly circumscribed domain such
as a computer.

I know nothing about aviation (beyond stretching my arm out of the window of a
speeding car), but I still have a couple of frequent flyer accounts. Does
that make me a naive fool? Not necessarily, since there are social 
institutions around, say the FAA in the US, whose mandate is to ensure 
aviation safety. They verify the safety of airplanes, airports etc. Now, 
the trick for such institutions to work is that a) there need to be the 
resources around to get the job done, and b) the conditions need to be 
right so that the job is doable at all.

With respect to software, if you do not have access to the source code, there
is very little you can do, no matter what your resources are, in order to
check the specifics of the program, particularly not with regard to hidden
features or bugs. In effect you are forced to blindly trust the vendor of 
the software. The vendor, of course, has an interest in maintaining the 
reputation of the product, so he will never tell you that something is 
wrong with it (particularly since there is no liability). Opening up the 
source code, at the very least, provides the conditions under which the job 
of verifying the software becomes doable.

Of course, that does not mean necessarily that someone with a keen eye is 
actually doing it. Which gets us to the question of where the resources 
come from to do the checking. This clearly is a tricky problem. What are 
the social institutions supporting OS development in the long run? While 
much remains to be developed, it's not that we are standing at the
beginning of the process. The way OS projects are organized --
collaboratively and openly -- optimizes the chances that bugs are found and
minimizes the possibilities that someone is able to hide a feature in it.
Furthermore, only one person has to find the bug (and fix it) for the fix to
become available to all users. On the other hand, even if you find a bug in
an M$ program, chances are your neighbour will never know it, because you
are not allowed to tell him and M$ won't do it.
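
As a back-of-the-envelope illustration of what "optimizes the chances" can
mean here -- under my own simplifying assumption, not a claim made in this
post, that reviewers look at the code independently and each spots a given
bug with the same small probability -- the chance that at least one of many
reviewers finds the bug grows quickly with their number:

# Sketch only: assumes n independent reviewers, each finding a given bug
# with probability p; at least one finds it with probability 1 - (1 - p)**n.

def chance_bug_found(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

for n in (1, 10, 100, 1000):
    print(n, round(chance_bug_found(0.01, n), 3))
# roughly: 1 -> 0.01, 10 -> 0.096, 100 -> 0.634, 1000 -> 1.0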

Note that I say "optimizes the chances" and "one person has to find the
bug" -- both are strong conditionals. There is no guarantee here. But
doing it yourself is not really one either, since how do you know that you
fully understood the code? Better assume you don't. I guess there were a lot of
intelligent people looking at the source code of PGP and still, a bug
eluded all of them for a long time. Chances are that, as long as nobody had
found the bug, nobody could exploit it. But once the bug was found, it was
published, readily increasing the chances of it being fixed.

The answer to the imperfections of OSS is not to verify everything yourself;
after all, the answer to the difficulties of writing good software is also not
to write it yourself, but to distribute the process to those willing and able
to do it. What we need to find now are institutions capable of sustaining
this process. So far, OSS hasn't done badly on this front either.

Felix





--|-
http://felix.openflows.org

#  distributed via nettime: no commercial use without permission
#  nettime is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and info nettime-l in the msg body
#  archive: http://www.nettime.org contact: [EMAIL PROTECTED]



Re: nettime revenge of the concept

2003-01-27 Thread Felix Stalder
  In his excellent paper,
Coase's Penguin: Linux and The Nature of the Firm, Yochai Benkler
explains, not the motivation, but the technical and legal
preconditions for cooperative informational and cultural production.
The technical considerations are basically: telematically interlinked
personal computers. The legal precondition is basically: that
information be treated as what it arguably is, a non-rivalrous
good, i.e. a resource that can't run out, that can't be destroyed in
the using, and that therefore cannot be treated as an ownable
commodity. Benkler's conclusion is that networked informational and
cultural production obeys neither the constraints of a firm (with a
bureaucratic organization), nor the price signals given by a market
(buy and sell are irrelevant to non-rivalrous goods). So Benkler
is talking about a form of production which is at once
non-bureaucratic and, yes, non-capitalist, i.e. divorced from that
complex and changeable human institution which transnational state
capitalism now dominates almost entirely: the market.


I think 'non-capitalism' is a misreading of Benkler's argument and of the
Open Source Software phenomenon. I deliberately say OSS and not Free
Software, since such a reading might apply more narrowly to FS (though I'm
not even sure about that) but certainly not to OSS in general. I think
(non)capitalism is a category that doesn't help much in explaining the
practice of OSS (as opposed to some of the political theories that
motivate some of the FS/OSS figures).

What Benkler said in this essay, which is indeed brilliant, is that
conventional economists know of only two ways to organize production:
within a closed organization (the firm, the bureaucracy) and in an open
system (the market). The question is always: how to achieve the efficient
organization of people and resources in regard to a desired productive
outcome. The signals used to achieve this coordination within the closed
structure are commands relayed through hierarchies. In the open structure,
it's money: prices attached to goods (and services). What he claims now,
and I basically agree with him, is that a third way of organizing labour
has emerged, heavily relying on the Internet. He calls it 'commons-based
peer production.'

Now, there are capitalist firms and non-capitalist 'firms' (state 
bureaucracies, co-ops) and there are capitalist and non-capitalist markets. 
A traditional farmer's market, for example, is not a capitalist market. 
Just remember Fernand Braudel's distinction between markets and 
anti-markets which Manuel DeLanda dusted off a few years ago (check the 
nettime archives).

In the same sense, there is capitalist 'commons-based peer production' 
(think of Amazon's way to recommend books, or IBM's investment in Linux, 
Redhat and so on). There's also non-capitalist 'commons-based peer 
production' (say, GNU, Debian, Wikipedia, nettime and so on).

What is perhaps most interesting is how the 'capitalist' and 
'non-capitalist' elements intersect and what that might tell us about the 
political dimension of these movements. I think it's exactly this hybridity 
(along with the limitation to non-rivalrous goods and even more, to 
'functional works') that makes the OSS phenomenon very interesting but only 
of limited value as a political project (which is not necessarily a bad 
thing).


Felix




--|-
http://felix.openflows.org

#  distributed via nettime: no commercial use without permission
#  nettime is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and info nettime-l in the msg body
#  archive: http://www.nettime.org contact: [EMAIL PROTECTED]