Re: The Society of the Unspectacular

2007-06-11 Thread Felix Stalder
On Sunday, 10. June 2007 19:42, Morlock Elloi wrote:

> If "empowerment" of the public by cheap self-publishing has demonstrated
> anything, it is that a vast majority has nothing to say, lacks any
> detectable talent and mimics TV in publishing the void of own life (but
> unlike TV they derive no income from commercials.)

If media are made by, and for, one's own community (which might be very 
small), then talent and excitement are measured very differently. The 
material on YouTube etc. is "boring", mainly, I guess, because it was not 
made for you. Most of us produce lots of stuff that is boring to all but a 
handful of people. But to them, it's great. It's the stuff that used to 
be called private, but is now online because it's the easiest way to get 
to the intended audience of 5 (or 500, or 5000).

> So I wouldn't say that the classical notion of "public" has changed in
> the sense that it got fragmented around "new media". It's "new media"
> giving content-free personal smalltalk the ability to be globally
> visible (not that anyone looks at it in practice, but they could, in
> theory.)

The technical possibility that "everyone" can watch it points in entirely 
the wrong direction. It doesn't mean that everyone should watch it, 
it only means that the size of the audience is not determined on the level 
of the technical protocol but can scale freely up or down.

This does, in some form, lead to a fragmentation of the public, not 
least because the "public" in modern democracies was constituted through 
the narrow bandwidth of mass media. Though I'm not sure if this is the 
reason, as Eric suspects, for the very manifest trend of governments 
withdrawing from public discourse. Yet, for whatever reason, there seems 
to be an inverse relationship between the degree of privacy of ordinary 
people and the secrecy of governments. 

Felix

--- http://felix.openflows.com - out now:
*|Manuel Castells and the Theory of the Network Society. Polity, 2006 
*|Open Cultures and the Nature of Networks. Ed. Futura/Revolver, 2005 


#  distributed via <nettime>: no commercial use without permission
#  <nettime> is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: [EMAIL PROTECTED]


popular info warfare, narcocorridos, and youtube

2007-04-27 Thread Felix Stalder


[This is from one of the most interesting sources analyzing organized
criminality, from a staunchly structural point of view. New modes of
organisation (terrorist/criminal networks) operate against old modes
of organisation (states and armies) and transform each other in the
process. Or, as the subtitle of the blog says, "networked tribes,
infrastructure disruption, and the emerging bazaar of violence.
An open notebook on the first epochal war of the 21st Century".
Frightening but enlightening. Felix (the blog contains numerous links,
which are not reproduced here).]



INFO WARFARE, NARCOCORRIDOS, AND YOUTUBE
http://globalguerrillas.typepad.com/globalguerrillas/2007/04/journal_the_neg.html

"Many of these ballads [narcocorridos, or drug trafficker's
ballad] are in the classic Medieval style, and they are an
anachronistic link between the earliest European poetic traditions and
the world of crack cocaine and gangsta rap." Elija Wald.

"Following the model of terrorist groups such as al-Qaeda,
the cartels have discovered the Web as a powerful means of
transmitting threats, recruiting members and glorifying the
narco-trafficker lifestyle of big money, big guns and big thrills."
Manuel Roig-Franzia, writing for the Washington Post

The best way to view Mexican narcocorridos, or ballads to drug
traffickers, is as a form of information warfare directed
simultaneously at:

* internal/general audiences (to enhance the prestige of 
  affiliation and attract adherents),
* the opposition (to demoralize and provoke),
* the state (to demonstrate its impotence through 
  brazen announcements of intent). 

Here's an example. The popular Mexican singer Valentin "The Golden
Rooster" Elizalde wrote a paean to the Sinaloan cartel that vilified
the Zetas/Gulf Cartel. The song's video was posted on YouTube
(featuring bodies of killed cartel members from news clips). The
Gulf Cartel/Zetas sent a return message by killing Elizalde and his
manager outside a concert, punctuated by a YouTube video of Elizalde's
autopsy.

Popular Infowar

Historically, information warfare was restricted to elites
(government, media, parties, etc.). The onrush of Jihadi videos,
political pro-war/anti-war blogs, and narcocorrido videos has
categorically demonstrated that this state of affairs has changed. We
now live in a world where infowarfare is accomplished by individual
practitioners through an open source framework. Over time, the gap
between those in the open source framework and the elites will widen
in favor of the former -- we have only scratched the surface of
where this empowering technology can go.

While the new media infowar will be chaotic (as much against each
other as for or against the state), the bulk of the momentum will be
with those that represent revisionist forces, namely, those groups
that want to change the status quo. Here's an example: Lee Garnett
at PostPolitical notes an important transition already occurring in
narcocorridos:

 However, more recent slayings have shown a marked tendency to try
to transcend the limits of revenge. The videos have begun depicting
the killers as vigilantes, bringing justice to the streets by killing
off members of their hated rival cartels, which are depicted as the
enemies of the people.









--- http://felix.openflows.com - out now:
*|Manuel Castells and the Theory of the Network Society. Polity, 2006 
*|Open Cultures and the Nature of Networks. Ed. Futura/Revolver, 2005 


- End forwarded message -



Re: sad news

2007-04-25 Thread Felix Stalder

This is very sad news, indeed. As Trebor Scholz wrote, Ricardo Rosas
saw and established connections where few people could perceive them,
let alone make them work. Yet, once he pointed them out and set
out to bring them into the world, they were natural. He introduced a
lot of people, including myself, to Brazil and to a world of ideas,
cosmopolitan and uniquely personal at the same time. He did so in the
most humane way possible, by having long conversations, zig-zagging
through São Paulo, disappearing and turning up again with more people,
more connections, more things to do. I was always convinced our paths
would cross again, that there would be plenty of time for more drinks,
walks, and conversations. It would have been the most natural thing in
the world. Now it won't be.

Felix





--- http://felix.openflows.com - out now:
*|Manuel Castells and the Theory of the Network Society. Polity, 2006 
*|Open Cultures and the Nature of Networks. Ed. Futura/Revolver, 2005 





Re: "call for blogging code of conduct"

2007-04-02 Thread Felix Stalder

I only followed this very tangentially, but from what I can
gather, I think Geert is right. In many ways, this is old news. It's
a classic case of a community where all members used to have all
the rules internalized, i.e. they were 'voluntarily' adhering to
them, so there was no need to enforce them. As the community grows,
people join who do not repress their idiosyncratic urges in the same
way. Now the community comes under stress over how to deal with them.
The illusion of voluntary consensus has been shattered. They have to
find a way to transform their "undifferentiated openness" into
something structured without killing off the community dynamics. One
might call this "sustainable openness". We've seen this with usenet,
email lists, and online forums, and now with blogging. Pretty similar
issues, including misogyny.

However, it's not a simple re-run in all respects. There's also
something that is definitely different from the earlier cases.
Blogging has become so big that it is not only attracting destructive
energies but has begun to matter in a mainstream way (usenet, email
lists, and online forums never did). At least in the hypersensitive US
political scene, "a-list" bloggers wield considerable influence (or at
least, political operatives believe they do).

As a consequence, these bloggers have to decide how to deal with their
growing power and the politics that come with it. In other words, they
are slowly being transformed from observers into actors, from
independent, honest commentators into part of the inner circle.

Almost all candidates in the US presidential race are trying to
enlist bloggers in their campaigns as paid staff members. Even
those bloggers who remain nominally independent suddenly have their
postings examined under strategic considerations, i.e. readers and
other bloggers are beginning to wonder if there are hidden preferences
and secret deals. And even those bloggers who remain truly independent
have to watch what they say, otherwise some "provocative" posting
will come back to haunt them should they decide to become full-time
political consultants at a later point. This just happened to two
bloggers who had to resign from the Edwards campaign.

This, actually, strikes me as much more critical than some people 
spewing hate in easy-to-control comment sections. Monitoring comments 
is a task that can be accomplished easily and cheaply. Slashdot solved
that years ago.  

The real issue with respect to political blogging is this: how to
exert real mainstream influence without becoming subsumed under the
existing logic of power? Quite difficult to pull off.

My hunch is that political bloggers will turn into affiliated
advocates, more or less aligned with the current power (and
counter-power) structures. Running a successful blog is the ticket
to enter the establishment (be it the political or the media one, if
one cares to differentiate between them).

Felix





--- http://felix.openflows.com - out now:
*|Manuel Castells and the Theory of the Network Society. Polity, 2006 
*|Open Cultures and the Nature of Networks. Ed. Futura/Revolver, 2005 





Torrents of Desire

2007-03-08 Thread Felix Stalder

[This is a revised version of a talk I gave late last year:
http://mail.sarai.net/pipermail/newsletter/2006-November/000170.html]


Torrents of Desire and the Shape of the Information Landscape

We are in the midst of an uneven shift from an information environment
characterized by scarcity of cultural goods to one characterized
by their abundance. Until very recently, even privileged people
had access to a relatively limited number of news sources, books,
audio recordings, films and other forms of informational goods. This
was partly due to the fact that the means of mass communication
were expensive, cumbersome and thus relatively centralized. In
this configuration, most people were relegated to the role of
consumers, or, if they lacked purchasing power, not even that. This
is changing. The Internet is giving ever greater numbers of people
access to efficient means of mass communication, and p2p protocols
such as BitTorrent are making the distribution of material highly
efficient. For reasons to be examined further below, more and more
material is becoming freely available within this new information
environment. As a result, the current structure of the culture
industries, in Adorno's sense,[1] is being undermined, and with it,
deeply-entrenched notions of intellectual property. This is happening
despite well-orchestrated campaigns by major industries to prevent
this shift. The campaigns include measures ranging from the seemingly
endless expansion of intellectual property regulations across the
globe, to new technologies aimed at maintaining informational scarcity
(digital rights management (DRM) systems), to mass persecution of
average citizens who engage in standard practices on p2p networks.

As a consequence, we are in the midst of a pitched battle. On one side
we have organized industries, with their well-honed machines of political
lobbying and armies of highly-paid lawyers and technologists.
Strangely enough, on the other side, we do not have any
powerful interests or well-organized commercial players. Rather, we
have a rag-tag collection of individuals and small groups, including
programmers who develop open source tools to efficiently distribute digital
files; administrators running infrastructural nodes for p2p networks
out of their small ISPs (Internet Service Providers) or using cheap
hosted locations; shadowy, closed "release groups" who specialize in
circumventing any kind of copy-protection and making works available
within their own circles, often before they are available to the
public; and, finally, millions of ordinary computer users who prefer
to get their goods from the p2p networks, where they are freely
available (not just free of charge, but also without DRM) and where
they can, if they wish, release their own material just as easily.

Usually, as political thinkers from Niccolò Machiavelli to Lawrence
Lessig will tell you, well-organized, entrenched interests are at
an advantage over the forces of innovation, which tend to be poorly
organized at the beginning.[2] And, looking at the legal arena, there
is plenty of reason to be pessimistic. Yet, looking at the social
arena, where what people actually do counts, things look different.
Despite new and tougher laws and legal persecution, p2p networks
are prospering, to the degree that they account for 50-80% of global
internet traffic, depending on region and time of day.

So, how come such an unorganized group of people, who agree on
very little, who have neither a coherent ideology, nor a business model,
nor even much of a self-consciousness as a group, manages to challenge,
if not overrun, well-organized sectors of industry and, in effect,
dramatically change the informational landscape? Having excluded
ideology and business, the short answer that remains is: desire, raw and
unchecked. When we think of desires, we usually think of needs. This
was most consequentially formalized by the social psychologist Abraham
Maslow (1908-1970), who developed a pyramid of needs as an explanation
of human motivation, ranging from the physiological (breathing, food,
sleep, sex etc.) at the bottom, to "self-actualization" (morality,
creativity etc.) at the top.[3] Following this, we could think of p2p
networking as filling a need for people whose basic survival is not in
question and who can now address a lack of informational goods. People
are finally getting the information they've always wanted but could
not access, either because the materials were not available or because
they were priced out of their range. While it's easy to see how such an
explanation holds a certain validity, for example in the context of
circumventing censorship, I think it's far too limited to account for
the full force of the p2p phenomenon.

_Desire rather than need_

Rather, it's more fruitful in this case to view desire not as
something resulting from a lack but, as Deleuze and Guattari
suggested, as a primary productive force, as an unarticulated
will-to-existence.[4] Th

Re: shocklogs wikipedia entry

2007-02-08 Thread Felix Stalder
On Wednesday, 7. February 2007 16:33, Geert Lovink wrote:

> What is kind of amazing is the Anglo-Saxon language policing, which
> term is and is not 'proper' English. An (English) wikipedia entry
> cannot be valid if it is based on 'foreign language' sources -- how
> about that? Wikipedia is not a dictionary and in fact there are many
> Englishes, so it makes you wonder why in particular 'neologisms' are
> targeted, and not names of (famous) persons, as Pit Schulz mentioned.

I think the case against neologisms is pretty strong in an 
encyclopedia which aims to document the state of established factual  
knowledge, rather than advance it.

The case against using exclusively non-English sources is pretty
strong, too. Wikipedia, as a whole, is a global project, whereas the
English Wikipedia is, well, an English-language project. Sure, there
are many Englishes these days, but they are, still, Englishes. I
don't think anyone at en.wikipedia would object to a source, or even
an article, written in Indian English, or Jamaican English, or in a
global ESL English. For the sake of transparency, en.wikipedia relies
on English-language sources. I mean, I don't read Dutch, so there is
no way for me to check whether these sources are relevant. I would not
call this 'language policing'.

I'm pretty sure there is a Dutch-language version of Wikipedia (I
never checked and I'm offline right now) where using Dutch-only
sources is perfectly valid (I guess).

Felix





--- http://felix.openflows.com - out now:
*|Manuel Castells and the Theory of the Network Society. Polity, 2006 
*|Open Cultures and the Nature of Networks. Ed. Futura/Revolver, 2005 






Energy Consumption of an Avatar in Second Life

2007-02-07 Thread Felix Stalder

from http://www.roughtype.com/archives/2006/12/avatars_consume.php

He quotes Philip Rosedale, the head of Linden Lab, the company behind
the virtual world: "We're running at full power all the time, so
we consume an enormous amount of electrical power in co-location
facilities [where they house their 4,000 server computers]. We're
running out of power for the square feet of rack space that we've
got machines in. We can't for example use [blade] servers right now
because they would simply require more electricity than you could get
for the floor space they occupy."

If there are on average between 10,000 and 15,000 avatars "living"
in Second Life at any point, that means the world has a population
of about 12,500. Supporting those 12,500 avatars requires 4,000
servers as well as the 12,500 PCs the avatars' physical alter egos are
using. Conservatively, a PC consumes 120 watts and a server consumes
200 watts. Throw in another 50 watts per server for data-center
air conditioning. So, on a daily basis, overall Second Life power
consumption equals:

(4,000 x 250 x 24) + (12,500 x 120 x 24) = 60,000,000 watt-hours
or 60,000 kilowatt-hours

Per capita, that's:

60,000 / 12,500 = 4.8 kWh

Which, annualized, gives us 1,752 kWh. So an avatar consumes 1,752
kWh per year. By comparison, the average human, on a worldwide basis,
consumes 2,436 kWh per year. So there you have it: an avatar consumes
a bit less energy than a real person, though they're in the same
ballpark.
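
[A minimal Python sketch that reproduces the back-of-the-envelope
arithmetic above, using only the figures quoted in the post; the
365-day year used for annualizing is the one added assumption.]

# Reproduce the Second Life energy estimate from the figures above.
servers = 4000            # co-located server machines
avatars = 12500           # average concurrent avatars (midpoint of 10,000-15,000)
server_watts = 200 + 50   # 200 W per server plus 50 W for data-center cooling
pc_watts = 120            # one PC per concurrent avatar

daily_wh = (servers * server_watts * 24) + (avatars * pc_watts * 24)
daily_kwh = daily_wh / 1000                         # 60,000 kWh per day
per_avatar_daily_kwh = daily_kwh / avatars          # 4.8 kWh per avatar per day
per_avatar_yearly_kwh = per_avatar_daily_kwh * 365  # 1,752 kWh per year

print(per_avatar_yearly_kwh)         # 1752.0
print(per_avatar_yearly_kwh / 2436)  # roughly 0.72 of the world per-capita average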

<...>

UPDATE: In a comment on this post, Sun's Dave Douglas takes the
calculations another step, translating electricity consumption into
CO2 emissions. (Carbon dioxide, he notes, "is the most prevalent
greenhouse gas from the production of electricity.") He writes:
"looking at CO2 production, 1,752 kWH/year per avatar is about 1.17
tons of CO2. That's the equivalent of driving an SUV around 2,300
miles (or a Prius around 4,000)."










Machine Writing / Machine Reading

2007-01-17 Thread Felix Stalder

[This is how my script-based reading system (SpamAssassin) interprets 
Alan's script-based writing. The point is not that it (mis)qualifies as 
spam, but the interpretative rules that come into effect. Felix]

> - Forwarded message from Alan Sondheim <[EMAIL PROTECTED]> -
>
> X-Spam-Flag: YES
> X-Spam-Checker-Version: SpamAssassin 3.1.7 (2006-10-05) on
>  chavez.mayfirst.org X-Spam-Level: ***
> X-Spam-Status: Yes, score=3.9 required=3.5 tests=AWL,BAYES_50,
>   DATE_IN_PAST_24_48,DRUGS_PAIN,LONGWORDS,UNIQUE_WORDS autolearn=no
>   version=3.1.7
> X-Spam-Report:
>   *  0.9 DATE_IN_PAST_24_48 Date: is 24 to 48 hours before Received: date
>   *  2.3 UNIQUE_WORDS BODY: Message body has many words used only once
>   *  0.0 BAYES_50 BODY: Bayesian spam probability is 40 to 60%
>   *  [score: 0.5000]
>   *  0.0 DRUGS_PAIN Refers to a pain relief drug
>   *  3.8 LONGWORDS Long string of long words
>   * -3.1 AWL AWL: From: address is in the auto white-list
> X-Virus-Scanned: Debian amavisd-new at chavez.mayfirst.org
> From: Alan Sondheim <[EMAIL PROTECTED]>
> Subject: *SPAM*  my life collapsed
> To: nettime-l@bbs.thing.net
> Reply-To: Alan Sondheim <[EMAIL PROTECTED]>
> X-Virus-Scanned: by amavisd-new-20030616-p10 (Debian) at openflows.org
> X-Spam-Prev-Subject:  my life collapsed
>
> this was culled/scraped from http://www.asondheim.org/biog.txt
> i have been working on an autobiography stemming from a simple perl
> program
> #!/usr/local/bin/perl -w
> # biography
> $| = 1;
> `cp .bio .bio.old`;
> print "Would you like to add to bio information? If so, type y.\n";
> chop($str=<STDIN>);
> if ($str eq "y") {print "Begin with date.\n";
> print "Write single line, use ^d to end.\n";
> open(APPEND, ">> .bio");
> @text=<STDIN>;
> print APPEND @text;
> close APPEND;}
> `sort -o .bio .bio`;
> exit(0);
> that gave me the opportunity for sorting and organizing memories; the
> program was then abandoned as the entries were flushed out; the prog-
> ram was then re-employed for new memories, and so forth. the result is
> unique among autobiogs, as is the entangled compression below:
>
>
> <...>







Re: Iraq: The Way Forward

2007-01-10 Thread Felix Stalder
On Friday, 5. January 2007 20:36, Michael H Goldhaber wrote:

> We have reached a crucial turning point in American history. The 
> November elections and current polls have made clear that Americans 
> have soured on the Iraq war, and want the troops to be withdrawn
> rapidly.

I'm not a close observer of American politics (how come Lieberman was 
re-elected?), but what strikes me as the really remarkable outcome of this 
election is that it revealed the total bankruptcy of the ideologies that 
have been dominant since the end of the Cold War: neo-liberalism (with its 
emphasis on freedom) and neo-conservatism (with its emphasis on 
security), which have produced not freedom and security but abandonment 
and fear. Neoliberalism had to declare bankruptcy a while ago, but 9/11 
provided the opportunity to swiftly replace it with its darker cousin, so 
the void was less obvious.

Now, we are in a situation where nobody has any good idea what to 
do. "Bringing the troops home now" is as unrealistic as "fighting for 
victory". What comes next? Nobody seems to know beyond short-term 
political tactical games. 

But while such disorientation might provide room for creative thinking, I'm 
not optimistic. The social conditions which have provided the mass basis 
for the acceptance of faith-based politics are still here. It's just that 
the war in Iraq is too manifestly disastrous to wish away.

Salon Magazine recently featured an interesting interview with Chris 
Hedges, NYT reporter (Bosnia, Middle East), and author of a new book on 
the US Christian right, "American Fascists", that seems directly relevant 
here.

http://salon.com/books/feature/2007/01/08/fascism/

> Since the midterm election, many have suggested that the Christian
> right has peaked, and the movement has in fact suffered quite a few
> severe blows since both of our books came out

It's suffered severe blows in the past too. It depends on how you view
the engine of the movement. For me, the engine of the movement is deep
economic and personal despair. A terrible distortion and deformation of
American society, where tens of millions of people in this country feel
completely disenfranchised, where their physical communities have been
obliterated, whether that's in the Rust Belt in Ohio or these monstrous
exurbs like Orange County, where there is no community. There are no
community rituals, no community centers, often there are no sidewalks.
People live in empty soulless houses and drive big empty cars on
freeways to Los Angeles and sit in vast offices and then come home
again. You can't deform your society to that extent, and you can't shunt
people aside and rip away any kind of safety net, any kind of program
that gives them hope, and not expect political consequences.

Democracies function because the vast majority live relatively stable
lives with a degree of hope, and, if not economic prosperity, at least
enough of an income to free them from severe want or instability.
Whatever the Democrats say now about the war, they're not addressing the
fundamental issues that have given rise to this movement.

> But isn't there a change in the Democratic Party, now that it's
> talking about class issues and economic issues more so than in the
> past?

Yes, but how far are they willing to go? The corporations that fund the
Republican Party fund them. I don't hear anybody talking about repealing
the bankruptcy bill, just like I don't hear them talking about torture.
The Democrats recognize the problem, but I don't see anyone offering any
kind of solutions that will begin to re-enfranchise people into American
society. The fact that they can't even get healthcare through is
pretty depressing.

> The argument you're now making sounds in some ways like Tom Frank's,
> which is basically that support for the religious right represents a
> kind of misdirected class warfare. But your book struck me differently
> -- it seemed to be much more about what this movement offers people
> psychologically.

Yeah, the economic is part of it, but you have large sections of the
middle class that are bulwarks within this movement, so obviously the
economic part isn't enough. The reason the catastrophic loss of
manufacturing jobs is important is not so much the economic deprivation
but the social consequences of that deprivation. The breakdown of
community is really at the core here. When people lose job stability,
when they work for $16 an hour and don't have health insurance, and
nobody funds their public schools and nobody fixes their infrastructure,
that has direct consequences into how the life of their community is
led.

I know firsthand because my family comes from a working-class town in
Maine that has suffered exactly this kind of deterioration. You pick up
the local paper and the weekly police blotter is just DWIs and domestic
violence. We've shattered these lives, and it isn't always economic.
That's where I guess I would differ with Frank. It's really the

Re: Michael Malone : Regulating Destruction

2006-12-28 Thread Felix Stalder
On Thursday, 28. December 2006 15:12, [EMAIL PROTECTED] wrote:
> Too little -- The reason why there are so few IPOs nowadays is that
> the Internet Bubble is over and is not coming back. This has nothing
> to do with "regulation." This is a cyclical change that follows a
> well-documented pattern, which is best described by Carlota Perez in
> her "Technological Revolutions and Financial Capital."

I wonder if the lack of IPOs also has something to do with the way
the IP mess shapes the current business landscape. At least partially
-- certainly with patents, but perhaps also with copyright -- IP has
become a defensive mechanism, a kind of arms race. You need a lot
of IP in order to prevent others, who also own extensive portfolios,
from trying to sue you out of existence. That would be easy, since it
has become very hard to do anything of any size without violating some
IP law somewhere, particularly with "social applications".

The effect is the formation of a kind of IP cartel, able to coordinate
its actions through cross-licensing, in ways that not only avoid
anti-trust legislation, but also make it impossible for any member
of the cartel to defect (because they would break the licensing
agreement). In such a situation, the only prospect for a start-up is
to be bought up by a large company able to prevent others from suing
it -- effectively, bringing it into the cartel.

This, it seems, is what happened with youtube.com, which is now in the
favorable position of cutting deals with the majors, who will then go
after the independent sites and sue them for copyright infringement. They
will even be able to cite their agreement with youtube.com as proof
that they are willing to play constructively.

Felix








--- http://felix.openflows.com - out now:
*|Manuel Castells and the Theory of the Network Society. Polity, 2006 
*|Open Cultures and the Nature of Networks. Ed. Futura/Revolver, 2005 





Re: Copyright, Copyleft and the Creative Anti-Commons

2006-12-14 Thread Felix Stalder
I'm not sure I understand the main thrust of the argument. 

On the one hand, GPL-type copyleft is criticized for not preventing the 
appropriation (or, more precisely, use) of code by commercial, capitalist 
interests. These still manage to move profits from labor (employees / 
contractors who are paid less than the value their labor produces) into 
the hands of capital (shareholders, I guess). 

On the other hand, Creative Commons, which precisely enables the "author" 
to prevent this through the non-commercial clause, is criticized for 
perpetuating the proprietary logic embodied in the "author" function 
controlling the use of the "work".

Which seems to leave us with the conclusion that within capitalism the 
structure of copyright, or IP more generally, doesn't really matter, because 
it either directly supports fundamentally flawed notions of property (à la 
CC), or it does not prevent the common resource from being used in support 
of capitalist ends (à la GPL). In this view, copyfights appear to articulate 
a "secondary contradiction" within capitalism, which cannot be solved as long 
as the main contradiction, that between labor and capital, is not 
redressed. 

Is that it?





Feral Cities

2006-09-25 Thread Felix Stalder
[In a recent article in the German magazine Telepolis [1], I found
a link to the following article, written by an academic at the Naval
War College, titled "Feral Cities". I had to look up the term "feral",
which appears to mean wild, in the sense of no longer domesticated.
[1] http://www.heise.de/tp/r4/artikel/23/23616/1.html]

FERAL CITIES
Richard J. Norton
Naval War College Review, Autumn 2003, Vol. LVI, No. 4
http://www.nwc.navy.mil/press/Review/2003/Autumn/art6-a03.htm


Imagine a great metropolis covering hundreds of square miles. Once a vital 
component in a national economy, this sprawling urban environment is now a 
vast collection of blighted buildings, an immense petri dish of both 
ancient and new diseases, a territory where the rule of law has long been 
replaced by near anarchy in which the only security available is that 
which is attained through brute power.[1] Such cities have been routinely 
imagined in apocalyptic movies and in certain science-fiction genres, 
where they are often portrayed as gigantic versions of T. S. Eliot’s Rat’s 
Alley.[2] Yet this city would still be globally connected. It would possess 
at least a modicum of commercial linkages, and some of its inhabitants 
would have access to the world’s most modern communication and computing 
technologies. It would, in effect, be a feral city.

Admittedly, the very term “feral city” is both provocative and 
controversial. Yet this description has been chosen advisedly. The feral 
city may be a phenomenon that never takes place, yet its emergence should 
not be dismissed as impossible. The phrase also suggests, at least 
faintly, the nature of what may become one of the more difficult security 
challenges of the new century.

Over the past decade or so a great deal of scholarly attention has been 
paid to the phenomenon of failing states.[3] Nor has this pursuit been 
undertaken solely by the academic community. Government leaders and 
military commanders as well as directors of nongovernmental organizations 
and intergovernmental bodies have attempted to deal with faltering, 
failing, and failed states. Involvement by the United States in such 
matters has run the gamut from expressions of concern to cautious 
humanitarian assistance to full-fledged military intervention. In 
contrast, however, there has been a significant lack of concern for the 
potential emergence of failed cities. This is somewhat surprising, as the 
feral city may prove as common a feature of the global landscape of the 
first decade of the twenty-first century as the faltering, failing, or 
failed state was in the last decade of the twentieth. While it may be 
premature to suggest that a truly feral city—with the possible exception 
of Mogadishu—can be found anywhere on the globe today, indicators point to 
a day, not so distant, when such examples will be easily found.

This article first seeks to define a feral city. It then describes such a 
city’s attributes and suggests why the issue is worth international 
attention. A possible methodology to identify cities that have the 
potential to become feral will then be presented. Finally, the potential 
impact of feral cities on the U.S. military, and the U.S. Navy 
specifically, will be discussed.

DEFINITION AND ATTRIBUTES

The putative “feral city” is (or would be) a metropolis with a population 
of more than a million people in a state the government of which has lost 
the ability to maintain the rule of law within the city’s boundaries yet 
remains a functioning actor in the greater international system.[4]

In a feral city social services are all but nonexistent, and the vast 
majority of the city’s occupants have no access to even the most basic 
health or security assistance. There is no social safety net. Human 
security is for the most part a matter of individual initiative. Yet a 
feral city does not descend into complete, random chaos. Some elements, be 
they criminals, armed resistance groups, clans, tribes, or neighborhood 
associations, exert various degrees of control over portions of the city. 
Intercity, city-state, and even international commercial transactions 
occur, but corruption, avarice, and violence are their hallmarks. A feral 
city experiences massive levels of disease and creates enough pollution to 
qualify as an international environmental disaster zone. Most feral cities 
would suffer from massive urban hypertrophy, covering vast expanses of 
land. The city’s structures range from once-great buildings symbolic of 
state power to the meanest shantytowns and slums. Yet even under these 
conditions, these cities continue to grow, and the majority of occupants 
do not voluntarily leave.[5]

Feral cities would exert an almost magnetic influence on terrorist 
organizations. Such megalopolises will provide exceptionally safe havens 
for armed resistance groups, especially those h

Re: Peace-for-War

2006-08-09 Thread Felix Stalder
On Monday, 7. August 2006 00:34, Brian Holmes wrote:

> There seems to be a difference in the way the groups of
> steersmen operate, both on the diplomatic and economic
> levels.
<...>
> Generally the Marxist theorists give you a 
> systemic explanation; capitalism does this or that, it has a
> long-term trend. I have always thought that the only way to
> help get the Left moving again is to say, groups and
> individuals do this or that; and we can stop them.

I think one of the reasons why understanding the present -- in the amateur 
world-theory mode that we are engaged in here -- is so difficult is that 
we have a number of very different dynamics and we do not really know how 
they intersect, reinforce and transform each other.

First, there are structural dynamics which create a playing field that is 
anything but level; rather, it is skewed in favor of some groups. Michael 
Hudson's book "Super Imperialism" explains the establishment of such a 
skewed playing field in the area of international finance rather well (as 
far as I can tell from skimming it just now). The reason why the WTO talks 
have collapsed is that the developing countries tried to make the "free 
trade" rules work a bit less against their interests. No chance of that.

Second, there's old-fashioned interventionist power politics, with a 
repertoire ranging from targeted sanctions to full-scale war. This is 
brute-force intervention by very powerful and relatively easy-to-pinpoint 
actors, usually states or their proxies. The states often act in the 
interests of powerful national groups, but I would hesitate to see them as 
a simple extension of, say, corporate interests.

Third, there are processes that are largely beyond the control of any 
actor, but where a range of actors scramble to exert influence as best they 
can to bend the outcomes to their strategic interests, without ever 
achieving anything like real control over the developments.

Now, this is nothing particularly new. Any complex historical situation has 
structural, interventionist/strategic and chaotic dimensions. What is new, 
and what makes analysis so difficult, is that their relative weight, and 
how they shape each other, is changing. 

My hunch is that interventionist power politics are losing weight, whereas 
processes that are too complex to control are gaining. I don't mean that 
power politics ceases to exist or that there are no more state-sponsored 
wars (well, those would be pretty dumb things to say right now), but that 
even hard-core military interventions quickly get bogged down in chaotic 
situations where the occupying army becomes one of many actors 
scrambling along, rather than successfully imposing its own long-term 
strategy. Iraq, of course, is the prime case for this argument.

The reason for this change in the composition of complex historical 
processes is, I presume, that the number of actors has grown massively, 
not least because large-scale coordination no longer requires a 
difficult-to-manage, expensive apparatus, but can be done on the 
fly, cheaply, through open networks of communication and transport. 

So, we have a lot of actors, each following their own strategy, who can, 
in some way or other, influence the course of events. Of course, not 
all actors have the same amount of resources at their disposal -- power 
differentials still exist. However, in a situation of asymmetric warfare 
this might be less important than it used to be. Or, perhaps more 
precisely, this is exactly what contributes to feeding complex, 
hard-to-control processes in the first place. The number of actors that 
are powerful enough to disturb the establishment of order, without being 
powerful enough to establish order themselves, has grown. 

Felix


http://felix.openflows.org -- out now:
*|Manuel Castells and the Theory of the Network Society. Polity, 2006 
*|Open Cultures and the Nature of Networks. Ed. Futura/Revolver, 2005 




Sweden could scrap file-sharing ban

2006-06-13 Thread Felix Stalder

[It would be ironic if the raid on piratebay.org turned out to be the 
trigger to create an 'alternative compensation system' (levy on broadband 
to compensate right holders in order to legalize p2p file sharing). The 
guys from piratebay have been among the most vocal (and astute) critics of 
such an idea. [1], [2].
[1] http://www.nettime.org/Lists-Archives/nettime-l-0407/msg00020.html
[2] http://www.nettime.org/Lists-Archives/nettime-l-0407/msg00032.html
Felix]


http://www.thelocal.se/article.php?ID=4024&date=20060609

The Local: Sweden's news in English
Sweden could scrap file-sharing ban

Published: 9th June 2006 10:36 CET

Sweden could introduce a charge on all broadband subscriptions to
compensate music and film companies for the downloading of their
work, while legalizing the downloading of copyright-protected
material, justice minister Thomas Bodstrom has said.

Bodstrom told Sydsvenskan that he could consider tearing up
legislation passed last year that made it illegal to download
copyrighted material. He said that a broadband charge was discussed
by Swedish political parties last year, but the Moderates and Left
Party rejected it. If they have changed their minds, he is willing to
discuss any new proposals they might have, he said.

The Left Party said yesterday that they wanted to scrap the current
law because it had not reduced illegal file sharing. The Moderate
Party has said that the whole area of copyright law should be
overhauled to make it clearer, more effective and adapted to
technological developments.

"The most important thing for me is that authors and artists get paid
and I will never retreat from that," he told the paper.

"I have not changed my position, I still think that [the current law]
is the best option for two reasons: first, it would be unfair on
those who have subscribed to broadband and don't want to download,
secondly because it would mean that the government was setting the
price for goods, which I don't think we should do, whether those
goods are in a shop or on the net," he told TT.

"But if the Moderates and Left Party have made a 180 degree turn and
changed their minds completely, of course they can come and tell us
about it. But we had this discussion last year. If they now want to
find a completely new solution and have new proposals or ideas we
will naturally discuss them."

But he emphasized that he favoured the current rules, which he said
"has created a market, which would not have happened if we hadn't had
this law. It is now possible to buy a song for ten kronor, and that
is thanks to the new law."

Bodstrom said he had not been approached directly by the Left Party
or Moderates, and had only read about their proposals in the media.

TT/The Local




http://felix.openflows.org -- out now:
*|Manuel Castells and the Theory of the Network Society. Polity, 2006 
*|Open Cultures and the Nature of Networks. Ed. Futura/Revolver, 2005 


- End forwarded message -



nettime as practice

2006-06-12 Thread Felix Stalder
On Sunday, 11. June 2006 02:21, John Hopkins wrote:
> In this Light, I would challenge Felix and Ted (and any others
> feeling qualified) to write a brief task description of the
> (different) roles/positions necessary to run nettime as it is today.
> Put it out here.  I certainly have some interest, but would need to
> know the scalability and absolute size of what tasks are necessary,
> and how they are (technically and socially) accomplished...

There is not much of a challenge here. All that you really need is
some long-term dedication to contributing to the nettime project on a
very regular basis. It helps to like it, be somewhat familiar with it,
and feel comfortable with its style.

Technically, in order to start moderating you need to be able to deal
with email on a *nix shell (via ssh). There is no web interface. It's
good to know the mail program 'mutt' (because we have some custom
settings that save serious time), but if you don't, it's something that
can be learned like other semi-technical stuff and we can help, and,
indeed, will help.

In the medium term, you should familiarize yourself with 'procmail'
and 'SpamAssassin', otherwise lots of time is spent going through
spam and/or finding false positives. This is a real hassle.
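
For illustration, a minimal procmail sketch of the kind of setup meant
here: each incoming message is piped through SpamAssassin, and anything
it flags is filed into a separate folder for later review. The folder
name is just an example; the actual nettime setup may differ.

# Run every incoming message through SpamAssassin, which adds X-Spam-* headers
:0fw
| spamassassin

# File anything flagged as spam into a review folder, away from the moderation queue
:0:
* ^X-Spam-Status: Yes
spam-review/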

In the long term, you might need to look for a new host for the
mailing list (which has up to 50'000 outgoing mails a day), and for a
new admin for the web server. Right now, all of this is provided by
other people on a goodwill basis. As goodwill is usually personal,
rather than institutional, we have no idea how that transfers.

Also, did I mention? You need a high tolerance for personal abuse, by
people who don't know you, and, what can be more annoying, by people
who know you. It seems unavoidable because of the architecture of
mailing lists, and some people really make a big deal out of it.

Now, this might sound like endless, thankless drudgery, but it's not,
or not only. Being deeply involved in nettime is a good reason, and a
motivator, to pay close attention to the discussions and the people,
which is very worthwhile. You learn a lot, and a lot of people learn
about you. Though the abuse part sucks, no matter what.

Felix








http://felix.openflows.org -- out now:
*|Manuel Castells and the Theory of the Network Society. Polity, 2006 
*|Open Cultures and the Nature of Networks. Ed. Futura/Revolver, 2005 





RE: nettime as idea

2006-06-09 Thread Felix Stalder

Hi everyone,

sorry for my previous post, it went out without being finished. What  
I wanted to say was that many of the themes that critical net.culture 
talked about 10 years ago are now mainstream. They are now playing
themselves out on a scale far beyond 'net.culture'; indeed, they have 
become culture, without any prefix.  

Whether that amounts to winning or losing is beside the point. In some
ways, it reminds me a bit of the 1968 movements, which also transformed
daily life (at least in the West), though as the world around them
shifted, the consequences turned out very different from what they
intended. Again, whether they lost or won does not really matter. The
world is a different place now.

For most of the actors of early net.culture, this meant either
late professionalization or early retirement. Nettime as a project did
not so much professionalize as specialize. It exchanged scope for
focus, which has moved it a bit closer to academic culture, which is
also characterized by that trade-off. But anyone who really knows
academia, and the texts it produces (which I personally appreciate),
will also recognize how far nettime still is from that. Its scope is
broader, its style sharper.

Caroline Nevejan <[EMAIL PROTECTED]>:
> Critiquing others for having done 'stuff', aging and moving on in  
> life, I find rather uninteresting. I get interested when I hear what  
> you like to do yourself.

I agree: on many levels, nettime works quite well, so there is no
urgent need to change anything. But this does not mean it cannot be
improved. Sure it can. But to do that, we need concrete ideas: what
would you, personally, individually, like to see in nettime, and how
do you put up the resources to do it? The easiest thing is to do it
yourself. Silvan Zurbueck did that when he wanted an RSS feed for
nettime: he took the feed, pumped it into a blog, and now there is an
RSS feed. [1] Tobias van Veen did that when he wanted to hold a nettime
meeting in NA, and now we have had one. Great. They had an idea, they
figured out a way of doing it (by doing it themselves and roping in
others to contribute). This is how things work, not by telling others
what they should or should not do. The same goes for the various
nettime lists in other languages. People came up with the idea of
doing something, and they are doing it. Most of the people on this
list are not aware of that, because these lists are in languages few
of us speak.

[1] http://nettime.freeflux.net, http://nettime-ann.freeflux.net/

Andreas Broeckmann <[EMAIL PROTECTED]>:
> finally, if you are unhappy with the list, be aware that 'the list',
> i.e. nettime, is what gets posted. of course, moderation plays a
> role in this. but the greater role is played by the things that get
> written and sent, or not. if certain discussions are not happening,
> it is because people are not writing their opinions.

Again, I agree. Moderation is a non-issue, a red herring, even if 
the technical set-up of an email list (conceived at a time when   
ICT had much less social intelligence built in than it has today)
lends itself to believing otherwise. And it's not that Ted 
and I are turning away the masses who want to do this kind of work.
In fact, nobody ever volunteers. N0b0dy, that's with two zeros. We
occasionally ask people who are contributing interesting material to  
the list if they want to moderate, and the answer has always been 
'Thank you for asking, but I really do not have the time.'

There is one exception: nettime-ann. Here, four people -- Mason   
Dixon, Tulpje Tulp, Tsila Hassine, and Hannah Davenport -- responded  
to an open call about what to do with the announcements, and are now
running this as their own project, connected to the main list by  
name and by loose but friendly cooperation. They are doing a great, if
unglamorous, job. 

Over the years, we experimented with various set-ups, most importantly
dividing the list into two feeds, the standard moderated one and a
non-moderated one, called nettime-bold. The interest in the second
channel was small from the beginning, and waned entirely shortly
after. The levels of spam and self-promotion seem to be tiring for
everyone but the self-promoters. After we had to start manually
removing posts from the nettime-bold archive -- because people entirely
unrelated to the list were accused, with their names and telephone
numbers, of being pedophiles, and sent us harrowing stories about how
this ruined their lives, since googling their names brought up these
posts (Google loves nettime and often ranks its posts very high) -- we
decided that this was not the resource we wanted to provide. When we
shut down the list, nobody seemed to notice.

So, if anyone feels like moderating -- near daily work, over a long period
of time -- and knows how to use an email program on a unix shell
(preferably m

Re: report_on_NNA

2006-06-08 Thread Felix Stalder
> Time(s) moves along -- especially nettime(s).  So what about these
> time(s)?

The rebels of the net.culture of the 1990s have encountered a cruel fate: 
they won. Alas, not on their terms. Many of the themes that have been 
explored by the old-timers are now mainstream. 

* free exchange of culture? BitTorrent and other filesharing networks are 
used by the millions. Nothing terribly avant-garde here, but it got the 
dinosaurs scared. 

* blurring the boundaries of artist and audience? For every music video 
produced by the industry and (illegally) posted on youtube.com (and 
similar sites) you have 10 'tribute' videos (also illegal) where fans are 
remixing the music and re-editing the visuals. Most of them suck, few 
would qualify as 'art'. But, hey, these are users who are doing some 
real interaction here, not just pressing buttons.

* open personalities? We see this concept every day in the evening news, 
but this time, it's not media pranks by a former soccer player, but bombs 
and murder by a shadowy organization everyone can claim membership in.

* xs4all? We, at least in Europe and N. America, have broadband coming out 
of our ears, thanks to Deutsche Telekom and other friendly global 
corporations who gladly hand our access logs to whatever law enforcement 
agency comes asking.

* net.art? net.artist Vuk Cosic represented Slovenia at the Venice Biennale 
in 2001. Geert Lovink and Florian Schneider have repeatedly been at Kassel's 
Documenta. 

* tactical media? Embedded journalists in the service of the Pentagon.

The list could go on and on. Many early experiments have been absorbed into 
the mainstream. I don't mean that they have been co-opted; no, they have 
turned out to be intelligent tactics for acting in media environments. 

So, what can you do? Things have become much bigger, much more complicated, 
but also much more relevant. The old avant-garde shtick no longer works. 

One option is to retire. Lots of people have done that and moved on to 
other things.

Another option is to really engage the beast, which, in one way or 
another, means professionalizing. Rhizome has done that with respect to 
the art world. transmediale has done that with respect to the various 
levels of government that are funding it. Others have done so by becoming 
academics or whatever. 

For nettime, really, has resisted this trend. 


http://felix.openflows.org -- out now:
*|Manuel Castells and the Theory of the Network Society. Polity, 2006 
*|Open Cultures and the Nature of Networks. Ed. Futura/Revolver, 2005 




Coalition of Canadian Art Professionals Releases Open Letter on Copyright

2006-06-08 Thread Felix Stalder
[The voices of artists against the expansion of copyrights are getting
stronger. Stuff like that will make it harder for the industry to
claim to represent the interests of creators. Very good. Felix]


Media Release:

Coalition of Canadian Art Professionals Releases Open Letter on
Copyright

Tuesday, June 6, 2006

http://www.appropriationart.ca/

Over 500 Art Professionals Call for Balanced Copyright Laws

Ottawa, ON -- June 6, 2006 -- Over 500 members of Canada's art
community have today released an open letter to the Ministers of
Canadian Heritage and Industry calling on the Canadian government to
adopt balanced copyright laws that respect the reality of contemporary
art practice. Appropriation Art, A Coalition of Art Professionals,
comprises artists, curators, arts organizations and art institutions
who share a deep concern over Canada's copyright policies and the
impact these policies have on the creation and dissemination of
contemporary art. The Coalition argues that Canada's current copyright
laws put at particular risk those artworks using appropriation, such
as conceptual art, art video & film, sound art and collage.

The Coalition offers three principles that it argues must ground
Canada's copyright policy:

FAIR ACCESS TO COPYRIGHTED MATERIAL LIES AT THE HEART OF COPYRIGHT.
Creators need access to the works of others to create. Legislative
changes premised on the "need" to give copyright owners more control
over their works must be rejected.

ARTISTS AND OTHER CREATORS REQUIRE CERTAINTY OF ACCESS. The time has
come for the Canadian government to consider replacing fair dealing
with a broader defense, such as fair use, that will offer artists the
certainty they require to create.

ANTI-CIRCUMVENTION LAWS SHOULD NOT OUTLAW CREATIVE ACCESS. Laws that
privilege technical measures that protect access to digital works must
be rejected. The law should not outlaw otherwise legal dealings with
copyrighted works merely because a digital lock has been used. Artists
work with a contemporary palette, using new technology. They work from
within popular culture, using material from movies and popular music.
Contemporary culture should not be immune to critical commentary.

"Artworks that use appropriation have a long and well documented place
in the history of art" notes Sarah Joyce, a signatory to the Open
Letter. "These works are collected and exhibited in major cultural
institutions across Canada and throughout the world and yet artists
express this form of creativity under threat of the law. To silence
this valid form of creativity is tragic. That Canada's laws do so is
simply wrong."

"Canada's art community has not been consulted on the implications
of possible copyright reforms," states Gordon Duggan, another of the
Open Letter's signatories. "We are creators, and we rely on copyright
laws for our livelihood. Yet, to my knowledge, the needs of Canadian
artists have never been a consideration in copyright policy debates.
It is time that changed. The sheer size and makeup of this coalition
reflects the level of dissatisfaction within the art community. These
changes are set to lock Canadian art into a very narrow idea of what
the Government wants art to be rather than reflecting the reality of
contemporary Canadian art."

The open letter has been posted at the Coalition's website at
www.appropriationart.ca.

About The Coalition of Art Professionals: The signatories to the Open
Letter span the full range of Canada's art community, and include
artists, galleries, art institutions, and curators. A full list of the
over 500 individuals and organizations lending their name to the Open
Letter may be found at www.appropriationart.ca.

For further information, contact:

Sarah Joyce or Gordon Duggan
[EMAIL PROTECTED]








http://felix.openflows.org -- out now:
*|Manuel Castells and the Theory of the Network Society. Polity, 2006 
*|Open Cultures and the Nature of Networks. Ed. Futura/Revolver, 2005 


- End forwarded message -

#  distributed via : no commercial use without permission
#   is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net


Re: Network, Swarm, Microstructure

2006-04-19 Thread Felix Stalder
On Wednesday, 19. April 2006 18:26, Ned Rossiter wrote:

> I'm really surprised you persist with the idea that networks (as
> protocols) are without hierarchies:
> > All networks can be defined by their protocols, [...] Protocols enable
> > interaction without a hierarchy.
>
> Because in that same first paragraph you contradict yourself:
> > in order to
> > participate in a network, actors have to adhere to the dominant
> > protocols.

First, networks and hierarchies are different modes of organization. The fact 
that real-life organizations tend to be hybrids of various sorts does not 
mean that these modes are not distinguishable on the basis of their 
differences. It just means that they can be combined. The fact that there are 
endless shades of grey also does not mean that black and white are the same.

> and later:
> > In other words, in order to communicate and be productive, one has
> > to join, by
> > choice or coercion, a particular networks (or several, more
> > likely), thus
> > accept their protocols and have one's view of the world defined by
> > a shared
> > horizon
>
> adherence is another word for submission, and in the case of networks
> it's submission to social/technical protocols that is done willingly,
> although often with tensions of one kind or another (thus the
> politics of networks). Another way of understanding this is that in
> order to participate within a network, one must accept the prevailing
> hierarchies (modalities of governance/protocols). But this isn't to
> say that hierarchies can't be changed or shifted, only that they exist.

Second, adherence to a protocol is not the same as submission under a 
hierarchy. One of the origins of the term protocol is in diplomacy, where it 
designated the rules that governed the interaction between the sovereign, say a 
king, and the foreign diplomats stationed at his court. The reason why a 
protocol was necessary was, and still is, precisely because the foreign 
diplomats were, and are, not subjects of the king. In fact, they were outside 
the hierarchy, independent of the king. Hence, they needed a set of rules that 
governed their interaction. The king could not simply impose his rules.

When we speak about social protocols, it's comparable. We write here in 
English. One can say that grammar is the protocol of language. In order 
to have a conversation here, I must adhere to the conventions of English 
grammar, a language foreign to me. But must I submit to the conventions of 
grammar? And for this to be a hierarchical situation, who, exactly, would be 
my superior? Is there someone who effectively regulates the English language? 
And who will punish me for my ESL mistakes? 

Now, social protocols are often fuzzy, and some rules can be bent, but still: 
try arranging your words randomly and the conversation will stop.

> As a moderator of nettime, you know all too well the way in which
> hierarchies are played out.

Of course, but nobody ever said that nettime was a pure network. Indeed, there 
are those who think it's a fasc!st dictatorship.

> So let's accept that hierarchies are essential to networks, and the
> question of governance is going nowhere for as long as we persist to
> speak of networks in terms of absolute horizontal relations (or in
> the case of communes example, spaces of consensus).  That's simply
> incorrect, and your own text demonstrates that.

Hierarchies are not essential to networks, even if they are often combined. 
There is a difference between a conceptual discussion of ideal types, and 
concrete analysis of empirical examples.

The fact that networks are not the space of absolute freedom (whatever that 
would be), but that there are rules that cannot be easily ignored, does not 
mean that it's a hierarchy. The fact that it's not a hierarchy, on the other 
hand, does not mean that there is no power in networks. It just operates 
differently. In hierarchies, power operates through coercion. In networks, it 
works through exclusion. These are different modes, and it helps to 
acknowledge such differences when we want to understand the particular 
character of novel combinations. 

Felix





http://felix.openflows.org -- out now:
*|Manuel Castells and the Theory of the Network Society. Polity, 2006 
*|Open Cultures and the Nature of Networks. Ed. Futura/Revolver, 2005 



#  distributed via : no commercial use without permission
#   is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net


Re: Network, Swarm, Microstructure

2006-04-19 Thread Felix Stalder

Wow! What an essay. It took me two days just to read it, and I think I'll have 
to read it a few more times. 

For the time being, I'll stick to half a paragraph, which is key in my view.

> I am beginning to think that there are two fundamental
> factors that help to explain the consistency of
> self-organized human activity. The first is the existence of
> a shared horizon - aesthetic, ethical, philosophical, and/or
> metaphysical - which is patiently and deliberately built up
> over time, and which gives the members of a group the
> capacity to recognize each other as existing within the same
> referential universe, even when they are dispersed and
> mobile. You can think of this as "making worlds." The second
> is the capacity for temporal coordination at a distance: the
> exchange among a dispersed group of information, but also of
> affect, about unique events that are continuously unfolding
> in specific locations. This exchange of information and
> affect then becomes a set of constantly changing, constantly
> reinterpreted clues about how to act in the shared world.

All networks can be defined by their protocols, formal rules that set the 
terms of engagement of otherwise independent agents. Protocols enable 
interaction without a hierarchy. Indeed, the protocol creates the space of the 
possible (or, the shared horizon, to use Brian's term) and in order to 
participate in a network, actors have to adhere to the dominant protocols. 
Without a protocol, there is no network.

Now, networks have multiple dimensions, and protocols operate on each of them. 
Most readily distinguishable are technical and social protocols, even though 
there are obvious interrelations between them. Technical protocols are things 
like TCP/IP, Bittorrent, SMTP and others. Social protocols are styles of 
communication, shared assumptions and values, common projects, etc.
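
[To make "technical protocol" a little more concrete, here is a minimal sketch in 
Python. The server name is a placeholder and no mail is sent; the point is only to 
show the small, rigid grammar of commands and numeric replies that a protocol like 
SMTP fixes, while leaving the content of any eventual message entirely open.]

    import smtplib

    # Placeholder host name, for illustration only; any reachable SMTP server would do.
    HOST = "mail.example.org"

    server = smtplib.SMTP(HOST, 25, timeout=10)
    server.set_debuglevel(1)   # print the wire-level dialogue: commands sent, coded replies received

    # Each call emits a command the protocol defines and waits for a numeric reply.
    # Deviate from this grammar and the exchange simply breaks off -- that is all
    # the protocol fixes; it says nothing about what one might eventually write.
    server.ehlo("client.example.org")   # identify ourselves, as the protocol requires
    server.noop()                       # a protocol-conformant "are you still there?"
    server.quit()                       # end the session with the prescribed QUIT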

Protocols are not fixed, they can be adapted, but since they are what makes a 
network a network, they are very hard to change from the inside. It's 
difficult to cooperatively transform the very condition of cooperation. 
Hence, it's often easier to create new networks rather than transform old 
ones, particularly since there can be overlap between the two in terms of some 
agents agreeing on new protocols. Networks can fork, particularly if 
the resource of the network is digital information.

The reason why we are all speaking about networks now is that information and 
communication technology (ICT) has decisively affected the balance between 
flexibility and coordination in social organizations. Until very recently, 
these two aspects stood in an inverse relationship to one another. 
As coordination increased, flexibility went down. Large projects (think of 
states, armies, major companies etc) tended to be highly structured in order 
to manage scale (the history of this development is analyzed in Alfred 
Chandler's classic 'The Visible Hand. The Managerial Revolution in American 
Business.'). This was, to a large degree, a function of their limited capacity 
for information processing. Vertically integrated hierarchies are relatively information-poor 
forms of organization. Thus they can handle large coordination tasks by 
passing around slips of paper with information printed on them, at the price 
of turning inflexible (an 'iron cage' as Max Weber saw it at the height of the 
bureaucratic model, 100 years ago).

ICTs are enabling (just enabling, not determining) people and organizations to 
handle much, much more information efficiently, hence they can still scale, 
but do not need to accept inflexibility as the trade-off. In other words, 
even large organizations, or, perhaps to be more precise, large projects 
undertaken by multiple entities, some as small as individuals, are now 
organized as networks (or at least face competition from networks). 

This ability of multiple entities to undertake very large projects, loosely 
coordinated, is what is fuelling the renaissance of notions such as 
"multitude" which aim to express this still hard-to-grasp combination 
flexibility (agents remain semi-autonomous) and coordination (they do 
something larger than themselves).

Such large scale networks, or large scale projects carried by multiple smaller 
networked organizations, are highly communicative, not just because their 
coordination requires lots of exchanges, but also because their output is, to a large 
degree, communication as well (new cultural codes, new scientific discoveries 
and procedures, new management methods etc etc). Indeed, geographically 
dispersed networks are held together by nothing but communication. It 
provides their shared horizon.

When stressing the importance of communication, this should not be understood 
as if communication somehow took place on a different level than production. In 
fact, the activities of communication and production are merging more and more. 
The attempt to expand the proprietary logic of capitalist production into the 
shared space of

Re: Markets, Hierarchies, Networks: 2 questions

2006-04-14 Thread Felix Stalder
On Thursday, 13. April 2006 16:51, Brian Holmes wrote:
> First let's try to figure out what's really being talked
> about. Felix seems to be referring to the theory of economic
> organization, and probably to three landmark papers:
>
> -Ronald Coase, "The Nature of the Firm" (1937)
> -Walter J. Powell, "Neither Market Nor Hierarchy" (1990)
> -Yochai Benkler, "Coase's Penguin" (2002)

Mostly. The field I'm speaking about is organizational theory, located between 
economics and sociology. Institutionally, organizational theory has been 
connected to economics and management departments, hence they used to see only 
markets and firms (hierarchies) and, since post-fordism, networks. This 
really is very standard. However, for the present discussion, I think it's 
important to add a fourth type, communes (NOT commons) or co-operatives. They 
represent attempts to combine explicit planning and decision-making (like 
hierarchies) while working against a hierarchical division of labor, so 
typical for firms and bureaucracies. However, since their economic impact 
used to be relatively small, and, more generally, they were regarded poorly 
by the proponents of 'free markets' that make up these departments, they were 
ignored by organizational theory. We should not make the same mistake.

I am, and I think Ned is too, interested in networks as governance mechanisms 
that are different from other institutional structures. Hence, I think it's 
not useful to argue that any type of connection is a network, as Shannon 
Clark did. I know that's how the concept is used in a technological context, 
or within complexity theory, but since these fields are not concerned with 
issues of power, i.e. governance, I don't think their approach is useful 
here. 

> So my first question is this: How justified is it to think
> of FOUR different forms of productive organization - market,
> hierarchy, network and commons? Aren't the last two just
> variations on each other? 

As already mentioned, I meant communes (or cooperatives) and not commons. I 
think the differences between cooperatives and networks are important, inasmuch 
as cooperatives have explicit decision-making structures and means to 
enforce their decisions, but do not function as hierarchies.


> If the aim is to examine 
> large-scale organizations in the real world, isn't it best
> to establish the distinctions and hybridizations between
> THREE broad sets of rules or structures of governance -
> based on competition, subservience and reciprocity, or on
> market, hierarchy and network? 

Perhaps, in this view, a commune would be a close-knit type of network. But I'm 
not sure how far this really gets us. If you look at how power works, there 
are real differences between these different sets. In markets, power is based 
on money, since the coordination takes place through price signals. In 
hierarchy, power is based on position, since decision-making authority is 
hard-coded into the structure of the organization. In a network, power is a) 
based on the ability to define the network protocol, and b) on the ability to 
contribute to the overall goal of the network on the basis of that protocol. 
In cooperatives, power is based on the ability to create consensus.


> And finally, is "network" 
> really the best possible name for the last form of
> structuring and governance, or does it just lead to
> confusion because of the connection to ICT hardware and the
> associated diagrams? Why not talk about market, hierarchy
> and cooperation?

Somehow, I think 'cooperation' is located on a different, normative, level. I 
have a hard time thinking of cooperation in negative terms, and I have fewer 
problems thinking of networks as, say, being set up for exploitation. 

> The second question springs from that last point, and has to
> do with social network analysis. As far as I can tell, this
> is a science - or branch of inquiry, anyway - that's mainly
> driven by innovations in graphic representation,
> particularly the Pajek software developed by a couple of
> Slovenian researchers, but also the stuff by Valdis Krebs,
> etc. The question is, does social network analysis have a
> theory? Because in effect, you can REPRESENT anything as a
> network, once you have defined nodes (and categories of
> nodes) plus connections (and quantities or qualities of
> connections). Those analytic representations take the form
> of fascinating pictures. But what kinds of theoretical
> synthesis come after the analysis? Does social network
> analysis make specific contributions to our understandings
> of the ways people structure and govern their relations to
> each other? Or does it just subsume every kind of relation
> under the picture of a network?


As far as I know, social network analysis (sna) is a method, not a theory. 
It's basically a mapping technique, a method to measure communicative 
interaction between a set of people, primarily quantitatively. For sna, any 
set of connections is a

Re: Organised Networks: Transdisciplinarity and New Institutional Forms

2006-04-11 Thread Felix Stalder
Perhaps I'm missing something obvious here, but I always thought that networks 
are a basic type of organization (as are hierarchies, markets, and communes; 
in fact, standard theory assumes that there are only these four basic forms). 
So, to speak of an organized network makes no sense to me. All networks are 
organized, by definition. 

> The social-technical dynamics of ICT-based networks constitute organisation
> in ways substantively different from networked organisations (unions,
> state, firms, universities).

Again, this makes no sense to me. All large-scale contemporary networks are 
ICT-based. In fact, ICT is what allows them to scale and hence have a chance 
to successfully compete with vertically integrated hierarchies (the only 
organizational form, up to very recently, that scaled well).

Also, I always thought that unions, the state and its bureaucracies, 
universities, and old-school firms were prime examples of hierarchical 
organizations. If they are not, what is? And in what sense is a union a 
networked organization?

I'm getting a headache, and this is only the first sentence.

Felix






http://felix.openflows.org -- out now:
*|Manuel Castells and the Theory of the Network Society. Polity, 2006 
*|Open Cultures and the Nature of Networks. Ed. Futura/Revolver, 2005 


#  distributed via : no commercial use without permission
#   is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net


Re: let's go negative and join snubster

2006-04-06 Thread Felix Stalder
Negativity has its charms, but the fate of being positive and community-minded 
is inescapable on the Internet. In order to do anything collaboratively, that is, 
anything at all beyond pure consumption, the 'community' has to minimize internal 
dissent and make people feel good about contributing. This does not mean squashing 
internal critique totally, otherwise things get boring, but stabilizing it at a 
level where it is productive, and thus, dare I say, turning positive. After all, if 
you really don't like it, why don't you just go somewhere else? Anyone who has 
administered an online collaborative project has uttered this sentence more than 
once.

So, we have special interest communities, dog-lovers, negativity-lovers and so 
on.
Endless solipsistic niches of like-minded people. Snubster just has cool
branding. The days of critique and negativity are over. Rather, we have
interesting discussions on minor points, enabled by the fact that we basically
agree with one another. There are too many options to waste your time really
disagreeing with people.

Felix



#  distributed via : no commercial use without permission
#   is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net


TRIPS was a mistake

2006-04-03 Thread Felix Stalder
[It's amazing to see that the treaty which many identify as the cornerstone 
of "information feudalism" (Peter Drahos) is judged a failure by one of 
its main designers. From Ian Brown's blog, via the always excellent EDRI 
newsletter [2]. Felix]


Lehman: TRIPS was a mistake
http://dooom.blogspot.com/2006/03/lehman-trips-was-mistake.html

I'm attending a great meeting in Brussels on "The Politics and Ideology of 
Intellectual Property" [1]. We just had quite a newsflash from Bruce Lehman, 
President Clinton's head of intellectual property policy who was largely 
responsible for the Agreement on Trade-Related Aspects of Intellectual 
Property Rights (TRIPS).

Lehman now believes TRIPS has been a failure for the United States, because 
the WTO agreement in which it is included opened US markets to overseas 
manufactured goods and destroyed the US manufacturing industry. He feels that 
the US has kept its part of the TRIPS bargain, but that with 90% piracy in 
China, higher-end developing nations have not. In retrospect, he feels the US 
should instead have introduced labour and environmental standards into the 
WTO agreement so that jobs would not be lost in the US manufacturing sector 
to countries with few environmental standards and weak unions.

How exhilarating that Mr Lehman agrees with civil society IP experts across 
the developed and developing world!


[1] http://www.tacd.org/docs/?id=286
[2] http://www.edri.org/






http://felix.openflows.org -- out now:
*|Manuel Castells and the Theory of the Network Society. Polity, 2006 
*|Open Cultures and the Nature of Networks. Ed. Futura/Revolver, 2005 



#  distributed via : no commercial use without permission
#   is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net


Open Source Projects as Voluntary Hierarchies

2006-03-17 Thread Felix Stalder

Open Source Projects as Voluntary Hierarchies

Weber, Steven (2004) The Success of Open Source. Cambridge, MA, Harvard UP
ISBN: 0-674-01292-5, pp. 311

Over the last half-decade, free and open source software (FOSS) has moved from 
the hacker margins to the mainstream. Corporations, large and small, have 
invested in it, some governments are actively supporting it and it is 
becoming an increasingly important tool for the building of an international 
civil society. In the social sciences, the field is receiving a growing share 
of attention, evidenced by a widening stream of research output. The central 
repository for relevant papers, opensource.mit.edu, lists some 250 
researchers with a self-declared interest in all things FOSS and almost as 
many scholarly papers, contributed in just five years. Additionally, there 
are several volumes written by activists, book-length treatments by 
journalists, plus biographies of the two most prominent figures, Richard 
Stallman and Linus Torvalds.

To this burgeoning literature, the most ambitious contribution is Steven 
Weber's The Success of Open Source. Weber, a political scientist from 
University of California, Berkeley, focuses on a political economy approach 
which he understands as 'a system of sustainable value creation and a set of 
governance mechanisms' (p.1). His main interest lies in the social formations 
built around FOSS's particular mode of production. What differentiates this 
mode from other systems of immaterial production is its approach to property. 
Whereas the conventional notions of property are based on unambiguous 
ownership and the associated right of excluding others, in the context of 
FOSS, property is organized around the right to distribute. Its key concerns 
are not how to ascribe ownership and manage exclusion, but to develop the 
best strategies to maximize access and collaboration. This is a profound 
change, and Weber puts it rightly at the beginning of his analysis. In this 
perspective FOSS is a public good, a resource that, once produced, everyone 
can use, akin to a street light. Standard political theory assumes there are 
no incentives for private entities to produce public goods, because the 
non-excludability invites free-riding, impeding markets built on scarcity. 
Thus, the provision of such goods is usually in the hands of governments who 
invest in them for the benefit of society as a whole.

FOSS is a counter-intuitive example where a large number of entities, 
individuals as well as organizations, produce highly complex products as 
public goods with no, or very little, involvement of the state. Early 
analysts recognized that, given mainstream theoretical assumptions, FOSS is 
an 'impossible public good' (p.1). Clearly, however, it is not a fluke. Many 
of the core projects  - such as the Linux kernel, the Apache webserver, the 
GNU software  - are by now more than a decade old and are still growing, so 
it is no longer in doubt as to whether FOSS represents a 'system of 
sustainable value creation'. But what kind of system? The two core chapters 
of the book (which also contains a thorough history of FOSS and somewhat less 
thorough sections on business and law) focus on the 'microfoundations', the 
individual motivations to contribute to FOSS projects, and on the 
'macro-organizations' involved. Weber aims to show that contributors are not 
altruistic, but guided by a range of incentives, from seeking aesthetic 
pleasures to reputation and identity building. The question of incentives is 
probably the best researched of all aspects of the conundrum that FOSS poses 
to conventional political and economic theory and Weber does a very good job 
of systematizing and summarizing the state of the discussion, even if he adds 
little new.

More interesting and original is the chapter on the macro-organization of FOSS 
projects. Here, Weber shows that these projects are not chaotic at all, but 
tend to have explicit formal structures (release schedules, project leaders, 
official repositories, etc) and that notions of self-organization do not 
really clarify much. To bring together the two basic observations that all 
contributions are voluntary, and that projects are hierarchically structured, 
Weber develops the notion of a voluntary hierarchy (though, he never quite 
calls it that). In such a governance system, individuals voluntarily accept 
their position in a hierarchy, because they realize that doing so is 
beneficial to them. Their own contributions get recognized and the overall 
project develops in a direction that they like. In such a system, contrary 
to what we usually think of hierarchies, power flows from the bottom to the 
top, 'because the leader depends on the followers more than the other way 
around [...] Asymmetrical interdependencies favor the potential followers, who 
will make a free and voluntary choice where to invest their work' (p.160).

The freedom of choice if and where to contribute is not 

Netbase (1995-2006)

2006-02-18 Thread Felix Stalder

Yesterday, there was a party in Vienna. It was a small, at times sombre, at 
times
exuberant affair, fitting for the occasion. The final call for netbase, the
institute for cultural technologies. Today, the doors remained closed and the
website turned static.

After more than a decade sailing hard against the currents, suffering countless
near-death experiences, it's hard to believe that the fall of the curtain is now
final. No more publicity stunts. 

With the netbase, one of the last 'free radicals' of the early internet culture
disappears, an institution which understood art as necessarily critical, both of
the commercial hype and the old and new centers of power.

Insisting on the freedom of art, defining its value as cultural intelligence,
probing alternative futures, netbase refused to play along with the neo-liberal
redefinition of culture into 'services' to be measured by tourism boards, 
economic
development agencies, or ministries of education. 

Rather, what characterized netbase was an insistence on acting in public, 
engaging
the public directly and on its own terms. That such an approach is ultimately
doomed, particularly in a country like Austria, is hardly a surprise. Like a 
crash in a Formula One race, it's easy to say "I saw it coming." 

Even if the real surprise is probably that netbase lasted that long, witnessing
its closure is a sad affair nevertheless. Particularly for many nettimers, who
enjoyed, at one time or another, its particular kind of hospitality here in
Vienna.


Felix

 
+---+-+---
http://felix.openflows.org


#  distributed via : no commercial use without permission
#   is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net


The Stuff of Culture

2006-01-24 Thread Felix Stalder
[This is the opening essay of a new book of mine, called "Open Cultures and 
the Nature of Networks" which was published in English and Serbian by the 
lovely people of kuda.org, late last year. It is being distributed by 
Revolver, Archiv fuer aktuelle Kunst. Hard copies are available from the 
distributor (http://revolver-books.de/w3NoM.php?nodeId=675) and a pdf with the 
English 
portion of the book can be downloaded via my website 
(http://felix.openflows.org/pdf/Notebook_eng.pdf). Felix]


The Stuff of Culture

Today, we are confronted with a strange, hard-to-categorize question: what is 
culture made out of? Our answer, I am convinced, will have a profound impact 
not just on future culture, with a capital C, but on the entire the social 
reality of the emerging network societies. Today, culture, understood broadly 
as a system of meaning articulated through symbols, can no longer be 
separated from the (informational) economy, or, thanks to genetic 
engineering, from life itself.

Historically, there have been two different approaches to culture. One 
characterizes culture as object-oriented, the other as exchange-oriented. The 
first treats culture as made out of discrete 
objects, existing more or less independently from one another, like chairs 
around a table, or books on a shelf. While such things can be arranged in 
relation to one another, their meaning and function remain the same 
regardless. One person can sit on one chair, no matter how many chairs there 
are in a room, or how they are arranged. The content of a book does not 
change when re-shelving it. The other view takes culture to be made out of 
continuous processes, in which one act feeds into the other, in an unbroken 
chain. Like "la ola", the wave people do in stadiums when the game they are 
watching becomes boring. By looking at an individual act in isolation, one 
cannot tell whether someone is getting up to stretch their tired bones or 
participating in a collective entertainment. The 
function and meaning of such an act are not contained in the act itself, but in 
its relation to others. It is not only what people do, but also, perhaps even 
more importantly, what happens between them, what flows from one to the 
other. The two perspectives create different sets of concepts for 
understanding culture: the timeless work of art versus the process of 
creation, the individual inventor versus the scientific community, the 
statement versus the conversation, the recording versus the live performance, 
and so on. These two perspectives, and the practices through which they are 
expressed, are currently coming into deep conflict with one another, hence 
the new urgency to the question: what is culture made out of?

Of course, culture always consists of both, that is of stable objects (such as 
furniture, clothes, works of artifice, timeless tunes, written laws) and of 
ongoing, fluid exchanges (for instance spoken languages, values, customs and 
routines). The issue is not an 'either/or'. We do not have to choose one over 
the other. The dichotomy just sketched is an analytical device to highlight 
the differences. The real issue is how these two aspects relate to one 
another. Put simply, is the fixed a local, temporary hardening of the fluid, 
or is the fluid nothing but a residual aspect of the fixed? These are not 
only philosophical questions, but also political and economic ones. How do we 
organize society, to facilitate the creation of objects, or the creation of 
exchanges? How do we value the work of keeping the conversation flowing, 
versus the work going into the production of discrete units? 
It is no coincidence that this question is pressed upon us today because the 
issue is eminently technological. Before the invention of writing, it was 
difficult to fix ideas onto material objects. 

Culture was oral and the way of maintaining culture was to keep exchanging it, 
to re-tell stories far and wide. In the process, storytellers, bards and 
other traveling performers, some more talented, others less, created infinite 
versions of the same basic material, and these versions dissipated as quickly 
as the performers moved on. The technology of writing allowed, for the first 
time, the transfer of parts of these fluid performances into fixed objects. The 
earliest work of Western literature, Homer's Odyssey, is exactly that: an 
oral epic written up. The earliest written philosophy, Plato's, is mainly 
dialogs. 

Slowly, culture began to gravitate towards objects, both in terms of 
production and reception. Yet, until the development of print, the 
difficulties of (re)producing manuscripts put serious limits on the extent to 
which the object-orientation they contained could spread throughout culture. 
With print, and later with the mechanical recording of sound and images, the 
balance shifted decisively. Culture became re-made as a series of stable 
objects. With these objec

The crisis of democracy and referenda

2006-01-13 Thread Felix Stalder
> On 11/01/06, Prem Chandavarkar <[EMAIL PROTECTED]> wrote:
> > A referendum helps to resolve impasses reached when you have polarised
> > opinions on critical single cause issues.  It cannot be a substitute for
> > the day to day negotiations of representative politics.
>
> True.  On the other hand, elected representatives may well be less
> likely to pass unpopular laws, and more likely to take the views of
> the majority into account when carrying on those day-to-day
> negotiations, if they know that citizens can easily arrange a
> referendum on any issue in order to reverse their decisions.  The
> existence of an easy referendum mechanism, even if it is rarely used,
> may thus make politicians more sensitive to public opinion.

In Switzerland, the place that has the most extensive experience with
referenda, this is exactly what happens. Politically speaking, the most
important thing about a referendum is not calling one, but being able to
credibly claim that one could do so. This buys you a ticket to the
negotiation table.

Before any law passes, there are extensive rounds of negotiations (called
"Vernehmlassung" in Swiss-German), where all the groups that can call a
referendum are asked to provide feedback to the proposed law, making sure
that all the powerful groups in the country agree on a law, or can at least
live with it. Nobody wants to work for years on a law, and then have it
subjected to to vagueries of a public vote (which is always unpredictable,
since one never knows about the context in which the vote is actually held).
In practice, this slows down everything, and give a lot of influence to
unelected presentative of powerful groups, why may, depending on the issue,
include unions and environmental groups.

As a result, the power of elected politicians is severely curtailed; after
all, the representative aspects are only one part of this particular Swiss
brand of democracy.

Because the mechanisms of Swiss democracy are rather different from others,
the crisis that it faces is also very different. Yes, of course, there's also
a lot of lobbying, but given the curtailed power of politicians, buying them
off only gets you so far. The actual crisis is two fold: first, given the
need to consult and include ever diverging interest, the system slows down to
a crawl, as, in the end, it's always safer to do nothing than to risk losing
face in a refendum. Second, more and more stuff gets decided on an
international basis, with the national parliaments only responsible for
converting international treaties (or EU directives) into national law. Yet,
the fiction that direct democracy is the ultimate source of power needs to
be maintained, as it's so crucial to Swiss identity. So how do you square
this? By inventing a construction called "autonomer Nachvollzug" which can be
translated as "autonomous conformation". If that sounds like a paradox, it
is. The key idea is that Switzerland is autonomous to conform to
international agreements. In fact, of course, it is not, but given its deep
interlikages with the EU and other countries, it simply has to take over what
is decided there.


+---+-+---
http://felix.openflows.org


#  distributed via : no commercial use without permission
#   is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net


nettime-l, and nettime-ann, as rss feeds

2005-11-29 Thread Felix Stalder

There are now RSS feeds available from nettime-l and the announcement 
channel, nettime-ann. To make your continuously-updating websites even 
more relevant :)

http://nettime.freeflux.net
http://nettime-ann.freeflux.net/

Many thanks to Silvan Zurbruegg who set up the blogs from which the feeds 
are generated.
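
[For anyone who wants to pull the list into their own pages, a minimal sketch in 
Python using the third-party feedparser module. The URL below is the blog address 
given above; the exact feed path may differ, so treat it as a placeholder.]

    import feedparser  # third-party module: pip install feedparser

    # Blog address announced above; the precise feed path may differ
    # (an /rss or /atom suffix, for example), so this URL is only a placeholder.
    FEED_URL = "http://nettime.freeflux.net/"

    feed = feedparser.parse(FEED_URL)
    print(feed.feed.get("title", "nettime-l"))
    for entry in feed.entries[:10]:     # the ten most recent posts
        print(entry.get("published", "?"), "-", entry.get("title", "(untitled)"))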

Felix

+---+-+---
http://felix.openflows.org


#  distributed via : no commercial use without permission
#   is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net


Re: Benjamin Mako Hill on Creative Commons

2005-08-01 Thread Felix Stalder
> > The CC licenses, however, try to provide some protections for the
> > producers of content by providing non-commercial clauses.
>
> Which is a bogus advantage. We had this discussion in Nettime before,
> and the common sense was that the concept of "commerce" implied in those
> clauses is neither defined nor clear at all. If our exchange would be
> printed in a Nettime book, and the book was for sale even if it made no
> profit or even losses for the publishers, it would be still a
> "commercial" distribution and hence not allow the inclusion of material
> licensed with this clause. This would even be the case if it were
> published on a CD-ROM sold for 50 cents, or in exchange for a blank CD
> medium.

The non-commercial clause is, indeed, deeply problematic. It is virtually
impossible to define what commercial means. It is not a legal concept to my
knowledge. Is everything that is sold a commercial transaction? Or only things
that are sold with the intention of profit? Then again, how would one define
intention? Or is it success that makes a venture commercial? Presumably the
nettime reader, as it was printed and distributed, did not constitute a
commercial venture. But what if it had been a runaway success, with four
reprints? That would have made it profitable, for sure. Where would one draw
the line? After the first reprint? Or the second?

In the end, the non-commercial clause restricts the creative commons to
consumption, hobbyism, and, how convenient for its academic sponsors, to 
teaching. 

While I see the point of, say, musicians not wanting to have their works misused
in advertisement, the share-alike clause of the GPL would already have prevented
this from ever happening. There is no way in hell that any brand would allow its
ads to be released under the GPL. You cannot put a trademark under the GPL.

The only example I can think of where the GPL would not protect a musician
against crass, unwanted commercial exploitation is if a GPLed song were included
in a commercial compilation of songs. This would not affect the closed licenses
of the other songs, or of the compilation as a whole (similar to including free
software on a CD with other programs).

In the end, while I do not agree with Florian that the differences between works
that are necessarily collaborative and temporary (say, software) and those that
can be individually produced and finished (say, a novel) are negligible, on the
level of the license the share-alike aspect works well for both as a protection
against commercial rip-offs, without producing the problems of a non-commercial
clause.


Felix



+---+-+---
http://felix.openflows.org


#  distributed via : no commercial use without permission
#   is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net


A pragmatic response to a Critique of the Commons without Commonalty

2005-07-08 Thread Felix Stalder
Forwarded with the permission of the author. Felix

--  Forwarded Message  --

Subject: [ipr] A pragmatic response to a Critique of the Commons without 
Commonalty
Date: Thursday, 7. July 2005 19:58
From: Andrew Rens <[EMAIL PROTECTED]>
To: ipr&publicdomain <[EMAIL PROTECTED]>

<...>

The article certainly provides a stimulating
theoretical critique of Creative Commons. There are a
number of places where the critique elides the complex
nature of issues, most notably the characterisation of
Res Communes, and the analysis of precisely what
copyright constitutes as "private" and "public". I am
however not going to address the theoretical critique
of Creative Commons at this point. There are a few
matters of pragmatic import, which the analysis seeks
to occlude. I address these as matters of strategy as
someone who has experienced political struggle, and
also has used law to effect rhizomatic change.

No-one at Creative Commons, certainly not Professor
Lessig is suggesting that Creative Commons is the only
or even primary model for creativity, nor that it
represents the ideal state of copyright. Rather it is
one working model among many. For those who believe
that Intellectual Property can be balanced, a healthy
creative ecosystem will include a multitude of
different models. For those who don't believe in
Intellectual Property at all Creative Commons is a
practical strategy of working towards an open culture
given real world conditions. It has already opened up
space for voices that would not otherwise have been
heard. The energy and excitement surrounding Creative
Commons stem in part from the fact that many people are able,
for the first time, to see what open culture looks like, and so
imagine what it could be in the future. As such it is
a greater stimulant to libre culture than academic
critiques of late capitalist cultural production.

It is also a project which people and organisations of
all theoretical stripes and ideological flavours can
co-operate in, which is why both Jack Valenti and John
Perry Barlow spoke at the opening. One of the greatest
strengths of Creative Commons is this aspect of an
open project; people who may differ on many other
issues can all contribute to the common good.

A common project such as this can easily be
stigmatized as co-option, but only by the same logic
that human rights lawyers, using what law there is
within a repressive state to secure the freedom or
safety of prisoners, can be regarded as co-opted.
Participation in Creative Commons is not an exhaustive
index of who a person is. A person can support
Creative Commons and be committed to the long term
abolition of all Intellectual Property or work full
time for a corporate law firm or any of the whole
range of options in between.

The analysis rejects copyright law as an organising
strategy for creativity, yet does not develop an
alternative vision of a commons, specifying only what
it is not; it is thus difficult to imagine this
theoretical commons at all.

Andrew Rens
Legal Lead
Creative Commons South Africa



+---+-+---
http://felix.openflows.org


#  distributed via : no commercial use without permission
#   is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net


Using copyright to stop the publication of 'mein kampf'

2005-06-23 Thread Felix Stalder

Here's an interesting use of copyright for all of those who track its (ab)use 
for
political reasons. In Poland, publisher Marek Skierkowski is being investigated 
on
behalf of the state of Bavaria for infringing on its copyright on the works of
Adolf Hitler.

The case is the following: The publisher, who has no history of neo-Nazi
sympathizing, decided to publish Mein Kampf purely for commercial reasons. Now,
Poland -- like many other countries in Europe -- has a law criminalizing
distribution of fascist propaganda (?246 of Polish criminal code). However, the
publisher could convince the state attorney that Mein Kampf does not constitute
current political propaganda, but has to be viewed as a historic document and 
that
making it accessible would serve historic and scientific purposes, not the least
because he is clearly not politically motivated. Since he does not try to 
convince
anyone of any political views, his publication do not constitute propaganda, so 
the
reasoning of the attorney.

What does this have to do with copyright and Bavaria? After WWII, the state of
Bavaria was given by the Allies all the copyrights and author's rights belonging to
Hitler, because he was officially registered as a Munich resident at the end of the
war. And now Bavaria tries to use its copyrights to stop the publication in Poland,
after the application of national criminal law failed to do so. Bavaria holds the
copyrights for another ten years (70 years after the death of the author), after
which the works fall into the public domain.

Source: http://www.spiegel.de/politik/ausland/0,1518,361691,00.html


+---+-+---
http://felix.openflows.org




#  distributed via : no commercial use without permission
#   is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net


Brazil approves bill to free Aids drug patents

2005-06-04 Thread Felix Stalder
[via: commons-law <[EMAIL PROTECTED]>]

http://www.aids.gov.br/

Chamber of Deputies unanimously approves
parliamentary bill to free Aids drug patents

The Constitution, Justice and Citizenship
Commission of the Chamber of Deputies (CCJ)
unanimously approved, this Wednesday, Bill Number
22/03 submitted by Federal Deputy Roberto Gouveia
(PT-Sao Paulo). This Bill modifies Article 18 of
the Brazilian Patents Law (9.279/96), thereby
freeing Aids drugs, together with their
manufacturing processes, from patent coverage.
This will enable Brazilian manufacturing
laboratories to make such drugs.

Deputies from different political parties have
taken the view that public health interests as
well as those related to life itself take
precedence over industrial rights and they
therefore voted unanimously for the
constitutionality of the proposal put forward by
Deputy Roberto Gouveia. Deputy Antonio Carlos
Biscaia (PT-Rio de Janeiro), the Reporter for the
bill, explained that protection under the
Constitution of industrial inventions is "not
absolute", but conditional on the interests of
society as a whole. The Bill will now proceed to
the Federal Senate for appraisal.

Voting on the Bill in the CCJ was accompanied by
activists from all over Brazil, together with
representatives of the National STD/Aids Program.
Every time there was a vote cast in favor of the
Bill the activists responded with loud applause,
raising placards supporting approval of the Bill.
When the Table Chairman, Federal Deputy Jose
Mentor (PT-Sao Paulo), announced the end result
of the vote, Plenary Number 1 of the Annex to the
Chamber of Deputies witnessed emotional scenes.
Activists, parliamentarians and representatives
of the Federal Government embraced one another
and congratulated each other on the victory. A
number of them were moved to tears. Roberto
Gouveia was particularly touched by the outcome
of the vote and said he was confident that the
Senate would go ahead and approve the measure. He
said "the Bill will enter the Federal Senate with
strong backing, totally legitimized by all the
commissions that it has transited". Deputy
Gouveia added that the proposal does not fly in
the face of any international agreement. "On the
contrary", he declared, "we are doing what the
Universal Declaration of Human Rights stipulates.
We are acting in defense of life itself".

Laurinha Brelaz of the Manaus Friendship and
Solidarity Network considered that approval of
the Bill in the CCJ was an important landmark in
the struggle against the Aids epidemic. She
declared that "the Bill will make it possible to
manufacture cheaper medicines, therefore
increasing and ensuring access to treatment for
Aids patients". Laurinha went on to call
attention to the fact that the Bill still has to
go through the Senate and for that reason the
movement in its support   "must continue its
active role, otherwise the multinational drug
companies can try to bring influence to bear on
our Senators".

Currently, eight of the 16 antiretroviral drugs
used in Aids treatment and distributed through
the Public Health network in Brazil are under
patent protection. Over 70% of the amount spent
by the Ministry of Health on acquiring anti-Aids
drugs is in fact spent on only three of these
particular medicines. For the Director of the
Brazilian National STD/Aids Program, Dr Pedro
Chequer, approval of this Bill will mark a
watershed internationally and can open up new
negotiating possibilities. In Dr Chequer's words
"this crowns the Doha Declaration and is in line
with what the World Health Organization has been
extolling - that medical drugs for treating Aids
are a right of humanity".

The Bill was voted conclusively and now goes to
the Federal Senate. There is no requirement for
it to be submitted to the Chamber of Deputies
Plenary. If it is approved by the Senate with no
amendments, it will then be submitted to the
President of the Republic for ratification.

Communication Section
 National STD/Aids Program
 Ministry of Health
 Brazil
 +55-61-448-8016/8018
 ___

 The following is Roberto Gouveia's justification
and the original text of Bill  Number 22/03

 CHAMBER OF DEPUTIES

 PARLIAMENTARY BILL

 (submitted by Mr Roberto Gouveia)

 Covers the invention of medication for the
prevention and treatment of the Acquired
Immunodeficiency Syndrome SIDA/Aids and the
procedure for its procurement as non-patentable
materials.

 The National Congress decrees:

 Art. 1º ...
 ...
ART. 18 of Law n.º 9.279, of 14 May 1996, comes into force with the following 
additional clause IV:
 "Art. 18 ...
 
 IV - the medication, together with its
respective procurement procedure, specifically
for the prevention and 

Re: The Ghost in the Network

2005-05-17 Thread Felix Stalder
On Monday, 16. May 2005 12:56, Alexander Galloway and Eugene Thacker wrote:

> We suggest that this opposition between closed and open is flawed. It
> unwittingly perpetuates one of today's most insidious political myths,
> that the state and capital are the two sole instigators of control.
> Instead of the open/closed opposition we suggest the pairing
> physical/social. The so-called open logics of control, those associated
> with (non proprietary) computer code or with the Internet protocols,
> operate primarily using a physical model of control. For example,
> protocols interact with each other by physically altering and amending
> lower protocological objects (IP prefixes its header onto a TCP data
> object, which prefixes its header onto an HTTP object, and so on). But
> on the other hand, the so-called closed logics of state and commercial
> control operate primarily using a social model of control. For, example,
> Microsoft's commercial prowess is renewed via the social activity of
> market exchange. Or, using another example, Digital Rights Management
> licenses establish a social relationship between producers and
> consumers, a social relationship backed up by specific legal realities
> (DMCA). Viewed in this way, we find it self evident that physical
> control (i.e. protocol) is equally powerful if not more so than social
> control. Thus, we hope to show that if the topic at hand is one of
> control, then the monikers of "open" and "closed" simply further confuse
> the issue. Instead we would like to speak in terms of "alternatives of
> control" whereby the controlling logic of both "open" and "closed"
> systems is brought out into the light of day.


I think this equation of "protocol = control", which is also the core of 
Galloway's stimulating book [1], is fundamentally flawed, because it mixes 
terms in ways that are not helpful to a critical political analysis.

A protocol, technical or social, is a series of standards which regulate 
how different entities can interact without the establishment of a formal 
hierarchy. Remember, the term originated in the context of exchanges 
between the king and foreign diplomats. The key point about this relationship 
was that the diplomats were not the king's subjects, yet the diplomats 
were the equals of the king. They were different. The purpose of a protocol 
was to allow them to interact without the establishment of a formal 
hierarchy.

To argue that the protocol now, somehow, controlled the king and the 
diplomats seems strange. The same problem occurs when arguing that the 
Internet Protocol is somehow the ultimate controlling mechanism of the 
Internet. The fact that communication takes place within certain 
constraints, which enable communication in the first place, does not 
equate to control. Rather, constraints on one level (the protocol of 
communication) can provide the grounds for freedom on another level 
(content of communication). This is social theory 101.
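
[As a trivial technical illustration of that last point, a minimal sketch in 
Python; the host is a placeholder. The request must follow the fixed grammar of 
HTTP over TCP/IP to be understood at all, yet nothing in that grammar constrains 
what the two ends actually say to each other in the payload.]

    import socket

    # Placeholder host, for illustration only.
    HOST, PORT = "www.example.org", 80

    # The request line and headers follow the protocol's fixed grammar; without
    # this constraint the two machines could not interact at all.
    request = (
        "GET / HTTP/1.0\r\n"
        "Host: " + HOST + "\r\n"
        "\r\n"                  # the blank line that, per protocol, ends the headers
    )

    with socket.create_connection((HOST, PORT), timeout=10) as sock:
        sock.sendall(request.encode("ascii"))
        response = b""
        while True:
            chunk = sock.recv(4096)     # read until the server closes the connection
            if not chunk:
                break
            response += chunk

    # The envelope is protocol-governed; the body after the blank line is free content.
    headers, _, body = response.partition(b"\r\n\r\n")
    print(headers.decode("ascii", errors="replace"))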

The whole argument of protocol = control seems to rest on a somewhat 
unimaginative reading of Foucault's microphysics of power, in which he 
argued that language itself is a main source of power and that the 
establishment of categories (e.g. madness) was itself a supreme act of 
power. To transfer this one-to-one to the protocols of communication 
networks yields yet another control phantasy (or nightmare, depending on 
your agenda). The only choice it leaves you is to jump into some sort of 
'pre-social' state. And this is precisely what Galloway & Thacker offer 
us:

> Unplug from the grid. Plug into your friends. Adhocracy will rule.
> Autonomy and security will only happen when telecommunications operate
> around ad hoc networking. Syndicate yourself to the locality.

What we have here is the 'social' vs. the 'technical', and the 'unplanned' 
vs. the 'planned'. Why this should lead to more freedom is dubious. Unless 
we understand freedom as absence of rules and control as presence of 
rules. This, however, is a very misleading understanding of these 
concepts, as has been argued often, not the least in the feminist 
critique of the anti-authoritarian social movements of the late 1960s. [2]


PS: I am not arguing that protocols cannot be used as mechanisms of social 
control. But this has to be established on a case-by-case basis, rather 
than by pronouncing protocols to be means of control per se.


[1] Galloway, Alexander R. (2004). Protocol: How Control Exists After 
Decentralization. Cambridge, MA, MIT Press

[2] Freeman, Jo (1972). The Tyranny of Structurelessness. The Second Wave. 
Vol. 2 No. 1 http://www.jofreeman.com/joreen/tyranny.htm

 





+---+-+---
http://felix.openflows.org


#  distributed via : no commercial use without permission
#   is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net

Actors' union shouts 'cut' on digital film

2005-04-12 Thread Felix Stalder
[via fibreculture, sarai, three times around the globe, but still interesting. 
question: are actors part of digital culture, or people with personality 
rights? felix]

http://www.theage.com.au/news/Outsourcing/Actors-union-shouts-cut/2005/04/11/1113071894581.html

Actors' union shouts 'cut' on digital film
By Seamus Byrne
April 12, 2005


The Australian actors union is blocking a world-first remixable film
project, and possibly forcing the production offshore, out of fear
that footage of actors could be misused.

The Media, Entertainment and Arts Alliance has stopped production on
the "re-mixable" film experiment because of plans to release the film
under a Creative Commons (CC) licence. The $100,000 short film
Sanctuary has been seeking a dispensation from the MEAA since January
to allow professional actors to participate in the production. The
film's cast supports the concept but the MEAA board has refused any
dispensation, stalling production scheduled to start in late March.

The CC licence will allow audiences to freely copy and edit the film's
digital assets for non-commercial purposes, this being the issue of
central concern to the MEAA. "We don't see any safe way a performer
can appear in this," says Simon Whipp, MEAA national director.
"Footage could be taken and included in a pro-abortion advertisement
or a pro-choice advertisement.

"Any non-commercial usage the performer may or may not agree with.
Then for commercial work, performers are asked to sign a statement
about what other commercials they have appeared in and this can be
used to determine whether or not to include that performer. Without
full knowledge of future usage of the film, it could unwittingly place
that performer in breach of future commercial agreements."

The film's Australian director, Michela Ledwidge, received an
Inventions award from Britain's National Endowment for Science,
Technology and the Arts, in recognition of the groundbreaking nature
of the experiment.

This has led to further support from the Australian Film Commission,
which is funding the interactive and CGI elements of the work. Carole
Sklan, AFC director of film development, says: "We appreciate that
there are many issues raised by the application of the Creative
Commons licence to Australian productions and have encouraged both the
MEAA and the producer to negotiate to address these."

Ms Ledwidge hopes to allay MEAA fears as part of the application for
dispensation. "We (showed) our intent to be conservative in the re-use
we showcase. If we fail in our duties to operate a trust network there
will be problems but we're up for the responsibility."

The licence supports the moral rights of the author, but Mr Whipp says
the conflict with the CC licence is particular to Australia. "If you
come from where performers also have moral rights, this isn't such an
issue. But here performers have no moral rights - nothing prevents the
ridicule of the performers. We have spoken with Brian Fitzgerald, the
dean of the faculty of law at QUT (who is closely associated with
Creative Commons development in Australia), who understands our
concerns and will look to work with us on the matter."

Ms Ledwidge fears her project will have to head back overseas. "We
will still make the film but plans for an Australian shoot will have
to be revised."

The Creative Commons system is a "some rights reserved" form of
copyright, providing an alternative to the black and white of full
copyright and public domain. Australian versions of the CC licences
only came into effect in February.



+---+-+---
http://felix.openflows.org


#  distributed via : no commercial use without permission
#   is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net


Re: Re: What's the meaning of "non-commercial"?

2005-01-23 Thread Felix Stalder
On Tuesday, 18. January 2005 18:13, Benjamin Geer wrote:

> > This only applies if you assume that each work has a small number of
> > authors, or that these authors are easily identifiable. This, of course,
> > is not the case with major collaborative works. It's next to impossible
> > to identify all the authors of, say, a wikipedia article.
>
> That would be the case whichever licence Wikipedia used.  If a licence
> imposes any restrictions at all, it's possible that someone may wish to
> ask for a special exception. 

This is a bit of a moot point. The difference is not 'restrictions' vs 'no 
restrictions'. Openness and freedom are not constituted by the absence of 
rules (which are always enabling and constraining) but by a particular set of 
rules that is biased to promote certain dynamics and inhibit others.

The question is, what kind of dynamics does the non-commercial clause in the 
CC license promote? Basically, it creates two universes, a commercial one and a 
non-commercial one. The more complex these information goods become in terms 
of their authorship (think of remixes of remixes, etc.), the less possible it 
becomes to move information from the non-commercial universe to the 
commercial one. 

Now, this might sound like a massive attack on capitalism, the one Benjamin 
sees lacking from the GPL.

> Another way of looking at it is that this is one of the limitations of
> FLOSS, which keeps it from contributing to an alternative to capitalism.

However, it seems to me that this critique is totally misguided. For one, it 
assumes that there is a clear boundary between the two categories, which is 
not the case, for two reasons. First, there is no clear definition of those 
terms, and we are back to murky case-by-case decisions. Second, within the 
same process we nowadays routinely find elements that are commercial and 
elements that are non-commercial. The two universes are not separate but deeply 
intertwined, as highlighted in recent concepts such as 'bio-politics'. 

So, the actual effect of the non-commercial clause is to lock information 
into a ghetto where production must be done for free, or where its material 
support cannot be provided by the producers themselves through small-scale 
commercial transactions (say, a DJ selling a few copies of her CD) but needs 
support from government grants, foundations, or other donors. A rather 
strange alternative to capitalism.  


Felix

+---+-+---
http://felix.openflows.org


#  distributed via : no commercial use without permission
#   is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net


Re: Re: What's the meaning of "non-commercial"?

2005-01-18 Thread Felix Stalder
On Sunday, 16. January 2005 06:22, Patrice Riemens wrote:
> This being said, the clausula that prior permission must be sought before
> engaging in _possible_ commercial use does not appear so much of a burden.
> In a culture of copyright such as our own, it is being routinely done all
> the time.

This only applies if you assume that each work has a small number of 
authors, or that these authors are easily identifiable. This, of course, 
is not the case with major collaborative works. It's next to impossible to 
identify all the authors of, say, a wikipedia article. Furthermore, the 
derivative work might be very different from the initial work (say, the 
difference between a full article in wikipedia and the 'stub' from which it 
originated), so that it's questionable whether the original author should 
have any rights in deciding how the derivative work is used.

Hence, one would need a clear definition of what non-commercial means, one 
that could be applied without involving much individual judgment. But, as 
Keith Hart wrote, the distinction between commercial and noncommercial 
activity is largely a (convenient) fiction, something that feminists have 
insisted upon for a long time in their critique of housework.

Is all labor that is being paid for commercial labor? Such a definition 
would make most NGOs etc. commercial. Or, are only activities that are 
profit-oriented to be considered commercial? In this case, all government 
activities would have to be counted as non-commercial.

I'm not sure if, in this context, corporate vs non-corporate would be a 
much better distinction. Are foundations, say the Soros Foundations, 
corporations? It seems to me that it merely displaces the problem.

One of the most innovative aspects of FLOSS is that it has managed to avoid 
exactly this distinction; hence you have people from radically different 
contexts building upon, and contributing to, the same code-base.

In many ways, the GPL provides a de-militarized zone. Everyone agrees to 
leave the big guns at the door. Period. The non-commercial CC license, on 
the other hand, is a pledge not to use the guns if you play nice. And, to 
be on the safe side, being nice means consuming, but not building upon 
works in a serious way.


Felix


+---+-+---
http://felix.openflows.org



#  distributed via : no commercial use without permission
#   is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net


I.B.M. to Give Free Access to 500 Patents

2005-01-11 Thread Felix Stalder

[As the article points out at the end, 500 patents is a relatively small 
number for IBM (which holds more than 10,000 software patents). 
Nevertheless, it represents a significant change in how the world's leading 
patent holder manages its patents. It is also very different from 
Microsoft's current approach of seeking cross-licensing deals among holders 
of large patent portfolios.

IBM Press Release: http://www.ibm.com/news/us/en/2005/01/patents.html
Linux World Story: http://www.linuxworld.com/story/47749_p.htm

Felix]

NYT January 11, 2005
I.B.M. to Give Free Access to 500 Patents
By STEVE LOHR
http://www.nytimes.com/2005/01/11/technology/11soft.html

I.B.M. plans to announce today that it is making 500 of its software patents
freely available to anyone working on open-source projects, like the popular
Linux operating system, on which programmers collaborate and share code.

The new model for I.B.M., analysts say, represents a shift away from the
traditional corporate approach to protecting ownership of ideas through
patents, copyrights, trademark and trade-secret laws. The conventional
practice is to amass as many patents as possible and then charge anyone who
wants access to them. I.B.M. has long been the champion of that formula. The
company, analysts estimate, collected $1 billion or more last year from
licensing its inventions.

The move comes after a lengthy internal review by I.B.M., the world's largest
patent holder, of its strategy toward intellectual property. I.B.M.
executives said the patent donation today would be the first of several such
steps.

John Kelly, the senior vice president for technology and intellectual
property, called the patent contribution "the beginning of a new era in how
I.B.M. will manage intellectual property."

I.B.M. may be redefining its intellectual property strategy, but it apparently
has no intention of slowing the pace of its patent activity. I.B.M. was
granted 3,248 patents in 2004, far more than any other company, according to
the United States Patent and Trademark Office. The patent office is
announcing today its yearly ranking of the top 10 private-sector patent
recipients.

I.B.M. collected 1,300 more patents last year than the second-ranked company,
Matsushita Electric Industrial of Japan. The other American companies among
the top 10 patent recipients were Hewlett-Packard, Micron Technology and
Intel.

I.B.M. executives say the company's new approach to intellectual property
represents more than a rethinking of where the company's self-interest lies.
In recent speeches, for example, Samuel J. Palmisano, I.B.M.'s chief
executive, has emphasized the need for more open technology standards and
collaboration as a way to stimulate economic growth and job creation.

On this issue, I.B.M. appears to be siding with a growing number of academics
and industry analysts who regard open-source software projects as early
evidence of the wide collaboration and innovation made possible by the
Internet, providing opportunities for economies, companies and individuals
who can exploit the new model.

"This is exciting," said Lawrence Lessig, a professor at Stanford Law School
and founder of the school's Center for Internet and Society. "It is I.B.M.
making good on its commitment to encourage a different kind of software
development and recognizing the burden that patents can impose."

I.B.M. has already made substantial contributions to open-source software
projects in the last few years. The company has been the leading corporate
supporter of Linux. It donated computer code worth more than $40 million to
an open-source group, Eclipse, which offers software tools for building
programs. Last year, I.B.M. gave to an open-source group a database program
called Cloudscape, which cost the company $85 million to develop.

Those past contributions, however, have gone mainly to projects that serve to
make Linux - fast becoming a viable alternative to the operating systems
Windows from Microsoft and Solaris from Sun Microsystems - more attractive to
corporate customers. In that respect, supporting Linux helps to undermine
I.B.M.'s rivals and can be seen as a smart tactic for I.B.M. The company's
commercial software strategy is focused largely on its WebSphere software,
which runs on top of operating systems.

Today's move by I.B.M. is not aimed at a specific project, but opens access to
14 categories of technology, including those that manage electronic commerce,
storage, image processing, data handling and Internet communications.

"This is much broader than the contributions we've made in the past," said Jim
Stallings, vice president for standards and intellectual property at I.B.M.
"These patents are for technologies that are deeply embedded in many industry
uses, and they will be available to anyone working on open-source projects
including small companies and individual entrepreneurs."

I.B.M. executives said they hoped the company's initial contribution of 500

Creating Monopolies of Knowledge

2004-11-11 Thread Felix Stalder
This is an extraordinary article, because the Microsoft person is entirely 
open about what they want to do. Here, the issue of patenting is not about 
protecting inventors, not even about protecting investments in research, but 
about being able to build alliances among major corporations, which will then 
hold in their 'walled commons' the "vast majority" of relevant patents. 

According to his estimates, there are 30 relevant companies (i.e. with large 
patent portfolios). Of these 30, already half, if not more, have entered into 
"broad cross-licensing pacts." What we can see here is the institutional 
formation of 'Information Feudalism'. Felix 


Microsoft--license to deal
By Ina Fried

http://news.com.com/Microsoft--license+to+deal/2100-1012_3-5440881.html
November 8, 2004, 4:00 AM PST

After stepping up its own patent push, Microsoft is now trying to get its 
hands on other companies' intellectual property.

Doing so will give the company more freedom to develop software in new areas 
and help the company as it seeks to indemnify its customers against any 
claims of patent infringement.

"If we are able to strike cross-licensing deals with the top 30 technology 
companies, that alone would provide us access to a vast majority of the 
patents in areas we care about," David Kaefer, Microsoft's director of 
intellectual property licensing, told CNET News.com.

Microsoft has roughly 100 licensing deals in the works, with about 15 to 20 
being broad cross-licensing pacts with other large companies, Kaefer said, 
adding that it can take from one to two years to reach an accord.

"We're making good progress on some," Kaefer said. "Others are moving more 
slowly."

It has been 11 months since Microsoft said it would step up its intellectual 
property efforts. The company has two formal licensing programs--one for its 
FAT file format and the other for its ClearType font rendering 
technology--and it could add more soon.

The company has started to increase the number of sales coming in, but Kaefer 
said the amount of money Microsoft makes by licensing its patents and other 
intellectual property is still far less than the revenue from any of its 
traditional business units.

Because Microsoft paid out about $1.4 billion in the last fiscal year to 
license other companies' technology, turning a profit is not a realistic 
goal, he said. "However, there is an opportunity to narrow the gap."

Join the clubs
Microsoft also is rapidly trying to boost its presence among the elite in the 
patent filing world. The software giant, which holds less than 4,000 patents, 
plans to file 3,000 applications for patents this year alone.

As Microsoft tries to identify companies to talk with on technology swaps, it 
is trying to think broadly--even striking deals with perceived rivals, such 
as its agreement with PalmOne. "The thing about IP licensing is you can build 
alliances with companies people might otherwise see as strange bedfellows," 
Kaefer said.

In many cases, striking a deal is the easy part, but implementing the 
cooperative elements can be a challenge. One need only look at the slow pace 
of work with Sun Microsystems to see how challenging it can be to implement 
such accords.

Give and take
Microsoft also is finding things tricky as it tries to work with standards 
bodies and open-source communities, something that is clearly a delicate 
process. The recent challenges over patent issues related to the Sender ID 
antispam standard illustrate how conflicts can arise even when various 
parties have good intentions, Kaefer said.

One place the software titan is trying to avoid is the courtroom. Following 
the lead of its intellectual property lawyer, former IBM attorney Marshall 
Phelps, Microsoft is seeking to beef up its licensing without having to file 
a bunch of suits to do so. Kaefer noted that Phelps built IBM's intellectual 
property business without filing a single lawsuit (although he inherited one 
when he took the job).

That said, Microsoft is pursuing negotiations with companies it feels are 
using its intellectual property. "It's not possible for us to just look the 
other way," he said.

 
+---+-+---
http://felix.openflows.org


#  distributed via : no commercial use without permission
#   is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: [EMAIL PROTECTED]


Re: Re: Reflections on Dan Hunter's Culture War

2004-10-07 Thread Felix Stalder
On Wednesday 06 October 2004 12:54, Jamie King wrote:
> In respect to 'Marxist-Lessigism', it is a good gag, but a one-liner. It
> should be restricted to one line. Lessig's reformist position has very
> little in common with a Marxist analysis.

While reading the original article, I kept wondering what Hunter was 
really after. There are so many inconsistencies in his text, it's 
mind-boggling.

'Marxist-Lessigism' is not an analytical category but a polemical one. It does
not illuminate in any way what Lessig, much less the broader 'movement',
is all about. The main intention is to associate Lessig with something that
has, in these circles, clearly very negative connotations. I'm sure we
will see this phrase soon in Forbes or in the Wall Street Journal.

At one point Hunter himself seems to recognize the purely polemical 
character of his argument. He criticizes those who call Lessig a Marxist 
for using "a simple rhetorical cherry-bomb that makes plenty of noise and 
smoke, but illuminates little." How different is what he does?

Of course, Jamie is right to point out that Lessig's position has nothing 
to do with a systemic critique a la Marx. Curiously, Hunter himself knows 
this as well; he writes that "in reality, Marxist-Lessigism is little more 
than a small set of limitations on the expansion of intellectual property." 
Lessig's critique of IP is about as revolutionary as George Soros' 
analysis of global capitalism.

So, what does Hunter want? He clearly is not an IP maximalist, because he calls
this a "ridiculous position".

Strangely, after criticizing Lessig both for being a radical _and_ for 
focusing only on a 'small set of limitations', he abruptly switches the 
argument and adopts Moglen's line: the law doesn't really matter, because 
the revolution is inevitable. The low costs of production suddenly enable 
people to follow non-economic incentives in the production of culture. 
Like Moglen, Hunter completely ignores the possibility of a capitalist 
service economy on top of a free software commons (which is what IBM, 
Novell, RedHat, etc. are after), and he also ignores the threat to the open 
source communities posed by software patents (which is what Microsoft, etc. 
are after).

In many ways, Hunter's main motivation seems to be to take a piss at Lessig by 
painting him as a radical reformist working in an area that is of no 
relevance. Ouch! While I think a solid critique of Lessig is necessary, 
Hunter's is wanting on almost every level imaginable.


Felix




+---+-+---
http://felix.openflows.org


#  distributed via : no commercial use without permission
#   is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: [EMAIL PROTECTED]


Mickey Mao's alliance with China's communists

2004-09-25 Thread Felix Stalder
[And this after Nikita Khrushchev could not go to Disneyland in 1959. 
http://www.snopes.com/disney/parks/nikita.htm]


By MARINA JIMÉNEZ
With a report from Reuters
http://xrl.us/c7cr

UPDATED AT 3:06 AM EDT  Friday, Sep 24, 2004

Forty-five years ago, during the Maoist era, Chinese people were ordered 
to kill rats as part of a hygiene campaign.

This week, Walt Disney Co. announced it had entered into a partnership 
with China's Communist Youth League to promote Mi Laoshu, or Mickey Mouse, 
whose Chinese name means rat as well as mouse.

The California-based entertainment giant is organizing "outreach programs" 
through the Youth Palaces run by the Communist Youth League to teach 
Chinese children about the travails of Daffy Duck and Goofy, how to draw 
Mickey, and the many other delights of North American consumerism.

The marketing ploy, in advance of the opening of a $1.8-billion (U.S.) 
Hong Kong Disneyland, is a telling sign of the extent to which China has 
become more corporate than Communist.

Mi Laoshu is already known in some Chinese cities, his image on countless 
knockoff T-shirts, computers, schoolbags and toys in flagrant violation of 
copyright laws.

He is even used as a mascot for an English-language school for children. 
But few Chinese associate Mickey with Walt Disney, which will make it 
difficult to market Hong Kong Disneyland, slated to open in late 2005 or 
early 2006, observes Mike Szonyi, a professor at the University of 
Toronto.

Walt Disney's partnership with China's Youth Palaces "is one part of an 
overall brand-building process," Jay Rasulo, president of Walt Disney 
Parks and Resorts, told Reuters in Hong Kong yesterday.

Such grassroots brand-building was not needed when Disney opened its Paris 
and Tokyo theme parks, said Mr. Rasulo, who was in town to view the mock 
turret being put in place atop Sleeping Beauty's castle at Hong Kong 
Disneyland on rural Lantau Island.

"We've had to be innovative. If you look at Europe and Tokyo, the brand 
was far better understood," he said.

In July, Mickey Mouse and other Disney characters visited 500 children at 
two youth centres in Guangzhou, in southern China, and the company plans 
other interactive games and storytelling visits. Other efforts to build the 
Disney brand include storytelling in public libraries, tours of shopping 
malls by the famous mouse and other characters such as Goofy and Donald 
Duck, and programs on Hong Kong television, which can be viewed in 
southern China.

"In one session, we teach [children] to draw Mickey Mouse; they're all 
amazed by that," said Irene Chan, vice-president for public affairs at 
Hong Kong Disneyland. "We hope we can expand to more cities and 
provinces."

Prof. Szonyi notes there is a disconnect between Mi Laoshu and Walt Disney:
"The Chinese have had Mickey Mouse for decades, but he's a copyright 
rip-off version," he said. "The company has to repair this."

Some 70 million young Chinese are members of the Communist Youth League. 
Disney expects that about one-third of visitors to its Hong Kong park, the 
second in Asia after Tokyo, will come from mainland China.

Some in Hong Kong worry the park will suffer from competition if Disney 
builds another park in Shanghai, which the company says it will not do 
before 2010.

"There's very little doubt in my mind that there will be a market further 
north in China for a second Disneyland," Mr. Rasulo said.


+---+-+---
http://felix.openflows.org


#  distributed via : no commercial use without permission
#   is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: [EMAIL PROTECTED]


Re: A 'licensing fee' for GNU/Linux?

2004-08-09 Thread Felix Stalder

OK, let me try to restate my argument somewhat differently so as to take into
consideration a) the fact that software being proprietary _per se_ does
not indemnify the user (Florian's point), b) that SW patents create a
mess for all programmers (Scott's point), and c) that none of us is a
patent lawyer, hence we don't know when patent infringement creates
liability for the author and when for the user (Novica's point).

The key point here is b). SW patents make the publishing of software code
more difficult because they create uncertainty over IP rights. This
uncertainty can be limited, but never completely eliminated, by extensive
and expensive patent research.

Users, for understandable reasons, don't want to be exposed to this kind
of risk, so they will demand, if that is not already a standard clause in
contracts, that the provider of software guarantees that he has all the
rights to the software licensed. So the party which issues the license of
the software will have to assume this risk.

Small companies have a hard time doing this because they can neither
afford the necessary research to assess the risk realistically, nor
afford to pay possible settlements in case they get sued successfully.
After all, how many companies could pony up more than $520 million as
the result of an infringement suit?

Large companies can deal with this risk for a variety of reasons. They
hold many of the patents themselves; they are in cross-licensing
agreements with other companies with large patent pools; they have the
lawyers necessary to fight the cases and they have the reserves to pay the
occasional fine as a general costs of doing business.

Small companies have none of that and, this is the key point, neither do
various foundations and authors of FOSS. Consequently, neither small
proprietary software companies nor FOSS communities can issue such
guarantees, and hence the users of their software will have to assume the
risk.

For users of FOSS unwilling to accept such risk -- mainly large
institutional users -- there are two possibilities. One is to buy their
FOSS solution from a major vendor that offers indemnification as part of
the service contract (similar to a provider of proprietary software). The
other is to purchase insurance (like the one offered by OSRM). Both create
costs not entirely dissimilar to a licensing fee.

In addition, I would speculate that such indemnification clauses and
insurance policies will limit the freedom of development in the future and could
lead to a concentration in the SW industry, proprietary _and_ FOSS. The
difference is that the proprietary SW industry is already highly
concentrated, whereas the FOSS industry is usually thought of as more
decentralized.

In this sense, SW patents will not kill FOSS, but they will give large
companies much more leeway in determining its future, substantially
hollowing out the 'freedom' in free software.

Felix



+---+-+--- 
http://felix.openflows.org



#  distributed via : no commercial use without permission
#   is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: [EMAIL PROTECTED]


Re: A 'licensing fee' for GNU/Linux?

2004-08-08 Thread Felix Stalder
> Felix, sorry if I sound rude, but this is not true, and you
> unintentionally spread FUD here!
>
> Proprietary licensing does _not_ protect customers from patent litigation,
> unless the license contract explicitly states so. Software patents can be
> and have been enforced against users/licensees of proprietary software,
> too. Unisys' enforcement of the LZW/GIF patent, with its legal action
> against websites that used GIF images in 1999 (see
> ) is a prominent example.

Well, actually, the story of the GIF patent controversy is exactly the
other way around and fits perfectly into my argument about differences
between proprietary and FOSS in terms of risk exposure in the coming
patent mess.

Yes, Unisys did sue some people over their use of .gif files on their
webpages. But the details are important here. As Mark Starr, General
Patent and Technology Counsel for Unisys at the time, explained it to
Slashdot "if the GIFs on your Web site were created with software that is
licensed by Unisys, you are fine. Nobody at Unisys is going to try to get
$5000 or even $0.50 out of you. Period." [1]

As he continued to explain, all of the major proprietary packages (Adobe,
Corel, etc.) had licensed the patented technology, and hence users were
entitled to make as many .gif images as they wanted for whatever purpose.
What they were after were people who used programs that had not licensed
the patents, which were mainly freeware (though sometimes this freeware
was distributed as part of commercial software) and FOSS programs (though
they played a minor role back then in the field of graphic design).

> The suspension of the Munich Linux project, which was made to alarm the
> public about future risks for free software through software patenting
> in the EU, was therefore dangerously dumb shoot-yourself-into-the-foot
> PR which did nothing but play into the hands of the proprietary
> software industry.

Independent of how you think about the timing and its strategic value, the
problem is real and it's not going to go away by not talking about it. It
seems pretty clear to me that patents will be a major weapon against FOSS,
and the more this becomes public knowledge, the better it is for the fight
against software patents. Contrary to what Moglen preaches so eloquently,
the development of technology is never straight, and the FSF does not have
it all figured out.

Recently, a two-year-old memo written by someone at HP arguing that
patents are the Achilles heel of FOSS has surfaced [2]. He points in
particular to section 7 of the GPL [3], which explicitly forbids
distributing GPL'ed software that contains patented technology requiring
license fees. Asked to respond to it, Eben Moglen copped out, saying that
the filing of a lawsuit alleging patent infringement would not be enough to
activate section 7. What he did not say was that a positive court decision
would!

Now, is this going to 'shut down' FOSS? I doubt it, because major
companies such as IBM and HP have invested massively in FOSS, and
Microsoft and others have little interest in alienating them. But it could
substantially transform the social dynamics around FOSS. After all, one of
the not-so-unintended consequences of the patent system is that it allows
companies to form cartels without running into anti-trust issues.


[1] http://slashdot.org/article.pl?sid=99/08/31/0143246
[2] http://www.newsforge.com/article.pl?sid=04/07/19/2315200
[3] http://www.gnu.org/copyleft/gpl.html


+---+-+---
http://felix.openflows.org




#  distributed via : no commercial use without permission
#   is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: [EMAIL PROTECTED]


A 'licensing fee' for GNU/Linux?

2004-08-07 Thread Felix Stalder
It seems like the real battle over the future of Free and Open Source Software 
is being fought in the area of patents, not copyright.

Copyright, which protects a particular expression, is very hard to infringe 
upon involuntarily. Even if two people happen to have the same idea, chances 
are they will express it differently. From the point of view of copyright, 
no harm done. Patents are different: they protect an idea, independent of 
its expression. If you have an idea that someone else has already patented, 
tough luck. It's not your idea anymore. The history of technology is full of 
cases where two people came up with the same invention, but one was faster. 
Famous is the case of Alexander Graham Bell and Elisha Gray. Both filed their 
patents for the telephone on February 14, 1876. Bell's was the fifth entry of 
that day, while Gray's was the 39th.

Fast forward to today. Patent offices everywhere are drowning in applications 
and are chronically understaffed. Once a patent has been submitted, it can 
take more than a year before it is reviewed, but once it has been approved, 
it becomes valid retroactively. In an area like software development, where 
product cycles are counted in months rather than years, this introduces 
irreducible uncertainty. There is no way of knowing what patents are in the 
pipeline. Combine that with the fact that complex software packages include a 
potentially large number of ideas that might, or might not, be patentable, and 
it becomes evident that it's essentially unknowable whether there might be a 
patent issue hidden somewhere.

This applies to all kinds of software, proprietary as well as free/open 
source. From a user's point of view, there is, however, a crucial 
difference. With proprietary software, the company from which the software 
is licensed assumes all responsibility and the user has no worries beyond the 
licensing fees. So, when last August a court ruled in an exceptional case that 
Internet Explorer improperly contained patented technology, it was Microsoft 
that had to pay up $520,600,000.00 [1]. For the users, the verdict had no 
relevance whatsoever. The case was exceptional because usually large 
corporations can settle their patent disputes by cross-licensing their patent 
portfolios. That makes things easy and has the nice effect of keeping others 
out.

The case is different with Free/Open Source Software. In this case, the users 
are at real risk. The city of Munich realized this and, in early August, 
postponed their high-profile switch to Linux to assess the patent risk. For 
the moment, they remain committed to the migration project [2]. They were 
afraid of suddenly receiving an injunction and having to stop using their 
Linux machines. The chances, one might guess, are remote, but even this is 
unacceptable to a public administration.  

A few days earlier, a company called 'Open Source Risk Management' [3], which 
has Bruce Perens as one of its board members, issued a report warning that 
the Linux kernel potentially infringes on 283 patents. Of these, only 98 are 
owned by companies currently friendly to Linux, including 60 from IBM, 20 
from Hewlett-Packard and 11 from Intel. This warning was not entirely 
disinterested, since OSRM will soon begin to sell insurance. The price, as 
announced so far, is $150,000.00 per year, and this protects against 
settlement costs of up to $5,000,000 [4].

In a similar vein, large Linux sellers such as IBM and HP offer indemnity 
clauses as part of their Linux deals (in the context of the SCO case).

It's not a big leap of imagination to see the explicit costs of an insurance 
policy, or the implicit costs of an indemnification clause as part of a 
service contract, as a kind of 'licensing fee' for Linux. And like other 
licensing contracts, they could introduce serious restrictions that work 
perfectly well on top of GPL code. In HP's case, for example, the 
indemnification only applies to Linux run on HP hardware. 
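
As a back-of-the-envelope illustration of this point, using only the 
figures quoted above (the per-machine breakdown is my own hypothetical, 
not OSRM's actual pricing model):

annual_premium = 150_000    # USD per year, the OSRM price quoted above
coverage_cap = 5_000_000    # USD of settlement costs covered

# premium as a share of the maximum coverage
print(f"premium / coverage = {annual_premium / coverage_cap:.1%}")    # 3.0%

# spread over an installed base, the premium looks like a per-machine fee
for machines in (100, 1_000, 10_000):
    print(f"{machines:>6} machines -> ${annual_premium / machines:,.0f} per machine per year")

The point is not the exact numbers but the structure: a recurring, 
institution-level payment attached to the use of nominally free code.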

In the case of OSRM, one must assume that there will be limits to the kinds of 
modifications one is allowed to make to the software. Perhaps there will be a 
list of approved modules one may compile into the kernel under the terms 
of the insurance. In one way or another, OSRM will have to define exactly what 
code the insurance covers.

While this kind of patent risk is unlikely to hit the end user directly, it 
might turn into a major issue for institutional users who are vital in 
helping Linux break out of its current niche.

If anything, this problem is going to get worse. At the end of July, Microsoft 
announced that it plans to file 3000 patents this year. This would be a 
significant increase over the 2000 patents it filed last year and the 1000 
patents filed just a few years ago. No wonder Bill Gates says that this "is 
something that we are pretty excited (about)."[5] 


[1] http://www.ucop.edu/news/archives/2003/aug11art1.htm
[2] http://www.muenchen.de/Rathaus/bb_dir/presse/2004/08/99

Re: The Art of Sweatshop

2004-08-03 Thread Felix Stalder
Andrew, Rana,

I know nothing about this particular outfit other than its email 
advertisement, so calling it a 'sweatshop' was more an act of parody a la 
'spam kr!it!k' than one of analysis. The subject line 'business' seemed 
rather bland. Yet, it was also not random, as the message struck me for 
several reasons. 

First, paintings are treated like any other commodity whose costs can be 
lowered by outsourcing production to a low-wage country. So, also for art, 
Southern China becomes the 'low-cost manufacturing base.' Second, like many 
other low-end businesses, this proposition is spewed about randomly as spam. 
In fact, nettime got it several times (that's why I noticed it). Third, it 
contains some rather untrustworthy claims, such as the paintings being done 
by 'famous artists', though these remain unspecified.

Most importantly, though, it introduces an extreme separation -- extreme in 
the context of Western art, more common in the textile industry -- between 
ordering and producing. While made-to-order art has never entirely gone out 
of fashion with the artist becoming an autonomous subject (so the story line 
goes), it has been transformed into an intimate process (as in having your 
portrait painted). As such, it's based on a supposedly deep relationship 
between the person doing the ordering and the one doing the execution. 

Now, this email indicates that two things are happening. The made-to-order 
relationship is reappearing, with all the loss of status that entails for the 
artists (a 'famous artist', yet anonymous, like the great medieval 
artists/artisans). Yet, at the same time, this relationship has been broken 
under the cost-imperative. This allows one to enjoy the product, which, like a 
brand, has a status value much higher than its use value, without any regard 
for the context of its production. While this is not a sufficient cause to 
assume sweatshop production conditions, it's a necessary step to establish 
them for the production of high-value objects.


Felix

On Sunday 01 August 2004 18:03, Andrew Ross wrote:

> Re: the subject line. Just a matter of interest, why do you assume this is
> a sweatshop operation? Simply because it is in China?  Or is it impossible
> to imagine the condition of Chinese artisans as comparing favorably with
> their Western counterparts?
 <...>

-- 
+---+-+---
http://felix.openflows.org

#  distributed via : no commercial use without permission
#   is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: [EMAIL PROTECTED]


Re: "Content Flatrate" and the Social Democracy of the Digital Commons

2004-07-17 Thread Felix Stalder
This is a pretty good, if partisan, summary of the discussion, and it 
highlights one of the most fundamental, and I would agree troubling, 
differences between the Creative Commons and the 'flatrate' approach. CC 
relies on a bottom-up strategy that can start right here and right now. No 
need to wait for 'them' to do something before 'you' can get going.

The flatrate proposal has, in its implementation, strong top-down aspects. You 
cannot start it small and you cannot start on your own. This is where the 
EFF's voluntary proposal is fundamentally flawed.

In terms of process, this is a problem, and process matters a lot if you don't 
know where you are headed -- and I think it's pretty safe to say that nobody 
really knows how things will shape up in this area. It's all trial and error. 

However, the critique is based on three very questionable assumptions.

First. In terms of network design, Rasmus, again and again, equates the system 
architecture with the application that will run on the system. The system has 
centralized aspects, hence the applications and the effects of these 
applications will be centralizing. Yet, if you look at it, there is no direct 
relationship between network design and application effects. Take, as an 
example, the railroad network. It's highly centralized, yet its social 
effects were decentralizing. Take electronic networks. Their architecture is 
decentralized on some levels, centralized on others, and the effects are 
centralizing control and decentralizing execution, at least in the 
economy [1]. Now what does that have to do with Rasmus' argument, one might 
ask.

Rasmus argues that because of the top-down aspects of the proposal, the effect 
will be just as top-down. It will only make the mega-corps richer. Well, it 
won't. Take the situation today. What does an artist really need a label 
for? Distributing CDs and collecting money. And for this, the bigger, the 
better. Small labels would like to do that too, but there are structural 
reasons that favor economies of scale, not least because you need a large 
apparatus to distribute stuff and collect money. The p2p networks are great 
at the former, but lousy at the latter. Hence, the majors have lost 
control only over distribution, not over compensation, and as long as this 
situation persists, they hold some important cards.

Now, when it comes to compensation, they do a really poor job, but the key is, 
they are still better than anyone else. And this is the reason why they still 
exist and why few musicians are outright fans of p2p. If you can wrest 
control over the other half away from the majors as well, their lease on life 
has expired for good. Then we will have a situation where smaller labels will 
prosper because they can concentrate on doing what they do best -- supporting 
talent -- while not being structurally disadvantaged when it comes to 
compensating talent. In this perspective, a network architecture that has 
top-down aspects can have a decentralizing effect.
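
For readers who have not followed the flatrate debate: the mechanism at 
stake is, schematically, a pro-rata split of a collected levy. A minimal 
sketch of my own (the levy, the usage counts and the names are invented; 
no concrete proposal is implied):

def distribute_flatrate(levy_pool, usage_counts):
    # split a collected levy among rights holders in proportion to measured usage
    total = sum(usage_counts.values())
    return {artist: levy_pool * count / total
            for artist, count in usage_counts.items()}

# e.g. 1,000,000 subscribers paying a 5 euro/month flat fee
pool = 1_000_000 * 5.0
plays = {"small_label_artist": 120_000, "major_label_artist": 880_000}
print(distribute_flatrate(pool, plays))
# {'small_label_artist': 600000.0, 'major_label_artist': 4400000.0}

The interesting question is not this arithmetic, but who runs the counting 
and the collecting -- which is where the top-down aspects come in.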



Second. Most artists don't make any money today, so why should they make any 
tomorrow?

> Cultural producers are making their living in a true multitude of ways.
> The sale of reproductions is just one. People have other jobs part- or
> full-time, they have subsidies of different kinds, some are students,
> many get money by performing live and giving lessons. In general,
> "workfare"-type political measures on the labor market [22] is a far
> bigger threat against most artists than any new reproduction technique.

Great, working 8 hours at McDonald's so you can produce free culture in your 
spare time. Or perhaps free culture is only for those lucky enough to have 
high-paying jobs that give them free time (like high-end programmers?).

I personally don't like the situations -- and I'm sure most of us know them -- 
where everyone gets paid except the artists. How many artists show their 
stuff without compensation in museums and kunsthallen? How many curators work 
there for free? How many printers print the fancy catalogues for free? How 
many janitors do? 

You get the drift. There is a clear imbalance, and one that gets legitimized 
with some outmoded mystique about creative work being rewarding in and of 
itself. OK, artists don't get paid in cash, but, hey, they are showered with 
symbolic capital! 

It is not that a 'new reproduction technique' is threatening the artists. 
What's happening is that, despite deep technological change, the situation is 
not changing. All that empowering and, yes, decentralizing potential of new 
media has stopped just where the money would have started. Hm.

Also, demanding that the welfare state cross-subsidize the production of 
culture through a generous system of unemployment insurance is not only not 
particularly realistic, but also strange in a text that uses 'social 
democracy' with such pejorative undertones. I find it hard to tell where 
social democracy ends and the welfare state starts and 

EU sponsors pro-DRM PR

2004-07-02 Thread Felix Stalder

This is an excerpt from the excellent EDRi (European Digital Rights) 
newsletter. 

<...>


4. EU initiative to make DRM more acceptable


The European Commission has funded a new project to make Digital Rights
Management more acceptable to consumers. INDICARE (the Informed Dialogue
about Consumer Acceptability of DRM Solutions in Europe) is distributing
its first e-mail newsletter this week. The newsletter includes links to
articles on the INDICARE website that are conceived as the starting point
for online discussions. Under the E-Content programme 2003-2004 1 million
euro is allocated for 'accompanying measures' like community building.

DRM-technology is seen by both the Commission and the (multi-)national
entertainment industry as the best solution to control copyrights in a
digital environment. Civil rights organisations, data protection
authorities and consumer unions however are not very keen on giving
complete control over their reading, listening and watching habits to
industrial parties. Initiatives to integrate DRM in both hard- and
software, like the TCPA initiative, have strongly been criticised for
violating fundamental freedoms of computer and internet usage.

Apparently, the Commission believes there is nothing fundamentally wrong
with the idea; it only needs some better public relations. The first few
articles on the website painfully illustrate how arrogantly the industry
currently thinks about citizens as passive consumers. The report 'A bite
from the apple' about a DRM-conference in New York in April 2004,
describes how DRM was re-defined as 'Digital Richness Management' by a
representative from RightsCom, leaving no doubt about the destination of
that wealth.

The report continues: "It was interesting to note that no representatives
of consumer organisations or other institutions representing the consumer
side were present at the conference. Invisible also were interest groups
representing the interests of consumers as citizens in access to
information services and infrastructure under affordable, reasonable
conditions, and under conditions that respect further public interest
objectives. (...) It was even more interesting to note that some of the
conference participants clearly welcomed this situation. As Josh Hug,
Development Manager at RealNetworks Inc. put it: "Consumers are not
represented here, perhaps that is good. They do not have to be. They have
already enough power."

Indicare website
http://www.indicare.org

Ross Anderson Trusted Computing FAQ (august 2003)
http://www.cl.cam.ac.uk/~rja14/tcpa-faq.html

European Commission Communication on the Management of Copyright and
Related Rights (16.04.2004)
http://www.europa.eu.int/comm/internal_market/en/intprop/docs/com-2004-261_en.pdf


<...>


15. About


EDRI-gram is a bi-weekly newsletter about digital civil rights in Europe. 
Currently EDRI has 16 members from 11 European countries. European Digital 
Rights takes an active interest in developments in the EU accession countries 
and wants to share knowledge and awareness through the EDRI-grams. All 
contributions, suggestions for content or agenda-tips are most welcome.

Except where otherwise noted, this newsletter is licensed under the
Creative Commons Attribution 2.0 License. See the full text at
http://creativecommons.org/licenses/by/2.0/

Newsletter editor: Sjoera Nas <[EMAIL PROTECTED]>

Information about EDRI and its members:
http://www.edri.org/

- EDRI-gram subscription information

subscribe by e-mail
To: [EMAIL PROTECTED]
Subject: subscribe

You will receive an automated e-mail asking to confirm your request.

unsubscribe by e-mail
To: [EMAIL PROTECTED]
Subject: unsubscribe

- EDRI-gram in Russian, Ukrainian and Italian

EDRI-gram is also available in Russian, Ukrainian and Italian, a few days
after the English edition. The contents are the same.

Translations are provided by Sergei Smirnov, Human Rights Network, Russia;
Privacy Ukraine and autistici.org, Italy

The EDRI-gram in Russian can be read on-line via
http://www.hro.org/editions/edri/

The EDRI-gram in Ukrainian can be read on-line via
http://www.internetrights.org.ua/index.php?page=edri-gram

The EDRI-gram in Italian can be read on-line via
http://www.autistici.org/edrigram/

- Newsletter archive

Back issues are available at:
http://www.edri.org/cgi-bin/index?funktion=edrigram

- Help

Please ask <[EMAIL PROTECTED]> if you have any problems with subscribing or
unsubscribing.


Publication of this newsletter is made possible by a grant from
the Open Society Institute (OSI).



+---+-+---
http://felix.openflows.org

#  distributed via

FBI ABDUCTS ARTIST, SEIZES ART

2004-05-27 Thread Felix Stalder
--- Forwarded message follows ---

May 25, 2004
FOR IMMEDIATE RELEASE

FBI ABDUCTS ARTIST, SEIZES ART
Feds Unable to Distinguish Art from Bioterrorism
Grieving Artist Denied Access to Deceased Wife's Body

 DEFENSE  FUND ESTABLISHED - HELP URGENTLY NEEDED

Steve Kurtz was already suffering from one tragedy when he called 911
early in the morning to tell them his wife had suffered a cardiac arrest
and died in her sleep.  The police arrived and, cranked up on the rhetoric
of the "War on Terror," decided Kurtz's art supplies were actually
bioterrorism weapons.

Thus began an Orwellian stream of events in which FBI agents abducted
Kurtz without charges, sealed off his entire block, and confiscated his
computers, manuscripts, art supplies... and even his wife's body.

Like the case of Brandon Mayfield, the Muslim lawyer from Portland
imprisoned for two weeks on the flimsiest of false evidence, Kurtz's case
amply demonstrates the dangers posed by the USA PATRIOT Act coupled with
government-nurtured terrorism hysteria.

Kurtz's case is ongoing, and, on top of everything else, Kurtz is facing a
mountain of legal fees. Donations to his legal defense can be made at
http://www.rtmark.com/CAEdefense/

FEAR RUN AMOK

Steve Kurtz is Associate Professor in the Department of Art at the State
University of New York's University at Buffalo, and a member of the
internationally-acclaimed Critical Art Ensemble.

Kurtz's wife, Hope Kurtz, died in her sleep of cardiac arrest in the early
morning hours of May 11. Police arrived, became suspicious of Kurtz's art
supplies and called the FBI.

Within hours, FBI agents had "detained" Kurtz as a suspected bioterrorist
and cordoned off the entire block around his house. (Kurtz walked away the
next day on the advice of a lawyer, his "detention" having proved to be
illegal.) Over the next few days, dozens of agents in hazmat suits, from a
number of law enforcement agencies, sifted through Kurtz's work, analyzing
it on-site and impounding computers, manuscripts, books, equipment, and
even his wife's body for further analysis. Meanwhile, the Buffalo Health
Department condemned his house as a health risk.

Kurtz, a member of the Critical Art Ensemble, makes art which addresses
the politics of biotechnology. "Free Range Grains," CAE's latest project,
included a mobile DNA extraction laboratory for testing food products for
possible transgenic contamination. It was this equipment which triggered
the Kafkaesque chain of events.

FBI field and laboratory tests have shown that Kurtz's equipment was not
used for any illegal purpose. In fact, it is not even _possible_ to use
this equipment for the production or weaponization of dangerous germs.
Furthermore, any person in the US may legally obtain and possess such
equipment.

"Today, there is no legal way to stop huge corporations from putting
genetically altered material in our food," said Defense Fund spokeswoman
Carla Mendes. "Yet owning the equipment required to test for the presence
of 'Frankenfood' will get you accused of 'terrorism.' You can be illegally
detained by shadowy government agents, lose access to your home, work, and
belongings, and find that your recently deceased spouse's body has been
taken away for 'analysis.'"

Though Kurtz has finally been able to return to his home and recover his
wife's body, the FBI has still not returned any of his equipment,
computers or manuscripts, nor given any indication of when they will. The
case remains open.

HELP URGENTLY NEEDED

A small fortune has already been spent on lawyers for Kurtz and other
Critical Art Ensemble members. A defense fund has been established at
http://www.rtmark.com/CAEdefense/ to help defray the legal costs which
will continue to mount so long as the investigation continues. Donations
go directly to the legal defense of Kurtz and other Critical Art Ensemble
members. Should the funds raised exceed the cost of the legal defense, any
remaining money will be used to help other artists in need.

To make a donation, please visit http://www.rtmark.com/CAEdefense/

For more information on the Critical Art Ensemble, please visit
http://www.critical-art.net/

Articles about the case:
http://www.rtmark.com/CAEdefense/news-WKBW-2.html
http://www.rtmark.com/CAEdefense/news-WKBW.html

On advice of counsel, Steve Kurtz is unable to answer questions regarding
his case. Please direct questions or comments to Carla Mendes
<[EMAIL PROTECTED]>.




#  distributed via : no commercial use without permission
#   is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: [EMAIL PROTECTED]


Transeuropean Picnic

2004-05-03 Thread Felix Stalder

Transeuropean Picnic

Historic events are odd things, mostly disappointing. They feel either like 
empty routines of calendrical arbitrariness (200 years of the French 
Revolution, the millennium) or utterly imposed (9/11, the war in Iraq). Either 
way, they usually render one passive, through boredom or powerlessness. 
History, it seems, is always made by others. The EU enlargement, somehow, 
doesn't really fit this pattern, even though it had plenty of both in it.

Yet, it is also, or perhaps primarily, an unfinished event, one whose actual 
meaning goes far beyond "overcoming the divisions of the Cold War" or any 
other of the standard themes trotted out by celebratory speakers on market 
squares across the continent. Its meaning, really, will only slowly emerge, 
through the accumulation of everyday practice. The EU, after all, famously 
lacks a vision.

What could such a practice look like from the point of view of open media 
cultures? To think about this, kuda.org, together with v2, issued an 
invitation to gather in Novi Sad, Serbia, for a transeuropean picnic on the 
weekend of the enlargement [1].

Of course, being in Serbia, one cannot help but be reminded that this great 
process of unification is also a process of creating new boundaries, of 
establishing new visa regimes, border controls and barriers to mobilities 
(which my spell checker insists on rendering as 'nobilities'). Yet, bringing 
together a hundred people from some 20 countries between the Netherlands and 
Georgia on a shoestring budget and having them picnic on the porch of Tito's 
hunting cabin in the midst of a pristine national park, one felt equally that 
new possibilities were opening up, in the cracks of the major narrative.

This, as became clearer to me during the discussions, has to do with the 
particular character of this thing, the EU, that is growing before our eyes. 
Most importantly, the EU is not a state. It doesn't raise taxes, doesn't have 
a military or a police force, doesn't create laws (only directives to be made 
into laws at the national level), or issue passports. It doesn't even have a 
sports team. Yet, it is also not a meaningless exercise of an out-of-control 
bureaucracy issuing 'symbols' and creating well-intentioned but free-floating 
'discourses'. Rather, the best way to think of the EU, it seems to me, is as 
a gigantic coordination mechanism. It has a relatively small hub 
('Brussels'), trying to get other nodes in a network -- some bigger, others 
smaller than itself -- to behave in a way that things can flow between them 
more easily. The enlargement just added a lot of nodes to this network. The 
coordinating hub's main function is to issue pointers that help to direct 
these massive material and immaterial flows.

The strange thing about these pointers is their consistency. They are hard and 
soft at the same time. By directing flows, they create new pools of 
opportunities, while draining others of their resources. For example, many 
educational institutions in Europe are going through painful restructuring 
processes at the moment, not just because of funding problems, but because of 
attempts to reorient themselves according to EU pointers ('Bologna reform'), 
hoping to then profit from the new opportunities created by the flows of 
people, projects and money being pumped through a somewhat more standardized 
European educational landscape. Of course, no institution is forced to do 
that -- that's the soft part. However, not doing it will amount to a 
self-marginalization virtually nobody is willing to accept -- that's the hard 
part.

The EU, then, is a myriad of such circulation systems, whose main power rests 
on the ability to include or exclude nodes. The main difference between the 
inside and the outside of a network is that opportunities are created 
exclusively inside the network (through the circulation of flows of all kinds) 
whereas outside, marginality is structurally reinforced all the time (by being 
bypassed).

The important thing is that the EU is not one but a myriad of circulation 
systems. Many overlap and reinforce one another -- the enlargement is also a 
process of accelerating such consolidation -- but the degree of overlap is 
much smaller than in a traditional nation state (say, the US). And this, it 
seems to me, is where independent cultural practices come in. They can help 
ensure that this consolidation of the patterns of inclusion / exclusion does 
not become absolute. They can extend the networks to include nodes other 
than the officially sanctioned ones, thus making sure not only that 
opportunities flow beyond the borders (if there is one aspect of the EU that 
is state-like, then it's the Schengen Treaty), but that new opportunities are 
created precisely because the cultural micro-networks are different from the 
official ones.

This is not an 'anti-EU' strategy, which, as many picnickers made clear, is a 
luxury that only those inside the EU can afford. Rather, i

Community Radio in Venezuela

2004-03-14 Thread Felix Stalder
[I have no direct knowledge of the complex situation in Venezuela. Yet, I
found this article on community radios/TVs to be very interesting. As far
as I can tell, Chavez, though attacking the oligarchy (see Brian Holmes'
post a few days ago), has not been shutting down, or taking over, their
media. Rather, he is building up his own institutions / power bases, which
run parallel to it. This might help explain why there are two highly
energized groups confronting each other, both being able to organize
protests with huge turn-outs. This would not be possible without access to
mass media.

It probably depends on one's point of view whether this is to be called a new
type of 'participatory democracy' or 'runaway populism'. It doesn't strike me,
though, as dictatorial or authoritarian. You certainly don't have hundreds
of community radios/TVs in Cuba.]


March 8, 2004
CARACAS JOURNAL
Pirate Radio as Public Radio, in the President's Corner
By JUAN FORERO

http://www.iht.com/ihtsearch.php?id=509215

CARACAS, Venezuela, March 7 - The sound room of Radio Perola, a small
community station on the poor edge of this city, is papered with posters
celebrating Latin American revolutionaries like Fidel Castro and offering
a stern warning to the behemoth to the north: "Death to the Yankee
Invader."

The setting seems fitting for José Ovalles's politically charged Saturday
radio program. Gripping a microphone and waving reports from a government
news agency, the white-haired retired computer teacher charges that a
far-flung opposition movement arrayed against President Hugo Chávez is
part of an American-led conspiracy. He ridicules the president's foes as
criminals with scant backing.

He urges listeners to defend what Mr. Chávez calls his Bolivarian
Revolution, which is under international pressure to allow a recall vote
on the president's tumultuous five-year rule. "We have to fight for a free
country,"  he said recently, "one with no international interference."

The message, beamed from a 13-kilowatt station in what was once the
storeroom of a housing project, reaches at most a few hundred homes. But
Radio Perola is part of a mushrooming chain of small government-supported
radio and television stations that are central to Mr. Chávez's efforts to
counter the four big private television networks, which paint him as an
unstable dictator.

With Venezuela on edge, stations like Radio Perola are poised to play an
even bigger role in this oil-rich nation's political battle.

Instead of shutting down his news media tormenters, Mr. Chávez's tactic
appears to be to ignore them as much as possible while relying on former
ham radio operators and low-budget television stations to get the
government's message across.

Although the stations say they are independent and autonomous, Mr. Chávez
has announced that $2.6 million would be funneled to them this year. They
also will receive technical assistance and advertising from state-owned
companies.

"This year, we will not only legalize and enable approximately 200 more
communitarian radios and televisions with equipment, but we will also
promote them," the communication and information minister, Jesse Chacón,
said in an interview posted on a pro-Chávez Web site.

The stations have been important to Mr. Chavez's government during the
current turmoil, in which the opposition has accused the government of
fraudulently disqualifying hundreds of thousands of signatures for a
recall referendum.

Through it all, the private television and radio stations and the nation's
largest newspapers have stepped up their pressure, presenting a parade of
antigovernment analysts and opposition figures.

Mr. Ovalles, though, calls the opposition "gangsters" and accuses private
news organizations of faking the sizes of antigovernment marches.

At first glance, the community stations and their largely volunteer staffs
hardly seem political, nor do they offer the wallop of the big news
organizations. Programming often deals with mundane matters like trash
pickups or road conditions. The stations are staffed by volunteers, from
teenagers eager for the chance to play Venezuelan hip-hop or salsa to
homemakers who want to tell listeners how to stretch earnings in tough
times.

The main objective, say those who work at the stations, is to show there
is another side to neighborhoods that, in the popular press, are presented
as crime-ridden ghettos.

"The image of the barrios is one of criminals, violence, prostitution,
where kids are abandoned," said Gabriel Gil, a producer at Catia TV, a
three-year-old station that recently moved into a vast building belonging
to the Ministry of Justice. "We say we are television of the poor."

Radio Un Nuevo Día, in a poor neighborhood, is much like the rest. Its
small transmitter has been set up in the corner of a bedroom in a two-room
cinder block house belonging to a cleaning woman, Zulay Zerpa.

Bedsheets separate the bare-bones operation from the cots where her two
children sleep.

"I cook

Dean and Kerry: Hot and Cool

2004-01-28 Thread Felix Stalder
Reading Ronda's article, it seems to me that she, and a lot of other people 
who write about Dean, the Internet and politics, miss some essential points. 
Usually, the story is one about grassroots involvement, the power of 
connectivity, etc. These are certainly important points, and they support a 
story we all like to hear -- the Internet as a means of democratic 
participation. Yet, the events suggest that underneath this, there might be a 
different story.

One of the events McLuhan referred to again and again was the Nixon/
Kennedy debate in 1960, which was right at the transition from radio to TV as 
the predominant means of mass communication. TV had reached a penetration of 
about 50% of the households. The majority of people who listened to this 
debate on radio thought that Nixon had come across better, while those who 
watched it on TV thought Kennedy was more appealing.

McLuhan related this back to the particular characteristics of the two media, 
calling radio 'hot' (high-definition, agitating) and TV 'cool' (low 
definition, sedating) and concluded that different types of media favor 
different types of politicians. The cool Kennedy was suited better for the TV 
age than the hot-headed Nixon. (The fact that Nixon eventually became 
president indicates that (a) politicians can adapt and (b) McGovern was even 
hotter.)

Anyway, as I watch some of the spectacle around the Democratic primaries, it 
strikes me that, once again, we may have a story of different media favoring 
different types of personalities. Why? First, the 
Dean campaign is different from other maverick campaigns (say, John McCain in 
2000) insofar as it's clearly not the case of an independent, poorly 
organized, under-funded campaign being steamrolled by superior organizing and 
funding. After all, Dean has, by far, the most money and, arguably, the best 
on-the-ground organization. So, this is not the classic outsider-versus-insider 
story, largely thanks to the Internet, as many have observed.

Yet, could it be that exactly the kinds of qualities that make Dean so 
attractive to get involved with via the Internet make him less appealing on 
TV? Online, his 'radical' stance comes across as principled, as a clear 
alternative. On TV, it comes across as arrogant and hot-headed. TV clearly is 
a cool medium, favoring a cool demeanor in politicians. Nobody got this 
across better than Clinton. Yet, cool politicians are not the types you feel 
you need to help personally (except in ultra-crass cases such as the 
Clinton impeachment that spawned moveon.org). On the Internet, spontaneity is 
an essential part of an engaging interactive experience, while on TV, it's 
amateurish.

Dean, it seems, is in a difficult position. He needs to continue to appeal to 
his Internet-based organization, which could fall apart as quickly as it was 
assembled, yet he needs to tone himself down to make the transition onto TV 
where the boring but authoritative-looking Kerry operates much more smoothly.

Following the McLuhan story, Dean-types would win, in the long run, as we move 
from TV-based to Internet-based politics, but things are never that smooth. 
In the short-run, I certainly wouldn't bet on it.


Felix 

 
+---+-+---
http://felix.openflows.org

#  distributed via : no commercial use without permission
#   is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: [EMAIL PROTECTED]


Canada deems P2P downloading legal

2003-12-13 Thread Felix Stalder
[This strikes me as a major precedent, though it's likely to raise as many 
questions as it answers. For example, how do you differentiate between 
uploading and downloading in a p2p network? Do files in the 'shared folder' 
already count as uploaded? Also, one has to wonder whether the old distribution 
mechanisms represent user behaviour in p2p systems.




For those who like to read the actual legal texts, they are here [1]. The 

[1] http://www.cb-cda.gc.ca/new-e.html]

*Canada deems P2P downloading legal*
By John Borland
CNET News.com
December 12, 2003, 2:20 PM PT
URL: http://zdnet.com.com/2100-1104-5121479.html

*Downloading copyrighted music from peer-to-peer networks is legal in
Canada, although uploading files is not, Canadian copyright regulators
said in a ruling released Friday.*

In the same decision [1] the Copyright Board of Canada imposed a government 
fee of as much as $25 on iPod-like MP3 players, putting the devices in the 
same category as audio tapes and blank CDs. The money collected from levies 
on "recording mediums" goes into a fund to pay musicians and songwriters for 
revenues lost from consumers' personal copying. Manufacturers are responsible 
for paying the fees and often pass the cost on to consumers.

The peer-to-peer component of the decision was prompted by questions from 
consumer and entertainment groups about ambiguous elements of Canadian law. 
Previously, most analysts had said uploading was illegal but that downloading 
for personal use might be allowed.

"As far as computer hard drives are concerned, we say that for the time being, 
it is still legal," said Claude Majeau, secretary general of the Copyright 
Board.

The decision is likely to ruffle feathers on many sides, from 
consumer-electronics sellers worried about declining sales to international 
entertainment companies worried about the spread of peer-to-peer networks.

Copyright holder groups such as the Recording Industry Association of America 
(RIAA) had already been critical of Canada's 
copyright laws, in large part because the country has not instituted 
provisions similar to those found in the U.S. Digital Millennium Copyright 
Act. One portion of that law makes it illegal to break, or to distribute 
tools for breaking, digital copy protection mechanisms, such as the 
technology used to protect DVDs from piracy.

A lawyer for the Canadian record industry's trade association said the group 
still believed downloading was illegal, despite the decision.

"Our position is that under Canadian law, downloading is also prohibited," 
said Richard Pfohl, general counsel for the Canadian Recording Industry 
Association. "This is the opinion of the Copyright Board, but Canadian courts 
will decide this issue."

In its decision Friday, the Copyright Board said uploading or distributing 
copyrighted works online appeared to be prohibited under current Canadian 
law.

However, the country's copyright law does allow making a copy for personal use 
and does not address the source of that copy or whether the original has to 
be an authorized or noninfringing version, the board said.

Under those laws, certain media are designated as appropriate for making 
personal copies of music, and producers pay a per-unit fee into a pool 
designed to compensate musicians and songwriters. Most audio tapes and CDs, 
and now MP3 players, are included in that category. Other mediums, such as 
DVDs, are not deemed appropriate for personal copying.

Computer hard drives have never been reviewed under that provision, however. 
In its decision Friday, the board decided to allow personal copies on a hard 
drive until a fee ruling is made specifically on that medium or until the 
courts or legislature tell regulators to rule otherwise.

"Until such time, as a decision is made on hard drives, for the time being, 
(we are ruling) in favor of consumers," Majeau said.

Legal analysts said that courts would likely rule on the file-swapping issue 
later, despite Friday's opinion.

"I think it is pretty significant," Michael Geist, a law professor at the 
University of Ottawa, said. "It's not that the issue is resolved...I think 
that sooner or later, courts will sound off on the issue. But one thing they 
will take into consideration is the Copyright Board ruling."

Friday's decision will also impose a substantial surcharge on hard drive-based 
music players such as Apple Computer's iPod or the new Samsung Napster player 
for the first time. MP3 players with up to 10GB of memory will have an added 
levy of $15 added to their price, while larger players will see $25 added on 
top of the wholesale price.

MP3 players with less than 1GB of memory will have only a $2 surcharge added 
to their cost.
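[As a worked example of the fee schedule just described, here is a minimal
Python sketch; the function name and the treatment of the exact 1GB and 10GB
boundaries are my own assumptions, while the dollar amounts are those reported
above.]

def private_copying_levy(capacity_gb: float) -> int:
    """Levy per player, in Canadian dollars, as reported above:
    under 1GB -> $2, up to 10GB -> $15, larger -> $25.
    Treating the brackets as 'less than 1' and 'up to and including 10'
    is an assumption; the article does not spell this out."""
    if capacity_gb < 1:
        return 2
    if capacity_gb <= 10:
        return 15
    return 25

# Example: a 512MB flash player, a 10GB iPod-class player, a 20GB player.
print(private_copying_levy(0.5), private_copying_levy(10), private_copying_levy(20))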

With a population of about 31 million people, Canada is approximately 
one-tenth the size of the United States. But Canadians are relatively heavy 
users of high-speed Internet connections, which make it easy to download 
music files. About 4.1 millio

Chip implant gets cash under your skin

2003-11-25 Thread Felix Stalder
[It's hard to decide if this is merely ridiculous, just sad, or if a more 
general statement about the nature of (American) capitalism can be deduced 
from this. Probably not. A couple of years ago, a "business idea" like this 
might have attracted millions in venture capital. These days, the company 
gets delisted from Nasdaq. Some sort of progress, I guess. Felix]


Chip implant gets cash under your skin
By Declan McCullagh
CNET News.com
November 25, 2003, 9:32 AM PT
URL: http://zdnet.com.com/2100-1103-5111637.html

Radio frequency identification tags aren't just for pallets of goods in 
supermarkets anymore.

Applied Digital Solutions of Palm Beach, Fla., is hoping that Americans can be 
persuaded to implant RFID chips under their skin to identify themselves when 
going to a cash machine or in place of using a credit card. The surgical 
procedure, which is performed with local anesthetic, embeds a 12-by-2.1mm 
RFID tag in the flesh of a human arm.

ADS Chief Executive Scott Silverman, in a speech at the ID World 2003 
conference in Paris last Friday, said his company had developed a "VeriPay" 
RFID technology and was hoping to find partners in financial services firms.

Matthew Cossolotto, a spokesman for ADS who says he's been "chipped," argues 
that competing proposals to embed RFID tags in key fobs or cards were flawed. 
"If you lose the RFID key fob or if it's stolen, someone else could use it 
and have access to your important accounts," Cossolotto said. "VeriPay solves 
that problem. It's subdermal and very difficult to lose. You don't leave it 
sitting in the backseat of the taxi."

RFID tags are minuscule microchips, which some manufacturers have managed to 
shrink to half the size of a grain of sand. They listen for a radio query and 
respond by transmitting a unique ID code, typically a 64-bit identifier 
yielding about 18 million trillion possible values. Most RFID tags have no 
batteries. They use the power from the initial radio signal to transmit their 
response.
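[To make the passive-tag model just described concrete, here is a minimal
Python sketch; all names are mine. The tag stores a fixed 64-bit identifier
and answers every reader query with it, and a 64-bit ID space holds 2^64,
i.e. roughly 18 million trillion, values.]

import secrets

ID_BITS = 64
print(f"possible IDs: {2**ID_BITS:,}")  # 18,446,744,073,709,551,616

class PassiveTag:
    """Toy model of the passive tag described above: no battery, no logic
    beyond returning its fixed ID whenever a reader queries it."""
    def __init__(self) -> None:
        self.tag_id = secrets.randbits(ID_BITS)

    def respond(self, _query: bytes) -> int:
        # The answer never changes, which is why intercept-and-rebroadcast,
        # the first concern raised below, works against such a scheme.
        return self.tag_id

tag = PassiveTag()
print(hex(tag.respond(b"who is there?")))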

When embedded in human bodies, RFID tags raise unique security concerns. 
First, because they broadcast their ID number, a thief could rig up his or 
her own device to intercept and then rebroadcast the signal to an automatic 
teller machine. Second, sufficiently dedicated thieves may try to slice the 
tags out of their victims.

"We do hear concerns about this from a privacy point of view," Cossolotto 
said. "Obviously, the company wants to do all it can to protect privacy. If 
you don't want it anymore...you can go to a doctor and have it removed. It's 
not something I would recommend people do at home. I call it an opt-out 
feature."

Chris Hoofnagle, a lawyer at the Electronic Privacy Information Center, said 
implanted RFID tags cause an additional worry. "When your bank card is 
compromised, all you have to do is make a call to the issuer," Hoofnagle 
said. "In this case, you have to make a call to a surgeon.

"It doesn't make sense to go from a card, which is controlled by an 
individual, to a chip, which you cannot control."

ADS shares have slid from a high of about $12 in 2000 to 40 cents, and the 
company is now fighting to stay listed on the Nasdaq. "Our common stock did 
not regain the minimum bid price requirement and on Oct. 28, 2003, the Nasdaq 
Stock Market informed us by letter that our securities would be delisted from 
the SmallCap," ADS said in a Nov. 14 filing with the U.S. Securities and 
Exchange Commission. The company also warned that its implantable microchips 
are manufactured solely by Raytheon without a "formal written agreement," and 
any price increases or supply disruptions would have serious negative 
consequences.

MasterCard has been testing an RFID technology called PayPass. It looks like 
any other credit card but is outfitted with an RFID tag that lets it be read 
by a receiver instead of scanned through a magnetic stripe. "We're certainly 
looking at designs like key fobs," MasterCard Vice President Art Kranzley 
told USA Today last week. "It could be in a pen or a pair of earrings. 
Ultimately, it could be embedded in anything--someday, maybe even under the 
skin."

ADS is running a special promotion, urging Americans to "get chipped." The 
first 100,000 people to sign up will receive a $50 discount.




+---+-+---
http://felix.openflows.org

#  distributed via : no commercial use without permission
#   is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: [EMAIL PROTECTED]


Music Labels Tap Downloading Networks

2003-11-18 Thread Felix Stalder
It was long suspected that p2p usage stats could reveal more accurate user
preferences than traditional charts and 'hit parades'. Sad to
see it implemented like this.

> "Our hope was that we could take the technology revolution that 
> Napster made popular and create tools for the benefit of copyright 
> holders," said Eric Garland, BigChampagne's chief executive.

Felix



Music Labels Tap Downloading Networks
Mon Nov 17,10:17 AM ET
By ALEX VEIGA, AP Business Writer
http://news.yahoo.com/news?tmpl=story2&cid=487&u=/ap/file_swapping_intelligence

LOS ANGELES - The recording industry, it seems, doesn't hate absolutely
everything about illicit music downloading. Despite their legal blitzkrieg
to stop online song-swapping, many music labels are benefiting from —
and paying for -- intelligence on the latest trends in Internet trading.

It's a rich digital trove these recording executives are mining. By
following the buzz online, they can determine where geographically to
market specific artists for maximum profitability.

"The record industry has always been more about vibe and hype," said
Jeremy Welt, head of new media for Maverick Records in Los Angeles. "For
the first time, we're making decisions based on what consumers are doing
and saying as opposed to just looking at radio charts."

One company, Beverly Hills-based BigChampagne, began mining such data from
popular peer-to-peer networks in 2000 and has built a thriving business
selling it to recording labels.

The company -- which takes its name from the Peter Tosh song lyric, "You
drink your big champagne and laugh" -- taps directly into file-sharing
networks like Kazaa's FastTrack. It checks on how often its clients'
artists show up in searches or how frequently their songs are downloaded.
The data can be sorted by market or geographical region.

BigChampagne also has a "TopSwaps" chart that ranks the most shared songs.  
Rapper Eminem was first in a recent scan, his songs
downloaded more than 8.6 million times in one day.

"Our hope was that we could take the technology revolution that Napster
(news - web sites) made popular and create tools for the benefit of
copyright holders," said Eric Garland, BigChampagne's chief executive.

The bountiful market research is gleaned from behavior for which the music
industry otherwise shows no tolerance. Hurt by a three-year decline in
music sales, the industry has sued the major file-sharing networks, along
with individuals who have used them.

"It wouldn't be very smart if we weren't looking at what they're doing,"
Welt said.

The file-sharing companies are also taking notice. This week, Altnet
threatened legal action against nine companies, including BigChampagne,
that it accused of violating patents on file-identifying technology.
BigChampagne denies using the Altnet technology or playing any role in
helping recording companies identify users for lawsuits.

BigChampagne has certainly done well by file-swapping. It formed in July
2000, just as the Internet boom was beginning to bust, and now counts
Maverick, DreamWorks, Warner Bros., Disney and Atlantic Records among its
clients. All the major labels have worked with BigChampagne "in one
capacity or another,"  Garland said.

Traditionally, labels had relied for market research largely on commercial
radio, MTV and music store sales.

Label executives waited weeks to get feedback based on limited audience
sampling -- typically by randomly calling listeners and asking if they
recognized a song after hearing a snippet.

Only after several weeks would they begin to get a picture of whether a
single was getting heard. And until Soundscan began electronically
tracking album sales in the 1990s, the industry relied only on a survey of
music retailers to gauge fan interest.

The emergence of free online trading, beginning in the late 1990s with
MP3.com and the original Napster, suddenly made it technologically
feasible to track music consumption in a whole new way.

"It's the most vast and scaleable sample audience that the world has ever
seen," Garland said.

BigChampagne data are essentially a tally of what millions of music fans
are doing every hour.

Peer-to-peer systems function by sending search queries and file transfers
across a network of several computer users. Every time someone searches
Kazaa for a song, that query is passed along the network. BigChampagne
taps in as if it were a regular user and compiles the traffic flows in a
database it later sorts.

"What we do in effect is act like a superuser who demands access to the
network in its entirety," Garland said.

BigChampagne doesn't identify individuals or gather usernames, Garland
said.  But by analyzing users' numeric Internet addresses, BigChampagne
can still pinpoint location and give clients a sense of where an artist is
most popular.
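[A minimal Python sketch of the kind of tallying described above, under the
assumption that the observing node already sees plain query strings plus a
coarse region derived from each requester's IP address; the names are mine,
and no usernames are involved.]

from collections import Counter
from typing import Iterable, Tuple

def top_swaps(observed: Iterable[Tuple[str, str]], n: int = 3):
    """Count (region, search term) pairs seen by a passive 'superuser' node
    and return the n most frequent ones, roughly a per-market chart."""
    return Counter(observed).most_common(n)

# Hypothetical sample of intercepted queries: (region, term) pairs.
sample = [
    ("Los Angeles", "eminem"), ("Los Angeles", "eminem"),
    ("Toronto", "eminem"), ("Los Angeles", "some maverick artist"),
]
print(top_swaps(sample))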

By using BigChampagne, labels can release a song to radio and, if there
are signs demand is brewing on the song-swapping networks, immedi

Are all codes code?

2003-11-04 Thread Felix Stalder
[This is unlikely to become a legal case, though from a semiotic point of view, 
it's nevertheless puzzling. Is using images that are released under the GPL 
the same as using source code released under the GPL? Is including existing 
images in new images, in this case a screenshot of a KDE desktop in a TV 
series, the same as including existing source code in new source code? 
Felix]


Posted by Jonathan Riddell on Friday 31/Oct/2003, @17:09
from the 24h-to-3.2beta dept.
http://dot.kde.org/1067616574/

The third series of the television show 24 started in the US last week. In an 
effort to improve security, the Counter Terrorist Unit seems to have switched 
operating systems from MacOS to KDE [1]. Interestingly, they used a 3-year-old 
KDE 1.x desktop. These older icons are made available under a public domain 
licence. If a GPL'd set of icons had been used, would we now be legally able 
to modify, sell and distribute the episode under the terms of the GPL over 
the internet? 

[1] http://jriddell.org/24-kde.html


+---+-+---
http://felix.openflows.org

#  distributed via : no commercial use without permission
#   is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: [EMAIL PROTECTED]


WSJ: Can Copyright Be Saved?

2003-10-21 Thread Felix Stalder
[It's quite amazing: not too long ago, an outfit like the WSJ would have
slandered any questioning of the absolute enforcement of copyrights the
way Forbes recently slandered the FSF. Now, suddenly, even the WSJ admits
that things are up for grabs and that there are several valid options.
Now, you might not agree with their portrayal of DRM as a "middle of the
road" solution, but just putting it out as one of several options,
including a tax!, rather than the only one, is quite a significant change
in itself. Felix]


Can Copyright Be Saved?
New ideas to make intellectual property work in the digital age

By ETHAN SMITH
Staff Reporter of THE WALL STREET JOURNAL
October 20, 2003

For some people, the future of copyright law is here, and it looks a lot
like Gilberto Gil.

The Brazilian singer-songwriter plans to release a groundbreaking CD this
winter, which will include three of his biggest hits from the 1970s. It
isn't the content of the disc that makes it so novel, though -- it's the
copyright notice that will accompany it.

Instead of the standard "all rights reserved," the notice will explicitly
allow users of the CD to work the music into their own material. "You are
free ... to make derivative works," the notice will state in part. That's a
significant departure from the standard copyright notice, which forbids such
use of creative material and requires a legal agreement to be worked out for
any exceptions.

Is this the future of copyright? Perhaps. But a better way to think of it is
that it's one of the possible futures of copyright. Because right now, it's
all pretty much up for grabs.

Blame it all on the Digital Age. As any digital downloader can tell you,
technology and the Internet have made it simple for almost anyone to make
virtually unlimited copies of music, videos and other creative works. With
so many people doing just that, artists and entertainment companies
sometimes appear helpless to prevent illegal copying, and their halting
legal efforts so far have antagonized customers while hardly putting a dent
in piracy.

The challenge is finding a way out of this mess. Efforts fall broadly into
two camps. On one side, generally speaking, are those who revel in the
freedom that technology has brought to the distribution of creative
material, and who believe that copyright law should reflect this newfound
freedom.

On the other side are those who believe that the digital age hasn't changed
anything in terms of the rights of artists and entertainment companies to
control the distribution of their creations and to be paid for them -- the
essence of copyright law. For them, the answer is to leave copyright law
intact, and to use technology to make it harder for people to make digital
copies.

Here's a closer look at some of the competing visions.

IN THIS TOGETHER

The copyright notice for Mr. Gil's coming CD is being crafted by Creative
Commons, a nonprofit organization that seeks to redraw the copyright
landscape. Believing traditional copyrights are too restrictive, it aims to
create plain-language copyright notices that explicitly offer a greater
degree of freedom to those who would reshape or redistribute the copyrighted
material.

Traditional copyright law gives owners of creative material -- and them
alone -- the right to copy or distribute their works. Although they can
waive all or part of those rights, the process isn't easy and usually occurs
in response to a particular request. Those hurdles, critics say, can hinder
the open and freewheeling sharing of material the digital age makes
possible.

Creative Commons seeks to make the system more flexible by spelling out
which rights the copyright holder wishes to reserve and which are being
waived without waiting for a request. Artists can mix and match from among
four basic licensing agreements: They can decide whether they simply want
attribution anytime their work is used by someone else; whether they want to
deny others use of the work for profit without permission; whether they want
to prevent others from altering the material; and whether they want to
permit the use of material only if the new work is offered to the public
under the same terms. An underlying layer of digital code enforces the
rights laid out by the owner, telling computers how a given work can be
used.
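[The mix-and-match scheme described above can be read as a small data
structure. Here is a minimal Python sketch, with field names of my own
choosing rather than Creative Commons' formal vocabulary, that renders one
chosen combination as a human-readable notice.]

from dataclasses import dataclass

@dataclass
class CommonsLicense:
    """Toy model of the four options described above."""
    attribution: bool = True      # credit the author whenever the work is used
    noncommercial: bool = False   # no for-profit use without permission
    no_derivatives: bool = False  # the material may not be altered
    share_alike: bool = False     # derivatives must carry the same terms

    def notice(self) -> str:
        parts = []
        if self.attribution:
            parts.append("attribution required")
        if self.noncommercial:
            parts.append("no commercial use without permission")
        if self.no_derivatives:
            parts.append("no derivative works")
        elif self.share_alike:
            parts.append("derivative works must be shared under the same terms")
        return "; ".join(parts) or "all uses permitted"

# Something in the spirit of the notice on Mr. Gil's CD: you are free to make
# derivative works, as long as they circulate under the same terms.
print(CommonsLicense(attribution=True, share_alike=True).notice())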

A Creative Commons license isn't for everyone. It might appeal to
independent artists for whom free samples, distributed online, might
represent an attractive marketing option, or for someone like Mr. Gil, who
believes that making it easier to share and reshape his music can be an
important part of the creative process. But it's unlikely to appeal to the
big media companies, for which copyrighted material is what they sell.

Still, Mr. Gil, who is also Brazil's culture minister, sees Creative Commons
as a way to unlock the creative potential of digital technology. "I'm doing
it as an artist," he says. "But our ministry has been following the process
and getting intereste

BBC Creative Archive

2003-10-14 Thread Felix Stalder
[This is not a new story, but it seems it hasn't made it onto the
nettime radar yet. It strikes me as a major initiative and one can hope it
will put pressure on other public broadcasters to do the same. It will be
interesting to see how they implement it, since the rights situation of TV
material is notoriously complex and not all BBC shows are entirely
produced by the BBC (i.e. news shows containing footage bought from third
parties). Felix]



Dyke to open up BBC archive
http://news.bbc.co.uk/go/pr/fr/-/2/hi/entertainment/3177479.stm
Published: 2003/08/24 11:47:38 GMT

Greg Dyke, director general of the BBC, has announced plans to give the
public full access to all the corporation's programme archives.

Mr Dyke said on Sunday that everyone would in future be able to download
BBC radio and TV programmes from the internet.

The service, the BBC Creative Archive, would be free and available to
everyone, as long as they were not intending to use the material for
commercial purposes, Mr Dyke added.

"The BBC probably has the best television library in the world," said Mr
Dyke, who was speaking at the Edinburgh TV Festival.

"Up until now this huge resource has remained locked up, inaccessible to
the public because there hasn't been an effective mechanism for
distribution.

"But the digital revolution and broadband are changing all that.

"For the first time there is an easy and affordable way of making this
treasure trove of BBC content available to all."

He predicted that everyone would benefit from the online archive, from
people accessing the internet at home, children and adults using public
libraries, to students at school and university.

Future focus

Mr Dyke appeared at the TV festival to give the Richard Dunn interview,
one of the main events of the three-day industry event.

He said the new online service was part of the corporation's future, or
"second phase", strategy for the development of digital technology.

Mr Dyke said he believed this second phase would see a shift of emphasis
by broadcasters.

Their focus would move away from commercial considerations to providing
"public value", he said.

"I believe that we are about to move into a second phase of the digital
revolution, a phase which will be more about public than private value;
about free, not pay services; about inclusivity, not exclusion.

"In particular, it will be about how public money can be combined with new
digital technologies to transform everyone's lives."



[1]http://www2.thny.bbc.co.uk/pressoffice/pressreleases/stories/2003/08_august/24/dyke_dunn_lecture.shtml

[2]The Guardian, Auntie's digital revelation
http://www.guardian.co.uk/online/story/0,3605,1030176,00.html



+---+-+---
http://felix.openflows.org

#  distributed via : no commercial use without permission
#   is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: [EMAIL PROTECTED]


European Parliament Decision against Software Patentability

2003-09-25 Thread Felix Stalder
The discussion on software patents in the EU parliament in Strasbourg has
triggered one of the most substantive political manifestations of the Open
Source / Free Software communities in Europe to date. In Vienna, for
example, there was a demonstration in front of the patent office, with a
surprisingly large turnout, 300 people [1] (very few software artists,
though). In other cities the story was similar [2].

These, and many other, initiatives had some success and positive
last-minute amendments were introduced. Apparently, most members of
parliament were rather surprised by the level of public response, as they
thought this to be an uncontroversial technicality, which was how it was
presented to them by the industry.

Below is an evaluation of the new patent directive in Europe. As usual,
there is quite a bit of uncertainty as to how it is going to be
implemented.

Felix


[1] http://wiki.ael.be/index.php/InfoStandVienna
[2] http://wiki.ael.be/index.php/InfoStands


--  Forwarded Message  --

Subject: [ffii] EP Decision against Software Patentability
Date: Thursday 25 September 2003 09:05
From: Hartmut Pilch <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]

FFII News -- For Immediate Release -- Please Redistribute
+++ +++ +++ +++ +++ +++ +++ +++ +++ +++ +++ +++ +++ +++
EU Parliament Votes for Real Limits on Patentability
Strasburg 2003/09/24
   For immediate Release

   In its plenary vote on the 24th of September, the European Parliament
   approved the proposed directive on "patentability of
   computer-implemented inventions" with amendments that clearly restate
   the non-patentability of programming and business logic, and uphold
   freedom of publication and interoperation.

 * [9]Backgrounds
 * [10]Media Contacts
 * [11]About the FFII -- www.ffii.org
 * [12]About the Eurolinux Alliance -- www.eurolinux.org
 * [13]Permanent URL of this Press Release
 * [14]Annotated Links

Backgrounds

   The day before the vote, CEC Commissioner Bolkestein had
   [15]threatened that the Commission and the Council would withdraw the
   directive proposal and hand the questions back to the national patent
   administrators on the board of the European Patent Office (EPO),
   should the Parliament vote for the amendments which it supported
   today. "It remains to be seen, whether the European Commission is
   committed to "harmonisation and clarification" or only to patent owner
   interests", says Hartmut Pilch, president of FFII. "This is now our
   directive too. We must help the European Parliament defend it."

   "The directive text as amended by the European Parliament is
   unbelievably good! I couldn't believe it as I was posting it article
   by article to the Slashdot story. It just gets better and better, and
   it hangs together incredibly cohesively. I think we have done
   something amazing this week" exclaimed James Heald, a member of the
   FFII/Eurolinux software patent working group, as he put together the
   voted amendments into a [16]consolidated version.

   "With the new provisions of article 2, a computer-implemented
   invention is no longer a trojan horse, but a washing machine",
   explains Erik Josefsson from SSLUG and FFII, who has been advising
   Swedish MEPs on the directive in recent weeks. That the majorities for
   the voted amendments had support from very different political groups
   - this reflects the arduous political discussion that had led to two
   postponements before.

   However, when 78 amendments are voted in 40 minutes some glitches are
   bound to happen: "The recitals were not amended thoroughly. One of
   them still claims algorithms to be patentable when they solve a
   technical problem.", says Jonas Maebe, Belgian FFII representative
   currently working in the European Parliament. "But we have all the
   ingredients for a good directive. We've been able to do the rough
   sculpting work. Now the patching work can begin. The spirit of the
   European Patent Convention is 80% reaffirmed, and the Parliament is in
   a good position to remove the remaining inconsistencies in the second
   reading."

   The directive will have to withstand further consultation with the
   Council of Ministers that is more informal and hence less public than
   Parliamentary Procedures. In the past, the Council of Ministers has
   left patent policy decisions to its "patent policy working party",
   which consists of patent law experts who are also sitting on the
   administrative council of the European Patent Office (EPO). This group
   has been one of the most determined promoters of unlimited
   patentability, including program claims, in Europe.

   Says Laura Creighton, software entrepreneur and venture capitalist,
   who has supported the FFII/Eurolinux campaign with donations and
   travelled from Sweden to Brussels several times to attend conferences
   and meetings with MEPs

basic terms in the IP discussion

2003-08-28 Thread Felix Stalder

I'm writing a little glossary for a newspaper [1] we are putting together
on IP issues. The newspaper will be distributed at WSIS [2]. Better
definitions are welcome.


Public Good: 
 
Goods whose use is non-rivalrous, i.e. using the good does not deplete it,
and non-excludable, i.e. once it is produced, people cannot be excluded
from using it. A lighthouse on the coast, alerting ships to potential
peril, is an example of a public good. Without intellectual property law,
particularly copyrights and patents, all digital information would be a
public good.


Private Property:
-
Information owned by a private legal entity (a corporation or an individual 
person). The owner has exclusive rights to the property as defined by IP 
law and can do as s/he pleases with it. Most importantly, the owner can 
freely set the conditions under which it can be accessed and used by third 
parties.


Public Property:

Information owned by the state. Within the bounds of the law and what is 
politically acceptable, the state can do with the information as it sees 
fit. Example: census data.


Public Domain:
--
Information that has no legal protection, either because copyrights/patents 
have expired, or because it has been released into the public domain by the 
owner. Example: the works of William Shakespeare.


Commons:
---
A pool of information that is managed by a community of users. Acceptable use 
policies are set by the community. Usually, access to the resource is granted 
on a non-discriminatory basis and at no or low cost. Examples: scientific 
information, open source software.


[1] http://www.world-information.org/wio/wsis
[2] http://www.itu.int/wsis/
 




+---+-+---
http://felix.openflows.org

#  distributed via : no commercial use without permission
#   is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: [EMAIL PROTECTED]


Re: [Fwd: Re: [ox-en] Felix Stalder: Six Limitations to the Current Open Source Development Methodology]

2003-08-26 Thread Felix Stalder
> Date: Mon, 25 Aug 2003 19:46:55 +0200
> From: Stefan Merten <[EMAIL PROTECTED]>

> Last week (9 days ago) geert lovink wrote:
> > Six Limitations to the Current Open Source Development Methodology
>
> I'm not always sure in which way or to what areas the following points
> are limitations.

These limitations refer to the kinds of problems that can be addressed
through the current form of social organization developed in the Open
Source Movement. The way Open Source Projects are organized reflects the
specifics of the problem -- developing software -- and thus they cannot serve
as a model to address problems with very different characteristics.

This does not mean that other problems, for example, the development of
drugs, cannot be organized in an open way, but this 'open way' will have
to look very different from the way Open Source Software projects are
organized because the problem of creating drugs is very different from the
problem of creating software. In other words, there is an intimate
relationship between the characteristics of the problem and the social
organization of its solution.


> > However, particularly outside the software domain, the Open Source
> > projects remain relatively marginal. Why? Some of it can be explained by
> > the relative newness of the approach. It takes time for new ideas to take
> > hold and to be transferred successfully from one context to another.
>
> I'd like to underline this point. Free Software took 15-20 years to
> reach the public space. If I consider that period I find it promising
> that similar approaches are far more known today than Free Software
> was in the late 80's.

I agree. This is why I'm very hopeful. But in order to continue the social
innovation, we need to address these problems, rather than hoping they
will be solved somehow by the magic of 'openness'.

> Let's check this.
>
> > 1) Producers are not sellers
> >
> > The majority professional, i.e. highly-skilled, programmers do not draw
> > their
> > economic livelihood from directly selling the code they write. Many work
> > for organizations that use software but do not sell it, for example as
> > system administrators.
>
> So they sell at least the kind of workforce they use when producing
> Free Software.

Yes, but the difference between using and selling software is important,
even within the commercial sector. If you're simply using the software,
you don't care if it's available to others as well (since it is anyway).
If you're selling it, you need to control it.


> I'm not sure about the first example but for IBM workers and students
> Free Software is then at least to some degree an alienated thing: They
> don't program because of the program but because of the money they may
> sell their services for (IBM) or the reputation they get for it
> (students).

I think it is too simplistic to say that all work that is paid or has other
utilitarian motives is alienated.

> As a result this software is not Double Free Software as I called it
> on the German list some time ago because the software is written for a
> purpose outside the software and its concrete use value. I'm arguing
> that this degrades the quality of the software because of the
> alienation.

Any evidence for this? I would be interested in seeing it. Again, I think
this is simplistic. Many students love what they do. The fact that they
get a degree for it is an additional motivation, not a detraction.


> Unfortunately AFAIK there is no study yet which tries to answer the
> question which amount of Free Software is written under alienated
> conditions and which amount is Double Free Software.

I guess one of the reasons why this hasn't happened is that it's simply
impossible to define what alienated means with any degree of empirical
relevance in this context. We are speaking of highly skilled,
self-motivated professionals, and not about people on the assembly line.
The contexts are different and the differences matter.


> However, I can't see where the limitation is here. Oekonux argues that
> it is one of the basic *strengths* of Free Software that it is not
> sold by those who create it. This way the creators can focus on the
> use value of the software alone and are not obstructed by marketing
> needs. Exactly this is one of the reasons why Free Software is so
> successful.

I'm not saying that the limitations make Open Source Software bad, but
that they limit its social model in terms of the problems to which it can
be applied.

> So I'd argue that this is not an limitation to spread the principles
> of Free Software to other areas but a precondition.
>
> Felix' argument makes sense only if you assume that each little piece
> of work / effort needs to be sold. However, this is not true for
> *lots* of areas in human life. One instance close to software is the
> hobby sector where people spend lots of efforts including spending
> money. The only "reward" they get is the Selbstentfaltung they
> experience while doing

RIP: Walter Ong

2003-08-18 Thread Felix Stalder
Rev. Walter J. Ong; traced the history of communication

By Mary Rourke, Los Angeles Times, 8/16/2003
http://www.boston.com/news/education/k_12/articles/2003/08/16/rev_walter_j_ong_traced_the_history_of_communication

LOS ANGELES -- The Rev. Walter J. Ong, a Jesuit priest and a leading scholar 
in the field of language and culture who traced the transition from oral to 
written communication in his more than 20 books, died Tuesday at St. Mary's 
Health Center in Richmond Heights, Mo., a suburb of St. Louis. He was 90.

In his writings and lectures, Father Ong explored the development of 
communication from its preliterate beginnings to its current reliance on 
radio, television, and the Internet. He was fascinated by the transition from 
one form of communication to another. He used ancient stories such as Homer's 
"Odyssey" to demonstrate that preliterate cultures relied on "oral thought," 
in which the storyteller might contradict himself and the story itself might 
change over time until it was written down.

He contrasted oral tradition with the written, using the works of Greek 
philosophers Plato and Aristotle to illustrate the change. A written text 
relies on a set of ground rules for logical reasoning, as well as a 
consistent use of terms, to communicate information.

The two traditions influenced cultural values, Father Ong pointed out. 
Although an oral society places a high value on communal memory and the 
elders who are the main link to history, a literate one focuses on individual 
reasoning and introspection.

The rise of technology introduced other changes. In a high-tech culture, a 
person reads a novel and imagines a movie in his mind. The Internet blurs 
people's exterior and interior worlds. Virtual reality is no longer a private 
matter.

Father Ong's meticulous research on those developments helped lay the 
foundation for an understanding of modern media culture.

Some of his research corresponded with the work of his famous teacher, 
Marshall McLuhan, whose interest in the history of the verbal arts in Western 
culture inspired Father Ong to pursue his own studies.

He was McLuhan's student in graduate school when he completed a master's 
degree in English at St. Louis University. McLuhan was a faculty member from 
1937 to 1944. (Father Ong went on from there to earn a doctorate at Harvard 
University.)

Although McLuhan became a pop-culture guru in the 1960s -- "global village," 
his term for the interconnectedness of the world by mass media, is now 
included in Webster's Dictionary -- Father Ong remained a scholar's scholar. 
His writing style was dense and complex, not easy to grasp. His ideas were 
subtle and cumulative, not catchy. His most highly regarded book, for 
example, is titled "Orality and Literacy: The Technologizing of the Word" 
(1982).

"Ong is the sort of guy the experts read," said Thomas J. Farrell, whose book 
"Walter Ong's Contributions to Cultural Studies" (Hampton Press, 2000) has 
helped make the priest's work more accessible.

Born in Kansas City, Mo., on Nov. 30, 1912, Father Ong said he knew he wanted 
to be a priest from the time he was in high school. He entered the Society of 
Jesus in 1935 and was ordained in 1946.

He spent most of his teaching career in the English department at St. Louis 
University. He taught courses in Renaissance literature, his specialty, along 
with a range of others. He also lectured at Oxford University, Yale Divinity 
School, and a number of other top schools around the world until he retired 
in 1991.

Despite Father Ong's academic achievements, "he was first and foremost a 
priest," Farrell said. "He said daily Mass at 5:30 a.m., regularly heard 
confessions, and wore cleric's garb wherever he went."

Father Ong's academic work only strengthened his belief in God. "God created 
the evolving world, and it's still evolving," he told the St. Louis Post 
Dispatch in March 2002.


 
+---+-+---
http://felix.openflows.org

#  distributed via : no commercial use without permission
#   is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: [EMAIL PROTECTED]


Re: Six Limitations to the Current Open Source Development Methodology

2003-08-16 Thread Felix Stalder

Hi Ben,

> I would be hesitant to define the "open source approach"
> solely or even primarily in terms of the characteristics you mention.

Perhaps I did not put it as clearly as I should have. I did not mean to 
characterize the "open source approach" in terms of its internal 
organization. Rather, my focus was on the characteristics of the problems to 
which it has been so far applied successfully.

I totally agree that, from an organizational point of view, the points you list, 
such as open participation, are very important. Your list is fully consistent 
with my elaborations. The fact that software, or an encyclopedia, does not come 
with any product liability *does* facilitate open collaboration. If you 
could sue, say, the Apache Software Foundation for a server crash, or 
Wikipedia for erroneous information, I'm sure their development model would 
look different.

> The Open Organizations project (http://www.open-organizations.org) is an
> attempt to synthesize these principles, and some others, into a workable,
> general-purpose model.

I'm skeptical about the possibility of a "workable, general-purpose
model". My post was about the fact that the type of problem affects the
social organization through which the solution is being developed.
Different types of problems demand different types of organizations to
address them. You cannot organize the development of drugs the same way
you organize the development of software. For one, very few people would
be willing to be beta-testers.

There are certain aspects that will be universal to all "open" development
processes, such as common ownership of knowledge. However, the type of
social organization in which commonly owned knowledge can be created will
be vastly different depending on the type of knowledge.

So far, we have learned how to create commonly owned knowledge as long as
the type of knowledge exhibits, among others, the six characteristics I
listed. The next round of social innovation is about finding ways to
create free knowledge / information in other areas as well.




+---+-+---
http://felix.openflows.org

#  distributed via : no commercial use without permission
#   is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: [EMAIL PROTECTED]


Six Limitations to the Current Open Source Development Methodology

2003-08-14 Thread Felix Stalder

Six Limitations to the Current Open Source Development Methodology

The "Open Source Approach" to develop informational goods has been
spectacularly successful, particularly in the area for which it was
developed, software. Also beyond software, there are important, successfull
Open Source projects such as the free Encyclopedia, Wikipedia; collaborative
sites writing/publishing projects such as koro5hin.org; and the Distributed
Proofreading Project, attached to the Gutenberg Project.

However, particularly outside the software domain, Open Source projects
remain relatively marginal. Why? Some of it can be explained by the relative
newness of the approach. It takes time for new ideas to take hold and to be
transferred successfully from one context to another. But this is only part
of the story. The other part is that the current development model is based
on a number of specific, yet unacknowledged conditions that limit its
applicability to more diverse contexts, say, music distribution or drug
research.

The boundaries to the open production model as it has been established in the
last decade are set by six conditions characterizing virtually all of the
success stories of what Benkler called "commons-based peer production." The
following list is a conceptual abstraction, a kind of ideal-type. The actual
configuration and relative importance of each condition varies from project
to project, but taken together they indicate the boundaries of the current
model. In this elaboration, I draw from examples of free and open source
software, but it would be simple to illustrate these limitations based on
open content projects.


1) Producers are not sellers

The majority of professional, i.e. highly skilled, programmers do not draw their
economic livelihood from directly selling the code they write. Many work for
organizations that use software but do not sell it, for example as system
administrators. For them the efficient solution of particular problems is of
interest, and if that solution can be found and maintained by collaborating
with others, the sharing of code is not an issue. For others employed in
private sector companies, for example at IBM, the development of free
software is the basis for selling services based on that code. The fact that
some people can use that code without purchasing the services is more than
off-set by being able to base the service on the collective creativity of the
developer community at large. From IBM's point of view, the costs of
participating in open software development can be regarded as 'capital
investment' necessary for the selling of the resulting product: services.

For members of academia (faculty and students), writing code, but not selling
it (which is often explicitly prohibited), contributes to their professional
goals, be it as part of their education, be it as part of their professional
reputation-building. For them, sharing code is not only part of their
professional advancement, but an integral part of the professional culture
that also sustains them economically, in the form of salaries for the faculty
and stipends for the (graduate) students.

Last but not least are all those who use their professional skills outside
the professional setting, for example at home on evenings and weekends.
Having already secured their financial stability, they can now pursue other
interests using the same skill set.

2) Limited capital investment

Particularly the last, and very important, group of people, those who work
outside the institutional framework on projects based on their own
idiosyncratic interests, can only exist due to the fact that the means of
production are extraordinarily inexpensive and accessible. Materially, all
that is needed is a standard computer (often even a substandard one would
already suffice) and a fast, reliable connection to the communication forums
of the community. Of course, the computer and the network rely on a level of
infrastructure that cannot be taken for granted in large parts of the world,
but for most people in the centers of development, they are within relatively
easy reach.

Once this access to the means of communication is secured, the skills
necessary to participate in the development of code can also be acquired
collaboratively, free of charge. The number of self-taught programmers is
significant. Since no expensive diplomas are necessary to become active, the
financial hurdle is, indeed, extraordinarily low.


3) High number of potential contributors

Programming knowledge is becoming relatively common knowledge, no longer
restricted to an engineering elite, but widely distributed throughout
society. Of course, truly great programmers are as rare as truly great artists
are, but average professional knowledge is widely available. This has a
quantitative and a qualitative dimension. Quantitatively, the number of able
programmers is in the millions, and rising. Qualitatively, the range of
capable programmers is also unusually wide,

Open Source translation of Harry Potter

2003-07-06 Thread Felix Stalder
After distributed proofreading [1], now distributed translating.

According to this website [2] more than 1000 people contributed to the 
translation into German of Harry Potter 4. Now, they are translating volume 
5, which has been released in English but not yet in German. 

The way it works: volunteers sign up, then they are assigned 5 pages to
translate within 4 weeks (they have to procure the English original
themselves). If the translation meets the required standards, the
contributor receives access to the other translated pages. To ensure a
certain consistency, there is an HP-specific dictionary [3]. In addition, the
project also translates additional chapters written by HP fans, both
English-German and German-English.

The translated texts are not available to the public at large. They only
circulate within the community of translators and others who contribute to
the project. This, it seems, helps to keep the publisher from getting
nervous. It's a typical fan project: people are encouraged to translate a
set of pages on their own, even if a translation already exists, and the
stated motivation is a) fun and b) the act of translating leads to a deeper
understanding of the text.
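
[Purely as an illustration of the mechanics described above, here is a
minimal sketch, in Python, of such an assign-translate-review workflow. The
5-pages/4-weeks parameters and the rule that only approved contributors get
access to the other translated pages are taken from the description; the
class names, the total page count and everything else in the example are
made up.]

from dataclasses import dataclass, field
from datetime import date, timedelta

PAGES_PER_VOLUNTEER = 5        # each volunteer is assigned 5 pages
DEADLINE = timedelta(weeks=4)  # to be translated within 4 weeks

@dataclass
class Assignment:
    volunteer: str
    pages: range
    due: date
    approved: bool = False     # set once the translation meets the standards

@dataclass
class TranslationProject:
    total_pages: int
    next_page: int = 1
    assignments: list = field(default_factory=list)

    def sign_up(self, volunteer: str, today: date) -> Assignment:
        # hand the next block of untranslated pages to a new volunteer
        if self.next_page > self.total_pages:
            raise RuntimeError("all pages have been assigned")
        last = min(self.next_page + PAGES_PER_VOLUNTEER, self.total_pages + 1)
        assignment = Assignment(volunteer, range(self.next_page, last),
                                today + DEADLINE)
        self.next_page = last
        self.assignments.append(assignment)
        return assignment

    def review(self, assignment: Assignment, meets_standards: bool) -> None:
        # only approved contributors gain access to the other translated pages
        assignment.approved = meets_standards

    def readers(self) -> list:
        return [a.volunteer for a in self.assignments if a.approved]

# hypothetical usage
project = TranslationProject(total_pages=800)
a = project.sign_up("volunteer_1", date(2003, 7, 6))
project.review(a, meets_standards=True)
print(a.pages, a.due, project.readers())

[The interesting parts, of course, are exactly the ones no script can
capture: the shared dictionary and the peer review itself.]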



[1] http://www.pgdp.net/c/default.php
[2] http://www.harry-auf-deutsch.de/
[3] http://www.hp-fc.de

 
+---+-+---
http://felix.openflows.org

#  distributed via : no commercial use without permission
#   is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: [EMAIL PROTECTED]


Re: opencontent.org dissolves and stalls its licenses

2003-07-02 Thread Felix Stalder

On Tuesday 01 July 2003 17:06, Florian Cramer wrote:
> As a lecturer in the humanities and net activist who has been
> evangelizing open content internationally in lectures, papers and as the
> moderator of congress panels since 1999, I feel like being slapped into
> my face. It is terrible if you educate people about open content and the
> necessity of copylefting public information resources, pointing them
> again and again to opencontent.org and their licenses, and now see that
> reference dissolve.

On Wednesday 02 July 2003 08:17, auskadi wrote:
> When I first read this series of postings the lawyer (Heiko read "common
> law lawyer") in me asked himself why does someone have to maintain a
> licence?. All that is needed is that you use it. For this it just needs to
> be available. Simply because someone shuts down a web site (or stops
> updating it) doesn't seem to invalidate a form of licence if you choose to
> use it.

If this move from OpenContent to CreativeCommons had been handled a bit more 
smoothly, stressing that it is an upgrade of the concept rather than the "end 
of opencontent", much of Florian's concerns, at least vis-a-vis the people he 
convinced, would have been alleviated. From a legal point of view, it's no 
problem at all, since, as auskadi writes, a license does not need to be 
maintained to be valid. If you published something under the OpenContent 
license, the fact that someone announces that he will not issue future 
versions of it doesn't matter. The license keeps its legal validity.

The problem is more one of PR than of substance. It makes the OC movement 
'look bad' rather than exposing some inherent weakness. But, of course, 
appearances do matter. OpenContent was a great term and abandoning it is a 
bad move.


On Tuesday 01 July 2003 17:06, Florian Cramer wrote:
> Imagine the FSF suddenly abandoning/stalling the GPL in favor of some
> yet-unwritten different license, leaving ten thousands of Free Software
> developers in the legal lurch and betraying their trust. What is an
> unlikely horror scenario for free software now has become the reality of
> open content.

On Tuesday 01 July 2003 22:36, Francis Hwang wrote:
> Florian, is there anything to prevent you or somebody else from taking
> up the OPL and maintaining it without David Wiley's involvement? So
> David Wiley doesn't think it's worth his time. Maybe there's somebody
> out there who does.

The reference to the Free Software Foundation is important because it 
indicates that if you want to maintain a long-term project, you need some 
institutional setting in which it can survive even if the initiator(s) have 
left. This is also why Francis' call for someone else to take it over -- 
implicitly stating that Florian should do it if he cares so much -- is so 
wrong. What is needed are not heroic individuals who maintain projects for 
the good of it, but reliable long-term settings that can support ideas and 
concepts in the long run.

This is also why I think the move to CreativeCommons is a good one and not as 
confusing as Florian states. Unless, of course, your idea of "openness" is 
the GPL and nothing else, in which case CC offers plenty of occasion to 
deviate from the right path.


On Wednesday 02 July 2003 08:17, auskadi wrote:
> My initial feeling is that (excuse this for those friends
> in the States) is that CC is too hung up on North American ideas of
> liberty and the "founding fathers". It is one thing I have trouble with
> when I read Lessig, Boyle et al...this preoccupation with the values of
> the US or their version of what they are.

I agree that US constitutionalism doesn't hold very much appeal outside of the 
US and works only to a limited degree in the US (see the Eldred case). While 
this is a serious problem with the overall political strategy of Lessig and 
others, it doesn't invalidate the CC licenses in any way. They are still 
useful, and it seems better to me to have one place that offers a few 
easy-to-customize licenses than having to read through the 20-plus licenses 
listed on opensource.org, or trying to figure out the difference between the 
OpenContent license and the GNU Free Documentation License.


Felix
 


+---+-+---
http://felix.openflows.org

#  distributed via : no commercial use without permission
#   is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: [EMAIL PROTECTED]


nettime 3000?

2003-03-04 Thread Felix Stalder

Right at the time when nettime reached the arbitrary yet symbolic number
of 3000 subscribers, the number of error messages flooding the nettime
system reached such proportions (several hundreds a day) that we were
finally forced to go through the boring process of unsubscribing those
addresses that were clearly broken. Within days, nettime got purged of 10%
of its subscribers.

All in all, this was an utterly unspectacular process, spring cleaning if
you will, but it makes me wonder, nevertheless, what kind of community this
is in which 10% of the 'members' are dead, so to speak.

So, what kind of community is it? Clearly, it's no longer the hybrid
structured by the two intersecting vectors of online exchanges and
off-line events, back-packing on the European media festival circuit.
These ain't the 90s anymore. Rather, the last time (as far as I know) a
significant number of 'nettimers' were physically in the same place -- at
the WOS II in late 2001 in Berlin -- was a non-event. Just a bunch of
people happening to be together drinking beer in clubs where one could not
communicate with anyone who was more than 1 meter away. There was no sense
of being a group; rather, communication unfolded as a series of friendly,
or disinterested, individual encounters. Physically, there was no
many-to-many communication, just one-to-one.

At the same time, nettime in terms of its online exchanges is doing quite
well. It's a stable, reliable, perhaps a bit predictable (the flip side of
reliable), long-term project. I personally don't know of another list that
is comparable in terms of breadth and quality of content.

It seems that, as a community, nettime has been moving in the opposite
direction of what is usually understood as the normal 'maturing' process
of a virtual community, namely, that on-line exchanges sooner or later
create the desire for off-line meetings. For nettime, off-line events --
meetings, paper publications -- were crucially important initially but
steadily declined to the point that when the last nettime publication
appeared (as part of Vuk Cosic's Biennale catalogue) only a fraction of
list subscribers (perhaps not even all of those whose texts were
reprinted) even noticed.

A lot of this has to do with the subscriber base becoming more diverse
(geographically, socially, intellectually), the early enthusiasm wearing
off, and the distributed, non-ownership, volunteer model showing its
conservative tendencies. This needs to be qualified. Ownership here is not
understood in the sense of being the property of someone, but in the
sense of 'taking ownership' and assuming responsibility.

Who is responsible for nettime? Of course, there are some
responsibilities.  If the email server goes down, the phone at The Thing
will ring. If something on the web server needs to be changed, the action
is in Amsterdam. And the moderation does daily maintenance work.

But no one is responsible in the sense of being able to make decisions
beyond minor tinkering. So, things stay the same as far as the technical
side is concerned. Nevertheless, socially, things have changed quite a bit;
the community has become more virtual in all senses.

Perhaps, this has to do with the relative maturing of other networks, say
social forums, art festivals or conferences, which are more efficient at
providing real meeting places for more narrowly defined (but more
populous)  groups whose sense of community is more comprehensive. In a
way, nettime has always defined itself negatively. Being sponsored by art
institutions, but not being an art project itself. Having lots of
intellectuals on board, but being non-academic. Having a strong political
slant, but not being affiliated with any particular segment of the
multitude.

In a time when institutions enjoy a new-found respect, nettime, once
again, goes against the trend, becoming more virtual, more distributed,
more ephemeral. This process is not explicit, but it's clearly felt, as
could be witnessed in the last major discussion on the list which, by no
means a coincidence, was about the institutionalization of another once
hybrid project: rhizome. A discussion that, from the outside, was supremely
absurd -- after all, how important is a $5 membership fee, really -- but
from the inside, it seemed to touch a strange chord, one that indicates
that nettime still has a 'sense of self', which, not surprisingly, is
still defined negatively.






+---+-+--- 
http://felix.openflows.org

#  distributed via : no commercial use without permission
#   is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: [EMAIL PROTECTED]


Re: anti-piracy goons considered harmful

2003-02-04 Thread Felix Stalder
At 03.02.03 19:14, Morlock Elloi wrote:

>The only way to benefit from openness is to use it and verify yourself, 
>instead of deluding yourself that someone out there will spend days 
>doing that for ... what?

There are certainly advantages to doing things yourself (just ask all the 
guys hanging around 'home depot'), but there are also clear limitations to 
it. In how many areas can one be truly proficient? In very few, at best. I 
think it was said of Goethe that he was the last person to be able to 
command the entire (scientific) knowledge available at the time. The 
Germans even have an expression for this: "Universalgelehrter." This, 
unfortunately, was nearly 200 years ago and the amount of knowledge 
available has exploded many times to a degree that there is probably nobody 
around who fully understands even a clearly circumscribed domain such as a 
computer.

I have no idea of aviation (beyond stretching my arm out of the window of a 
speeding car) but I still have a couple of frequent flyer accounts. Does 
that make me a naive fool? Not necessarily, since there are social 
institutions around, say the FAA in the US, whose mandate is to ensure 
aviation safety. They verify the safety of airplanes, airports etc. Now, 
the trick for such institutions to work is that a) there need to be the 
resources around to get the job done, and b) the conditions need to be 
right so that the job is doable at all.

With respect to software, if you do not have access to the source code, there 
is very little you can do, no matter what your resources are, in order to 
check the specifics of the program, particularly not in regard to hidden 
features or bugs. In effect, you are forced to blindly trust the vendor of 
the software. The vendor, of course, has an interest in maintaining the 
reputation of the product, so he will never tell you that something is 
wrong with it (particularly since there is no liability). Opening up the 
source code, at the very least, provides the conditions under which the job 
of verifying the software becomes doable.

Of course, that does not necessarily mean that someone with a keen eye is 
actually doing it. Which gets us to the question of where the resources 
come from to do the checking. This clearly is a tricky problem. What are 
the social institutions supporting OS development in the long run? While 
much remains to be developed, it's not that we are standing at the 
beginning of the process. The way OS projects are organized -- 
collaboratively and openly -- optimizes the chances that bugs are found and 
minimizes the possibilities that someone is able to hide a feature in the 
code. Furthermore, only one person has to find the bug (and fix it) for the 
fix to become available to all users. On the other hand, even if you find a 
bug in an M$ program, chances are, your neighbour will never know it, 
because you are not allowed to tell him and M$ won't do it.

Note that I say "optimizes the chances" and "one person has to find the 
bug" both are strong conditionals. There is no guarantee here. But also 
doing it yourself is not really one, since how do you know that you fully 
understood the code? IBetter assume you don't. I guess there were a lot of 
intelligent people looking at the source code of PGP and still, a bug 
eluded all of them for a long time. Chances are nobody found the bug nobody 
could exploit it. But once the bug was found, it was published readily 
increasing the chances of it being fixed.

The answer to the imperfections of OSS is not to do all the verification 
yourself; after all, the answer to the difficulties of writing good software 
is also not to write it all yourself, but to distribute the process to those 
willing and able to do it. What we need to find now are institutions capable 
of sustaining this process. So far, OSS hasn't done badly on this front 
either.

Felix





--|-
http://felix.openflows.org

#  distributed via : no commercial use without permission
#   is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: [EMAIL PROTECTED]



Re: revenge of the concept

2003-01-27 Thread Felix Stalder
>  In his excellent paper,
>"Coase's Penguin: Linux and The Nature of the Firm," Yochai Benkler
>explains, not the motivation, but the technical and legal
>preconditions for cooperative informational and cultural production.
>The technical considerations are basically: telematically interlinked
>personal computers. The legal precondition is basically: that
>information be treated as what it arguably is, a "non-rivalrous
>good," i.e. a resource that can't run out, that can't be destroyed in
>the using, and that therefore cannot be treated as an ownable
>commodity. Benkler's conclusion is that networked informational and
>cultural production obeys neither the constraints of a firm (with a
>bureaucratic organization), nor the price signals given by a market
>("buy" and "sell" are irrelevant to non-rivalrous goods). So Benkler
>is talking about a form of production which is at once
>non-bureaucratic and, yes, non-capitalist, i.e. divorced from that
>complex and changeable human institution which transnational state
>capitalism now dominates almost entirely: the market.


I think 'non-capitalism' is a misreading of Benkler's argument and of the 
Open Source Software phenomenon. I deliberately say OSS and not Free 
Software, since such a reading might apply more narrowly to FS (though I'm 
not even sure about that) but certainly not to OSS in general. I think 
"(non)capitalism" is a category that doesn't help much in explaining the 
practice of OSS (as opposed to some of the political theories that 
motivate some of the FS/OSS figures).

What Benkler said in this essay, which is indeed brilliant, is that 
conventional economists know of only two ways to organize production: 
within a closed organization (the firm, the bureaucracy) and in an open 
system (the market). The question is always: how to achieve efficient 
organization of people and resources in regard to a desired productive 
outcome. The signals used to achieve this coordination within the closed 
structure are commands relayed through hierarchies. In the open structure, 
it's money: prices attached to goods (and services). What he claims now, 
and I basically agree with him, is that a third way of organizing labour 
has emerged, heavily relying on the Internet. He calls it 'commons-based 
peer production.'

Now, there are capitalist firms and non-capitalist 'firms' (state 
bureaucracies, co-ops) and there are capitalist and non-capitalist markets. 
A traditional farmer's market, for example, is not a capitalist market. 
Just remember Fernand Braudel's distinction between markets and 
anti-markets which Manuel DeLanda dusted off a few years ago (check the 
nettime archives).

In the same sense, there is capitalist 'commons-based peer production' 
(think of Amazon's way to recommend books, or IBM's investment in Linux, 
Redhat and so on). There's also non-capitalist 'commons-based peer 
production' (say, GNU, Debian, Wikipedia, nettime and so on).

What is perhaps most interesting is how the 'capitalist' and 
'non-capitalist' elements intersect and what that might tell us about the 
political dimension of these movements. I think it's exactly this hybridity 
(along with the limitation to non-rivalrous goods and, even more, to 
'functional works') that makes the OSS phenomenon very interesting but only 
of limited value as a political project (which is not necessarily a bad 
thing).


Felix




--|-
http://felix.openflows.org

#  distributed via : no commercial use without permission
#   is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: [EMAIL PROTECTED] and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: [EMAIL PROTECTED]



Re: Space of Flows: Characteristics and Strategies

2002-11-28 Thread Felix Stalder
At 27.11.02 08:03, brian caroll wrote:

>   hi Felix. fascinating essay. i am left wondering who/what
>   exactly in the mid-1970s ushered in the space of flows, i
>   can guess but wonder if it goes into complexity theorists.


My reasoning for setting this date is simpler. In the early 1970s many of
the leading stock exchanges switched from telephone/telex to
computer-networks as their main infrastructure of communication. Around
the same time Reuters introduced trading terminals for individual traders
(a kind of giant mainframe, if I understand this correctly), a service
that was adopted very rapidly. Also in the early 1970s, Nixon floated the
U$, ie cancelled the link of the most important currency to anything
material (gold). Both movements rapidly expanded the scope of the
financial markets.  In the 1980s, in the US and Britain, corresponding
political ideologies came to power which, through a series of
deregulations, particularly of the financial services, further expanded
the scope of these markets (ie what could be traded how and by whom). Like
all dates for complex processes, this one is a bit arbitrary, but in
hindsight, it seems that around that time, the growth of the digital
space of flows passed a "point-of-no-return." The Internet, arguably,
didn't reach this point until the late 1980s/early 1990s.


>   the other aspect is that i find it difficult for such a value
>   given to the digital, as it is not always superior, and in
>   many ways inferior to actual experience. audiophiles and the
>   warm sound of vacuum tubes is one example, communicating face-
>   to-face and the signal-noise ratio might be another, if it is
>   to include time and movement and how gestures and the way an
>   idea is shared contributes to understanding, besides a trillion
>   other examples, including sex. as far as i know, the human body
>   processes its information both through analogue and digital
>   types of electromagnetic nerve impulses, or some physiological
>   subsystem which keeps us breathing, moving, and acting.


I didn't mean to give particular value to the digital beyond recognizing
that increasingly communication is at one point or another digital and
that this medium has different characteristics than other media. This does
not affect so much face-to-face communication as all other forms, ie print
and other analog communication technologies. As you said, there is nothing
that can match the richness of communication that takes place in one place
at one time, but all the other communicative crutches (our extensions, as
McLuhan called them) that we have to stretch communication across time and
space are being redefined. Whether or not that's a good or bad process
is hard to say, and given the extreme complexity and heterogeneity of this
process, it is also probably not that interesting to address in general.


>   the technological forms of life, i am guessing, might be in some
>   way related to what happens when all those individual arrows on
>   a chart of the ocean of what is start doing their things, and
>   a descriptor is needed to say 'what is' is. in its present form,
>   i wonder if technological forms of life is a variant on the idea
>   of memes. that is, if there are nodes, if each are nodes relating
>   to eachother, and how the space of flows, or the node of techno-
>   life, might relate to other ideas/concepts.

Technological form of life is a bit tricky a concept and my use of it is
still experimental. But intuitively, it seems to make sense in that the
smallest 'functional' (or perhaps 'cognitive' or 'creative') entity is
not the individual (ie the reasoning subject of the Enlightenment) but an
association. This idea comes from Goffman and his interactionist studies
of families way back in the 60s (I might be mistaken with the dates, I do
not have the references at hand). What is different now, and that is why
there is the modifier 'technological' to it, is that many of these
associations (such as nettime as an 'intellectual culture', if you will)
are no longer face to face, but mediated technologically. And the
technology, perhaps only because it's relatively new and still unstable,
is visible as a major influence on how these associations are formed and
structured. The whole idea of a "movement of movements" is strikingly
similar to the idea of a "network of networks" (ie the Internet).


>From: "Elnor Buhard" <[EMAIL PROTECTED]>
>Date: Tue, 26 Nov 2002 16:28:40 -0500
>Subject: Re:  Space of Flows:  Characteristics and  Strategies
>
>
>i have some issues with the concepts of flow described below, in particular
>to the claim that they are newly unstable.  maritime flow, and trade routes
>in general, are historically quite volatile  (e.g. trade, language, +
>power on the niger river, around the 13th century in present-day mali).
>glancing at the record, it seems like flows were just as subject to change,
>things were structurally quite similar although admittedly with a much
>slower clock s

Space of Flows: Characteristics and Strategies

2002-11-26 Thread Felix Stalder
[This is a slightly revised version of a talk I gave at the Doors of
Perception conference called "Flow: The design challenge of pervasive
computing." http://flow.doorsofperception.com ]


Space of Flows:  Characteristics and  Strategies
=====
Felix Stalder <[EMAIL PROTECTED]>


This text addresses three interrelated questions in order to query the 
status of the object within the space of flows and speculate about some 
ramifications for designing within this new environment.

* What is the space of flows in general?
* How is it different from the space of places, the type of geography we 
learned in school? And, finally,
* how do we deal with these differences?

So, what is the space of flows? The conference flyer already introduced us
to the famous idea of Heraclitus: panta rei: everything flows. What he was
referring to is a general condition of nature. Everything is in a constant
process of transformation. Even Mount Everest is not static but continues
to grow at a rate of about 3-5 millimeters each year.

The concept of the space of flows is different from this. It refers to a
specific historic condition which has become predominant only quite
recently, arguably in the mid 1970s. The space of flows -- to give you a
general definition -- is that stage of human action whose dimensions are
created by dynamic movement, rather than by static location.

The operative words here are movement and human action. Without movement,
this space would cease to exist and we would fall back into the space of
places, defined by mountains, buildings and borders.

Equally important, the movement takes place through human action and it
creates the conditions for our everyday lives. In this sense, the drifting
tectonic plates, even though they move too, are not part of the space of
flows. They drift no matter what we do, causing much headache and the
occasional humbling experience to Californians.

I said that the space of flows has become the predominant stage
on which our world is shaped only recently. But, of course, there have
always been social spaces that were created by movement. Here in
Amsterdam, the maritime world of long distance trading is still very
present.

The space of flows -- now and then -- consists of three elements:

* the medium through which things flow,
* the things that flow, and
* the nodes among which the flows circulate.

In regard to Dutch long distance trading, the medium was the ocean. The
medium was characterized by currents, storms and many other conditions
that favoured some flows over others. Oceans and sailing ships were
unsuitable for trading fresh fruits, but highly capable of transporting
dried spices.  This point can be generalized. There is always a close
relationship between the medium of the flows and their contents. One of
the first messages that came through the transatlantic telegraph cable
when it opened in the mid 19th century was: The Queen has a cold. This
factoid became newsworthy only under the conditions of instantaneous
transmission.

The final element in the space of flows is the nodes, the harbours and
trading posts that the Dutch established around the world. Flows always go
from one node to another. In a world with only a single harbour, ships
are mere entertainment. Nodes focus movement into flows. Nodes, like the
harbour where goods are loaded onto ships, are membranes that connect
various flows to one another and flows with places: a node is a kind of
interface, and like all interfaces, it profoundly shapes what it
interfaces to.

Flows are created by subtle interplay of similarity and difference among
nodes. People who do not speak the same language have a very hard time
communicating. People who know the exact same stories have nothing to tell
to one another. We have all seen old couples who sit silently next to one
another. They know each other so well that they have nothing to exchange
anymore.

Despite similarities, maritime flows are also very different from today's
information flows. Since ports, the distance between them and the
currents of the sea are relatively stable, the maritime space of flows is
static in ways ours is not.

The quintessential node in our contemporary space of flows is the office,
the command and control center for the flows of goods, people and
information. In pre-industrial manufacturing, the functions of the work
bench and of the office were barely separated. Rather, they were one and
the same. This was efficient as long as the flows were small and slow.
As volume and speed of production increased, this model came into a
crisis. As a direct response to the growth of factory output over the
previous 100 years, the office emerged into centrality in the second half
of the 19th century.

Flows and nodes began to differentiate.

The office represents the attempt to better manage the flows of goods
pouring out of the factories. These flows are constant