Bob Wyman wrote:

>Nikolas 'Atrus' Coukouma wrote:
>>I now know that Creative Commons has an RDF schema for describing
>>licensing
>       We've been over this ground many times before. Read my post at:
>http://bobwyman.pubsub.com/main/2005/03/lazyweb_query_a.html
I didn't really want to discuss the licensing mess here. The previous
discussion seemed to conclude that it was risky and out of scope. I just
wanted to note that some attempt had been made to produce a
machine-readable description of licensing.
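For reference, here is roughly how such a license is commonly attached to a feed, using the RSS 2.0 creativeCommons module. This is an illustrative sketch: the license URL is a real Creative Commons license URI, but the feed itself is made up. Note that the element only *grants* the listed rights; it restricts nothing.

```xml
<?xml version="1.0"?>
<rss version="2.0"
     xmlns:creativeCommons="http://backend.userland.com/creativeCommonsRssModule">
  <channel>
    <title>Example Weblog</title>
    <link>http://example.org/</link>
    <description>A feed carrying a machine-readable license.</description>
    <!-- Grants the rights described at this URL; it forbids nothing. -->
    <creativeCommons:license>http://creativecommons.org/licenses/by-nc/2.0/</creativeCommons:license>
  </channel>
</rss>
```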

>       Creative commons licenses can only be used to *grant* permissions --
>not restrict them. In fact, it is violation of the Creative Commons licenses
>to implement systems that interpret CC licenses as prohibiting use. For
>example, a CC "non-commercial" license does not forbid commercial use.
>Rather, it simply fails to grant commercial use rights. Whether or not
>commercial use is permitted depends on other factors -- such as local law,
>etc. 
>       To claim that something like a CC "non-commercial" license
>prohibited the commercial use of content, you would have to argue that such
>use was prohibited whether or not a CC license was present. You would have
>to argue that NO content in RSS or Atom feeds could be used without an
>explicit grant via Creative Commons or some other means. The result would be
>a "poisoning of the stream" in that success of your assertion would
>instantly shut down all use of RSS and Atom feeds which did not carry
>Creative Commons licenses or other means of granting rights (i.e. 99.9% of
>all feeds.)
>       There is a strong argument that can be made that anyone who
>publishes an RSS or Atom file without taking care to control access to the
>feed has implicitly waived their right to restrict use of the data since the
>usual and customary use of RSS and Atom files is to facilitate aggregation
>and syndication.
>       The Digital Rights Management space is papered with hundreds of
>patents -- a number of which claim the use of XML to encode licenses and
>many which make general claims concerning methods no matter what encoding
>format might be used. Without evidence that someone has done a complete
>patent search concerning whatever methods may be proposed to control use of
>content (such as noindex extensions, etc.), it is unlikely that any
>responsible developer would risk the potential patent infringements that
>might result from implementing the method.
Now the /success/ of Creative Commons is debatable. I don't think anyone
is going to try to implement DRM, in the usual locking sense, in a
syndication feed, but maybe I'm wrong. I'm only concerned with providing
guidelines to users so they have a better chance of knowing what they're
doing.

Thanks for pointing out the problem with even encoding licenses. I'll
definitely proceed with caution, if at all.

>>opt-out of services such as Feedster, Technorati, and PubSub.
>       To the best of my knowledge, there is no useful means by which the
>services mentioned can be distinguished from any other form of aggregator.
>(Note: I am CTO for PubSub.com. What have you got against us?)
I don't have anything against any of the services named, per se. They're
just some of the largest and best search engines around, and therefore
the primary concern of people trying to keep their content from being
indexed (for whatever reason).

> The PubSub
>service, for instance, only reads RSS and Atom feeds (we do no HTML
>scraping) and we produce feeds *only* on behalf of specific users. (with the
>exception of a few "sample" feeds.) Thus, you can't distinguish what we do
>for our users from what individual users do for themselves with aggregators
>running on their own machines. (Of course, users often pay for personal
>aggregators while PubSub is free. Which is more commercial?)
The area is definitely gray (definitely a maybe?). The best distinction
I can come up with is that search/match services index large volumes of
content, most of which no individual user ever intends to read. I
suppose people could subscribe to 8,988,910 weblogs (number from
Technorati) personally and do the same thing, but it seems absurd to
compare reading 70% of entries to reading 0.000001% (if that).

Similar arguments can be made about HTML search engines. Sure, you could
crawl the web yourself and "view" that information, but it's unlikely
you'll ever read a significant percentage of it. (I'm sure specification
and manual authors feel that this is true for all content.)

You also share the same content among all users, but so do
online-aggregation services. I doubt there will ever be a definitive
line. I think there's a fuzzy philosophical one, and you can decide
which side of it you're on.

>>it was suggested by Roger Benningfield that search engines and
>>syndication sites use atom:summary instead of atom:content to
>>avoid the noarchive issue.
>       Most feeds do not contain atom:summary elements. It is optional.
>This is not a useful path to a solution. In any case, reliance on
>atom:summary elements wouldn't help you with RSS files. If Atom has a rights
>issue then RSS does as well.
I was unsatisfied with it as a solution. I think it makes a nice little
guideline in the rare cases it applies to, and that's about it.
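To make the guideline concrete, an entry that ships only atom:summary might look like this minimal sketch (the element names are Atom's; the identifiers and text are invented):

```xml
<entry xmlns="http://www.w3.org/2005/Atom">
  <title>A post I'd rather not see archived in full</title>
  <link href="http://example.org/2005/06/post"/>
  <id>tag:example.org,2005:post-1</id>
  <updated>2005-06-15T12:00:00Z</updated>
  <summary>The first paragraph, as a teaser.</summary>
  <!-- No atom:content element: aggregators have to follow the link
       for the full text, so only the teaser ends up indexed. -->
</entry>
```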

I've also already e-mailed Dave Winer to discuss the issue in RSS. I
even cited the discussion ;) I would certainly prefer a cross-format
solution and am quite happy to use Walter Underwood's general solution
for XML content (mentioned elsewhere in this tree).

>       If you want to have fine control over the use of your feeds, why
>don't you use a blogging system that provides such control? For instance,
>Yahoo!'s 360 service enables very fine control of access rights. Also,
>services like LiveJournal, Typepad, etc. allow you to mark your posts as
>"non-public" and as a result they don't get syndicated. The control you want
>should be provided by your blogging software -- not embedded in the Atom or
>RSS formats. RSS and Atom are formats for open and broad syndication. Any
>effort to bring DRM into this world will only result in a poisoning of the
>stream and a loss of the value of syndication to the millions who currently
>rely on it.
The point that "RSS and Atom are formats for open and broad syndication"
is an excellent one, and something I've pointed out often. The question
was "is there any interest in providing minimal privacy elements in the
spec, comparable to what HTML provides?" and apparently the answer is
"no." That's fine.

Part of what I've been wrestling with is the question of "are RSS and
Atom suited to aggregation when syndication is not desired?" LiveJournal
is what I spend time working on, and there's been an argument over there
about whether or not people want all of the content of all of their
public posts syndicated. It's being worked on at the application level,
and I wondered if there could be anything done once it's in one of these
standard XML formats. The current approach is to just not put the
information into a syndication-oriented format, and it seems to be the
right one.

>       If you really want to focus on non-individual use of your stuff, why
>don't you go after the sites that simply take RSS data and use it to build
>HTML web sites? It is entirely possible that the implicit waiver concerning
>content flowing through the syndication systems does not apply once the
>content is removed from the system. (Note: This may be a *very* important
>point... but is one for the lawyers, not us mere mortals.)
I concur. I'm also looking for something that helps people find (and
possibly be alerted to) copyright infringement and abuse. It's a
somewhat separate battle.

My current feeling is that a "noarchive"-like element should be used to
indicate that the items should not persist any longer than they do in
the feed. I'm sure the issue will be explored sometime, and hopefully
before things get too unpleasant.
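To make that concrete, such an element might look like the following. This is purely hypothetical: the namespace and element name are invented for illustration, and nothing like it has been formally proposed here.

```xml
<entry xmlns="http://www.w3.org/2005/Atom"
       xmlns:hint="http://example.org/ns/archive-hint">
  <title>Ephemeral post</title>
  <id>tag:example.org,2005:post-2</id>
  <updated>2005-06-15T12:00:00Z</updated>
  <!-- HYPOTHETICAL element: asks consumers to keep this item no
       longer than it remains in the feed itself. -->
  <hint:noarchive/>
  <content type="text">Some short-lived content.</content>
</entry>
```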

>               bob wyman

-Nikolas 'Atrus' Coukouma
