Re: PaceRepeatIdInDocument solution

2005-02-20 Thread Sam Ruby
Bob Wyman wrote:
	Given that history shows that publishing repeated ids has never
bothered anyone enough to cause them to complain, we should permit this
benign practice to continue. 
I have exactly the opposite experience.  I have people who have thanked 
me for noticing that they have repeated ids as it indicated an error in 
their software.

- Sam Ruby


Re: PaceRepeatIdInDocument solution

2005-02-20 Thread Graham
On 20 Feb 2005, at 4:07 am, Bob Wyman wrote:
PubSub regularly produces feeds with multiple instances of the same 
atom:id.
Which part of "universally unique" didn't you understand?
	It is particularly important to avoid prohibiting this benign
practice since it is so important to generators of aggregated feeds.
Aggregated feed generators are supposed to maintain atom:id unchanged 
when
they copy entries into an aggregate feed.
This is a fair point. I concede that multiple versions of entries with 
the same id are acceptable if and only if they have different feed ids 
in their head-in-entry, since essentially then they aren't in the same 
feed.

Graham


Re: PaceRepeatIdInDocument solution

2005-02-20 Thread Eric Scheid

On 20/2/05 4:34 PM, Graham [EMAIL PROTECTED] wrote:

 That's not what I meant. I opposed atom:modified because this use case
 wasn't on the table then. I oppose multiple ids partly because we don't
 have atom:modified. You can't have one without the other.

if this use case was on the table back then, and you were to consider the
question in that light, where would you stand?

(actually, we could have atom:modified while still outlawing multiple id's,
but you are right that atom:modified is required to disambiguate if multiple
id's are allowed.)

 My real problem with atom:modified is that it's unnecessarily tied to
 the Last Modification Date semantic, when it would work just as well
 for this purpose if it weren't. We just need a date with the constraint
 date(a) < date(b) whenever entry instance a is older than entry instance
 b. This would make it easier to generate in various scenarios, and
 sidestep the problem of defining what a modification is.

heh -- one way to then generate to fit your requirements would be

atom:modified := atom:published + sequential-number

that is, increment the atom:modified by one second for every version.

Semantically, it would work ... for comparing two instances of one entry. It
wouldn't work for establishing if an entry was modified before or after
[some event moment] (eg. close of the stock exchange).
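
As a rough sketch (invented helper name, not from any real implementation),
that trick could be generated like this in Python:

    from datetime import datetime, timedelta, timezone

    def pseudo_modified(published, version):
        # Return an RFC 3339-style timestamp that grows by one second per
        # version, so later versions always compare later -- Graham's
        # date(a) < date(b) constraint -- without tracking real edit times.
        return (published + timedelta(seconds=version)).isoformat()

    published = datetime(2005, 2, 20, 16, 30, tzinfo=timezone.utc)
    print(pseudo_modified(published, 1))   # 2005-02-20T16:30:01+00:00
    print(pseudo_modified(published, 2))   # strictly later, even for same-second edits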

As to defining modification ... I was writing spec text which attempted
just that ... I was working towards the idea that a modification is any
change in the values of atom:entry elements, other than serialisation
(including character encoding and element order), and ignoring any changes
in referenced resources.

 Nonsense. That's like arguing that http agents should only support those
 mime-types which were already defined oh so many years ago. No software
 currently exists that can possibly be expecting application/foo, but that
 doesn't mean application/foo is an illegal mime-type.
 
 No, since the HTTP spec says that any mime type is possible, whereas the Atom
 spec says ids are universally unique. If it's wrong to then think you won't
 find the same id twice in the same document, the spec needs to say so.

My apologies: I was making an analogy, not providing an example. As such,
ignore the specifics of the analogy. I should have just written "Spec FOO is
silent on X, therefore X is verboten." Which is a nonsense position.

e.



Re: PaceRepeatIdInDocument solution

2005-02-20 Thread Henry Story

On 20 Feb 2005, at 17:10, Graham wrote:
On 20 Feb 2005, at 4:07 am, Bob Wyman wrote:
PubSub regularly produces feeds with multiple instances of the same 
atom:id.
Which part of "universally unique" didn't you understand?
Ok, I see: so you interpret the "universally unique" in
[[
An Identity construct is an element whose content conveys a permanent,
universally unique identifier for the construct's parent. Its content
MUST be a URI, as defined by [RFC3986]. Note that the definition of
URI excludes relative references.
]]

to mean that there can only be one entry in a feed with the same id, and
presumably across all feeds, or else why use the word "universally" and
not "feedally"?
Why could we not then allow another id construct, call it entryId that
would be what all entries that are just editorial changes of one another
have in common?
This would be something like:
<feed>
...
  <entry>
    <id>tag:bblfish.net/entry1/version1</id>
    <entryid>tag:bblfish.net/entry1/</entryid>
    <title>Atom Robots Run Amok</title>
    ...
  </entry>
  <entry>
    <id>tag:bblfish.net/entry1/version2</id>
    <entryid>tag:bblfish.net/entry1/</entryid>
    <title>Atom-Powered Robots Run Amok</title>
    ...
  </entry>
</feed>
As you can see in the above feed there are no two entries with
the same id. Yet there are two entries with the same entryid.
Would the above be an ok feed for you, or are there some other
reasons why an entryid node would be illegal?
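
As a rough sketch (invented names; assuming the XML has already been parsed
into simple records), a reader could collapse such a feed like this:

    # Keep every version if you like, but display only the last one seen per
    # entryid (document order), since this proposal carries no modified date.
    def latest_per_entryid(entries):
        latest = {}
        for entry in entries:
            latest[entry["entryid"]] = entry
        return latest

    entries = [
        {"id": "tag:bblfish.net/entry1/version1",
         "entryid": "tag:bblfish.net/entry1/",
         "title": "Atom Robots Run Amok"},
        {"id": "tag:bblfish.net/entry1/version2",
         "entryid": "tag:bblfish.net/entry1/",
         "title": "Atom-Powered Robots Run Amok"},
    ]

    for entry in latest_per_entryid(entries).values():
        print(entry["title"])   # prints only the later version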
	It is particularly important to avoid prohibiting this benign
practice since it is so important to generators of aggregated feeds.
Aggregated feed generators are supposed to maintain atom:id unchanged 
when
they copy entries into an aggregate feed.
This is a fair point. I concede that multiple versions of entries with 
the same id are acceptable if and only if they have different feed ids 
in their head-in-entry, since essentially then they aren't in the same 
feed.
So should we replace "universally unique" above with "unique in a feed"
then?

Graham



Re: PaceRepeatIdInDocument solution

2005-02-20 Thread Walter Underwood

About logical clocks in atom:modified:

--On February 21, 2005 3:30:13 AM +1100 Eric Scheid [EMAIL PROTECTED] wrote:

 Semantically, it would work ... for comparing two instances of one entry. It
 wouldn't work for establishing if an entry was modified before or after
 [some event moment] (eg. close of the stock exchange).

Establishing sequences of events is rather tricky. See Leslie Lamport's
"Time, Clocks, and the Ordering of Events in Distributed Systems" for how
to do it with logical clocks. The core part of the paper is short, maybe
five pages, and definitely worth reading if you care about this stuff.

 http://research.microsoft.com/users/lamport/pubs/time-clocks.pdf

Synchronized clocks make this simpler. If Atom depends on comparing timestamps
from different servers, then synchronized clocks are a SHOULD. See the text in
PaceCaching for an example.

Synchronized clocks are already a SHOULD for HTTP.
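
For those who want the flavour without reading the paper, a toy sketch of
the bookkeeping (my paraphrase, not Lamport's notation):

    class LamportClock:
        # Increment on every local event; on receipt, take max(local,
        # received) + 1. The resulting order is consistent with causality,
        # which is what you need to sequence entry versions without
        # synchronized wall clocks.
        def __init__(self):
            self.time = 0

        def tick(self):            # local event, e.g. publishing an entry
            self.time += 1
            return self.time

        def receive(self, stamp):  # merge a stamp seen on an incoming entry
            self.time = max(self.time, stamp) + 1
            return self.time

    a, b = LamportClock(), LamportClock()
    stamp = a.tick()          # a's event is stamped 1
    print(b.receive(stamp))   # 2: b is now logically "after" a's event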

wunder
--
Walter Underwood
Principal Architect, Verity



Re: PaceRepeatIdInDocument solution

2005-02-20 Thread Graham
On 20 Feb 2005, at 4:30 pm, Eric Scheid wrote:
if this use case was on the table back then, and you were to consider 
the
question in that light, where would you stand?
I like the model where the feed content is approximately "The current
version of the latest entries". I don't think anything else makes much
sense, least of all "Various states past and present of various entries
(some assembly required)".

heh -- one way to then generate to fit your requirements would be
My idea would be that the originating server would simply stamp entries 
with the current time during feed generation, so if they get mixed up 
in transit by third parties or caches the later version would still be 
known. Note the originating server doesn't have to store or keep track 
of anything.

As to defining modification ... I was writing spec text which attempted
just that ... I was working towards the idea that a modification is any
change in the values of atom:entry elements, other than serialisation
(including character encoding and element order), and ignoring any changes
in referenced resources.
But that gets complicated to generate cleanly. When you edit your feed 
template you need to do something like:

  if (entry modification date < date template was edited)
     print date template was edited
  else
     print entry modification date
Generating a last modification date according to someone else's idea of 
modification is not pretty.

My apologies: I was making an analogy, not providing an example. As 
such,
ignore the specifics of the analogy. I should have just written Spec 
FOO is
silent on X, therefore X is verboten. Which is a nonsense position.
I wouldn't define the phrase "universally unique" as Atom being silent. 
It is massively open to interpretation since the literal meaning is 
nonsense, and I suppose conveying no useful information could be 
interpreted as silence.

Graham


RE: PaceRepeatIdInDocument solution

2005-02-20 Thread Bob Wyman

Graham wrote:
 My idea would be that the originating server would simply stamp 
 entries with the current time during feed generation, so if they
 get mixed up in transit by third parties or caches the later version
 would still be known. Note the originating server doesn't have to
 store or keep track of anything.
You propose timestamps, yet you oppose atom:modified -- which was
intended to provide precisely the timestamps you suggest. Or, is there
something I am missing? (atom:modified was entry-specific but you seem to be
suggesting a feed-global timestamp...)
Is the problem in your comment that "the originating server doesn't
have to store or keep track of anything"? Does this imply that your
timestamp is really just the atom:updated of the feed and would change every
time that the feed was updated? If this is the case, should we reword the
definition of atom:updated to say that when it is used as feed metadata, it
MUST be updated every time the feed is changed in ANY way? Should the
"significant change" words only apply to atom:updated in entries? (Note:
Since the feed's atom:updated is an element of Head, this implies that if
HeadInEntry stands, the feed's atom:updated would or could be in the
entry.)
If your timestamp is really the same as the feed's atom:updated,
then what is the impact of your proposal on signatures? Would all individual
entries in a feed need to be re-signed every time any change was made to the
feed such as inserting a new entry? Would this be the case even if the
change did not otherwise modify the signed entry? 
Does your timestamp proposal imply that an entry which appeared in
multiple feeds would have a different timestamp in each feed in which it appeared?
(Note: That would have the odd effect of tending to make error-prone copies
consistently appear to be the 'last modifications.' One tends to write to
the main feed first and then later to category feeds... Copies will
generally have timestamps later than the originals.)

 Generating a last modification date according to someone else's
 idea of modification is not pretty.
Yes! This is the problem that an intermediary in the channel faces.
The intermediary (a proxy, retrospective search engine, prospective matching
engine, etc.) needs to know which entry is the most recent modification.
However, Atom provides no mechanism by which the last modification can be
identified without heuristics. (atom:updated only tells you the time of the
last significant modification, but that leaves you unable to determine
which of various alternative insignificant modifications should be passed
on by the intermediary.)
Without help from the Atom format, the best an intermediary can do
is keep track of date_entry_was_found. However, this can cause problems
since entries can exist in multiple feeds. Thus, the order in which the
feeds are read can cause old entries to over-write newer ones.
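
For illustration, a rough sketch of that failure mode (the store and field
names are invented, not code from any real aggregator):

    # Keyed only on date_entry_was_found, the last feed fetched always wins
    # for a shared atom:id, so fetching a stale category feed after the main
    # feed silently replaces the newer copy.
    store = {}   # atom:id -> (entry, date_entry_was_found)
    clock = 0

    def ingest(feed_entries):
        global clock
        for entry in feed_entries:
            clock += 1
            store[entry["id"]] = (entry, clock)   # no way to tell which is newer

    main_feed = [{"id": "tag:example.org,2005:e1", "title": "corrected title"}]
    stale_category_feed = [{"id": "tag:example.org,2005:e1", "title": "old title"}]

    ingest(main_feed)
    ingest(stale_category_feed)   # read later, but older content
    print(store["tag:example.org,2005:e1"][0]["title"])   # "old title" wins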
 
bob wyman




Re: PaceRepeatIdInDocument solution

2005-02-19 Thread Henry Story
I think I can prove that the two versions are perfectly compatible and 
orthogonal. I can prove that logically there is no inconsistency, and
some empirical backing that this is feasible. But I am not alone. Bob
Wyman I believe has a lot more empirical support.

You on the other hand, as usual I notice, have absolutely no argument to
defend your case.
Henry Story
On 18 Feb 2005, at 23:55, Graham wrote:
Allowing more than one version of the same entry in a syndication feed 
is unacceptable in itself, which is fundamentally incompatible with 
archive feeds, no matter what the conceptual definition of id is.

Graham



Re: PaceRepeatIdInDocument solution

2005-02-19 Thread Henry Story

On 18 Feb 2005, at 23:55, Graham wrote:
Allowing more than one version of the same entry in a syndication 
feed is unacceptable in itself, which is fundamentally incompatible 
with archive feeds, no matter what the conceptual definition of id 
is.

Graham
Let me make my point even clearer. If something is fundamentally 
incompatible,
then it should be *dead-easy* to prove or reveal this incompatibility.

So develop your thought a little, and you can only come out the winner.
Henry


Re: PaceRepeatIdInDocument solution

2005-02-19 Thread Graham
On 19 Feb 2005, at 11:23 am, Henry Story wrote:
Let me make my point even clearer. If something is fundamentally 
incompatible,
then it should be *dead-easy* to prove or reveal this incompatibility.
i) Syndication documents shouldn't ever contain multiple versions of 
the same entry*.
ii) Archive documents apparently need to be able to contain multiple 
versions of the same entry.

* for the simple reason that it makes them an order of magnitude harder 
to process and display correctly (and often impossible to display 
correctly, since it won't always be clear which is the latest version).

Your wittering on about conceptual models doesn't make you better than 
us.

Graham


Re: PaceRepeatIdInDocument solution

2005-02-19 Thread Roger B.

 i) Syndication documents shouldn't ever contain multiple versions of
 the same entry*.

Graham: +1.

 ii) Archive documents apparently need to be able to contain multiple
 versions of the same entry.

I don't even buy that much, personally.

--
Roger Benningfield



Re: PaceRepeatIdInDocument solution

2005-02-19 Thread Eric Scheid

On 20/2/05 2:46 AM, Graham [EMAIL PROTECTED] wrote:

 i) Syndication documents shouldn't ever contain multiple versions of
 the same entry*.
 
 * for the simple reason that it makes them an order of magnitude harder
 to process and display correctly (and often impossible to display
 correctly, since it won't always be clear which is the latest version).

Think of a feed as a stream of entry instances (not hard to do), and process
accordingly. The same thing with a feed document. Whether you read from the
top of the document to the bottom, or vice versa, shouldn't matter - you can
identify the more recent entry by atom:updated. If two instances with the
same atom:id have the same atom:updated, then there is no significant
difference between the two, so go with a random choice (that's not hard
either) (and lobby for atom:modified while you're at it).
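
A rough sketch of that processing (invented names; assumes the atom:updated
values are directly comparable as written):

    # Treat the document as a stream of entry instances and keep, per atom:id,
    # the instance with the greatest atom:updated; on a tie, either instance
    # will do, since the difference is by definition not significant.
    def collapse(instances):
        kept = {}
        for entry in instances:   # reading order doesn't matter
            current = kept.get(entry["id"])
            if current is None or entry["updated"] > current["updated"]:
                kept[entry["id"]] = entry
        return list(kept.values())

    feed = [
        {"id": "tag:example.org,2005:e1",
         "updated": "2005-02-19T10:00:00Z", "title": "v1"},
        {"id": "tag:example.org,2005:e1",
         "updated": "2005-02-20T09:00:00Z", "title": "v2"},
    ]
    print([e["title"] for e in collapse(feed)])   # ['v2'], whichever order you read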

For feed readers that already support entry persistence and entry
replacement when an entry is updated from one document to the next, why is
this an order of magnitude more difficult to do in the one document?

e.



Re: PaceRepeatIdInDocument solution

2005-02-19 Thread Graham
On 19 Feb 2005, at 11:06 pm, Eric Scheid wrote:
If two instances with the same atom:id have the same atom:updated, 
then there is no significant difference between the two, so go with a 
random choice
*that the author considered significant*. If you've told the user 
they're getting the latest version, and they see something else, that 
doesn't fit my definition of working correctly. A paradigm where the 
instance in the feed is always the newest version works much much 
better.

For feed readers that already support entry persistence and entry
replacement when an entry is updated from one document to the next, 
why is
this an order of magnitude more difficult to do in the one document?
I was talking about feed readers that don't.
And even those that do, you now need to look for duplicates within the 
feed instead of just comparing the new set to the old set. That is, instead
of removing duplicates that exist between set A and set B, I now also 
have to look within set A as well. You seem to have suggested earlier 
that entries be added to the store one by one. This is not possible in 
Shrook because of the various layers of idiot proofing.

Graham


Re: PaceRepeatIdInDocument solution

2005-02-19 Thread Henry Story

On 19 Feb 2005, at 16:46, Graham wrote:
On 19 Feb 2005, at 11:23 am, Henry Story wrote:
Let me make my point even clearer. If something is fundamentally 
incompatible,
then it should be *dead-easy* to prove or reveal this incompatibility.
i) Syndication documents shouldn't ever contain multiple versions of 
the same entry*.
ii) Archive documents apparently need to be able to contain multiple 
versions of the same entry.

* for the simple reason that it makes them an order of magnitude 
harder to process and display correctly (and often impossible to 
display correctly, since it won't always be clear which is the latest 
version).
I don't accept that it makes it an order of magnitude harder to process 
these
documents, or if it is an order of magnitude harder, it's an order of 
magnitude
larger than an infinitesimal amount, which is still an infinitesimal 
amount.
I am writing such a tool, so I think I have some grasp on the subject.

But accepting for the sake of argument that you are right, you need to
compare the difficulty of writing a feed reader with the difficulty of
writing a feed itself. Not allowing duplicate versions of an entry in a
feed just pushes the complexity from the feed reader to the feed writer:
now the feed writer has to contain the logic to make sure that no
duplicates appear in the feed. Instead of the feed writer just being able
to paste the new entry to the end of the feed, it has to parse the whole
feed document and make sure it contains no duplicates.
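
A toy sketch of the difference (invented structures, not anyone's actual
publishing code):

    def append_only(feed_entries, new_entry):
        # Repeats allowed: the writer just adds the new version to the end.
        feed_entries.append(new_entry)

    def append_no_repeats(feed_entries, new_entry):
        # Repeats forbidden: the writer must re-read the document and drop
        # any earlier version with the same atom:id before adding.
        feed_entries[:] = [e for e in feed_entries if e["id"] != new_entry["id"]]
        feed_entries.append(new_entry)

    feed = [{"id": "e1", "title": "v1"}]
    append_no_repeats(feed, {"id": "e1", "title": "v2"})
    print(feed)   # only the v2 instance remains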

Since I can see very good reasons to make life easier for the feed 
writer, in the
same way as one has tried to keep html simple for the common html 
writer, I
think your argument may in fact turn out to be a good supporting 
argument for
allowing multiple versions of an entry in the same feed document.

Your wittering on about conceptual models doesn't make you better than 
us.
I never pretended it does make me better.
I have been exploring tools such as RDF, as I believe that they can
bring a lot of clarity to debates such as this one. Just as engineers
don't hesitate to use mathematics to help them in their tasks, so I
think using logical analysis should help us here. I hope that as I
understand these tools better I will be able to explain the insights
these disciplines bring in plainer English.

In the meantime I have a lot of respect for Tim Berners-Lee, and
I try my best to understand the direction he is going in, the tools he
is developing and the insights these lead to.
Henry Story
http://bblfish.net/
Graham



Re: PaceRepeatIdInDocument solution

2005-02-19 Thread Graham
On 20 Feb 2005, at 1:27 am, Eric Scheid wrote:
hmmm ... looking back in the archives I see you were opposed to
atom:modified, you couldn't see any use case where you would want the 
entry
instances to clearly indicate which is more recent. Hashes won't help 
you
here.
Yes, if you want multiple versions you need atom:modified. I oppose 
both.

A paradigm that fails completely once a reader starts traversing
@rel="prev"
Not if the url in the prev is properly thought through; i.e. instead of
asking for "page 2" the uri query asks for "entries before n", where n
is the oldest entry number in the page before.

Anyway, rel="prev" doesn't exist last time I checked.
or they have a planet aggregator in their subscriptions which
has fallen behind due to ping lags.
Are there really aggregators naïve enough to take an entry with the 
same id from one feed and paste over the last retrieved entry from 
another? There are far more problems with that before you start 
worrying about what is the latest entry.

the newest version is something which should be publisher 
controlled, not
left to the variable circumstances of protocol happenstance and
idiosyncratic personal subscription lists.
or picking randomly, as you suggested not 2 emails ago.
OK, let's look at feed readers that don't then [etc]
This is where Eric dictates how other people's feed readers should work 
to fit the flaws in his preferred proposition.

[1] do you know of any publishing software which currently emits feeds 
with
multiple instances of entries? I can't think of any.
None. That's why it should be explicitly barred, since no software is 
expecting it.

Graham


RE: PaceRepeatIdInDocument solution

2005-02-19 Thread Bob Wyman

Graham wrote:
 [1] do you know of any publishing software which currently emits
 feeds with multiple instances of entries? I can't think of any.
 None. That's why it should be explicitly barred, since no software
 is expecting it.
PubSub regularly produces feeds with multiple instances of the same
atom:id. No one has ever complained about this to us.
Given that history shows that publishing repeated ids has never
bothered anyone enough to cause them to complain, we should permit this
benign practice to continue. 
It is particularly important to avoid prohibiting this benign
practice since it is so important to generators of aggregated feeds.
Aggregated feed generators are supposed to maintain atom:id unchanged when
they copy entries into an aggregate feed. However, the Atom format doesn't
provide rigorous guarantees that atom:id's will be unique across feeds.
Thus, aggregated feed publishers are left with the choice of 1) Trusting
feed publishers or 2) Assigning new atom:id's to all entries published. The
first option will inevitably result in repeated ids and the second results
in massive amounts of work, difficulties in duplicate detection, violation
of the "maintain atom:id" rule, etc.
Forbidding repeated ids causes damage. History shows, however, that
allowing repeated ids is benign.

bob wyman




Re: PaceRepeatIdInDocument solution

2005-02-19 Thread Eric Scheid

On 20/2/05 1:47 PM, Graham [EMAIL PROTECTED] wrote:

 On 20 Feb 2005, at 1:27 am, Eric Scheid wrote:
 
 hmmm ... looking back in the archives I see you were opposed to
 atom:modified, you couldn't see any use case where you would want the entry
 instances to clearly indicate which is more recent. Hashes won't help you
 here.
 
 Yes, if you want multiple versions you need atom:modified. I oppose both.
 
atom:modified also helps in distinguishing multiple instances found in
separate feed documents.

You oppose atom:modified, and yet you insist on kludging a hack for
identifying which of two entries is the most recent. A hack which isn't even
mentioned in the spec, so gawd help software developers all arriving at the
same hacky solution to the problem.

You opposed it because you couldn't foresee any use case for it, and now you
have a use case for it but you say that that use case should be banned
because you opposed atom:modified.

I forget: is this the "circular reasoning" logical fallacy, or the "begging
the question" fallacy?

 A paradigm that fails completely once a reader starts traversing @rel="prev"
 
 Not if the url in the prev is properly thought through; i.e. instead of asking
 for "page 2" the uri query asks for "entries before n", where n is the oldest
 entry number in the page before.

Where are these special semantics codified into a specification?

Also, define "oldest". Is this the one with the oldest atom:updated, even
though you earlier (and rightly) dissed that because it was "that the author
considered significant"? Or is "oldest" defined by atom:published, which as
you might recall is an *optional* element for atom:entry?

Also, define "number" in "entry number". Entries are not numbered, they have
id's, and while it's often easy to use an incrementing serial that is not
always the case. 

 Anyway, rel="prev" doesn't exist last time I checked.

This does: @rel="http://www.example.org/atom/link-rels#prev", and that's a
valid value for the @rel attribute. There is also no language in the spec
that prevents someone registering "prev" in the Registry of Link Relations.

http://atompub.org/2005/01/27/draft-ietf-atompub-format-05.html#rfc.section.9.1

So you might as well assume it does exist.

 or they have a planet aggregator in their subscriptions which has fallen
 behind due to ping lags.
 
 Are there really aggregators naïve enough to take an entry with the same id
 from one feed and paste over the last retrieved entry from another?
 
Naïve or smart? I subscribe to one feed which is the top headlines for that
site, and I also subscribe to all headlines for one category at that site.
The naïve thing to do there would be to not conflate entries with
identical id's.

Another use case: I subscribe to a feed from the publisher's website, but
later he sets up a link at feedburner.com or similar. The naïve thing would
be to assume that all the entries from feedburner.com are completely
different from those retrieved from example.com, despite having the same
id's.

 There are far more problems with that before you start worrying about what is
 the latest entry.
 
You forget: I determine what entries I subscribe to, as you do for yourself.
If a bad actor starts screwing with id's then I can also unsubscribe.

So, leaving aside hand-waving scare-mongering statements like "far more
problems", just what problems are there, Graham?

 the newest version is something which should be publisher controlled, not
 left to the variable circumstances of protocol happenstance and idiosyncratic
 personal subscription lists.
 
 or picking randomly, as you suggested not 2 emails ago.
 
Glad you agree with me there.

We wouldn't need to pick randomly if we had atom:modified.

 OK, lets look at feed readers that don't then [etc]
 
 This is where Eric dictates how other people's feed readers should work to fit
 the flaws in his preferred proposition.
 
A gross sophistry on your part. If you don't want to argue the merits and
prefer ad hominem attacks, then there really isn't much point continuing.

 [1] do you know of any publishing software which currently emits feeds with
 multiple instances of entries? I can't think of any.
 
 None. That's why it should be explicitly barred, since no software is
 expecting it.

Nonsense. That's like arguing that http agents should only support those
mime-types which were already defined oh so many years ago. No software
currently exists that can possibly be expecting application/foo, but that
doesn't mean application/foo is an illegal mime-type.

e.




Re: PaceRepeatIdInDocument solution

2005-02-18 Thread Henry Story
I was not able to go and do the exercise I wanted to do, so here is a
more carefully worded version

The id construct in atom is ambiguous between two meanings. Since the
two meanings are orthogonal and not incompatible when properly 
distinguished,
the best solution is to distinguish them and allow both.

I would replace the following text in the spec:

4.5 The atom:id Element
The atom:id element is an Identity construct that conveys a
permanent, universally unique identifier for an entry or feed.


with

4.5 The atom:versionId Element
The atom:versionId element is an Identity construct that conveys a
permanent, universally unique identifier for an entry or feed version.
There can be only one entry with the same versionId per feed document,
and there can be only one feed document with the same versionId.

4.6 The atom:entryId Element
The atom:entryId element is an Identity construct that conveys a
permanent, universally unique identifier for an entry or feed. There can
be more than one entry per feed document with the same entryId, and there
can be multiple feed documents with the same entryId.

Note:
atom:versionId is what I have called elsewhere the "equivalence id" relation.
atom:entryId is what I have called the "functional id" relation.

The above will allow the feed format to also be used as an archive 
format
if needed.

It clearly distinguishes the two types of ids that were hidden
in the ambiguous text that PaceRepeatIdInDocument tried to disambiguate 
one
way and other Paces tried to disambiguate the other way.

As such it correctly resolves an ambiguity by allowing both options.
Henry Story
Ps. text above written quickly cause I have to go do some exercise.