I always hesitate when presented with can/should questions about specs, especially in a standards context. Can one build tables that (in theory) summarize existing content? Sure. Should one? That's a value judgement, which means it devolves to a set of trade-offs and one's own opinion of relative probabilities. Depending upon the use cases we are interested in solving, perceived effort, assumed audiences, and perceived probability of various failure modes, people often come to different conclusions. I'm not sure exactly which use cases you have in mind from your post.
Take CM's table as an example. The text before it says: "The following table summarizes the requirements from OSLC Core Specification as well as some additional specific to CM." If you summarize in that way, there (probably) needs to be some provision for what happens in the case of a conflict, i.e. when the summarizing table says, or appears to say, something different from the summarized content. Which wins (takes precedence)? Some people like to pick a winner in the spec in advance (e.g. in case of conflict, the summarizED content wins), usually to avoid delays while the owning "standards organization" resolves the conflict. Others take the tack that it's better to force humans to resolve the conflict, even if that delays implementations, so they want the spec to explicitly say "no winner, owners must revise to resolve". If the summary does not win, it is essentially informative rather than normative, and therefore (probably) duplicate content. The trade-off is increased spec maintenance effort to keep things consistent across changes, versus convenience (assuming it is consistent) or confusion (if inconsistent) for consumers of the spec(s).

A separate but related variety of the same problem is completeness. CM appears to say, specifically, that the table may be incomplete. That invites the "if it's not complete, how useful is it?" meme. FWIW, I'm fine with incomplete; I just treat it as a starting point rather than the final word.

Summarizing can also be genuinely hard, because of context and chains of implication. As an example, "OSLC Services MUST support query responses in RDF/XML format (media type application/rdf+xml) and MAY support other formats" looks like a MUST, but it's 'nested' under (a consequence of) supporting the Query Capability, which is itself a MAY. If an implementation does not support the Query Capability, the current language as I read it says it may or may not support RDF/XML (the sketch in the P.S. below tries to make this concrete). It's a common problem in spec writing.

If I mentally put on my developers' hat(s), in the large I suspect they would say: a compliance table is exactly what I (as a service provider implementation owner) want to see. I want the list of exactly what I need to think about, and the minimum level necessary to assert compliance for the purposes of wide interop, and nothing else. Someone acting as a service consumer likely wants a similar list, scoped only to them. That would imply, for example, splitting CM's first row into two separate assertions, and either adding a column or having a separate table for requirements on clients. I don't know how best to slice & dice it without going back to the use cases and then getting into the discussions about priorities, probabilities, etc.

I find summaries are generally helpful for new people, and as sanity checks, e.g. when reviewing a new provider implementation. The CM approach seems like a reasonable set of trade-offs (to me, based on my unspoken assumptions) that could be duplicated in Core.

Best Regards,
John

Voice US 845-435-9470
BluePages  Tivoli Component Technologies
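P.S. To make the "nested MUST" and provider/consumer-split points above concrete, here is a minimal, purely illustrative sketch in Python. All of the names (Assertion, query_capability, the assertion ids) are mine, not taken from any OSLC document; the only point is that a table of assertions can carry a role column and a precondition, so a conditional MUST evaluates to "not applicable" rather than to a violation when the enclosing MAY is not exercised.

```python
# Hypothetical sketch: a compliance table with separate provider/consumer
# assertions, where a "nested" MUST only binds when its enclosing MAY is used.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Assertion:
    id: str
    role: str                  # "provider" or "consumer"
    level: str                 # "MUST", "SHOULD", "MAY"
    description: str
    applies_if: Optional[Callable[[dict], bool]] = None  # precondition, if any

# Example rows; the capability flag below is assumed, not quoted from any spec.
TABLE = [
    Assertion(
        id="query-rdfxml",
        role="provider",
        level="MUST",
        description="Query responses available as application/rdf+xml",
        # Conditional MUST: only binds if the Query Capability (a MAY) is offered.
        applies_if=lambda impl: impl.get("query_capability", False),
    ),
    Assertion(
        id="query-other-formats",
        role="provider",
        level="MAY",
        description="Query responses in other formats",
        applies_if=lambda impl: impl.get("query_capability", False),
    ),
]

def evaluate(impl: dict, supported: set) -> None:
    """Print a compliance summary for one implementation description."""
    for a in TABLE:
        if a.applies_if and not a.applies_if(impl):
            verdict = "not applicable"      # the nested requirement never triggers
        elif a.id in supported:
            verdict = "satisfied"
        else:
            verdict = "VIOLATION" if a.level == "MUST" else "optional, not supported"
        print(f"{a.role:9} {a.level:5} {a.id:20} {verdict}")

# A provider that skips the Query Capability entirely is not bound by the nested MUST:
evaluate({"query_capability": False}, supported=set())
# One that does offer it must also offer RDF/XML query responses:
evaluate({"query_capability": True}, supported={"query-other-formats"})
```

A client-facing table would look the same with role="consumer" rows, which is all I mean by splitting CM's first row into separate assertions.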
