Austin Cheney wrote:
> Scott Sauyet wrote:
>> CSS, though, was designed for progressive rendering (the only
>> exceptions I can come up with are the `:nth-last-*` selectors),
>> whereas XPath really only makes sense on a full DOM.  I think that
>> explains much of the difference.
>
> I can't say I fully agree.  I don't think progressive rendering fully
> explains the nature of the problem.  The CSS selection model was created
> solely for cascading inheritance.  This is necessary because pages
> paint from top to bottom and from parents to children.  While
> progressive rendering is a logical effect, it is not the technical cause.
> The limitation here is twofold:
>
> 1) Selection is directionally top-down only.

What I was suggesting is that this top-down nature was dictated by the
needs of progressive rendering.  CSS was developed when the most
common method of internet connection was dial-up.  Anything that could
be done to enable speedier-seeming downloads was considered
essential.  And that meant that the user agent should be able to
progressively render the document as it's downloaded.

But this would be impossible if CSS selectors included the equivalent
of XPath's `ancestor`, `parent`, or `preceding` axes.

Imagine a CSS-XPath hybrid rule-set that looked like:

    //div {
        background: white;
    }
    //p[span[@class='warning']]/preceding-sibling::div {
        background: red;
    }

When the browser encounters a DIV element, should it give it a white
or a red background?  It cannot know that the second rule does not
apply until all of that DIV's following siblings have been parsed and
none of them has turned out to be a P element containing a SPAN
element with class "warning".  And of course it could be much worse
than this.

To be fair, something of this sort did sneak into the CSS specs: the
`:nth-last-*` selectors have the same issue, since whether an element
matches depends on siblings that haven't yet been parsed.  IMHO, they
should simply be removed.  But I'm pretty sure that, excluding those,
most elements can be rendered as soon as they are parsed.
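For instance, a rule like this (a minimal sketch, with a made-up
color) can never be resolved while streaming:

    /* Whether a given LI is second-from-last depends on how many
       siblings FOLLOW it, which is unknown until the parent list
       is closed.  A progressive renderer would have to repaint. */
    li:nth-last-child(2) {
        background: yellow;
    }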


> 2) Selection is designed for wide cascading only and not for
> descendant specificity.

I'm afraid I don't understand what you mean here.


> In CSS these are not bugs, but when you need to specifically target a
> possibly not fully known location on a tree graph from a different
> location that may not be known, and where the path between these points
> is a mystery, CSS fails.  CSS is designed more for simplicity than for
> specificity.  For complete targeting you must be able to move up or down
> a graph, and you must be able to target a node apart from otherwise
> identical peers of a set.

Yes, if you need that sort of targeting, CSS, or CSS-like selector
engines, won't do it.
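When you do need it, though, the browser's own XPath interface can
express it directly.  A rough sketch, reusing the hypothetical
"warning" rule from above (`document.evaluate` is a standard DOM
method, though famously absent from IE):

    // Select every DIV that precedes a sibling P containing a SPAN
    // with class "warning", something no CSS selector engine can
    // express, since it requires looking forward and then back.
    var snapshot = document.evaluate(
        "//p[span[@class='warning']]/preceding-sibling::div",
        document,
        null,
        XPathResult.ORDERED_NODE_SNAPSHOT_TYPE,
        null
    );
    for (var i = 0; i < snapshot.snapshotLength; i += 1) {
        snapshot.snapshotItem(i).style.background = 'red';
    }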

I simply have rarely found the need.


> Furthermore, I would not be so quick to say that a certain selection
> mode is only for a complete DOM.  What makes a DOM incomplete, for
> instance?  Additionally, XML can be fragmented, and the DOM of a given
> fragment is always complete provided that, if namespace resolution is
> required, namespace declarations are supplied, and not necessarily
> on a root element.

How about a selector that targets descendants of a preceding sibling
of a node that has a certain property?  If the document is only
partially loaded, how can you know for sure that any particular node
does not match that selector?  Any future nodes that are descendants
of siblings of any ancestors of the current node might have that
property and make the current node match.


> As far as the costs are concerned, in my opinion a pass/fail validity
> model always results in faster development than a permissive allowance
> model, because then you know immediately if you have an error, with some
> idea of what/where the error is.  In other words, always fail yourself
> before you fail your customers.  I also believe that technologies that
> require less work from me, without exposing technology costs or usability
> barriers to end users, always generate a cost savings.  As a result I
> would recommend that users only use XPath for projects where backwards
> compatibility is a lesser concern.

Maybe I'm just tired.  I'm not seeing what it is that you suggest
represents a permissive allowance model and what sort of pass/fail
validity model you're comparing it to.  What technologies are you
suggesting require less work from you?

  -- Scott
