Peter Sorotokin wrote:

TD> yes I consider the changes  detrimental, the new formulation
TD> will be harder to author correctly, and _much_ harder to implement.

So you said. Your comments, along with other folks' comments, will of
course be taken into account.

I would really like to see the details on why the new formulation is harder to implement and author. It certainly seems much, much easier to me than the old one - on both the implementation and the authoring side.

The reason it is harder to implement is that these new test attributes have to be made available on every element in SVG. Thus this new 'rendering contextual' behavior must be added to all the classes supporting graphical elements in SVG. Some of this may be movable into base classes, but certainly not all of it.

In my opinion things are easy both in the case where they apply to all elements
and in the case where they apply to just one element. I do not see this as added complexity.

Ok, you are entitled to your opinion, but I think that having it on every element is problematic. Let's take a small example: the 'svg' element. Which of its three coordinate systems do you use for the resolution test? You can repeat this question for 'use' and 'image' as well ('image' and 'use' are especially interesting because they can embed a transform that depends on the content being referenced).

    These can be defined, and if you use something like 'user space'
it might not be too bad, but it seems likely to me that when you
define a new attribute on all existing elements you are going to
run into cases where you need to define how feature X of element Y
interacts with it.  This is the whole NxM problem.
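    A quick sketch of the ambiguity (the 'requiredResolution'
attribute here is purely hypothetical, just to make the question
concrete):

    <svg xmlns="http://www.w3.org/2000/svg" width="300" height="300">
      <!-- Hypothetical test attribute: which space does the test
           run in?  The parent user space, the viewport this 'svg'
           establishes, or the user space set up by its viewBox? -->
      <svg x="10" y="10" width="100" height="100"
           viewBox="0 0 1000 1000" requiredResolution="0.5">
        <rect width="1000" height="1000" fill="red"/>
      </svg>
    </svg>

At a 1:1 viewport scale the inner 'svg' maps 1000 user units onto
100 device pixels, so the measured 'resolution' differs by a factor
of ten depending on which space the test is defined in.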

    It may be important to note that one of the main purposes of the
resolution switching code is to avoid generating the associated
rendering tree until it is needed - hand in hand with this goes avoiding
the fetching of external resources until a particular rendering branch is selected.

I understand that this is the way Batik is implemented - but it is very much
Batik-specific. Our fetching strategy was always much more aggressive, in part because of animations. We fetch an image as soon as it is inserted into the tree (in 1.2 there is a prefetch element that would give us better information).

Actually I think this is how any implementation that uses a split
DOM/render tree would work - perhaps Batik is one example of this, but
I would consider the behavior specific to the architecture used, not
the implementation. It may be true that if we rewrote our implementation
to match Adobe's this wouldn't be an issue - but as it is,
the test-attribute case would require very significant rework of our
implementation, while the element approach drops right in.


    I think the question here is not "does there exist an
implementation for which it would be easy" but "across all existing
implementations, how hard would it be to add".  Batik is one
implementation, and there are several others - if all the other
existing implementations feel that this is easy then Batik should
"lose", but just because it's easy for Adobe does not mean that the
implementation complexity issue is gone.  In fact I think we have
reached the conclusion that the test-attribute approach would be hard
for Batik.

    BTW have you actually implemented it?  Or is all this just
"what if" on your part?

In any case, how is this different from any other "switching" attributes? They all behave like that and have to be implemented this way. One can already simply modify them with a script.

Currently there is no case where the same element should be rendered in one branch of the rendering tree but not in another. This also depends on the rendering context (remember, we render the same document in different contexts - for the overview pane, for example) - and right now the 'switching' attributes have no access to the rendering context (why should they, currently?).

Also, I don't even think that most of the use of these attributes will come from people who want to avoid loading content for a zoomed view; I think it is extremely useful simply to avoid cluttering the view with small details. Thus it is very natural to allow these attributes on all elements - they belong there semantically.

All of my uses have been to avoid loading content for a zoomed view. The GIS folks need this very badly - they have huge data sets, and they don't want everyone downloading the street map for the US just to look at East Oshkosh, NH.

    This is also very important for image data; I regularly create
'contact strips' with 100+ three- to four-megapixel images.  If you do
the math, the uncompressed zoomed-in data set would be 1.2+ GB!  It is
essential that this data not be loaded (and that it be flushed when no
longer in use).
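    (Back of the envelope, assuming uncompressed 32-bit RGBA: 100
images x 3,000,000 pixels x 4 bytes per pixel is already 1.2 GB,
before counting any intermediate rendering buffers.)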

    Also, I hadn't raised it earlier, but the test-attribute approach
tends to be 'semantically' poor.  If you read my write-ups, I gave
the implementation some leeway to cheat at the border conditions
(so upsample a little, or downsample a bit more than normal, if it
means you can avoid downloading/decoding new data); this can make a big
difference in perceived performance.  The implementation can only do
this if the information about what the various 'options' are is
centralized in one place - you might be able to do this if authors use
the switch element, but even there the association is very loose.
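    To make that concrete, here is a rough sketch (the element and
attribute names are hypothetical, loosely following the
special-purpose-element idea, not any agreed-upon syntax):

    <!-- Centralized: the implementation can see every available
         resolution level in one place, so it is free to stretch one
         level slightly past its nominal range when that avoids
         fetching and decoding another. -->
    <multiImage x="0" y="0" width="256" height="256">
      <subImage xlink:href="tile-lores.jpg" max-pixel-size="0.5"/>
      <subImage xlink:href="tile-hires.jpg" min-pixel-size="0.5"/>
    </multiImage>

With independent test attributes scattered across unrelated 'image'
elements, the viewer has no way of knowing that two images are
alternate versions of the same content, so it has to honor each test
exactly as written.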

So this is not a simple matter of building the entire rendering tree
and just deciding whether a particular node should paint itself or not;
the rendering tree needs to morph as the user navigates.  This
adds another layer of complexity to the whole 'dirty' region management,
because sometimes elements added to the rendering tree are coming
from DOM manipulations and sometimes from render-context changes.
In the first case you want to generate dirty regions for the next
render; in the latter case you don't want to generate dirty regions
(because the change to the render context will cause everything to
repaint anyway).

I think that if images are to be loaded conservatively (as they should
be in 1.2), it is expected that the UA not load images which are clipped
out (unless prefetch is specified). This means that pan and zoom should be taken into account already.

Are they in Adobe's implementation? They aren't in Batik - so this is yet another significant change that implementations must make to have a useful implementation of the 'test attribute' version of the feature. Add to this that for a number of elements you don't know the size of the referenced content until it is downloaded ('use', 'marker', and 'image' with overflow="visible"), and once again authors are likely to find themselves scratching their heads over bad performance.

    Add to this issues like a 'use' element that references the same
multi-resolution content with different scale factors, or the
'overview' pane in Squiggle (which provides a thumbnail view of the
document for navigation), and this gets extremely complex to manage
when it's allowed to happen on any element anywhere.
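    For example (again using the hypothetical 'requiredResolution'
test attribute):

    <svg xmlns="http://www.w3.org/2000/svg"
         xmlns:xlink="http://www.w3.org/1999/xlink">
      <defs>
        <image id="photo" xlink:href="photo-hires.jpg"
               width="1000" height="1000"
               requiredResolution="1.0"/>
      </defs>
      <!-- Full size: the test should pass here... -->
      <use xlink:href="#photo"/>
      <!-- ...but this thumbnail is scaled by 0.05, so the same
           element should fail the test in this branch of the
           rendering tree.  One DOM node, two answers. -->
      <use xlink:href="#photo" transform="scale(0.05)"/>
    </svg>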

I still do not see the problem. I think that once an external resource is
requested (and only once it is requested), it should be loaded. So for a
simple implementation one can concentrate on requesting resources only
when they are needed.



    By having a single element that does resolution-dependent
switching, this complex code is much more centralized.  The current
proposal essentially requires adding the functionality of this element
to every element in SVG.


No, you'd add it only in the base element, so it's not like you have to write this code many times. We don't have a single element to do opacity, right? And the "disabling" logic for required features already has to be on every element.

In Batik the 'disabling' logic is in the bridge that builds the
rendering tree from the DOM, and it has no access to the rendering
context. If the element fails the test then it is never proxied into
the rendering tree.


   We would have to add the notion of a resolution 'switch' to our base
class bridge and our graphics node, move all image loading into the
rendering tree (YUK!), and along with the image loading we would have to
move the calculation of the viewing transform into the rendering tree
(which is very SVG-specific).  This is complicated by the fact that
our bridge actually builds a different rendering tree for referenced
SVG content than for image content.

Could it be done? Yes, but it would be difficult and expensive.

    There are also deeper architectural issues for Batik, because it
generally builds its rendering tree up front (this is nice for
error handling, since you know whether the document is good or not
before you go to display it).  This change would strongly push all
implementations toward building the rendering tree 'on the fly' for
the first rendering.

    As to authoring, there are two major issues.  First, for
multi-resolution data you almost always want boundary information
tied to the resolution-selecting information.  The 'special
purpose' element can enforce this (in the few cases where you don't
really want bounds you can override 'overflow').  With two
independent test attributes it is easy to forget the requiredView
attribute - the effect of this would be a general loss of performance,
something that is easy to miss when all files are local but a big
mistake when the content goes 'live'.
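    A sketch of that failure mode (requiredView is from the proposal
under discussion; 'requiredResolution' is again hypothetical):

    <!-- The author remembered the resolution test but forgot the
         view test.  This renders correctly everywhere, but the
         hi-res tile is fetched and decoded even when it is entirely
         off screen - nothing ever looks wrong on screen, so the
         mistake stays invisible until the content is served over a
         real network. -->
    <image xlink:href="tile-42-hires.jpg"
           x="4200" y="0" width="100" height="100"
           requiredResolution="1.0"/>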


I think that the most useful approach is not to load an element until it is visible.
I agree that requiredView is not a good idea.

For me this is a non-starter, as SVG is not designed so that you can know
the bounds of an element without loading its data.

Peter



     Secondly, because the requiredView attribute doesn't clip,
"small" errors in the specification would be _incredibly_ difficult to
detect; they would generally show up when panning over the document as
'missing' data - pan a little more and 'boing', it pops in - ugh!

     Also, I find it ironic that requiredView is considered a test
attribute for 'switch', since in at least one of its major use cases
(tiling content) it can't be used in a switch - it must be used outside
(otherwise, at the seams, you would lose one tile or the other).
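     A sketch of why (the requiredView value syntax is hypothetical,
just to show the shape of the problem):

    <!-- Two adjacent tiles.  Near the seam both views are needed at
         once; a <switch> renders only its first matching child, so
         wrapping these in a switch would drop one tile exactly when
         both tests pass. -->
    <image xlink:href="tile-0.png" x="0"   y="0" width="100" height="100"
           requiredView="0 0 100 100"/>
    <image xlink:href="tile-1.png" x="100" y="0" width="100" height="100"
           requiredView="100 0 100 100"/>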

     Incidentally, the boundary-information error is not just a
'what if'; it was exactly one of the bugs I ran into.  I changed
some stuff that accidentally disabled the viewport testing, and
everything rendered just fine - it was just a bit slow (I was working
with local files).  It wasn't until I checked in the debugger and
noticed it fetching/drawing all these 'offscreen' high-resolution
images that I realized why it was slow.  I don't know how an end user
would detect this; he would likely just think that it was slow.  Now,
this was a bug in my implementation, not a bug in content, but it made
it very clear to me that it is important that content authors not be
allowed to easily make the same mistake!




