Cedric wrote:
> On Fri, Jul 4, 2008 at 2:46 AM, Jose Gonzalez <[EMAIL PROTECTED]> wrote:
>
>>>> That would probably be a good way to do it. So, the vertex
>>>> coords are floats which correspond to sub-pixel precision canvas coords.
>>>> Every time one 'sets' the poly points the previous poly obj
>>>> geometry is invalidated and the new one calculated thus determining
>>>> the obj's size and pos (top-left corner of int bounding rect).
>>>> Moving the poly obj will mean rendering the vertices suitably
>>>> translated.
>>>> Resizing the poly internally scales the vertices by the ratio of
>>>> the new size to the input poly size, and preserves the poly
>>>> obj's pos.
>>>> This will again mean rendering the so-changed vertices accordingly.
>>>> Note that this may have a bit of 'jitter' due to sizes being only
>>>> integer values.. but one is also always free to transform the set of
>>>> vertices as desired and set them again. In fact, it would eventually
>>>> be good to have an api function for setting such a vertex
>>>> transformation on such objects - say something like
>>>>
>>>> evas_object_polygon_points_transform_set(obj, *t);
>>>>
>>>> where the 't' will be an evas transformation, say possibly an affine
>>>> and/or projective transform. This transform will act on the vertices
>>>> for the purposes of rendering, but not affect the reported object size
>>>> or position - though one would eventually like to have the effective
>>>> 'bounding rect' of any transformed obj, whether it's a result of a
>>>> general object (surface) transform or of a specialized vertex transform
>>>> on certain objects that might support that.
>>>>
>>> Though this wouldn't have to be done til later (if desired), let me
>>> suggest a possible semantics for the use of such 'set' transforms on
>>> vertex-based objects like polys.
>>> First of all, I'd limit the transforms to only affine ones (though
>>> I won't go into why here - just mentioning it), so if one inputs a
>>> transform with projective components, only the affine part is used.
>>>
>
> I would like to know why we should have such a restriction as I don't
> have as much background on this as you have.
>
>
Basically, because the overwhelming majority of libs/apis/specs that
deal with vgfx don't support projective transforms for their vector objects
(polygons in particular), only affine ones. This includes cairo, flash,
silverlight, svg (spec), ......
Why they don't, we'd have to ask each one, but there are certain 'issues'
of semantics that are best avoided by restricting such vector/vertex based
objects that can be stroked and/or filled to only supporting affine transforms
of their geometric defining data. See more on transforms below.
>>> Then, one would first scale the input vertices according to the poly
>>> obj size but rel to the origin, apply the transform to those points, and
>>> translate forward to its obj position.
>>> So for example if one had input a rectangular poly with vertices
>>> (-50,0), (50, 0), (50, 100), and (-50, 100) thus giving a poly obj at an
>>> initial pos of (-50, 0) of initial size 100x100, and then resizes this
>>> to be 200x200, one'd internally get the vertices (-100, 0), (100, 0),
>>> (100, 200), and (-100, 200). One'd then apply the transform to those
>>> vertices, and lastly translate those vertices by whatever amount would've
>>> brought the un-transformed (but resized/scaled) poly to the current obj
>>> pos, and one would then render that set of vertices.
>>>
>>> Let me also mention yet another possible way to proceed with all
>>> this poly stuff -- just for the sake of argument and completeness.
>>>
>> Note btw that this trio of transformations - scale the input poly
>> points, apply the input affine transform to that result, and translate
>> those further by the required offset.. can all be composed into a single
>> affine transform to be applied to the input poly -- and this is something
>> that a given engine might or might not be able to take advantage of.
>> The point here being that, as with all other objects as well, one
>> needs to have engine level poly-objs which hold all relevant canvas level
>> state about the obj needed for its rendering.. and thus gives a single
>> unified form to the engine-level obj rendering functions. One absolutely
>> needs to have this for things like images (and all others really), if
>> one wants to be able to introduce things like transforms to image objs --
>> ie. the engines need to have a concept of an image obj which contains
>> such information as not only the src image data (what is currently used)
>> but also its borders, the fill size, the object size, smooth scaling,
>> the transform, and others.
>>
>
>
>> Note also that when one does introduce transforms, either 'surface'
>> ones for the objs or 'vertex' ones for vertex-based objs like polys, lines,
>> paths, ... then the update region that objects need to generate is no longer
>> limited to be a subregion of the object's 'geometry' as is exclusively
>> done now - one needs the object's effective bounding rect.
>>
>
> I like the idea of being able to apply a transformation to a polygon,
> it should be easy to add to the polygon and easy to understand in this
> scope, but this API should stay consistent with other evas objects. I
> don't think we are ready at this point to plan this kind of
> transformation for all evas objects, and making it a special case only
> for polygons could make a later implementation harder. So I
> would plan this kind of API for later. First we change the way we
> define polygon points and its engine API, and we make this rock. We
> should just be sure we correctly define the engine API so that we
> could later easily extend it.
>
It would be consistent - one needs to make a special case for all vgfx
objects.. they would allow for vertex-based (affine) transforms, as well as
whatever general transform api for objects (or filter api or whatnot), ie. you
could apply an affine transform to the vertices of a poly say, *and* you can
also projectively transform the poly object.
General object transforms have to be of 'surface' type -- ie. they have to
give the same result as if you first rendered the object to a buffer, applied
the transform to that image and composited the result to the dst. This will give
rather different results, in general, than what you would get if you applied the
same transform to a vgfx obj to render it (assuming affine transforms here).
Things get even more involved if you try and squeeze projective transforms
into the picture, especially with things like stroking, and other stuff.
In short: One needs to make a distinction between 'surface' transforms
which can apply to all objects, and 'vertex' transforms which can apply to vgfx
objects.. even if abstractly the transforms are given by a similar means
(matrices of values).
As to whether evas is ready for transforms of all objects, well, I don't
see a problem with any of the current objects - just a lot of work re-writing
internals to make it possible.. some more than others. But if the argument is
about what kind of api, or more specifically whether separate transforms&masks
vs. transforms and masks as part of a general filters mechanism.. well,
leave me out of the latter.
> I have just one question: should we expose the floating point
> coordinates to the engine, or give it the list of points inside
> the object geometry? The first solution will potentially share more
> code between engines, but it could perhaps limit some optimisations. The
> second solution will require much more code, but could perhaps give us
> more engine-specific optimisations and later possible improvements when
> we add transformations (I currently don't know of any, just that with
> more information you can often make smarter choices).
>
>
Think about what one needs to do with these guys to get the 'usual' amount
of support for vgfx: You need to fill them with a color, and/or with an image
or gradient as a 'texture' (aka 'pattern'), and you need to stroke them with
similar, with different stroke weights, with possibly end caps, with possibly
join styles, with possibly dash patterns, ...
I wouldn't sweat the details of what you start off with now, since there's
near certainty that whatever it is, you'll end up changing it several
times. :)
>>> One *could* keep things as they currently are - ie. keep the 'poly_
>>> point_add' api, keep the fact that poly objs don't respond to obj move
>>> or resizing... and also add such a 'poly_transform_set' api func to take
>>> care of moving,scaling,... (and add a general api func to get any obj's
>>> 'effective' bounding rect, ie. after all transforms, things like
>>> stroking, and similar such).
>>> But, I'd only advocate this if one also fixed things so that objects
>>> had the means to 'override' the general obj move/resize functions so that
>>> for vertex-based objects like polys those functions not only did nothing
>>> as far as rendering, but also did not report (via the geometry get)
>>> any new bogus sizes from having been set otherwise.. and this should be
>>> consistent across such vertex-based objs (except rectangles, as they are
>>> their own kind of object, similar to, but not exactly, a vertex-defined
>>> one).
>>>
>>>
>>>> There are other details of course, but that should cover most of
>>>> the salient 'geometry' related stuff.
>>>>
>>>>
>>>>>>>> This actually brings up an issue which is relevant to poly objs
>>>>>>>> as well, ie. of a 'good' software implementation for aa 'drawing' of
>>>>>>>> polys, paths, ...
>>>>>>>>
>>>>>>>> There are many such implementations (good or bad) around, I may
>>>>>>>> even have one or two buried somewhere, but for this I think a good
>>>>>>>> start would be some work that's in the "enesim" lib (though I think
>>>>>>>> what's there is from somewhere else). I'd say it might be a good
>>>>>>>> idea to see if the implementation there could be used in evas for its
>>>>>>>> poly filling.
>>>>>>>>
>>>>>>>> Whether enesim, or something like it, could/should be used as
>>>>>>>> an (external) engine for evas' software gfx is an interesting
>>>>>>>> question.
>>>>>>>> But if so, then it would be useful to slowly try to make both ends
>>>>>>>> meet.. hence this would be a good start. And if not, then at least
>>>>>>>> evas might benefit from a better poly filling implementation.
>>>>>>>>
>>>>>>>>
>>>>>>> I think both changes are needed: internal implementation and external
>>>>>>> API. I have some ideas for the GL engine but absolutely none for the
>>>>>>> software implementation. So we could start by changing the external
>>>>>>> API and then update its internals by having a much better/faster
>>>>>>> software implementation. And it would be cool if we can reuse the
>>>>>>> work that has already been done by others, and done well.
>>>>>>>
>>>>>>>
>>>>>> The use of better software, and/or gl, gfx 'engines', along with
>>>>>> external immediate-mode apis for these would be very useful
>>>>>> eventually.
>>>>>> As to the software drawing routine for poly filling being better/
>>>>>> faster... well, 'better' maybe in that it would do aa and sub-pixel
>>>>>> precision vertices.. but not faster, no way. The current one is so
>>>>>> simple -- it could be made a bit faster here and there, but not by
>>>>>> adding aa or supporting sub-pixel precision vertices. The hope is
>>>>>> that it would be 'good enough'. :)
>>>>>>
>>>>> Well, adding aa would be nice; it could be the default case, but we
>>>>> should be able to deactivate it. A little bit like the smooth flag on
>>>>> image objects, I think. So do you have some time to help me work on
>>>>> this?
>>>>>
>>>>>
>>>> Fortunately, there's already an api function to enable aa:
>>>> evas_object_anti_alias_set, which is currently only used by lines
>>>> and gradients (in case you don't want smoothing of the way the gradient
>>>> is rendered - a bit faster, though likely not worth it for most uses..
>>>> I should probably get rid of support for that in grads).
>>>> As to helping out.. Sure, I could make some time for that -
>>>> though as I mentioned in an earlier email, the 'best' way to do this
>>>> for now would be to use what's already there in "enesim". I don't
>>>> really have time or desire to dig out and review some stuff I have on
>>>> poly rasterization (and I don't want to use the even older stuff that's
>>>> in imlib2 since, though quite fast, it's limited to integer coord
>>>> vertices and does a kind of 'thick' filling that's probably not best
>>>> here).
>>>> Ideally though, Jorge would be the best one to involve here since
>>>> enesim is his work, and it could be a good exercise in helping to 'make
>>>> both ends meet' as I mentioned before.. but I don't know how much time
>>>> he may have for this at the moment. Jorge?
>>>>
>
> I did look at enesim code and it should be easy to reuse it. The code
> is clean, and Jorge did help me to understand it quickly. So it should
> be the right option.
>
Ahhh good. Make sure there's no 'licensing' issues or whatnot.. :)
_______________________________________________
enlightenment-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/enlightenment-devel