>>
>>      That would probably be a good way to do it. So, the vertex coords
>> are floats which correspond to sub-pixel precision canvas coords.
>>      Every time one 'sets' the poly points, the previous poly obj
>> geometry is invalidated and the new one calculated, thus determining
>> the obj's size and pos (top-left corner of the int bounding rect).
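(As an inline aside: the 'set' step described there might be sketched roughly as follows. This is an illustrative sketch only, with hypothetical names - not actual Evas internals.)

```c
/* Illustrative sketch only - not Evas code. On each points 'set', derive
 * the object's integer geometry (pos = top-left of the int bounding rect)
 * from the float, sub-pixel-precision vertex coords. */

typedef struct { double x, y; } Vertex;

/* floor/ceil of a double to int, without pulling in libm */
static int ifloor(double d) { int i = (int)d; return (d < i) ? i - 1 : i; }
static int iceil(double d)  { int i = (int)d; return (d > i) ? i + 1 : i; }

static void
poly_geometry_calc(const Vertex *v, int n, int *x, int *y, int *w, int *h)
{
   double minx = v[0].x, miny = v[0].y, maxx = v[0].x, maxy = v[0].y;
   int i;

   for (i = 1; i < n; i++)
     {
        if (v[i].x < minx) minx = v[i].x;
        if (v[i].x > maxx) maxx = v[i].x;
        if (v[i].y < miny) miny = v[i].y;
        if (v[i].y > maxy) maxy = v[i].y;
     }
   *x = ifloor(minx);
   *y = ifloor(miny);
   *w = iceil(maxx) - *x;
   *h = iceil(maxy) - *y;
}
```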
>>      Moving the poly obj will mean rendering the vertices suitably
>> translated. Resizing the poly internally scales the vertices by the
>> float ratio of the new size to the input-poly size, and preserves the
>> poly obj's pos. This will again mean rendering the so-changed vertices
>> accordingly. Note that this may have a bit of 'jitter' due to sizes
>> being only integer values.. but one is also always free to transform
>> the set of vertices as desired and set them again. In fact, it would
>> eventually be good to have an api function for setting such a vertex
>> transformation on such objects - say something like
>>
>> evas_object_polygon_points_transform_set(obj, *t);
>>
>> where the 't' will be an evas transformation, say possibly an affine
>> and/or projective transform. This transform will act on the vertices
>> for the purposes of rendering, but not affect the reported object size
>> or position - though one would eventually like to have the effective
>> 'bounding rect' of any transformed object, whether it's a result of a
>> general object (surface) transform or of a specialized vertex transform
>> on certain objects that might support that.
>>
>
>      Though this wouldn't have to be done until later (if desired), let me
> suggest a possible semantics for the use of such 'set' transforms on
> vertex-based objects like polys.
>      First of all, I'd limit the transforms to only affine ones (though
> I won't go into why here - just mentioning it), so if one inputs a
> transform with projective components, only the affine part is used.
>      Then, one would first scale the input vertices according to the poly
> obj size but rel to the origin, apply the transform to those points, and
> translate them forward to the obj position.
>      So for example, if one had input a rectangular poly with vertices
> (-50,0), (50, 0), (50, 100), and (-50, 100), thus giving a poly obj at an
> initial pos of (-50, 0) of initial size 100x100, and then resizes this
> to be 200x200, one'd internally get the vertices (-100, 0), (100, 0),
> (100, 200), and (-100, 200). One'd then apply the transform to those
> vertices, and lastly translate those vertices by whatever amount would've
> brought the un-transformed (but re-sized) poly to the current obj pos,
> and one would then render that set of vertices.
>
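Those three steps, worked through in code (a minimal sketch under the suggested semantics - the names are hypothetical, this is not Evas API):

```c
/* Minimal sketch of the semantics suggested above (names hypothetical):
 * 1) scale the input vertices rel to the origin by obj-size / input-size,
 * 2) apply the affine transform to those points,
 * 3) translate by whatever would've brought the un-transformed (but
 *    re-sized) poly to the current obj pos. */

typedef struct { double x, y; } Vertex;
typedef struct { double a, b, c, d, e, f; } Affine; /* x'=ax+by+c, y'=dx+ey+f */

static void
poly_render_vertices(const Vertex *in, int n,
                     double in_x, double in_y, double in_w, double in_h,
                     double obj_x, double obj_y, double obj_w, double obj_h,
                     const Affine *t, Vertex *out)
{
   double sx = obj_w / in_w, sy = obj_h / in_h;
   /* offset bringing the scaled (un-transformed) bbox top-left to obj pos */
   double tx = obj_x - in_x * sx, ty = obj_y - in_y * sy;
   int i;

   for (i = 0; i < n; i++)
     {
        double x = in[i].x * sx, y = in[i].y * sy;  /* 1: scale            */
        out[i].x = t->a * x + t->b * y + t->c + tx; /* 2: affine transform */
        out[i].y = t->d * x + t->e * y + t->f + ty; /* 3: translate        */
     }
}
```

With the identity transform and the 100x100 -> 200x200 example above, this yields (-50,0), (150,0), (150,200), (-50,200) - ie. the scaled poly moved so its bounding rect's top-left stays at the preserved obj pos (-50,0).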
>      Let me also mention yet another possible way to proceed with all this
> poly stuff -- just for the sake of argument and completeness.
>

      Note btw that this trio of transformations -- scale the input poly
points, apply the input affine transform to that result, and translate
those further by the required offset -- can all be composed into a single
affine transform to be applied to the input poly, and this is something
that a given engine might or might not be able to take advantage of.
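To sketch what that composition looks like (hypothetical helper names, not Evas API - just the standard 2x3 affine algebra):

```c
/* Sketch of composing affines into one (hypothetical helpers, not Evas
 * API). Affine form: x' = a*x + b*y + c,  y' = d*x + e*y + f. */

typedef struct { double x, y; } Vertex;
typedef struct { double a, b, c, d, e, f; } Affine;

/* result applies n first, then m */
static Affine
affine_compose(const Affine *m, const Affine *n)
{
   Affine r;
   r.a = m->a * n->a + m->b * n->d;
   r.b = m->a * n->b + m->b * n->e;
   r.c = m->a * n->c + m->b * n->f + m->c;
   r.d = m->d * n->a + m->e * n->d;
   r.e = m->d * n->b + m->e * n->e;
   r.f = m->d * n->c + m->e * n->f + m->f;
   return r;
}

static Vertex
affine_apply(const Affine *t, Vertex p)
{
   Vertex r = { t->a * p.x + t->b * p.y + t->c,
                t->d * p.x + t->e * p.y + t->f };
   return r;
}
```

So translate-after-transform-after-scale collapses into two `affine_compose` calls, leaving one affine an engine could apply per vertex in a single pass.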
      The point here being that, as with all other objects as well, one
needs to have engine-level poly objs which hold all relevant canvas-level
state about the obj needed for its rendering.. and thus give a single
unified form to the engine-level obj rendering functions. One absolutely
needs to have this for things like images (and all others really), if
one wants to be able to introduce things like transforms to image objs --
ie. the engines need to have a concept of an image obj which contains
such information as not only the src image data (what is currently used)
but also its borders, the fill size, the object size, smooth scaling,
the transform, and others.
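A rough sketch of the kind of engine-level image obj record meant here (field names are purely illustrative, not actual Evas engine structs):

```c
/* Illustrative only - not actual Evas engine data. The idea is that the
 * engine holds one record per image obj carrying all the canvas-level
 * state its rendering needs, rather than being handed just the src
 * image data as is currently done. */

typedef struct { double a, b, c, d, e, f; } Affine;

typedef struct
{
   void         *src;                /* src image data (what's used today) */
   int           border_l, border_r; /* border (non-scaled edge) widths    */
   int           border_t, border_b;
   int           fill_x, fill_y;     /* fill origin and size               */
   int           fill_w, fill_h;
   int           obj_w, obj_h;       /* object size                        */
   unsigned char smooth;             /* smooth scaling on/off              */
   unsigned char has_transform;
   Affine        transform;          /* optional surface transform         */
} Engine_Image_Obj;
```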

      Note also that when one does introduce transforms, either 'surface'
ones for the objs or 'vertex' ones for vertex-based objs like polys, lines,
paths, ... then the update region that objects need to generate is no longer
limited to being a subregion of the object's 'geometry', as is exclusively
the case now - one needs the object's effective bounding rect.
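For instance, an effective bounding rect for a vertex transform could be computed along these lines (a sketch under assumed names, not Evas code):

```c
/* Sketch: compute the effective int bounding rect of a set of vertices
 * after an affine transform - update regions would then be generated
 * against this rect rather than the object's set geometry.
 * (Hypothetical names, not Evas code.) */

typedef struct { double x, y; } Vertex;
typedef struct { double a, b, c, d, e, f; } Affine; /* x'=ax+by+c, y'=dx+ey+f */

static int bfloor(double d) { int i = (int)d; return (d < i) ? i - 1 : i; }
static int bceil(double d)  { int i = (int)d; return (d > i) ? i + 1 : i; }

static void
effective_bounds_get(const Affine *t, const Vertex *v, int n,
                     int *x, int *y, int *w, int *h)
{
   double minx = 0, miny = 0, maxx = 0, maxy = 0;
   int i;

   for (i = 0; i < n; i++)
     {
        double tx = t->a * v[i].x + t->b * v[i].y + t->c;
        double ty = t->d * v[i].x + t->e * v[i].y + t->f;
        if (i == 0 || tx < minx) minx = tx;
        if (i == 0 || tx > maxx) maxx = tx;
        if (i == 0 || ty < miny) miny = ty;
        if (i == 0 || ty > maxy) maxy = ty;
     }
   *x = bfloor(minx);
   *y = bfloor(miny);
   *w = bceil(maxx) - *x;
   *h = bceil(maxy) - *y;
}
```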


>      One *could* keep things as they currently are - ie. keep the 'poly_
> point_add' api, keep the fact that poly objs don't respond to obj move
> or resizing... and also add such a 'poly_transform_set' api func to take
> care of moving, scaling, ... (and add a general api func to get any obj's
> 'effective' bounding rect, ie. after all transforms, things like stroking,
> and similar such).
>      But, I'd only advocate this if one also fixed things so that objects
> had the means to 'override' the general obj move/resize functions so that
> for vertex-based objects like polys those functions not only did nothing
> as far as rendering, but also did not report (via the geometry get) any new
> bogus sizes from having been set otherwise.. and this should be consistent
> across such vertex-based objs (except rectangles, as they are their own
> kind of object, similar to, but not exactly, a vertex-defined one).
>
>
>>      There are other details of course, but that should cover most of
>> the salient 'geometry' related stuff.
>>
>>
>>>>>>     This actually brings up an issue which is relevant to poly objs
>>>>>> as well, ie. of a 'good' software implementation for aa 'drawing' of
>>>>>> polys, paths, ...
>>>>>>
>>>>>>     There are many such implementations (good or bad) around, I may
>>>>>> even have one or two buried somewhere, but for this I think a good
>>>>>> start would be some work that's in the "enesim" lib (though I think
>>>>>> what's there is from somewhere else). I'd say it might be a good idea
>>>>>> to see if the implementation there could be used in evas for its
>>>>>> poly filling.
>>>>>>
>>>>>>     Whether enesim, or something like it, could/should be used as
>>>>>> an (external) engine for evas' software gfx is an interesting 
>>>>>> question.
>>>>>> But if so, then it would be useful to slowly try to make both ends
>>>>>> meet.. hence this would be a good start. And if not, then at least
>>>>>> evas might benefit from a better poly filling implementation.
>>>>>>         
>>>>> I think both changes are needed: internal implementation and external
>>>>> API. I have some ideas for the GL engine but absolutely none for the
>>>>> software implementation. So we could start by changing the external
>>>>> API and then update its internals by having a much better/faster
>>>>> software implementation. And it would be cool if we could reuse the
>>>>> work that has already been done by others, and done well.
>>>>>       
>>>>     The use of better software and/or gl gfx 'engines', along with
>>>> external immediate-mode apis for these, would be very useful
>>>> eventually.
>>>>     As to the software drawing routine for poly filling being better/
>>>> faster... well, 'better' maybe, in that it would do aa and sub-pixel
>>>> precision vertices.. but not faster, no way. The current one is so
>>>> simple -- it could be made a bit faster here and there, but not by
>>>> adding aa or supporting sub-pixel precision vertices. The hope is
>>>> that it would be 'good enough'. :)
>>>>     
>>>
>>> Well, adding aa would be nice; it could be the default case, but we
>>> should be able to deactivate it. A little bit like the smooth flag on
>>> image objects, I think. So do you have some time to help me work on
>>> this?
>>>   
>>
>>      Fortunately, there's already an api function to enable aa:
>> evas_object_anti_alias_set;  which is currently only used by lines and
>> gradients (in case you don't want smoothing of the way the gradient is
>> rendered - a bit faster, though likely not worth it for most uses.. I
>> should probably get rid of support for that in grads).
>>      As to helping out.. Sure, I could make some time for that - though
>> as I mentioned in an earlier email, the 'best' way to do this for now
>> would be to use what's already there in "enesim". I don't really have
>> time or desire to dig out and review some stuff I have on poly
>> rasterization (and I don't want to use the even older stuff that's in
>> imlib2 since, though quite fast, it's limited to integer coord vertices
>> and does a kind of 'thick' filling that's probably not best here).
>>      Ideally though, Jorge would be the best one to involve here since
>> enesim is his work, and it could be a good exercise in helping to 'make
>> both ends meet' as I mentioned before.. but I don't know how much time
>> he may have for this at the moment. Jorge?
>>
>


_______________________________________________
enlightenment-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/enlightenment-devel
