Re: [E-devel] Wayland and subsurfaces

2013-10-06 Thread Chris Michael
On 10/06/13 06:11, Cedric BAIL wrote:
 On Sat, Oct 5, 2013 at 12:05 AM, Rafael Antognolli antogno...@gmail.com 
 wrote:
 Example usage of what I have just committed (fixes and improvements
 for Evas_Video_Surface, and added Ecore_Wl_Subsurf) here:

 https://github.com/antognolli/buffer_object

 This is a helper, or a skeleton, for creating and setting up the image
 object that would be used with the buffers. It can be made more
 generic if necessary, thus allowing the use of either Wayland buffers or X
 stuff. The code itself is inside buffer_object.c. Sample usage is
 inside main.c.
 That's exactly the direction I wanted that code to go. Really
 nice patch, thanks. The next improvement I was looking for was to
 somehow use the pixel buffer directly when using OpenGL (zero-copy
 scheme); looking at your code, I do think that in compositing mode
 we are still doing a copy. Am I right?

 Anyway, this can be added somewhere in EFL, I just don't know exactly
 where would be the best place... ideas?
 That is indeed a good question. I guess the first place to use this is
 somewhere in Emotion's gstreamer backend. I would even prefer to see
 that feature working with the VLC backend, but I don't think there is a
 way to make VLC output the pixels into a Wayland surface.
Not currently :( And I would not count on one anytime soon :(

https://trac.videolan.org/vlc/ticket/7936

Although, if we had the pixels (I don't know the VLC code too well) then 
we should be able to slap those into a surface...

dh
   Also the
 gstreamer backend is easier to integrate, as it doesn't require
 communicating with another process to get the pixels (not really a win
 in my opinion, but in this case it will make life easier).

 Also, I have been starting to think that maybe we should have a simpler
 layer than Emotion that does all this buffer management and is used
 by Emotion. That's just a thought right now.




Re: [E-devel] Wayland and subsurfaces

2013-10-06 Thread Rafael Antognolli
On Sun, Oct 6, 2013 at 2:11 AM, Cedric BAIL cedric.b...@free.fr wrote:
 On Sat, Oct 5, 2013 at 12:05 AM, Rafael Antognolli antogno...@gmail.com 
 wrote:
 Example usage of what I have just committed (fixes and improvements
 for Evas_Video_Surface, and added Ecore_Wl_Subsurf) here:

 https://github.com/antognolli/buffer_object

 This is a helper, or a skeleton, for creating and setting up the image
 object that would be used with the buffers. It can be made more
 generic if necessary, thus allowing the use of either Wayland buffers or X
 stuff. The code itself is inside buffer_object.c. Sample usage is
 inside main.c.

 That's exactly the direction I wanted that code to go. Really
 nice patch, thanks. The next improvement I was looking for was to
 somehow use the pixel buffer directly when using OpenGL (zero-copy
 scheme); looking at your code, I do think that in compositing mode
 we are still doing a copy. Am I right?

Cool, thanks.

And well, I am not sure, but I think there was an optimization that
would allow using the pixels directly when doing an image_data_set,
which is basically what I am doing. By "compositing mode" do you mean
with or without the subsurface?
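
Just to check we are talking about the same path, what I am doing is
roughly this (a simplified sketch; canvas, w, h and mapped_pixels stand
in for the real objects and buffer):

    /* Sketch: hand an externally allocated pixel buffer to an Evas
     * image object without an extra copy. mapped_pixels stands in for
     * the memory the decoder writes into. */
    Evas_Object *img = evas_object_image_filled_add(canvas);
    evas_object_image_colorspace_set(img, EVAS_COLORSPACE_ARGB8888);
    evas_object_image_size_set(img, w, h);
    evas_object_image_data_set(img, mapped_pixels); /* no copy made here */
    /* once the decoder finishes a frame: */
    evas_object_image_data_update_add(img, 0, 0, w, h);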

 Anyway, this can be added somewhere in EFL, I just don't know exactly
 where would be the best place... ideas?

 That is indeed a good question. I guess the first place to use this is
 somewhere in Emotion's gstreamer backend. I would even prefer to see
 that feature working with the VLC backend, but I don't think there is a
 way to make VLC output the pixels into a Wayland surface. Also the
 gstreamer backend is easier to integrate, as it doesn't require
 communicating with another process to get the pixels (not really a win
 in my opinion, but in this case it will make life easier).

 Also, I have been starting to think that maybe we should have a simpler
 layer than Emotion that does all this buffer management and is used
 by Emotion. That's just a thought right now.


-- 
Rafael Antognolli



Re: [E-devel] Wayland and subsurfaces

2013-10-06 Thread Rafael Antognolli
On Sun, Oct 6, 2013 at 9:04 AM, Chris Michael devilho...@comcast.net wrote:
 On 10/06/13 06:11, Cedric BAIL wrote:
 On Sat, Oct 5, 2013 at 12:05 AM, Rafael Antognolli antogno...@gmail.com 
 wrote:
 Example usage of what I have just committed (fixes and improvements
 for Evas_Video_Surface, and added Ecore_Wl_Subsurf) here:

 https://github.com/antognolli/buffer_object

 This is a helper, or a skeleton, for creating and setting up the image
 object that would be used with the buffers. It can be made more
 generic if necessary, thus allowing the use of either Wayland buffers or X
 stuff. The code itself is inside buffer_object.c. Sample usage is
 inside main.c.
 That's exactly the direction I wanted that code to go. Really
 nice patch, thanks. The next improvement I was looking for was to
 somehow use the pixel buffer directly when using OpenGL (zero-copy
 scheme); looking at your code, I do think that in compositing mode
 we are still doing a copy. Am I right?

 Anyway, this can be added somewhere in EFL, I just don't know exactly
 where would be the best place... ideas?
 That is indeed a good question. I guess the first place to use this is
 somewhere in Emotion's gstreamer backend. I would even prefer to see
 that feature working with the VLC backend, but I don't think there is a
 way to make VLC output the pixels into a Wayland surface.
 Not currently :( And I would not count on one anytime soon :(

 https://trac.videolan.org/vlc/ticket/7936

 Although, if we had the pixels (I don't know the VLC code too well) then
 we should be able to slap those into a surface...

Well, in the generic backend, we create the pixel buffer where VLC is
going to decode the video, right? So it's basically the same: we could
create a wl_buffer, let VLC write to it, and then display that as a
subsurface if possible.

I was also thinking that if we add YUV as a buffer format for
wl_buffer (it's missing so far), then VLC can write the pixels in
YUV format, and we leave the composition of the subsurface + main
surface to the compositor. That would speed things up, wouldn't it?
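
The subsurface part would then be something along these lines (a sketch
only; it assumes ecore_wl already handed us the wl_subcompositor and the
two wl_surfaces):

    /* Sketch: put the video buffer on its own subsurface and leave the
     * final composition to the compositor. video_buffer is whatever
     * wl_buffer the decoder fills. */
    struct wl_subsurface *sub =
       wl_subcompositor_get_subsurface(subcompositor, video_surface,
                                       parent_surface);
    wl_subsurface_set_position(sub, vx, vy); /* relative to the parent */
    wl_subsurface_set_desync(sub);           /* update independently */
    wl_surface_attach(video_surface, video_buffer, 0, 0);
    wl_surface_damage(video_surface, 0, 0, vw, vh);
    wl_surface_commit(video_surface);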

   Also the
 gstreamer backend is easier to integrate, as it doesn't require
 communicating with another process to get the pixels (not really a win
 in my opinion, but in this case it will make life easier).

wl_buffer can be (maybe *must* be) a shm buffer, so that should be
easy to handle even in VLC, I think.
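
Something like this, say (a sketch; error checks dropped, and
WL_SHM_FORMAT_ARGB8888 until a YUV format shows up):

    /* Sketch: a shared-memory wl_buffer that another process can fill.
     * Needs stdlib.h, unistd.h and sys/mman.h. */
    int stride = w * 4, size = stride * h;
    char tmpl[] = "/tmp/efl-shm-XXXXXX";
    int fd = mkstemp(tmpl);
    unlink(tmpl);              /* keep the fd, drop the name */
    ftruncate(fd, size);
    void *data = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    struct wl_shm_pool *pool = wl_shm_create_pool(shm, fd, size);
    struct wl_buffer *video_buffer =
       wl_shm_pool_create_buffer(pool, 0, w, h, stride,
                                 WL_SHM_FORMAT_ARGB8888);
    wl_shm_pool_destroy(pool); /* the buffer keeps the pool alive */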

 Also, I have been starting to think that maybe we should have a simpler
 layer than Emotion that does all this buffer management and is used
 by Emotion. That's just a thought right now.



-- 
Rafael Antognolli



Re: [E-devel] Wayland and subsurfaces

2013-10-05 Thread Cedric BAIL
On Sat, Oct 5, 2013 at 12:05 AM, Rafael Antognolli antogno...@gmail.com wrote:
 Example usage of what I have just committed (fixes and improvements
 for Evas_Video_Surface, and added Ecore_Wl_Subsurf) here:

 https://github.com/antognolli/buffer_object

 This is a helper, or a skeleton, for creating and setting up the image
 object that would be used with the buffers. It can be made more
 generic if necessary, thus allowing the use of either Wayland buffers or X
 stuff. The code itself is inside buffer_object.c. Sample usage is
 inside main.c.

That's exactly the direction I wanted that code to go. Really
nice patch, thanks. The next improvement I was looking for was to
somehow use the pixel buffer directly when using OpenGL (zero-copy
scheme); looking at your code, I do think that in compositing mode
we are still doing a copy. Am I right?

 Anyway, this can be added somewhere in EFL, I just don't know exactly
 where would be the best place... ideas?

That is indeed a good question. I guess the first place to use this is
somewhere in Emotion's gstreamer backend. I would even prefer to see
that feature working with the VLC backend, but I don't think there is a
way to make VLC output the pixels into a Wayland surface. Also the
gstreamer backend is easier to integrate, as it doesn't require
communicating with another process to get the pixels (not really a win
in my opinion, but in this case it will make life easier).

Also, I have been starting to think that maybe we should have a simpler
layer than Emotion that does all this buffer management and is used
by Emotion. That's just a thought right now.
-- 
Cedric BAIL



Re: [E-devel] Wayland and subsurfaces

2013-10-04 Thread Rafael Antognolli
Example usage of what I have just committed (fixes and improvements
for Evas_Video_Surface, and added Ecore_Wl_Subsurf) here:

https://github.com/antognolli/buffer_object

This is a helper, or a skeleton, for creating and setting up the image
object that would be used with the buffers. It can be made more
generic if necessary, thus allowing the use of either Wayland buffers or X
stuff. The code itself is inside buffer_object.c. Sample usage is
inside main.c.
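
For those who don't want to click through, the gist of the setup is
along these lines (a rough sketch, not the literal code; _pixels_get_cb
and bo are illustrative names, the real thing is in the repo):

    /* Rough sketch of the helper's job: an image object whose pixels
     * come from an external buffer and are pulled in on demand. */
    Evas_Object *o = evas_object_image_filled_add(evas);
    evas_object_image_colorspace_set(o, EVAS_COLORSPACE_ARGB8888);
    evas_object_image_alpha_set(o, EINA_FALSE);
    evas_object_image_size_set(o, w, h);
    /* called by evas whenever it needs the current pixels */
    evas_object_image_pixels_get_callback_set(o, _pixels_get_cb, bo);
    evas_object_show(o);
    /* whenever the producer has a new frame: */
    evas_object_image_pixels_dirty_set(o, EINA_TRUE);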

Anyway, this can be added somewhere in EFL, I just don't know exactly
where would be the best place... ideas?

On Mon, Sep 23, 2013 at 7:35 PM, Rafael Antognolli antogno...@gmail.com wrote:
 Hey Raster,

 I added some code to do what you proposed, but it ended up kind of
 hackish IMHO. That was using the native surface API.

 The problem is that subsurfaces are handled the same way as surfaces,
 thus their handling code should belong to ecore_wl (similarly to X
 windows code belonging to ecore_x). In order to do what you said, I
 had to pass the compositor structure (and subcompositor structure),
 which lived inside Ecore_Wl, to the Evas engine, and create the
 subsurfaces from there.

 I pushed the partial code for this; there are several things that must
 be done yet, but you can figure out what I've been doing if you want:

 http://git.enlightenment.org/core/efl.git/commit/?h=devs/antognolli/subsurfaces2

 On the other hand, I noticed that the Evas_Video_Surface struct and
 code, so far used only in Emotion gstreamer backend, seems to fit this
 task way better. It does make some assumptions that must be
 changed/fixed/improved, like assuming that it's possible to clip or
 resize the image, or that it is always in a layer below the current
 canvas, but I think I can handle this. Other than that, the code to
 handle subsurfaces with that API seems way cleaner to me. Don't you
 think it would be better to do this code using Video Surfaces instead
 of the Native Surfaces API?

 Thanks

 On Wed, Aug 7, 2013 at 8:08 PM, Carsten Haitzler ras...@rasterman.com wrote:
 On Wed, 7 Aug 2013 16:32:53 -0300 Rafael Antognolli antogno...@gmail.com 
 said:

 Hey guys,

 I'm trying to add Wayland's subsurfaces support to EFL, but I'm not sure
 how to expose it.

 Actually, I'm not even sure if we should expose it or not. The only
 similar thing that I've seen so far is the video overlay stuff, and I
 don't know exactly whether it's the same thing.

 If not, what would it be? A sub-canvas of the main canvas? Or should I
 just expose something in the ecore_wayland_* namespace?

 Any thoughts?


 i already talked with devilhorns about this... and subsurfaces should
 probably not be exposed at all. they should be silently handled inside
 the wayland engines for evas. they are basically useful for 2 things:

 1. popup menus and the like...
 2. breaking out objects that can live in their own buffer

 a #1 is a subset of #2 anyway.

 evas should be making the decisions as to what objects get broken out into
 subsurfaces frame-by-frame. it should create, destroy, reconfigure and
 re-render them as needed every time evas_render is called. of course the
 evas engine keeps track of current subsurfaces there from the previous
 frame.

 the criteria for being selected to become a subsurface depend on the
 following:

 1. does the object have a buffer? can evas generate one (map/proxy) and
 pre-render it?
 2. if we have a buffer already, or can generate it, does the compositor
 support the buffer format (yuv, rgb etc.)
 3. does the object geometry match transforms and clipping etc. the
 compositor supports (last status i knew is that scaling still had to go
 in, and there was no ability to clip these subsurfaces explicitly eg to a
 rectangle). so match up clipping, color multiply, masking (not in evas
 atm, but maybe in future), scaling and map/transforms.
 4. after the first 3 checks, we will have a candidate list. sort this
 list based on criteria - eg objects that may be marked to be popups that
 exceed canvas bounds first (no such feature right now, but in future...),
 then yuv objects, then rgba buffer ones that change content less often,
 BUT change position (eg scroll), where we may benefit from avoiding
 re-rendering these and sort these ones by estimated render cost.
 5. we need to either assume the compositor has a limit to how many
 subsurfaces it can manage to deal with before this gets silly or just has
 no benefit. popup subsurfaces that go outside window bounds are a
 necessity, so those always happen. evas must generate buffers for these
 no matter what (if they don't have them already). then it's a matter of
 how many yuv and rgb subsurfaces to expose. for now a fixed configurable
 number (let's say an environment var) will do, BUT this is something i
 think wayland needs to extend protocol-wise. the compositor should send
 over wayland events to clients indicating how many subsurfaces might be
 optimal and which formats might benefit from acceleration.

Re: [E-devel] Wayland and subsurfaces

2013-09-23 Thread Rafael Antognolli
Hey Raster,

I added some code to do what you proposed, but it ended up kind of
hackish IMHO. That was using the native surface API.

The problem is that subsurfaces are handled the same way as surfaces,
thus their handling code should belong to ecore_wl (similarly to X
windows code belonging to ecore_x). In order to do what you said, I
had to pass the compositor structure (and subcompositor structure),
which lived inside Ecore_Wl, to the Evas engine, and create the
subsurfaces from there.

I pushed the partial code for this; there are several things that must
be done yet, but you can figure out what I've been doing if you want:

http://git.enlightenment.org/core/efl.git/commit/?h=devs/antognolli/subsurfaces2

On the other hand, I noticed that the Evas_Video_Surface struct and
code, so far used only in Emotion gstreamer backend, seems to fit this
task way better. It does make some assumptions that must be
changed/fixed/improved, like assuming that it's possible to clip or
resize the image, or that it is always in a layer below the current
canvas, but I think I can handle this. Other than that, the code to
handle subsurfaces with that API seems way cleaner to me. Don't you
think it would be better to do this code using Video Surfaces instead
of the Native Surfaces API?
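
For reference, plugging into that API is mostly a matter of filling in
the callbacks (a sketch; the _video_*_cb functions and priv are ours,
the struct itself is the existing Evas API):

    /* Sketch: what hooking into Evas_Video_Surface looks like. */
    Evas_Video_Surface vs;
    memset(&vs, 0, sizeof(vs));
    vs.version = EVAS_VIDEO_SURFACE_VERSION;
    vs.move = _video_move_cb;     /* reposition the subsurface */
    vs.resize = _video_resize_cb; /* resize the buffer/subsurface */
    vs.show = _video_show_cb;
    vs.hide = _video_hide_cb;
    vs.update_pixels = _video_update_pixels_cb;
    vs.data = priv;
    evas_object_image_video_surface_set(img, &vs);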

Thanks

On Wed, Aug 7, 2013 at 8:08 PM, Carsten Haitzler ras...@rasterman.com wrote:
 On Wed, 7 Aug 2013 16:32:53 -0300 Rafael Antognolli antogno...@gmail.com 
 said:

 Hey guys,

 I'm trying to add Wayland's subsurfaces support to EFL, but I'm not sure
 how to expose it.

 Actually, I'm not even sure if we should expose it or not. The only
 similar thing that I've seen so far is the video overlay stuff, and I
 don't know exactly whether it's the same thing.

 If not, what would it be? A sub-canvas of the main canvas? Or should I
 just expose something in the ecore_wayland_* namespace?

 Any thoughts?


 i already talked with devilhorns about this... and subsurfaces should probably
 not be exposed at all. they should be silently handled inside the wayland
 engines for evas. they are basically useful for 2 things:

 1. popup menus and the like...
 2. breaking out objects that can live in their own buffer

 a #1 is a subset of #2 anyway.

 evas should be making the decisions as to what objects get broken out into
 subsurfaces frame-by-frame. it should create, destroy, reconfigure and
 re-render them as needed every time evas_render is called. of course the evas
 engine keeps track of current subsurfaces there from the previous frame.

 the criteria for being selected to become a subsurface depend on the
 following:

 1. does the object have a buffer? can evas generate one (map/proxy) and
 pre-render it?
 2. if we have a buffer already, or can generate it, does the compositor
 support the buffer format (yuv, rgb etc.)
 3. does the object geometry match transforms and clipping etc. the
 compositor supports (last status i knew is that scaling still had to go
 in, and there was no ability to clip these subsurfaces explicitly eg to a
 rectangle). so match up clipping, color multiply, masking (not in evas
 atm, but maybe in future), scaling and map/transforms.
 4. after the first 3 checks, we will have a candidate list. sort this list
 based on criteria - eg objects that may be marked to be popups that exceed
 canvas bounds first (no such feature right now, but in future...), then yuv
 objects, then rgba buffer ones that change content less often, BUT change
 position (eg scroll), where we may benefit from avoiding re-rendering
 these and sort these ones by estimated render cost.
 5. we need to either assume the compositor has a limit to how many
 subsurfaces it can manage to deal with before this gets silly or just has
 no benefit. popup subsurfaces that go outside window bounds are a
 necessity, so those always happen. evas must generate buffers for these no
 matter what (if they don't have them already). then it's a matter of how
 many yuv and rgb subsurfaces to expose. for now a fixed configurable
 number (let's say an environment var) will do, BUT this is something i
 think wayland needs to extend protocol-wise. the compositor should send
 over wayland events to clients indicating how many subsurfaces might be
 optimal and which formats might benefit from acceleration.
 so for example. on your average desktop gpu setup you really only have 1 rgba
 layer, 1 yuv layer (and maybe a cursor) for the whole screen. that means that
 the best possible sensible # of layers to expose is 1, and just a yuv layer
 only. rgba layers won't benefit (thus only expose subsurfaces for popups that
 exceed window bounds - again a feature we don't have yet). but many arm socs
 support 2, 3, or more layers (i have seen 5, 8 and even up to 12 layers). they
 often are highly flexible, offering both yuv AND rgba and sometimes arbitrary
 mixes (a layer can be any format). if you have a mobile phone/tablet like ui,
 the app basically is fullscreen and thus can sensibly try and use as many
 subsurfaces as your hardware allows.

[E-devel] Wayland and subsurfaces

2013-08-07 Thread Rafael Antognolli
Hey guys,

I'm trying to add Wayland's subsurfaces support to EFL, but I'm not sure
how to expose it.

Actually, I'm not even sure if we should expose it or not. The only
similar thing that I've seen so far is the video overlay stuff, and I
don't know exactly whether it's the same thing.

If not, what would it be? A sub-canvas of the main canvas? Or should I
just expose something in the ecore_wayland_* namespace?

Any thoughts?

Thanks,
-- 
Rafael Antognolli



Re: [E-devel] Wayland and subsurfaces

2013-08-07 Thread Rafael Antognolli
BTW, more info here, where it's well explained:

http://lists.freedesktop.org/archives/wayland-devel/2012-December/006623.html

On Wed, Aug 7, 2013 at 4:32 PM, Rafael Antognolli antogno...@gmail.com wrote:
 Hey guys,

 I'm trying to add Wayland's subsurfaces support to EFL, but I'm not sure
 how to expose it.

 Actually, I'm not even sure if we should expose it or not. The only
 similar thing that I've seen so far is the video overlay stuff, and I
 don't know exactly whether it's the same thing.

 If not, what would it be? A sub-canvas of the main canvas? Or should I
 just expose something in the ecore_wayland_* namespace?

 Any thoughts?

 Thanks,
 --
 Rafael Antognolli



-- 
Rafael Antognolli



Re: [E-devel] Wayland and subsurfaces

2013-08-07 Thread The Rasterman
On Wed, 7 Aug 2013 16:32:53 -0300 Rafael Antognolli antogno...@gmail.com said:

 Hey guys,
 
 I'm trying to add Wayland's subsurfaces support to EFL, but I'm not sure
 how to expose it.
 
 Actually, I'm not even sure if we should expose it or not. The only
 similar thing that I've seen so far is the video overlay stuff, and I
 don't know exactly whether it's the same thing.
 
 If not, what would it be? A sub-canvas of the main canvas? Or should I
 just expose something in the ecore_wayland_* namespace?
 
 Any thoughts?
 

i already talked with devilhorns about this... and subsurfaces should probably
not be exposed at all. they should be silently handled inside the wayland
engines for evas. they are basically useful for 2 things:

1. popup menus and the like...
2. breaking out objects that can live in their own buffer

a #1 is a subset of #2 anyway.

evas should be making the decisions as to what objects get broken out into
subsurfaces frame-by-frame. it should create, destroy, reconfigure and
re-render them as needed every time evas_render is called. of course the evas
engine keeps track of current subsurfaces there from the previous frame.

the criteria for being selected to become a subsurface depend on the following:

1. does the object have a buffer? can evas generate one (map/proxy) and
pre-render it?
2. if we have a buffer already, or can generate it, does the compositor support
the buffer format (yuv, rgb etc.)
3. does the object geometry match transforms and clipping etc. the compositor
supports (last status i knew is that scaling still had to go in, and there was
no ability to clip these subsurfaces explicitly eg to a rectangle). so match up
clipping, color multiply, masking (not in evas atm, but maybe in future),
scaling and map/transforms.
4. after the first 3 checks, we will have a candidate list. sort this list
based on criteria - eg objects that may be marked to be popups that exceed
canvas bounds first (no such feature right now, but in future...), then yuv
objects, then rgba buffer ones that change content less often, BUT change
position (eg scroll), where we may benefit from avoiding re-rendering these and
sort these ones by estimated render cost.
5. we need to either assume the compositor has a limit to how many subsurfaces
it can manage to deal with before this gets silly or just has no benefit. popup
subsurfaces that go outside window bounds are a necessity, so those always
happen. evas must generate buffers for these no matter what (if they don't have
them already). then it's a matter of how many yuv and rgb subsurfaces to
expose. for now a fixed configurable number (let's say an environment var) will
do, BUT this is something i think wayland needs to extend protocol-wise. the
compositor should send over wayland events to clients indicating how many
subsurfaces might be optimal and which formats might benefit from acceleration.
so for example. on your average desktop gpu setup you really only have 1 rgba
layer, 1 yuv layer (and maybe a cursor) for the whole screen. that means that
the best possible sensible # of layers to expose is 1, and just a yuv layer
only. rgba layers won't benefit (thus only expose subsurfaces for popups that
exceed window bounds - again a feature we don't have yet). but many arm socs
support 2, 3, or more layers (i have seen 5, 8 and even up to 12 layers). they
often are highly flexible, offering both yuv AND rgba and sometimes arbitrary
mixes (a layer can be any format). if you have a mobile phone/tablet like ui,
the app basically is fullscreen and thus can sensibly try and use as many
subsurfaces as your hardware allows. there is no point exposing 500
subsurfaces. there is no point exposing 50. but there may be logic in exposing
between 1 to 20 given the hardware i've seen.

so the trick is in building the subsurface candidate list and then sorting it
and limiting it to the N best objects. once we know those N, ensure buffers
exist for them if they don't already and then manage the subsurfaces that map
to them accordingly during render. :)
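
in rough c the per-frame selection boils down to something like this (a
sketch only - none of these names exist in evas, and the scoring is the
hand-wavy part):

    /* sketch: filter by criteria 1-3, sort by 4, cap by 5.
     * needs stdlib.h (qsort) and Eina.h (Eina_Bool). */
    typedef struct
    {
       void     *obj;         /* the evas object */
       Eina_Bool has_buffer;  /* 1: buffer exists or can be generated */
       Eina_Bool format_ok;   /* 2: compositor supports the format */
       Eina_Bool geometry_ok; /* 3: clip/transform can be matched */
       int       score;       /* 4: estimated benefit */
       Eina_Bool promoted;    /* result: gets a subsurface this frame */
    } Candidate;

    static int
    _score_cmp(const void *a, const void *b)
    {
       return ((const Candidate *)b)->score - ((const Candidate *)a)->score;
    }

    static void
    subsurf_select(Candidate *c, int n, int max_subsurf)
    {
       int kept = 0, i;

       for (i = 0; i < n; i++) /* criteria 1-3: filter */
         if (c[i].has_buffer && c[i].format_ok && c[i].geometry_ok)
           c[kept++] = c[i];
       qsort(c, kept, sizeof(Candidate), _score_cmp); /* criterion 4 */
       if (kept > max_subsurf) kept = max_subsurf;    /* criterion 5 */
       for (i = 0; i < kept; i++) c[i].promoted = EINA_TRUE;
    }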


-- 
- Codito, ergo sum - I code, therefore I am --
The Rasterman (Carsten Haitzler)    ras...@rasterman.com

