On Sat, 31 May 2008 13:17:42 -0400 Jose Gonzalez <[EMAIL PROTECTED]> babbled:

>    Gustavo wrote:
> > I believe it's up to the OS to save/restore all the registers when you
> > change threads. Am I wrong?
> 
>       I have no idea what happens with this, or how using multiple cpu
> cores affects that.

that was my point. it is THREAD SAFE. the os saves and restores registers and
cpu state on context switches between threads/processes - the mmx/sse/fp state
is part of that per-process/per-thread state. what brought the bug out
reliably in certain situations is the fact that pipes restructured what gets
called, how the calls get called, and in what stages they get called. you will
find MOST of the drawing calls do NOT guard themselves on exit with an emms
(evas_cpu_end_opt). the only one that actually did was the line draw. no
others do. doing an end_opt at every draw call is not cheap - on older cpus
definitely not. as most of evas's calls use NO floating point (and the polygon
stuff really doesn't need to - i should remove that), there is nigh zero need
for what is a mostly useless call - and so it can go at the end of the
pipeline, or just before any of the rare fp calls. again - nothing to do with
threads. all to do with streamlined rendering pipelines.
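
(for reference - a wee sketch of what that guard actually is, via the mmx
intrinsics. not evas source - the function names here are made up; only
_mm_empty(), the emms intrinsic, is the real api:)

  #include <mmintrin.h>

  /* hypothetical draw call - note it does NOT guard on exit, just like
     most of the evas draw calls described above. assumes dst is 8-byte
     aligned for the __m64 stores. */
  static void fill_span_mmx(unsigned int *dst, unsigned int val, int len)
  {
     __m64 v = _mm_set1_pi32((int)val); /* mmx op: the shared mmx/x87
                                           register file is now "in use" */
     int i;
     for (i = 0; i + 1 < len; i += 2)
        *(__m64 *)(dst + i) = v;        /* 2 pixels per store */
     if (i < len) dst[i] = val;
  }

  /* what an end_opt guard boils down to */
  static void end_opt(void)
  {
     _mm_empty(); /* emms: hand the register file back to x87 fp code */
  }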

> >>      There is much too great a difference in the behavior of the
> >> code with vs. without pipes to say for certain that the code-execution
> >> paths are well understood.
> >>     
> >
> > But do you remember my tests where I disabled the other threads, just
> > launching one and still having this behavior?
> >
> >   
> 
>       As I understood you, as soon as you disabled pipes the problems
> disappeared - presumably mmx is being released adequately, and indeed
> I know of no cases where a problem has been observed (with recent cvs)
> without pipes or on single-core systems. And when you enabled pipes again,
> the problem came back immediately. That's what I thought you observed
> (among other things).

this was my point. there has been, in the past, a case where this DOES happen.
i have seen it - but it was very rare and i didn't have any reliably
reproducible test case. the code used to do things like this:

canvas-level-render
  each-object
    calculate-all-things-to-draw about object and maybe call pre-render
    callbacks etc. etc. (all of this may be out of evas/engine/software-
    engine control)
      setup draw
        actually call draw call
    calculate-all-things-to-draw about another object ...
      setup draw
        actually draw
    calculate-all-things-to-draw about another object ...
      setup draw
        actually draw
    calculate-all-things-to-draw about another object ...
      setup draw
        actually draw
    ...

that is how it STILL goes without the thread code enabled - disabling the
thread code literally removes all the pipe code, and that is why the bug goes
away. the fact that control can exit the rendering pipeline into callbacks or
canvas space meant there were (almost always) guards on the fp state (ie emms
calls) run along the way, and thus entering the gradient drawing calls almost
always worked fine.
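
in c terms the old flow was shaped roughly like this (a sketch - all names
hypothetical, only the shape matters):

  #include <mmintrin.h>

  typedef struct Obj { struct Obj *next; int uses_fp; } Obj;

  static void calc_render_state(Obj *o) { (void)o; /* callbacks etc. */ }
  static void draw_object_now(Obj *o)   { (void)o; /* mmx draw, no guard */ }

  static void render_immediate(Obj *objects)
  {
     Obj *obj;
     for (obj = objects; obj; obj = obj->next)
       {
          calc_render_state(obj);  /* control leaves the render path here */
          if (obj->uses_fp)        /* e.g. the gradient code */
             _mm_empty();          /* emms guard runs before rare fp ops */
          draw_object_now(obj);    /* draw happens right away, in order */
       }
  }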

when there is a pipeline - even 1 single pipeline - even if i removed every
pthread call and put a pipeline "inline" - it goes like this:


canvas-level-render
  each-object
    calculate-all-things-to-draw about object and maybe call pre-render
    callbacks etc. etc. (all of this may be out of evas/engine/software-
    engine control)
      setup draw
    calculate-all-things-to-draw about another object ...
      setup draw
    calculate-all-things-to-draw about another object ...
      setup draw
    calculate-all-things-to-draw about another object ...
      setup draw
    ...
  flush pipeline
    draw
    draw
    draw
    draw
    ...

and of course the flush pipeline may have multiple pipelines start their draw
cycles all at once, in parallel. it splits the destination render region up
into N regions, one per pipeline - each pipeline runs in its own thread, and
each thread is forcibly thrust onto its own cpu/core (so threads can't migrate
between cpus/cores).
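
the shape of that flush, roughly (a sketch, not the actual evas pipe code -
the Pipe type and flush logic are made up; the pthread affinity api is the
real linux one. build with -lpthread):

  #define _GNU_SOURCE           /* for pthread_setaffinity_np */
  #include <pthread.h>
  #include <sched.h>

  #define MAX_PIPES 16

  typedef struct { int region; /* + queued draw commands ... */ } Pipe;

  static void *flush_pipe(void *data)
  {
     Pipe *p = data;
     (void)p; /* replay every queued draw call for this region */
     return NULL;
  }

  static void flush_all(Pipe *pipes, int n) /* assumes n <= MAX_PIPES */
  {
     pthread_t th[MAX_PIPES];
     int i;
     for (i = 0; i < n; i++)
       {
          cpu_set_t cpus;
          pthread_create(&th[i], NULL, flush_pipe, &pipes[i]);
          CPU_ZERO(&cpus);
          CPU_SET(i, &cpus);  /* pin pipe i's thread to core i - it
                                 can't migrate between cpus/cores */
          pthread_setaffinity_np(th[i], sizeof(cpus), &cpus);
       }
     for (i = 0; i < n; i++) pthread_join(th[i], NULL);
  }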

the fact that now there was basically nothing between draw calls to guard fp
op safety - as fp ops were mostly not being used - means it was much more
likely you ended up with the cpu in a non-emms state before doing fp ops. even
so, i found it hard to reproduce in a simple test case - i needed the whole
gradient dialog in e to bring it out (i found that edje_test - the old one -
did it too eventually). my simple "display a white rectangle and a gradient on
top" test app didn't show the bug. the draw pipeline was too simple and had no
mmx/sse state change before drawing the gradient - that is why.
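
the failure mode itself is trivial to show in isolation, for the record (a
minimal standalone demo - it only reproduces on an i386/x87 build, as on
x86-64 doubles go through sse. compile with -O0, e.g. gcc -m32 -O0, so the
otherwise-unused mmx op isn't thrown away):

  #include <stdio.h>
  #include <mmintrin.h>

  int main(void)
  {
     volatile double a = 1.5, b = 2.0;
     volatile __m64 v = _mm_set1_pi32(42); /* mmx op: x87 tag word now
                                              marks the fp stack "full" */
     (void)v;
     printf("dirty: %f\n", a * b); /* x87 fp on a full stack -> junk/nan */
     _mm_empty();                  /* the emms guard */
     printf("clean: %f\n", a * b); /* fine now */
     return 0;
  }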

as such, in the old code, in some circumstances the cpu was left in a bad fp
state before entering the gradient draw code - but only very rarely.

so i repeat - the code as such is threadsafe. mmx/sse state is entirely
per-thread. the only bit of code outside the gradient code that did fp ops was
suitably guarded before doing them. it's much cheaper to guard before the much
rarer fp ops than to guard on every exit from possible mmx/sse ops.
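
i.e. the whole tradeoff is just where the one emms goes - a wee sketch
(made-up names, not actual evas code):

  #include <mmintrin.h>

  /* the expensive way: pay for emms on every exit from mmx code */
  static void blend_span_guarded(void)
  {
     /* ... mmx ops ... */
     _mm_empty();                 /* emms on every draw call exit */
  }

  /* the cheap way (as described above): hot mmx paths don't guard at
     all; the rare fp user guards once on entry instead */
  static void draw_gradient(void)
  {
     _mm_empty();                 /* one emms before the rare fp ops */
     /* ... fp-heavy gradient setup and draw ... */
  }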

-- 
------------- Codito, ergo sum - "I code, therefore I am" --------------
The Rasterman (Carsten Haitzler)    [EMAIL PROTECTED]

