After a few more tests, here are some more observations that seem right to
me (if these are confirmed, we could perhaps gather them and other
statements on the Wiki as guidelines for efficient shader coding):
* 'big' performance users seem to be the scientists' friends, i.e.
Gary addressed those creating the 3d models and aircraft in FGFS.
Yes, quite. And since your own aircraft doesn't repeat and is there
independent of visibility range, in some sense this is a special case
anyway.
Problem is, once you have created a model, you don't control what others
do with
Heiko wrote:
And especially in FGFS it's not really vertices that are one of the big
problems, but .xml files and Nasal scripts.
No, you can't say that in general. It very much depends on what you do and
what options you use. Whatever you compute, it costs some amount of
resources, and how long your frame takes
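The point that every per-frame computation eats into the frame budget can be illustrated with simple arithmetic (the millisecond figures below are invented for illustration, not FlightGear measurements):

```python
# Illustrative frame-budget arithmetic: per-frame costs add up linearly,
# and the frame rate is the reciprocal of the total frame time.
costs_ms = {"terrain": 6.0, "shaders": 4.0, "nasal": 2.5, "xml_anim": 1.5}

frame_ms = sum(costs_ms.values())   # 14.0 ms total per frame
fps = 1000.0 / frame_ms             # ~71 fps

print(f"{frame_ms:.1f} ms -> {fps:.0f} fps")
```

Shaving any one of those entries helps every frame, whichever subsystem happens to be the biggest for a given user.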
On Sun, 5 Feb 2012, thorsten.i.r...@jyu.fi wrote:
I haven't really seen any guidelines about efficient shader coding, but
I've come across a few statements here and in the forum now and then,
which I so far assumed to be true. I've now spent the last few days
benchmarking my
Pushing most of the haze shader computation from the vertex to the
fragment level would seem to suggest an approximately constant cost for
the haze for the same view regardless of scenery complexity, since the
number of hazy fragments remains about the same.
Thanks for the explanations - that
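The intuition in the quoted paragraph can be sketched with a rough cost model (the vertex counts and per-operation figures are invented for illustration, not measurements):

```python
# Why per-fragment haze cost stays roughly constant for a given view:
# the fragment count is bounded by screen resolution (times overdraw),
# while the vertex count grows with scenery complexity.

def vertex_cost(num_vertices, ops_per_vertex):
    # Work done in the vertex shader scales with scenery complexity.
    return num_vertices * ops_per_vertex

def fragment_cost(width, height, ops_per_fragment, overdraw=1.5):
    # Work done in the fragment shader is bounded by the screen size.
    return int(width * height * overdraw) * ops_per_fragment

sparse_scene = vertex_cost(200_000, 40)     # light scenery
dense_scene = vertex_cost(2_000_000, 40)    # 10x more vertices
haze = fragment_cost(1920, 1080, 40)        # identical for both scenes

print(dense_scene // sparse_scene)  # vertex work grew 10x; haze did not
```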
On 06.02.2012 09:51, thorsten.i.r...@jyu.fi wrote:
I guess my bottom line is that any code running on a per-frame basis should
be made more efficient when it can be made more efficient, regardless of
whether it is currently the limiting factor for someone or not, simply
because it may be the limiting
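One common way to make per-frame code cheaper is to hoist work that does not change between frames out of the per-frame path; a minimal sketch (the function and variable names are hypothetical, not actual FlightGear code):

```python
import math

def tint_per_frame(sun_angle, frames):
    # Wasteful: recomputes a frame-invariant value on every frame.
    out = []
    for _ in range(frames):
        tint = math.cos(sun_angle) * 0.8 + 0.2
        out.append(tint)
    return out

def tint_hoisted(sun_angle, frames):
    # Cheaper: the value is computed once, outside the per-frame loop.
    tint = math.cos(sun_angle) * 0.8 + 0.2
    return [tint] * frames

# Both produce identical results; only the per-frame cost differs.
print(tint_per_frame(0.3, 3) == tint_hoisted(0.3, 3))  # True
```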
Thorsten,
I haven't really seen any guidelines about efficient shader coding, but
I've come across a few statements here and in the forum now and then,
which I so far assumed to be true. I've now spent the last few days
benchmarking my lightfield/terrain haze framework to see if I can't
squeeze a bit more
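A per-frame benchmark of the kind described can start as simple as timing a callback over many iterations (a generic sketch, not the actual lightfield test setup):

```python
import time

def benchmark(frame_callback, frames=1000):
    # Average wall-clock cost of one invocation of the per-frame code.
    start = time.perf_counter()
    for _ in range(frames):
        frame_callback()
    elapsed = time.perf_counter() - start
    return elapsed / frames * 1000.0  # milliseconds per frame

# Example: time a stand-in for some per-frame computation.
avg_ms = benchmark(lambda: sum(x * x for x in range(500)))
print(f"{avg_ms:.4f} ms per frame")
```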
Quote:
#2 has long been a point of frustration for me. I've given up trying
to address folks on the forum who say "throw all the vertices/polygons
at it that you like! Graphics cards can handle millions with no
problem!" Last time I looked there was even an FG wiki article that
advised modelers to
Dang, I shouldn't have ranted. This added nothing to Thorsten's
discussion. My apologies to the list and to Thorsten.
-Gary
Hi,
as pointed out in other threads, I am currently assembling a machine with five
nVidia cards. The machine and the cards are fast enough to render a steady
60fps, synced to vblank.
But this is only with shaders disabled. The moment I enable shaders and
3d-clouds, the frame rate drops to
This mirrors comments to the OSG mailing list, where people have reported
performance issues with multiple GPUs and screens. There hasn't been much of
a resolution there; some people have the problem, others don't. I've put
together a multi-GPU machine to explore these issues; now I just need to
Looks like this is a hardware problem. Looking at the kernel log, I found
irq 16: nobody cared (try booting with the irqpoll option)
irq 16 is being used by nvidia and the onboard USB ports. After disabling USB
in the BIOS, it seems like everything is running smoothly. Because all my
control
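On Linux, the sharing described above can be seen directly in /proc/interrupts, where every device registered on an IRQ line appears on that line (irq 16 here matches the case above; any line number works):

```shell
# List the devices sharing IRQ 16; an empty result means the line is unused.
grep -E '^[[:space:]]*16:' /proc/interrupts || echo "irq 16 not in use"
```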