On 13/02/2019 17:42, Marek Olšák wrote:
> On Wed, Feb 13, 2019 at 2:28 AM Axel Davy <davyax...@gmail.com> wrote:
>> On 13/02/2019 06:15, Marek Olšák wrote:
>>> I decided to enable this optimization on all Pro graphics cards.
>>> The reason is that I haven't had time to benchmark games.
>>> This decision may be changed based on community feedback, etc.
>> Could the decision to run the optimization be based on some perf
>> counters related to culling? If enough vertices are culled, you'd
>> enable the optimization.
> No, that's not possible. When I enable this, all gfx counters and
> pipeline statistics report that (almost) no primitives are culled,
> because the compute shader culls them before the gfx pipeline.
You would disable the optimization by default; the perf counters would
then be meaningful. If the counters show that enough primitives are
being culled, you'd switch the optimization on and stop reading the
counters. There is no need to enable it if only a few primitives are
culled.
Ideally, once you detect that the optimization is worth it, it would
stay worth it across different game scenes; but even otherwise this
should be a good first filter: if you never go above the threshold,
you definitely don't need the optimization.
>> There seems to be an AMD patent on the optimization. I didn't see it
>> mentioned; maybe it should be pointed out somewhere.
> Unlikely. It's based on this:
> https://frostbite-wp-prd.s3.amazonaws.com/wp-content/uploads/2016/03/29204330/GDC_2016_Compute.pdf
> And this is pretty much a simpler version of what I implemented:
> https://gpuopen.com/gaming-product/geometryfx/
> Marek
This is what I found:
https://patents.google.com/patent/US20180033184A1/en
Axel
_______________________________________________
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/mesa-dev