On 16/09/16 11:05, Eric Anholt wrote:
Jose Fonseca <jfons...@vmware.com> writes:

On 14/09/16 11:03, Eric Anholt wrote:
I've applied Dylan's comments to the RFC series, and I've pushed a
starting trace-db with reference images for my i965 and vc4:

https://github.com/anholt/trace-db

Eric,

This is a great initiative, IMO.


One suggestion: you can get much smaller traces by repacking traces into
the Brotli format.

For example:

   $ wget
https://github.com/anholt/trace-db/raw/master/traces/humus/HDR.trace
   $ apitrace repack --brotli HDR.trace HDR.brotli.trace
   $ du -sh HDR*.trace
   4.5M HDR.brotli.trace
   7.2M HDR.trace


I introduced the Brotli format precisely because some legacy OpenGL apps
that don't use VBOs yield mind-bogglingly large traces.  But with
effective compression they become manageable again.  For some of these
we've seen traces shrink 10x going from Snappy to Brotli.

Decompression is a bit slower in CPU terms, but once disk access time is
factored in, it's a net win too.


The only thing to keep in mind is that qapitrace will not work with
Brotli files because they are not seekable.  But it's just a matter of
repacking to Snappy again (the repack operation is lossless).

Great!  I don't think it makes sense to repack the existing ones
(they're in the git history already, and not small enough to justify
telling people to git clone --depth yet), but this will definitely
extend the lifetime of the github repo.

That's a real shame about qapitrace, though.  Is there no seeking at
all, even if qapitrace did an initial scan to note some points it would
want to go back to?

I'm not intimately familiar with the internals of the Brotli format. I'm not sure there are any sync points at all, and even if there were, there's no public interface to seek to them.

But if manually repacking is a hindrance, we could have qapitrace repack on demand (i.e., when loading the file, if it's Brotli or Gzip, quickly repack it to Snappy in a temp file, then use the temp file). Repacking from Brotli to Snappy should be quite fast.

Still, if somebody is debugging an issue with one of the trace files in trace-db, the best approach is to do the repack once, then invoke qapitrace again and again on the repacked file.

You could also provide some convenience shell scripts in trace-db to do the conversions in either direction.
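A convenience script along those lines might look like the sketch below. It is hypothetical: it assumes a naming convention where `*.brotli.trace` marks Brotli-packed files (as in the earlier HDR example), and it prints the `apitrace repack` command rather than running it, so you can review the command or pipe it to `sh`:

```shell
#!/bin/sh
# repack-trace.sh -- sketch of a trace-db convenience script that
# toggles a trace between Brotli (smaller, archival) and Snappy
# (seekable, qapitrace-friendly) compression.
#
# Hypothetical convention: files named *.brotli.trace are Brotli-packed.
# The function prints the apitrace command instead of executing it;
# pipe the output to sh to actually repack.

repack_cmd () {
    in=$1
    case $in in
        *.brotli.trace)
            # Brotli -> Snappy, so qapitrace can seek.
            echo "apitrace repack $in ${in%.brotli.trace}.trace"
            ;;
        *.trace)
            # Snappy -> Brotli, for smaller storage in the repo.
            echo "apitrace repack --brotli $in ${in%.trace}.brotli.trace"
            ;;
        *)
            echo "usage: repack_cmd file.trace" >&2
            return 1
            ;;
    esac
}

repack_cmd HDR.brotli.trace   # prints: apitrace repack HDR.brotli.trace HDR.trace
```

For example, `repack_cmd HDR.trace | sh` would repack the Snappy file to Brotli, and running it on the result would convert it back for qapitrace use.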



One more thing: glretrace currently does a very poor job with vertex arrays, particularly client-side (non-VBO) vertex arrays: the same data is recorded in the trace over and over again. Even within a single draw, when the arrays are interleaved, the same data is recorded multiple times (once for each attribute). This is fixable, but like so much other fixable stuff in apitrace, it has been on my todo list forever.


Another way to tackle the same problem would be an "apitrace dedup in.trace out.trace" command that would go over all the blobs and remove the redundancy. This has the advantage of being useful for compacting old traces too.


Anyway, just some thoughts. If anybody feels the urge to scratch either of these itches, let me know and I can provide more detail.


Unfortunately, due to other work and personal demands, I have very little free time for apitrace maintenance.

Jose

PS: Honestly, if it weren't for legacy/old-generation APIs like OpenGL and D3D8/D3D9, there would be very little need for apitrace. New-generation APIs like Vulkan/Metal/D3D12 are much simpler by comparison, and there are already comprehensive, well-staffed third-party tools for those APIs. My point is, I suspect that 10 years down the line nobody will use apitrace except for regression tests, so there's no point in me sacrificing time from other things for its sake.
_______________________________________________
Piglit mailing list
Piglit@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/piglit
