Suppose I want to map a GPU buffer to the CPU and do image analysis on it.
I know all the usual cautions about this being a poorly performing option,
etc. But suppose for the moment that the use case requires it.
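For concreteness, the mapping path I have in mind is roughly the sketch
below, assuming the exporter supports mmap on the dma-buf fd. The function
names and the buf_fd/len parameters are placeholders for whatever the
allocator hands back; the sync ioctl bracketing is the documented
DMA_BUF_IOCTL_SYNC protocol for coherent CPU access.

  #include <stddef.h>
  #include <sys/mman.h>
  #include <sys/ioctl.h>
  #include <linux/dma-buf.h>

  /* Begin a CPU-read window on a dma-buf: mmap it, then issue the
   * START sync so CPU caches see the device's writes. */
  static void *map_for_analysis(int buf_fd, size_t len)
  {
          struct dma_buf_sync sync = {
                  .flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_READ,
          };
          void *ptr;

          ptr = mmap(NULL, len, PROT_READ, MAP_SHARED, buf_fd, 0);
          if (ptr == MAP_FAILED)
                  return NULL;
          if (ioctl(buf_fd, DMA_BUF_IOCTL_SYNC, &sync) < 0) {
                  munmap(ptr, len);
                  return NULL;
          }
          return ptr;
  }

  /* ...and the matching END sync plus munmap once analysis is done. */
  static void unmap_after_analysis(int buf_fd, void *ptr, size_t len)
  {
          struct dma_buf_sync sync = {
                  .flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_READ,
          };
          ioctl(buf_fd, DMA_BUF_IOCTL_SYNC, &sync);
          munmap(ptr, len);
  }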

What's the right set of preconditions for concluding that the buffer is in a
vanilla linear layout? In other words: no compression, tiling, or other
proprietary GPU tricks that would prevent accessing the pixel data the same
way you would for a dumb buffer.

I think that requiring the format modifier to be 0x0 (DRM_FORMAT_MOD_LINEAR)
would suffice. But is that overkill? Maybe there are situations where a
non-zero modifier is set but it doesn't affect the interpretation of the
pixel data.
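Concretely, the check I'm picturing is something like the following
(is_plain_linear is just a name I made up; the per-plane modifier array
mirrors what DRM_IOCTL_MODE_ADDFB2 carries). Note it deliberately rejects
DRM_FORMAT_MOD_INVALID, since "no modifier communicated" is not the same
thing as a promise of linear:

  #include <stdbool.h>
  #include <stdint.h>
  #include <drm_fourcc.h>  /* from libdrm; defines DRM_FORMAT_MOD_* */

  /* Precondition: every plane's modifier must be exactly
   * DRM_FORMAT_MOD_LINEAR (0x0) before treating the mapping as
   * plain linear pixels. */
  static bool is_plain_linear(const uint64_t *modifiers, int num_planes)
  {
          int i;

          for (i = 0; i < num_planes; i++)
                  if (modifiers[i] != DRM_FORMAT_MOD_LINEAR)
                          return false;
          return true;
  }

Is this the full set of preconditions, or can it be relaxed?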

Thanks
-Matt
