On Thu, 2013-01-17 at 10:29 -0800, Jason Garrett-Glaser wrote:
> On Wed, Jan 16, 2013 at 7:59 AM, Sebastian Dröge
> <sl...@circular-chaos.org> wrote:
> > On Wed, 2013-01-16 at 16:45 +0100, Nicolas George wrote:
> >> On septidi 27 nivôse, year CCXXI, Sebastian Dröge wrote:
> >> > Right, but the calling application has no way to know what the library
> >> > will accept other than looking at x264_config.h.
> >>
> >> That is not true:
> >>
> >> /* x264_bit_depth:
> >>  *      Specifies the number of bits per pixel that x264 uses. This is also the
> >>  *      bit depth that x264 encodes in. If this value is > 8, x264 will read
> >>  *      two bytes of input data for each pixel sample, and expect the upper
> >>  *      (16-x264_bit_depth) bits to be zero.
> >>  *      Note: The flag X264_CSP_HIGH_DEPTH must be used to specify the
> >>  *      colorspace depth as well. */
> >> X264_API extern const int x264_bit_depth;
> >
> > Thanks, I missed these two in the documentation. FWIW, what's the point
> > of defining them in x264_config.h too then?
> 
> People seemed to like the idea of having both.  If I had to guess,
> accessing x264_bit_depth would require running a test program, which
> isn't possible if you're cross-compiling.
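(For context, the compile-time route is just a preprocessor check against
x264_config.h. A minimal sketch in C, assuming X264_BIT_DEPTH is the name of
the define that configure writes into that header:)

#include <stdint.h>
#include <x264_config.h>

/* Decided at build time, so no test program has to run and it also works
 * when cross-compiling -- but the choice is baked into the application. */
#if X264_BIT_DEPTH > 8        /* assumed macro name from x264_config.h */
typedef uint16_t pixel_t;     /* high-depth builds take 16-bit samples */
#else
typedef uint8_t pixel_t;      /* plain 8-bit builds */
#endif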

Yeah, but instead of checking this at compile time it would make more
sense to do it at runtime. Otherwise the "replace-library" hack of
switching between 8 bit and 10 bit builds won't work anymore.
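To illustrate, a rough sketch of the runtime variant; it only relies on the
x264_bit_depth constant and the X264_CSP_HIGH_DEPTH flag quoted above (the
preset call is just there to make the example complete):

#include <stdio.h>
#include <x264.h>

int main(void)
{
    x264_param_t param;
    x264_param_default_preset(&param, "medium", NULL);

    if (x264_bit_depth > 8) {
        /* High-depth build: 16-bit input samples, and the colorspace
         * has to carry the high-depth flag. */
        param.i_csp = X264_CSP_I420 | X264_CSP_HIGH_DEPTH;
    } else {
        /* Plain 8-bit build. */
        param.i_csp = X264_CSP_I420;
    }

    printf("libx264 encodes at %d bits per sample\n", x264_bit_depth);
    return 0;
}

With something like that, swapping the installed libx264 between an 8 bit
and a 10 bit build keeps working without rebuilding the application.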


Btw, what's the reason for making it a compile-time parameter rather than
allowing 8/9/10 bit to be handled by the same library build?
