Actually, sorry - I gave you erroneous information just now; it has been
a while since I played around with the CCDC register settings.

10-bit BT.656 does work - the only problem is that it generates 16-bit
pixel output (not the 8-bit output we normally get), so the data going
to RAM will be twice as wide.
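Roughly, the frame-size math looks like this (a quick back-of-the-envelope
sketch; the D1 PAL dimensions are just my example, nothing specific to your
setup):

    #include <stdio.h>

    int main(void)
    {
        const unsigned width = 720, height = 576;    /* D1 PAL */
        const unsigned samples = width * height * 2; /* YCbCr 4:2:2 = 2 samples per pixel */

        printf("8-bit mode:  %u bytes/frame\n", samples);     /* 829440 */
        printf("16-bit mode: %u bytes/frame\n", samples * 2); /* 1658880 */
        return 0;
    }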

There is a PACK8 bit in the CCDCFG register, but it does not serve the
purpose, since it truncates the MSBs instead of the LSBs - this is what
produces the color problems we were having before.
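To illustrate what goes wrong (my own sketch, not the actual CCDC logic):
converting a 10-bit sample down to 8 bits should keep the 8 MSBs, but a
PACK8-style LSB keep wraps bright values around:

    #include <stdint.h>

    uint8_t keep_msbs(uint16_t s10)  /* what you want: drop the 2 LSBs */
    {
        return (uint8_t)(s10 >> 2);
    }

    uint8_t keep_lsbs(uint16_t s10)  /* PACK8-like: drop the 2 MSBs */
    {
        return (uint8_t)(s10 & 0xFF);
    }

    /* e.g. a bright 10-bit luma sample of 800:
     *   keep_msbs(800) -> 200  (correct 8-bit value)
     *   keep_lsbs(800) -> 32   (wrapped around - hence the color mess) */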

If you do stick with the 16-bit pixel data format, then your codec
servers might be in jeopardy; for H.264 encode at least, the current
default input data setting of XDM_BYTE is the only one that works.
XDM_LE_16 is the ideal setting, since it should allow 16-bit input, but
neither XDM_LE_16 nor XDM_LE_32 is supported by the codec servers (this
might be different for other codecs). This is one of the main reasons
why TI chose 8-bit BT.656 mode over 10-bit: the small sacrifice in color
resolution is acceptable to keep the codec servers functioning normally.
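For reference, the setting in question is the dataEndianness field in the
encoder's create-time params - something like this (a sketch assuming the
stock IVIDENC1 interface from ti/xdais/dm/ividenc1.h):

    #include <ti/xdais/dm/ividenc1.h>

    void set_input_format(IVIDENC1_Params *params)
    {
        /* XDM_BYTE is the only value the H.264 codec server accepts
         * today; XDM_LE_16 would be the natural fit for 16-bit input,
         * but the codec servers do not support it. */
        params->dataEndianness = XDM_BYTE;
    }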

The one option you could try is software truncation through optimized
ARM assembly - I tried this and it was useless; no matter how good the
code was, it still had to operate on a full D1 image, and I could only
get a frame rate of about 15 fps after the truncation.
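In C, that truncation pass boils down to something like this (a sketch of
the operation, not the actual assembly I used):

    #include <stddef.h>
    #include <stdint.h>

    /* Keep the 8 MSBs of each 10-bit sample stored in a 16-bit word.
     * Even hand-optimized assembly of this loop still has to touch
     * every sample of the D1 frame, which is what capped me at ~15 fps. */
    void truncate_10to8(const uint16_t *src, uint8_t *dst, size_t n)
    {
        size_t i;
        for (i = 0; i < n; i++)
            dst[i] = (uint8_t)(src[i] >> 2);
    }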

Final suggestion: 8-bit is the way to go :-)

Jerry
