Well, so far I have loaded the BSQ binary file into memory.  I made my own int[][] matrix class to store the digital numbers (DN).  The actual values held in the int[][] are 16-bit signed shorts.
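For reference, a stripped-down sketch of the kind of reader I mean (the byte order, dimensions, and names here are placeholders -- the real values come from the header):

import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class BsqBandReader {
    // Reads one band of big-endian 16-bit signed samples into an int[][].
    public static int[][] readBand(String path, int rows, int cols) throws IOException {
        int[][] band = new int[rows][cols];
        try (DataInputStream in = new DataInputStream(new FileInputStream(path))) {
            for (int r = 0; r < rows; r++) {
                for (int c = 0; c < cols; c++) {
                    // readShort() returns a signed 16-bit value (-32768..32767)
                    band[r][c] = in.readShort();
                }
            }
        }
        return band;
    }
}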
 
 I created another int[] data for the color model.  Each data element is 4 bytes, so I can allocate one byte each for A, R, G, and B.
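The idea is that such a packed ARGB int[] can be handed to something like a TYPE_INT_ARGB BufferedImage (just a sketch -- the dimensions here are placeholders, and I am not sure this is the best way to wire it up):

import java.awt.image.BufferedImage;

public class ArgbBufferDemo {
    public static void main(String[] args) {
        int width = 512, height = 512;           // placeholder dimensions
        int[] data = new int[width * height];    // each int holds 0xAARRGGBB
        // ... fill data[i] with packed A, R, G, B bytes ...
        BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        img.setRGB(0, 0, width, height, data, 0, width);
    }
}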
 
Now the problem begins.  Since the actual values read from disk are 2 bytes (16-bit signed), I felt I had to squeeze each 16-bit signed value into one of the 8-bit R, G, or B channels.
 
To do that (a compact sketch of this follows below):
1. I first added 32768 to make each value positive (unsigned).
2. Then I took the square root of that value to get an 8-bit int.
3. Then I shifted the bits into place in the colorModel data[] above, like this:

data[i] = (red << 16) | (green << 8) | blue;
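Put together, the current conversion looks roughly like this (the method and variable names are placeholders; the DN values are the 16-bit signed shorts from the bands):

public class PackSqrt {
    static int pack(short dnRed, short dnGreen, short dnBlue) {
        int red   = (int) Math.sqrt(dnRed   + 32768);  // steps 1+2: offset to 0..65535, sqrt into 0..255
        int green = (int) Math.sqrt(dnGreen + 32768);
        int blue  = (int) Math.sqrt(dnBlue  + 32768);
        return (red << 16) | (green << 8) | blue;      // step 3: pack into one int (alpha byte stays 0, as above)
    }
}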
 
Now it seems to me I am losing quite a lot of information.  When I checked the actual red values (8-bit ints), they ranged only from 178 to 182, and the same was true for green and blue.  That tiny variance produced almost no visible change across the scene.  (Since 178^2 = 31684 and 182^2 = 33124, the square root is collapsing a spread of roughly 1440 raw counts into just five grey levels.)  However, it should have rendered a true-color scene, because I chose bands at wavelengths the human eye is sensitive to: 649 nm for red, 529 nm for green, and 431 nm for blue.
 
I must be losing information when I take the square root.  So I removed the square root, and then I got an almost completely black scene.  OK, my question is: how do I convert the 16-bit signed values to normal 8-bit R, G, B values without losing so much of the information?
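To make the question concrete, here is the kind of per-band linear min/max rescaling I am wondering about -- only a sketch with a placeholder band array, and I have not verified that this is the right approach:

public class LinearStretch {
    // Rescales one band of DN values linearly into 0..255.
    static int[][] stretchTo8Bit(int[][] band) {
        int min = Integer.MAX_VALUE, max = Integer.MIN_VALUE;
        for (int[] row : band) {
            for (int dn : row) {
                if (dn < min) min = dn;
                if (dn > max) max = dn;
            }
        }
        int range = Math.max(max - min, 1);        // avoid divide-by-zero on a flat band
        int[][] out = new int[band.length][band[0].length];
        for (int r = 0; r < band.length; r++) {
            for (int c = 0; c < band[r].length; c++) {
                out[r][c] = (band[r][c] - min) * 255 / range;  // scale each DN into 0..255
            }
        }
        return out;
    }
}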
 
Thank you for reading all this.
 
Kevin 
 
