So, I'm guessing that textures aren't loaded into memory before starting
the rendering?  I don't really mind waiting a few more seconds for
textures to load (if they're all loaded up front) if my app/game will
run faster as a result.

I think my idea was misunderstood (it was just an idea!).
I suggested convert->jpeg->original format because JPEG will remove some
data from the texture, resulting in something smaller.  I'm only talking
about reducing the actual size of the texture data once, up front, at
load time.  Not real-time compression/decompression.

Here is a file with 100% quality, and the same at 50% quality:
 22407 Jun  7 13:25 hal.jpg
  4403 Jun  7 13:25 hal1.jpg

Now, the data that has been removed from the first file to create the
second is gone forever.  So, I guess what I really mean is "stripping"
the image.  There are other algorithms (with libraries to use, even)
that can do similar things.

So, here's what I'm thinking, just to be clear:
load texture
convert texture to lossy format
convert back
load into the proper place the (hopefully) smaller texture
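Those steps could be sketched like this.  A real implementation would
round-trip through libjpeg; as a stand-in (so the sketch stays
self-contained), this just masks off the low-order bits of each byte,
which is a much cruder kind of loss than JPEG's, but shows the same
effect: same dimensions, same bit depth, less information.  The
"texture" here is made up for the example:

```python
import random
import zlib

# Hypothetical raw RGB texture: 64x64, a smooth gradient plus noise,
# standing in for real image data loaded from disk.
random.seed(42)
texture = bytes(
    min(255, (x + y) // 2 + random.randrange(8))
    for y in range(64) for x in range(64) for _ in range(3)
)

def lossy_round_trip(data, keep_bits=4):
    """Stand-in for convert->jpeg->convert back: discard the low-order
    bits of each byte.  Same size and bit depth, less detail."""
    mask = 0xFF & ~((1 << (8 - keep_bits)) - 1)
    return bytes(b & mask for b in data)

reduced = lossy_round_trip(texture)

# Both buffers are the same number of bytes in memory; the information
# loss only shows up once a compression stage runs over the data.
print(len(zlib.compress(texture)), len(zlib.compress(reduced)))
```

The point of the last line is that the round trip itself doesn't shrink
the raw buffer; the saving has to come from whatever compression stage
runs on it afterward.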

Really, it's just size reduction, not compression.  The desired effect
is lower bandwidth usage at the cost of lower-quality textures.  Of
course, people without enough memory to chew through this stuff quickly
would not want this turned on.

-Al

On Fri, 2002-06-07 at 12:42, Ian Romanick wrote:
> On Fri, Jun 07, 2002 at 09:42:22AM -0400, Al Tobey wrote:
> 
> > Here's the idea: would it be beneficial to convert a texture to a
> > low-quality jpeg, then back again to take advantage of some of the
> > inherent lossiness?  It, in theory, should reduce the size of the
> > texture without affecting the bit depth and shouldn't require a ton of
> > code to get working (use libjpeg).  I don't foresee a big problem
> > performance-wise if the textures are mangled during the loading stage,
> > but I'm still learning ...
> 
> Wha...?  Let me get this straight.  You're suggesting that when an OpenGL
> user calls glTexImage?D with GL_COMPRESSED_* as the internal format to
> compress the image as a JPEG.  Then, when the texture is used, decompress
> the texture and upload the uncompressed image (since no card that I know of
> can work directly with a JPEG as a texture) to the card?
> 
> It's an interesting idea, BUT unless you can get help from the card
> decompressing the JPEG on upload (perhaps the Radeon iDCT unit could help?)
> -or- you come up with some sort of blazing fast, hand-tuned, assembly-coded
> JPEG decoder, the performance will sink faster than the Titanic...and will
> rot at the bottom of the ocean for just as long. :)
> 
> Hmmm...I wonder if the fragment shader units on modern cards could be used
> to do a VRAM-to-VRAM decompression of such a texture...hmm...I still think
> the performance would be horrible, though.
> 
> -- 
> Tell that to the Marines!
> 
> _______________________________________________________________
> 
> Don't miss the 2002 Sprint PCS Application Developer's Conference
> August 25-28 in Las Vegas -- http://devcon.sprintpcs.com/adp/index.cfm
> 
> _______________________________________________
> Dri-devel mailing list
> [EMAIL PROTECTED]
> https://lists.sourceforge.net/lists/listinfo/dri-devel

