"Jon M. Taylor" wrote:

> > This has gotten WAY off-topic, but...
> >
> > >
> > >         Get used to it, because texture compression in hardware is here to
> > > stay.  People need to get used to machine-independent programming, and one
> > > aspect of that is that you have to give up pixel-accuracy and lossless
> > > compression.  It is worth it, though, because you gain enormous
> > > flexibility and portability.
> >
> > Compression has nothing to do with machine independence (unless you are saying
> > you want to run on legacy hardware). In fact, it goes against platform
> > independence. The new platform now must have a decoder written for it, as does
> > the development machine.
>
>         The idea is that the decompression is done by the video hardware.
> That is what makes it platform-independent.  MPEG video is platform-
> independent for the same reason.

Keyword: -video-, not 3D graphics. I see your point, though.

> > The only reason you would want to compress is if you
> > have massive amounts of data (like video) or you are streaming it through a low
> > bandwidth pipe.
>
>         Modern video games require CONSIDERABLY more bandwidth than
> streaming video does.  Even with AGP4X (~5Gbps bus), games like Quake3 can
> peg the bus if you turn on all the fancy rendering options and crank the
> resolution up (and use a tuned OpenGL library).  That is why everyone is
> moving to support texture compression in hardware.

Hey, if it's fast and it loses nothing out of the original texture, I'm all for it.
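
For what it's worth, here's the back-of-the-envelope math on that bandwidth
point, using my own texture numbers from further down (40 textures of 512x512
at 5 bytes per texel) and taking the ~5Gbps figure above at face value. The
30 fps and the worst case of re-sending every texture every frame are
assumptions of mine:

/* texbw.c: rough texture bandwidth figures, illustration only */
#include <stdio.h>

int main(void)
{
    double texel_bytes = 5.0;            /* RGB + alpha + bump map      */
    double tex_bytes   = 512 * 512 * texel_bytes;
    double scene_bytes = 40 * tex_bytes; /* ~40 textures on screen      */
    double fps         = 30.0;           /* assumed frame rate          */
    double bus_gbps    = 5.0;            /* AGP4X figure quoted above   */

    printf("one texture : %.2f MB\n", tex_bytes / (1024.0 * 1024));
    printf("one screen  : %.2f MB\n", scene_bytes / (1024.0 * 1024));
    /* worst case: every texture re-crosses the bus every frame */
    printf("worst case  : %.2f Gbit/s vs a %.1f Gbit/s bus\n",
           scene_bytes * fps * 8 / 1e9, bus_gbps);
    return 0;
}

So the worst case overshoots that bus by roughly 2.5x, which is exactly the
squeeze compression is supposed to relieve.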

> > No flexibility is gained.
>
>         You can use larger textures without the performance hit.  That's a
> good deal of added flexibility for games designers, I can assure you.
>
> > Average textures are about 512x512, and
> > including bump maps and RGB coloring + alpha you might as well say 512x512x5. If
> > you have about 40 of these (which is about the amount you would use in a very
> > average game screen) you want no bottleneck like your decompressor slowing your
> > frame rates, even in hardware.
>
>         Decompression can be parallelized in hardware and only needs to be
> done once per texture and then cached internally if you keep your
> compressed textures in video memory. It will not impose ANY performance
> penalty unless you go crazy with lots of rapidly changing huge compressed
> textures.
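
Before I answer: if I follow the scheme right, it amounts to decode-on-first-use
plus a cache. A minimal sketch of that idea (every name here is made up, and
hw_decode() is a do-nothing stub standing in for the card's real decompression
unit):

/* texcache.c: decode a compressed texture once, reuse it every frame */
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

struct texture {
    const unsigned char *compressed;   /* always resident              */
    size_t               comp_size;
    unsigned char       *decoded;      /* NULL until first use         */
};

/* stub standing in for the hardware decompressor */
static unsigned char *hw_decode(const unsigned char *src, size_t n)
{
    unsigned char *out = malloc(n);    /* pretend 1:1 for the demo     */
    if (out)
        memcpy(out, src, n);
    return out;
}

/* every texture fetch goes through here; only the first one decodes */
static const unsigned char *texture_texels(struct texture *t)
{
    if (t->decoded == NULL)
        t->decoded = hw_decode(t->compressed, t->comp_size);
    return t->decoded;
}

int main(void)
{
    static const unsigned char blob[] = "pretend-compressed-texels";
    struct texture t = { blob, sizeof blob, NULL };
    int frame;

    for (frame = 0; frame < 3; frame++)    /* decode happens only once */
        printf("frame %d: %s\n", frame, texture_texels(&t));
    free(t.decoded);
    return 0;
}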

Yes, now in this method, as I said above, I would go for compressed textures with
hardware compression/decompression, BUT only if it is not lossy compression we are
talking about. Currently, I know of no lossless, uncopyrighted bitmap compression
method. Though somebody could try a brightness wave with a color ratio. That
wouldn't be lossy and would get really good compression.
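
One way to make that brightness-plus-color split provably reversible is to do
it in integers, YCoCg-R style. To be clear, this is only the decorrelation
step, not a codec (you'd still want a lossless entropy coder like RLE, Huffman,
or LZ behind it), and the test harness is mine:

/* ycocgr.c: exactly reversible integer luma/chroma split */
#include <assert.h>
#include <stdio.h>

static void fwd(int r, int g, int b, int *y, int *co, int *cg)
{
    int t;
    *co = r - b;
    t   = b + (*co >> 1);
    *cg = g - t;
    *y  = t + (*cg >> 1);
}

/* exact inverse: it subtracts the very same shifted values the
 * forward transform added, so the round trip is bit-exact even
 * though right-shifting negatives is implementation-defined in C */
static void inv(int y, int co, int cg, int *r, int *g, int *b)
{
    int t = y - (cg >> 1);
    *g = cg + t;
    *b = t - (co >> 1);
    *r = *b + co;
}

int main(void)
{
    int r, g, b;                       /* exhaustive check on a slice */
    for (r = 0; r < 256; r += 5)
        for (g = 0; g < 256; g += 5)
            for (b = 0; b < 256; b += 5) {
                int y, co, cg, r2, g2, b2;
                fwd(r, g, b, &y, &co, &cg);
                inv(y, co, cg, &r2, &g2, &b2);
                assert(r == r2 && g == g2 && b == b2);
            }
    puts("round trip is bit-exact");
    return 0;
}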

> > After that, what about multi-layered textures,
>
>         What about them?  Multitexturing will come after texture
> decompression in the hardware pipe, and as such it should not impose any
> penalty.
>
> > and
> > I haven't even begun talking about double-sided textures,
>
>         ?  I have not seen any example of any such thing in hardware.

Most PC hardware doesn't support them, but it will soon.

>
> > or even multi-layered
> > double-sided textures with alpha holes.
>
>         I see what you mean now.  Argh!  Why do people do this sort of
> thing instead of using alpha texture composition?  Or even better, adding
> some extra geometry for the hole so the hardware can properly remove
> hidden surfaces?

I don't know why they do it. I personally agree, it shouldn't be done. I also think
bump mapping shouldn't be done (it should be done in geometry instead), but I guess we
really couldn't call it a "texture" then :-) Let's start a trend. Let's make up a name
for "textures" that doesn't imply the need for the various bull crap above. Let's
make it basically mean "picture that is mapped onto a 3D solid." Does anybody have a
word like that?


> > > > Also the important part is they aren't pixel accurate.
> > > > Sure, the longer you spend decoding the wavelet the closer you get to the
> > > > original pixels.  The problem with this is you can't be assured that when you
> > > > are storing data for a sprite it will be decoded the same every time, by
> > > > every machine.
> > >
> > >         I got news for you, then: you have already lost that guarantee to
> > > hardware texture filtering, which blends depending on lighting
> > > characteristics and geometry.
> >
> > Which is exactly why you need to have accurate pixels. If you don't, your texture
> > starts to look smeared, especially if you get really close to an object, or if the
> > object has alpha in obscure places.
>
>         Which is why you are supposed to use alpha texture composition
> instead of an alpha color component, so that the hardware has the ability
> to apply the alpha op at the right place in the pixel pipeline.

Even so, if you compress it with a JPEG- or MPEG-like algorithm, you could possibly
smear the texture edges when it is rendered with filtering, alpha or not. Try this:
with xpaint or the Gimp, make a picture with colored circles layered on top of each
other and save it in a non-compressed format like ppm. Then use cjpeg or xv to
compress it (around 60% quality should be good), then open them both in two different
xv or display windows. The JPEG looks kind of pixelated, with color in places where
there isn't even supposed to be any. If you bilinear-filter it now, it will look like
a finger painting.
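
If you'd rather have hard numbers than eyeball it, here's a throwaway comparer
for the two images. It assumes both files are 8-bit binary PPM (P6) with
identical dimensions and no '#' comments in the header; strictly a quick hack:

/* ppmdiff.c: compare two binary PPM (P6) images sample by sample */
#include <stdio.h>
#include <stdlib.h>

static unsigned char *read_p6(const char *path, int *w, int *h)
{
    FILE *f = fopen(path, "rb");
    unsigned char *data;
    int maxval;
    if (!f) { perror(path); exit(1); }
    if (fscanf(f, "P6 %d %d %d", w, h, &maxval) != 3 || maxval != 255) {
        fprintf(stderr, "%s: not an 8-bit P6 file\n", path);
        exit(1);
    }
    fgetc(f);                          /* eat whitespace after header */
    data = malloc((size_t)*w * *h * 3);
    if (!data || fread(data, 3, (size_t)*w * *h, f) != (size_t)*w * *h) {
        fprintf(stderr, "%s: short read\n", path);
        exit(1);
    }
    fclose(f);
    return data;
}

int main(int argc, char **argv)
{
    int w1, h1, w2, h2, maxdiff = 0;
    long i, n, changed = 0;
    unsigned char *a, *b;
    if (argc != 3) {
        fprintf(stderr, "usage: %s a.ppm b.ppm\n", argv[0]);
        return 1;
    }
    a = read_p6(argv[1], &w1, &h1);
    b = read_p6(argv[2], &w2, &h2);
    if (w1 != w2 || h1 != h2) { fprintf(stderr, "size mismatch\n"); return 1; }
    n = (long)w1 * h1 * 3;
    for (i = 0; i < n; i++) {
        int d = abs(a[i] - b[i]);
        if (d) changed++;
        if (d > maxdiff) maxdiff = d;
    }
    printf("%ld of %ld samples differ, worst error %d/255\n", changed, n, maxdiff);
    return 0;
}

Convert the JPEG back with djpeg first, then run "ppmdiff original.ppm
roundtrip.ppm". A lossless codec would report zero samples differing.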

> > > > Might not be so bad for non-tiled textures wrapped around
> > > > polygon solids; actually, for organic things they might not tile too badly
> > > > either, especially if they were used in multi-layered textures where the seams
> > > > can be obscured some.
> > >
> > >         Precisely.  Everyone is doing compressed textures.
> >
> > Not me. If I did, my demos would turn to crap (as if they don't already look
> > that way :-) )
>
>         Demos are not games.

But they are just as intensive on all parts of the hardware except input devices.

>
> > But you're right, for a simple game, compressed textures are fine.
>
>         For a simple game, compressed textures are not necessary.  For a
> complex game, they are _vital_.  You simply cannot put fine details that
> don't fuzz out as you get close to them into your textures unless you
> compress them or use a technique like vector texturing.  And game
> designers need to be able to add fine details to their games to take it to
> the next level of realism.

Exactly! But if you compress them with a lossy algorithm, you lose that fine
detail. The only way to compress a texture without it coming out looking like crap
is if it's not lossy by even one pixel. For video, you can be lossy, but for textures
(unless you're talking about a "video texture") you can't.
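
And just so we're clear on what's at stake, the flexibility Jon is after is
easy to quantify. It's real, which is why I want it lossless. Assuming a 4:1
codec (the ratio is my pick; real schemes vary):

/* texbudget.c: same memory budget, four times the texels */
#include <stdio.h>

int main(void)
{
    double budget   = 512.0 * 512 * 4;     /* bytes: one 512x512 RGBA  */
    double bpp_raw  = 4.0;                 /* bytes/texel, raw         */
    double bpp_comp = 1.0;                 /* bytes/texel, 4:1 codec   */

    printf("uncompressed: %.0f texels (a 512x512 map)\n", budget / bpp_raw);
    printf("4:1 codec   : %.0f texels (a 1024x1024 map)\n", budget / bpp_comp);
    return 0;
}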
