On Wed, 18 Jul 2007 18:34:27 -0700
James Richard Tyrer <[EMAIL PROTECTED]> wrote:

> Attila Kinali wrote:
> > On Tue, 17 Jul 2007 11:37:08 -0700 James Richard Tyrer

> > The scaler isn't necessary for us as OGA will have one anyways (for
> > the 3D stuff).
> 
> I thought that the hardware scaler would only be used for video.

That doesn't make sense. The video signal is generated at the RAMDAC,
so we already need a scaled version there. But as we have scalers
anyway for 3D, we can recycle those (just map the 2D data onto
a 3D object and scale it).
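To illustrate what a scaler unit does: it is just resampling, and the 3D pipe does effectively the same thing when a 2D surface is mapped onto a scaled textured quad. A minimal point-sampling sketch in 16.16 fixed point (function name and format are my own choices, a real unit would filter):

```c
#include <stddef.h>
#include <stdint.h>

/* Nearest-neighbour scaling sketch in 16.16 fixed point.  A real
 * scaler (or the 3D texture unit) would use bilinear or better
 * filtering, but the address arithmetic is the same. */
void scale_nearest(const uint8_t *src, size_t sw, size_t sh,
                   uint8_t *dst, size_t dw, size_t dh)
{
    uint32_t xstep = (uint32_t)((sw << 16) / dw);  /* source step per dest pixel */
    uint32_t ystep = (uint32_t)((sh << 16) / dh);
    for (size_t y = 0; y < dh; y++) {
        size_t sy = ((uint32_t)y * ystep) >> 16;
        for (size_t x = 0; x < dw; x++) {
            size_t sx = ((uint32_t)x * xstep) >> 16;
            dst[y * dw + x] = src[sy * sw + sx];
        }
    }
}
```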

> > The color space YUV->RGB conversion is rather simple to implement.
> 
> Yes, simple to implement, but computationally expensive, and therefore, 
> costly to implement -- large amount of chip real estate needed.

Compared to most 3D operations it's actually computationally cheap.
It's just 9 multiplications and 6 additions in the general case,
and if we limit ourselves to one YUV->RGB formula then it's 7
multiplications and 7 additions. Additionally, this is something
that can be easily pipelined, so we should be able to spit out one
converted sample per clock cycle per pipeline.
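In code, the general case is just a 3x3 matrix applied per sample. A fixed-point sketch, assuming BT.601 full-range coefficients scaled by 2^16 (a real core would select the matrix per colour standard):

```c
#include <stdint.h>

/* General 3x3 YUV->RGB matrix: 9 multiplications and 6 additions
 * per pixel.  Coefficients are BT.601 full range, scaled by 2^16. */
static const int32_t m[3][3] = {
    { 65536,      0,  91881 },   /* R = Y              + 1.402   *V */
    { 65536, -22554, -46802 },   /* G = Y - 0.344136*U - 0.714136*V */
    { 65536, 116130,      0 },   /* B = Y + 1.772   *U              */
};

static uint8_t clamp8(int32_t v)
{
    return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v;
}

void yuv_to_rgb(uint8_t y, uint8_t u, uint8_t v, uint8_t rgb[3])
{
    int32_t in[3] = { y, u - 128, v - 128 };   /* centre the chroma */
    for (int row = 0; row < 3; row++) {
        int32_t acc = m[row][0] * in[0]
                    + m[row][1] * in[1]
                    + m[row][2] * in[2];
        rgb[row] = clamp8(acc >> 16);
    }
}
```

Each row of the loop body is independent, which is exactly why the operation pipelines so easily in hardware.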
 
> > Upsampling from 4:2:2 to 4:4:4 is nothing difficult either (simple
> > FIR filter operating on scanlines), but the upsampling from 4:2:0 to
> > 4:2:2 is (upsampling in vertical direction, guess why they don't do
> > it). 
> 
> Perhaps it is because it isn't needed.  Which decoders output 4:2:0?

Uhmm.. MPEG-1, MPEG-2, MPEG-4, H.264,... 4:2:0 is the de facto
standard for chroma subsampling in video. The only devices I know of
that regularly use 4:2:2 are video cameras, because it is a lot easier
to implement.
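The scanline FIR mentioned above for 4:2:2 -> 4:4:4 can be as simple as a two-tap linear interpolator. A sketch, assuming co-sited chroma samples and even line width (names are mine):

```c
#include <stddef.h>
#include <stdint.h>

/* 4:2:2 -> 4:4:4 horizontal chroma upsampling with a two-tap
 * linear-interpolation FIR.  `in` holds w/2 chroma samples of one
 * scanline, `out` receives w samples.  Assumes co-sited chroma. */
void upsample_422_to_444(const uint8_t *in, uint8_t *out, size_t w)
{
    size_t half = w / 2;
    for (size_t i = 0; i < half; i++) {
        uint8_t cur  = in[i];
        uint8_t next = (i + 1 < half) ? in[i + 1] : in[i]; /* clamp at edge */
        out[2*i]     = cur;                                /* co-sited sample */
        out[2*i + 1] = (uint8_t)((cur + next + 1) / 2);    /* midpoint */
    }
}
```

Going 4:2:0 -> 4:2:2 needs the same filter run vertically, which is the expensive part: it needs at least one full line buffer instead of a couple of registers.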

> > Deinterlacing is something that cannot be really done without 
> > some information from video source (or some assumptions on the video
> > output device) and if only a simple implementation is used (which i
> > assume), then the quality will suck.
> 
> Are we talking about motion compensating deinterlacing (for sources 
> originally shot with an interlaced video camera and recorded on tape)? 

No, we are talking about plain deinterlacing without any motion
compensation: the kind you need when displaying interlaced content
produced for TV consumption on a progressive display.
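The simplest such scheme is intra-field ("bob") deinterlacing, which fills in the missing lines of each field by interpolation. A rough sketch, assuming the top field and luma-only data (names are mine); this is exactly the kind of cheap implementation whose quality suffers on detailed content:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Intra-field ("bob") deinterlacing of a top field: even frame lines
 * are copied from the field, odd lines are the average of the field
 * lines above and below. */
void bob_deinterlace(const uint8_t *field, uint8_t *frame,
                     size_t width, size_t frame_height)
{
    size_t field_lines = frame_height / 2;
    for (size_t y = 0; y < frame_height; y++) {
        uint8_t *dst = frame + y * width;
        if (y % 2 == 0) {
            /* line present in the field: copy as-is */
            memcpy(dst, field + (y / 2) * width, width);
        } else {
            /* missing line: interpolate, clamping at the bottom edge */
            const uint8_t *above = field + (y / 2) * width;
            size_t below_idx = (y / 2) + 1 < field_lines ? (y / 2) + 1
                                                         : (y / 2);
            const uint8_t *below = field + below_idx * width;
            for (size_t x = 0; x < width; x++)
                dst[x] = (uint8_t)((above[x] + below[x] + 1) / 2);
        }
    }
}
```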

> or just cine deinterlacing where all that is needed is to rearrange the 
> fields from 3:2 pull down so that you always display an odd and an even 
> field from the same film frame together?

That's not deinterlacing but (inverse) telecine. It has nothing to
do with interlacing apart from the fact that it produces the same
combing effect and results from the same assumption of an interlaced
display working at a specific frame rate.

As such, inverse telecine is quite simple to implement (skip all
inserted frames and spread the remaining ones equally over the
time scale) while deinterlacing requires more work.
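The frame-skipping part can be sketched like this, assuming a known 2:3 cadence where every fifth video frame is the inserted duplicate (in practice the cadence phase has to be detected first, which is where the real work is):

```c
#include <stddef.h>

/* Inverse-telecine frame selection for a known 2:3 cadence: every
 * fifth video frame is the inserted duplicate and gets dropped.
 * `out` receives indices into the 30 fps stream; the survivors are
 * then played back evenly spaced at 24 fps.  Returns the count. */
size_t inverse_telecine(size_t n_video_frames, size_t *out)
{
    size_t n = 0;
    for (size_t i = 0; i < n_video_frames; i++)
        if (i % 5 != 4)          /* skip the inserted duplicate */
            out[n++] = i;
    return n;                    /* 10 frames in -> 8 frames out */
}
```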

As a side note, applying deinterlacing to telecined content
or inverse telecine to deinterlaced content has quite bad
effects on the image quality.

> > All in all i would say we are better off to implement the stuff done
> > by this chip in OGA directly.
> 
> It is always better to implement stuff directly, unless it costs more to 
> reinvent the wheel.

We don't have to reinvent the wheel, it's already out there.
We just have to implement it again.

                        Attila Kinali

-- 
Praised are the Fountains of Shelieth, the silver harp of the waters,
But blest in my name forever this stream that stanched my thirst!
                         -- Deed of Morred
_______________________________________________
Open-hardware mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-hardware
