On Sun, 2005-03-06 at 23:21 -0800, Brad Templeton wrote:
> On Sun, Mar 06, 2005 at 10:38:00PM -0700, Blammo wrote:
> > On Sun, 6 Mar 2005 09:00:19 -0800 (PST), Andrew Lynch <[EMAIL PROTECTED]> wrote:
> > > There are links to other "low end" MythTV projects
> > > included and a discussion of my project. I'd like to
> > > include other peoples low end MythTV projects so reply
> > > here or send me a message offline.
> >
> > I just built a low-end machine for a co-worker. By low-end I mean what
> > I would consider to be the bare minimum. Her own hardware, a P2-300. I
> > put 512M of my own RAM into it, and we bought a PVR-250 and the lowest-end
> > PCI Nvidia card we could get XvMC on. (Think it was an MX4000.)
>
> The MX440 should also work.
>
> > Overall, the playback and recording are flawless at 720x480 MPEG2.
> > Commercial flagging takes roughly 8 hours for 1 hr of video. Menus,
> > etc., are slow to come up and drop off.
>
> Actually, I'm surprised it is this poor. I built a system for
> somebody with an Athlon 750 MHz chip, only 2.5x faster than your 300 MHz
> chip. Yet it commercial-flags an hour of TV in under 40 minutes.
>
> I am not recording at 720x480, though. You don't want to record at
> 720x480 unless you are trying to burn DVDs. General wisdom is that
> NTSC doesn't really send much more than 352 pixels of width, 480 is
> certainly adequate and 720 is overkill for when you want to squeeze
> out the absolute most. IIRC, DVDs work from 352 and 720.
>
> More to the point, as I learned on this very list, you will get a better
> quality recording from 352x480 at the same bitrate (say 4 megabits as
> a common rate) than you would from 720x480. Hard to say, but you might
> even get a better quality recording from 352x480 at 4 megabits than from
> 720x480 at 5 megabits, because the 720 is throwing in wasted information,
> which is somewhat compressed but not perfectly.
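A quick back-of-the-envelope check of the bitrate point quoted above (just my own arithmetic sketch, not anything from MythTV): at a fixed bitrate, 352x480 has roughly twice the encoded bits available per pixel compared to 720x480.

```python
# Bits available per pixel per frame at a fixed bitrate, assuming
# NTSC's 29.97 frames/sec. Pure arithmetic, nothing MythTV-specific.

FPS = 29.97
BITRATE = 4_000_000  # 4 megabits/sec, the "common rate" quoted above

def bits_per_pixel(width, height, bitrate=BITRATE, fps=FPS):
    """Average encoded bits available per pixel of each frame."""
    return bitrate / (width * height * fps)

low = bits_per_pixel(352, 480)
high = bits_per_pixel(720, 480)
print(f"352x480: {low:.3f} bits/pixel")
print(f"720x480: {high:.3f} bits/pixel")
print(f"ratio: {low / high:.2f}x")  # 720/352, about 2.05x more bits/pixel
```

The ratio is just 720/352, of course, but seeing the absolute numbers makes it clear how thinly a fixed 4 megabits gets spread at the wider capture size.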
Dish Network apparently uses 480x480 images. In theory you would want to capture exactly 480x480 at the exact horizontal positions the original was sampled at, but of course that's not going to happen. How close it gets is something I'm really not sure about.

The Nyquist sampling theorem states that your sampling frequency must be at least twice the highest frequency in your signal to preserve all the information. The highest-frequency signal a 480x480 image can hold would in theory be alternating 0,255,0,255, etc. (There are also other harmonics in representing a square-wave pulse like that, but let's keep this simple.) If we call the period of one cycle of that X, then the signal's main frequency is f = 1/X, so the sampling frequency required to fully preserve the information content is f_s = 2/X, i.e. a sampling period of X/2. What does this mean? Taken naively, it means that to truly preserve everything possible in the analog image you would need to sample at 960x480.

This explanation misses an important point, though. The original video would have been essentially low-pass filtered (or maybe something more complex) to remove frequency components that could not be represented at 480x480. (If you sample an analog signal at too low a rate for its frequency content and then compute a spectrum, the higher-frequency elements get folded over, aliased, into the lower-frequency result.) The highest frequencies that should be present in a signal encoded at 480x480 in the first place are therefore half the maximum previously mentioned, which brings us back to needing only 480x480 to capture Dish Network. (The number of lines is fixed, so there is no need to consider it.)

Note that I am looking at this more or less from a purely time-domain point of view. A two-dimensional transform like the one used in MPEG should be analogous, but probably has some differences. So what is the conclusion after all this mess?
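The alternating-pattern argument above can be sketched numerically. This is a hypothetical 1-D illustration of my own (not anything from MythTV), showing why sampling exactly at the limit is fragile: the fastest pattern a 480-pixel line can hold completes one cycle every two pixels, and whether 480 samples per line recover it depends entirely on where the sample grid lands.

```python
import math

# Idealized analog version of one scanline holding the fastest possible
# pattern: 240 full cycles across the line (0,255,0,255 -> a cosine here).
CYCLES = 240

def analog(x):
    """The reconstructed analog line, x in [0, 1)."""
    return math.cos(2 * math.pi * CYCLES * x)

def sample(n, phase=0.0):
    """Take n evenly spaced samples, optionally shifted by `phase` samples."""
    return [analog((i + phase) / n) for i in range(n)]

on_grid = sample(480)             # samples land on the peaks: full swing
off_grid = sample(480, phase=0.5) # half-pixel shift: every sample hits a zero
double = sample(960)              # 2x oversampled: pattern survives any phase

print(max(abs(v) for v in on_grid))   # ~1.0
print(max(abs(v) for v in off_grid))  # ~0.0: the pattern aliases away entirely
```

In other words, 480 samples per line sit right at the Nyquist boundary for this worst-case pattern, which is why the phase alignment (and the pre-filtering discussed above) matters.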
Well, if the analog signal coming from the digital source is clean, then there is no advantage to sampling at a higher frequency than the source started with. Of course, if you get high-frequency noise on the analog signal past the D/A stage, all bets are off. For that matter, I did a lot of hand-waving in this post, mainly because I was trying to come to a conclusion myself; a really detailed analysis might vary a bit.

-Robert Denier

P.S.: This explanation completely ignores sampling precision and the quality of the A/D. It might be possible to improve the resulting image quality by grabbing the raw pixels at 720x480 and then doing a bilinear resize on them, line by line, down to 480x480. I believe the theory goes something like this: you assume the errors in the A/D are uniform, so the averaged result ends up with greater precision than would be possible sampling directly at 480x480. For that matter, I think some chips may do that already.
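For what it's worth, the per-line resize mentioned in the P.S. reduces to simple linear interpolation along each scanline. Here is a hypothetical sketch (the function name and shape are mine, not from any capture chip or MythTV code):

```python
def resize_line(line, new_width):
    """Linearly interpolate one scanline down (or up) to new_width samples."""
    old_width = len(line)
    out = []
    for i in range(new_width):
        # Map the output sample position back into the input line.
        pos = i * (old_width - 1) / (new_width - 1)
        lo = int(pos)
        hi = min(lo + 1, old_width - 1)
        frac = pos - lo
        # Weighted average of the two neighboring input samples; this
        # averaging is where independent A/D errors get smoothed out.
        out.append(line[lo] * (1 - frac) + line[hi] * frac)
    return out

ramp = list(range(720))          # a smooth 720-sample gradient
small = resize_line(ramp, 480)   # downsized to 480 samples
print(small[0], small[-1])       # endpoints preserved: 0.0 719.0
```

Each output pixel averages two input samples, so if the A/D noise really is independent from sample to sample, the averaged 480-wide result should carry slightly more effective precision than a direct 480-wide capture, which is the P.S.'s argument.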
_______________________________________________
mythtv-users mailing list
[email protected]
http://mythtv.org/cgi-bin/mailman/listinfo/mythtv-users
