Re: [vdr] Deinterlace video
Don't forget that modern LCD screens have only the resolution they are rated for. So anything you send needs to be an exact division of that, or you will have pixels lost or merged with others as they fall between displayable pixels. CRTs had more points of light than the highest resolution they were rated for, so they could deal with odd multiples better. On 1/28/2011 2:57 AM, Stuart Morris wrote: Standard definition video is going to be harder than I thought. I used xrandr to set this mode via HDMI to my LCD TV: # 1440x576i @ 50Hz (EIA/CEA-861B) ModeLine "1440x576" 27.000 1440 1464 1590 1728 576 581 587 625 -hsync -vsync Interlace The TV reported mode 576i ok, but the desktop graphics were unreadable. I tried to view an interlaced standard def video using my little test application and it looked awful. However the 1080i mode worked very well: # 1920x1080i @ 50Hz (EIA/CEA-861B) Modeline "1920x1080" 74.250 1920 2448 2492 2640 1080 1085 1095 1125 +hsync +vsync Interlace I think for standard definition video via HDMI there will be a need to upscale to a resolution better supported by HDMI, and that requires inverse telecine and deinterlacing. This may still be within the capabilities of today's low power systems. My little test has satisfied me that 1080i or 1080p video can be displayed with interlaced output. Stuart BTW my hardware setup was an old Sony KDL32V2000, and AMD HD4200 integrated graphics with the AMD closed driver. ___ vdr mailing list vdr@linuxtv.org http://www.linuxtv.org/cgi-bin/mailman/listinfo/vdr
Re: [vdr] Deinterlace video (was: Replacing aging VDR for DVB-S2)
On Fri, 28 Jan 2011 09:57:50 + (GMT) Stuart Morris wrote: > Standard definition video is going to be harder than I thought. > I used xrandr to set this mode via HDMI to my LCD TV: > # 1440x576i @ 50Hz (EIA/CEA-861B) > ModeLine "1440x576" 27.000 1440 1464 1590 1728 576 581 587 625 -hsync > -vsync Interlace The TV reported mode 576i ok, but the desktop > graphics were unreadable. I tried to view an interlaced standard def > video using my little test application and it looked awful. However > the 1080i mode worked very well: # 1920x1080i @ 50Hz (EIA/CEA-861B) > Modeline "1920x1080" 74.250 1920 2448 2492 2640 1080 1085 1095 1125 > +hsync +vsync Interlace > > I think for standard definition video via HDMI there will be a need to > upscale to a resolution better supported by HDMI and that requires > inverse telecine and deinterlacing. This may still be within the > capabilities of today's low power systems. > > My little test has satisfied me that 1080i or 1080p video can be > displayed with interlaced output. > > Stuart > > BTW my hardware setup was an old Sony KDL32V2000, and AMD HD4200 > integrated graphics with the AMD closed driver. I have the same model of TV (I still think of mine as quite new!). For SD I just use 1280x720 progressive. The PC can deinterlace and upscale 576i with negligible CPU/GPU. I have to say xine's software rendering doesn't give as good a picture as the TV's DVB-T, but I thought subjectively upscaling to 720p looked better than using a native 576 line mode. I haven't had much success with libxine and VDPAU so far, but I haven't tried since updating my NVidia drivers etc to Debian "experimental" (260.19.21). The "unstable" ones are quite out of date (195.36.31) because of the impending Debian release. I've had VDPAU working OK in mplayer for ages though. 
The TV's 1280x720 modes are better for video than the 1360x768 native resolution because it automatically turns on some processing and colour balance features, but the overscan and scaling make it unsuitable for the desktop. Another feature of this TV is that 1280x720 forces 16:9, but 720x576 enables the various options for 4:3 (centre, zoom, "smart", 14:9) so you could use mode switching as a form of aspect ratio signalling. However, changing mode causes the picture and sound to blank out for several seconds :-(.
Re: [vdr] Deinterlace video (was: Replacing aging VDR for DVB-S2)
--- On Fri, 28/1/11, Lucian Muresan wrote: > From: Lucian Muresan > Subject: Re: [vdr] Deinterlace video (was: Replacing aging VDR for DVB-S2) > To: "VDR Mailing List" > Date: Friday, 28 January, 2011, 11:37 > On 28.01.2011 10:57, Stuart Morris > wrote: > [..] > > Standard definition video is going to be harder than I > thought. > > I used xrandr to set this mode via HDMI to my LCD TV: > > # 1440x576i @ 50Hz (EIA/CEA-861B) > > ModeLine "1440x576" 27.000 1440 1464 1590 1728 576 581 > 587 625 -hsync -vsync Interlace > > The TV reported mode 576i ok, but the desktop graphics > were unreadable. > > I guess that's because it's a very strange resolution with > strange > aspect ratio, shouldn't that have been 1024x576i to > maintain a 16:9 > aspect ratio with square pixels? I only found 1440 on BBC > HD which > broadcasts in 1080i but sets the aspect ratio flag to > 16:9... > > Lucian The HDMI spec has a minimum pixel clock rate, such that modes like 576i and 480i must repeat horizontal pixels to maintain a pixel rate above the minimum. There is also an embedded information field in the HDMI link that tells the HDMI sink (the TV) which pixel(s) to discard. There appears to be no way to control this information and the graphics card I assume is interpolating horizontally anyway (not repeating). This might explain why the display looked so awful. Stuart
Re: [vdr] Deinterlace video (was: Replacing aging VDR for DVB-S2)
On 28.01.2011 10:57, Stuart Morris wrote: [..] > Standard definition video is going to be harder than I thought. > I used xrandr to set this mode via HDMI to my LCD TV: > # 1440x576i @ 50Hz (EIA/CEA-861B) > ModeLine "1440x576" 27.000 1440 1464 1590 1728 576 581 587 625 -hsync -vsync > Interlace > The TV reported mode 576i ok, but the desktop graphics were unreadable. I guess that's because it's a very strange resolution with strange aspect ratio, shouldn't that have been 1024x576i to maintain a 16:9 aspect ratio with square pixels? I only found 1440 on BBC HD which broadcasts in 1080i but sets the aspect ratio flag to 16:9... Lucian
Re: [vdr] Deinterlace video (was: Replacing aging VDR for DVB-S2)
--- On Sat, 22/1/11, Niko Mikkilä wrote: > From: Niko Mikkilä > Subject: Re: [vdr] Deinterlace video (was: Replacing aging VDR for DVB-S2) > To: "VDR Mailing List" > Date: Saturday, 22 January, 2011, 17:17 > On 2011-01-22 08:16 +0100, Thomas > Hilber wrote: > > On Wed, Jan 19, 2011 at 12:42:40PM +, Stuart > Morris wrote: > > > conversion and then draw the first field to the > frame buffer. At the next > > > vertical sync the shader would convert the second > field and draw that to > > > the frame buffer. With VDPAU is there a new > OpenGL interop function that > > > > that's not the whole story. You still have to consider > synchronicity between > > incoming data rate (TV-stream) and outgoing data rate > (VGA/Video timing). > > Yep. As Stuart said, framedrops/duplicates will happen, but > with his > drawing technique they don't cause the player to lose field > sync. I > think that's already quite acceptable, since at least > recordings can be > played without any video judder if audio is resampled. > > --Niko Standard definition video is going to be harder than I thought. I used xrandr to set this mode via HDMI to my LCD TV: # 1440x576i @ 50Hz (EIA/CEA-861B) ModeLine "1440x576" 27.000 1440 1464 1590 1728 576 581 587 625 -hsync -vsync Interlace The TV reported mode 576i ok, but the desktop graphics were unreadable. I tried to view an interlaced standard def video using my little test application and it looked awful. However the 1080i mode worked very well: # 1920x1080i @ 50Hz (EIA/CEA-861B) Modeline "1920x1080" 74.250 1920 2448 2492 2640 1080 1085 1095 1125 +hsync +vsync Interlace I think for standard definition video via HDMI there will be a need to upscale to a resolution better supported by HDMI, and that requires inverse telecine and deinterlacing. This may still be within the capabilities of today's low power systems. My little test has satisfied me that 1080i or 1080p video can be displayed with interlaced output. 
Stuart BTW my hardware setup was an old Sony KDL32V2000, and AMD HD4200 integrated graphics with the AMD closed driver.
Re: [vdr] Deinterlace video (was: Replacing aging VDR for DVB-S2)
On 2011-01-22 08:16 +0100, Thomas Hilber wrote: > On Wed, Jan 19, 2011 at 12:42:40PM +, Stuart Morris wrote: > > conversion and then draw the first field to the frame buffer. At the next > > vertical sync the shader would convert the second field and draw that to > > the frame buffer. With VDPAU is there a new OpenGL interop function that > > that's not the whole story. You still have to consider synchronicity between > incoming data rate (TV-stream) and outgoing data rate (VGA/Video timing). Yep. As Stuart said, framedrops/duplicates will happen, but with his drawing technique they don't cause the player to lose field sync. I think that's already quite acceptable, since at least recordings can be played without any video judder if audio is resampled. --Niko
Re: [vdr] Deinterlace video (was: Replacing aging VDR for DVB-S2)
On Wed, Jan 19, 2011 at 12:42:40PM +, Stuart Morris wrote: > conversion and then draw the first field to the frame buffer. At the next > vertical sync the shader would convert the second field and draw that to > the frame buffer. With VDPAU is there a new OpenGL interop function that that's not the whole story. You still have to consider synchronicity between incoming data rate (TV-stream) and outgoing data rate (VGA/Video timing). Want to say: VGA/Video timing must be dynamic at least in very small increments. AFAIK no graphics hardware support for this feature exists until today. Greetings from my [1] project. I didn't proceed with it lacking more recent HDTV capable graphics hardware suitable for this idea. - Thomas [1] http://lowbyte.de/vga-sync-fields/vga-sync-fields/README
Re: [vdr] Deinterlace video (was: Replacing aging VDR for DVB-S2)
--- On Thu, 20/1/11, Reinhard Nissl wrote: > From: Reinhard Nissl > Subject: Re: [vdr] Deinterlace video (was: Replacing aging VDR for DVB-S2) > To: vdr@linuxtv.org > Date: Thursday, 20 January, 2011, 16:32 > Hi, > > Am 19.01.2011 13:42, schrieb Stuart Morris: > > > One would need to be able to access the decoded frame > containing 2 fields > > and perhaps use an OpenGL shader to perform field > based colour space > > conversion and then draw the first field to the frame > buffer. At the next > > vertical sync the shader would convert the second > field and draw that to > > the frame buffer. With VDPAU is there a new OpenGL > interop function that > > allows access to the decoded frame? > > If you enable bob deinterlacing you'll get that. Just set > an > interlaced video mode of appropriate resolution. Cannot > tell > whether VDPAU honors TOP/BOTTOM field flag and displays the > frame > when the field is due. This was always a problem with xxmc > and > VIA EPIA CLE 266. Incorrect field order is most noticeable > on > fast movements. It is very similar to bob, except for the crucial difference that both of the most recent fields are present in the frame buffer at the same time, to avoid field display order problems. Where bob would simply scale each field to the full height of the frame buffer at field rate, we would need each field line drawn on alternate lines, leaving the previous field on the lines in between (i.e. weave the 2 most recent fields together at field rate). Conventional bob on an interlaced display would sometimes display correctly and sometimes not depending on luck, because field sync would be required.
Re: [vdr] Deinterlace video (was: Replacing aging VDR for DVB-S2)
Hi, Am 19.01.2011 13:42, schrieb Stuart Morris: > One would need to be able to access the decoded frame containing 2 fields > and perhaps use an OpenGL shader to perform field based colour space > conversion and then draw the first field to the frame buffer. At the next > vertical sync the shader would convert the second field and draw that to > the frame buffer. With VDPAU is there a new OpenGL interop function that > allows access to the decoded frame? If you enable bob deinterlacing you'll get that. Just set an interlaced video mode of appropriate resolution. Cannot tell whether VDPAU honors TOP/BOTTOM field flag and displays the frame when the field is due. This was always a problem with xxmc and VIA EPIA CLE 266. Incorrect field order is most noticeable on fast movements. Bye. -- Dipl.-Inform. (FH) Reinhard Nissl mailto:rni...@gmx.de
Re: [vdr] Deinterlace video
I believe the issue with this flag is understandable when you consider the very simple nature of most set-top boxes decoding broadcast digital TV. It will always send video to the TV interlaced regardless of the content. So it does not care about de-interlacing. However it does need to know how to convert the decoded frame colour space, and for this the interlace flag I suspect can be relied upon. If the content is flagged as interlaced, separate the decoded YUV frame into separate YUV fields then convert to RGB. If the flag is clear, convert the decoded YUV frame to RGB. For all material send to the TV interlaced at the appropriate resolution. This will also be important if the application is ever likely to display video media other than broadcast TV where it is flagged as progressive. If however you wish to de-interlace the picture you will need sophisticated pulldown detection which will disable the deinterlacer when progressive content is detected. To detect 2:2 pulldown for example (typical progressive source material broadcast in the UK) you would need to detect combing artefacts within successive decoded frames. No combing/mouse teeth for several consecutive frames would then cause the de-interlacer to be disabled. 3:2 pulldown is a little easier to detect because there are flags to indicate fields must be repeated. However reconstruction and display of 3:2 video is more complicated. --- On Wed, 19/1/11, Timothy D. Lenz wrote: > From: Timothy D. Lenz > Subject: Re: [vdr] Deinterlace video > To: "VDR Mailing List" > Date: Wednesday, 19 January, 2011, 19:25 > Is it possible to figure out if the > stream is interlaced or not by looking at the stream? Seems > like it should be able to figure out within a frame or two > (about 33 ms) and then just ignore the useless flags? The same needs to be > done with EPG data. I think the Insignia boxes just try to > read data regardless of flags because they are able to find > data when atscepg won't. 
> > On 1/19/2011 8:55 AM, VDR User wrote: > > On Wed, Jan 19, 2011 at 5:47 AM, Tony Houghton > wrote: > >> I thought there was supposed to be a flag in MPEG > meta data which > >> indicates whether pairs of fields are interlaced > or progressive so > >> decoders can determine how to combine them without > doing any complicated > >> picture analysis. Are broadcasters not using the > flag properly, or xine > >> not reading it? xine-ui's preferences dialog has > an option to disable > >> interlacing for progressive material, have you set > that in whichever > >> front-end you're using? > > > > There is. Unfortunately I can't begin to count > the number of times > > I've seen the flag set incorrectly, essentially making > it useless.
Re: [vdr] Deinterlace video
Is it possible to figure out if the stream is interlaced or not by looking at the stream? Seems like it should be able to figure out within a frame or two (about 33 ms) and then just ignore the useless flags? The same needs to be done with EPG data. I think the Insignia boxes just try to read data regardless of flags because they are able to find data when atscepg won't. On 1/19/2011 8:55 AM, VDR User wrote: On Wed, Jan 19, 2011 at 5:47 AM, Tony Houghton wrote: I thought there was supposed to be a flag in MPEG meta data which indicates whether pairs of fields are interlaced or progressive so decoders can determine how to combine them without doing any complicated picture analysis. Are broadcasters not using the flag properly, or xine not reading it? xine-ui's preferences dialog has an option to disable interlacing for progressive material, have you set that in whichever front-end you're using? There is. Unfortunately I can't begin to count the number of times I've seen the flag set incorrectly, essentially making it useless.
Re: [vdr] Deinterlace video
You can't depend on the flag. It's a strange one. I have a channel that is reported as 1080i by the femon plugin but deint has to be off sometimes to reduce jitter. Other times it can be on. The FCC has gotten very lax in requirements and even more lax in enforcing what rules they do have. On 1/19/2011 6:47 AM, Tony Houghton wrote: On Wed, 19 Jan 2011 12:36:19 + (GMT) Stuart Morris wrote: For progressive HD material I have to manually turn off deinterlacing, then turn it on again for interlaced material. That's annoying. I thought there was supposed to be a flag in MPEG meta data which indicates whether pairs of fields are interlaced or progressive so decoders can determine how to combine them without doing any complicated picture analysis. Are broadcasters not using the flag properly, or xine not reading it? xine-ui's preferences dialog has an option to disable interlacing for progressive material, have you set that in whichever front-end you're using?
Re: [vdr] Deinterlace video
Maybe this would be something to request for a vdr-xine update. On 1/19/2011 4:24 AM, Torgeir Veimo wrote: On 19 January 2011 20:18, Stuart Morris wrote: IMHO the best way to go for a low power HTPC is to decode in hardware e.g. VDPAU, VAAPI, but output interlaced video to your TV and let the TV sort out deinterlacing and inverse telecine. Unfortunately, with VDPAU, the hardware combines fields into frames, then scales, which results in ghosting with interlaced material. So this approach would not work with stock xineliboutput, which uses a fixed output resolution. If you could avoid the scaling altogether with interlaced material, eg with a modified xineliboutput setup, then this would be feasible I guess. ref: http://www.mail-archive.com/vdr@linuxtv.org/msg09259.html http://www.mail-archive.com/xorg@lists.freedesktop.org/msg05270.html http://www.mail-archive.com/xorg@lists.freedesktop.org/msg05610.html
Re: [vdr] Deinterlace video
I thought it had to be deinterlaced as it was decoded. If we could just decode and send at whatever res (720p, 1080i, 1080p) the stream is in, then the work would be offloaded to the TV. Might be a nice option for those of us with marginal video cards. On 1/19/2011 3:48 AM, Niko Mikkilä wrote: Well, flat panel TVs have similar deinterlacing algorithms as what VDPAU provides, but it would certainly be a nice alternative. These are the key requirements to achieve interlaced output: Get the right modelines for your video card and TV. Draw interlaced fields to your frame buffer at field rate and in the correct order (top field first or bottom field first). When drawing the field to the frame buffer, do not overwrite the previous field still in the frame buffer. Maintain 1:1 vertical scaling (no vertical scaling), so you will need to switch video output to match the source video height (480i, 576i or 1080i). Display the frame buffer at field rate and synchronised to the graphics card vertical sync. Finally, there is NO requirement to synchronise fields, fields are always displayed in the same order they are written to the frame buffer, even if occasionally fields are dropped. Interesting. Could you perhaps write full instructions to some suitable wiki and post the code that you used to do this? I'm sure others would like to try it too. --Niko
Re: [vdr] Deinterlace video (was: Replacing aging VDR for DVB-S2)
On Wed, Jan 19, 2011 at 5:47 AM, Tony Houghton wrote: > I thought there was supposed to be a flag in MPEG meta data which > indicates whether pairs of fields are interlaced or progressive so > decoders can determine how to combine them without doing any complicated > picture analysis. Are broadcasters not using the flag properly, or xine > not reading it? xine-ui's preferences dialog has an option to disable > interlacing for progressive material, have you set that in whichever > front-end you're using? There is. Unfortunately I can't begin to count the number of times I've seen the flag set incorrectly, essentially making it useless.
Re: [vdr] Deinterlace video (was: Replacing aging VDR for DVB-S2)
--- On Wed, 19/1/11, Torgeir Veimo wrote: > From: Torgeir Veimo > Subject: Re: [vdr] Deinterlace video (was: Replacing aging VDR for DVB-S2) > To: "VDR Mailing List" > Date: Wednesday, 19 January, 2011, 13:50 > On 19 January 2011 23:47, Tony > Houghton > wrote: > > I thought there was supposed to be a flag in MPEG meta > data which > > indicates whether pairs of fields are interlaced or > progressive so > > decoders can determine how to combine them without > doing any complicated > > picture analysis. Are broadcasters not using the flag > properly [...] > > Broadcasters can't even get the EPG data correct. > In my limited experience, watching UK Freeview recordings made with VDR, using xine's TVtime deinterlacer with the progressive frame flag option set, deinterlacing is on all of the time, including video derived from a progressive film source, which is wrong. I think it is safe to rely on this flag for deciding whether to convert colour space in fields or frames, but it seems it gives you no clues whether to deinterlace or not.
Re: [vdr] Deinterlace video (was: Replacing aging VDR for DVB-S2)
On 19 January 2011 23:47, Tony Houghton wrote: > I thought there was supposed to be a flag in MPEG meta data which > indicates whether pairs of fields are interlaced or progressive so > decoders can determine how to combine them without doing any complicated > picture analysis. Are broadcasters not using the flag properly [...] Broadcasters can't even get the EPG data correct. -- -Tor
Re: [vdr] Deinterlace video (was: Replacing aging VDR for DVB-S2)
On Wed, 19 Jan 2011 12:36:19 + (GMT) Stuart Morris wrote: > For progressive HD material I have to manually turn off deinterlacing, > then turn it on again for interlaced material. That's annoying. I thought there was supposed to be a flag in MPEG meta data which indicates whether pairs of fields are interlaced or progressive so decoders can determine how to combine them without doing any complicated picture analysis. Are broadcasters not using the flag properly, or xine not reading it? xine-ui's preferences dialog has an option to disable interlacing for progressive material, have you set that in whichever front-end you're using?
Re: [vdr] Deinterlace video (was: Replacing aging VDR for DVB-S2)
--- On Wed, 19/1/11, Niko Mikkilä wrote: > From: Niko Mikkilä > Subject: Re: [vdr] Deinterlace video (was: Replacing aging VDR for DVB-S2) > To: "VDR Mailing List" > Date: Wednesday, 19 January, 2011, 10:48 > ke, 2011-01-19 kello 10:18 +, > Stuart Morris kirjoitti: > > My experience with an nVidia GT220 has been less than > perfect. It can > > perform temporal+spatial+inverse_telecine on HD video > fast enough, but > > my PC gets hot and it truly sucks at 2:2 pulldown > detection. The > > result of this is when viewing progressive video > encoded as interlaced > > field pairs (2:2 pulldown), deinterlacing keeps > cutting in and out > > every second or so, ruining the picture quality. > > I think VDPAU's inverse telecine is only meant for non-even > cadences > like 3:2. Motion-adaptive deinterlacing handles 2:2 pullup > perfectly > well, so try without IVTC. > > > > IMHO the best way to go for a low power HTPC is to > decode in hardware > > e.g. VDPAU, VAAPI, but output interlaced video to your > TV and let the > > TV sort out deinterlacing and inverse telecine. > > Well, flat panel TVs have similar deinterlacing algorithms > as what VDPAU > provides, but it would certainly be a nice alternative. > > > These are the key requirements to achieve interlaced > output: > > > > Get the right modelines for your video card and TV. > Draw interlaced > > fields to your frame buffer at field rate and in the > correct order > > (top field first or bottom field first). When drawing > the field to the > > frame buffer, do not overwrite the previous field > still in the frame > > buffer. Maintain 1:1 vertical scaling (no vertical > scaling), so you > > will need to switch video output to match the source > video height > > (480i, 576i or 1080i). Display the frame buffer at > field rate and > > synchronised to the graphics card vertical sync. 
> Finally, there is NO > > requirement to synchronise fields, fields are always > displayed in the > > same order they are written to the frame buffer, even > if occasionally > > fields are dropped. > > Interesting. Could you perhaps write full instructions to > some suitable > wiki and post the code that you used to do this? I'm sure > others would > like to try it too. I can provide the simple bit of source code I have used to demonstrate the basic principle. Please see attached.

/* NOTE: the angle-bracketed header names were stripped by the list
 * archive; these are the usual ffmpeg/SDL headers for code of this
 * vintage. */
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <SDL/SDL.h>
#include <SDL/SDL_thread.h>
#include <stdio.h>
#include <math.h>

#ifdef __MINGW32__
#undef main /* Prevents SDL from overriding main() */
#endif

int main(int argc, char *argv[])
{
    AVFormatContext *pFormatCtx;
    int i, j, k, videoStream;
    AVCodecContext *pCodecCtx;
    AVCodec *pCodec;
    AVFrame *pFrame;
    AVPacket packet;
    int frameFinished;
    struct SwsContext *img_convert_ctx;
    float aspect_ratio;
    int frmcnt = 0;
    int DispList;
    SDL_Surface *screen;
    SDL_Event event;

    if (argc < 2) {
        fprintf(stderr, "Usage: test <file>\n");
        exit(1);
    }

    // Register all formats and codecs
    av_register_all();

    if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER)) {
        fprintf(stderr, "Could not initialize SDL - %s\n", SDL_GetError());
        exit(1);
    }

    // Open video file
    if (av_open_input_file(&pFormatCtx, argv[1], NULL, 0, NULL) != 0)
        return -1; // Couldn't open file

    // Retrieve stream information
    if (av_find_stream_info(pFormatCtx) < 0)
        return -1; // Couldn't find stream information

    // Dump information about file onto standard error
    dump_format(pFormatCtx, 0, argv[1], 0);

    // Find the first video stream
    videoStream = -1;
    for (i = 0; i < pFormatCtx->nb_streams; i++)
        if (pFormatCtx->streams[i]->codec->codec_type == CODEC_TYPE_VIDEO) {
            videoStream = i;
            break;
        }
    if (videoStream == -1)
        return -1; // Didn't find a video stream

    // Get a pointer to the codec context for the video stream
    pCodecCtx = pFormatCtx->streams[videoStream]->codec;

    // Find the decoder for the video stream
    pCodec = avcodec_find_decoder(pCodecCtx->codec_id);
    if (pCodec == NULL) {
        fprintf(stderr, "Unsupported codec!\n");
        return -1; // Codec not found
    }

    // Open codec
    pCodecCtx->thread_count = 2; // Use 2 processor cores if available
    if (avcodec_open(pCodecCtx, pCodec) < 0)
        return -1; // Could not open codec

    // Allocate video frame
    pFrame = avcodec_alloc_frame();

    // Calculate aspect ratio
    if (pCodecCtx->sample_aspect_ratio.nu
Re: [vdr] Deinterlace video (was: Replacing aging VDR for DVB-S2)
--- On Wed, 19/1/11, Torgeir Veimo wrote: > From: Torgeir Veimo > Subject: Re: [vdr] Deinterlace video (was: Replacing aging VDR for DVB-S2) > To: "VDR Mailing List" > Date: Wednesday, 19 January, 2011, 11:24 > On 19 January 2011 20:18, Stuart > Morris > wrote: > > IMHO the best way to go for a low power HTPC is to > decode in hardware e.g. VDPAU, VAAPI, but output interlaced > video to your TV and let the TV sort out deinterlacing and > inverse telecine. > > Unfortunately, with VDPAU, the hardware combines fields > into frames, > then scales, which results in ghosting with interlaced > material. > > So this approach would not work with stock xineliboutput, > which uses a > fixed output resolution. If you could avoid the scaling > altogether > with interlaced material, eg with a modified xineliboutput > setup, then > this would be feasible I guess. One would need to be able to access the decoded frame containing 2 fields and perhaps use an OpenGL shader to perform field based colour space conversion and then draw the first field to the frame buffer. At the next vertical sync the shader would convert the second field and draw that to the frame buffer. With VDPAU is there a new OpenGL interop function that allows access to the decoded frame? I should add I have not yet ventured into writing OpenGL shaders! Stu-e
Re: [vdr] Deinterlace video (was: Replacing aging VDR for DVB-S2)
--- On Wed, 19/1/11, Niko Mikkilä wrote: > From: Niko Mikkilä > Subject: Re: [vdr] Deinterlace video (was: Replacing aging VDR for DVB-S2) > To: "VDR Mailing List" > Date: Wednesday, 19 January, 2011, 11:43 > Replying to myself... > > ke, 2011-01-19 kello 12:48 +0200, Niko Mikkilä kirjoitti: > > ke, 2011-01-19 kello 10:18 +, Stuart Morris > kirjoitti: > > > My experience with an nVidia GT220 has been less > than perfect. It can > > > perform temporal+spatial+inverse_telecine on HD > video fast enough, but > > > my PC gets hot and it truly sucks at 2:2 pulldown > detection. The > > > result of this is when viewing progressive video > encoded as interlaced > > > field pairs (2:2 pulldown), deinterlacing keeps > cutting in and out > > > every second or so, ruining the picture quality. > > > > I think VDPAU's inverse telecine is only meant for > non-even cadences > > like 3:2. Motion-adaptive deinterlacing handles 2:2 > pullup perfectly > > well, so try without IVTC. > > Not perfectly well apparently; there will be slight > artifacting at sharp > horizontal edges, so the trigger to deinterlace is pretty > low. Probably > to avoid any visible combing in interlaced video. > > Pullup seems to work fine for me though, but I only have > VP2/"VDPAU > feature set A" hardware. My problems with VDPAU inverse-telecine were apparent only on HD video. It did seem to be ok with SD video. With HD video, if I disabled inverse-telecine and left the advanced deinterlacer on, it (not surprisingly) deinterlaces the progressive picture resulting in loss of detail and twittering. For progressive HD material I have to manually turn off deinterlacing, then turn it on again for interlaced material. That's annoying.
Re: [vdr] Deinterlace video (was: Replacing aging VDR for DVB-S2)
Replying to myself...

ke, 2011-01-19 kello 12:48 +0200, Niko Mikkilä kirjoitti:
> ke, 2011-01-19 kello 10:18 +, Stuart Morris kirjoitti:
> > My experience with an nVidia GT220 has been less than perfect. It can
> > perform temporal+spatial+inverse_telecine on HD video fast enough, but
> > my PC gets hot and it truly sucks at 2:2 pulldown detection. The
> > result of this is when viewing progressive video encoded as interlaced
> > field pairs (2:2 pulldown), deinterlacing keeps cutting in and out
> > every second or so, ruining the picture quality.
>
> I think VDPAU's inverse telecine is only meant for non-even cadences
> like 3:2. Motion-adaptive deinterlacing handles 2:2 pullup perfectly
> well, so try without IVTC.

Not perfectly well apparently; there will be slight artifacting at sharp horizontal edges, so the trigger to deinterlace is pretty low. Probably to avoid any visible combing in interlaced video.

Pullup seems to work fine for me though, but I only have VP2/"VDPAU feature set A" hardware.

--Niko
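The "trigger to deinterlace" Niko mentions is typically a combing metric: a pixel counts as combed when it differs from both its vertical neighbours in the same direction. Here is a toy version of that test (my own simplification for illustration, not VDPAU's actual algorithm):

```python
def comb_metric(frame, threshold=9):
    """Count pixels whose luma disagrees with both vertical neighbours
    in the same direction -- the classic combing test a motion-adaptive
    deinterlacer uses to decide whether a region needs deinterlacing."""
    combed = 0
    for y in range(1, len(frame) - 1):
        for x in range(len(frame[y])):
            d1 = frame[y - 1][x] - frame[y][x]
            d2 = frame[y + 1][x] - frame[y][x]
            if d1 * d2 > threshold:   # both neighbours on the same side
                combed += 1
    return combed

flat = [[50] * 4 for _ in range(5)]                       # progressive, no combing
moving = [[0] * 4, [100] * 4, [0] * 4, [100] * 4, [0] * 4]  # interlaced motion
```

A low threshold makes the test fire on sharp horizontal edges of genuinely progressive material too, which is exactly the slight artifacting described above.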
Re: [vdr] Deinterlace video (was: Replacing aging VDR for DVB-S2)
On 19 January 2011 20:18, Stuart Morris wrote:
> IMHO the best way to go for a low power HTPC is to decode in hardware
> e.g. VDPAU, VAAPI, but output interlaced video to your TV and let the
> TV sort out deinterlacing and inverse telecine.

Unfortunately, with VDPAU, the hardware combines fields into frames, then scales, which results in ghosting with interlaced material.

So this approach would not work with stock xineliboutput, which uses a fixed output resolution. If you could avoid the scaling altogether with interlaced material, e.g. with a modified xineliboutput setup, then this would be feasible I guess.

ref:
http://www.mail-archive.com/vdr@linuxtv.org/msg09259.html
http://www.mail-archive.com/xorg@lists.freedesktop.org/msg05270.html
http://www.mail-archive.com/xorg@lists.freedesktop.org/msg05610.html

--
-Tor
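The ghosting Torgeir describes falls out of the arithmetic: once the two fields are weaved into one frame, any vertical scaling mixes scanlines that were sampled 20 ms apart. A toy linear upscaler makes this visible (illustration only; VDPAU's real scaler is of course more sophisticated, but the field mixing is the same):

```python
def upscale_2x_vertical(frame):
    """Naive 2x vertical linear upscale of a weaved frame: every
    inserted line averages two adjacent source lines -- one from each
    field -- so with motion the two moments in time bleed together."""
    out = []
    for y, row in enumerate(frame):
        out.append(list(row))
        nxt = frame[y + 1] if y + 1 < len(frame) else row  # clamp at bottom
        out.append([(a + b) / 2 for a, b in zip(row, nxt)])
    return out

# Top-field lines captured while an object was dark (0), bottom-field
# lines after it brightened (100): scaling produces "ghost" mid-values.
weaved = [[0], [100], [0], [100]]
scaled = upscale_2x_vertical(weaved)
```

Every inserted line is a 50/50 blend of a top-field and a bottom-field sample, which is why avoiding the scaling altogether (1:1 vertical, field-rate output) sidesteps the problem.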
Re: [vdr] Deinterlace video (was: Replacing aging VDR for DVB-S2)
ke, 2011-01-19 kello 10:18 +, Stuart Morris kirjoitti:
> My experience with an nVidia GT220 has been less than perfect. It can
> perform temporal+spatial+inverse_telecine on HD video fast enough, but
> my PC gets hot and it truly sucks at 2:2 pulldown detection. The
> result of this is when viewing progressive video encoded as interlaced
> field pairs (2:2 pulldown), deinterlacing keeps cutting in and out
> every second or so, ruining the picture quality.

I think VDPAU's inverse telecine is only meant for non-even cadences like 3:2. Motion-adaptive deinterlacing handles 2:2 pullup perfectly well, so try without IVTC.

> IMHO the best way to go for a low power HTPC is to decode in hardware
> e.g. VDPAU, VAAPI, but output interlaced video to your TV and let the
> TV sort out deinterlacing and inverse telecine.

Well, flat panel TVs have similar deinterlacing algorithms to what VDPAU provides, but it would certainly be a nice alternative.

> These are the key requirements to achieve interlaced output:
>
> Get the right modelines for your video card and TV. Draw interlaced
> fields to your frame buffer at field rate and in the correct order
> (top field first or bottom field first). When drawing the field to the
> frame buffer, do not overwrite the previous field still in the frame
> buffer. Maintain 1:1 vertical scaling (no vertical scaling), so you
> will need to switch video output to match the source video height
> (480i, 576i or 1080i). Display the frame buffer at field rate and
> synchronised to the graphics card vertical sync. Finally, there is NO
> requirement to synchronise fields; fields are always displayed in the
> same order they are written to the frame buffer, even if occasionally
> fields are dropped.

Interesting. Could you perhaps write full instructions on a suitable wiki and post the code that you used to do this? I'm sure others would like to try it too.

--Niko
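The "do not overwrite the previous field" requirement amounts to writing only every other scanline of the frame buffer on each vertical sync. A minimal sketch of that one step (hypothetical function, not from Stuart's code):

```python
def draw_field(framebuffer, field, top):
    """Write one field into the frame buffer at 1:1 vertical scale,
    touching only its own scanlines so the field drawn on the previous
    vertical sync is left intact."""
    start = 0 if top else 1
    for i, row in enumerate(field):
        framebuffer[start + 2 * i] = row

fb = [None] * 4                            # 4-line frame buffer, initially blank
draw_field(fb, ["t0", "t1"], top=True)     # first vsync: top field
draw_field(fb, ["b0", "b1"], top=False)    # next vsync: bottom field, top kept
```

After the two calls the buffer holds both fields correctly interleaved, which is what the interlaced modeline then scans out field by field.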
Re: [vdr] Deinterlace video (was: Replacing aging VDR for DVB-S2)
--- On Tue, 18/1/11, Niko Mikkilä wrote:

> From: Niko Mikkilä
> Subject: Re: [vdr] Replacing aging VDR for DVB-S2
> To: "VDR Mailing List"
> Date: Tuesday, 18 January, 2011, 13:06
>
> On 2011-01-15 22:36 +, Tony Houghton wrote:
>
> > I wonder whether it might be possible to use a more economical card
> > which is only powerful enough to decode 1080i without deinterlacing
> > it and take advantage of the abundant CPU power most people have
> > nowadays to perform software deinterlacing. It may not be possible
> > to have something as sophisticated as NVidia's temporal + spatial,
> > but some of the existing software filters should scale up to HD
> > without overloading the CPU seeing as it wouldn't be doing the
> > decoding too.
>
> It's possible, but realtime GPU deinterlacing is more energy-efficient:
>
> - For CPU deinterlacing, you'd need something like Greedy2Frame or
> TomsMoComp. They should give about the same quality as Nvidia's
> temporal deinterlacer, but the code would need to be threaded to
> support lower-frequency multicore CPUs.
>
> Yadif almost matches temporal+spatial in quality, but it will also be
> about 50% slower than Greedy2Frame.
>
> - Hardware-decoded video is already in the GPU memory, and moving
> 1920x1080-pixel frames around is not free.
>
> - Simple motion-adaptive, edge-interpolating deinterlacing can be
> easily parallelized for GPU architectures, so it will be more
> efficient than on a serial processor. For example, GT 220 can do 1080i
> deinterlacing at more than 150 fps (output). Normal 50 fps
> deinterlacing only causes partial load and power consumption. GT 430
> is currently worse because of an unoptimized filter implementation:
> http://nvnews.net/vbulletin/showthread.php?p=2377750#post2377750
>
> Still, only the latest CPU generation can reach that kind of
> performance with a highly optimized software deinterlacer.
> > Alternatively, use software decoding and hardware deinterlacing.
>
> GPU video decoding is very efficient thanks to dedicated hardware. I'd
> guess that current chips only use about 3 Watts for high-bitrate
> 1080i25.
>
> Also, decoding and filtering aren't executed on the same parts of the
> GPU chip. They are almost perfectly parallel processes, so combined
> throughput will be that of the slower process.
>
> > Somewhere on linuxtv.org there's an article about using fairly
> > simple OpenGL to mimic what happens to interlaced video on a CRT,
> > but I don't know how good the results would look.
>
> Sounds like normal bobbing with interpolation. Even if it simulates a
> phosphor delay, it probably won't look much better than MPlayer's -vf
> tfields or the bobber in VDPAU.
>
> Sharp interlaced (and progressive) video is quite flickery on a CRT
> too.
>
> > BTW, speaking of temporal and spatial deinterlacing: AFAICT one
> > means combining fields to provide maximum resolution with half the
> > frame rate of the interlaced fields, and the other maximises the
> > frame rate while discarding resolution; but which is which? And does
> > NVidia's temporal + spatial try to give the best of both worlds
> > through some sort of interpolation?
>
> Temporal = motion-adaptive deinterlacing at either half or full field
> rate. Some programs refer to the latter as "2x". "Motion adaptive"
> means that the filter detects interlaced parts of each frame and
> adjusts deinterlacing accordingly. This gives better quality at
> stationary parts.
>
> Temporal-spatial = temporal with edge-directed interpolation to smooth
> jagged edges of moving objects.
>
> Both methods give about the same spatial and temporal resolution, but
> temporal-spatial will look nicer.
> --Niko

My experience with an nVidia GT220 has been less than perfect. It can perform temporal+spatial+inverse_telecine on HD video fast enough, but my PC gets hot and it truly sucks at 2:2 pulldown detection. The result of this is when viewing progressive video encoded as interlaced field pairs (2:2 pulldown), deinterlacing keeps cutting in and out every second or so, ruining the picture quality.

IMHO the best way to go for a low power HTPC is to decode in hardware, e.g. VDPAU or VAAPI, but output interlaced video to your TV and let the TV sort out deinterlacing and inverse telecine. I have experimented using FFMPEG and OpenGL, and I achieved a very good quality picture on a 1080i CRT monitor (I have yet to try an HDMI flat panel TV).

These are the key requirements to achieve interlaced output:

Get the right modelines for your video card and TV. Draw interlaced fields to your frame buffer at field rate and in the correct order (top field first or bottom field first). When drawing the field to the frame buffer, do not overwrite the previous field still in the frame buffer.
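For comparison with the temporal and temporal-spatial variants Niko describes above, here is the baseline they all improve on: a bob deinterlacer, which splits the weaved frame into its two fields and linearly interpolates each back to full height, trading vertical detail for full field rate. This is a simplified sketch for illustration, not any particular filter's implementation:

```python
def bob_field(frame, top):
    """Interpolate one field of a weaved frame back to full height:
    keep the field's own scanlines and rebuild the other field's lines
    as the average of the field lines above and below (clamped at the
    top and bottom edges)."""
    h, par = len(frame), 0 if top else 1
    out = []
    for y in range(h):
        if y % 2 == par:
            out.append(list(frame[y]))              # this field's own line
        else:
            ya = y - 1 if y - 1 >= 0 else y + 1     # nearest field line above
            yb = y + 1 if y + 1 < h else y - 1      # nearest field line below
            out.append([(a + b) / 2 for a, b in zip(frame[ya], frame[yb])])
    return out

weaved = [[0], [10], [20], [30]]
first = bob_field(weaved, top=True)    # shown at vsync n
second = bob_field(weaved, top=False)  # shown at vsync n+1
```

Motion-adaptive ("temporal") filters apply this only where combing is detected and weave elsewhere; the "spatial" addition replaces the plain vertical average with edge-directed interpolation.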