Re: RFC: Xv field order

2009-06-24 Thread Thomas Hilber
On Wed, Jun 24, 2009 at 03:23:44PM +0200, Krzysztof Halasa wrote:
> I wonder what is the difference between the on-air frame rate and
> your card's (fixed) one? 100 ppm would need one second of additional
> (initial) buffering for ca. 3 hours of playback. I think the players
> initially buffer more than 1 second, don't they?

The problem is not the absolute accuracy of your graphics card.

The problem is that there are always differences between the on-air frame
rate and the card's fixed one if the two are not synchronized. Even if
it were possible to adjust your graphics card to exactly 50Hz it
would not help, because the on-air frame rate always floats around
a little. Besides that, the graphics card of course never runs
at exactly 50Hz. Maybe somewhere in the range of 49.90 to 50.10Hz if
you are very optimistic.

In practice (after experimenting heavily with this) that leads
to a field/frame loss or duplicate at least every 45 seconds.
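
To make the arithmetic explicit, here is a rough sketch (not code from the
patch, and the ppm figure is only an example, not a measurement):

    /* time until one field must be dropped or duplicated, given the
     * relative frequency error between player clock and graphics card */
    #include <stdio.h>

    int main(void)
    {
        double field_period = 1.0 / 50.0;   /* PAL: 20 ms per field */
        double freq_error_ppm = 444.0;      /* assumed mismatch in parts per million */

        printf("one field lost/duplicated every %.1f seconds\n",
               field_period / (freq_error_ppm * 1e-6));
        return 0;                           /* ~45 seconds for 444 ppm */
    }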

The only way to avoid this judder is to synchronize the graphics
card to the base clock of your software player. This is what the
vga-sync-fields patch does.

Buffering won't help either, because the problem arises between the
software player's base clock and the graphics card, not between
the TV station and the software player.

Theoretically you could synchronize the software player's clock to
the (fixed) graphics card clock for replaying recordings. Even
that is not common practice under Linux, because the software
player has no clue about the graphics card's frame rate.

For live TV this is not possible anyway. Your assumption of
1 second of buffering per 3 hours of playback is way too optimistic.

- Thomas


Re: RFC: Xv field order

2009-06-24 Thread Thomas Hilber
On Wed, Jun 24, 2009 at 07:33:47PM +0200, Krzysztof Halasa wrote:
> I already wrote about the difference and not about a card only, right?
> The broadcast is the most accurate and stable (at least here), almost
> always at 50 Hz +- maybe 10 ppm, maybe less than that.

In the end it does not matter how stable the broadcast itself is,
but rather how that stability is reproduced by the decoder.

> Of course. That's why I wrote about buffering.

Buffering, if anything, would only help if your graphics card is slower
than the broadcast. If it is faster you get noticeable disturbances once
your buffer is exhausted.

> No. In fact, I have just verified with a freq meter, it's orders of
> magnitude better. Aren't you using a miscalculated modeline (not 13.5

Even that is way too much if you want to route interlaced fields directly
from the decoder to the graphics card's double buffer. Without dynamically
syncing decoder and graphics card you have no chance of staying within a
time window of about +-20ms for double buffer updates.

> MHz pixel clock or invalid total number of pixels or lines or something
> like that)? 0.2% difference is excessive.

I'm not speaking of my own setup. My algorithm dynamically adjusts with an
accuracy of +-0.001Hz. I'm speaking of common solutions for video
playback using the Xv extension.

> That would be true if you use some player-internal time base. But
> players can, and do, synchronize to the graphics card. IOW the card
> becomes the time base, and everything (sound) is adjusted to it.

Are these solutions also available for live TV?

> Depends on the output driver. I think all sync drivers for mplayer
> and xine do it, don't they? (E.g. matrox driver)

I can't tell, since I don't know these proprietary solutions for Matrox cards.

 Now the question is the interface between players and Xserver, not the
 internals of the driver, which BTW I'm using for almost two years
 without issues.

No problem. With my solution I set up a tiny D945GSEJT based Linux SAT
receiver consuming only about 14W, with high quality SCART/HDMI output.

I don't know of any other solution providing these features under Linux.

- Thomas



Re: RFC: Xv field order

2009-06-23 Thread Thomas Hilber
On Tue, Jun 23, 2009 at 03:54:03PM +0200, Krzysztof Halasa wrote:
> DRM level. Unless I'm mistaken the patches don't touch the field order
> and sync, at least on the intel and/or i915 driver.

With a slight modification the patches can output BFF or TFF.
But I'm living in PAL land, so I use TFF only at the moment.

> going to implement is the BFF/TFF(/progressive) interface at XVideo and

What do you mean by progressive here? Do you mean 2 interlaced
fields in weaved format?

The vga-sync-fields patches effectively take these 2 weaved fields of
softdecoder output and forward them directly to the VGA/DVI output. To keep
stream and video timing synchronized, the output line frequency
is continuously trimmed in very small steps. This works very well even
for live TV, where you must adopt exactly the stream clock delivered by
the TV station.
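
The control loop behind that trimming boils down to something like the
following sketch (simplified, not the actual patch code; all names are
invented for illustration):

    /* drift between the decoder's stream clock and the vblank time of
     * the graphics card, in seconds; > 0 means the card runs late */
    static double measure_drift(double stream_time, double vblank_time)
    {
        return vblank_time - stream_time;
    }

    /* decide whether the next output frame should be lengthened or
     * shortened by one small trim step (in the real patch this is done
     * by touching the video timing registers); returns -1, 0 or +1 */
    static int frc_decide(double drift, double dead_band)
    {
        if (drift > dead_band)
            return -1;      /* card is late: shorten the frame a little */
        if (drift < -dead_band)
            return +1;      /* card is early: lengthen the frame a little */
        return 0;           /* inside the window: leave the timing alone */
    }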

Trimming the graphics card's video timing ensures that the weaved fields
always appear at the right time in the graphics card's double buffer.
Video output is interlaced; it's the task of the display to take care
of deinterlacing (if needed).
This relieves the PC of CPU-intensive deinterlacing, so the patches work
very well even on an Asus EEE 701 or Intel D945GSEJT.

> DRM level. Unless I'm mistaken the patches don't touch the field order
> and sync, at least on the intel and/or i915 driver.

Moving the sync point by 20ms results in reversed field order output.

Cheers
   Thomas



Re: RFC: Xv field order

2009-06-23 Thread Thomas Hilber
On Mon, Jun 22, 2009 at 09:56:20PM +0200, Krzysztof Halasa wrote:
> My assumption (please correct if wrong):
> - it requires interlaced display mode (i830+: practically completed).

Sorry, but in contrast to the i810 and i9xx, the i830 is not capable of
any interlaced video output.

> - the video windows must not be scaled vertically.

This restriction applies to the i810 but is no longer true for i9xx graphics.

> - sync events for TFF and BFF are signaled by interrupts. For
>   progressive video (with interlaced display mode, of course) we can

Not necessarily by driver interrupts. In [1] (the intel portion of the
vga-sync-fields patch) I simply peek some registers (DOVSTA and PIPEA_DSL)
directly from within the Xserver to determine the actual line + field
position of the CRT controller at any time.
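
Stripped down to the bare idea it looks roughly like this (not the patch
code; PIPEA_DSL is pipe A's display scan line register with the offset as I
know it from the driver headers, and the patch itself uses the driver's
INREG() macro rather than the open-coded read below):

    #define PIPEA_DSL 0x70000   /* pipe A display scan line register */

    /* line the CRT controller is scanning right now */
    static unsigned int current_scanline(unsigned char *mmio_base)
    {
        return *(volatile unsigned int *)(mmio_base + PIPEA_DSL) & 0x1fff;
    }

Knowing the beam position at any time is enough to place the double buffer
update far away from the lines currently being displayed.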

In [2] (the radeon portion of the patch) I do the same with radeon graphics.
A small DRM driver patch is only needed there to dynamically alter the video
output timing.

You are right: for the i810 some field-related overlay registers must be
reprogrammed during vertical retrace interrupts.
This is no longer true for the i9xx architecture. 2 weaved fields are
processed there with no driver (interrupt) intervention.
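
"Weaved" simply means the two fields are interleaved line by line into one
frame buffer, which the hardware then displays in the interlaced mode. As a
plain illustration (not code from the patch; an 8-bit luma plane is assumed
for simplicity):

    #include <string.h>

    static void weave_fields(unsigned char *frame,        /* width * height bytes   */
                             const unsigned char *top,    /* width * height/2 bytes */
                             const unsigned char *bottom, /* width * height/2 bytes */
                             int width, int height)
    {
        int y;

        for (y = 0; y < height / 2; y++) {
            /* top field goes to frame rows 0, 2, 4, ...; bottom field to rows 1, 3, 5, ... */
            memcpy(frame + (2 * y) * width,     top    + y * width, width);
            memcpy(frame + (2 * y + 1) * width, bottom + y * width, width);
        }
    }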

- Thomas

[1] 
http://www.easy-vdr.de/git?p=frc.git/.git;a=blob;f=patches/xv-intel.patch;h=b54655a6d6473f55b49fac759f4afc77c101d69f;hb=20810204a85917449096a5fbba65d996c0a1ac2c

[2] 
http://www.easy-vdr.de/git?p=frc.git/.git;a=blob;f=patches/xv-radeon.patch;h=8c6ee3f199f70364c91057c634dfbf864c3db656;hb=20810204a85917449096a5fbba65d996c0a1ac2c


Re: RFC: Xv field order

2009-06-23 Thread Thomas Hilber
On Tue, Jun 23, 2009 at 08:08:38PM +0200, Krzysztof Halasa wrote:
> Come on, playback of interlaced video only makes sense with vertically
> unscaled display. Otherwise you have to deinterlace first and this is
> hardly usable (except for maybe 1:2 scaling when you can just strip every

The newer Intel chips do that transparently in hardware. I have not
found documentation about how exactly they handle this (by scaling each field
independently, or by deinterlacing?). But for watching 16:9
material on a 4:3 TV it's a very useful feature, and picture quality is
still very good.

> It doesn't depend on the chip.

Yes, it does. i815 chips and pre-AVIVO Radeon chips can't scale
interlaced material vertically, but i9xx chips can.

> I wonder... Can your current code support textured video? Multiple video
> windows? Don't you have reliability problems, caused by the 20 ms sleep
> taking longer than requested (due to lack of RT scheduling)?

No, my patch collection is for conventional TV - one screen only.
My major concern was not to encounter any field loss. That's why I
synchronize the field timing dynamically to the stream clock.

The stream clock is calculated within xine-lib (in my case). I built
extensive debug tools that would show any reliability problems. If you aren't
running number crunching processes on the machine while watching TV
there are no problems at all. On 2.6.26 or newer kernels I do not need to
alter scheduling policies.

- Thomas


Re: RFC: Xv field order

2009-06-23 Thread Thomas Hilber
On Tue, Jun 23, 2009 at 10:19:25PM +0200, Krzysztof Halasa wrote:
> Thomas Hilber x...@toh.cx writes:
> Ah, I didn't know this. Is it supported by the textured video? Overlay
> only?

I haven't tried textured video for a while, because it had bad
tearing effects at the time I started my vga-sync-fields project.
This may have changed in the meantime.

Scaling interlaced material vertically is supported at least by overlay
video on i9xx chips.

Maybe somebody can roughly explain how this scaling feature has been realized
in the Intel i9xx series hardware? That would be very interesting. I haven't
found any documentation about it yet.

>> No, my patch collection is for conventional TV - one screen only.
>> My major concern was not to encounter any field loss. That's why I
>> synchronize the field timing dynamically to the stream clock.
>
> Sure, I OTOH work with non-live data.

Right, when playing non-live data the VGA output timing does not need to be
synced to an external signal. But for live TV it's crucial if you don't want
to lose fields/frames.

All the more I am wondering that there still is no other solution to this
problem than my vga-sync-fields patch. Even VDPAU still does NOT sync
video timing to an external clock source (e.g. the PTS dictated by a TV
station).

Which means that even VDPAU loses/doubles fields/frames when watching
live TV.

- Thomas



[PATCH] Interlaced RGB/PAL Output with FRC for Intel i9xx Chipsets

2009-02-20 Thread Thomas Hilber
Hi list,

With the attached patch and xorg.conf you can use Intel i9xx chipsets to
directly drive RGB/PAL SCART devices like a TV set.

As with my former Radeon vga-sync-fields patch, even/odd fields are routed
straight from the softdecoder to the VGA port. No software deinterlacing
takes place anymore, thus saving CPU power and resulting in an artifact-free
picture.

Full synchronicity between the video stream and the VGA video output timing
is achieved by tiny real-time tweaks to the VGA video timing registers.

This is what FRC stands for: (f)rame (r)ate (c)ontrol.
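
To get a feeling for how fine these tweaks have to be, here is the basic
timing arithmetic for a standard 720x576i PAL modeline (13.5MHz pixel clock,
864 pixels per line, 625 lines per frame). This only illustrates the orders
of magnitude involved and is not code from the patch:

    #include <stdio.h>

    /* field rate resulting from a given pixel clock and frame geometry */
    static double field_rate(double dotclock_hz, int htotal, int vtotal)
    {
        return dotclock_hz / ((double)htotal * vtotal / 2.0);  /* 2 fields per frame */
    }

    int main(void)
    {
        /* 13.5e6 / (864 * 312.5) = exactly 50.000 Hz */
        printf("htotal 864: %.4f Hz\n", field_rate(13.5e6, 864, 625));
        /* one extra pixel per line already shifts the rate by ~0.06 Hz,
         * so the real trimming has to work in far smaller steps */
        printf("htotal 865: %.4f Hz\n", field_rate(13.5e6, 865, 625));
        return 0;
    }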

Please see the attached explanation file for how synchronization locks in
after video replay is started.

All you additionally need is a special VGA-to-SCART adapter cable like this:

http://www.vdr-portal.de/board/thread.php?postid=742945#post742945

At least the Intel i910, i915 and i945 chipsets are known to be compatible
with the patch. These chips are found in recent hardware like the Intel
D945GCLF or Asus Pundit P5945GC, including many netbooks.

You can build a cheap but high quality video replay device with that.

For further reference about frame rate control please see also:

http://linuxtv.org/pipermail/vdr/2008-July/017347.html
http://lists.freedesktop.org/archives/xorg/2008-September/038296.html

Cheers,
  Thomas

diff -ur ../xserver-xorg-video-intel-2.3.2.org/src/i810_driver.c ./src/i810_driver.c
--- ../xserver-xorg-video-intel-2.3.2.org/src/i810_driver.c	2009-02-09 21:28:24.0 +0100
+++ ./src/i810_driver.c	2009-02-09 21:31:00.0 +0100
@@ -262,7 +262,11 @@
    OPTION_NO_DDC,
    OPTION_SHOW_CACHE,
    OPTION_XVMC_SURFACES,
-   OPTION_PAGEFLIP
+   OPTION_PAGEFLIP,
+   OPTION_SYNC_FIELDS,
+   OPTION_YSCALE_FTUNE,
+   OPTION_YRGB_VPHASE,
+   OPTION_UV_VPHASE,
 } I810Opts;
 
 static const OptionInfoRec I810Options[] = {
@@ -275,7 +279,11 @@
    {OPTION_NO_DDC,		"NoDDC",	OPTV_BOOLEAN,	{0}, FALSE},
    {OPTION_SHOW_CACHE,		"ShowCache",	OPTV_BOOLEAN,	{0}, FALSE},
    {OPTION_XVMC_SURFACES,	"XvMCSurfaces",	OPTV_INTEGER,	{0}, FALSE},
-   {OPTION_PAGEFLIP,	"PageFlip", OPTV_BOOLEAN, {0},   FALSE},
+   {OPTION_PAGEFLIP,	"PageFlip", OPTV_BOOLEAN,	{0}, FALSE},
+   {OPTION_SYNC_FIELDS,		"SyncFields",	OPTV_BOOLEAN,	{0}, FALSE},
+   {OPTION_YSCALE_FTUNE,	"YScaleFineTune",OPTV_INTEGER,	{0}, FALSE},
+   {OPTION_YRGB_VPHASE,		"YRGB_VPhase",	OPTV_INTEGER,	{0}, FALSE},
+   {OPTION_UV_VPHASE,		"UV_VPhase",	OPTV_INTEGER,	{0}, FALSE},
    {-1,			NULL,		OPTV_NONE,	{0}, FALSE}
 };
 /* *INDENT-ON* */
@@ -2861,7 +2869,9 @@
 	 xf86DrvMsg(scrnIndex, X_PROBED,
 		    "Removing interlaced mode \"%s\"\n", mode->name);
       }
+#if 0 /* allow interlaced mode */
       return MODE_BAD;
+#endif
    }
    return MODE_OK;
 }
diff -ur ../xserver-xorg-video-intel-2.3.2.org/src/i830.h ./src/i830.h
--- ../xserver-xorg-video-intel-2.3.2.org/src/i830.h	2009-02-09 21:29:41.0 +0100
+++ ./src/i830.h	2009-02-09 21:31:00.0 +0100
@@ -524,6 +524,11 @@
Bool *overlayOn;
 #endif
 
+   Bool sync_fields;
+   int YScale_ftune;
+   int YRGB_vphase;
+   int UV_vphase;
+
/* EXA render state */
float scale_units[2][2];
   /** Transform pointers for src/mask, or NULL if identity */
diff -ur ../xserver-xorg-video-intel-2.3.2.org/src/i830_crt.c ./src/i830_crt.c
--- ../xserver-xorg-video-intel-2.3.2.org/src/i830_crt.c	2009-02-09 21:28:24.0 +0100
+++ ./src/i830_crt.c	2009-02-09 21:31:00.0 +0100
@@ -87,7 +87,7 @@
     if (pMode->Flags & V_DBLSCAN)
 	return MODE_NO_DBLESCAN;
 
-    if (pMode->Clock > 400000 || pMode->Clock < 25000)
+    if (pMode->Clock > 400000 || pMode->Clock < 12000) /* lower minimum dotclk */
 	return MODE_CLOCK_RANGE;
 
     return MODE_OK;
@@ -392,6 +392,12 @@
 out:
 i830ReleaseLoadDetectPipe (output, dpms_mode);
 
+#if 0 /* not yet */
+/*
+ * we also want to boot the Xserver without a CRT connected
+ */
+status = XF86OutputStatusConnected;
+#endif
 return status;
 }
 
diff -ur ../xserver-xorg-video-intel-2.3.2.org/src/i830_display.c ./src/i830_display.c
--- ../xserver-xorg-video-intel-2.3.2.org/src/i830_display.c	2009-02-09 21:28:24.0 +0100
+++ ./src/i830_display.c	2009-02-09 21:31:00.0 +0100
@@ -95,9 +95,9 @@
 #define I8XX_P2_LVDS_FAST	  7
 #define I8XX_P2_SLOW_LIMIT	 165000
 
-#define I9XX_DOT_MIN		  20000
+#define I9XX_DOT_MIN		  12000 /* allow for PAL modes */
 #define I9XX_DOT_MAX		 400000
-#define I9XX_VCO_MIN		1400000
+#define I9XX_VCO_MIN		1000000 /* allow for PAL modes */
 #define I9XX_VCO_MAX		2800000
 
 /* Haven't found any reason to go this fast, but newer chips support it */
@@ -964,6 +964,14 @@
 i830_crtc_mode_fixup(xf86CrtcPtr crtc, DisplayModePtr mode,
 		 DisplayModePtr adjusted_mode)
 {
+    if (mode->Flags & V_INTERLACE) {
+	mode->CrtcVDisplay = adjusted_mode->CrtcVDisplay = mode->VDisplay;
+	mode->CrtcVSyncStart = adjusted_mode->CrtcVSyncStart = mode->VSyncStart;
+	mode->CrtcVSyncEnd = adjusted_mode->CrtcVSyncEnd = mode->VSyncEnd;
+	mode->CrtcVBlankStart =

Re: problem with i830M interlaced VGA output (915G works fine)

2008-10-06 Thread Thomas Hilber
On Mon, Oct 06, 2008 at 09:58:57AM -0700, Keith Packard wrote:
> On Sun, 2008-10-05 at 09:21 +0200, Thomas Hilber wrote:
>> Hi list,
>>
>> with the attached patch and xorg.conf I successfully use interlaced modes
>> like 720x576i on a 915G chipset. For an i830M this unfortunately does not
>> yet work completely. First of all thanks to Keith Packard and
>> Krzysztof Halasa for their basic PIPEACONF investigations posted here:
>
> I845 and earlier chips (I didn't look at the i855 or i865) do not
> support interlaced display at all.

But it's quite strange: even my i810 runs in
interlaced mode (i.e. 720x576i) with exactly the patch above, with no
problems at all.

OK, but if there really is an 'interlace gap' from the i830 up to and
including the i865, then I will stop trying to find a solution for this issue.

Anyway, thank you for your answer!

- Thomas
