Re: Driver for CT69030 for rendering YUV data.

2004-01-08 Thread Billy Biggs
Tim Roberts ([EMAIL PROTECTED]):

 The most common are YUY2 and UYVY, both of which are 4:2:2 formats and
 have 12 bits per pixel.

  You mean 16 bits per pixel ;-)
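
  (For reference: packed 4:2:2 carries one Y sample per pixel plus one
U and one V sample shared by each pair of pixels, so it works out to 2
bytes per pixel; 12 bits per pixel is what the planar 4:2:0 formats
like YV12 need.  A minimal sketch of the arithmetic, helper name made
up:)

    /* Bytes needed for a packed 4:2:2 (YUY2/UYVY) image: 16 bits per
     * pixel -- 8 bits of Y per pixel plus 8 bits of chroma shared by
     * each pair of pixels.  Width is assumed to be even. */
    static unsigned int yuv422_packed_size(unsigned int width,
                                           unsigned int height)
    {
        return width * height * 2;
    }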


  -Billy

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Linear Allocator (Was Re: Alan H's new linear allocator and Neomagic Xv)

2003-10-30 Thread Billy Biggs
Alan Hourihane ([EMAIL PROTECTED]):

 Well, if someone else has a chip, or wants to update and test other
 drivers (but be careful with DRI enabled drivers as it needs more work
 in the driver). Here's a patch to the neomagic that should work, and
 could be used as a template for the other drivers.
 
 That's all. Most Xv (if not all) use linear allocation already and will
 take advantage of it straight away without any further modifications.

  Alan, do you know if it would help with the Radeon driver, re bug 830?

  http://bugs.xfree86.org/show_bug.cgi?id=830

  -Billy

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Linear Allocator (Was Re: Alan H's new linear allocator and Neomagic Xv)

2003-10-30 Thread Billy Biggs
Alan Hourihane ([EMAIL PROTECTED]):

 I've actually already done it, but I'll probably leave it till after
 XFree86 4.4.0.

  Regarding bug 830, does this mean my users can expect to sometimes hit
these situations where XVIDEO apps can't run until they shut down a
large application like their web browser?

  Don't take this the wrong way, I don't mind, I just want to understand
the situation so I can design appropriate error messages.
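
  What I have in mind on the application side is roughly the sketch
below: trap the failure and tell the user to free up video memory.
This assumes the failure surfaces as an X error such as BadAlloc on
the put request, which is only a guess on my part about how the
drivers report it.

    /* Sketch: notice when an Xv request fails, presumably for lack of
     * offscreen memory, and tell the user.  Assumes the failure shows
     * up as a BadAlloc error; drivers may well report it differently. */
    #include <X11/Xlib.h>

    static int xv_alloc_failed = 0;

    static int error_handler(Display *dpy, XErrorEvent *ev)
    {
        if (ev->error_code == BadAlloc)
            xv_alloc_failed = 1;
        return 0;
    }

    /* Install with XSetErrorHandler(error_handler) before calling
     * XvShmPutImage, then XSync(dpy, False) and check xv_alloc_failed
     * to decide whether to ask the user to close other applications. */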

  -Billy

 Once 4.4.0 is out, I'll merge that into the DRI CVS and create a branch
 for this work I've been doing on the radeon with regard to dynamic
 allocation & the DRI.
 
 Alan.
 
 On Thu, Oct 30, 2003 at 07:36:42AM -0800, Alex Deucher wrote:
  I'll take a look at fixing this in the radeon driver.  What needs to be
  done to play nice with the DRI?
  
  Alex
  
  --- Alan Hourihane [EMAIL PROTECTED] wrote:
   On Thu, Oct 30, 2003 at 07:47:06AM -0600, Billy Biggs wrote:
Alan Hourihane ([EMAIL PROTECTED]):

 Well, if someone else has a chip, or wants to update and test other
 drivers (but be careful with DRI enabled drivers as it needs more work
 in the driver). Here's a patch to the neomagic that should work, and
 could be used as a template for the other drivers.
 
 That's all. Most Xv (if not all) use linear allocation already and will
 take advantage of it straight away without any further modifications.

  Alan, do you know if it would help with the Radeon driver, re bug 830?

  http://bugs.xfree86.org/show_bug.cgi?id=830
   
   Potentially - yes. But the DRI parts need a little more work to play
   nicely with the Linear allocator.
   
   Alan.
  
  
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


MergedFB and XINERAMA

2003-09-08 Thread Billy Biggs
  As a more specific version of my last post, do MergedFB drivers not
support the XINERAMA client extension such that applications can know
that there are multiple heads?

  I would hope that these two things are not mutually exclusive.

  -Billy

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: MergedFB and XINERAMA

2003-09-08 Thread Billy Biggs
Alex Deucher ([EMAIL PROTECTED]):

 The radeon and SiS MergedFB drivers support their own internal
 xinerama extension to provide hints to xinerama aware apps.  They do
 not use the regular Xinerama extension.  The closed-source nvidia
 driver does a similar thing.

  Sure, I don't care about server internals, I only care that my
application can use libXinerama.a and get correct information.  I'm
trying to confirm that the hell I'm going through with the ATI firegl
driver is specific to that driver.
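
  (Specifically, all the app does is the standard query below; a
minimal sketch, error handling trimmed:)

    /* Sketch: ask libXinerama for per-head geometry, which is the only
     * information the application relies on. */
    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <X11/extensions/Xinerama.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        int event_base, error_base, nheads, i;
        XineramaScreenInfo *heads;

        if (!dpy || !XineramaQueryExtension(dpy, &event_base, &error_base)
                 || !XineramaIsActive(dpy)) {
            printf("Xinerama not available; assuming a single head.\n");
            return 0;
        }
        heads = XineramaQueryScreens(dpy, &nheads);
        for (i = 0; i < nheads; i++)
            printf("head %d: %dx%d at +%d+%d\n", heads[i].screen_number,
                   heads[i].width, heads[i].height,
                   heads[i].x_org, heads[i].y_org);
        XFree(heads);
        XCloseDisplay(dpy);
        return 0;
    }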

  -Billy

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Dual-head without XINERAMA ?

2003-09-08 Thread Billy Biggs
Thomas Winischhofer ([EMAIL PROTECTED]):

 Billy Biggs wrote:
 
 http://vektor.ca/bugs/atidriver/xpdy2.log includes the lines
 screen #0:
   dimensions:    2560x1024 pixels (867x347 millimeters)
   resolution:    75x75 dots per inch
 is that any use ?
 
  Not in general.  I use the vidmode extension to get the current
  resolution, since my users often make 720x480 modelines and such
  things and switch to that (using ctrl-alt-+) to play video.
  However, I use the geometry information to calculate the pixel
  aspect ratio to use.
 
 Erm, I might be mistaken, but the geometry information has nothing to
 do with the current display mode. If I have a screen of 1024x768,
 260x195mm according to xdpyinfo, I still receive the same values after
 switching, say, to 1280x768 (which has a totally different aspect
 ratio)... hence, geometry is static and obviously independent of the
 current display mode...

  Calculating a pixel aspect ratio depends on the current resolution of
the display and the geometry information of the display.  You're
correct that the geometry information is static, but the resolution
isn't; that's why I have to use the vidmode extension to get the
current resolution.
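
  (Concretely, what my code boils down to is roughly the sketch below,
using the vidmode extension for the live resolution and the core
protocol's millimetre geometry; the helper name is made up:)

    /* Sketch: pixel aspect ratio from the static mm geometry and the
     * current mode reported by the XFree86-VidModeExtension. */
    #include <X11/Xlib.h>
    #include <X11/extensions/xf86vmode.h>

    static double current_pixel_aspect(Display *dpy, int screen)
    {
        XF86VidModeModeLine mode;
        int dotclock;
        double mm_w, mm_h;

        /* The current mode tracks ctrl-alt-+ switches. */
        XF86VidModeGetModeLine(dpy, screen, &dotclock, &mode);

        /* The mm size is static and covers the whole X screen; on the
         * firegl setup above it spans both heads, which is exactly
         * what breaks the calculation. */
        mm_w = DisplayWidthMM(dpy, screen);
        mm_h = DisplayHeightMM(dpy, screen);

        return (mm_w / mode.hdisplay) / (mm_h / mode.vdisplay);
    }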

  So in this case, vidmode tells me our resolution is 1280x1024, and X
  tells me that we're not using XINERAMA and that our geometry is
  867x347 millimeters.
 
   Makes sense?
 
 Not really. That xdpyinfo output is strange - 2560x1024 looks like two
 screens of 1280x1024 side by side; is this a radeon machine using
 Alex' driver? Seems it's not the current one as it reports that
 Xinerama is not supported.

  Correct, I meant what I'm doing makes sense.  The result does not.
This is the ATI firegl driver and it does not seem to support XINERAMA.
There are actually two screens both of size 1280x1024 and this is why my
code comes up with an incorrect pixel aspect ratio.

  So, does it now make sense to you what I'm doing and why this is so
bad? :)   Sorry for the poor explanation.

  -Billy

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Dual-head without XINERAMA ?

2003-09-08 Thread Billy Biggs
Thomas Winischhofer ([EMAIL PROTECTED]):

 No idea what the ATI folks have done there... seems to be some sort of
 mergedfb mode, too.
 
 Does that piece support normal dual head mode (i.e. 2 device
 sections, 2 screen sections, etc.)?

  Yes it does, but users always whine and complain when I tell them to
use it.

  -Billy

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: MergedFB and XINERAMA

2003-09-08 Thread Billy Biggs
Thomas Winischhofer ([EMAIL PROTECTED]):

 Alex Deucher wrote:
 it would appear so.
 
 --- Billy Biggs [EMAIL PROTECTED] wrote:
 
 Alex Deucher ([EMAIL PROTECTED]):
 
 
 The radeon and SiS MergedFB drivers support their own internal
 xinerama extension to provide hints to xinerama aware apps.  They do
 not use the regular Xinerama extension.  The closed-source nvidia
 driver does a similar thing.
 
  Sure, I don't care about server internals, I only care that my
 application can use libXinerama.a and get correct information.  I'm
 trying to confirm that the hell I'm going through with the ATI firegl
 driver is specific to that driver.
 
 An X log of this user using this specific mode would help...

  http://vektor.ca/bugs/atidriver/

  The X log is there.

  -Billy

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Dual-head without XINERAMA ?

2003-09-07 Thread Billy Biggs
  Hey all, I'm looking for some advice about a driver which is in
dual-head mode but does not seem to support XINERAMA.  This driver is
the latest ATI (firegl?) driver.  Logs from this server were sent to me
by the user:  http://vektor.ca/bugs/atidriver/


  I'm wondering if there are other drivers that do this, and if anyone
knows of a way I could detect this case.


  The problem is that my application needs geometry information to
calculate the pixel aspect ratio.  X tells me the geometry is 867mm x
347mm but the vidmode extension tells me the current resolution is
1280x1024, and so my pixel aspect ratio calculation thinks the user is
going through an anamorphic lens and they end up with very stretchy
looking video.

  I need to use the vidmode extension because people often switch to a
lower resolution to watch video, so I can't just use
DisplayWidth/DisplayHeight.

  Any advice?

  -Billy


___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Dual-head without XINERAMA ?

2003-09-07 Thread Billy Biggs
Andrew C Aitchison ([EMAIL PROTECTED]):

  The problem is that my application needs geometry information to
  calculate the pixel aspect ratio.  X tells me the geometry is 867mm
  x 347mm but the vidmode extension tells me the current resolution is
  1280x1024, and so my pixel aspect ratio calculation thinks the user
  is going through an anamorphic lens and they end up with very
  stretchy looking video.
 
 http://vektor.ca/bugs/atidriver/xpdy2.log includes the lines
   screen #0:
     dimensions:    2560x1024 pixels (867x347 millimeters)
     resolution:    75x75 dots per inch
 is that any use ?

  Not in general.  I use the vidmode extension to get the current
resolution, since my users often make 720x480 modelines and such things
and switch to that (using ctrl-alt-+) to play video.  However, I use the
geometry information to calculate the pixel aspect ratio to use.

  So in this case, vidmode tells me our resolution is 1280x1024, and X
tells me that we're not using XINERAMA and that our geometry is 867x347
millimeters.

  Makes sense?

  -Billy
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Performance problems revisited

2003-07-21 Thread Billy Biggs
  The 'Athlon related mystery' thread concluded by recommending the
removal of O_SYNC on the open to /dev/mem to solve performance problems
in SuSE's XFree86 packages [1].  I still have many users with similar
performance problems, and I want to know how to better debug it.

  1. A SuSE user with a Radeon 8500, P4 1.8, who gets 133fps with
     xvtest, but seemingly good performance from x11perf -shmput500 [2].
     Installing the O_SYNC fix from SuSE did not improve performance!
     Something similar happens with Gentoo xfree-4.3.0-r2.

  2. An i815 with a Celeron 800, MX3S-T motherboard.  Gets about 60fps
 using xvtest, 273 blits/sec with x11perf -shmput500 at 16bpp, using
 Gentoo xfree-4.3.0-r2.  Does not use O_SYNC on /dev/mem.

 Would this have anything to do with it:
 (II) I810(0): xf86BindGARTMemory: bind key 0 at 0x
  (pgoffset 0)
 (WW) I810(0): xf86AllocateGARTMemory: allocation of 1024 pages
  failed (Cannot allocate memory)
 (II) I810(0): No physical memory available for 4194304 bytes of
  DCACHE

  3. A user with a SiS651, a P4 2.4G and Thomas' latest driver, using
 both DanielS's 4.3 packages and the standard 4.2.1 debian packages,
 gets terrible performance.  Only 430fps from xvtest while Thomas,
 with similar hardware, gets 1800fps.  ASUS Pundit.

 On this one we're completely stuck.  No O_SYNC.

  4. Vladimir Dergachev noted some odd performance statistics here [3].

  Are there more hardware or BIOS configurations I can check that can
change video memory performance?  These XVIDEO drivers usually do
nothing more than a memcpy(), at least for SiS and i815.  My list:

  - MTRRs enabled (they almost always are); a quick check is sketched
    after this list.
  - Compare xvtest with shmput500 speed; if it shows a discrepancy,
    look at the driver source (but usually we learn nothing from that!)
  - Motherboard AGP chipset.  Does this matter if the driver uses the
    CPU to copy?  Can just loading agpgart help?
  - Use of O_SYNC.  So far it seems only that SuSE release used it, and
    for some users the fix didn't seem to help.
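
  For the MTRR check mentioned above, a minimal sketch that just dumps
/proc/mtrr so the user can confirm a write-combining entry covers the
framebuffer (Linux-specific; the framebuffer address has to be read
out of the server log by hand):

    /* Sketch: print the MTRR table; look for a "write-combining" entry
     * that covers the video memory aperture. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/mtrr", "r");
        char line[256];

        if (!f) {
            perror("fopen /proc/mtrr");
            return 1;
        }
        while (fgets(line, sizeof(line), f))
            fputs(line, stdout);
        fclose(f);
        return 0;
    }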

  What else?

  -Billy

  [1]  This thread is not on mail-archive.com, which seems to have
   missed most of April/May of this year.

  [2]  See http://bugs.xfree86.org/show_bug.cgi?id=414 with detailed
   logs.

  [3]  http://www.mail-archive.com/devel%40xfree86.org/msg01756.html

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Performance problems revisited

2003-07-21 Thread Billy Biggs
Michel Dänzer ([EMAIL PROTECTED]):

 On Mon, 2003-07-21 at 13:29, Billy Biggs wrote:
  
1. A SuSE user with a Radeon 8500, P4 1.8, that gets 133fps with
   xvtest, but seemingly good performance from x11perf -shmput500 [2].
 
 The radeon driver uses the CP for image writes, does
 
   Option XaaNoScanlineImageWriteRect
 
 have a similar impact on x11perf -shmput500 performance?

  I've added this question to the bug report.  (414 on bugzilla).

- Motherboard AGP chipset.  Does this matter if the driver uses the
  CPU to copy?  Can just loading agpgart help?
 
 No.
 
 I wonder if the video capture card could have something to do with it?

  It's idle when we test with 'xvtest'.

  -Billy
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Performance problems revisited

2003-07-21 Thread Billy Biggs
Mark Vojkovich ([EMAIL PROTECTED]):

  While I'm at it, how hard do you think it would be to do triple
  buffering in the NVIDIA driver for this same problem?
 
 NVIDIA hardware can only double buffer.  Using more buffers than two
 would require queuing them up and programming the new buffers in the
 interrupt handler.

  Can you query which buffer is being displayed?  I'd just like to
replace tearing with frame drops if possible, so changing which buffer
is queued next would be sufficient.

  -Billy

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Performance problems revisited

2003-07-21 Thread Billy Biggs
Mark Vojkovich ([EMAIL PROTECTED]):

 On Mon, 21 Jul 2003, Billy Biggs wrote:
 
  Mark Vojkovich ([EMAIL PROTECTED]):
  
While I'm at it, how hard do you think it would be to do triple
buffering in the NVIDIA driver for this same problem?
   
   NVIDIA hardware can only double buffer.  Using more buffers than two
   would require queuing them up and programming the new buffers in the
   interrupt handler.
  
Can you query which buffer is being displayed?  I'd just like to
  replace tearing with frame drops if possible, so changing which buffer
  is queued next would be sufficient.
 
   I know which one is being displayed, but I can't override the one
 that is pending.  If I get a PutImage request and there is already
 one buffer being displayed and one pending, I have three choices.
 
 1) Drop the new request entirely.
 2) Spin until a buffer is free.
  3) Overwrite the data in the pending buffer.  This will tear if the
 buffer switches while you are overwriting.
 
 I currently do #3 and there is a #if in the nv driver allowing you
 to switch to #2.

  Does the NVIDIA driver do something different since it can listen on
the interrupt?  Does it queue it up or something?

  -Billy

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: text flickering when scrolling

2003-05-31 Thread Billy Biggs
Carsten Haitzler ([EMAIL PROTECTED]):

 On Fri, 30 May 2003 13:16:54 +0100 Asier Goikoetxea Yanci [EMAIL PROTECTED]
 babbled:
 
  Hi there,
  
  I have written an application to scroll a text on a window (just like kbanner
  screen saver but on a window). I provide different texts and the program
  scrolls them one after the other quite smoothly.
  
  However, there is an issue that I didn't manage to solve: flickering. I've
  been searching and I found that the flickering is caused because the drawing is
  not synchronised with the vertical frequency/signal (or at least that is what I
  understood). It seems that drawing the text on the screen/window when vblank
  is happening solves the problem of flickering. And it seems that I have to
  use the SYNC extension in order to solve this problem. Am I right? Correct me
  please if I am wrong.
  
  Well, the thing is that I have no clue on how to use the SYNC extension, not even
 
 That still won't solve the problem. Use double buffering: draw to a
 pixmap, then when drawing is done copy it to the window.

  That will solve flickering, but won't solve tearing.  I'd argue that
both are pretty important. :)
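
  For the flickering half, the pixmap trick Carsten describes is
roughly the sketch below (names made up; reusing one pixmap instead of
creating it per frame would be the sane thing to do):

    /* Sketch of pixmap double buffering: render the frame into an
     * offscreen Pixmap, then copy it to the window with one XCopyArea
     * so the user never sees a half-drawn frame.  No flicker, but
     * tearing is still possible because the copy is not synchronised
     * with the vertical retrace. */
    #include <X11/Xlib.h>

    static void draw_frame(Display *dpy, Window win, GC gc,
                           int width, int height, unsigned int depth)
    {
        Pixmap back = XCreatePixmap(dpy, win, width, height, depth);

        /* Draw the scrolled text into the pixmap, not the window. */
        XSetForeground(dpy, gc, WhitePixel(dpy, DefaultScreen(dpy)));
        XFillRectangle(dpy, back, gc, 0, 0, width, height);
        XSetForeground(dpy, gc, BlackPixel(dpy, DefaultScreen(dpy)));
        XDrawString(dpy, back, gc, 10, height / 2, "hello", 5);

        /* One copy to the window, then throw the pixmap away. */
        XCopyArea(dpy, back, win, gc, 0, 0, width, height, 0, 0);
        XFreePixmap(dpy, back);
        XFlush(dpy);
    }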

  -Billy

 
  if it would solve my problem. Any suggestion is wellcome.
  
  Thank you.
  
  Asier
 
 
 -- 
 --- Codito, ergo sum - I code, therefore I am 
 The Rasterman (Carsten Haitzler)[EMAIL PROTECTED]
 [EMAIL PROTECTED]
 Mobile Phone: +61 (0)413 451 899Home Phone: 02 9698 8615
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


savage XVideo annoyance

2003-03-10 Thread Billy Biggs
  On startup, the savage driver protects its call to initialize its
XVideo code like this (savage_driver.c in 4.2.1 and 4.3):

    if (!psav->NoAccel && !SavagePanningCheck(pScrn))
        SavageInitVideo(pScreen);

  The 'SavagePanningCheck()' call checks to make sure that the virtual
desktop resolution is equal to the current HDisplay/VDisplay, that is,
the resolution on startup.

  A user of my app complained that XVideo wasn't working.  As it turned
out, he had the resolutions listed backwards in his config file, so on
startup virtual != res, no Init called, and hence no XVideo surfaces.

  I have to wonder if that check is a bug.  It took a long time to
figure out why there were no XVideo surfaces, especially since no
message is sent to the log.  I can think of three resolutions:

  a) Patch to warn to the log file if the PanningCheck call fails.
  b) Take out the PanningCheck call or rewrite Init to work without it.
  c) Ignore this because some xrandr-related stuff makes this obsolete
     in 4.3 ?

  I am only qualified to do (a), and have no hardware to test with. :-)
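
  For (a), I imagine something like the untested sketch below, around
the existing check (exact wording of the message up for grabs):

    if (!psav->NoAccel && !SavagePanningCheck(pScrn))
        SavageInitVideo(pScreen);
    else if (!psav->NoAccel)
        xf86DrvMsg(pScrn->scrnIndex, X_WARNING,
                   "Virtual resolution differs from the current mode, "
                   "not initializing XVideo.\n");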

  Thoughts?

-- 
Billy Biggs
[EMAIL PROTECTED]
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Radeon driver hides cursor over XVideo windows?

2003-02-23 Thread Billy Biggs
Vladimir Dergachev ([EMAIL PROTECTED]):

 Which application was used ?

  Ok nevermind, the user now wants me to say that he is an idiot.

  Sorry for wasting your time,
  -Billy
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: controlling refresh rate of the graphics card

2003-01-30 Thread Billy Biggs
  Hello etienne,

  You seem to have two problems.  One I think is easy to solve, another
I think is very difficult.

 At 2003\01\23 10:45 +1100 Thursday, etienne deleflie wrote:
 
  My application uses hardware acceleration (using xv) to draw YUV to
  screen using XvShmPutImage(...)
 
  Video is being displayed at the rate of 25 fps & the refresh
  rate is higher, and not synced... so I sometimes get
  really bad shearing when there is lots of movement in the video.
  (shearing: visible lines where the graphics card sends a half
  updated image to screen)
 
  I am running Debian on a DELL 8200 laptop that has a Geforce Go
  card in it.

  Your first problem seems to be one of shearing or tearing.  This is
usually caused by your XVideo driver having a single video buffer to
write to.  That is, after the call to XvShmPutImage, the driver just
copies the image straight to an off-screen overlay surface that's being
actively put on the screen by the card.

  So, to avoid this, most drivers use double buffering.  That is, they
write the image to a secondary off-screen buffer, then queue a 'page flip
on next retrace' command to the video card.  If you're using the nVidia
binary drivers, run 'xvinfo', I believe they have a special XV attribute
to enable or disable double buffering.  This will cure your shearing
problem.
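
  If the attribute is there, flipping it on from the application side
is just the sketch below; check the attribute name against what xvinfo
actually reports, I am going from memory:

    /* Sketch: turn on driver-side double buffering for an Xv port.
     * "XV_DOUBLE_BUFFER" is the attribute name I remember the nv and
     * nvidia drivers exposing; verify it with xvinfo first. */
    #include <X11/Xlib.h>
    #include <X11/extensions/Xvlib.h>

    static void enable_double_buffer(Display *dpy, XvPortID port)
    {
        Atom attr = XInternAtom(dpy, "XV_DOUBLE_BUFFER", True);

        if (attr != None)
            XvSetPortAttribute(dpy, port, attr, 1);
    }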

  You then ask a completely separate question:

  anyone know if / how I can control the graphics card's refresh rate
  to match the video ?
 

etienne deleflie ([EMAIL PROTECTED]):

 I want to be able to control the refresh rate programmatically, for
 a piece of software used in live video performance.
 
 I don't know if my software is going to be spitting out 25 fps or
 22.341 fps (depending on how much processing I am trying to do),
 so I want to make a call to the graphics card myself, to tell it to
 refresh exactly after each frame has been drawn.

  For best smoothness on 25fps video, it is nice to try and set the
refresh rate to a multiple of this and ensure that each frame is shown
for, say, exactly 3 refreshes.  This is usually not such a concern for
low framerate video: in my application it is a concern, since I output
at 60fps or 50fps, which is close to the monitor's frequency (my app is
http://tvtime.sourceforge.net).
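
  Programmatically, the closest thing I know of is to pick an
already-configured modeline whose refresh rate is near a multiple of
the content rate and switch to it with the vidmode extension; a rough
sketch, with no error handling and ignoring interlace/doublescan
flags:

    /* Sketch: switch to the modeline whose vertical refresh is closest
     * to an integer multiple of the content frame rate.  This can only
     * choose among modes already in the config; it cannot genlock. */
    #include <math.h>
    #include <X11/Xlib.h>
    #include <X11/extensions/xf86vmode.h>

    static void switch_to_best_mode(Display *dpy, int screen, double fps)
    {
        XF86VidModeModeInfo **modes;
        int nmodes, i, best = 0;
        double best_err = 1e9;

        XF86VidModeGetAllModeLines(dpy, screen, &nmodes, &modes);
        for (i = 0; i < nmodes; i++) {
            /* dotclock is in kHz: refresh = clock * 1000 / (htotal * vtotal) */
            double refresh = (double) modes[i]->dotclock * 1000.0
                             / (modes[i]->htotal * modes[i]->vtotal);
            double mult = floor(refresh / fps + 0.5);
            double err;

            if (mult < 1.0)
                mult = 1.0;
            err = fabs(refresh - mult * fps);
            if (err < best_err) {
                best_err = err;
                best = i;
            }
        }
        XF86VidModeSwitchToMode(dpy, screen, modes[best]);
        XFree(modes);
    }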

  This problem is more difficult.  Last year I did some hacking on my X
server to allow me to change the refresh rate on the fly, but found that
the current code would cause the monitor to resync every time I did
that, making it useless to try and 'software-genlock' video.  I was able
to set specific refresh rates that were good for my video, so, 72hz for
24fps content, 120hz for 60fps content, etc, and then my player would
use a vsync interrupt I got from elsewhere to try and control blits.  I
was unable to come up with a system where you could keep updating the
refresh rate as the video plays.

  Based on your above problem of tearing and shearing though, I do not
think that you require this level of smoothness or control, and I
expect you will be happy once you get your XVideo driver to start
double buffering.

  Good luck,

-- 
Billy Biggs
[EMAIL PROTECTED]
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel