Re: [Xpert]Trident and XVideo

2002-01-04 Thread Alan Hourihane

You've got the options now to adjust this yourself:

Option "XvHsync" "value"
Option "XvVsync" "value"

where value is an integer for displacement in that direction.
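
For example, in the Device section of XF86Config (the driver name and the
values here are only an illustration; the right values for your panel take
some experimenting):

    Section "Device"
        Identifier  "Trident"
        Driver      "trident"
        Option      "XvHsync"  "2"    # shift the Xv window horizontally
        Option      "XvVsync"  "1"    # shift the Xv window vertically
    EndSection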

Alan.

On Fri, Jan 04, 2002 at 12:49:04PM +0100, [EMAIL PROTECTED] wrote:
 Hello,
 
 I have noticed a small but annoying problem in the Xv support with the Trident
 Cyberblade e4. Some lines of garbage are displayed on the edges of the Xv
 window (or the screen if I use it in fullscreen mode). Most often the visible
 lines are blue at the top and some flickering garbage to the right (vertical),
 but in fullscreen mode I tend to get a green one at the bottom of the screen too.
 
 I am running a 4.1.99 version from CVS.
 
 If anyone tries to fix this, I will assist in any way I can (testing etc.).
 
 Regards
 
 Henrik
 [EMAIL PROTECTED]
 
 -- 
 44 6F 6E 27 74 20 79 6F 75 20 68 61 76 65 20 61 6E 79 74 68 69 6E 67 20
 62 65 74 74 65 72 20 74 6F 20 64 6F 20 74 68 61 6E 20 74 6F 20 63 6F 6E
 76 65 72 74 20 68 65 78 20 73 74 72 69 6E 67 73 3F



Re: [Xpert]Trident and XVideo

2002-01-04 Thread henrikj

Hello Alan,

These are the results I get from changing the sync values:

With XvHsync 0 and XvVsync 0 I get a green line at the bottom of the window, and
some flickering to the right in the window. The bottom line is not all green;
there is some junk in it too.

If I increment XvVsync I get rid of the green line at the bottom, but I get a
blue line at the top, which should indicate that it was centered at 0.

When incrementing XvHsync, the line to the left was a copy of the line next to
it, and the flickering on the right did not go away. I also tried setting
XvHsync to 5, and I got a few blue lines to the left, but I still get the
flickering to the right.

I have tried several players/video clips; they differ somewhat in the color
of the line at the bottom, but the flickering is always the same.

Regards

Henrik



On Fri, Jan 04, 2002 at 12:12:27PM +, Alan Hourihane wrote:
 You've got the options now to adjust this yourself:
 
   Option "XvHsync" "value"
   Option "XvVsync" "value"
 
 where value is an integer for displacement in that direction.
 
 Alan.
 

-- 
44 6F 6E 27 74 20 79 6F 75 20 68 61 76 65 20 61 6E 79 74 68 69 6E 67 20
62 65 74 74 65 72 20 74 6F 20 64 6F 20 74 68 61 6E 20 74 6F 20 63 6F 6E
76 65 72 74 20 68 65 78 20 73 74 72 69 6E 67 73 3F



Re: [Xpert]Best 2D-only card for X11

2002-01-04 Thread Michel Dänzer

On Fri, 2002-01-04 at 08:49, Ross Vandegrift wrote:
  What is the best pure 2D card in X11 with non-binary-only (read: nvidia)
  drivers? Does a Matrox G450/550 compare to GeForce & Radeon in 2D?
 
 Matrox is the only company I've ever heard make noise about their 2D
 performance.  The box from my G400 DualHead billed it as the fastest
 2D accelerator ever created.  Don't know if it's true, but the 2D
 performs quite well for me!

The mga driver has a very good reputation for 2D performance, but I just
replaced a G450 with a Rage128 Pro in this work machine and it's at least
as fast in general; in fact it feels slightly snappier, but maybe that's
just me. :) A Radeon should be significantly faster in turn. The only thing
still lacking is Render acceleration for AA text. I'm experimenting with
that, but no dice yet.


-- 
Earthling Michel Dänzer (MrCooper)/ Debian GNU/Linux (powerpc) developer
XFree86 and DRI project member   /  CS student, Free Software enthusiast



[Xpert]XF86 CVS stability? / savage with XV

2002-01-04 Thread forrest whitcher


I'd like to be using Xv with my Savage IXC, which looks to be supported
in the current CVS ...

While I don't want to overwrite a working 4.1 install, CVS looks like
the only way to get this.

Any reason to think this will / won't work?

forrest



Re: [Xpert]Radeon VE getting faulty EDID from digital flatpanel

2002-01-04 Thread Ed Hudson


For Radeon VEs, and Radeon All-In-Wonders with DVI-D
connections,

what I find works best is to let XFree86 -configure
build an XF86Config, then add

Modes "1024x768"

(specifically, the exact size of the flat panel)
to the appropriate display depth (NOTE: NO OTHER TIMING INFO
or ModeLine for 1024x768!),
and then add a

VertRefresh  59.0 - 61.0

line in the Monitor section.
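
Put together, the relevant pieces look something like this (a sketch only;
the identifiers are placeholders, and "1024x768" should be your panel's
exact size):

    Section "Monitor"
        Identifier   "Panel"
        VertRefresh  59.0 - 61.0
    EndSection

    Section "Screen"
        Identifier    "Screen0"
        Device        "Radeon"
        Monitor       "Panel"
        DefaultDepth  24
        SubSection "Display"
            Depth   24
            Modes   "1024x768"    # exact panel size; no other ModeLine
        EndSubSection
    EndSection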

This works perfectly for the 3 flat-panel monitors with
DVI-D inputs that I've tried it on:

Sony SDM-81
Philips Brilliance 180P  (this has problems with DVI-D
  input in general, outside X11)
HP (10x7, two years old...)

(the first two are 1280x1024...)

Please note that it is VERY important to have the DVI-D plugged
in at BIOS boot time, before doing any of this.  hot

-elh

 Hello,
 I have recently got this digital flat panel to work with a Radeon VE 
 (QY) in xf4.1.99, using a quite unusual approach.
 
 The problem is that the panel is not getting the timings it requires,
 even though the X server successfully reads the EDID info. To get it to
 work I have to boot up with the panel connected to the DVI connector on
 the Radeon VE, then disconnect it before starting the X server, so that
 the EDID cannot be read (just turning the panel off doesn't help), and
 then connect it after the server has come up.
 




Re: [Xpert]Best 2D-only card for X11

2002-01-04 Thread Ross Vandegrift

  Matrox is the only company I've ever heard make noise about their 2D
  performance.  The box from my G400 DualHead billed it as the fastest
  2D accelerator ever created.  Don't know if it's true, but the 2D
  performs quite well for me!
 
 The mga driver has a very good reputation for 2D performance, but I just
 replaced a G450 with a Rage128 Pro in this work machine and it's at
 least as fast in general, in fact it feels slightly snappier, but maybe
 that's just me. :) A Radeon should be significantly faster in turn. The
 only thing lacking yet is Render acceleration for AA text. I'm
 experimenting with that but no dice yet.

Hmmm, that's really interesting.  Maybe I'll have to see if I can find
some ATI cards and do a comparison.  Is it most likely the hardware and
not the drivers?  I'm also mostly interested in fast 2D performance from
a card.

(A friend of mine has a Rage 128 Pro.  Maybe I'll see if I can borrow
it and do some benchmarks.)
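
x11perf is the usual tool for this kind of head-to-head comparison; for
example (the tests below are only a small sample, and the file names are
made up):

    % x11perf -rect500 -copywinwin500 -putimage500 > mga.results
    # swap cards, run the same tests into r128.results, then:
    % x11perfcomp mga.results r128.results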

Ross Vandegrift
[EMAIL PROTECTED]



[Xpert]Release of CyberBlade driver information

2002-01-04 Thread David L. Gehrt

I hope that this somehow wends its way to the correct people.  I
recently purchased a Toshiba 1805-274 laptop with a Trident CyberBlade-T
(maybe CyberBlade-XP Ai1) graphics chip, in part because I relied on the
representations on the Trident Web page that the graphics controller was
supported by Linux drivers.  When I installed Linux (I use RH 7.2, which
came with XFree86 4.1) I discovered that the Linux Trident driver
support did not include the CyberBlade-T (or whatever graphics
controller is on the Toshiba laptop).  The then-existing CyberBlade
driver produced an image, but it was unsatisfactory.

After some examination of the XFree86 information resources, and an
exchange of e-mail, a cooperative developer was able to produce a driver
which will be part of the XFree86 4.2 distribution.

I downloaded the 4.1.99 sources (this being the 4.2 release candidate),
compiled, and installed the resulting binaries, including the CyberBlade
driver.  Upon starting the new XFree86 driver the problems experienced
earlier were gone, but the absence of acceleration code renders the
driver slow.  I consider the current driver unusable for applications
that require high-performance graphics, and marginal, or slightly better
than marginal, for normal applications.  My judgment is based on
comparison with graphics cards, a year or two old, in several desk-side
systems.

The problem seems to be the failure to release information on hardware
acceleration to XFree86 driver developers so that they can produce an
open source accelerated driver for the CyberBlade-T graphics interface.
I do not understand this reluctance.  I have written to the XFree86 Xpert
mailing list suggesting that people considering a purchase of a system to
run Linux, and by implication any open source operating system, avoid
systems that use CyberBlade graphics controllers until this situation
gets cleared up.

So the bottom line is that I am interested in the answers to a couple
of questions:

What is the justification for not releasing the information needed to
produce a driver that provides hardware acceleration for the X Window
System?  This really is not meant as an accusatory question; I really am
interested in understanding Trident's position on this.

How am I supposed to understand the assertion on your web page that
there are drivers for Linux, when that does not seem to be accurate?
Again, these days I do not think that such an assertion can be accurate
without full support for the windowing system Linux (or any other OS) uses.

Respectfully,

dlg

David L. Gehrt

P.S.  I would have sent e-mail to Toshiba, but the Toshiba web page did
not seem to list any e-mail addresses.



Re: [Xpert]Best 2D-only card for X11

2002-01-04 Thread Owen Taylor


Ross Vandegrift [EMAIL PROTECTED] writes:

   Matrox is the only company I've ever heard make noise about their 2D
   performance.  The box from my G400 DualHead billed it as the fastest
   2D accelerator ever created.  Don't know if it's true, but the 2D
   performs quite well for me!
  
  The mga driver has a very good reputation for 2D performance, but I just
  replaced a G450 with a Rage128 Pro in this work machine and it's at
  least as fast in general, in fact it feels slightly snappier, but maybe
  that's just me. :) A Radeon should be significantly faster in turn. The
  only thing lacking yet is Render acceleration for AA text. I'm
  experimenting with that but no dice yet.
 
 Hmmm, that's really interesting.  Maybe I'll have to see if I can find
 some ATI cards and do a comparison.  Is it most likely the hardware and
 not the drivers?  I'm also mostly interested in fast 2D performance from
 a card.
 
 (A friend of mine has a Rage 128 Pro.  Maybe I'll see if I could borrow
 it and do some benchmarks)

Most 2D operations (blits, area fills, etc.) are infinitely fast
these days for all practical purposes on modern cards with a driver
that can accelerate the basic operations. Performance has to do with:

 - Usage of video RAM.
 - Acceleration of the RENDER extension, if that is in use.
 - Bus bandwidth. (The speed of getting data to and from the card matters.)

Only the third has any significant dependence on hardware alone. The
first is mostly a function of the XFree86 core code, combined with the
amount of video RAM available. The second is mostly a driver issue,
though speed does depend on the card; of the two I've tested with hw
accel, the G400 is darn fast, and the nvidia binary drivers are a lot
faster yet.

I like the Matrox cards because they produce high quality output, are
pretty well accelerated, and have docs available to the community; but in
terms of pure speed, even for 2D operations, they probably lag recent
ATI and nvidia cards. ATI also does pretty well on the quality and
OSS fronts, and if you have any interest in 3D, their cards are a better
bet. (Though the Matrox cards work fine for Quake3-level games.)

In the end, 2D performance shouldn't be much of an issue for users on
any decently supported video card these days. The exceptions to this
are typically application, toolkit, server, or driver problems,
not HW limitations.

Regards,
Owen



[Xpert]Xfree 4.1, Matrox G-450 Dualhead questions

2002-01-04 Thread Tom Manning

I've been reading everything I can find on this card, and there's so much 
conflicting and outdated info out there I'm getting confused.  I'm running 
Mandrake 8.1 on an older Pentium II, with a BX chipset.  

**I know some of these questions are newbie-type questions, but if I fully
understood the howtos and such I wouldn't ask, so please bear with me :-)**

I'm considering buying a G450 for dualhead on my machine, but I want to know
if I can hook up a 19" (primary) monitor and a 15" (satellite) monitor with
few problems, at different resolutions. I've heard that Matrox has
antialiasing problems with their drivers, though. Is that still true? Does
anyone have specifics? Can I run it in Xinerama, with one big desktop, at
different resolutions? Do I have to, or can I have two separate desktops
running, just sacrificing the ability to move windows from one screen to the
other?

In short, why would I NOT want to buy this card?

Thanks for any help!

Tom



[Xpert]Evaluating Memory Usage

2002-01-04 Thread Christopher Browne

Is there some reasonable way of actually evaluating what memory is
being used by X these days?

% ps aux | grep X | grep deferglyph
root   412  5.8 10.0 231512 64540 ?  S   13:12   4:23 /usr/bin/X11/X :0 
-deferglyphs 16 -nolisten tcp vt7 -auth /var/lib/gdm/:0.Xauth

This suggests that XFree86 (32MB GeForce2 card on Linux 2.4.16 with 4
virtual screens) is consuming something around 230MB of memory.  I
gather that the figure includes the 32MB, probably counted multiple
times.
--
(concatenate 'string cbbrowne @ntlug.org)
http://www.ntlug.org/~cbbrowne/internet.html
Lisp stoppped itself
FEP Command:



[Xpert]Xv without the nvidia binary driver

2002-01-04 Thread ephemeron

With XFree86 4.1, it appears that downloading the nVidia binary drivers is the
only way to get Xv working, which can be a problem for those who aren't using
an RPM-based system. Are there any plans for Xv support to be included in the
XFree86 source?



Re: [Xpert]Xv without the nvidia binary driver

2002-01-04 Thread Andy Ritger



On Sat, 5 Jan 2002, ephemeron wrote:

 With XFree86 4.1. it appears that downloading the nVidia binary drivers
 is the only way to get Xv working. Which can be a problem for those who
 aren't using an RPM-based system.

There are also tarballs available on NVIDIA's website.

- Andy Ritger


 Are there any plans for Xv support to
 be included in the xfree86 source?




[Xpert]ati mobility LY

2002-01-04 Thread mel kravitz

hi,
Just built xfree86-4.1.99 (CVS source) on my new Sony GR290 laptop
running NetBSD-1.5ZA (current); xf86config builds an /etc/X11/XF86Config
which works well!  Congratulations to all those involved.
  -Mel




Re: [Xpert]Xv without the nvidia binary driver

2002-01-04 Thread Mark Vojkovich

On Fri, 4 Jan 2002, Andy Ritger wrote:
 On Sat, 5 Jan 2002, ephemeron wrote:
 
  With XFree86 4.1. it appears that downloading the nVidia binary drivers
  is the only way to get Xv working. Which can be a problem for those who
  aren't using an RPM-based system.
 
 There are also tarballs available on NVIDIA's website.

  And the nv driver in XFree86 4.1 does support Xv for GeForce cards.
Not TNT/TNT2 cards, however.


Mark.




Re: [Xpert]Xv without the nvidia binary driver

2002-01-04 Thread ephemeron

On Fri, 4 Jan 2002 12:28:15 -0800 (PST)
Andy Ritger [EMAIL PROTECTED] wrote:

 On Sat, 5 Jan 2002, ephemeron wrote:
 
  With XFree86 4.1. it appears that downloading the nVidia binary drivers
  is the only way to get Xv working. Which can be a problem for those who
  aren't using an RPM-based system.
 
 There are also tarballs available on NVIDIA's website.

I know. No problem. I have it up and running. The problem is, I don't
like the nVidia binaries. And not because they're binaries. It seems
that every time I compile a particular Linux version, I have to
recompile the kernel-dependent binary. I don't have to reinstall or
recompile XFree86 proper just because I decided to go from kernel
2.4.17rc1 to 2.4.17rc2.

RPMs make the binary installation less of a chore. But I don't have an
RPM-based system. Wouldn't it be nice if the binaries were simple
"cp path-to-driver /usr/X11R6/lib/modules/drivers/" affairs?



Re: [Xpert]Evaluating Memory Usage

2002-01-04 Thread Jim Gettys

ps on many/most systems is completely useless for determining
X's memory usage.

X maps the display into its address space: sometimes the address
space required to do this is large/very large (it isn't just
the VRAM on the board, but also its register area, which can be
arbitrarily large).

On an Alpha, for example, it maps gigabytes of address space (for bad
reasons), which is the extreme situation.

On my iPAQ handheld this instant it is 3 megabytes virtual, of which 1.5 
megabytes is shared (e.g. libc, etc).  The mainline XFree86 server is 
somewhat bigger, and applications can ask the X server to store arbitrary 
amounts of data on their behalf.
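
On Linux, one rough way to see this is to look at the X server's device
mappings directly (a sketch; what shows up, and under which device names,
varies by driver):

    % grep /dev/ /proc/`pidof X`/maps     # VRAM and register mappings

Subtracting those regions from the total gives a much more honest picture
of what the server has actually allocated.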

We should probably do an X extension to just report memory usage and
other statistics, and be done with this 15-year-old FAQ (which is in
XFree86's FAQs, if I remember correctly).
- Jim


--
Jim Gettys
Cambridge Research Laboratory
Compaq Computer Corporation
[EMAIL PROTECTED]




[Xpert]Using MMX assembly (for video card drivers)

2002-01-04 Thread Ewald Snel

Hi,

Could I use MMX assembly to improve the mga video driver? I wrote a
vertical chrominance filter (*) for the XVideo module using inline MMX
assembly. This allows me to improve output quality without any speed penalty.

Of course, I'm using #ifdef USE_MMX_ASM and the original C code as an 
alternative for other CPU architectures. Runtime detection of MMX support is 
not included yet, but will be added if MMX is allowed.

Thanks in advance,

ewald

(*) This fixes red blockiness (2x2 pixels) for DVD/MPEG movies
http://rambo.its.tudelft.nl/~ewald/xfree86-chrominance-filter.jpg



Re: [Xpert]A couple Radeon 7500 questions...

2002-01-04 Thread Adam K Kirchhoff


On 29 Dec 2001, Michel Dänzer wrote:

 On Sat, 2001-12-29 at 05:11, Adam K Kirchhoff wrote:

  1) The XFree86 Radeon 7500 driver treats the monitor connected to the DVI
  port as the primary head.  To me, this makes perfect sense (if you're
  going to buy a digital flat panel, you're probably going to want to use it
  as the primary head).  However, this seems to be the reverse of the
  windows drivers for the card, which treats the CRT port as the primary
  head.  Does anyone know of a way to switch this in XFree86 (or even
  Windows)?  I was hoping it'd be a single config change in my XF86Config-4
  file, but according to the source for the driver, it looks like it was
  hardwired in.  How hard would it be to unhardwire it?

 Option "CrtScreen" seems to be what you want.

Sorry for taking so long to respond...  I pulled the code from CVS
earlier this week and recompiled.  I tried:

Option "CrtScreen"

But it doesn't seem to have any effect whatsoever.

I'm not quite sure I made clear what I was aiming for :-)  Let me give a
quick description.

I have two monitors hooked up to the 7500.  On the left, what I'd like to
be my secondary screen, I have a 17" Sony CRT.  On the right (primary), I
have a 17" analog flat panel.

Under Linux, if I have the flat panel attached to the DVI port (with the
DVI-CRT adapter), it gets treated, properly, as the primary head.  All
Xv applications (xawtv, smpeg, aviplay) will display properly on this
head.  The CRT then gets treated as the secondary screen.  This is perfect
for what I want under Linux.

If, however, I boot into Windows, the flat panel (still hooked up to the
DVI port) is treated as the secondary screen, with the CRT as the primary.
Ideally, I'd like the screens to operate the same way under both operating
systems.

Now, of course I can switch the screens so that the flat panel is on the
CRT port and it'll get treated as the primary head under Windows, but not
under Linux, whether or not I use the CrtScreen option.

Any ideas?

Adam




Re: [Xpert]ati mobility LY

2002-01-04 Thread Carles Pina i Estany


Hi,

I did the same:

XFree86 Version 4.1.99.4 / X Window System
(protocol Version 11, revision 0, vendor release 6600)
Release Date: 28 December 2001

And it works fine (Debian Woody, Linux kernel 2.4.17).

But if I change from text mode to graphics mode, it hangs XFree86.

And you?

I will test the new CVS version...

And, a more important question: does DPMS work fine in your configuration?
XFree86 doesn't suspend the monitor, and I think that this is why:

(WW) RADEON(0): Option "DPMS" is not used


Thank you very much

On Fri, 4 Jan 2002, mel kravitz wrote:

 hi,
 Just built xfree86-4.1.99(cvs source) on my new Sony GR290 laptop,
 running NetBSD-1.5ZA(current), xf86config builds an /etc/X11/XF86Config
 -which works well!  Congratulations to all those involved.
   -Mel




Carles Pina i Estany | Nick: Pinux / Pine / Teufeus
E-Mail: [EMAIL PROTECTED] / [EMAIL PROTECTED] / [EMAIL PROTECTED]
http://www.salleURL.edu/~is08139/

   And now a few words from Telefónica: ´#@Ô?¨È ­joer!..@%$·()




Re: [Xpert]Xfree 4.1, Matrox G-450 Dualhead questions

2002-01-04 Thread Carles Pina i Estany


Hi,

I have this card, with two monitors at the same resolution, with FreeBSD
and Linux (XFree86 4.0 and 4.1).

With Solaris you can use XFree86, or you can patch Solaris 8 and then it
works.

I have two icewm instances, one on the first monitor and the other one on
the other monitor, without problems.

I haven't tested it, but I think that if I change this:

Section "Screen"
    Identifier   "Screen 0"
    Device       "G400_1"
    Monitor      "Generic Monitor"
    DefaultDepth 16
    SubSection "Display"
        Depth   16
        Modes   "800x600"
    EndSubSection
EndSection

Section "Screen"
    Identifier   "Screen 1"
    Device       "G400_2"
    Monitor      "Generic Monitor"
    DefaultDepth 16
    SubSection "Display"
        Depth   16
        Modes   "800x600"
    EndSubSection
EndSection

then the two monitors will have two different resolutions (by giving each
Screen a different Modes line), no?

Tuxracer works fine :-)



On Fri, 4 Jan 2002, Tom Manning wrote:

 I've been reading everything I can find on this card, and there's so much
 conflicting and outdated info out there I'm getting confused.  I'm running
 Mandrake 8.1 on an older Pentium II, with a BX chipset.

 **I know some of these questions are newbie-type questions, but if I fully
 understood the howtos and such I wouldn't ask, so please bear with me :-)**

 I'm considering buying a G450 for dualhead on my machine, but I want to know
 if I can hook up a 19" (primary) monitor and a 15" (satellite) monitor with
 few problems, at different resolutions. I've heard that Matrox has
 antialiasing problems with their drivers, though. Is that still true? Does
 anyone have specifics? Can I run it in Xinerama, with one big desktop, at
 different resolutions? Do I have to, or can I have two separate desktops
 running, just sacrificing the ability to move windows from one screen to the
 other?

 In short, why would I NOT want to buy this card?

 Thanks for any help!

 Tom



Carles Pina i Estany | Nick: Pinux / Pine / Teufeus
E-Mail: [EMAIL PROTECTED] / [EMAIL PROTECTED] / [EMAIL PROTECTED]
http://www.salleURL.edu/~is08139/

   And now a few words from Telefónica: ´#@Ô?¨È ­joer!..@%$·()




Re: [Xpert]Xv without the nvidia binary driver

2002-01-04 Thread Yuri van Oers


On Sat, 5 Jan 2002, ephemeron wrote:

 On Fri, 4 Jan 2002 12:28:15 -0800 (PST)
 Andy Ritger [EMAIL PROTECTED] wrote:
 
  On Sat, 5 Jan 2002, ephemeron wrote:
  
   With XFree86 4.1. it appears that downloading the nVidia binary drivers
   is the only way to get Xv working. Which can be a problem for those who
   aren't using an RPM-based system.
  There are also tarballs available on NVIDIA's website.
 I know. No problem. I have it up and running. The problem is, I don't
 like the nVidia binaries. And not because they're binaries. It seems
 that every time I compile a particular linux version. I have to
 recompile the kernel-dependent binary.

The NVdriver is a (linux) kernel module.
When you compile a different kernel, any modules you use have to be
compiled for this new kernel.
The NVdriver is no different.

That's the way it works.

 RPM's make the binary installation less of a chore. But I don't have an
 RPM-based system. Wouldn't it be nice if the binaries were simple cp
 path-to-driver /usr/X11R6/lib/modules/drivers/ affairs?

I don't know what kind of a setup you've got, but I don't use RPMs either,
and a simple 
# cd /usr/src/NVIDIA_kernel-1.0-2314/
# make

does the trick for me.

You could write a little script that compares the kernel and NVdriver
dates. If the kernel is newer than the driver, have it run the above
commands. Put the script in your rc.local or /rc.d/init.d or whatever,
and it's done at boot.
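
A minimal sketch of such a script (the paths are examples; adjust to where
your NVIDIA_kernel source and installed module actually live):

    #!/bin/sh
    # Rebuild the NVdriver module when the kernel image is newer than
    # the installed module.  [ -nt ] compares modification times.
    SRC=/usr/src/NVIDIA_kernel-1.0-2314
    MOD=/lib/modules/`uname -r`/kernel/drivers/video/NVdriver
    if [ ! -f "$MOD" ] || [ /boot/vmlinuz -nt "$MOD" ]; then
        (cd "$SRC" && make)
    fi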

Problem solved.

Greets,
Yuri




Re: [Xpert]*Severe* time lossage with XF86 4.1.0 and S3Virge MX LCD

2002-01-04 Thread Kevin Brosius

Alan Hourihane wrote:
 
 
 On Thu, Jan 03, 2002 at 12:33:16PM -0800, Mark Vojkovich wrote:
  On Thu, 3 Jan 2002, Yuri van Oers wrote:
   On Wed, 2 Jan 2002, Kenneth Crudup wrote:
Looking thru a couple of months of list archives turned up nothing, so:
   
When running 4.1.0 on my laptop using a S3Virge MX, my system clock loses
at least two seconds per minute.
   
I've done everything I can to narrow this down, and the bottom line is
when X is running, even in a fresh weave (running X only), even when
the VT is not active, my clock is slowed. I've tried pulling out all
the optional stuff (fifo_aggressive, the pci-burst/retry stuff,
NoPM, you name it). I've got an AMD K6-2 at 400MHz.
  
   I must say I've been noticing a time shift in my PC's clock as well. It
   didn't bother me much, as the difference is not that big, over quite a
   long period of time (in my case, anyway). Therefore, I haven't sought to
   find the cause.
   My hardware differs a lot from yours, so if the X server is causing time
   loss, I wouldn't blame it on any of the drivers.
  
   I'm curious as to what others think and/or experience...
 
 I recall a typo in the 3Dlabs driver a long time ago that was
  doing outb to a register that messed with the clock.  I don't
  remember which.  You might look for port I/O happening on
  non-VGA registers (i.e., something outside of the 0x3cX range).
 
 That was in the 3.3.x days - a LONG time ago.
 
 Alan.

This is the first time I've heard anything like this recently.  How do
you notice the clock shift?  (Meaning, how do I try to repeat your
problem?)  I have an AMD-based system here, running with a ViRGE DX.
It's been up for a couple of days, and the clock is currently ahead of
the time standard I set it from by a couple of minutes.

What are you using to read and display the time?  Are both the hardware
and soft clocks affected?  I assume it doesn't lose the 2 seconds/min
when X is not running?
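
If you want to log it rather than eyeball it, something crude like this
shows when the drift happens (rdate here only prints the remote time, it
doesn't set anything):

    #!/bin/sh
    # Log local vs. NIST time every 5 minutes to spot when drift begins.
    while :; do
        date
        rdate time.nist.gov
        echo
        sleep 300
    done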

Just out of curiosity, what happens if you comment out Option "DPMS"
in your monitor section?

-- 
Kevin



Re: [Xpert]Laptop backlight question

2002-01-04 Thread Kevin Brosius

Kenneth Crudup wrote:
 
 
 
 The main reason I went with the 4.X series is 'cause I'd read on the 'net
 that it has the ability to use APM (DPMS?) to darken laptop backlights,
 but I haven't seen the ability to do this anywhere.
 
 I've got a ProStar (AKA Sager/Kapok/Clevo/...) laptop with a dumber BIOS
 than my Transmonde, so it doesn't dim the backlight automatically via the
 BIOS and will stay on forever.
 


Does 'xset dpms force off' turn off the backlight on that machine?

-- 
Kevin



Re: [Xpert]Using MMX assembly (for video card drivers)

2002-01-04 Thread Billy Biggs

Ewald Snel ([EMAIL PROTECTED]):

 Of course, I'm using #ifdef USE_MMX_ASM and the original C code as
 an alternative for other CPU architectures. Runtime detection of MMX
 support is not included yet, but will be added if MMX is allowed.

  I've also been playing with some MMX-ification of the XVideo routines;
for example, I also did an SSE 4:2:0-to-4:2:2 function.

  There was some discussion on #xfree86 about efforts to have a nice
runtime detection mechanism somewhere.  Has anyone got any code for this
already done?  If not, I might also have a go at it.
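
For x86 the usual approach is CPUID; a minimal sketch in GCC inline
assembly (it assumes the CPU already supports the CPUID instruction --
a complete version would first test the EFLAGS ID bit, bit 21):

    /* Return nonzero if the CPU advertises MMX (EDX bit 23 of CPUID
     * leaf 1).  %ebx is saved around CPUID because it may be reserved
     * for PIC code. */
    static int cpu_has_mmx(void)
    {
        unsigned int eax = 1, edx;

        __asm__ __volatile__("pushl %%ebx\n\t"
                             "cpuid\n\t"
                             "popl %%ebx"
                             : "+a" (eax), "=d" (edx)
                             :
                             : "ecx");
        return (edx >> 23) & 1;
    }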

-- 
Billy Biggs
[EMAIL PROTECTED]



[Xpert]Softbooting video card on PPC with OF ROMs

2002-01-04 Thread Derrik Pates

Has there been any development in this department? I was wondering if the
Mac-On-Linux OpenFirmware/OpenPROM implementation (minimal, but maybe
enough) would be a potential basis to make something like this work, as
part of an int10-like module. Any thoughts?

Derrik Pates  |   Sysadmin, Douglas School   |#linuxOS on EFnet
[EMAIL PROTECTED] | District (dsdk12.net)|#linuxOS on OPN




Re: [Xpert]Using MMX assembly (for video card drivers)

2002-01-04 Thread greg wright

  I've also been playing with some mmx-ification of the XVideo routines,
  for example I also did an SSE-4:2:0-to-4:2:2 function.

I just did this too, MMX only though. How many cycles/pixel did you
end up with? What percentage of pairing did you achieve?

   There was some discussion on #xfree86 about efforts to have a nice
 runtime detection mechanism somewhere.  Has anyone got any code for this
 already done?  If not I might also have a go at it.


there are plenty of samples of this on Intel's site.

--greg

 





[Xpert]Fixing XF86VidMode extension

2002-01-04 Thread Billy Biggs

  Over the holidays I fixed XF86VidModeAddModeLine and
XF86VidModeSwitchToMode in my local copy of the 4_1-branch.  My patch,
though, is very strange and not entirely correct.

  I don't believe the current CVS code ever worked for anyone: I felt I
was completing the code rather than bugfixing it.  That said, I'm
worried that my patch may be missing the intention of the code and how
it is supposed to work; for example, when should the new modelines be
validated, and how should errors be returned to the user?

  Who is currently 'in charge' of this code?

  My application is my deinterlacer.  I was attempting to build
modelines on the fly to soft-genlock the monitor refresh to the incoming
video signal.  Since this is proving difficult, I now generate modelines
on the fly with sufficiently high refresh rates to hide judder effects.

  It's going ok so far. Much easier with a working VidMode extension. :)
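
For anyone following along, the client side looks roughly like this (a
minimal sketch against the Xxf86vm client API, error handling omitted):

    /* List the server's modelines and switch to the second one.
     * Build with: cc vmdemo.c -o vmdemo -lXxf86vm -lXext -lX11 */
    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <X11/extensions/xf86vmode.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        int ev, err, screen, nmodes, i;
        XF86VidModeModeInfo **modes;

        if (!dpy || !XF86VidModeQueryExtension(dpy, &ev, &err))
            return 1;
        screen = DefaultScreen(dpy);

        XF86VidModeGetAllModeLines(dpy, screen, &nmodes, &modes);
        for (i = 0; i < nmodes; i++)
            printf("%dx%d, dotclock %u kHz\n", modes[i]->hdisplay,
                   modes[i]->vdisplay, modes[i]->dotclock);

        if (nmodes > 1) {
            XF86VidModeSwitchToMode(dpy, screen, modes[1]);
            XFlush(dpy);
        }
        XFree(modes);
        XCloseDisplay(dpy);
        return 0;
    }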

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]Using MMX assembly (for video card drivers)

2002-01-04 Thread Erik Walthinsen

On Fri, 4 Jan 2002, greg wright wrote:

 I just did this too, MMX only though. How many cycles/pixel did you
 end up with? What percentage of pairing did you achieve?
Note that only P5-core chips care about pairing per se.  There are much
nastier issues involved in modern P6 cores.  I haven't thought about them
for quite a while, so it'd take me a while to dig out the stuff and put it
back into main memory, but I think I have a pretty good understanding of
how the P6 really works...

 there are plenty of samples of this on Intel's site.
Unfortunately that just isn't very useful outside Intel's world.  There
are about a half-dozen manufacturers of x86 chips that matter, and they
all have all sorts of bizarre quirks.  I ran across a SourceForge project
a few days ago (x86info, I think) that tries to deal with that, but I
didn't look at the code.

There's a larger issue when it comes to other architectures.  There are
similar but in some cases nastier problems on things like PPC and Alpha.
This is why I want to gather all this into a single library.  It would go
closely with my other projects, SpeciaLib and libcodec, which focus on
run-time specialization of time-critical kernels, such as the
motion-compensation code in an MPEG decoder, or color-space
conversion/transliterations, etc. (as in the 4:2:0 to 4:2:2 problem).

You can see a lot of this stuff at http://codecs.org/, though specialib
itself isn't there because it's not anywhere near formed enough for CVS.

  Erik Walthinsen [EMAIL PROTECTED] - System Administrator
__
   /  \GStreamer - The only way to stream!
  || M E G A* http://gstreamer.net/ *
  _\  /_






[Xpert]Re: Xfree4.1on a CyberBlade XP

2002-01-04 Thread Luis Miguel Tavora


Many thanks for all the replies I got to my previous post.

I finally managed to get XFree86 4.1 working on my
CyberBlade XP following the guidelines presented
in


http://www.deater.net/john/PavilionN5430.html



Cheers, 

Luis



Hi there.

Has anybody managed to get XFree86 4.1.0-3 (which comes
with RH 7.2) running on a CyberBlade XP?

I've just updated my Toshiba 4600 to RH7.2 but didn't
manage to get the graphics card to work properly.


Any suggestions?

Thanks in advance

Luis Tavora







Re: [Xpert]Using MMX assembly (for video card drivers)

2002-01-04 Thread Billy Biggs

  I've also been playing with some mmx-ification of the XVideo
  routines, for example I also did an SSE-4:2:0-to-4:2:2 function.
 
 I just did this too, MMX only though. How many cycles/pixel did you
 end up with? What percentage of pairing did you achieve?

  I'll get some numbers in a sec.

  There was some discussion on #xfree86 about efforts to have a nice
  runtime detection mechanism somewhere.  Has anyone got any code for
  this already done?  If not I might also have a go at it.
 
 there are plenty of samples of this on Intel's site.

 And in many nice abstracted open source modules.  :)  Specifically I
meant code to put this somewhere appropriate in the X tree.

-- 
Billy Biggs
[EMAIL PROTECTED]



Re: [Xpert]*Severe* time lossage with XF86 4.1.0 and S3Virge MX LCD

2002-01-04 Thread Kenneth Crudup

On Fri, 4 Jan 2002, Kevin Brosius wrote:

 This is the first time I've heard anything like this recently.  How do
 you notice the clock shift?

After the machine's been running for a few minutes, and before I start X,
"date ; rdate time.nist.gov" shows at most a second's difference. 30 mins
after X has been running, the same command sequence shows about 90 seconds
difference, with my laptop running behind time. It's gotten to the point
where I have to crontab "rdate -s" every 1/2 hr or so.

 [are the ] soft clocks affected?

No. hwclock shows a clock with a minor difference to the NIST clock.

 Just out of curiosity, what happens if you comment out option 'dpms'
 in your monitor section?

Tried that, too- no difference, but I'll try it again.

-Kenny

-- 
Kenneth R. Crudup   Sr. SW Engineer, Scott County Consulting, Washington, D.C.
Home1: PO Box 914 Silver Spring, MD 20910-0914 [EMAIL PROTECTED]
Home2: 38010 Village Cmn. #217  Fremont, CA 94536-7525  (510) 745-0101




Re: [Xpert]Using MMX assembly (for video card drivers)

2002-01-04 Thread Ewald Snel

Hi,

  I wrote a vertical chrominance filter (*) for the XVideo module using
  inline MMX assembly. This allows me to improve output quality without
  any speed penalty.

   Do you mean for upsampling to 4:2:2 ?  How do you filter?  Do you
 average to create the new chroma line?

Something like that: the filter uses 0.75x the nearest chrominance sample and
0.25x the second-nearest chrominance sample. This is more accurate, as it
doesn't shift the chrominance signal by 1 pixel.

Here are the patches, the second one is for enabling the horizontal filtering 
in hardware:

http://rambo.its.tudelft.nl/~ewald/XFree86-4.1.99.4-mga-xv-mmx-chromafilter.patch
http://rambo.its.tudelft.nl/~ewald/XFree86-4.2.0-mga-xv-uvfilter.patch

These are not paired for the Pentium MMX, but performance is already better
than the C version (which compiles to slow movzx instructions). It's nearly
optimal for the AMD Athlon, though (about 2 IPC from the L1 cache).

bye,

ewald



Re: [Xpert]Using MMX assembly (for video card drivers)

2002-01-04 Thread Erik Walthinsen

On Fri, 4 Jan 2002, Billy Biggs wrote:

   Please, please correct me if I'm wrong here.  In MPEG sampling, the
 chrominance sample is halfway between the two luminance samples on the
 same vertical scanline (per IS 13818-2):

 o   o   where   o == luma sample
 x               x == chroma sample
 o   o
Note that this depends on which version of MPEG you're talking about.  I
forget which (I can look it up if anyone's interested), but one of the
MPEG standards specifies that the chroma samples are located between the
lumas in both dimensions, i.e.:

o   o
  x
o   o

   So, are not the chroma samples above and below the same distance away?
 I thought this was the purpose of MPEG sampling, that is, it's
 reasonable to convert to 4:2:2 sampling by doubling the scanlines.
Possibly, but you have to beware of what the chroma position is for the 4:2:2
as well.  If the 4:2:2 specifies co-located first luma and chroma, it will
work nicely for the first form (above).  If in the middle, it'll work for
the second form.

   What do you mean by shifting the chroma by one pixel?
If a chroma sample is colocated with a luma sample (in either dimension),
you get the following:

ooooo
 x x
|^|^|

Where a single chroma sample impacts three adjacent pixels (note the
difference between pixel and sample...), and the luma samples in the
middle actually get chroma from two different chroma samples.  In this
case you have to give differing amounts to each new (resampled) sample,
according to the percentages mentioned previously.

  Erik Walthinsen [EMAIL PROTECTED] - System Administrator
__
   /  \GStreamer - The only way to stream!
  || M E G A* http://gstreamer.net/ *
  _\  /_




Re: [Xpert]Using MMX assembly (for video card drivers)

2002-01-04 Thread Ewald Snel

Hi,

[...]

  Something like that, the filter uses 0.75x nearest chrominance sample
  and 0.25x second nearest chrominance sample. This is more accurate as
  it doesn't shift the chrominance signal by 1 pixel.

   Please, please correct me if I'm wrong here.  In MPEG sampling, the
 chrominance sample is halfway between the two luminance samples on the
 same vertical scanline (per IS 13818-2):

I think you're right, my interpolation looks like this :

o   o   (c=.75*c1 + .25*c0)
 c1
o   o   (c=.75*c1 + .25*c2)

o   o   (c=.75*c2 + .25*c1)
 c2
o   o   (c=.75*c2 + .25*c3)

[...]

   So, are not the chroma samples above and below the same distance away?
 I thought this was the purpose of MPEG sampling, that is, it's
 reasonable to convert to 4:2:2 sampling by doubling the scanlines.

It's reasonable, but doubling the scanlines will make the image look a little
blocky, as both scanlines use the same chrominance values. That's why you
should use filtering.
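
In plain C the 0.75/0.25 scheme works out to something like this (a
reference sketch for one chroma plane, not the actual driver code; plane
edges are simply clamped):

    #include <stddef.h>

    /* Upsample one 4:2:0 chroma plane vertically to 4:2:2: each output
     * line takes 3/4 of the nearest chroma line and 1/4 of the second
     * nearest, with rounding. */
    void upsample_chroma(const unsigned char *src, unsigned char *dst,
                         int cwidth, int cheight)
    {
        int x, y;

        for (y = 0; y < cheight; y++) {
            const unsigned char *cur  = src + (size_t)y * cwidth;
            const unsigned char *up   = y > 0           ? cur - cwidth : cur;
            const unsigned char *down = y < cheight - 1 ? cur + cwidth : cur;
            unsigned char *out0 = dst + (size_t)(2 * y) * cwidth;
            unsigned char *out1 = out0 + cwidth;

            for (x = 0; x < cwidth; x++) {
                out0[x] = (3 * cur[x] + up[x]   + 2) >> 2;
                out1[x] = (3 * cur[x] + down[x] + 2) >> 2;
            }
        }
    }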

   Are you sure that maybe the images where you see that nasty chroma
 artifact aren't from when the DVD is using interlaced encoding?  In this
 case, each second chroma sample is from a different field, and you can
 get blocky errors because you don't correlate samples correctly.

The source was a non-interlaced MPEG-1 video file. The red blocks are very 
small for (high resolution) DVD movies, but they are still visible.

   What do you mean by shifting the chroma by one pixel?

It's actually 0.5 pixel (my mistake :)) using the following filter :

o   o   (c=c1)
 c1
o   o   (c=.5*c1 + .5*c2)

o   o   (c=c2)
 c2
o   o   (c=.5*c2 + .5*c3)

bye,

ewald



Re: [Xpert]Laptop backlight question

2002-01-04 Thread Kenneth Crudup

On Fri, 4 Jan 2002, Kevin Brosius wrote:

 Does 'xset dpms force off' turn off the backlight on that machine?

First thing I tried.

-Kenny

-- 
Kenneth R. Crudup   Sr. SW Engineer, Scott County Consulting, Washington, D.C.
Home1: PO Box 914 Silver Spring, MD 20910-0914 [EMAIL PROTECTED]
Home2: 38010 Village Cmn. #217  Fremont, CA 94536-7525  (510) 745-0101




Re: [Xpert]Using MMX assembly (for video card drivers)

2002-01-04 Thread Billy Biggs

  To reply to my own mail  :)

Billy Biggs ([EMAIL PROTECTED]):

  It's actually 0.5 pixel (my mistake :)) using the following filter :
  
  o   o   (c=c1)
   c1
  o   o   (c=.5*c1 + .5*c2)
  
  o   o   (c=c2)
   c2
  o   o   (c=.5*c2 + .5*c3)
 
   I don't think this is right for MPEG2.

  I sent this and realized I might look like an asshole.  :)  This
should read:

  Thanks, I see what you mean now, and yeah, I think this filter is
wrong for filtering chroma from MPEG2.  :)

  Apologies.

-- 
Billy Biggs
[EMAIL PROTECTED]



RE: [Xpert]Fixing XF86VidMode extension

2002-01-04 Thread Sottek, Matthew J


Let me summarize the options you've discussed and comment on each.
Assume 59.94Hz video.

Option 1: Run the display at 59.94Hz. This is what you were attempting to do
by inserting modelines, I presume? Using this method you don't
introduce any more judder than already existed in the video sequence.

The issue I have here is that you are inserting unknown modelines.
Really, the only one with any right to determine the available modelines
is the graphics driver. The driver usually has (on other OSes; XFree86
is a little different) a set of canned timings, and then may pare them
down or add a few more after talking it over with the monitor. XFree86
moved most of this up into the device-independent portion, since most
drivers make use of the same canned timings. This isn't ideal, but it
works most of the time. Allowing user-defined modelines in XF86Config
is bad enough, but having apps insert modelines on the fly is really
scary.  The ideal solution here would be to let the driver have a
set of available timings as well as the set of active ones (the
ones that are in use in the CTRL-ALT-+/- list). Then your app could
query for a driver-supported set of timings, even when the user isn't
actively using them. At least this way the driver has the ability to
provide a table of known-good timings.

Option 2: Run the display really fast and hope nobody notices. This is
the easiest and probably works pretty well. The faster the refresh, the
smaller the added judder; go fast enough and it just doesn't matter
anymore.

Option 3: Work on the video stream to make the judder go away. This is
very hard, but this seems to be the goal of your deinterlacer anyway,
right?  The video you are getting at 59.94Hz may be the result of 3:2
pulldown, so it may already have judder. You have to detect this and
get back to the 24fps to get rid of the judder. Plus you may have to
time-shift half the fields to get rid of the jaggies. Is it really
that absurd to add in the additional step of weighting the pixels as
was described in your link? Seems like that would produce excellent
results. This also has another advantage: it scales up with faster
processors.
For example, assume infinite processor power. If your video is 59.94Hz
with 3:2 pulldown, you've got 24fps of real video. Assume your display
is going at 100Hz. You could display 100fps by linearly weighting and
blending the pixels of your 24fps video to generate 100fps of unique
video. Basically this is motion blur for video.
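
The blending step itself is cheap; per display frame it's just a weighted
average of the two nearest source frames (a sketch, with integer weights
out of 256):

    #include <stddef.h>

    /* Blend two source frames into one display frame.  t is the display
     * frame's position in time between frame a (t=0) and frame b (t=1). */
    void blend_frames(const unsigned char *a, const unsigned char *b,
                      unsigned char *out, size_t npixels, double t)
    {
        unsigned int wb = (unsigned int)(t * 256.0 + 0.5);
        unsigned int wa = 256 - wb;
        size_t i;

        for (i = 0; i < npixels; i++)
            out[i] = (wa * a[i] + wb * b[i] + 128) >> 8;
    }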

The link you gave also suggests that flat panels, with their always-on
type pixels, are not ideal for video, because the eye can detect the
judder more easily than with a CRT's flashing pixels. Blurring the
video would probably produce better results at high speed than would
be produced with clean pixels.

I vote for #3, let me know when you're done :)

-Matt

