Re: Multiple video consoles

2003-03-03 Thread Sven Luther
On Sun, Mar 02, 2003 at 11:28:24PM -0500, David Dawes wrote:
 On Sat, Mar 01, 2003 at 10:34:20AM +0100, Sven Luther wrote:
 On Fri, Feb 28, 2003 at 04:19:37PM -0500, David Dawes wrote:
  Are you speaking about the current 4.3.0 or the stuff you are working on ?
  
  What I was working on.
 
 Ok, ...
 
 I take it, there will be a 4.4.0 before 5.0 ?
 
 Most likely.

:))

  of scaling are either handled by a hardware scaler (that may or may not
  be visible to the XFree86 server and user), or by having something in
  XFree86 that keeps a second copy of the image that is scaled in software.
 
 Mmm, you are speaking of a hardware scaler in the LCD monitor ? 
 
 I'm talking about a scaler anywhere between where the resolution is
 programmed and the physical display.  For laptop-type displays it's easy
 -- it's in the video hardware.  For digital connections to LCD displays
 I'm not sure which side of the DVI connector it's normally located.  I
 just know that I've seen it work in that case without needing to do
 anything special as a user or as a driver writer.  I don't know whether
 the cases I've seen are common or unusual.  I haven't played with enough
 of these HW combinations to know.

Mmm, it may be something special in the bios of those laptops, or even
some hardwired functionality, but in my case i need to program it by
hand, and i guess other chips will need this too, so we may as well
think of it.

 Well, from my experience (i have a Sony SDM-X52, with both a DVI
 connector and a standard VGA connector) this doesn't seem to happen. If
 i request a mode lower than what the LCD can display, i get only
 garbage, at least on the DVI channel. I believe the VGA channel can do
 more advanced things, but didn't successfully use them. On the other
 hand, my graphic hardware can do arbitrary scaling of the framebuffer
 before passing it to the monitor, but i have to program it explicitly. I
 guess that this is used by the bios at startup to convert the 640x480
 text mode to something my monitor supports, since the fonts appear a bit
 blurry.
 
 It sounds like in the current cases the driver should handle this type
 of scaling transparently.  The only extension that might be relevant is
 to allow the viewport to be set to a range of sizes rather than discrete
 mode sizes (as happens now).

Well, i have to calculate the scaling factor from the source
(framebuffer) width/height and the destination (mode resolution)
width/height, that is why i ask for a more granular handling of this.
Currently, you can do :

Section Screen

  ...

  SubSection Display
Depth   8
Modes   1024x768 800x600 640x480
  EndSubSection
  SubSection Display
Depth   15
Modes   1024x768 800x600 640x480
  EndSubSection
  ...
EndSection

(Well, actually, i have only 1024x768, since that is what the monitor
supports.)

What would be nice, would be if :

 1) you could have only one line for all the depth/bpp, or a possibility
to have multiple depth/bpp per display section.
 
 2) a way to tell the framebuffer/viewport sizes for each supported
resolution, something like :

  SubSection Display
Mode 1024x768
Viewport 0 0 1024 768
Viewport 0 0 800 600
Viewport 0 0 640 480
  EndSubSection

or maybe 

  SubSection Display
Framebuffer 1024 768
Modes 1024x768 800x600 640x480
  EndSubSection

Which would tell the driver that we only support an outgoing resolution of
1024x768, but that framebuffer resolutions of 1024x768, 800x600, and
640x480 are OK, and that we should scale from them to the 1024x768 one.
Maybe the syntax is not the best, but you get the idea.
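
To make that concrete, the driver-side calculation would be roughly the
following (just a sketch with made-up names, not an existing XFree86
interface):

/* Sketch only: derive the scaling factors the driver would program into
 * its hardware scaler, from the framebuffer (source) size chosen in the
 * Display subsection and the panel's fixed output resolution. */
#include <stdio.h>

static void
compute_scale(int fb_w, int fb_h, int out_w, int out_h,
              double *x_scale, double *y_scale)
{
    *x_scale = (double)out_w / (double)fb_w;
    *y_scale = (double)out_h / (double)fb_h;
}

int
main(void)
{
    double sx, sy;

    /* e.g. an 800x600 framebuffer stretched to a 1024x768 panel */
    compute_scale(800, 600, 1024, 768, &sx, &sy);
    printf("scale factors: %.3f x %.3f\n", sx, sy);   /* 1.280 x 1.280 */
    return 0;
}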

I could do this by using an outgoing resolution size in the device-specific
section, but this would not work well, since all the logic doing the
mode setting is currently driven by the resolution in the Display subsection.

I strongly advocate that you take into account such a separation of the
outgoing resolution and the framebuffer size in any future configuration
scheme.

 Right.  I've only seen downscaling, and it's possible that I'm wrong
 about it happening in the monitor rather than in the video hardware.

I think it is happening in the video hardware, at least for DVI
connections.

 BTW, do you know of any docs on DVI and LCD monitors?  My monitor refuses to
 go to sleep when i am using the DVI channel, while it works fine with
 the VGA channel.
 
 I haven't seen any docs on those.  If there are related VESA specs, I
 should have them somewhere.

Mmm, i will be also looking.

 That said, another thing that would be nice, would be the possibility to
 specify one display section for every depth, instead of just copying it
 for each supported depth. Do many people in these times of 64+MB of
 onboard memory specify different resolutions for different depths ?
 
 I think it'd be useful to be able to specify parameters that apply to
 all depths, but still allow a depth-specific subsection to override.
 That'd be a useful extension of the 

Re: Server doesn't build for me (setjmp)

2003-03-03 Thread Marc Aurele La France
On Sun, 2 Mar 2003, David Dawes wrote:

 On Sat, Mar 01, 2003 at 08:27:49PM -0700, Marc Aurele La France wrote:
 On Sat, 1 Mar 2003, Mark Vojkovich wrote:

setjmp is a *macro* (for __sigsetjmp) defined in /usr/include/setjmp.h.
  This is libc 2.2. so it doesn't set HAS_GLIBC_SIGSETJMP.
  SYMCFUNCALIAS chokes on this.  This is gcc 2.95.3.

 I think the HAS_GLIBC_SIGSETJMP set logic is wrong.

 You've got glibc 2.2.1, I'll guess.  The #if's should be looking for glibc
  2.2.2, not 2.2, although a host.def override is available (see
 xfree86.cf).

 OK, so using the version macros in features.h isn't good enough here,
 and it has to be done with the imake LinuxCLib*Version parameters instead.

I don't think that's necessary.  It is simpler to #define HAS_GLIBC_SIGSETJMP
for all of glibc 2.2.*, which is, in part, what I'll be committing
shortly, after I iron out my libc5 problem.
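
To illustrate the idea (a sketch only, not the actual imake/xfree86.cf
logic): the check can key off the glibc version macros from features.h,
treating anything from 2.2 up as having the __sigsetjmp-based setjmp macro.

/* Sketch only -- not the actual xfree86.cf logic. */
#include <features.h>

#if defined(__GLIBC__) && \
    (__GLIBC__ > 2 || (__GLIBC__ == 2 && __GLIBC_MINOR__ >= 2))
# define HAS_GLIBC_SIGSETJMP 1
#endif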

Marc.

+--+---+
|  Marc Aurele La France   |  work:   1-780-492-9310   |
|  Computing and Network Services  |  fax:1-780-492-1729   |
|  352 General Services Building   |  email:  [EMAIL PROTECTED]  |
|  University of Alberta   +---+
|  Edmonton, Alberta   |   |
|  T6G 2H1 | Standard disclaimers apply|
|  CANADA  |   |
+--+---+
XFree86 Core Team member.  ATI driver and X server internals.



Re: Server doesn't build for me (setjmp)

2003-03-03 Thread Marc Aurele La France
On Mon, 3 Mar 2003, Marc Aurele La France wrote:

 On Sun, 2 Mar 2003, David Dawes wrote:
  On Sat, Mar 01, 2003 at 08:27:49PM -0700, Marc Aurele La France wrote:
  On Sat, 1 Mar 2003, Mark Vojkovich wrote:

 setjmp is a *macro* (for __sigsetjmp) defined in /usr/include/setjmp.h.
   This is libc 2.2. so it doesn't set HAS_GLIBC_SIGSETJMP.
   SYMCFUNCALIAS chokes on this.  This is gcc 2.95.3.

  I think the HAS_GLIBC_SIGSETJMP set logic is wrong.

  You've got glibc 2.2.1, I'll guess.  The #if's should be looking for glibc
   2.2.2, not 2.2, although a host.def override is available (see
  xfree86.cf).

  OK, so using the version macros in features.h isn't good enough here,
  and it has to be done with the imake LinuxCLib*Version parameters instead.

 I don't think that's necessary.  It is simpler to #define HAS_GLIBC_SIGSETJMP
 for all of glibc 2.2.*, which is, in part, what I'll be committing
 shortly, after I iron out my libc5 problem.

On a related matter, in libGLU's mysetjmp.h, there is

inline int
mysetjmp( JumpBuffer *j )
{
    return ::setjmp( j->buf );
}

... and something similar for longjmp().  Now my (spoken) C++ is less than
adequate, but what the heck does :: do/mean?

Marc.

+--+---+
|  Marc Aurele La France   |  work:   1-780-492-9310   |
|  Computing and Network Services  |  fax:1-780-492-1729   |
|  352 General Services Building   |  email:  [EMAIL PROTECTED]  |
|  University of Alberta   +---+
|  Edmonton, Alberta   |   |
|  T6G 2H1 | Standard disclaimers apply|
|  CANADA  |   |
+--+---+
XFree86 Core Team member.  ATI driver and X server internals.



Re: Misleading Makefile samples for Linux ?

2003-03-03 Thread Leif Delgass
On Mon, 3 Mar 2003, David Dawes wrote:

 On Mon, Mar 03, 2003 at 03:53:32PM +, David Woodhouse wrote:
 On Fri, 2003-02-28 at 17:31, David Dawes wrote:
 Makefile.kernel was supposed to be a Makefile suitable for dropping
  into the kernel source tree's drivers/char/drm directory.  It's never
  used directly from the XFree86 source tree, and that's probably why it
  is out of date.  I don't know if there's any point keeping it around or
  not.
 
 Note that Makefile.kernel could (and probably _should_) be used even
 when building as part of the XFree86 tree. The recommended way of
 building Linux kernel modules which are shipped outside the kernel tree
 is by running:
  make -C $LINUX_SRC_DIR SUBDIRS=`pwd` modules
 
 That's just about the only way to get the CFLAGS and other stuff correct
 for all versions of the kernel and all architectures.
 
 Is it safe these days to unconditionally use /lib/modules/`uname -r`/build
 for $LINUX_SRC_DIR?

I think we should at least remove /usr/include/linux as a fallback path
for finding the kernel headers (TREE in Makefile.linux), and replace it
with a message about setting up the build symlink in /lib/modules (which
is done by 'make modules_install' on any halfway-recent 2.4.x kernel,
afaik) or using 'make TREE=/path/to/kernel-src-tree/include.' I've seen a
few messages from users on the dri lists where the build tried to use the
glibc kernel headers and failed.  Symlinking /usr/include/linux to the
source tree is now considered a no-no.
 
 Will a single Makefile.kernel work for all versions of the kernel,
 and handle various incompatibilities that arise from time to time
 that the current Makefile.linux is forced to work around?
 
 If so, then that's definitely the way to go.  I'd love to see
 something cleaner than what we currently have (the Makefile for
 the FreeBSD drm modules is very clean).
 
 David

Personally, I like using Makefile.linux.  I've never had any problems with 
it, and it's easier to build the kernel modules from my XFree86/DRI build 
tree than copying files to the kernel source tree.

-- 
Leif Delgass 
http://www.retinalburn.net



Internal documentation

2003-03-03 Thread jkjellman



All,

Please excuse my ignorance here as I have only been 
on this list for a few weeks and am pretty new to XFree86 internals. That 
being said ...

I am working on modifying an input driver (or two 
:-) and am having a little trouble. I cannot find man pages or other 
documentation on internal X calls. For example, xf86Msg, xf86OpenSerial, 
etc. I have figured out some of these based on usage in the source files, 
but the less obvious ones are becoming a problem. For example, 
xf86PostMotionEvent has a series of parameters, most of which are hard coded in 
the drivers I am looking at. While I can search and read source code, this 
is not preferred as it is very time-consuming when you know little if anything 
;-)
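
For reference, the usage pattern visible in existing input drivers looks
roughly like this (a sketch inferred from driver source, so the parameter
meanings should be double-checked against a real driver):

/* Sketch inferred from existing input drivers, not official documentation. */
#include "xf86.h"
#include "xf86Xinput.h"

static void
ExampleReadInput(InputInfoPtr pInfo)
{
    int x = 0, y = 0;            /* coordinates decoded from the device */

    /* xf86Msg() is the server's logging call: a message class plus a
     * printf-style format string. */
    xf86Msg(X_INFO, "%s: got a motion packet\n", pInfo->name);

    /* Post a motion event for this device: is_absolute, the first
     * valuator, the number of valuators, then the valuator values. */
    xf86PostMotionEvent(pInfo->dev, 1 /* is_absolute */,
                        0 /* first_valuator */, 2 /* num_valuators */,
                        x, y);
}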

I also need to find some calls to get information 
like screen size and to know if I can safely take over the screen (for 
calibration purposes) without adversely affecting the system. Finding 
pieces like this by reading source code would be nearly impossible.

Any help would be greatly 
appreciated.

Take care,
KJohn


Re: Server doesn't build for me (setjmp)

2003-03-03 Thread Marc Aurele La France
On Mon, 3 Mar 2003, Stuart Anderson wrote:

  On a related matter, in libGLU's mysetjmp.h, there is

  inline int
  mysetjmp( JumpBuffer *j )
  {
  return ::setjmp( j->buf );
  }

  ... and something similar for longjmp().  Now my (spoken) C++ is less than
  adequate, but what the heck does :: do/mean?

 It means use the global namespace. In this case, it sez to call the setjmp()
 function from the C library.

OK.  Thanks.

Marc.

+--+---+
|  Marc Aurele La France   |  work:   1-780-492-9310   |
|  Computing and Network Services  |  fax:1-780-492-1729   |
|  352 General Services Building   |  email:  [EMAIL PROTECTED]  |
|  University of Alberta   +---+
|  Edmonton, Alberta   |   |
|  T6G 2H1 | Standard disclaimers apply|
|  CANADA  |   |
+--+---+
XFree86 Core Team member.  ATI driver and X server internals.



Re: Server doesn't build for me (setjmp)

2003-03-03 Thread Mark Vojkovich
   I should note that setting HasGlibc21Sigsetjmp YES doesn't work
for me.  It complains about an undefined xf86setjmp when building
xf86sym.c.  Yes, I did make World.


Mark.



Re: Problems compiling XFree86-4.3.0

2003-03-03 Thread Binesh Bannerjee

On Sun, 2 Mar 2003, David Dawes wrote:

 On Sun, Mar 02, 2003 at 04:27:26PM -0500, Binesh Bannerjee wrote:
 
 Hi...
  I've been trying to compile XFree86-4.3.0 ... And, actually the
 _compile_ (make World) works for me. When I try to make install tho,
 I get this error: (PARTIAL... I'll put a link to the full error somewhere,
 and not consume bandwidth...)

 Did you change any build options from their defaults?

 Try 'make WORLDOPTS= World' and see where it stops, or search your World
 log file for the first error.  By default 'make World' will continue
 beyond errors.  I've never liked that behaviour personally, but it's
 easy to override by setting WORLDOPTS to be empty as above.

Cool! Thanks!

I'm with you on not liking that behaviour... I assumed that since make
World ran to completion, there were no errors. Turned out that the error
was that X assumes that cpp is in /usr/bin/cpp (which, since I removed the
RH gcc and friends and installed from source, was in /usr/local/bin/cpp).

I made /usr/bin/cpp a link to /usr/local/bin/cpp and everything ran
fine!

Thanks again!
Binesh


 David
 --
 David Dawes
 Release Engineer/Architect  The XFree86 Project
 www.XFree86.org/~dawes


- --
I am Bane, and I could kill you... But death would only end your agony,
and silence your shame... Instead I will simply break you...
-- Bane to Batman, Knightfall

PGP  Key: http://www.hex21.com/~binesh/binesh-public.asc
Key fingerprint = 421D B4C2 2E96 B8EE 7190  A0CF B42F E71C 7FC3 AD96
SSH2 Key: http://www.hex21.com/~binesh/binesh-ssh2.pub
SSH1 Key: http://www.hex21.com/~binesh/binesh-ssh1.pub
OpenSSH  Key: http://www.hex21.com/~binesh/binesh-openssh.pub


Questions on building XFree86?

2003-03-03 Thread Kendall Bennett
Hi Guys,

I just noticed that my XFree86 build seems to be building with debug info 
enabled. Is that the default when you do an install and 'make World'? How 
do you enable an optimised, no debug build by default? I want to build 
everything optimised by default and then switch to building just my 
module with debug info when I need it.

Also I have a few other questions about the internals of XFree86:

1. All mono bitmap data for glyphs etc appears to be stored in LSB format 
internally. A lot of PC hardware is MSB, and XAA bit twiddles the bits 
before passing it to the low level layers for hardware that is MSB only. 
I am wondering if there is a way to tell XFree86 to store internal 
bitmaps in MSB format instead so the native bitmaps will better match 
some hardware. Is that possible?

2. With the pixmap cache in offscreen memory, the way our driver is 
initialising it right now is that it is a large chunk of (x,y) 
addressable memory after the primary display buffer. Most modern cards 
can utilise linear memory which is more efficient as bitmaps can be more 
tighly packed and you can address the entire framebuffer (some cards 
cannot address a very large x,y coordinate space). Is it possible to use 
linear offscreen memory for the pixmap cache, or do we have to write our 
own pixmap cache handling code to make this work?

Thanks!

---
Kendall Bennett
Chief Executive Officer
SciTech Software, Inc.
Phone: (530) 894 8400
http://www.scitechsoft.com

~ SciTech SNAP - The future of device driver technology! ~



Re: Questions on building XFree86?

2003-03-03 Thread Mark Vojkovich
On Mon, 3 Mar 2003, Kendall Bennett wrote:

 Hi Guys,
 
 I just noticed that my XFree86 build seems to be building with debug info 
 enabled. Is that the default when you do an install and 'make World'? How 
 do you enable an optimised, no debug build by default? I want to build 
 everything optimised by default and then switch to building just my 
 module with debug info when I need it.
 
 Also I have a few other questions about the internals of XFree86:
 
 1. All mono bitmap data for glyphs etc appears to be stored in LSB format 
 internally. A lot of PC hardware is MSB, and XAA bit twiddles the bits 
 before passing it to the low level layers for hardware that is MSB only. 
 I am wondering if there is a way to tell XFree86 to store internal 
 bitmaps in MSB format instead so the native bitmaps will better match 
 some hardware. Is that possible?

   No, a lot of the server expects it to match the depth 1 XImage format.

 
 2. With the pixmap cache in offscreen memory, the way our driver is 
 initialising it right now is that it is a large chunk of (x,y) 
 addressable memory after the primary display buffer. Most modern cards 
 can utilise linear memory which is more efficient as bitmaps can be more 
 tighly packed and you can address the entire framebuffer (some cards 
 cannot address a very large x,y coordinate space). Is it possible to use 
 linear offscreen memory for the pixmap cache, or do we have to write our 
 own pixmap cache handling code to make this work?

   You'd have to write your own cache. 


Mark.



Re: Questions on building XFree86?

2003-03-03 Thread Kendall Bennett
Mark Vojkovich [EMAIL PROTECTED] wrote:

  1. All mono bitmap data for glyphs etc appears to be stored in LSB format 
  internally. A lot of PC hardware is MSB, and XAA bit twiddles the bits 
  before passing it to the low level layers for hardware that is MSB only. 
  I am wondering if there is a way to tell XFree86 to store internal 
  bitmaps in MSB format instead so the native bitmaps will better match 
  some hardware. Is that possible?
 
No alot of the server expects it to match the depth 1 Ximage format,

So I assume that we will just have to do the bit twiddling on cards that 
are MSB only? I guess we can live with that ;-)
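
For what it's worth, the per-byte swizzle involved is cheap enough; it is
something along these lines (just a sketch, not the actual XAA code):

/* Sketch only -- not the actual XAA code.  Reverse the bit order within
 * each byte of a monochrome bitmap span, converting between LSB-first
 * and MSB-first layouts. */
static unsigned char
ReverseBits(unsigned char b)
{
    b = (unsigned char)(((b & 0xF0) >> 4) | ((b & 0x0F) << 4));
    b = (unsigned char)(((b & 0xCC) >> 2) | ((b & 0x33) << 2));
    b = (unsigned char)(((b & 0xAA) >> 1) | ((b & 0x55) << 1));
    return b;
}

static void
ReverseSpan(unsigned char *bits, int nbytes)
{
    int i;
    for (i = 0; i < nbytes; i++)
        bits[i] = ReverseBits(bits[i]);
}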

  2. With the pixmap cache in offscreen memory, the way our driver is 
  initialising it right now is that it is a large chunk of (x,y) 
  addressable memory after the primary display buffer. Most modern cards 
  can utilise linear memory which is more efficient as bitmaps can be more 
  tighly packed and you can address the entire framebuffer (some cards 
  cannot address a very large x,y coordinate space). Is it possible to use 
  linear offscreen memory for the pixmap cache, or do we have to write our 
  own pixmap cache handling code to make this work?
 
You'd have to write your own cache. 

Ok. Are there any drivers that presently implement their own cache we can 
look at, or will we have to start this from scratch? We already have our 
own offscreen buffer manager code, so we just need to figure out how to 
hook it into XAA somehow.

Regards,

---
Kendall Bennett
Chief Executive Officer
SciTech Software, Inc.
Phone: (530) 894 8400
http://www.scitechsoft.com

~ SciTech SNAP - The future of device driver technology! ~



Re: Questions on building XFree86?

2003-03-03 Thread Mark Vojkovich
On Mon, 3 Mar 2003, Kendall Bennett wrote:

 Mark Vojkovich [EMAIL PROTECTED] wrote:
 
   1. All mono bitmap data for glyphs etc appears to be stored in LSB format 
   internally. A lot of PC hardware is MSB, and XAA bit twiddles the bits 
   before passing it to the low level layers for hardware that is MSB only. 
   I am wondering if there is a way to tell XFree86 to store internal 
   bitmaps in MSB format instead so the native bitmaps will better match 
   some hardware. Is that possible?
  
 No alot of the server expects it to match the depth 1 Ximage format,
 
 So I assume that we will just have to do the bit twiddling on cards that 
 are MSB only? I guess we can live with that ;-)

   There aren't really that many cards that are that way.  And
all the ones I can think of suck for other reasons (ie. only 
supporting the Microsoft bitmap format is the least of their problems).


 
   2. With the pixmap cache in offscreen memory, the way our driver is 
   initialising it right now is that it is a large chunk of (x,y) 
   addressable memory after the primary display buffer. Most modern cards 
   can utilise linear memory which is more efficient as bitmaps can be more 
   tighly packed and you can address the entire framebuffer (some cards 
   cannot address a very large x,y coordinate space). Is it possible to use 
   linear offscreen memory for the pixmap cache, or do we have to write our 
   own pixmap cache handling code to make this work?
  
 You'd have to write your own cache. 
 
 Ok. Are there any drivers that presently implement their own cache we can 
 look at, or will we have to start this from scratch? We already have our 
 own offscreen buffer manager code, so we just need to figure out how to 
 hook it into XAA somehow.

   It's a lot of work and none of the drivers in the tree do it.
All the functions needed to do that should be hookable though.
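
   Roughly, the driver-private piece would be a linear allocator over the
offscreen region, something like the following (a hypothetical sketch, not
an existing XAA interface):

/* Hypothetical sketch, not an existing XAA interface: hand out byte
 * offsets past the visible framebuffer instead of (x,y) rectangles. */
typedef struct {
    unsigned long start;   /* first byte past the visible screen */
    unsigned long end;     /* end of usable video memory */
    unsigned long next;    /* simple bump pointer */
} LinearPool;

static int
LinearAlloc(LinearPool *pool, unsigned long size, unsigned long *offset)
{
    if (pool->next + size > pool->end)
        return 0;                  /* out of offscreen memory */
    *offset = pool->next;
    pool->next += size;
    return 1;
}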


Mark.



Re: Questions on building XFree86?

2003-03-03 Thread Kendall Bennett
Mark Vojkovich [EMAIL PROTECTED] wrote:

  So I assume that we will just have to do the bit twiddling on cards that 
  are MSB only? I guess we can live with that ;-)
 
There aren't really that many cards that are that way.  And all
 the ones I can think of suck for other reasons (ie. only
 supporting the Microsoft bitmap format is the least of their
 problems). 

Yep, that is probably true. 

  Ok. Are there any drivers that presently implement their own cache we can 
  look at, or will we have to start this from scratch? We already have our 
  own offscreen buffer manager code, so we just need to figure out how to 
  hook it into XAA somehow.
 
It's alot of work and none of the drivers in the tree do it.
 All the functions needed to do that should be hookable though.

Ok thanks.

Regards,

---
Kendall Bennett
Chief Executive Officer
SciTech Software, Inc.
Phone: (530) 894 8400
http://www.scitechsoft.com

~ SciTech SNAP - The future of device driver technology! ~



Re: Server doesn't build for me (setjmp)

2003-03-03 Thread David Dawes
On Mon, Mar 03, 2003 at 08:48:12AM -0700, Marc Aurele La France wrote:
On Sun, 2 Mar 2003, David Dawes wrote:

 On Sat, Mar 01, 2003 at 08:27:49PM -0700, Marc Aurele La France wrote:
 On Sat, 1 Mar 2003, Mark Vojkovich wrote:

setjmp is a *macro* (for __sigsetjmp) defined in /usr/include/setjmp.h.
  This is libc 2.2. so it doesn't set HAS_GLIBC_SIGSETJMP.
  SYMCFUNCALIAS chokes on this.  This is gcc 2.95.3.

 I think the HAS_GLIBC_SIGSETJMP set logic is wrong.

 You've got glibc 2.2.1, I'll guess.  The #if's should be looking for glibc
  2.2.2, not 2.2, although a host.def override is available (see
 xfree86.cf).

 OK, so using the version macros in features.h isn't good enough here,
 and it has to be done with the imake LinuxCLib*Version parameters instead.

I don't think that's necessary.  It is simpler to #define HAS_GLIBC_SIGSETJMP
for all of glibc 2.2.*, which is, in part, what I'll be committing
shortly, after I iron out my libc5 problem.

So that would mean using __sigsetjmp(env, 0) on all glibc 2.x.  I guess
that has to work for compatibility reasons.  It's definitely cleaner
and lower-impact than trying to keep track of the two cases separately.
As we've seen, the host.def override was incompletely implemented in
4.3.0.

David
-- 
David Dawes
Release Engineer/Architect  The XFree86 Project
www.XFree86.org/~dawes


Re: HW/SW cursor switching broken?

2003-03-03 Thread Mark Vojkovich
On Mon, 3 Mar 2003, Mark Vojkovich wrote:

  
I just noticed that if I use a large root window cursor it doesn't 
 work anymore.  Actually I see a brief flash of it then it disappears.
 It looks to me like HW/SW cursor switching has broken.  Can someone
 else confirm this?
 

   It appears to be broken only if you say you support ARGB cursors.
I've seen pretty erratic behavior.  Sometimes it seems like SW/HW
cursor switching works and then I switch VTs and come back and it
doesn't work.  It looks like the SW cursor goes up but then something
promptly removes it.  Also, I've seen SetCursorColors called while
ARGB cursors are displayed.  This causes the nv driver to recolor
and install the last core cursor it saw. 

   I hope this is just because I haven't synced up in a few days
(doesn't build).


Mark.



Re: Multiple video consoles

2003-03-03 Thread David Dawes
On Mon, Mar 03, 2003 at 10:31:56AM +0100, Sven Luther wrote:
On Sun, Mar 02, 2003 at 11:28:24PM -0500, David Dawes wrote:
 On Sat, Mar 01, 2003 at 10:34:20AM +0100, Sven Luther wrote:
 On Fri, Feb 28, 2003 at 04:19:37PM -0500, David Dawes wrote:
  Are you speaking about the current 4.3.0 or the stuff you are working on ?
  
  What I was working on.
 
 Ok, ...
 
 I take it, there will be a 4.4.0 before 5.0 ?
 
 Most likely.

:))

  of scaling are either handled by a hardware scaler (that may or may not
  be visible to the XFree86 server and user), or by having something in
  XFree86 that keeps a second copy of the image that is scaled in software.
 
 Mmm, you are speaking of a hardware scaller in the LCD monitor ? 
 
 I'm talking about a scaler anywhere between where the resolution is
 programmed and the physical display.  For laptop-type displays it's easy
 -- it's in the video hardware.  For digital connections to LCD displays
 I'm not sure which side of the DVI connector it's normally located.  I
 just know that I've seen it work in that case without needing to do
 anything special as a user or as a driver writer.  I don't know whether
 the cases I've seen are common or unusual.  I haven't played with enough
 of these HW combinations to know.

Mmm, it may be something special in the bios of those laptops, or even
some hardwired functionality, but in my case i need to program it by
hand, and i guess other chips will need this too, so we may as well
think of it.

 Well, from my experience (i have a Sony SDM-X52, with both a DVI
 connector and a standard VGA connector) this doesn't seem to happen. If
 i request a mode lower than what the LCD can display, i get only
 garbage, at least on the DVI channel. I believe the VGA channel can do
 more advanced things, but didn't sucessfully use them. On the other
 hand, my graphic hardware can do arbitrary scaling of the framebuffer
 before passing it to the monitor, but i have to program it explicitly. I
 guess that this is used by the bios at startup to convert the 640x480
 text mode to something my monitor supports, since the fonts appear a bit
 blurry.
 
 It sounds like that in current cases the driver should handle this type
 of scaling transparently.  The only extension that might be relevant is
 to allow the viewport to be set to a range of sizes rather than discrete
 mode sizes (as happens now).

Well, i have to calculate the scaling factor from the source
(framebuffer) width/height and the destination (mode resolution)
width/height, that is why i ask for a more granular handling of this.
Currently, you can do :

Section Screen

  ...

  SubSection Display
Depth   8
Modes   1024x768 800x600 640x480
  EndSubSection
  SubSection Display
Depth   15
Modes   1024x768 800x600 640x480
  EndSubSection
  ...
EndSection

(Well, actually, i have only 1024x768, since that is what the monitor
supports.)

What would be nice, would be if :

 1) you could have only one line for all the depth/bpp, or a possibility
to have multiple depth/bpp per display section.

Yep.

 2) a way to tell the framebuffer/viewport sizes for each supported
resolution, something like :

  SubSection Display
Mode 1024x768
Viewport 0 0 1024 768
Viewport 0 0 800 600
Viewport 0 0 640 480
  EndSubSection

or maybe 

  SubSection Display
Framebuffer 1024 768
Modes 1024x768 800x600 640x480
  EndSubSection

Which would tell the driver that we only support outgoing resolution of
1024x768, but that framebuffer resolution of 1024x768, 800x600, and
640x480 are ok, and that we should scale from them to the 1024x768 one.
Maybe the syntax is not the best, but you get the idea.

Actually, I don't understand what you're trying to do that can't be done
already.  The user shouldn't care that the panel is 1024x768 (other than
that it's the max available mode resolution).  The driver should figure
that out and take care of scaling the user's 800x600 mode request to
the physical output size of 1024x768.  As a user, when I specify 800x600,
I just want the physical screen to display an 800x600 pixel area on the
full screen.  I don't care if it's an 800x600 physical output mode or
if the 800x600 is scaled to some other physical output resolution.

The only new feature I see is that arbitrary scaling allows a potentially
much finer set of mode sizes than we're currently used to, and this
would be very useful for allowing real-time zooming and tracking windows
(including resizes).  That can be done with most modern CRTs too (with
some horizontal granularity limits), but I imagine that zooming would
be more seamless with the scaler method than implementing it with
CRT resolution changes.

I could do this by using an outgoing resolution size in the device specific
section, but this would not work best, since all the logic doing the
mode setting now is done for the resolution in the display setting.

Can the driver query the panel's size?  

Re: Problems compiling XFree86-4.3.0

2003-03-03 Thread Kurt Wall
Feigning erudition, David Dawes wrote:

[evils of make -k]

% We could change the default, and let those who like the current behaviour
% run 'make WORLDOPTS=-k'.  Since the original reasons for this are less
% valid now (builds are much faster than they once were), and since it
% catches a lot of people out, maybe now is a good time to change the
% default.  Would anyone object strongly to that?

Heck, no. I object much more strongly to make -k and the attendant
confusion.

% I made a link to /usr/local/bin/cpp in /usr/bin/cpp and everything ran
% fine!
% 
% Maybe it'd be better to have cpp set to just 'cpp'?  It used to have a
% full path because it used to be set to /lib/cpp, which isn't likely to
% be in anyone's search path.  BTW, the /bin/cpp setting broke my RH 5.2
% test build until I set it to /lib/cpp in host.def.  I don't remember
% why it was changed from /lib/cpp, unless recent Linux distros don't have
% that link anymore.

/lib/cpp was removed because the FHS decrees something to the effect
that /lib should contain only those libraries necessary to boot the
system. /usr/bin has, likewise, been decreed the place to drop most
user-visible binaries, such as cpp.

Kurt
-- 
Take it easy, we're in a hurry.



Re: HW/SW cursor switching broken?

2003-03-03 Thread Marc Aurele La France
On Mon, 3 Mar 2003, Mark Vojkovich wrote:

 I just noticed that if I use a large root window cursor it doesn't
  work anymore.  Actually I see a brief flash of it then it disappears.
  It looks to me like HW/SW cursor switching has broken.  Can someone
  else confirm this?

It appears to be broken only if you say you support ARGB cursors.
 I've seen pretty erratic behavior.  Sometimes it seems like SW/HW
 cursor switching works and then I switch VTs and come back and it
 doesn't work.  It looks like the SW cursor goes up but then something
 promptly removes it.  Also, I've seen SetCursorColors called while
 ARGB cursors are displayed.  This causes the nv driver to recolor
 and install the last core cursor it saw.

Do you say you support both ARGB and traditional hardware cursors?  If you
do, you shouldn't be seeing any software cursors.

I hope this is just because I haven't synced up in a few days

Possible.  There was an eleventh-hour fix that went in.

 (doesn't build).

Soon.  Soon.

Marc.

+--+---+
|  Marc Aurele La France   |  work:   1-780-492-9310   |
|  Computing and Network Services  |  fax:1-780-492-1729   |
|  352 General Services Building   |  email:  [EMAIL PROTECTED]  |
|  University of Alberta   +---+
|  Edmonton, Alberta   |   |
|  T6G 2H1 | Standard disclaimers apply|
|  CANADA  |   |
+--+---+
XFree86 Core Team member.  ATI driver and X server internals.



Re: HW/SW cursor switching broken?

2003-03-03 Thread Mark Vojkovich
On Mon, 3 Mar 2003, Marc Aurele La France wrote:

 On Mon, 3 Mar 2003, Mark Vojkovich wrote:
 
  I just noticed that if I use a large root window cursor it doesn't
   work anymore.  Actually I see a brief flash of it then it disappears.
   It looks to me like HW/SW cursor switching has broken.  Can someone
   else confirm this?
 
 It appears to be broken only if you say you support ARGB cursors.
  I've seen pretty erratic behavior.  Sometimes it seems like SW/HW
  cursor switching works and then I switch VTs and come back and it
  doesn't work.  It looks like the SW cursor goes up but then something
  promptly removes it.  Also, I've seen SetCursorColors called while
  ARGB cursors are displayed.  This causes the nv driver to recolor
  and install the last core cursor it saw.
 
 Do you say you support both ARGB and traditional hardware cursors?  If you
 you shouldn't be seeing any software cursors.

   If you use xsetroot to display a large enough cursor it will fall back
to software.  Escherknot should fall back to software on all hardware.
This all used to work fine.  I found that sometimes the HW/SW switching
works OK and then when I switch VTs and switch back the SW cursor
will display but gets removed (I see the flash).  This is the core
SW cursor not the ARGB SW cursor, though I haven't tried ARGB SW
cursors (I forgot how to set one as the root cursor).  

   Looking through the code, I can see that SetCursorColors gets
called while ARGB cursors are up.  This might not be harmful in
most drivers.  The nv driver, however, doesn't support 1 bpp
cursors but only 16 and 32 bpp cursors so this is its prompt to
to recolor and reinstall the last 1 bpp cursor it saw.  I guess I'll
have to set a flag in the driver and ignore the SetCursorColors
request when it's called while an ARGB cursor is displayed.
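
   Something like this, presumably (a sketch only; the driver-private
structure and names are made up):

/* Sketch only; the private structure and names are made up.  Remember
 * whether the currently loaded cursor is an ARGB one, and have
 * SetCursorColors do nothing in that case instead of recoloring and
 * reinstalling the last core cursor. */
#include "xf86.h"
#include "xf86Cursor.h"
#include "cursorstr.h"

typedef struct {
    Bool argbCursorLoaded;
    /* ... the rest of the driver's private state ... */
} ExampleCursorPriv;

static void
ExampleLoadCursorARGB(ScrnInfoPtr pScrn, CursorPtr pCurs)
{
    ExampleCursorPriv *priv = (ExampleCursorPriv *) pScrn->driverPrivate;

    priv->argbCursorLoaded = TRUE;
    /* (the core LoadCursorImage hook would clear this flag again) */
    /* ... convert and upload the ARGB image to the hardware ... */
}

static void
ExampleSetCursorColors(ScrnInfoPtr pScrn, int bg, int fg)
{
    ExampleCursorPriv *priv = (ExampleCursorPriv *) pScrn->driverPrivate;

    if (priv->argbCursorLoaded)
        return;          /* ignore recoloring while an ARGB cursor is up */
    /* ... recolor and reinstall the core cursor as before ... */
}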


 
 I hope this is just because I haven't synced up in a few days
 
 Possible.  There was an eleventh-hour fix that went in.

I'm anxious to try it again when I can get it to build.


Mark.
 
  (doesn't build).
 
 Soon.  Soon.
 
 Marc.
 
 +--+---+
 |  Marc Aurele La France   |  work:   1-780-492-9310   |
 |  Computing and Network Services  |  fax:1-780-492-1729   |
 |  352 General Services Building   |  email:  [EMAIL PROTECTED]  |
 |  University of Alberta   +---+
 |  Edmonton, Alberta   |   |
 |  T6G 2H1 | Standard disclaimers apply|
 |  CANADA  |   |
 +--+---+
 XFree86 Core Team member.  ATI driver and X server internals.
 
 



Re: HW/SW cursor switching broken?

2003-03-03 Thread Keith Packard
Around 0 o'clock on Mar 4, Mark Vojkovich wrote:

 This is the core SW cursor not the ARGB SW cursor, though I haven't tried
 ARGB SW cursors (I forgot how to set one as the root cursor).

$ XCURSOR_THEME=redglass XCURSOR_SIZE=256 xsetroot -cursor_name shuttle

 I guess I'll
 have to set a flag in the driver and ignore the SetCursorColors
 request when it's called while an ARGB cursor is displayed.

The radeon driver already has such a flag.  Perhaps we should put code 
into the hw cursor layer as well (in case a future driver forgets).

One issue here is that cursors sent in ARGB format which are actually two 
color cursors get mapped by the extension to core cursors, and so 
RecolorCursor actually has an effect on them.  I think this is a bug which 
should get fixed up in DIX land though.

-keith




bug in xset / xfree86-4 when using more than one mouse.

2003-03-03 Thread Jon Gabrielson
The command:

xset m 2/1

only affects the primary mouse.

It should either affect both, or preferably have an option
to specify which mouse to alter.
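
For context, 'xset m' boils down to the core-protocol pointer control
request, which only knows about the single core pointer -- roughly the
call below.  Controlling a second mouse separately would presumably have
to go through the XInput extension instead.

/* Sketch: what "xset m 2/1" amounts to at the Xlib level.  The core
 * ChangePointerControl request only addresses the core pointer, which
 * is why a second mouse is unaffected. */
#include <X11/Xlib.h>

int
main(void)
{
    Display *dpy = XOpenDisplay(NULL);

    if (!dpy)
        return 1;

    /* acceleration 2/1; leave the threshold alone */
    XChangePointerControl(dpy, True, False, 2, 1, 0);

    XCloseDisplay(dpy);
    return 0;
}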

If this is the wrong list to post this to, please advise.


Thanks,


Jon.






combine showkey xev

2003-03-03 Thread Robert Woerle Paceblade/Support
Hi

I have a nice one:

I am using a TABLET PC, and there we have multiple extra buttons, both on
the keyboard and on the unit itself.
Now when I use xev to determine the keycodes, I get different ones for
each button on the keyboard, but 2 buttons on the unit itself give me the
same code as I get from UP and from HOME.

When I then use showkey, I get different codes for those 2 buttons on
the unit :-) but now the extra buttons on the keyboard don't send
anything!

How can I tell X to combine these two cases so I get a different code
for each button?

Rob
