Re: Multiple video consoles

2003-03-07 Thread Sven Luther
On Fri, Mar 07, 2003 at 12:31:18PM +, Dr Andrew C Aitchison wrote:
 On Fri, 7 Mar 2003, Sven Luther wrote:
 
  I don't really agree here: modes are for the outgoing resolution, not
  the input viewport. It would be far simpler to keep this simple
  acceptation, and add a new keyword for defining the input viewport.
 
 Have you looked at the Stretch option on say the NeoMagic driver ?
 I have a 1024x768 laptop display, and by default (i.e. unless I use
 Option "NoStretch") all modes are stretched to fill the screen.
 Thus the modes (and modelines) describe the viewport size, not the
 output resolution.

Interesting, I suppose the scaling is also done in the driver then; I
will have a look at how it works when I get some free time.

I wonder how the driver knows what the laptop display size is? Do you
specify it, or does the monitor tell the driver about it via DDC?

 So I don't agree with your description of what the words currently mean.
 Using viewport to describe the visible pixels of the 
 framebuffer and modes to describe the pixels of the monitor would be
 logical and consistent, but it does mean a change from the way modes
 is considered now.

Well, if you consider that the size given for the modes and the size of
the framebuffer are almost always exactly the same, you can hardly argue
that using modes for the framebuffer size is what most people think of
when they hear of modes.

Also, you have to consider how things work out from the driver
internals.

There is the DisplayModePtr mode, which, as its name says, is for the
outgoing mode and has all the monitor timings. On the other hand, the
viewport source position and size are given by pScrn->frame{X,Y}{0,1},
which I suppose are calculated from the viewport (usually 0,0) and the
size of the current mode. Other framebuffer info includes the
displayWidth (given by the virtual size, I guess) and the pixel depth.

So, we can do it in two ways:

  1) As I said, we simply add the size to the Viewport keyword, which
  would be used to generate pScrn->frame{X,Y}{0,1}. No further driver
  changes are needed, apart from setting the appropriate scaling factor,
  or rejecting scaled modes if scaling is not allowed.

  2) We do it the other way: we use the mode info to mean the viewport
  source size. There is then no way to set the real outgoing mode, so
  you can only hope that the monitor provides the real data (unless you
  add a supported-resolutions option to the monitor entry). And even
  then, you have to calculate the new outgoing mode, and there is no
  practical way for the user to specify its exact timing. Actually,
  there is: I suppose you could use a Supported line in the Monitor
  section listing the supported modes.

Both solutions have advantages and disadvantages. I personally think
that 1) is better, especially if you want to do more advanced stuff
later on, like zooming on windows (you would just call AdjustFrame each
time the window is moved). It is also the one that needs the fewest
overall changes.

Friendly,

Sven Luther
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Multiple video consoles

2003-03-07 Thread Dr Andrew C Aitchison
On Fri, 7 Mar 2003, Sven Luther wrote:

 On Fri, Mar 07, 2003 at 12:31:18PM +, Dr Andrew C Aitchison wrote:
  On Fri, 7 Mar 2003, Sven Luther wrote:
  
   I don't really agree here: modes are for the outgoing resolution, not
   the input viewport. It would be far simpler to keep this simple
   acceptation, and add a new keyword for defining the input viewport.
  
  Have you looked at the Stretch option on say the NeoMagic driver ?
  I have a 1024x768 laptop display, and by default (i.e. unless I use
  Option "NoStretch") all modes are stretched to fill the screen.
  Thus the modes (and modelines) describe the viewport size, not the
  output resolution.
 
 Interesting, I suppose the scaling is also done in the driver then; I
 will have a look at how it works when I get some free time.
 
 I wonder how the driver knows what the laptop display size is? Do you
 specify it, or does the monitor tell the driver about it via DDC?

The driver gets it from the graphics chip.
DDC info on these systems comes from an external monitor if one is
connected. DDC for the builtin screen does not exist.

-- 
Dr. Andrew C. Aitchison Computer Officer, DPMMS, Cambridge
[EMAIL PROTECTED]   http://www.dpmms.cam.ac.uk/~werdna



Re: Multiple video consoles

2003-03-07 Thread Michel Dänzer
On Fri, 2003-03-07 at 14:48, Dr Andrew C Aitchison wrote:
 On Fri, 7 Mar 2003, Sven Luther wrote:
 
  I wonder how the driver knows what the laptop display size is? Do you
  specify it, or does the monitor tell the driver about it via DDC?
 
 The driver gets it from the graphics chip.
 DDC info on these systems comes from an external monitor if one is 
 connected. DDC for the builtin screen does not exist.

It does here. There are other methods to fall back to though.


-- 
Earthling Michel Dänzer (MrCooper)/ Debian GNU/Linux (powerpc) developer
XFree86 and DRI project member   /  CS student, Free Software enthusiast



Re: Multiple video consoles

2003-03-06 Thread Sven Luther
On Thu, Mar 06, 2003 at 12:27:41PM -0500, David Dawes wrote:
 On Tue, Mar 04, 2003 at 10:41:50AM +0100, Sven Luther wrote:
 
  I strongly advocate that you take into account such a separation of
  the outgoing resolution and the framebuffer size in any future
  configuration scheme.
  
  We already have the Virtual size, which is the framebuffer size, and
  allows it to be separated from the viewport (mode) sizes.  I don't think
  the outgoing resolution belongs in the Screen/Display sections.  It
  should be between the Monitor data and the driver, with the driver using
  this information to determine the maximum viewport (mode) size allowable.
 
 Yes, but consider how the current display section works.
 
 You use the mode to specify outgoing resolution, but apart from the
 
 That's one way to look at it.  Another way to look at it is that you
 use the mode to specify the viewport size and you don't really care
 about how that gets implemented.  In the CRT case, both the viewport
 and outgoing resolution happen to be the same, so there is currently no
 distinction between these two.  I think that the latter interpretation
 more closely matches what the user would expect when moving from a CRT
 display to an LCD display, and that's how things appear to be handled
 with most video BIOS and Windows drivers at present.

But the mode contains more information than is needed (the exact
timings), which will not be used. And this may be confusing.

 It's imaginable that there might be displays that one day support multiple
 outgoing resolutions as well as using a scaler.  It's also imaginable
 that displays will get smarter, and automatically take care of whatever
 resolution data the video card sends to it (as most CRT screens do
 today).  I'd suspect the latter given how things have developed in the
 past.

I don't know; I have the impression that this technology will more
likely be part of the video card, and not the monitor, but that may be
just me. I believe that the video cards used in laptops also do the
scaling if needed; from a comment I read on the linux-fbdev mailing
list, it seems that the fbdev drivers also do the scaling themselves.

 But rather than speculating too much, it would be useful to do some
 research into current and developing standards/technology in this area.

That would be useful, yes.

 builtin mode, there is no guarantee that the string used for the modes
 even corresponds to said resolution. Users are used to this, but if we
 are going to do scaling, it really doesn't make sense to use 800x600 as
 the mode name when what you really want to say is that you want a
 resolution of 800x600.
 
 The parameters of the mode determine the resolution, not the name.

Exactly, and the mode has much more info than is needed for setting
a viewport.

 However, a useful extension would be to place a special interpretation
 on mode names that fit a regular format (e.g., <xres>x<yres>@<refresh>).

Yes, and these are what the monitors tell the card through DDC anyway.

 For CRT output, the VESA GTF can be used to construct matching timings.
 For DVI output, the driver uses the resolution parameters to calculate
 the scaling.

You see, again, you are speaking in video modes, but we want a
framebuffer size. What does the refresh have in common with the
framebuffer size? It can evidently not be used to refer to the outgoing
mode, which will have different timing parameters than what your
<xres>x<yres>@<refresh> suggests.

 Also, if you still want to use a virtual screen bigger than the actual
 one, you still would need to specify the viewport.
 
   SubSection Display
 Virtual 1600 1200
 Mode 1024x768 (outgoing mode).
 Resolution 1280 1024
 Resolution 1024 768
 Resolution 800 600
 Resolution 640 480
   EndSubSection
 
 This way, we would have a 1600x1200 virtual screen, an outgoing
 resolution of 1024x768, which could be specified in the monitor
 section, and resolutions of 640x480 up to 1280x1024.
 
 Sure, you could also use the modes, but you would give too much info;
 after all, you would only need the size of the mode, and not the rest
 of it.
 
 For built-in modes, you only need to give the size now.  With an extended
 interpretation for mode names as I suggested above, that would be the case
 for any mode size.

For the outgoing monitor timings, yes i agree.

I don't know; I still think that it would be best if we could separate
the information as follows:

  1) Information on the framebuffer: virtual size, and viewport size and
     position. If we have a shared framebuffer, then the virtual size is
     common to each head. Depth and bpp information also goes here.

  2) Information on the outgoing modes. This is taken from a list of
     builtin modes, or better yet from the data that the monitor sends
     back through the DDC channel.
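In config-file terms, that split might look something like this. This is
a hypothetical syntax sketch in the style of the examples elsewhere in
the thread; the Supported keyword does not exist today:

```
Section "Screen"
  SubSection "Display"
    Depth    24
    Virtual  1600 1200        # framebuffer size, shared between heads
    Viewport 0 0 1024 768     # viewport position and size
  EndSubSection
EndSection

Section "Monitor"
  # outgoing modes: builtin list, DDC data, or an explicit override
  Supported "1024x768" "800x600"
EndSection
```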

And further, we would separate the information on the chips (the device
section) and the screens, as in modern chips, a part of the
configuration for both 

Re: Multiple video consoles

2003-03-06 Thread David Dawes
On Thu, Mar 06, 2003 at 07:01:35PM +0100, Sven Luther wrote:
On Thu, Mar 06, 2003 at 12:27:41PM -0500, David Dawes wrote:
 On Tue, Mar 04, 2003 at 10:41:50AM +0100, Sven Luther wrote:
 
  I strongly advocate that you take into account such a separation of
  the outgoing resolution and the framebuffer size in any future
  configuration scheme.
  
  We already have the Virtual size, which is the framebuffer size, and
  allows it to be separated from the viewport (mode) sizes.  I don't think
  the outgoing resolution belongs in the Screen/Display sections.  It
  should be between the Monitor data and the driver, with the driver using
  this information to determine the maximum viewport (mode) size allowable.
 
 Yes, but consider how the current display section works.
 
 You use the mode to specify outgoing resolution, but apart from the
 
 That's one way to look at it.  Another way to look at it is that you
 use the mode to specify the viewport size and you don't really care
 about how that gets implemented.  In the CRT case, both the viewport
 and outgoing resolution happen to be the same, so there is currently no
 distinction between these two.  I think that the latter interpretation
 more closely matches what the user would expect when moving from a CRT
 display to an LCD display, and that's how things appear to be handled
 with most video BIOS and Windows drivers at present.

But the mode contains more information than is needed (the exact
timings), which will not be used. And this may be confusing.

 It's imaginable that there might be displays that one day support multiple
 outgoing resolutions as well as using a scaler.  It's also imaginable
 that displays will get smarter, and automatically take care of whatever
 resolution data the video card sends to it (as most CRT screens do
 today).  I'd suspect the latter given how things have developed in the
 past.

I don't know; I have the impression that this technology will more
likely be part of the video card, and not the monitor, but that may be
just me. I believe that the video cards used in laptops also do the
scaling if needed; from a comment I read on the linux-fbdev mailing
list, it seems that the fbdev drivers also do the scaling themselves.

 But rather than speculating too much, it would be useful to do some
 research into current and developing standards/technology in this area.

That would be useful, yes.

 builtin mode, there is no guarantee that the string used for the modes
 even corresponds to said resolution. Users are used to this, but if we
 are going to do scaling, it really doesn't make sense to use 800x600 as
 the mode name when what you really want to say is that you want a
 resolution of 800x600.
 
 The parameters of the mode determine the resolution, not the name.

Exactly, and the mode has much more info than is needed for setting
a viewport.

It doesn't matter what extra information is there.  It only matters that
you have enough, and that the user doesn't need to specify more than is
needed.  In 99% of cases the user only specifies mode names these days
(in an <xres>x<yres> format), not all the parameters.

 However, a useful extension would be to place a special interpretation
 on mode names that fit a regular format (e.g., <xres>x<yres>@<refresh>).

Yes, and these are what the monitors tell the card through DDC anyway.

 For CRT output, the VESA GTF can be used to construct matching timings.
 For DVI output, the driver uses the resolution parameters to calculate
 the scaling.

You see, again, you are speaking in video modes, but we want a

Only because that's how viewports are typically implemented with CRT
devices.  I don't see a good reason to treat things differently depending
on whether the viewport is implemented with CRT modes or a scaler in
the video chip.

framebuffer size. What does the refresh have in common with the
framebuffer size ? It can evidently not be used to refer to the outgoing

Nothing, so ignore it.  Supplying it would be optional anyway.

mode, which will have different timing parameters than what your
<xres>x<yres>@<refresh> suggests.

It seems that it has fixed timing parameters, so there's really no reason
for the user to need to know about them.  If the outgoing mode's refresh
rate is usefully variable, then the refresh parameter becomes useful
again.  Use the parameters that are useful, and ignore the others.  As
I've said, you only care that you have enough parameters for what you
need, and that the user doesn't need to supply more parameters than you
need.

There are existing cases where only a subset of the available timing
parameters are used.  The driver for the Intel 830M and later uses the
video BIOS to set the video modes, and the mode programmed is based
solely on the X, Y and refresh values, and not on the detailed timing
parameters.  The vesa driver (for VBE < 3) uses only the X and Y
parameters.

Note also that the RandR extension deals with these three parameters
also, with refresh being optional.  And 

Re: Multiple Video Consoles

2003-03-05 Thread Alex Deucher
Jonathan,

   could you also post your XF86Config file?  I have some ideas on how
to extend this.  It's still kind of a hack, but here goes:

add an option to the radeon driver, say MergedFB or something like
that.  when that option is set to TRUE, it would skip the sections of
code that you have commented out.

next add sub options for MergedFB  like:

Option "MFB-Xres" "2048"
Option "MFB-Yres" "768"

these would set the virtualX and Y instead of having to hardcode them.

it's still hacky, but it would clean things up a bit and allow run-time
configuration.

Alex

--- Jonathan Thambidurai [EMAIL PROTECTED] wrote:
   I posted the following message to the DRI-devel lists the day before
 yesterday and was told it might be of interest to this discussion. 
 Additionally, I have attached some diffs, contrary to what is said as
 follows.
 
 
 I am pleased to report that thanks to the guidance Jens Owens gave in a
 previous message, I have made 3D work on two heads simultaneously (IIRC,
 the ATI Windows XP drivers didn't do this).  I have not provided a diff
 because it is quite a hack and very system specific, at the moment.
 Effectively, I forced the virtual size to be 2048x768, hacked the
 RADEONDoAdjustFrame() function to fix views as I wanted them, used the
 default cloning stuff to set up the second monitor, and removed all the
 conditionals that were preventing dual-head+DRI from working.  I had to
 enable Xinerama (even though I have only one screen in the server setup)
 in the config file; otherwise, the desktop would end at 1024 instead of
 2048.  The problem I mentioned in a previous post -- not enough memory
 for direct rendering w/ two screens -- was solved when I set it to 16
 bpp.  Does anyone have any ideas for a more elegant implementation of
 this functionality, especially where the config file is concerned?  This
 is the first real code I have done in the Xserver and any input would be
 appreciated.
 
 --Jonathan Thambidurai
 
 p.s. If there is something strange about the diffs, please tell me; it
 is the first time I generated any.
 --- /usr/local/src/XFree86.current/xc/programs/Xserver/GL/dri/dri.c  2002-12-05 10:26:57.0 -0500
 +++ dri.c  2003-03-03 18:29:30.0 -0500
 @@ -137,13 +137,13 @@
  #endif
  
  #if defined(PANORAMIX) || defined(XFree86LOADER)
 -    if (xineramaInCore) {
 -        if (!noPanoramiXExtension) {
 -            DRIDrvMsg(pScreen->myNum, X_WARNING,
 -                "Direct rendering is not supported when Xinerama is enabled\n");
 -            return FALSE;
 -        }
 -    }
 +/*    if (xineramaInCore) { */
 +/*        if (!noPanoramiXExtension) { */
 +/*            DRIDrvMsg(pScreen->myNum, X_WARNING, */
 +/*                "Direct rendering is not supported when Xinerama is enabled\n"); */
 +/*            return FALSE; */
 +/*        } */
 +/*    } */
  #endif
  
      drmWasAvailable = drmAvailable();
 
 --- /usr/local/src/XFree86.current/xc/programs/Xserver/hw/xfree86/drivers/ati/radeon_driver.c  2003-02-04 20:48:27.0 -0500
 +++ radeon_driver.c  2003-03-03 19:16:23.0 -0500
 @@ -2754,24 +2754,29 @@
      xf86SetCrtcForModes(pScrn, 0);
  
      /* We need to adjust virtual size if the clone modes have larger
 -     * display size.
 +     * display size. JDTHAX04: hardcoding large virtual area
       */
      if (info->Clone && info->CloneModes) {
          DisplayModePtr clone_mode = info->CloneModes;
          while (1) {
 -            if ((clone_mode->HDisplay > pScrn->virtualX) ||
 -                (clone_mode->VDisplay > pScrn->virtualY)) {
 -                pScrn->virtualX =
 -                    pScrn->display->virtualX = clone_mode->HDisplay;
 -                pScrn->virtualY =
 -                    pScrn->display->virtualY = clone_mode->VDisplay;
 -                RADEONSetPitch(pScrn);
 -            }
 +/*            if ((clone_mode->HDisplay > pScrn->virtualX) || */
 +/*                (clone_mode->VDisplay > pScrn->virtualY)) { */
 +/*                pScrn->virtualX = */
 +/*                    pScrn->display->virtualX = clone_mode->HDisplay; */
 +/*                pScrn->virtualY = */
 +/*                    pScrn->display->virtualY = clone_mode->VDisplay; */
 +/*                RADEONSetPitch(pScrn); */
 +/*            } */
              if (!clone_mode->next) break;
              clone_mode = clone_mode->next;
          }
      }
  
 +    pScrn->virtualX = pScrn->display->virtualX = 2048;
 +    pScrn->virtualY = pScrn->display->virtualY = 768;
 +    RADEONSetPitch(pScrn);
 +    xf86DrvMsg(pScrn->scrnIndex, X_NOTICE,
 +               "JDT HACK WORKING\n");
      pScrn->currentMode = pScrn->modes;
      xf86PrintModes(pScrn);
  
 @@ -3463,18 +3468,18 @@
          info->directRenderingEnabled = FALSE;
      else {
          /* Xinerama has sync problem with DRI, disable it for now */
 -        if (xf86IsEntityShared(pScrn->entityList[0])) {
 -            info->directRenderingEnabled = FALSE;
 -            xf86DrvMsg(scrnIndex, X_WARNING,
 -                       "Direct Rendering Disabled -- "
 -                       "Dual-head configuration is not 

Re: Multiple video consoles

2003-03-04 Thread Sven Luther
On Mon, Mar 03, 2003 at 09:46:40PM -0500, David Dawes wrote:
  2) a way to tell the framebuffer/viewport sizes for each supported
 resolution, something like :
 
   SubSection Display
 Mode 1024x768
 Viewport 0 0 1024 768
 Viewport 0 0 800 600
 Viewport 0 0 640 480
   EndSubSection
 
 or maybe 
 
   SubSection Display
 Framebuffer 1024 768
 Modes 1024x768 800x600 640x480
   EndSubSection

Erm, this is the other way around: the Modes give the framebuffer size,
and not the other way around, so this one wouldn't work.

 Which would tell the driver that we only support an outgoing resolution
 of 1024x768, but that framebuffer resolutions of 1024x768, 800x600, and
 640x480 are ok, and that we should scale from them to the 1024x768 one.
 Maybe the syntax is not the best, but you get the idea.
 
 Actually, I don't understand what you're trying to do that can't be done
 already.  The user shouldn't care that the panel is 1024x768 (other than
 that it's the max available mode resolution).  The driver should figure
 that out and take care of scaling the user's 800x600 mode request to
 the physical output size of 1024x768.  As a user, when I specify 800x600,
 I just want the physical screen to display an 800x600 pixel area on the
 full screen.  I don't care if it's an 800x600 physical output mode or
 if the 800x600 is scaled to some other physical output resolution.


Yes, but we need to change the way we calculate the output mode, and
use the fixed resolution, autodetected or given by a monitor option like
you propose below.

 The only new feature I see is that arbitrary scaling allows a potentially
 much finer set of mode sizes than we're currently used to, and this
 would be very useful for allowing real-time zooming and tracking windows
 (including resizes).  That can be done with most modern CRTs too (with
 some horizontal granularity limits), but I imagine that zooming would
 be more seamless with the scaler method than implementing it with
 CRT resolution changes.

Yes.

 I could do this by using an outgoing resolution size in the device specific
 section, but this would not work best, since all the logic doing the
 mode setting now is done for the resolution in the display setting.
 
 Can the driver query the panel's size?  If it can't, then it needs to
 be supplied somewhere.  It could be a new Option in the Monitor section
 that the driver checks for.  It would be best if the driver can auto-detect
 it though.

I guess it can; DDC should be able to provide that, but I haven't gotten
there yet. Anyway, some monitors may have broken DDC, so better to think
of an Option for it; the Monitor section would be the best place for it.

 I strongly advocate that you take into account such a separation of the
 outgoing resolution and the framebuffer size in any future configuration
 scheme.
 
 We already have the Virtual size, which is the framebuffer size, and
 allows it to be separated from the viewport (mode) sizes.  I don't think
 the outgoing resolution belongs in the Screen/Display sections.  It
 should be between the Monitor data and the driver, with the driver using
 this information to determine the maximum viewport (mode) size allowable.

Yes, but consider how the current display section works.

You use the mode to specify outgoing resolution, but apart from the
builtin mode, there is no guarantee that the string used for the modes
even corresponds to said resolution. Users are used to this, but if we
are going to do scaling, it really doesn't make sense to use 800x600 as
the mode name when what you really want to say is that you want a
resolution of 800x600.

Also, if you still want to use a virtual screen bigger than the actual
one, you still would need to specify the viewport.

  SubSection Display
Virtual 1600 1200
Mode 1024x768 (outgoing mode).
Resolution 1280 1024
Resolution 1024 768
Resolution 800 600
Resolution 640 480
  EndSubSection

This way, we would have a 1600x1200 virtual screen, an outgoing
resolution of 1024x768, which could be specified in the monitor
section, and resolutions of 640x480 up to 1280x1024.

Sure, you could also use the modes, but you would give too much info;
after all, you would only need the size of the mode, and not the rest
of it.

  Some of the users of your driver probably will, so it's worth testing
  with it.
 
 Sure, but, err, it's proprietary software I have no access to, right?
 
 It never hurts to ask for a copy as a driver developer.  The worst they
 can say is no.  I find vmware very useful personally as well as for
 XFree86-related stuff (especially multi-platform build testing).

OK, I will be asking them.

Friendly,

Sven Luther


Re: Multiple video consoles

2003-03-03 Thread Sven Luther
On Sun, Mar 02, 2003 at 11:28:24PM -0500, David Dawes wrote:
 On Sat, Mar 01, 2003 at 10:34:20AM +0100, Sven Luther wrote:
 On Fri, Feb 28, 2003 at 04:19:37PM -0500, David Dawes wrote:
  Are you speaking about the current 4.3.0 or the stuff you are working on ?
  
  What I was working on.
 
 Ok, ...
 
 I take it, there will be a 4.4.0 before 5.0 ?
 
 Most likely.

:))

  of scaling are either handled by a hardware scaler (that may or may not
  be visible to the XFree86 server and user), or by having something in
  XFree86 that keeps a second copy of the image that is scaled in software.
 
 Mmm, you are speaking of a hardware scaler in the LCD monitor?
 
 I'm talking about a scaler anywhere between where the resolution is
 programmed and the physical display.  For laptop-type displays it's easy
 -- it's in the video hardware.  For digital connections to LCD displays
 I'm not sure which side of the DVI connector it's normally located.  I
 just know that I've seen it work in that case without needing to do
 anything special as a user or as a driver writer.  I don't know whether
 the cases I've seen are common or unusual.  I haven't played with enough
 of these HW combinations to know.

Mmm, it may be something special in the BIOS of those laptops, or even
some hardwired functionality, but in my case I need to program it by
hand, and I guess other chips will need this too, so we may as well
think about it.

 Well, from my experience (I have a Sony SDM-X52, with both a DVI
 connector and a standard VGA connector) this doesn't seem to happen. If
 I request a mode lower than what the LCD can display, I get only
 garbage, at least on the DVI channel. I believe the VGA channel can do
 more advanced things, but I didn't successfully use them. On the other
 hand, my graphics hardware can do arbitrary scaling of the framebuffer
 before passing it to the monitor, but I have to program it explicitly.
 I guess that this is used by the BIOS at startup to convert the 640x480
 text mode to something my monitor supports, since the fonts appear a
 bit blurry.
 
 It sounds like that in current cases the driver should handle this type
 of scaling transparently.  The only extension that might be relevant is
 to allow the viewport to be set to a range of sizes rather than discrete
 mode sizes (as happens now).

Well, I have to calculate the scaling factor from the source
(framebuffer) width/height and the destination (mode resolution)
width/height; that is why I ask for more granular handling of this.
Currently, you can do:

Section Screen

  ...

  SubSection Display
Depth   8
Modes   1024x768 800x600 640x480
  EndSubSection
  SubSection Display
Depth   15
Modes   1024x768 800x600 640x480
  EndSubSection
  ...
EndSection

(Well, actually, I have only 1024x768, since that is what the monitor
supports.)

What would be nice would be if:

 1) you could have only one line for all the depth/bpp, or the
    possibility to have multiple depth/bpp per Display subsection.
 
 2) a way to tell the framebuffer/viewport sizes for each supported
resolution, something like :

  SubSection Display
Mode 1024x768
Viewport 0 0 1024 768
Viewport 0 0 800 600
Viewport 0 0 640 480
  EndSubSection

or maybe 

  SubSection Display
Framebuffer 1024 768
Modes 1024x768 800x600 640x480
  EndSubSection

Which would tell the driver that we only support an outgoing resolution
of 1024x768, but that framebuffer resolutions of 1024x768, 800x600, and
640x480 are ok, and that we should scale from them to the 1024x768 one.
Maybe the syntax is not the best, but you get the idea.

I could do this by using an outgoing resolution size in the device specific
section, but this would not work best, since all the logic doing the
mode setting now is done for the resolution in the display setting.

I strongly advocate that you take into account such a separation of the
outgoing resolution and the framebuffer size in any future configuration
scheme.

 Right.  I've only seen downscaling, and it's possible that I'm wrong
 about it happening in the monitor rather than in the video hardware.

I think it is happening in the video hardware, at least for DVI
connections.

 BTW, do you know of any docs on DVI and LCD monitors? My monitor
 refuses to go to sleep when I am using the DVI channel, while it works
 fine with the VGA channel.
 
 I haven't seen any docs on those.  If there are related VESA specs, I
 should have them somewhere.

Mmm, I will also be looking.

 That said, another thing that would be nice would be the possibility to
 specify one Display section for every depth, instead of just copying it
 for each supported depth. Do many people, in these times of 64+ MB of
 onboard memory, specify different resolutions for different depths?
 
 I think it'd be useful to be able to specify parameters that apply to
 all depths, but still allow a depth-specific subsection to override.
 That'd be a useful extension of the 

Re: Multiple video consoles

2003-03-03 Thread David Dawes
On Mon, Mar 03, 2003 at 10:31:56AM +0100, Sven Luther wrote:
On Sun, Mar 02, 2003 at 11:28:24PM -0500, David Dawes wrote:
 On Sat, Mar 01, 2003 at 10:34:20AM +0100, Sven Luther wrote:
 On Fri, Feb 28, 2003 at 04:19:37PM -0500, David Dawes wrote:
  Are you speaking about the current 4.3.0 or the stuff you are working on ?
  
  What I was working on.
 
 Ok, ...
 
 I take it, there will be a 4.4.0 before 5.0 ?
 
 Most likely.

:))

  of scaling are either handled by a hardware scaler (that may or may not
  be visible to the XFree86 server and user), or by having something in
  XFree86 that keeps a second copy of the image that is scaled in software.
 
 Mmm, you are speaking of a hardware scaler in the LCD monitor?
 
 I'm talking about a scaler anywhere between where the resolution is
 programmed and the physical display.  For laptop-type displays it's easy
 -- it's in the video hardware.  For digital connections to LCD displays
 I'm not sure which side of the DVI connector it's normally located.  I
 just know that I've seen it work in that case without needing to do
 anything special as a user or as a driver writer.  I don't know whether
 the cases I've seen are common or unusual.  I haven't played with enough
 of these HW combinations to know.

Mmm, it may be something special in the BIOS of those laptops, or even
some hardwired functionality, but in my case I need to program it by
hand, and I guess other chips will need this too, so we may as well
think about it.

 Well, from my experience (I have a Sony SDM-X52, with both a DVI
 connector and a standard VGA connector) this doesn't seem to happen. If
 I request a mode lower than what the LCD can display, I get only
 garbage, at least on the DVI channel. I believe the VGA channel can do
 more advanced things, but I didn't successfully use them. On the other
 hand, my graphics hardware can do arbitrary scaling of the framebuffer
 before passing it to the monitor, but I have to program it explicitly.
 I guess that this is used by the BIOS at startup to convert the 640x480
 text mode to something my monitor supports, since the fonts appear a
 bit blurry.
 
 It sounds like that in current cases the driver should handle this type
 of scaling transparently.  The only extension that might be relevant is
 to allow the viewport to be set to a range of sizes rather than discrete
 mode sizes (as happens now).

Well, i have to calculate the scaling factor from the source
(framebuffer) width/height and the destination (mode resolution)
width/height, that is why i ask for a more granular handling of this.
Currently, you can do :

Section Screen

  ...

  SubSection Display
Depth   8
Modes   1024x768 800x600 640x480
  EndSubSection
  SubSection Display
Depth   15
Modes   1024x768 800x600 640x480
  EndSubSection
  ...
EndSection

(Well, actually, i have only 1024x768, since that is what the monitor
supports.)
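The scaling-factor calculation described above (destination size over
source size) might look like this in a driver; a minimal sketch, where
`ScaleFactors`, `compute_scale`, and the 16.16 fixed-point format are
assumptions for illustration, not taken from any actual XFree86 driver:

```c
#include <assert.h>

/*
 * Hypothetical sketch: derive horizontal and vertical scale factors
 * from the source (framebuffer) size and the destination (mode) size,
 * as 16.16 fixed-point values such as a hardware scaler register might
 * take.  The names and format are illustrative, not from a real driver.
 */
typedef struct {
    unsigned int x;   /* horizontal scale factor, 16.16 fixed point */
    unsigned int y;   /* vertical scale factor, 16.16 fixed point */
} ScaleFactors;

static ScaleFactors
compute_scale(unsigned int src_w, unsigned int src_h,
              unsigned int dst_w, unsigned int dst_h)
{
    ScaleFactors s;
    s.x = (dst_w << 16) / src_w;  /* e.g. 1024/800 = 1.28 */
    s.y = (dst_h << 16) / src_h;  /* e.g. 768/600 = 1.28 */
    return s;
}
```

Scaling an 800x600 framebuffer up to a 1024x768 panel gives the same
1.28 factor on both axes, so the image fills the screen without
distortion.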

What would be nice, would be if :

 1) you could have only one line for all the depth/bpp, or a possibility
to have multiple depth/bpp per display section.

Yep.

 2) a way to tell the framebuffer/viewport sizes for each supported
resolution, something like :

  SubSection Display
Mode 1024x768
Viewport 0 0 1024 768
Viewport 0 0 800 600
Viewport 0 0 640 480
  EndSubSection

or maybe 

  SubSection Display
Framebuffer 1024 768
Modes 1024x768 800x600 640x480
  EndSubSection

Which would tell the driver that we only support an outgoing resolution
of 1024x768, but that framebuffer resolutions of 1024x768, 800x600, and
640x480 are ok, and that it should scale from them to the 1024x768 one.
Maybe the syntax is not the best, but you get the idea.
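A driver honoring such a hypothetical Framebuffer keyword might, on each
mode switch, compare the requested framebuffer size against the fixed
outgoing size and decide whether to engage its scaler; a sketch with
invented names, illustrating the idea rather than any real driver API:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Illustrative only: out_w/out_h is the single supported outgoing
 * resolution (1024x768 in the example above), fb_w/fb_h the framebuffer
 * resolution the user switched to.  The scaler is engaged whenever the
 * two differ.
 */
static bool
scaler_needed(unsigned int fb_w, unsigned int fb_h,
              unsigned int out_w, unsigned int out_h)
{
    return fb_w != out_w || fb_h != out_h;
}
```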

Actually, I don't understand what you're trying to do that can't be done
already.  The user shouldn't care that the panel is 1024x768 (other than
that it's the max available mode resolution).  The driver should figure
that out and take care of scaling the user's 800x600 mode request to
the physical output size of 1024x768.  As a user, when I specify 800x600,
I just want the physical screen to display an 800x600 pixel area on the
full screen.  I don't care if it's an 800x600 physical output mode or
if the 800x600 is scaled to some other physical output resolution.

The only new feature I see is that arbitrary scaling allows a potentially
much finer set of mode sizes than we're currently used to, and this
would be very useful for allowing real-time zooming and tracking windows
(including resizes).  That can be done with most modern CRTs too (with
some horizontal granularity limits), but I imagine that zooming would
be more seamless with the scaler method than with CRT resolution
changes.

I could do this by using an outgoing resolution size in the device-specific
section, but that would not work well, since all the mode-setting logic
is currently driven by the resolution in the display section.

Can the driver query the panel's size?  

Re: Multiple video consoles

2003-03-02 Thread David Dawes
On Sat, Mar 01, 2003 at 10:34:20AM +0100, Sven Luther wrote:
On Fri, Feb 28, 2003 at 04:19:37PM -0500, David Dawes wrote:
 Are you speaking about the current 4.3.0 or the stuff you are working on ?
 
 What I was working on.

Ok, ...

I take it, there will be a 4.4.0 before 5.0 ?

Most likely.

 Well, i am not sure i follow you completely here, but my interest in
 scaling is :
 
   o having one monitor display the same framebuffer area as the other,
   but in another resolution. Like when your laptop's LCD screen can only
   display 1024x768 but you have to do a presentation on a 800x600 video
   projector. You set the framebuffer to 800x600 to have maximum
   quality on the video projector, and scale it to 1024x768 on the
   mirrored display of your LCD screen. 
 
   o displaying lower video modes than what the LCD screen can display
   (or bigger modes also).
 
 The type of scaling that comes for free is when your LCD displays
 1024x768 and the video projector displays 800x600, but that 800x600 is
 just a 800x600 pixel subset of the full 1024x768 desktop.  Other forms

That is not scaling, you just open a plain second viewport on the same
framebuffer.

 of scaling are either handled by a hardware scaler (that may or may not
 be visible to the XFree86 server and user), or by having something in
 XFree86 that keeps a second copy of the image that is scaled in software.

Mmm, you are speaking of a hardware scaler in the LCD monitor ? 

I'm talking about a scaler anywhere between where the resolution is
programmed and the physical display.  For laptop-type displays it's easy
-- it's in the video hardware.  For digital connections to LCD displays
I'm not sure which side of the DVI connector it's normally located.  I
just know that I've seen it work in that case without needing to do
anything special as a user or as a driver writer.  I don't know whether
the cases I've seen are common or unusual.  I haven't played with enough
of these HW combinations to know.

I am speaking about a hardware scaler in the video chip, and although
not all video chips have one, i guess some do and more will. Or else you
could just re-use the video overlay unit for it or whatever.

 A lot of chipsets that drive LCD displays do transparent scaling where
 the user and XFree86 server see a 800x600 mode, and the graphic hardware
 scales that to the 1024x768 physical LCD screen.

Well, from my experience (i have a Sony SDM-X52, with both a DVI
connector and a standard VGA connector) this doesn't seem to happen. If
i request a mode lower than what the LCD can display, i get only
garbage, at least on the DVI channel. I believe the VGA channel can do
more advanced things, but didn't successfully use them. On the other
hand, my graphic hardware can do arbitrary scaling of the framebuffer
before passing it to the monitor, but i have to program it explicitly. I
guess that this is used by the bios at startup to convert the 640x480
text mode to something my monitor supports, since the fonts appear a bit
blurry.

It sounds like that in current cases the driver should handle this type
of scaling transparently.  The only extension that might be relevant is
to allow the viewport to be set to a range of sizes rather than discrete
mode sizes (as happens now).

 These would be static scalings, and could be set by specifying for the 
 viewport, not only the x/y corner like it is done right now, but also
 the source height and width, the scaling would then be set to the ratio
 between the height/width of the destination over the source.
 
 Keep in mind LCD monitors can only do fixed resolution mostly and will
 become more and more predominant.
 
 Most of the current LCD monitors that I've seen can do built-in scaling
 so that they can display non-native resolutions transparently to the user.

Mmm, maybe my monitor can, but the documentation i have doesn't speak
about it, and anyway, it has quite limited frequency ranges. Also, this
precludes doing more advanced stuff like i say below, or upscaling
instead of downscaling.

Right.  I've only seen downscaling, and it's possible that I'm wrong
about it happening in the monitor rather than in the video hardware.

BTW, do you know of any docs on DVI and LCD monitors ? my monitor
refuses to go to sleep when i am using the DVI channel, while it works
fine with the VGA channel.

I haven't seen any docs on those.  If there are related VESA specs, I
should have them somewhere.

 Then there are dynamic viewports, similar to what matrox does for
 window zooming in their windows drivers (i have not seen this
 personally though). You could designate a window, and have it be used
 for the viewport of a second head. The second viewport would follow the
 window and scale it appropriately, including if the window is moved
 around or resized.
 
 I don't know how the Matrox driver works specifically, but if it allows
 arbitrary scaling it may use hardware scaling for the second viewport
 (like XVideo usually 

Re: Multiple video consoles

2003-03-02 Thread David Dawes
On Sat, Mar 01, 2003 at 10:52:08AM +0100, Sven Luther wrote:
On Fri, Feb 28, 2003 at 04:19:37PM -0500, David Dawes wrote:
 On Fri, Feb 28, 2003 at 09:04:06PM +0100, Sven Luther wrote:
 Are you speaking about the current 4.3.0 or the stuff you are working on ?
 
 What I was working on.

BTW, is the stuff you were working on accessible on a CVS branch or
something similar ?

No.

David
-- 
David Dawes
Release Engineer/Architect  The XFree86 Project
www.XFree86.org/~dawes
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Multiple video consoles

2003-03-01 Thread Sven Luther
On Fri, Feb 28, 2003 at 04:19:37PM -0500, David Dawes wrote:
 Are you speaking about the current 4.3.0 or the stuff you are working on ?
 
 What I was working on.

Ok, ...

I take it, there will be a 4.4.0 before 5.0 ?

 Well, i am not sure i follow you completely here, but my interest in
 scaling is :
 
   o having one monitor display the same framebuffer area as the other,
   but in another resolution. Like when your laptop's LCD screen can only
   display 1024x768 but you have to do a presentation on a 800x600 video
   projector. You set the framebuffer to 800x600 to have maximum
   quality on the video projector, and scale it to 1024x768 on the
   mirrored display of your LCD screen. 
 
   o displaying lower video modes than what the LCD screen can display
   (or bigger modes also).
 
 The type of scaling that comes for free is when your LCD displays
 1024x768 and the video projector displays 800x600, but that 800x600 is
 just a 800x600 pixel subset of the full 1024x768 desktop.  Other forms

That is not scaling, you just open a plain second viewport on the same
framebuffer.

 of scaling are either handled by a hardware scaler (that may or may not
 be visible to the XFree86 server and user), or by having something in
 XFree86 that keeps a second copy of the image that is scaled in software.

Mmm, you are speaking of a hardware scaler in the LCD monitor ? 

I am speaking about a hardware scaler in the video chip, and although
not all video chips have one, i guess some do and more will. Or else you
could just re-use the video overlay unit for it or whatever.

 A lot of chipsets that drive LCD displays do transparent scaling where
 the user and XFree86 server see a 800x600 mode, and the graphic hardware
 scales that to the 1024x768 physical LCD screen.

Well, from my experience (i have a Sony SDM-X52, with both a DVI
connector and a standard VGA connector) this doesn't seem to happen. If
i request a mode lower than what the LCD can display, i get only
garbage, at least on the DVI channel. I believe the VGA channel can do
more advanced things, but didn't successfully use them. On the other
hand, my graphic hardware can do arbitrary scaling of the framebuffer
before passing it to the monitor, but i have to program it explicitly. I
guess that this is used by the bios at startup to convert the 640x480
text mode to something my monitor supports, since the fonts appear a bit
blurry.

 These would be static scalings, and could be set by specifying for the 
 viewport, not only the x/y corner like it is done right now, but also
 the source height and width, the scaling would then be set to the ratio
 between the height/width of the destination over the source.
 
 Keep in mind LCD monitors can only do fixed resolution mostly and will
 become more and more predominant.
 
 Most of the current LCD monitors that I've seen can do built-in scaling
 so that they can display non-native resolutions transparently to the user.

Mmm, maybe my monitor can, but the documentation i have doesn't speak
about it, and anyway, it has quite limited frequency ranges. Also, this
precludes doing more advanced stuff like i say below, or upscaling
instead of downscaling.

BTW, do you know of any docs on DVI and LCD monitors ? my monitor
refuses to go to sleep when i am using the DVI channel, while it works
fine with the VGA channel.

 Then there are dynamic viewports, similar to what matrox does for
 window zooming in their windows drivers (i have not seen this
 personally though). You could designate a window, and have it be used
 for the viewport of a second head. The second viewport would follow the
 window and scale it appropriately, including if the window is moved
 around or resized.
 
 I don't know how the Matrox driver works specifically, but if it allows
 arbitrary scaling it may use hardware scaling for the second viewport
 (like XVideo usually uses) to achieve this efficiently.  I don't know
 how it handles partially obscured or partially off-screen windows.
 
 Tracking fully visible mode-line sized windows in a second viewport is
 the easiest subset of this whole problem to implement.  This is the part
 that could easily be implemented in 4.x without a lot of work.

Yes, although if we could add a source w/h to the viewport option, we
could do arbitrary hardware scaling too (static scaling only, though).

And if the hardware can do it, why limit ourselves.

That said, another thing that would be nice, would be the possibility to
specify one display section for every depth, instead of just copying it
for each supported depth. Do many people in these times of 64+MB of
onboard memory specify different resolutions for different depths ?

 And we would do dual head, not like now with splitting the framebuffer
 into two zones, one for each head, but by sharing the same framebuffer
 between both screens, this would give free dual head DRI also, if the 3D
 engine supports such big displays. Overlay and cursor still would need
 to be 

Re: Multiple video consoles

2003-03-01 Thread Sven Luther
On Fri, Feb 28, 2003 at 04:19:37PM -0500, David Dawes wrote:
 On Fri, Feb 28, 2003 at 09:04:06PM +0100, Sven Luther wrote:
 Are you speaking about the current 4.3.0 or the stuff you are working on ?
 
 What I was working on.

BTW, is the stuff you were working on accessible on a CVS branch or
something similar ?

Friendly,

Sven Luther
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Multiple video consoles. What happened to multi-user?

2003-03-01 Thread Yitzhak Bar Geva

The discussion thread
has focused on multi-head for a single user. What about plans for multi-user? Matrox
and Nvidia have four-port cards. Why couldn't a single system (maybe multi-processor)
support eight simultaneous users if it had two of those cards and USB input
devices? What would be the preferred method of directing developments towards
this goal?

Re: Multiple video consoles

2003-03-01 Thread Andrew C Aitchison
On Sat, 1 Mar 2003, Sven Luther wrote:

 That said, another thing that would be nice, would be the possibility to
 specify one display section for every depth, instead of just copying it
 for each supported depth. Do many people in these times of 64+MB of
 onboard memory specify different resolutions for different depths ?

I don't know if it makes sense from a code point of view, but from
the config file side, I'd suggest allowing a Display subsection
to have multiple Depth qualifiers (possibly FbBpp and Visual too).
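As an illustration, such a merged section might look like this
(hypothetical syntax, not accepted by the current config parser):

```
SubSection Display
  Depth   8 15 16 24
  Modes   1024x768 800x600 640x480
EndSubSection
```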

-- 
Dr. Andrew C. Aitchison Computer Officer, DPMMS, Cambridge
[EMAIL PROTECTED]   http://www.dpmms.cam.ac.uk/~werdna

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Multiple video consoles

2003-02-28 Thread David Dawes
On Thu, Feb 27, 2003 at 10:11:34AM +0100, Sven Luther wrote:

BTW, Dawes, what are the plans for post 4.3.0 XFree86 ? This kind of
thing would most assuredly go into the thinking about 5.x, but some of
the stuff here, and about the dual-head/one FB (which would allow DRI on
dual head cards) could also be implemented in the current setting.

We definitely want to discuss the dual-seat possibilities in the context
of 5.0.

I agree that dual-head/one FB (single seat) can be handled in the current
4.x environment.  Several 3rd party drivers already handle this in one
way or another.  I did some configuration and infrastructure related
work on this for a project that got cut.  One of the things this handled
was the configuration for multiple monitor viewports being attached to
a single screen.  Now that 4.3.0 is out, I'd like to go back and finish
that off, and modify one of the existing dual CRTC drivers to make use
of it.

David
-- 
David Dawes
Release Engineer/Architect  The XFree86 Project
www.XFree86.org/~dawes
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Multiple video consoles

2003-02-28 Thread Sven Luther
On Fri, Feb 28, 2003 at 11:59:48AM -0500, David Dawes wrote:
 On Thu, Feb 27, 2003 at 10:11:34AM +0100, Sven Luther wrote:
 
 BTW, Dawes, what are the plans for post 4.3.0 XFree86 ? This kind of
 thing would most assuredly go into the thinking about 5.x, but some of
 the stuff here, and about the dual-head/one FB (which would allow DRI on
 dual head cards) could also be implemented in the current setting.
 
 We definitely want to discuss the dual-seat possibilities in the context
 of 5.0.
 
 I agree that dual-head/one FB (single seat) can be handled in the current
 4.x environment.  Several 3rd party drivers already handle this in one
 way or another.  I did some configuration and infrastructure related
 work on this for a project that got cut.  One of the things this handled
 was the configuration for multiple monitor viewports being attached to
 a single screen.  Now that 4.3.0 is out, I'd like to go back and finish
 that off, and modify one of the existing dual CRTC drivers to make use
 of it.

There was some discussion about this on the DRI mailing list, and i am
also currently writing a driver which would need this kind of thing.

I guess that you can tell the driver via the device section that it is
to share the Framebuffer between monitors and that you can then use the
viewport on the display subsection to set the viewport to wherever you
want.

Now, suppose you want one of the viewports to do some scaling too,
either because your LCD monitor is fixed size and a program wants to run
at another size, or to have one viewport display a zoomed part of the
other. I think this is not currently possible, but i may be wrong. Also
it would be nice if we could follow a window with a viewport, and adjust
the zoom factor accordingly.

BTW, is it normal that SDL games requesting 640x480 try to set it even
if i only specified 1024x768 in the monitor modes, and thus give blank
screens ? I observed this both in the driver i am working on and in the
vesa driver (using frozen-bubbles and solarwolf in fullscreen mode).

Friendly,

Sven Luther
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Multiple video consoles

2003-02-28 Thread Sven Luther
On Fri, Feb 28, 2003 at 02:06:35PM -0500, David Dawes wrote:
 On Fri, Feb 28, 2003 at 06:27:20PM +0100, Sven Luther wrote:
 On Fri, Feb 28, 2003 at 11:59:48AM -0500, David Dawes wrote:
  On Thu, Feb 27, 2003 at 10:11:34AM +0100, Sven Luther wrote:
  
  BTW, Dawes, what are the plans for post 4.3.0 XFree86 ? This kind of
  thing would most assuredly go into the thinking about 5.x, but some of
  the stuff here, and about the dual-head/one FB (which would allow DRI on
  dual head cards) could also be implemented in the current setting.
  
  We definitely want to discuss the dual-seat possibilities in the context
  of 5.0.
  
  I agree that dual-head/one FB (single seat) can be handled in the current
  4.x environment.  Several 3rd party drivers already handle this in one
  way or another.  I did some configuration and infrastructure related
  work on this for a project that got cut.  One of the things this handled
  was the configuration for mutiple monitor viewports being attached to
  a single screen.  Now that 4.3.0 is out, I'd like to go back and finish
  that off, and modify one of the existing dual CRTC drivers to make use
  of it.
 
 There was some discussion about this on the DRI mailing list, and i am
 also currently writing a driver which would need this kind of thing.
 
 I guess that you can tell the driver via the device section that it is
 to share the Framebuffer between monitors and that you can then use the
 viewport on the display subsection to set the viewport to wherever you
 want.
 
 The static configuration handles associating multiple monitors, sets of
 modes, initial viewport positioning, etc with a single Device/Screen.

Are you speaking about the current 4.3.0 or the stuff you are working on ?

 Now, if you want one of the viewports to do some scaling too, either
 because your LCD monitor is fixed size, and a program want to run in
 another size, or for having one viewport displaying a zoomed part of the
 other or whatever. I think this is not currently possible, but i may be
 wrong. Also it would be nice if we could follow a window with a
 viewport, and adjust the zoom factor accordingly.
 
 Mode switching would work for multiple monitors, and they could be made
 to switch independently.  Handling this switching, and providing useful
 run-time control over the origin of the viewports is the next step after
 the static configuration.  It could be handled with some combination of
 hot keys, pointer scrolling, and/or a control client.
 
 Are you also interested in doing scaling other than what you get for
 free by having one monitor display at a lower resolution?

Well, i am not sure i follow you completely here, but my interest in
scaling is :

  o having one monitor display the same framebuffer area as the other,
  but in another resolution. Like when your laptop's LCD screen can only
  display 1024x768 but you have to do a presentation on a 800x600 video
  projector. You set the framebuffer to 800x600 to have maximum
  quality on the video projector, and scale it to 1024x768 on the
  mirrored display of your LCD screen. 

  o displaying lower video modes than what the LCD screen can display
  (or bigger modes also).

These would be static scalings, and could be set by specifying for the 
viewport, not only the x/y corner like it is done right now, but also
the source height and width, the scaling would then be set to the ratio
between the height/width of the destination over the source.

Keep in mind LCD monitors can only do fixed resolution mostly and will
become more and more predominant.

Then there are dynamic viewports, similar to what matrox does for window
zooming in their windows drivers (i have not seen this personally
though). You could designate a window, and have it be used for the
viewport of a second head. The second viewport would follow the window
and scale it appropriately, including if the window is moved around or
resized.

And we would do dual head not like now, with splitting the framebuffer
into two zones, one for each head, but by sharing the same framebuffer
between both screens; this would give free dual-head DRI also, if the 3D
engine supports such big displays. Overlay and cursor would still need
to be done separately.

 BTW, is it normal that SDL games requesting 640x480 try to set it even
 if i did only specify 1024x768 in the monitor modes, and thus give blank
 screens ? I observed this both in the driver i am working on and in the
 vesa driver (using frozen-bubbles and solarwolf in fullscreen mode).
 
 I've seen games that just put a 640x480 window in one corner of the
 1024x768 screen when there's no 640x480 monitor mode available.

Well, apparently SDL will default to the next higher supported mode, but
something seems to be broken there. Still, X should not try to set a
mode not declared in the XF86Config file, whatever the app asks for.

Friendly,

Sven Luther
___
Devel mailing list
[EMAIL 

Re: Multiple video consoles

2003-02-28 Thread David Dawes
On Fri, Feb 28, 2003 at 09:04:06PM +0100, Sven Luther wrote:
On Fri, Feb 28, 2003 at 02:06:35PM -0500, David Dawes wrote:
 On Fri, Feb 28, 2003 at 06:27:20PM +0100, Sven Luther wrote:
 On Fri, Feb 28, 2003 at 11:59:48AM -0500, David Dawes wrote:
  On Thu, Feb 27, 2003 at 10:11:34AM +0100, Sven Luther wrote:
  
  BTW, Dawes, what are the plans for post 4.3.0 XFree86 ? This kind of
  thing would most assuredly go into the thinking about 5.x, but some of
  the stuff here, and about the dual-head/one FB (which would allow DRI on
  dual head cards) could also be implemented in the current setting.
  
  We definitely want to discuss the dual-seat possibilities in the context
  of 5.0.
  
  I agree that dual-head/one FB (single seat) can be handled in the current
  4.x environment.  Several 3rd party drivers already handle this in one
  way or another.  I did some configuration and infrastructure related
  work on this for a project that got cut.  One of the things this handled
  was the configuration for multiple monitor viewports being attached to
  a single screen.  Now that 4.3.0 is out, I'd like to go back and finish
  that off, and modify one of the existing dual CRTC drivers to make use
  of it.
 
 There was some discussion about this on the DRI mailing list, and i am
 also currently writing a driver which would need this kind of thing.
 
 I guess that you can tell the driver via the device section that it is
 to share the Framebuffer between monitors and that you can then use the
 viewport on the display subsection to set the viewport to wherever you
 want.
 
 The static configuration handles associating multiple monitors, sets of
 modes, initial viewport positioning, etc with a single Device/Screen.

Are you speaking about the current 4.3.0 or the stuff you are working on ?

What I was working on.

 Now, if you want one of the viewports to do some scaling too, either
 because your LCD monitor is fixed size, and a program want to run in
 another size, or for having one viewport displaying a zoomed part of the
 other or whatever. I think this is not currently possible, but i may be
 wrong. Also it would be nice if we could follow a window with a
 viewport, and adjust the zoom factor accordingly.
 
 Mode switching would work for multiple monitors, and they could be made
 to switch independently.  Handling this switching, and providing useful
 run-time control over the origin of the viewports is the next step after
 the static configuration.  It could be handled with some combination of
 hot keys, pointer scrolling, and/or a control client.
 
 Are you also interested in doing scaling other than what you get for
 free by having one monitor display at a lower resolution?

Well, i am not sure i follow you completely here, but my interest in
scaling is :

  o having one monitor display the same framebuffer area as the other,
  but in another resolution. Like when your laptop's LCD screen can only
  display 1024x768 but you have to do a presentation on a 800x600 video
  projector. You set the framebuffer to 800x600 to have maximum
  quality on the video projector, and scale it to 1024x768 on the
  mirrored display of your LCD screen. 

  o displaying lower video modes than what the LCD screen can display
  (or bigger modes also).

The type of scaling that comes for free is when your LCD displays
1024x768 and the video projector displays 800x600, but that 800x600 is
just a 800x600 pixel subset of the full 1024x768 desktop.  Other forms
of scaling are either handled by a hardware scaler (that may or may not
be visible to the XFree86 server and user), or by having something in
XFree86 that keeps a second copy of the image that is scaled in software.

A lot of chipsets that drive LCD displays do transparent scaling where
the user and XFree86 server see a 800x600 mode, and the graphic hardware
scales that to the 1024x768 physical LCD screen.

These would be static scalings, and could be set by specifying for the 
viewport, not only the x/y corner like it is done right now, but also
the source height and width, the scaling would then be set to the ratio
between the height/width of the destination over the source.

Keep in mind LCD monitors can only do fixed resolution mostly and will
become more and more predominant.

Most of the current LCD monitors that I've seen can do built-in scaling
so that they can display non-native resolutions transparently to the user.

Then there are dynamic viewports, similar to what matrox does for window
zooming in their windows drivers (i have not seen this personally
though). You could designate a window, and have it be used for the
viewport of a second head. The second viewport would follow the window
and scale it appropriately, including if the window is moved around or
resized.

I don't know how the Matrox driver works specifically, but if it allows
arbitrary scaling it may use hardware scaling for the second viewport
(like XVideo usually uses) to achieve this efficiently.  I 

Re: Multiple video consoles

2003-02-28 Thread Michel Dänzer
On Fre, 2003-02-28 at 21:04, Sven Luther wrote:
 On Fri, Feb 28, 2003 at 02:06:35PM -0500, David Dawes wrote:
  On Fri, Feb 28, 2003 at 06:27:20PM +0100, Sven Luther wrote:
 
  BTW, is it normal that SDL games requesting 640x480 try to set it even
  if i did only specify 1024x768 in the monitor modes, and thus give blank
  screens ? I observed this both in the driver i am working on and in the
  vesa driver (using frozen-bubbles and solarwolf in fullscreen mode).
  
  I've seen games that just put a 640x480 window in one corner of the
  1024x768 screen when there's no 640x480 monitor mode available.
 
 Well, apparently SDL will default to the next higher supported mode, but
 something seems to be broken there. Still, X should not try to set a
 mode not declared in the XF86Config file, whatever the app asks for.

Have you checked the log file? Maybe modes are added from DDC, by RandR,
... ?


-- 
Earthling Michel Dänzer (MrCooper)/ Debian GNU/Linux (powerpc) developer
XFree86 and DRI project member   /  CS student, Free Software enthusiast

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Multiple video consoles

2003-02-27 Thread Sven Luther
On Wed, Feb 26, 2003 at 09:40:18PM -0500, David Dawes wrote:
 On Wed, Feb 26, 2003 at 09:25:21PM +0100, Sven Luther wrote:
 On Wed, Feb 26, 2003 at 09:27:50PM +0200, Yitzhak Bar Geva wrote:
  Greatly encouraged by your response, thanks!
  
  Someone reported that X works with the multi-head console  support
  in Linux 2.5 kernels.
  
  I did some searching for multi-head consoles under 2.5 kernel, but
  didn't see anything. I would be highly appreciative if you could give me
  some pointers. As far as I could see, the Linux Console Project is
  defunct, but there is definitely work on multiple input devices going
  on.
 
 The correct place is the linux-fbdev project on sourceforge, especially
 their mailing list, James Simmons is the main developer of the new
 console code, and you have to look into the late 2.5.5x at least to get
 working stuff.
 
 That said, XFree86 people don't like fbdev much, and anyway, i don't
 
 Not necessarily :-)  I recently wrote an fbdev driver for Intel 830M
 and later chipsets (www.xfree86.org/~dawes/intelfb.html, and it should
 be in new -ac kernels).  It was fun doing some graphics stuff outside
 of XFree86 for a change.  It's basically a 2.4.x driver right now, and
 still needs to be ported to the latest 2.5.6x fbdev interfaces.

Well, the 2.5.x drivers (the new API) are a lot easier to write, since a
lot of common stuff has been abstracted. I have plans to write a HOWTO
or something once i get time for it again.

Friendly,

Sven Luther
___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel


Re: Multiple video consoles

2003-02-27 Thread Sven Luther
On Wed, Feb 26, 2003 at 05:12:32PM -0600, jkjellman wrote:
 Absolutely right, but ...
 
 This can be done if two servers are used.  The point I was making earlier in
 this thread was that using hacked kernels and servers is a bad thing.  If
 two consoles (including keyboards) could be operated on a single box, then
 two separate X servers could also be run.  The biggest problem is not the
 display, but rather that both X and Linux have a single console keyboard
 ingrained in their code.
 
 Any thoughts on how this might be circumvented using existing pieces?

The new fbdev API and console layer should handle this just fine; not
that I have personal experience with it, but that seemed to be the
intention from what I followed on the linux-fbdev mailing list.

Friendly,

Sven Luther


Re: Multiple video consoles

2003-02-27 Thread Sven Luther
On Wed, Feb 26, 2003 at 10:47:39PM +, Andrew C Aitchison wrote:
  How do you imagine this would work when both head are using a
  shared accel (XAA or DRI) engine ?
 
 I thought that the whole point of the kernel DRI was to stop multiple apps
 from fighting over the hardware. If the X server and several libGL apps

Well, it is more like multiple OpenGL X clients and the X drivers
themselves. I haven't heard of anyone running DRI on fbdev and X alongside
each other, and all the stuff is initialized inside the X driver anyway.

 can share the hardware, adding another X server should be possible.

It is not that simple: for one chip driving two screens, there are
some things that can only be done chip-wide, and others that can be done
separately on each head. Also, I think the DRI only bothers about the
3D engine, not really about things like mode setup and so on. And even
(on-chip) memory management is not (yet) well separated. There is a
proposal for DRI about that from Ian Romanick, but it only concerns
the DRI, and it is not clear if the OS memory manager can be made to
work with it also, or how.

 For it to work nicely the proposed extension to RandR which allows the 
 frame-buffer to be re-arranged (remember that we have dropped on the fly
 trade off between depth and number of pixels from 4.3) would help,

Mmm, there is another discussion on the DRI mailing list about dual head
with a single framebuffer, where I think RandR could be used to do
on-the-fly head reorganization. But again, I don't really believe that
this would help if we plan to have two separate users, one on each
head/seat/whatever.

 and I think we would want something (probably fbdev) to share out the 
 frame-buffer.

The fbdev and the drm would work, I think. You would use the fbdev for
mode setting and such, and the drm for acceleration; the fbdev does not
have enough logic for it right now. But again, the fbdev and the drm don't
cooperate very well, especially since the drm is initialized from the X
driver.

 I suppose we could go the other way, and do two seats
 within one X server.

Is this possible? Not currently, I guess, but it is a feature that has
been asked for for some time, for doing cheap terminals: instead of having
one cheap box drive one terminal, you would drive two with it, thus almost
halving the cost. That said, if one head crashes, the other goes too.

 I'd want one seat to be called say machine:0 and the other machine:1
 ie listen on two sockets/ports.
 This would definitely be a case for two pointers and two sets of focus
 (which people seem to want for other reasons).
 Would the window scheduling be good enough to ensure that one seat
 can't consume all the cycles ?
 I'd be particularly worried that information could leak between seat.
 Do we use separate threads (or processes) for each seat;
 someone recently mentioned that the server isn't thread-safe.
 Conceptually I feel that all this should be left to the kernel,
 and we should run a separate X server for each seat.

Lot of good questions ...

BTW, Dawes, what are the plans for post-4.3.0 XFree86? This kind of
thing would most assuredly go into the thinking about 5.x, but some of
the stuff here, and the dual-head/one-FB work (which would allow DRI on
dual-head cards), could also be implemented in the current setting.

Friendly,

Sven Luther


Re: Multiple video consoles

2003-02-27 Thread Aivils . Stoss
Hi, all

For a long time I have maintained a Linux 2.4.xx kernel tree which
supports multiple consoles.

The basic principle: the Linus-tree kernel has 64 virtual consoles
(== virtual terminals), all accessible by one user.

The tuned 2.4.xx-backstreet-ruby kernel has the same 64 virtual consoles,
but one user can use only a range of them. I currently use different
terminology: one virtual terminal (VT) == 8 virtual consoles (VCs), and
each VT may be bound to an independent keyboard. A normal (non-root) user
can access only the VCs inside their VT; root can access all 64 VCs.

XFree86 is a suid-root process, so it can access any VC. Thus XFree86
with the vtXX parameter can choose the right keyboard, if multiple VTs
(of 8 VCs each) exist.

Currently 2.4.xx-backstreet-ruby supports only one text-mode VGA console,
but XFree86 does not ask for a text-mode console. We can use a stealth
DUMMY console, which emulates an additional VT and is bound to additional
keyboards.

files:
http://startx.times.lv/

defunct project:
http://linuxconsole.sf.net
linuxconsole.bkbits.com

partially working project, with the same console code:
fbdev.bkbits.com

Aivils Stoss

Please reply to me directly as well; I do not consider myself an XFree86
developer.

P.S.
Why should I have to patch the PCI handling of XFree86? Or: why does
XFree86 search out and freeze innocent VGA adapters?

I am just re-quoting:
http://sourceforge.net/mailarchive/message.php?msg_id=2907175
 does anyone know why you'd want this kind of locking going on anyway? Is
 it to avoid two X servers trying to drive the same head?
 Don't know.

I'd guess it has to do with the brain-deadness of older VGA-compatible
PCI/AGP hardware. There might be no other way to init the chips (change
the mode, or even change a single palette entry) than to go through the
VGA-compatible I/O ports. This range is legacy-address-decoded by the PCI
devices. Since these legacy I/O ranges cannot be relocated (unlike normal
MMIO in PCI), only one PCI device in the system may have its legacy VGA
address decoders enabled at a time. This means that for most older
hardware, one needs to specifically disable the I/O decoders on _all_
other VGA-compatible PCI devices for the duration of programming the one
that is really required. If more than one PCI device were to attempt to
decode the same legacy I/O range (0x3d0 and so on), serious brain damage
would occur (it could prompt a #SERR on PCI). Only recent graphics chips
may be programmed purely through MMIO, and even then they might require
some short setup before going into that mode.

For (at least) this reason, xf86 has no choice but to disable all I/O
decoders on all VGA-compatible devices, and it goes a bit further by
disabling all VGA-compatible PCI devices that it will not drive, just to
be on the safe side.

Unless there is some system-provided common arbitration for these legacy
ranges, this is the only way (from xf86's point of view). The right place
for Linux would be the kernel, although this might upset some xf86 folk,
since their code also does a lot of other stuff as well (like running the
BIOS init code from previously disabled PCI display hardware to get it up
and running, or DDC probing where otherwise not possible).

In my opinion a more generic resource manager for this would be nice in
the kernel, but at the moment xf86 assumes (rightly so) that it will be
the only application doing graphics stuff. This is difficult to solve
(although, in my opinion, possible).

Feel free to correct me if you feel the technical details are inaccurate;
I'm currently trying to understand the lower levels of PCI and AGP, and
don't claim to be an expert on them.

Aleksandr Koltsoff




Re: Multiple video consoles

2003-02-26 Thread jkjellman
Yitzhak,

I too am interested in this. I have only seen hacks that require two X
servers (one normal and one modified) plus a modified kernel. This is not
a very pretty sight, to say the least.

Please copy me if you receive any private replies, as it sounds like we
are looking for the same thing.

Take care,
KJohn

  - Original Message -
  From: Yitzhak Bar Geva
  To: [EMAIL PROTECTED]
  Sent: Wednesday, February 26, 2003 10:03 AM
  Subject: Multiple video consoles

  What is the status of simultaneous multiple video console operation for
  full multiuser X on one machine?


Re: Multiple video consoles

2003-02-26 Thread Dr Andrew C Aitchison
On Wed, 26 Feb 2003, Yitzhak Bar Geva wrote:

 What is the status of simultaneous multiple video console operation for
 full multiuser X on one machine?

Someone reported that X works with the multi-head console support
in Linux 2.5 kernels.

As far as I am concerned, that is the right way to go:
get multi-heads working on the console, then run X on top of that.

-- 
Dr. Andrew C. Aitchison Computer Officer, DPMMS, Cambridge
[EMAIL PROTECTED]   http://www.dpmms.cam.ac.uk/~werdna



RE: Multiple video consoles

2003-02-26 Thread Yitzhak Bar Geva
Greatly encouraged by your response, thanks!

 Someone reported that X works with the multi-head console support
 in Linux 2.5 kernels.

I did some searching for multi-head consoles under 2.5 kernel, but
didn't see anything. I would be highly appreciative if you could give me
some pointers. As far as I could see, the Linux Console Project is
defunct, but there is definitely work on multiple input devices going
on.
Yitzhak

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf
Of Dr Andrew C Aitchison
Sent: Wednesday, February 26, 2003 8:40 PM
To: [EMAIL PROTECTED]
Subject: Re: Multiple video consoles

On Wed, 26 Feb 2003, Yitzhak Bar Geva wrote:

 What is the status of simultaneous multiple video console operation
for
 full multiuser X on one machine?

Someone reported that X works with the multi-head console  support
in Linux 2.5 kernels.

As far as I am concerned, that is the right way to go:
get multi-heads working on the console, then run X on top of that.

-- 
Dr. Andrew C. Aitchison Computer Officer, DPMMS, Cambridge
[EMAIL PROTECTED]   http://www.dpmms.cam.ac.uk/~werdna



Re: Multiple video consoles

2003-02-26 Thread Sven Luther
On Wed, Feb 26, 2003 at 06:40:07PM +, Dr Andrew C Aitchison wrote:
 On Wed, 26 Feb 2003, Yitzhak Bar Geva wrote:
 
  What is the status of simultaneous multiple video console operation for
  full multiuser X on one machine?
 
 Someone reported that X works with the multi-head console  support
 in Linux 2.5 kernels.
 
 As far as I am concerned, that is the right way to go:
 get multi-heads working on the console, then run X on top of that.

Does it really work? With the 2.4 multi-headed console, X blanks the
second head when it launches, even if I don't display anything on that
head. I tried tailing /var/log/XFree86.0.log on it, but to no avail.

BTW, I suppose you mean dual head as in one X server on one head, and
another X server (with another user, keyboard and mouse) on the second
head. How do you imagine this would work when both heads are using a
shared accel (XAA or DRI) engine?

Friendly,

Sven Luther


Re: Multiple video consoles

2003-02-26 Thread Sven Luther
On Wed, Feb 26, 2003 at 09:27:50PM +0200, Yitzhak Bar Geva wrote:
 Greatly encouraged by your response, thanks!
 
 Someone reported that X works with the multi-head console  support
 in Linux 2.5 kernels.
 
 I did some searching for multi-head consoles under 2.5 kernel, but
 didn't see anything. I would be highly appreciative if you could give me
 some pointers. As far as I could see, the Linux Console Project is
 defunct, but there is definitely work on multiple input devices going
 on.

The correct place is the linux-fbdev project on SourceForge, especially
their mailing list; James Simmons is the main developer of the new
console code, and you have to look at the late 2.5.5x kernels at least to
get working stuff.

That said, XFree86 people don't like fbdev much, and anyway, I don't
think you can handle the dual head/one accel engine this way.

Friendly,

Sven Luther


Re: Multiple video consoles

2003-02-26 Thread jkjellman
Absolutely right, but ...

This can be done if two servers are used.  The point I was making earlier
in this thread was that using hacked kernels and servers is a bad thing.
If two consoles (including keyboards) could be operated on a single box,
then two separate X servers could also be run.  The biggest problem is not
the display, but rather that both X and Linux have a single console
keyboard ingrained in their code.

Any thoughts on how this might be circumvented using existing pieces?

Take care,
KJohn

- Original Message -
From: Sven Luther [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, February 26, 2003 2:25 PM
Subject: Re: Multiple video consoles


 On Wed, Feb 26, 2003 at 09:27:50PM +0200, Yitzhak Bar Geva wrote:
  Greatly encouraged by your response, thanks!
 
  Someone reported that X works with the multi-head console  support
  in Linux 2.5 kernels.
 
  I did some searching for multi-head consoles under 2.5 kernel, but
  didn't see anything. I would be highly appreciative if you could give me
  some pointers. As far as I could see, the Linux Console Project is
  defunct, but there is definitely work on multiple input devices going
  on.

 The correct place is the linux-fbdev project on sourceforge, especially
 their mailing list, James Simmon is the main developper of the new
 console code, and you have to look into the late 2.5.5x at least to get
 working stuff.

 That said, XFree86 people don't like fbdev much, and anyway, i don't
 think you can handle the dual head/one accel engine this way.

 Friendly,

 Sven Luther



Re: Multiple video consoles

2003-02-26 Thread David Dawes
On Wed, Feb 26, 2003 at 09:25:21PM +0100, Sven Luther wrote:
On Wed, Feb 26, 2003 at 09:27:50PM +0200, Yitzhak Bar Geva wrote:
 Greatly encouraged by your response, thanks!
 
 Someone reported that X works with the multi-head console  support
 in Linux 2.5 kernels.
 
 I did some searching for multi-head consoles under 2.5 kernel, but
 didn't see anything. I would be highly appreciative if you could give me
 some pointers. As far as I could see, the Linux Console Project is
 defunct, but there is definitely work on multiple input devices going
 on.

 The correct place is the linux-fbdev project on sourceforge, especially
 their mailing list, James Simmon is the main developper of the new
 console code, and you have to look into the late 2.5.5x at least to get
 working stuff.

 That said, XFree86 people don't like fbdev much, and anyway, i don't

Not necessarily :-)  I recently wrote an fbdev driver for Intel 830M
and later chipsets (www.xfree86.org/~dawes/intelfb.html, and it should
be in new -ac kernels).  It was fun doing some graphics stuff outside
of XFree86 for a change.  It's basically a 2.4.x driver right now, and
still needs to be ported to the latest 2.5.6x fbdev interfaces.

 think you can handle the dual head/one accel engine this way.

David
-- 
David Dawes
Release Engineer/Architect  The XFree86 Project
www.XFree86.org/~dawes