On Mon, Oct 5, 2009 at 7:54 PM, Dave Airlie <[email protected]> wrote:
> This was in ajax's summary and I just wanted to raise the issues with
> this before anyone seriously started thinking about it.
>
> 1. We should not fail modesetting unless it's an extreme situation; we
> should endeavour to provide the remote client with as much information
> as it requires to pick a set of modes the GPU can actually set. If
> this is just a workaround for that case, then you should fix the RandR
> protocol to allow for smarter clients.
>
> 2. Multi-seat on a single card: what happens if we give a CRTC to each
> user, with the outputs partitioned between them? What if a single X
> server isn't running all the bits of the card, or doesn't have them
> all exposed?
>
> I'd be interested in the situations where something like this is
> actually required. I'm sure it can allow for smoother mode setting
> when someone swaps connectors, but that's probably not the common
> case.

Arron and I were thinking mainly of mode validation across multiple
monitors.  You may have two or more huge monitors connected to a
low-end card that can't drive both at full resolution due to bandwidth
limitations, but can drive one monitor at full resolution just fine.
So how do we deal with this?  Which modes on which monitors do we
reject?  Or do we keep them all and let one of the monitors flicker,
blink, or fail to come on due to underflow?

Alex
_______________________________________________
xorg-devel mailing list
[email protected]
http://lists.x.org/mailman/listinfo/xorg-devel
