Firstname Lastname wrote:
> 
> I am pleased to see that I created such a commotion with my last few posts...
> a lot of good ideas are floating around, many of which I still need to
> respond to.
> 
> On the lighter side, I'd like to make a couple of comments about what I
> could imagine input devices being used for...
> 
> Imagine a computer with three monitors (not hard for me ;), two mice, and
> two keyboards.  Then imagine X running across all three of them as one big
> desktop.  I would propose that as I work with all three screens, using
> mouse 1 and keyboard 1, and a friend walks up, I should be able to hand
> him mouse 2 and keyboard 2, and he should be able to use a separate pointer
> for his mouse, select a different window than the one I am working in, and
> type on the same 3-screen array, just in a different window, while I work.
> 
> I have no idea what changes would be necessary in XGGI to be able to do
> that.

You would basically need to rewrite the X protocol. Further, Xlib's event
structure only knows about one keyboard and one mouse, not two or three,
nor about any other input devices.

In the Berlin project we are addressing these issues. Focus management and
event handling have been designed with arbitrary numbers and types of input
devices in mind. The idea is that you configure the server to synthesize
events out of atomic properties (bitsets, positions, values, toggles, etc.).
Then you create logical devices and logical events inside the server, which
are routed to whatever target you want.
This kind of thing can't simply be done by extending an existing (X) protocol;
the whole architecture must be constructed with this in mind.
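As a purely hypothetical sketch (none of these types are the actual Berlin
interfaces, just an illustration of the idea): raw devices report atomic
properties, the server composes them into logical events, and every logical
event carries the identity of the logical device it came from, so two
keyboards can drive two independent foci.

#include <cstdint>
#include <string>
#include <variant>
#include <vector>
#include <iostream>

// Atomic properties a physical device can report (assumed names).
using Toggle   = bool;                       // a key or button state
using Value    = double;                     // a slider or wheel
struct Position { double x, y; };            // a pointer location
using Property = std::variant<Toggle, Value, Position>;

// A logical event: which logical device it came from, plus its payload.
struct LogicalEvent {
    std::uint32_t device_id;                 // distinguishes keyboard 1 from 2
    std::string   kind;                      // e.g. "key", "motion"
    std::vector<Property> payload;
};

// The routing step: each logical device keeps its own focus target,
// so concurrent users do not steal each other's input.
struct Target { std::string name; };

Target &focus_of(std::uint32_t device_id)
{
    static Target targets[2] = {{"window A"}, {"window B"}};
    return targets[device_id % 2];           // toy routing policy
}

int main()
{
    LogicalEvent from_kbd1{0, "key",    {Toggle{true}}};
    LogicalEvent from_kbd2{1, "key",    {Toggle{true}}};
    LogicalEvent from_mouse2{1, "motion", {Position{10, 20}}};

    for (const auto &ev : {from_kbd1, from_kbd2, from_mouse2})
        std::cout << ev.kind << " from device " << ev.device_id
                  << " -> " << focus_of(ev.device_id).name << '\n';
}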

Regards,        Stefan
_______________________________________________________              
              
Stefan Seefeld
Departement de Physique
Universite de Montreal
email: [EMAIL PROTECTED]

_______________________________________________________

      ...I still have a suitcase in Berlin...
