I haven't ruled out the possibility of a faster interface than XML. The approach I try to take is to make it work correctly first and make it fast later. Given the portability issues just to initiate communication between two separate processes, that part can certainly be done in XML. The communication can continue in XML if that works well. If not, a later step would be to send an XML command that opens a new channel dedicated to faster IO between the processes. A desire for shared memory access was mentioned in the threads, so that would be an example of a channel that could be opened with an XML-based command.

One likely concern with XML speed is the case where the XML validator is run on every single message, but validation is optional. Decoding just the basics of XML (the delimiters) is fast. The choice to run the validator is like the choice you make when you access your bank account via a website with a security certificate: if you know the connection is already secure, you wouldn't need to require an additional security certificate for every piece of communication sent and received. The same principle applies to XML. If you already know the process on the other end has formatted the data correctly, then there is no need to run the validator.

I can say this: it wouldn't be slower than how LLSD is being used already.

Further, instead of an XML command to open a new channel, sockets allow one to dedicate the channel that is already open. So, communication could start out looking like this:

<?xml version="1.0"?>
<open dedicated="1" client="gesture"/>
gesture/nod
lookatPoint/100,100,100
gesture/yes
lookatPoint/110,110,100
lookatPoint/100,100,100
...

Notice that XML only starts the communication; the rest continues as the plain name/value messages you would expect from the quoted mail below.

Philip Rosedale wrote:
Do we need to use any sort of XML interpreter?  My preference would be simple UDP with the contents being a name/value pair:

gesture/nod
lookatPoint/100,100,100

etc.  Seems important that this be extremely lightweight and fast.  I'm a bit out of the loop, development-wise, so maybe I'm wrong.

P

Dzonatas Sol wrote:
Without many eyeballs on the code yet, it appears 
newview/llvoiceclient.h/cpp can pretty much be shallow-copied and used 
to get a head start on this.

* Personally, I would do this copy and replace the vivox specific IO 
with the standard XML interpreter (if not already done).
* Even if llgestureclient.h/cpp seems like the logical new name for it, 
I think I would start off with llgenericxmlclient.h/cpp and derive from 
there.

Some viewers already have a socket interface to local apps; maybe they 
have already done the above to some degree and can easily change it to 
meet such XML criteria.

In my viewer, I have the option of supporting an API (to 
libsnowglobe.so) that handles the dynamics of UTF16 or UTF8 across 
platforms, which would mean duplicate methods for both cases, 
platform-specific selection messes, and more just to access the LL-API. 
It would be easier if I just used the XML client interface, which would 
let the XML interpreter do the job of detecting and translating between 
the wide variety of UTF formats, without worrying about word size or 
endianness.

I may have something to submit. =)

Philip Rosedale wrote:
  
It would be fabulous if someone here wants to do a patch for 
Snowglobe that does the viewer side of this (see doc), which is to 
listen for local network packets and trigger gestures and set lookAt 
based on the contents of those packets.
    

_______________________________________________
Policies and (un)subscribe information available here:
http://wiki.secondlife.com/wiki/SLDev
Please read the policies before posting to keep unmoderated posting privileges
  
