Hi Cliff,

On So, 2014-11-02 at 09:06 -0500, Clifford Yapp wrote:
> On Sat, Nov 1, 2014 at 4:02 PM, Csaba Nagy <ncs...@javampire.com> wrote:
> > If it would be me, I would skip QT and go for VR + some game controller
> > for the next generation GUI :-)
> 
> Heh.  One principle of working with existing software projects
> (especially LARGE software projects) is that you have to work via
> incremental steps to your final desired goal.

I agree completely; that was more of a joke on my part, though not very
well expressed...


> > The ultimate thing would be to get kinect connected and make it possible
> > to edit geometry through body/hand movements ;-)
> >
> > Anybody else thinking that would be cool ?
> 
> Very cool - my suggestion if you want to explore that direction (at
> least in the short term) would be to see how amenable the new
> web-based interface work is to integration with that sort of
> integration.  It's a good bet that the web interface will be the first
> one to work on andriod, for example...

That could actually be the way to go - the easiest way to feed the
headset would be to provide a service (web, etc.) it can connect to and
get the view for both eyes, rendered from slightly different positions.
Then repeat for the updated viewpoint whenever the user turns their
head or otherwise navigates.
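To make the "slightly different positions" concrete, here is a minimal
Python sketch (all names are my own, nothing BRL-CAD-specific) of how the
two per-eye camera positions could be derived from the head pose reported
by the headset:

```python
import math

def eye_positions(head, view_dir, up, ipd=0.064):
    """Return (left, right) camera positions for stereo rendering.

    Each eye sits half the inter-pupillary distance (IPD, ~64 mm on
    average) to either side of the head position, offset along the
    'right' vector of the current view.  Illustrative only."""
    # right = view_dir x up (cross product), then normalized
    rx = view_dir[1] * up[2] - view_dir[2] * up[1]
    ry = view_dir[2] * up[0] - view_dir[0] * up[2]
    rz = view_dir[0] * up[1] - view_dir[1] * up[0]
    n = math.sqrt(rx * rx + ry * ry + rz * rz)
    rx, ry, rz = rx / n, ry / n, rz / n
    h = ipd / 2.0
    left = (head[0] - rx * h, head[1] - ry * h, head[2] - rz * h)
    right = (head[0] + rx * h, head[1] + ry * h, head[2] + rz * h)
    return left, right
```

The service would render the scene twice, once from each returned
position, and send both images (or wireframes) back to the headset.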

The problem is getting this to work with as little lag as possible - a
headset user expects the view to turn immediately as they turn their
head.

I see a few possibilities:

 * convert the geometry to some generic format and let a generic viewer
process it (which I think is the approach of the current OGV project);

 * run a geometry server with full BRL-CAD installed, and serve
wire-frames (or any other level of detail) based on requests coming
from the mobile device;

The first would probably mean less lag, but would be less integrated
with BRL-CAD. The second would allow full viewing and editing of the
geometry, but needs very fast, low-latency communication between the
server and the viewer - and a specialized protocol too, if the existing
ones won't cut it.

I'm not sure all of this makes sense, but I think it would be relatively
easy to write a program which opens a BRL-CAD geometry file and then
listens on a port for incoming requests for wire-frames/renderings,
possibly accepting editing commands too. The headset would then connect
to that port and ask for the required views as the user turns their head
or navigates, and send along any editing commands the user generates.

BTW, the displayed view doesn't have to be targeted at VR headsets; a
single view could be served just as well, to accommodate plain mobile
devices or the web.

Does any of this make sense to you? Any pointers to existing
efforts/code?

Cheers,
Csaba



------------------------------------------------------------------------------
_______________________________________________
BRL-CAD Developer mailing list
brlcad-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/brlcad-devel
