Re: Proposal Virtual Reality View Lock Spec

2014-03-26 Thread Rob Manson

Hi,

we've already implemented an open source library that integrates WebGL, 
WebAudio, DeviceOrientation and gUM to create an easy-to-use VR/AR 
framework that runs on Rift, Glass, mobile, tablet, PC, etc.


Here's an overview of our API.

  https://buildar.com/awe/tutorials/intro_to_awe.js/index.html

And the project is in our GitHub repo.

  https://github.com/buildar/awe.js

Hope that's relevant.

PS: We're also working on a Depth Stream Extension proposal to add depth 
camera support to gUM.


roBman


On 27/03/14 5:58 AM, Lars Knudsen wrote:
I think it could make sense to put stuff like this as an extension on 
top of WebGL and WebAudio as they are the only two current APIs close 
enough to the bare metal/low latency/high performance to get a decent 
experience.  Also - I seem to remember that some earlier generation VR 
glasses solved the game support problem by providing their own GL and 
Joystick drivers (today - probably device orientation events) so many 
games didn't have to bother (too much) with the integration.


In theory - we could:

 - extend (if needed at all) WebGL to provide stereo vision
 - hook up WebAudio as is (as it supports audio objects, Doppler 
effect, etc., similar to OpenAL)
 - hook up DeviceOrientation/Motion in Desktop browsers if a WiiMote, 
HMD or other is connected

 - hook up getUserMedia as is to the potential VR camera

..and make it possible to do low latency paths/hooks between them if 
needed.
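As a sketch of the DeviceOrientation hook above (assuming a browser context; the angle-to-quaternion conversion follows the intrinsic Z-X'-Y'' order that deviceorientation events use, and the function name is illustrative):

```javascript
// Convert a deviceorientation event's Tait-Bryan angles (degrees, intrinsic
// Z-X'-Y'' order) into a [w, x, y, z] quaternion suitable for a WebGL camera.
const DEG2RAD = Math.PI / 180;

function orientationToQuaternion(alpha, beta, gamma) {
  const x = ((beta || 0) * DEG2RAD) / 2;  // X axis rotation (beta)
  const y = ((gamma || 0) * DEG2RAD) / 2; // Y axis rotation (gamma)
  const z = ((alpha || 0) * DEG2RAD) / 2; // Z axis rotation (alpha)
  const cX = Math.cos(x), cY = Math.cos(y), cZ = Math.cos(z);
  const sX = Math.sin(x), sY = Math.sin(y), sZ = Math.sin(z);
  return [
    cX * cY * cZ - sX * sY * sZ, // w
    sX * cY * cZ - cX * sY * sZ, // x
    cX * sY * cZ + sX * cY * sZ, // y
    cX * cY * sZ + sX * sY * cZ, // z
  ];
}

// Browser-only wiring; a no-op elsewhere.
if (typeof window !== "undefined") {
  window.addEventListener("deviceorientation", (e) => {
    const q = orientationToQuaternion(e.alpha, e.beta, e.gamma);
    // ...feed q into the WebGL view matrix for the next frame...
  });
}
```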


It seems that all (or at least most) of the components are already 
present - but proper hooks need to be made for desktop browsers at 
least (afaik .. it's been a while ;))


- Lars


On Wed, Mar 26, 2014 at 7:18 PM, Brandon Jones bajo...@google.com 
mailto:bajo...@google.com wrote:


So there are a few things to consider regarding this. For one, I
think your ViewEvent structure would need to look more like this:

interface ViewEvent : UIEvent {
    // Quaternion is 4 floats; avoids gimbal lock.
    readonly attribute Quaternion orientation;
    readonly attribute float offsetX; // offset X from the calibrated center 0, in millimeters
    readonly attribute float offsetY; // offset Y from the calibrated center 0, in millimeters
    readonly attribute float offsetZ; // offset Z from the calibrated center 0, in millimeters
    readonly attribute float accelerationX; // acceleration along the X axis, in m/s^2
    readonly attribute float accelerationY; // acceleration along the Y axis, in m/s^2
    readonly attribute float accelerationZ; // acceleration along the Z axis, in m/s^2
};
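For illustration only (this is not part of the proposal): a consumer of such an event could build a WebGL-style column-major transform from the quaternion and offsets. The millimeters-to-meters scaling and the function name are assumptions:

```javascript
// Build a column-major 4x4 matrix (WebGL convention) from a unit quaternion
// [w, x, y, z] and the millimeter offsets of a hypothetical ViewEvent.
function poseToMatrix([w, x, y, z], offsetX, offsetY, offsetZ) {
  const tx = offsetX / 1000; // assume the scene works in meters
  const ty = offsetY / 1000;
  const tz = offsetZ / 1000;
  return new Float32Array([
    1 - 2 * (y * y + z * z), 2 * (x * y + w * z),     2 * (x * z - w * y),     0,
    2 * (x * y - w * z),     1 - 2 * (x * x + z * z), 2 * (y * z + w * x),     0,
    2 * (x * z + w * y),     2 * (y * z - w * x),     1 - 2 * (x * x + y * y), 0,
    tx,                      ty,                      tz,                      1,
  ]);
}
```

An identity orientation with offsets (1000, 2000, 3000) mm yields a pure translation of (1, 2, 3) m.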

You have to deal with explicit units for a case like this and not
clamped/normalized values. What would a normalized offset of 1.0
mean? Am I slightly off center? At the other end of the room? It's
meaningless without a frame of reference. Same goes
for acceleration. You can argue that you can normalize to 1.0 ==
9.8 m/s^2 but the accelerometers will happily report values
outside that range, and at that point you might as well just
report in a standard unit.

As for things like eye position and such, you'd want to query that
separately (no sense in sending it with every event), along with
other information about the device capabilities (screen
resolution, FOV, lens distortion factors, etc.). And you'll
want to account for the scenario where more than one
device is connected to the browser.

Also, if this is going to be a high quality experience you'll want
to be able to target rendering to the HMD directly and not rely on
OS mirroring to render the image. This is a can of worms in and of
itself: How do you reference the display? Can you manipulate a DOM
tree on it, or is it limited to WebGL/Canvas2D? If you can render
HTML there how do the appropriate distortions get applied, and how
do things like depth get communicated? Does this new rendering
surface share the same Javascript scope as the page that launched
it? If the HMD refreshes at 90Hz and your monitor refreshes at
60Hz, when does requestAnimationFrame fire? These are not simple
questions, and need to be considered carefully to make sure that
any resulting API is useful.

Finally, it's worth considering that for a VR experience to be
effective it needs to be pretty low latency. Put bluntly: browsers
suck at this. Optimizing for scrolling large pages of flat
content, text, and images is very different from optimizing for
realtime, super low latency I/O. If you were to take an Oculus
Rift and plug it into one of the existing browser/Rift demos
(https://github.com/Instrument/oculus-bridge) with Chrome, you'll
probably find that in the best case the rendering lags behind your
head movement by about 4 frames. Even if your code is rendering at
a consistent 60Hz, that means you're seeing ~67ms of lag, which
will result in a motion-sickness-inducing experience.

Re: New manifest spec - ready for FPWD?

2013-11-26 Thread Rob Manson

That's a great overview!

There are two points I think haven't been fully addressed.

1. Section 8. Navigation
Much of this work (and HTML5 in general) is about bringing the Web 
Platform up to being equal with native apps.  But one thing that the 
Web does that native apps can't do is deep linking (ignoring the 
fustercluck of intents). I think it would provide a significant 
advantage if it was also possible to deep link into installed web apps.  
I understand this is very complex and I'm not proposing any solution 
right now.  But if we don't include this then we are in effect cutting 
web applications down to the level of native apps instead of leaping 
ahead of them.


Use Case: Social sharing
User A and User B both have the same web app installed on the devices 
they are using.  User A finds a resource they like inside the app and 
decides to share it from within the app through one of their social 
networks.  User B sees this link in their social feed and taps on it.  
Since User B also has this web app installed it would be nice to be 
able to open that resource directly within the installed app.  Otherwise 
User B's browser would just open it like a normal web resource.  This 
can also be relevant for the same user using the same web app across 
multiple devices.
NOTE: This is one of the key drivers we have found for building business 
cases for Web instead of Native.



2. Section 6. Start page
This is lightly touched on and slightly related to the point above, but 
the common experience especially on iOS is that even when you background 
an installed app and then foreground it again it reloads the entire 
state.  This effectively breaks the UX and makes this mode almost 
unusable.  It's definitely possible to use localStorage, etc. to work 
around this but the UX is horrible.  Allowing installed apps to persist 
their resources and loaded state across background/foreground (and 
ideally even launches) would be a massive step forward.  Perhaps naming 
this a "First use" page would help clarify this focus?
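As a sketch of the localStorage workaround mentioned above (the key name and the saved fields are illustrative; a real app would snapshot whatever state matters to it):

```javascript
// Snapshot app state when the page is hidden; restore it on the next launch.
const STATE_KEY = "app-state";

function saveState(storage, state) {
  storage.setItem(STATE_KEY, JSON.stringify(state));
}

function restoreState(storage) {
  const raw = storage.getItem(STATE_KEY);
  return raw ? JSON.parse(raw) : null;
}

// Browser-only wiring; a no-op elsewhere.
if (typeof document !== "undefined") {
  document.addEventListener("visibilitychange", () => {
    if (document.visibilityState === "hidden") {
      saveState(localStorage, { route: location.hash, scrollY: window.scrollY });
    }
  });
  const previous = restoreState(localStorage);
  if (previous) {
    // ...re-open previous.route, restore previous.scrollY, etc...
  }
}
```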


roBman


On 27/11/13 8:02 AM, Marcos Caceres wrote:

Over the last few weeks, a few of us folks in the Web Mob IG have been 
investigating the use cases and requirements for bookmarking web apps to home 
screen. The output of that research  is this living document:
http://w3c-webmob.github.io/installable-webapps/

That (ongoing) research is helping to inform the manifest spec. A bunch of us 
have been working together on IRC, twitter, etc. on a new version of the 
manifest spec:
http://w3c.github.io/manifest/

The Editors would appreciate if people take a look and see if you agree with 
the feature set.

** Right now, please refrain from bike-shedding! **

Unless anyone objects, the Editors would like to request the start of a CFC 
towards publishing a FPWD of the manifest spec.








Re: CfC: publish WD of Streams API; deadline Nov 3

2013-10-31 Thread Rob Manson
Along with WebSockets as Aymeric mentioned...WebRTC DataChannels are 
also missing.
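To make the DataChannel case concrete, here is one way a message source like an RTCDataChannel could be exposed as a stream producer. This uses the WHATWG ReadableStream shape purely for familiarity; it is not the draft Streams API under discussion:

```javascript
// Wrap a message-based channel (e.g. an RTCDataChannel) in a ReadableStream
// so it can be consumed like any other stream producer.
function channelToReadable(channel) {
  return new ReadableStream({
    start(controller) {
      channel.onmessage = (e) => controller.enqueue(e.data);
      channel.onclose = () => controller.close();
      channel.onerror = (e) => controller.error(e);
    },
  });
}

// Consumption: const reader = channelToReadable(dataChannel).getReader();
// then each reader.read() resolves with one incoming message.
```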


And I think Aymeric's point about MediaStream is important too...but 
there is very strong push-back from within the Media Capture & Streams 
TF that they don't think this is relevant 8/


Also, here's a couple of links for things I've shared/proposed recently 
related to this.


public message
http://lists.w3.org/Archives/Public/public-media-capture/2013Sep/0229.html

presentation
http://www.slideshare.net/robman/web-standards-for-ar-workshop-at-ismar13

code
https://github.com/buildar/getting_started_with_webrtc#image_processing_pipelinehtml 



All thoughts and feedback welcome.


roBman


On 29/10/13 10:22 PM, Aymeric Vitte wrote:
I have suggested some additions/changes in my latest reply to the 
Overlap thread.


The list of streams producers/consumers is not final but obviously 
WebSockets are missing.


Who is coordinating each group that should get involved? MediaStream 
for example should be based on the Stream interface and all related 
streams proposals.


Regards,

Aymeric

Le 28/10/2013 16:29, Arthur Barstow a écrit :
Feras and Takeshi have begun merging their Streams proposal and this 
is a Call for Consensus to publish a new WD of Streams API using the 
updated ED as the basis:


https://dvcs.w3.org/hg/streams-api/raw-file/tip/Overview.htm

Please note the Editors may update the ED before the TR is published 
(but they do not intend to make major changes during the CfC).


Agreement to this proposal: a) indicates support for publishing a new 
WD; and b) does not necessarily indicate support of the contents of 
the WD.


If you have any comments or concerns about this proposal, please 
reply to this e-mail by November 3 at the latest. Positive response 
to this CfC is preferred and encouraged and silence will be assumed 
to mean agreement with the proposal.


-Thanks, ArtB








Re: CfC: publish WD of Streams API; deadline Nov 3

2013-10-31 Thread Rob Manson

Sounds good.

Thanks.

roBman


On 1/11/13 4:43 PM, Feras Moussa wrote:

Yes, WebSockets was missing - I've gone ahead and updated the spec to include 
it.

Thanks for sharing the links, the content is well thought out. In particular, 
your diagram does a good job summarizing some of the key consumers and 
producers that come into play regarding Streams. I'll review it in detail.

DataChannels also seem like a possible candidate, although I'm not yet very 
familiar with them. This can be something reviewed and thought through, and 
added accordingly.


  Who is coordinating each group that should get involved? MediaStream for 
example should be based on the Stream interface and all related streams 
proposals.

Once we come to a consensus in the WG on what Streams look like and their role, we 
can begin to coordinate the impact on other groups.

Thanks,
Feras




Date: Fri, 1 Nov 2013 16:05:22 +1100
From: rob...@mob-labs.com
To: public-webapps@w3.org
Subject: Re: CfC: publish WD of Streams API; deadline Nov 3

Along with WebSockets as Aymeric mentioned...WebRTC DataChannels are
also missing.

And I think Aymeric's point about MediaStream is important too...but
there is very strong push-back from within the Media Capture & Streams
TF that they don't think this is relevant 8/

Also, here's a couple of links for things I've shared/proposed recently
related to this.

public message
http://lists.w3.org/Archives/Public/public-media-capture/2013Sep/0229.html

presentation
http://www.slideshare.net/robman/web-standards-for-ar-workshop-at-ismar13

code
https://github.com/buildar/getting_started_with_webrtc#image_processing_pipelinehtml


All thoughts and feedback welcome.


roBman










Re: [XHR] Setting the User-Agent header

2012-09-04 Thread Rob Manson
+1

roBman


On Wed, 5 Sep 2012 14:03:35 +1000
Mark Nottingham m...@mnot.net wrote:

 The current draft of XHR2 doesn't allow clients to set the UA header.
 
 That's unfortunate, because part of the intent of the UA header is to
 identify the software making the request, for debugging / tracing
 purposes. 
 
 Given that lots of libraries generate XHR requests, it would be
 natural for them to identify themselves in UA, by appending a token
 to the browser's UA (the header is a list of product tokens).  As it
 is, they have to use a separate header.
 
 I understand that it may not be desirable to allow UA to be
 overwritten, but making it append-only would be valuable...
 
 Cheers,
 
 
 --
 Mark Nottingham   http://www.mnot.net/
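For context, the separate-header workaround Mark refers to looks something like this (the header name and token here are illustrative, not a standard):

```javascript
// A library identifying itself via a separate request header, since XHR's
// forbidden header list prevents modifying User-Agent.
function productToken(name, version) {
  return name + "/" + version; // HTTP "product" token form
}

// Browser-only: attach the token to an outgoing request.
if (typeof XMLHttpRequest !== "undefined") {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", "/api/resource");
  xhr.setRequestHeader("X-Requested-With", productToken("some-lib", "1.2.3"));
  xhr.send();
}
```

Append-only access to User-Agent would let this token ride in the standard header instead.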



Re: [whatwg] File API Streaming Blobs

2011-08-08 Thread Rob Manson
Sorry to jump in the middle of your discussion but after reading Eric's
questions e.g.

I haven't fully absorbed the MediaStream API, but perhaps it
would be more natural to make a connector in that API rather
than modifying Blob?

I think this use case also applies for other stream/file/blob analysis
and processing e.g. Augmented Reality, Object Recognition, DSP, etc.

I've raised this recently across the related groups [1] with a simple
use case and a number of supporting requirements.


roBman

[1] http://lists.w3.org/Archives/Public/public-webrtc/2011Jul/0170.html 


On Mon, 2011-08-08 at 23:59 +0200, Simon Heckmann wrote:
 It's actually confidential company data I was thinking of. Together with 
 the DOMCrypt API I thought this could be a valid use case. But I think there 
 might be more cases in which it makes sense to preprocess locally stored 
 video data.
 
 Kind regards,
 Simon Heckmann
 
 
 Am 08.08.2011 um 23:51 schrieb Glenn Maynard gl...@zewt.org:
 
  On Mon, Aug 8, 2011 at 4:31 PM, Simon Heckmann si...@simonheckmann.de 
  wrote:
  Well, not directly an answer to your question, but the use case I had in 
  mind is the following:
  
  A large encrypted video file (e.g. a 2GB HD movie) is stored using the 
  File API. I then want to decrypt this file and start playing it with only a 
  minor delay. I do not want to decrypt the entire file before it can be 
  viewed. As long as such a use case is covered I am fine with everything.
  
  Assuming you're thinking of DRM, are there any related use cases other than 
  crypto?  Encryption for DRM, at least, isn't a very compelling use case; 
  client-side Javascript encryption is a very weak level of protection 
  (putting aside, for now, the question of whether the web can or should be 
  attempting to handle DRM in the first place).  If it's not DRM you're 
  thinking of, can you clarify?
  
  -- 
  Glenn Maynard