Re: [whatwg] Proposal for HTML5: Motion sensing input device (Kinect, SoftKinetic, Asus Xtion)

2012-09-01 Thread Jesús Ruiz García
Hello Ian, and thank you for considering my proposal ;)

I do not know whether my proposal belongs in the getUserMedia()
specification; that is something you know better than I do. But I would
like a few details to be taken into account:
- support for canvas, or any technology that can draw to the screen;
- navigation by gestures or voice;
- the ability to scan the human body, for clothing stores and other
sites that specialize in such things.

I think that with these capabilities, websites would gain in quality
and usability.

It is hard work, especially supporting it on all systems.

Before I go: how can I add this proposal to the wiki? I guess I have to
register, since I see a login option.
In any case, I am usually on IRC, so we can talk in the #whatwg channel.

Best regards, and thanks for everything.

*Jesús Ruiz
jesusr...@php.net
jesusruiz2...@gmail.com*

2012/8/30 Ian Hickson i...@hixie.ch

 On Mon, 25 Jun 2012, Jesús Ruiz García wrote:
 
  My proposal for HTML5 is to make it work with Kinect, SoftKinetic,
  Asus Xtion, and similar devices for interacting with the web.

 On Mon, 25 Jun 2012, Tab Atkins Jr. wrote:
 
  The ability to capture sound and video from the user's devices and
  manipulate it in the page is already being exposed by the getUserMedia
  function.  Theoretically, a Kinect can provide this information.
 
  More advanced functionality like Kinect's depth information probably
  needs more study and experience before we start thinking about adding it
  to the language itself.

 On Wed, 27 Jun 2012, Robert O'Callahan wrote:
 
  If we were going to support anything like this, I think the best
  approach would be to have a new track type that getUserMedia can return
  in a MediaStream, containing depth buffer data.

 This seems like a solid approach. I recommend that further work on this
 happen in the WebRTC mailing lists where the getUserMedia() spec lives.


 On Fri, 29 Jun 2012, Jesús Ruiz García wrote:
 
  Seeing that my proposal has not been completely rejected, could I add
  it to the wiki's Proposals category?
  http://wiki.whatwg.org/wiki/Category:Proposals

 Sure, but the mailing lists are what matter at the end of the day. :-)

 --
 Ian Hickson   U+1047E)\._.,--,'``.fL
 http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Proposal for HTML5: Motion sensing input device (Kinect, SoftKinetic, Asus Xtion)

2012-09-01 Thread Jesús Ruiz García
Apparently there are already several Kinect projects in JavaScript. One
of them is this:
http://kinect.childnodes.com/

Surely it would be appropriate for you to start a new project, since
you certainly have more experience.

Regards.


Re: [whatwg] Proposal for HTML5: Motion sensing input device (Kinect, SoftKinetic, Asus Xtion)

2012-08-30 Thread Ian Hickson
On Mon, 25 Jun 2012, Jesús Ruiz García wrote:
 
 My proposal for HTML5 is to make it work with Kinect, SoftKinetic,
 Asus Xtion, and similar devices for interacting with the web.

On Mon, 25 Jun 2012, Tab Atkins Jr. wrote:
 
 The ability to capture sound and video from the user's devices and 
 manipulate it in the page is already being exposed by the getUserMedia 
 function.  Theoretically, a Kinect can provide this information.
 
 More advanced functionality like Kinect's depth information probably 
 needs more study and experience before we start thinking about adding it 
 to the language itself.

On Wed, 27 Jun 2012, Robert O'Callahan wrote:
 
 If we were going to support anything like this, I think the best 
 approach would be to have a new track type that getUserMedia can return 
 in a MediaStream, containing depth buffer data.

This seems like a solid approach. I recommend that further work on this 
happen in the WebRTC mailing lists where the getUserMedia() spec lives.


On Fri, 29 Jun 2012, Jesús Ruiz García wrote:
 
 Seeing that my proposal has not been completely rejected, could I add
 it to the wiki's Proposals category?
 http://wiki.whatwg.org/wiki/Category:Proposals

Sure, but the mailing lists are what matter at the end of the day. :-)

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'

Re: [whatwg] Proposal for HTML5: Motion sensing input device (Kinect, SoftKinetic, Asus Xtion)

2012-06-29 Thread Jesús Ruiz García
One last question, if it's not too much trouble.

Seeing that my proposal has not been completely rejected, could I add
it to the wiki's Proposals category?
http://wiki.whatwg.org/wiki/Category:Proposals

What do you think?

Regards.

2012/6/28 Jesús Ruiz García jesusruiz2...@gmail.com

 One problem that I foresee is that there are no official drivers for
 Linux and Mac. Microsoft should provide a solution for this, although
 I found a project called OpenKinect that seems to have made good
 progress: http://openkinect.org/wiki/Main_Page

 However, as mentioned, supporting Kinect and similar devices should
 not be a priority right now.

 Regards ;)

 2012/6/27 Silvia Pfeiffer silviapfeiff...@gmail.com

 On Wed, Jun 27, 2012 at 1:56 PM, Robert O'Callahan rob...@ocallahan.org
 wrote:
  On Tue, Jun 26, 2012 at 8:22 AM, Tab Atkins Jr. jackalm...@gmail.com
 wrote:
 
  The ability to capture sound and video from the user's devices and
  manipulate it in the page is already being exposed by the getUserMedia
  function.  Theoretically, a Kinect can provide this information.
 
  More advanced functionality like Kinect's depth information probably
  needs more study and experience before we start thinking about adding
  it to the language itself.
 
 
  If we were going to support anything like this, I think the best
 approach
  would be to have a new track type that getUserMedia can return in a
  MediaStream, containing depth buffer data.

 I agree.

 Experimentation with this in a non-live manner is already possible by
 using a @kind=metadata track and putting the Kinect's depth
 information into a WebVTT file to use in parallel with the video.

 WebM has further defined how to encapsulate WebVTT into a WebM text
 track [1], so you could even put this information into a video file.
 I believe the same is possible with MPEG [2].

 The exact format for how the Kinect's depth information is delivered
 as a timed metadata track would need to be specified before it could
 turn into its own @kind track type and deliver it live.


 Cheers,
 Silvia.
 [1]
 http://wiki.webmproject.org/webm-metadata/temporal-metadata/webvtt-in-webm
 [2] http://html5.cablelabs.com/tracks/media-container-mapping.html





Re: [whatwg] Proposal for HTML5: Motion sensing input device (Kinect, SoftKinetic, Asus Xtion)

2012-06-28 Thread Jesús Ruiz García
One problem that I foresee is that there are no official drivers for
Linux and Mac. Microsoft should provide a solution for this, although I
found a project called OpenKinect that seems to have made good
progress: http://openkinect.org/wiki/Main_Page

However, as mentioned, supporting Kinect and similar devices should not
be a priority right now.

Regards ;)

2012/6/27 Silvia Pfeiffer silviapfeiff...@gmail.com

 On Wed, Jun 27, 2012 at 1:56 PM, Robert O'Callahan rob...@ocallahan.org
 wrote:
  On Tue, Jun 26, 2012 at 8:22 AM, Tab Atkins Jr. jackalm...@gmail.com
 wrote:
 
  The ability to capture sound and video from the user's devices and
  manipulate it in the page is already being exposed by the getUserMedia
  function.  Theoretically, a Kinect can provide this information.
 
  More advanced functionality like Kinect's depth information probably
  needs more study and experience before we start thinking about adding
  it to the language itself.
 
 
  If we were going to support anything like this, I think the best approach
  would be to have a new track type that getUserMedia can return in a
  MediaStream, containing depth buffer data.

 I agree.

 Experimentation with this in a non-live manner is already possible by
 using a @kind=metadata track and putting the Kinect's depth
 information into a WebVTT file to use in parallel with the video.

 WebM has further defined how to encapsulate WebVTT into a WebM text
 track [1], so you could even put this information into a video file.
 I believe the same is possible with MPEG [2].

 The exact format for how the Kinect's depth information is delivered
 as a timed metadata track would need to be specified before it could
 turn into its own @kind track type and deliver it live.


 Cheers,
 Silvia.
 [1]
 http://wiki.webmproject.org/webm-metadata/temporal-metadata/webvtt-in-webm
 [2] http://html5.cablelabs.com/tracks/media-container-mapping.html



Re: [whatwg] Proposal for HTML5: Motion sensing input device (Kinect, SoftKinetic, Asus Xtion)

2012-06-27 Thread Robert O'Callahan
On Tue, Jun 26, 2012 at 8:22 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 The ability to capture sound and video from the user's devices and
 manipulate it in the page is already being exposed by the getUserMedia
 function.  Theoretically, a Kinect can provide this information.

 More advanced functionality like Kinect's depth information probably
 needs more study and experience before we start thinking about adding
 it to the language itself.


If we were going to support anything like this, I think the best approach
would be to have a new track type that getUserMedia can return in a
MediaStream, containing depth buffer data.
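
As a rough sketch of the shape such an API could take (nothing below is
specified anywhere: the depth constraint and the depth track kind are
both hypothetical, purely for illustration):

  // Hypothetical sketch only: neither a "depth" constraint nor a track
  // kind of "depth" exists in any specification.
  navigator.getUserMedia(
    { video: true, depth: true },            // hypothetical constraint
    function (stream) {
      var depthTracks = stream.getTracks().filter(function (track) {
        return track.kind === 'depth';       // hypothetical track kind
      });
      if (depthTracks.length > 0) {
        // Depth buffer frames would arrive on this track, alongside the
        // ordinary audio/video tracks of the same MediaStream.
        console.log('depth track:', depthTracks[0].label);
      }
    },
    function (error) {
      console.error('getUserMedia failed:', error);
    }
  );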

Rob
-- 
“You have heard that it was said, ‘Love your neighbor and hate your enemy.’
But I tell you, love your enemies and pray for those who persecute you,
that you may be children of your Father in heaven. ... If you love those
who love you, what reward will you get? Are not even the tax collectors
doing that? And if you greet only your own people, what are you doing more
than others?” [Matthew 5:43-47]


Re: [whatwg] Proposal for HTML5: Motion sensing input device (Kinect, SoftKinetic, Asus Xtion)

2012-06-27 Thread Silvia Pfeiffer
On Wed, Jun 27, 2012 at 1:56 PM, Robert O'Callahan rob...@ocallahan.org wrote:
 On Tue, Jun 26, 2012 at 8:22 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 The ability to capture sound and video from the user's devices and
 manipulate it in the page is already being exposed by the getUserMedia
 function.  Theoretically, a Kinect can provide this information.

 More advanced functionality like Kinect's depth information probably
 needs more study and experience before we start thinking about adding
 it to the language itself.


 If we were going to support anything like this, I think the best approach
 would be to have a new track type that getUserMedia can return in a
 MediaStream, containing depth buffer data.

I agree.

Experimentation with this in a non-live manner is already possible by
using a @kind=metadata track and putting the Kinect's depth
information into a WebVTT file to use in parallel with the video.
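
For example (the file name and the JSON cue payload here are
assumptions; a metadata cue can carry arbitrary text):

  // Assumed setup: depth.vtt is a WebVTT file whose metadata cues carry
  // JSON depth samples, e.g. {"width":320,"height":240,"data":[...]},
  // timed to match the video.
  var video = document.querySelector('video');
  var track = document.createElement('track');
  track.kind = 'metadata';
  track.src = 'depth.vtt';              // hypothetical depth-cue file
  video.appendChild(track);
  track.track.mode = 'hidden';          // fire cue events, render nothing
  track.track.addEventListener('cuechange', function () {
    var cue = track.track.activeCues[0];
    if (cue) {
      var depth = JSON.parse(cue.text); // assumed JSON payload
      // ...process depth.data in parallel with the current video frame.
    }
  });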

WebM has further defined how to encapsulate WebVTT into a WebM text
track [1], so you could even put this information into a video file.
I believe the same is possible with MPEG [2].

The exact format for how the Kinect's depth information is delivered
as a timed metadata track would need to be specified before it could
turn into its own @kind track type and deliver it live.


Cheers,
Silvia.
[1] http://wiki.webmproject.org/webm-metadata/temporal-metadata/webvtt-in-webm
[2] http://html5.cablelabs.com/tracks/media-container-mapping.html


[whatwg] Proposal for HTML5: Motion sensing input device (Kinect, SoftKinetic, Asus Xtion)

2012-06-25 Thread Jesús Ruiz García
Hello, and thanks for reading this message.

I will start by saying that this message may be considered useless. I
apologize for that.

A few weeks ago I was in the #whatwg chat and asked how to send a
proposal for HTML5 (JavaScript) to the list. I have taken a few days
before sending this email because I was investigating whether a similar
project was already in production, and I found one.

My proposal for HTML5 is to make it work with Kinect, SoftKinetic,
Asus Xtion, and similar devices for interacting with the web.
Logically, Kinect, being the most commonly used device, would be ideal
for this proposal.

The Kinect patent must be owned by Microsoft. I am told there have been
discussions about patent issues around HTML5, so this aspect could
possibly be some kind of problem. From my point of view, it would help
sell more devices of this type. In the future these devices, more
powerful and included as standard on all computers, may even replace
webcams.

Also, users would gain an advance on the web. I do not mean merely
support for browsing the web with gestures, but more useful things. I
have thought of some functions for the web that are not being
developed:

*- Online shopping or online retailing:* You want to buy clothes but do
not know what size you actually wear. Online stores could offer an
option to run Kinect and scan your body to tell you the correct size
for that article. We could even see whether that shirt looks good on
you or not.

*- Makeup/hair-salon sites:* With face recognition, users could try out
the different makeup on the market. Obviously these products would be
tested virtually and could then be purchased.

*- Fitness/rehabilitation sites:* While this could be considered a
videogame, I see it more as an application. It would check whether the
person is performing the exercise well so as not to cause any injury,
and track the rhythm of physical exercise and progress in mobility.

*- Possible support for canvas:* Interact with canvas via Kinect,
although this can also be done with multitouch technology.

There are many ideas, but these are four simple possibilities that
occurred to me while I was writing this text.

MIT is currently developing a JavaScript library called *DepthJS*. It
allows any web page to interact with the Microsoft Kinect using
JavaScript:
https://github.com/doug/depthjs
It has not been updated for a few months, and so far it only allows web
browsing via gestures. I suppose that at the moment it cannot perform a
body scan and display it on screen in the browser.

According to some reports, Microsoft is also developing a version of
Internet Explorer for Xbox 360 that supports Kinect.

Well, with this information, you can get a picture of my proposal.

As I said at the beginning of this text, I apologize if this proposal
is absurd or does not fit the HTML5 philosophy and it is better to have
a separate library such as *DepthJS* for this.

I hope to hear your opinions and read your comments.

Regards, and thanks for reading this proposal.

Sorry for my English; I am Spanish.

*Jesús Ruiz*
jesusr...@php.net
jesusruiz2...@gmail.com


Re: [whatwg] Proposal for HTML5: Motion sensing input device (Kinect, SoftKinetic, Asus Xtion)

2012-06-25 Thread Tab Atkins Jr.
On Mon, Jun 25, 2012 at 9:10 AM, Jesús Ruiz García
jesusruiz2...@gmail.com wrote:
 I will start by saying that this message may be considered useless. I
 apologize for that.

 A few weeks ago I was in the #whatwg chat and asked how to send a
 proposal for HTML5 (JavaScript) to the list. I have taken a few days
 before sending this email because I was investigating whether a
 similar project was already in production, and I found one.

 My proposal for HTML5 is to make it work with Kinect, SoftKinetic,
 Asus Xtion, and similar devices for interacting with the web.
 Logically, Kinect, being the most commonly used device, would be ideal
 for this proposal.

 The Kinect patent must be owned by Microsoft. I am told there have
 been discussions about patent issues around HTML5, so this aspect
 could possibly be some kind of problem. From my point of view, it
 would help sell more devices of this type. In the future these
 devices, more powerful and included as standard on all computers, may
 even replace webcams.

 Also, users would gain an advance on the web. I do not mean merely
 support for browsing the web with gestures, but more useful things. I
 have thought of some functions for the web that are not being
 developed:

 *- Online shopping or online retailing:* You want to buy clothes but
 do not know what size you actually wear. Online stores could offer an
 option to run Kinect and scan your body to tell you the correct size
 for that article. We could even see whether that shirt looks good on
 you or not.

 *- Makeup/hair-salon sites:* With face recognition, users could try
 out the different makeup on the market. Obviously these products would
 be tested virtually and could then be purchased.

 *- Fitness/rehabilitation sites:* While this could be considered a
 videogame, I see it more as an application. It would check whether the
 person is performing the exercise well so as not to cause any injury,
 and track the rhythm of physical exercise and progress in mobility.

 *- Possible support for canvas:* Interact with canvas via Kinect,
 although this can also be done with multitouch technology.

 There are many ideas, but these are four simple possibilities that
 occurred to me while I was writing this text.

 MIT is currently developing a JavaScript library called *DepthJS*. It
 allows any web page to interact with the Microsoft Kinect using
 JavaScript:
 https://github.com/doug/depthjs
 It has not been updated for a few months, and so far it only allows
 web browsing via gestures. I suppose that at the moment it cannot
 perform a body scan and display it on screen in the browser.

 According to some reports, Microsoft is also developing a version of
 Internet Explorer for Xbox 360 that supports Kinect.

 Well, with this information, you can get a picture of my proposal.

 As I said at the beginning of this text, I apologize if this proposal
 is absurd or does not fit the HTML5 philosophy and it is better to
 have a separate library such as *DepthJS* for this.

 I hope to hear your opinions and read your comments.

The ability to capture sound and video from the user's devices and
manipulate it in the page is already being exposed by the getUserMedia
function.  Theoretically, a Kinect can provide this information.

More advanced functionality like Kinect's depth information probably
needs more study and experience before we start thinking about adding
it to the language itself.
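
For example (modulo vendor prefixes, and assuming camera permission is
granted), a page can already pull camera frames into a canvas and work
on the pixels:

  // Minimal sketch: camera frames drawn to a canvas, whose pixel data
  // the page can then manipulate (filters, gesture detection, etc.).
  var video = document.createElement('video');
  var canvas = document.createElement('canvas');
  var ctx = canvas.getContext('2d');

  navigator.getUserMedia({ video: true }, function (stream) {
    video.srcObject = stream; // older engines: URL.createObjectURL(stream)
    video.play();
    video.addEventListener('loadedmetadata', function () {
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      (function frame() {
        ctx.drawImage(video, 0, 0);
        // Per-frame pixels, available to the page for processing:
        var pixels = ctx.getImageData(0, 0, canvas.width, canvas.height);
        // ...operate on pixels.data here.
        requestAnimationFrame(frame);
      })();
    });
  }, function (error) {
    console.error('getUserMedia failed:', error);
  });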

~TJ


Re: [whatwg] Proposal for HTML5: Motion sensing input device (Kinect, SoftKinetic, Asus Xtion)

2012-06-25 Thread Jesús Ruiz García
Thank you for your answer, Tab Atkins.

Indeed, as you say, I see this as a proposal for the future.
getUserMedia is perfect for these features, as you indicate.

I will stay alert to any news on this subject, in case I can
contribute something.

I will think about other proposals that may help improve websites.

Regards and thanks.

2012/6/25 Tab Atkins Jr. jackalm...@gmail.com

 On Mon, Jun 25, 2012 at 9:10 AM, Jesús Ruiz García
 jesusruiz2...@gmail.com wrote:
  I will start by saying that this message may be considered useless.
  I apologize for that.

  A few weeks ago I was in the #whatwg chat and asked how to send a
  proposal for HTML5 (JavaScript) to the list. I have taken a few days
  before sending this email because I was investigating whether a
  similar project was already in production, and I found one.

  My proposal for HTML5 is to make it work with Kinect, SoftKinetic,
  Asus Xtion, and similar devices for interacting with the web.
  Logically, Kinect, being the most commonly used device, would be
  ideal for this proposal.

  The Kinect patent must be owned by Microsoft. I am told there have
  been discussions about patent issues around HTML5, so this aspect
  could possibly be some kind of problem. From my point of view, it
  would help sell more devices of this type. In the future these
  devices, more powerful and included as standard on all computers,
  may even replace webcams.

  Also, users would gain an advance on the web. I do not mean merely
  support for browsing the web with gestures, but more useful things.
  I have thought of some functions for the web that are not being
  developed:

  *- Online shopping or online retailing:* You want to buy clothes but
  do not know what size you actually wear. Online stores could offer
  an option to run Kinect and scan your body to tell you the correct
  size for that article. We could even see whether that shirt looks
  good on you or not.

  *- Makeup/hair-salon sites:* With face recognition, users could try
  out the different makeup on the market. Obviously these products
  would be tested virtually and could then be purchased.

  *- Fitness/rehabilitation sites:* While this could be considered a
  videogame, I see it more as an application. It would check whether
  the person is performing the exercise well so as not to cause any
  injury, and track the rhythm of physical exercise and progress in
  mobility.

  *- Possible support for canvas:* Interact with canvas via Kinect,
  although this can also be done with multitouch technology.

  There are many ideas, but these are four simple possibilities that
  occurred to me while I was writing this text.

  MIT is currently developing a JavaScript library called *DepthJS*.
  It allows any web page to interact with the Microsoft Kinect using
  JavaScript:
  https://github.com/doug/depthjs
  It has not been updated for a few months, and so far it only allows
  web browsing via gestures. I suppose that at the moment it cannot
  perform a body scan and display it on screen in the browser.

  According to some reports, Microsoft is also developing a version of
  Internet Explorer for Xbox 360 that supports Kinect.

  Well, with this information, you can get a picture of my proposal.

  As I said at the beginning of this text, I apologize if this
  proposal is absurd or does not fit the HTML5 philosophy and it is
  better to have a separate library such as *DepthJS* for this.

  I hope to hear your opinions and read your comments.

 The ability to capture sound and video from the user's devices and
 manipulate it in the page is already being exposed by the getUserMedia
 function.  Theoretically, a Kinect can provide this information.

 More advanced functionality like Kinect's depth information probably
 needs more study and experience before we start thinking about adding
 it to the language itself.

 ~TJ