Re: [whatwg] PeerConnection, MediaStream, getUserMedia(), and other feedback

2011-08-02 Thread Per-Erik Brodin

On 2011-07-26 07:30, Ian Hickson wrote:

On Tue, 19 Jul 2011, Per-Erik Brodin wrote:


Perhaps now that there is no longer any relation to tracks on the media
elements we could also change Track to something else, maybe Component.
I have had people complaining to me that Track is not really a good name
here.


I'm happy to change the name if there's a better one. I'm not sure
Component is any better than Track though.


OK, let's keep Track until someone comes up with a better name then.


Good. Could we still keep audio and video in separate lists though? It
makes it easier to check the number of audio or video components and you
can avoid loops that have to check the kind for each iteration if you
only want to operate on one media type.


Well in most (almost all?) cases, there'll be at most one audio track and
at most one video track, which is why I didn't put them in separate lists.
What use cases did you have in mind where there would be enough tracks
that it would be better for them to be separate lists?


Yes, you're right, but even with zero or one track it's more convenient 
to have them separate: that way you can easily check whether the stream 
contains any audio and/or video tracks, and count the tracks of each 
kind. I also think it will be problematic to add another kind at a later 
stage if all tracks are in the same list, since people will assume that 
audio and video are the only kinds.
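
[Editorial sketch, not from the thread: the ergonomics being argued, in 
2011-era JavaScript. The combined 'tracks' list and the per-track 'kind' 
attribute are taken from the discussion above; 'audioTracks' is the 
separate-list shape being requested; 'stream' stands for a MediaStream 
obtained elsewhere, e.g. from getUserMedia().]

    // With one combined list, counting audio tracks takes a loop:
    var audioCount = 0;
    for (var i = 0; i < stream.tracks.length; i++) {
      if (stream.tracks[i].kind === 'audio') audioCount++;
    }

    // With separate lists, it is a single property lookup:
    // var audioCount = stream.audioTracks.length;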



I also think that it would be easier to construct new MediaStream
objects from individual components rather than temporarily disabling the
ones you do not want to copy to the new MediaStream object and then
re-enabling them again afterwards.


Re-enabling them afterwards would re-include them in the copies, too.


Why is this needed? If a new MediaStream object is constructed from 
another MediaStream I think it would be simpler to just let that be a 
clone of the stream with all tracks present (with the enabled/disabled 
states independently set).



The main use case here is temporarily disabling a video or audio track in
a video conference. I don't understand how your proposal would work for
that. Can you elaborate?


A new MediaStream object is created from the video track of a 
LocalMediaStream to be used as a self-view. The LocalMediaStream can then 
be sent over PeerConnection and the video track disabled without 
affecting the MediaStream being played back locally in the self-view. In 
addition, my proposal opens up additional use cases that require 
combining tracks from different streams, such as recording a 
conversation (a number of audio tracks from various streams, local and 
remote, combined into a single stream).
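
[Editorial sketch, not from the thread: the proposal in JavaScript. The 
track-based MediaStream constructor, the videoTracks list, and the 
getUserMedia() options string follow the discussion but are assumptions; 
'pc' stands for an already-created PeerConnection.]

    navigator.getUserMedia('audio,video', function (localStream) {
      // Video-only stream for the local self-view (proposed constructor).
      var selfView = new MediaStream([localStream.videoTracks[0]]);
      document.querySelector('video').src = URL.createObjectURL(selfView);

      // Send the full stream to the peer; disabling its video track
      // later does not affect the self-view stream.
      pc.addStream(localStream);
    });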



It is also unclear to me what happens to a LocalMediaStream object that
is currently being consumed in that case.


Not sure what you mean. Can you elaborate?


I was under the impression that, if a stream of audio and video is being 
sent to one peer and then another peer joins but only audio should be 
sent, then video would have to be temporarily disabled in the first 
stream in order to construct a new MediaStream object containing only 
the audio track. Again, it would be simpler to construct a new 
MediaStream object from just the audio track and send that.
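
[Editorial sketch, same assumed constructor as above: instead of 
temporarily disabling video on the stream already being sent to the first 
peer, build an audio-only stream for the second peer; 'pcB' stands for 
the second PeerConnection.]

    var audioOnly = new MediaStream([localStream.audioTracks[0]]);
    pcB.addStream(audioOnly);  // the second peer receives audio only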



Why should the label be the same as the parent's on the newly constructed
MediaStream object?


The label identifies the source of the media. It's the same source, so,
same label.


I agree, but usually you have more than one source in a MediaStream, and 
if you construct a new MediaStream from it which doesn't contain all of 
the sources from the parent, I don't think the label should be the same. 
By the way, what happens if you call getUserMedia() twice and get the 
same set of sources both times, do you get the same label then? What if 
the user selects different sources the second time?



If you send two MediaStream objects constructed from the same
LocalMediaStream over a PeerConnection there needs to be a way to
separate them on the receiving side.


What's the use case for sending the same feed twice?


If the labels are the same then that should indicate that it's 
essentially the same stream and there should be no need to send it 
twice. If the streams are not composed of the same underlying sources 
then you may want to send them both and the labels should differ.



I also think it is a bit unfortunate that we now have a 'label' property
on the track objects that means something other than the 'label' property
on MediaStream; perhaps 'description' would be a more suitable name for
the former.


In what sense do they mean different things? I don't understand the
problem here. Can you elaborate?


As Tommy pointed out, label on MediaStream is an identifier for the 
stream whereas label on MediaStreamTrack is a description of the source.



The current design is just the result of needing to define what
happens when you call getRecordedData() twice in a row. Could you

[whatwg] PeerConnection, MediaStream, getUserMedia(), and other feedback

2011-07-28 Thread Stefan Håkansson LK
On Tue, Jul 26, 2011 at 07:30, Ian Hickson ian at hixie.ch wrote:


  If you send two MediaStream objects constructed from the same
  LocalMediaStream over a PeerConnection there needs to be a way to
  separate them on the receiving side.

 What's the use case for sending the same feed twice?


There's no proper use case as such but the spec allows this.
The question is how serious a problem this is. If you want to fork, and make 
both (all) versions available at the peer, would you not transmit the full 
stream and fork at the receiving end for efficiency reasons? And if you really 
want to fork at the sender, one way to separate them is to use one 
PeerConnection per fork.

Stefan

Re: [whatwg] PeerConnection, MediaStream, getUserMedia(), and other feedback

2011-07-26 Thread Mark Callow
There is a lot more that could be done than simply triggering the flash.
See /The Frankencamera: An Experimental Platform for Computational
Photography/ http://graphics.stanford.edu/papers/fcam/ and The FCAM
API http://fcam.garage.maemo.org/.

Regards

-Mark

On 26/07/2011 14:30, Ian Hickson wrote:
 On Thu, 14 Jul 2011 04:09:40 +0530, Ian Hickson i...@hixie.ch wrote:

Another question is flash. As far as I have seen, there seems to be 
no option to specify whether the camera needs to use flash or not. 
Is this decision left up to the device? (If someone is making an app 
which is just clicking a picture of the person, then it would be 
nice to have the camera use flash in low light conditions).
  
   getUserMedia() returns a video stream, so it wouldn't use a flash.
  
  Wouldn't it make sense to have a provision for flash separately then? I 
  think a lot of apps would like just a picture instead of video, and in 
those cases, flash would be required. Maybe a separate provision in the 
spec which defines whether to use flash, and if so, for how many 
milliseconds. Is that doable?


Re: [whatwg] PeerConnection, MediaStream, getUserMedia(), and other feedback

2011-07-26 Thread ᛏᚮᛘᛘᚤ
On Tue, Jul 26, 2011 at 07:30, Ian Hickson i...@hixie.ch wrote:


  If you send two MediaStream objects constructed from the same
  LocalMediaStream over a PeerConnection there needs to be a way to
  separate them on the receiving side.

 What's the use case for sending the same feed twice?


There's no proper use case as such but the spec allows this.




  I also think it is a bit unfortunate that we now have a 'label' property
  on the track objects that means something other than the 'label' property
  on MediaStream; perhaps 'description' would be a more suitable name for
  the former.

 In what sense do they mean different things? I don't understand the
 problem here. Can you elaborate?


label on a MediaStream is a unique identifier, while the label on a
MediaStreamTrack is just a description like "Logitech Vision Pro", "Line In"
or "Built-in Mic". I too find this a bit odd.
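
[Editorial sketch; the example values and the 'tracks' list access are 
illustrative assumptions, the distinction itself is from the message 
above.]

    console.log(stream.label);           // e.g. '2f818a20-...' (unique id)
    console.log(stream.tracks[0].label); // e.g. 'Built-in Mic' (description)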




 On Wed, 20 Jul 2011, Tommy Widenflycht (ᛏᚮᛘᛘᚤ) wrote:
  On Mon, Jul 18, 2011 at 20:38, Ian Hickson i...@hixie.ch wrote:
   On Mon, 18 Jul 2011, Tommy Widenflycht (ᛏᚮᛘᛘᚤ) wrote:
   
I am very confused regarding the below paragraph from the latest
spec:
   
When a track in a MediaStream parent is disabled, any
MediaStreamTrack objects corresponding to the tracks in any
MediaStream objects that were created from parent are disassociated
from any track, and must not be reused for tracks again. If a
disabled track in a MediaStream parent is re-enabled, from the
perspective of any MediaStream objects that were created from parent
it is a new track and thus new MediaStreamTrack objects must be
created for the tracks that correspond to the re-enabled track.
   
After cloning a LocalMediaStream it looks like this:
   
LocalMediaStream -> MediaStream1
Track1(E)   Track1(E)
Track2(E)   Track2(E)
Track3(E)   Track3(E)
   
and as I interpret the spec it looks like this if Track1 in the
LocalMediaStream is disabled:
   
LocalMediaStream -> MediaStream1
Track1(D)   Track2(E)
Track2(E)   Track3(E)
Track3(E)
  
   Correct so far (though I'd avoid the term cloning since it's not
   quite what's going on here -- the spec uses forking, which may be
   closer though is still not ideal).
  
So Track1 disappears from the MediaStream1 object and doesn't come
back even if Track1 in the LMS object is re-enabled:
   
LocalMediaStream -> MediaStream1
Track1(E)   Track2(E)
Track2(E)   Track3(E)
Track3(E)
  
   No, it'll create a new track object:
  
LocalMediaStream -> MediaStream1
Track1(E)   Track4(E)
Track2(E)   Track2(E)
Track3(E)   Track3(E)
  
   This is specified in the sentence that starts "If a disabled track in
   a MediaStream parent is re-enabled".
 
  Thanks for the explanation. To me this sounds overly complicated; why
  not just make it so that disabling a track overrides the track
  settings for forked MediaStreams?

 I don't understand what you mean. How would that be different?


If I may make an analogy to the real world: plumbing.

Each fork of a MediaStream is a new joint in the pipe, and my suggestion
introduces a tap at each joint. No matter how you open and close the tap at
the end (or middle), if any previous tap is closed there's nothing coming
through.
The spec currently removes and re-adds the entire pipe after the changed
joint.
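
[Editorial sketch of the two behaviours, in JavaScript; the stream-cloning
constructor, the 'tracks' list, and the 'enabled' flag are assumed names
based on the discussion above.]

    var fork = new MediaStream(localStream);  // a new joint in the pipe
    var t = fork.tracks[0];                   // corresponds to Track1

    localStream.tracks[0].enabled = false;    // close the tap at the joint
    // Draft spec: t is now disassociated and never reused.

    localStream.tracks[0].enabled = true;     // reopen the tap
    // Draft spec: fork.tracks gains a brand-new MediaStreamTrack (Track4);
    // under the proposal here, t itself would simply start flowing again.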



  Also some follow-up questions regarding the new TrackLists:
 
  What should happen when a track fails? Should the entire stream fail,
  should the MSTrack be silently removed, or should the MSTrack be
  disassociated from the track (and thus become a do-nothing object)?

 What do you mean by "fails"?


Yanking the USB cable to the camera, for example. This should IMHO stop the
MS, not just silently send black video.


  What should happen when a stream with two or more video tracks is
  associated with a video tag? Just render the first enabled one?

 Same as if you had a regular video file with multiple tracks.


And that is? Sorry, this might be written down somewhere and I have missed
it.


/Tommy

-- 
Tommy Widenflycht, Senior Software Engineer
Google Sweden AB, Kungsbron 2, SE-11122 Stockholm, Sweden
Org. nr. 556656-6880
And yes, I have to include the above in every outgoing email according to EU
law.


Re: [whatwg] PeerConnection, MediaStream, getUserMedia(), and other feedback

2011-07-25 Thread Ian Hickson
On Thu, 14 Jul 2011, Shwetank Dixit wrote:
 On Thu, 14 Jul 2011 04:09:40 +0530, Ian Hickson i...@hixie.ch wrote:
   
   Another question is flash. As far as I have seen, there seems to be 
   no option to specify whether the camera needs to use flash or not. 
   Is this decision left up to the device? (If someone is making an app 
   which is just clicking a picture of the person, then it would be 
   nice to have the camera use flash in low light conditions).
 
  getUserMedia() returns a video stream, so it wouldn't use a flash.
 
 Wouldn't it make sense to have a provision for flash separately then? I 
 think a lot of apps would like just a picture instead of video, and in 
 those cases, flash would be required. Maybe a separate provision in the 
 spec which defines whether to use flash, and if so, for how many 
 milliseconds. Is that doable?

In response to getUserMedia()? I don't really understand how that would 
work. Could you elaborate? How do you envisage the API working? Maybe a 
concrete example would help.

I'm particularly concerned about two things: preventing hostile sites from 
abusing a flash feature to troll the user, and preventing well-meaning but 
poorly designed sites from using the flash when the user doesn't want it 
to (e.g. when taking a photograph in an area where a flash isn't desired).


On Thu, 14 Jul 2011, timeless wrote:

 I'd expect a web app to have no idea about device camera specifications 
 and thus to not be able to properly specify a flash duration. I don't 
 see how such a thing is valuable.
 
 If a user is in a movie theater, or a museum, it's quite likely they 
 won't notice a web app is forcing a flash. Let the user control flash 
 through a "useragent only" or "host application only" mode. I believe the 
 hazards of exposing flash duration outweigh any benefits. The only 
 application class I know of built using control of camera flash is 
 flash-light, and that's both a hack and not guaranteed to be workable 
 for all possible flash technologies.

Right.


On Fri, 15 Jul 2011, Shwetank Dixit wrote:
 
 Just like, just allowing the web app to use the camera as it is will not 
 make sense, and presumably, user agents will implement an authorization 
 by the user before the app gains access to the camera (something like 
 'This application requests access to the camera. Allow for now/Always 
 Allow/Never Allow/Close' just like you do in geolocation right now) ... 
 just like that, you could do it for flash, where the app only gains 
 access to it if the user allows it. If that is the implementation, I do 
 not think there would be many hazards in allowing flash access.

This is quickly going to get frustrating to the user. In general, we'd 
rather not have any such prompts. For example, for video, well-designed 
browsers are likely not going to have a yes/no prompt; instead they'll 
just have a prompt that asks the user which camera they want to use. This 
is far less frustrating to the user.


 Apart from helping capture images/video in low light conditions, there 
 are a few other use cases for flash such as the flash light thing you 
 mentioned, as well as a possible S.O.S type app.

 I'm fine if the consensus is that the device/user agent will handle the 
 issue of flash by showing some sort of control where the user can click 
 between 'flash on/off/auto'. That will cover *most* of the use cases, 
 which is recording images/video in low light conditions. If so, then it 
 might be good to specify that somewhere in the spec just to make things 
 a bit clearer?

Ok, done.


On Tue, 19 Jul 2011, Per-Erik Brodin wrote:
 
 Perhaps now that there is no longer any relation to tracks on the media 
 elements we could also change Track to something else, maybe Component. 
 I have had people complaining to me that Track is not really a good name 
 here.

I'm happy to change the name if there's a better one. I'm not sure 
Component is any better than Track though.


 Good. Could we still keep audio and video in separate lists though? It 
 makes it easier to check the number of audio or video components and you 
 can avoid loops that have to check the kind for each iteration if you 
 only want to operate on one media type.

Well in most (almost all?) cases, there'll be at most one audio track and 
at most one video track, which is why I didn't put them in separate lists. 
What use cases did you have in mind where there would be enough tracks 
that it would be better for them to be separate lists?


 I also think that it would be easier to construct new MediaStream 
 objects from individual components rather than temporarily disabling the 
 ones you do not want to copy to the new MediaStream object and then 
 re-enabling them again afterwards.

Re-enabling them afterwards would re-include them in the copies, too.

The main use case here is temporarily disabling a video or audio track in 
a video conference. I don't understand how your proposal would work for 
that. Can you 

Re: [whatwg] PeerConnection, MediaStream, getUserMedia(), and other feedback

2011-07-19 Thread Per-Erik Brodin

On 2011-07-14 00:39, Ian Hickson wrote:

In response to off-list feedback, I've renamed StreamTrack to
MediaStreamTrack to be clearer about its relationship to the other
interfaces.


Perhaps now that there is no longer any relation to tracks on the media 
elements we could also change Track to something else, maybe Component. 
I have had people complaining to me that Track is not really a good name 
here.



On Wed, 8 Jun 2011, Per-Erik Brodin wrote:


The TrackList feature seems to be a good way to control the different
components of a Stream. Although it is said that tracks provide a way to
temporarily disable a local camera, due to the nature of the
ExclusiveTrackList it is still not possible to disable video altogether,
i.e. to 'pull down the curtain' in a video conference. I noticed that
there is a bug filed on this issue but I do not think the proposed
solution there is quite right. There is a state in which no tracks are
selected in an ExclusiveTrackList, when the selected index returned is
-1. A quick fix would be to allow also setting the active track to -1 in
order to deselect all the other tracks.


This is fixed now, hopefully. Let me know if the fix is not sufficient.

(I replaced the videoTracks and audioTracks lists with a single tracks
list in which you can enable and disable individual tracks.)


Good. Could we still keep audio and video in separate lists though? It 
makes it easier to check the number of audio or video components and you 
can avoid loops that have to check the kind for each iteration if you 
only want to operate on one media type. I also think that it would be 
easier to construct new MediaStream objects from individual components 
rather than temporarily disabling the ones you do not want to copy to 
the new MediaStream object and then re-enabling them again afterwards. 
It is also unclear to me what happens to a LocalMediaStream object that 
is currently being consumed in that case.


Why should the label be the same as the parent's on the newly constructed 
MediaStream object? If you send two MediaStream objects constructed from 
the same LocalMediaStream over a PeerConnection there needs to be a way 
to separate them on the receiving side. I also think it is a bit 
unfortunate that we now have a 'label' property on the track objects 
that means something other than the 'label' property on MediaStream; 
perhaps 'description' would be a more suitable name for the former.



We prefer having a StreamRecorder that you have to stop in order to get the
recorded data (like the previous one, but with asynchronous Blob retrieval),
and we do not understand the use cases for the current proposal, where
recording continues until the recorder is garbage collected (or the Stream
ends) and you always get the data from the beginning of the recording. This
also has to be tied to application quota in some way.


The current design is just the result of needing to define what happens
when you call getRecordedData() twice in a row. Could you elaborate on
what API you think we should have?


What I am thinking of is something similar to what was proposed in
http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2011-March/030921.html
although that does not take quota into account. Preferably, quota should 
be expressed in media time, but that is heavily dependent on the format 
being used, and regardless of any codecs I still think that the format 
has to be specified somehow. Perhaps it would be best to push recording 
to v2, since this does not seem to be the primary use case for people 
currently showing the most interest in this part of the spec.
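
[Editorial sketch, not from the thread, of the stop-to-retrieve recorder 
shape preferred above, in JavaScript. All names (StreamRecorder, stop(), 
the callback, sendToServer) are assumptions for illustration; neither 
this proposal nor the draft's recording API was stable at the time.]

    var recorder = new StreamRecorder(stream);  // recording starts here
    // ... some time later ...
    recorder.stop(function (blob) {
      // The Blob is delivered asynchronously once recording has stopped.
      sendToServer(blob);  // hypothetical upload helper
    });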



Instead of blob: we would like to use stream: for the Stream URLs so
that, very early on in the media engine selection, we can use the protocol
scheme to determine how the URL will be handled. Blobs are typically
handled in the same way as other media playback. The definition of
stream: could be the same as for blob:.


Why can't the UA know which blob: URLs point to streams and which point to
blobs?


I was not saying that it would not be possible to keep track of which 
blob: URLs point to blobs and which point to streams, just that we 
want to avoid doing that in the early stage of the media engine 
selection. In my opinion a stream is quite the opposite of a blob 
(unknown, perhaps infinite length vs. fixed length), so when printing the 
URLs for debugging purposes it would also be much nicer to have two 
different protocol schemes. If I remember correctly, the discussions 
leading up to the renaming of createBlobURL to createObjectURL assumed 
that there would be stream: URLs.
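
[Editorial sketch of the early dispatch described above, in JavaScript. 
The stream: scheme is the proposal; createObjectURL is the renamed API 
mentioned; the routing branches are illustrative only.]

    var url = URL.createObjectURL(mediaStream); // 'stream:...' if adopted
    if (url.indexOf('stream:') === 0) {
      // Route to the real-time pipeline (unknown, perhaps infinite length).
    } else if (url.indexOf('blob:') === 0) {
      // Route to regular, fixed-length media playback.
    }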



Actually, the spec doesn't currently say what happens when a stream
that is being transmitted just ends, either. I guess I should spec that...

...ok, now the spec is clear that an ended stream transmits blackness and
silence. The same applies if some tracks are disabled. (Blackness only if
there's a video track; silence only if there's an audio track.)


OK, I guess that 

Re: [whatwg] PeerConnection, MediaStream, getUserMedia(), and other feedback

2011-07-16 Thread timeless
On Fri, Jul 15, 2011 at 1:55 PM, Shwetank Dixit shweta...@opera.com wrote:
 Just like, just allowing the web app to use the camera as it is will not
 make sense, and presumably, user agents will implement an authorization by
 the user before the app gains access to the camera (something like 'This
 application requests access to the camera. Allow for now/Always Allow/Never
 Allow/Close' just like you do in geolocation right now) ... just like that,
 you could do it for flash, where the app only gains access to it if the user
 allows it. If that is the implementation, I do not think there would be many
 hazards in allowing flash access.

Ignoring that there are in fact dangers for 'always allow' in the
location case, let's consider that it's probably the average choice of
a user (for geolocation, and you're suggesting the same UI here,
so it's probably what users will select without understanding any
risks).

If the user does choose 'always allow' for this app, and later they're
in a museum or some other interesting location, suddenly they're in
trouble, even though originally when they allowed it, that was OK.
That UI is not a good idea for this. The right approach is to just let
the user use their camera normally; the camera will manage flash (it
has to).


Re: [whatwg] PeerConnection, MediaStream, getUserMedia(), and other feedback

2011-07-15 Thread Shwetank Dixit

On Thu, 14 Jul 2011 18:53:00 +0530, timeless timel...@gmail.com wrote:


I'd expect a web app to have no idea about device camera
specifications and thus to not be able to properly specify a flash
duration. I don't see how such a thing is valuable.

If a user is in a movie theater, or a museum, it's quite likely they
won't notice a web app is forcing a flash. Let the user control flash
through a "useragent only" or "host application only" mode. I believe the
hazards of exposing flash duration outweigh any benefits. The only
application class I know of built using control of camera flash is
flash-light, and that's both a hack and not guaranteed to be
workable for all possible flash technologies.


Just like, just allowing the web app to use the camera as it is will not
make sense, and presumably, user agents will implement an authorization by
the user before the app gains access to the camera (something like 'This
application requests access to the camera. Allow for now/Always
Allow/Never Allow/Close' just like you do in geolocation right now) ...
just like that, you could do it for flash, where the app only gains access
to it if the user allows it. If that is the implementation, I do not think
there would be many hazards in allowing flash access.


Apart from helping capture images/video in low light conditions, there are  
a few other use cases for flash such as the flash light thing you  
mentioned, as well as a possible S.O.S type app.


I'm fine if the consensus is that the device/user agent will handle the  
issue of flash by showing some sort of control where the user can click  
between 'flash on/off/auto'. That will cover *most* of the use cases,  
which is recording images/video in low light conditions. If so, then it  
might be good to specify that somewhere in the spec just to make things a  
bit clearer?




On 7/14/11, Shwetank Dixit shweta...@opera.com wrote:

On Thu, 14 Jul 2011 04:09:40 +0530, Ian Hickson i...@hixie.ch wrote:




Another question is flash. As far as I have seen, there seems to be no
option to specify whether the camera needs to use flash or not. Is this
decision left up to the device? (If someone is making an app which is
just clicking a picture of the person, then it would be nice to have the
camera use flash in low light conditions).

getUserMedia() returns a video stream, so it wouldn't use a flash.


Wouldn't it make sense to have a provision for flash separately then? I
think a lot of apps would like just a picture instead of video, and in
those cases, flash would be required. Maybe a separate provision in the
spec which defines whether to use flash, and if so, for how many
milliseconds. Is that doable?
--
Shwetank Dixit
Web Evangelist,
Site Compatibility / Developer Relations / Core Engineering Group
Member - W3C Mobile Web for Social Development (MW4D) Group
Member - Web Standards Project (WaSP) - International Liaison Group
Opera Software - www.opera.com

Using Opera's revolutionary email client: http://www.opera.com/mail/






--
Shwetank Dixit
Web Evangelist,
Site Compatibility / Developer Relations / Core Engineering Group
Member - W3C Mobile Web for Social Development (MW4D) Group
Member - Web Standards Project (WaSP) - International Liaison Group
Opera Software - www.opera.com

Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: [whatwg] PeerConnection, MediaStream, getUserMedia(), and other feedback

2011-07-14 Thread Shwetank Dixit

On Thu, 14 Jul 2011 04:09:40 +0530, Ian Hickson i...@hixie.ch wrote:




Another question is flash. As far as I have seen, there seems to be no
option to specify whether the camera needs to use flash or not. Is this
decision left up to the device? (If someone is making an app which is
just clicking a picture of the person, then it would be nice to have the
camera use flash in low light conditions).

getUserMedia() returns a video stream, so it wouldn't use a flash.


Wouldn't it make sense to have a provision for flash separately then? I  
think a lot of apps would like just a picture instead of video, and in  
those cases, flash would be required. Maybe a separate provision in the  
spec which defines whether to use flash, and if so, for how many  
milliseconds. Is that doable?

--
Shwetank Dixit
Web Evangelist,
Site Compatibility / Developer Relations / Core Engineering Group
Member - W3C Mobile Web for Social Development (MW4D) Group
Member - Web Standards Project (WaSP) - International Liaison Group
Opera Software - www.opera.com

Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: [whatwg] PeerConnection, MediaStream, getUserMedia(), and other feedback

2011-07-14 Thread timeless
I'd expect a web app to have no idea about device camera
specifications and thus to not be able to properly specify a flash
duration. I don't see how such a thing is valuable.

If a user is in a movie theater, or a museum, it's quite likely they
won't notice a web app is forcing a flash. Let the user control flash
through a "useragent only" or "host application only" mode. I believe the
hazards of exposing flash duration outweigh any benefits. The only
application class I know of built using control of camera flash is
flash-light, and that's both a hack and not guaranteed to be
workable for all possible flash technologies.

On 7/14/11, Shwetank Dixit shweta...@opera.com wrote:
 On Thu, 14 Jul 2011 04:09:40 +0530, Ian Hickson i...@hixie.ch wrote:


 Another question is flash. As far as I have seen, there seems to be no
 option to specify whether the camera needs to use flash or not. Is this
 decision left up to the device? (If someone is making an app which is
 just clicking a picture of the person, then it would be nice to have the
 camera use flash in low light conditions).
 getUserMedia() returns a video stream, so it wouldn't use a flash.

 Wouldn't it make sense to have a provision for flash separately then? I
 think a lot of apps would like just a picture instead of video, and in
 those cases, flash would be required. Maybe a separate provision in the
 spec which defines whether to use flash, and if so, for how many
 milliseconds. Is that doable?
 --
 Shwetank Dixit
 Web Evangelist,
 Site Compatibility / Developer Relations / Core Engineering Group
 Member - W3C Mobile Web for Social Development (MW4D) Group
 Member - Web Standards Project (WaSP) - International Liaison Group
 Opera Software - www.opera.com

 Using Opera's revolutionary email client: http://www.opera.com/mail/


-- 
Sent from my mobile device


[whatwg] PeerConnection, MediaStream, getUserMedia(), and other feedback

2011-07-13 Thread Ian Hickson

In response to off-list feedback, I've renamed StreamTrack to 
MediaStreamTrack to be clearer about its relationship to the other 
interfaces.


On Wed, 1 Jun 2011, Tommy Widenflycht (ᛏᚮᛘᛘᚤ) wrote:
 
 We are having a bit of discussion regarding the correct behaviour when 
 mandatory arguments are undefined, see this webkit bug for history: 
 https://bugs.webkit.org/show_bug.cgi?id=60622
 
 Could we have some clarification for the below cases, please: [...]

Hopefully Aryeh and Cameron have sufficiently clarified this; please let 
me know if not.


On Wed, 8 Jun 2011, Per-Erik Brodin wrote:
 
 The TrackList feature seems to be a good way to control the different 
 components of a Stream. Although it is said that tracks provide a way to 
 temporarily disable a local camera, due to the nature of the 
 ExclusiveTrackList it is still not possible to disable video altogether, 
 i.e. to 'pull down the curtain' in a video conference. I noticed that 
 there is a bug filed on this issue but I do not think the proposed 
 solution there is quite right. There is a state in which no tracks are 
 selected in an ExclusiveTrackList, when the selected index returned is 
 -1. A quick fix would be to allow also setting the active track to -1 in 
 order to deselect all the other tracks.

This is fixed now, hopefully. Let me know if the fix is not sufficient.

(I replaced the videoTracks and audioTracks lists with a single tracks 
list in which you can enable and disable individual tracks.)


 I think a note would be appropriate that although the label on a 
 GeneratedStream is guaranteed to be unique for the conceptual stream, 
 there are situations where one ends up with multiple Stream objects with 
 the same label. For example, if the remote peer adds a stream, then 
 removes it, then adds the same stream again, you would end up with two 
 Stream objects with the same label if a reference to the removed Stream 
 is kept. Also, if the remote peer takes a stream that it receives and 
 sends it back you will end up with a Stream object that has the same 
 label as a local GeneratedStream object.

Done.


 We prefer having a StreamRecorder that you have to stop in order to get the
 recorded data (like the previous one, but with asynchronous Blob retrieval),
 and we do not understand the use cases for the current proposal, where
 recording continues until the recorder is garbage collected (or the Stream
 ends) and you always get the data from the beginning of the recording. This
 also has to be tied to application quota in some way.

The current design is just the result of needing to define what happens 
when you call getRecordedData() twice in a row. Could you elaborate on 
what API you think we should have?


 The recording example does not seem correct either; it never calls 
 record(), and it calls getRecordedData() directly on the 
 GeneratedStream object.

Fixed.


 Instead of blob: we would like to use stream: for the Stream URLs so 
 that, very early on in the media engine selection, we can use the protocol 
 scheme to determine how the URL will be handled. Blobs are typically 
 handled in the same way as other media playback. The definition of 
 stream: could be the same as for blob:.

Why can't the UA know which blob: URLs point to streams and which point to 
blobs?


 In addStream(), the readyState of the Stream is not checked to see if it is
 ENDED, in which case adding a stream should fail (perhaps throwing a TypeError
 exception like when passing null).

The problem is that if we do that there'd be a race condition: what 
happens if the stream is ended between the time the script tests whether 
the stream is ended or not and the time the stream is passed to the 
object? I would rather that not be unreliable.
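
[Editorial sketch of the race being described, in JavaScript; readyState 
and the ENDED constant follow the draft names quoted above, and 'pc' 
stands for a PeerConnection.]

    if (stream.readyState !== stream.ENDED) {
      // The stream can end right here, between the check and the call,
      // so an addStream() that throws on ENDED streams would still be racy.
      pc.addStream(stream);
    }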

Actually, the spec doesn't currently say what happens when a stream 
that is being transmitted just ends, either. I guess I should spec that...

...ok, now the spec is clear that an ended stream transmits blackness and 
silence. The same applies if some tracks are disabled. (Blackness only if 
there's a video track; silence only if there's an audio track.)


 When a received Stream is removed its readyState is not set to ENDED 
 (and no 'ended' event is dispatched).

I've clarified this so that it is clear that the state change and event do 
happen.


 PeerConnection is an EventTarget, but it still uses a callback for the 
 signaling messages, and this mixture of events and callbacks is a bit 
 awkward in my opinion. If you would like to change the function that 
 handles signaling messages after calling the constructor, you would have 
 to wrap a function call inside the callback to the actual signal 
 handling function, instead of just (re-)setting an onsignal (or 
 whatever) attribute listener (the event could reuse the MessageEvent 
 interface).

When would you change the callback?

My concern with making the callback an event handler is that it leads to a 
set of poor failure modes and