Re: [whatwg] Iframe Sandbox Attribute - allow-plugins?

2011-07-13 Thread Jonas Sicking
On Wed, Jul 13, 2011 at 9:49 PM, Anne van Kesteren  wrote:
>
> On Wed, 13 Jul 2011 23:13:05 +0200, Julian Reschke wrote:
>>
>> Yes, but we can *define* the flag in HTML and write down what it means with 
>> respect to plugin APIs.
>
> It seems much better to wait until it can actually be implemented.

Especially since it's not at all clear to me that a specific opt-in
mechanism is at all needed once we have the appropriate plugin APIs
implemented. And those APIs are needed anyway if we want to allow
plugins in any form in the sandbox.

/ Jonas


Re: [whatwg] Iframe Sandbox Attribute - allow-plugins?

2011-07-13 Thread Anne van Kesteren
On Wed, 13 Jul 2011 23:13:05 +0200, Julian Reschke wrote:
> Yes, but we can *define* the flag in HTML and write down what it means
> with respect to plugin APIs.


It seems much better to wait until it can actually be implemented.


--
Anne van Kesteren
http://annevankesteren.nl/


Re: [whatwg] Proposal for a MediaSource API that allows sending media data to a HTMLMediaElement

2011-07-13 Thread Robert O'Callahan
On Thu, Jul 14, 2011 at 4:35 AM, Aaron Colwell  wrote:

> I am open to suggestions. My intent was that the browser would not attempt
> to cache any data passed into append(). It would just demux the buffers that
> are sent in. When a seek is requested, it flushes whatever it has and waits
> for more data from append().  If the web application wants to do caching it
> can use the WebStorage or File APIs. If the browser's media engine needs a
> certain amount of "preroll" data before it starts playback it can signal
> this explicitly through new attributes or just use HAVE_FUTURE_DATA
> & HAVE_ENOUGH_DATA readyStates to signal when it has enough.


OK, I sorta get the idea. I think you're defining a new interface to the
media processing pipeline that integrates with the demuxer and codecs at a
different level to regular media resource loading. (For example, all the
browser's built-in logic for seeking and buffering would have to be disabled
and/or bypassed.) As such, it would have to be carefully specified,
potentially in a container- or codec-dependent way, unlike APIs like Blobs
which work "just like" regular media resource loading and can thus work with
any container/codec.

I'm not sure what the best way to do this is, to be honest. It comes down to
the use-cases. If you want to experiment with different seeking strategies,
can't you just do that in Chrome itself? If you want scriptable adaptive
streaming (or even if you don't) then I think we want APIs for seamless
transitioning along a sequence of media resources, or between resources
loaded in parallel.
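
For comparison, the closest pages can get today is chaining two plain media
elements, which is anything but seamless; the element ids and URL below are
illustrative:

  var current = document.getElementById('v1');
  var next = document.getElementById('v2');

  next.preload = 'auto';
  next.src = 'part2.webm';   // illustrative URL

  current.addEventListener('ended', function () {
    // There is a visible/audible gap at this handoff; a real seamless
    // transition API would let the browser splice the resources itself.
    current.style.display = 'none';
    next.style.display = '';
    next.play();
  });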

Rob
-- 
"If we claim to be without sin, we deceive ourselves and the truth is not in
us. If we confess our sins, he is faithful and just and will forgive us our
sins and purify us from all unrighteousness. If we claim we have not sinned,
we make him out to be a liar and his word is not in us." [1 John 1:8-10]


[whatwg] PeerConnection, MediaStream, getUserMedia(), and other feedback

2011-07-13 Thread Ian Hickson

In response to off-list feedback, I've renamed StreamTrack to 
MediaStreamTrack to be clearer about its relationship to the other 
interfaces.


On Wed, 1 Jun 2011, Tommy Widenflycht wrote:
> 
> We are having a bit of discussion regarding the correct behaviour when 
> mandatory arguments are undefined, see this webkit bug for history: 
> https://bugs.webkit.org/show_bug.cgi?id=60622
> 
> Could we have some clarification for the below cases, please: [...]

Hopefully Aryeh and Cameron have sufficiently clarified this; please let 
me know if not.


On Wed, 8 Jun 2011, Per-Erik Brodin wrote:
> 
> The TrackList feature seems to be a good way to control the different 
> components of a Stream. Although it is said that tracks provide a way to 
> temporarily disable a local camera, due to the nature of the 
> ExclusiveTrackList it is still not possible to disable video altogether, 
> i.e. to 'pull down the curtain' in a video conference. I noticed that 
> there is a bug filed on this issue but I do not think the proposed 
> solution there is quite right. There is a state in which no tracks are 
> selected in an ExclusiveTrackList, when the selected index returned is 
> -1. A quick fix would be to allow also setting the active track to -1 in 
> order to deselect all the other tracks.

This is fixed now, hopefully. Let me know if the fix is not sufficient.

(I replaced the videoTracks and audioTracks lists with a single tracks 
list in which you can enable and disable individual tracks.)
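
For concreteness, disabling video altogether ('pulling down the curtain') 
might then look something like this, assuming the stream object exposes the 
new tracks list and that tracks carry 'kind' and 'enabled' members; the 
exact names may differ from the draft:

  // Disable every video track on the local stream; as clarified later in
  // this message, disabled tracks transmit blackness rather than tearing
  // the stream down.
  for (var i = 0; i < stream.tracks.length; i++) {
    if (stream.tracks[i].kind == 'video') {
      stream.tracks[i].enabled = false;
    }
  }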


> I think a note would be appropriate that although the label on a 
> GeneratedStream is guaranteed to be unique for the conceptual stream, 
> there are situations where one ends up with multiple Stream objects with 
> the same label. For example, if the remote peer adds a stream, then 
> removes it, then adds the same stream again, you would end up with two 
> Stream objects with the same label if a reference to the removed Stream 
> is kept. Also, if the remote peer takes a stream that it receives and 
> sends it back you will end up with a Stream object that has the same 
> label as a local GeneratedStream object.

Done.


> We prefer having a StreamRecorder that you have to stop in order to get the
> recorded data (like the previous one, but with asynchronous Blob retrieval)
> and we do not understand the use cases for the current proposal where
> recording continues until the recorder is garbage collected (or the Stream
> ends) and you always get the data from the beginning of the recording. This
> also has to be tied to application quota in some way.

The current design is just the result of needing to define what happens 
when you call getRecordedData() twice in a row. Could you elaborate on 
what API you think we should have?
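
For reference, the stop-then-retrieve shape described above might look 
something like this; record() is taken from the current example, while 
stop() with an asynchronous Blob callback and uploadRecording() are purely 
hypothetical:

  var recorder = stream.record();      // start recording

  // ... some time later ...
  recorder.stop(function (blob) {      // hypothetical: stop() delivers the
    uploadRecording(blob);             // recorded data asynchronously
  });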


> The recording example does not seem correct either: it never calls 
> record(), and it calls getRecordedData() directly on the 
> GeneratedStream object.

Fixed.


> Instead of blob: we would like to use stream: for the Stream URLs so 
> that, very early on in media engine selection, we can use the protocol 
> scheme to determine how the URL will be handled. Blobs are typically 
> handled in the same way as other media playback. The definition of 
> stream: could be the same as for blob:.

Why can't the UA know which blob: URLs point to streams and which point to 
blobs?


> In addStream(), the readyState of the Stream is not checked to see if it is
> ENDED, in which case adding a stream should fail (perhaps throwing a TypeError
> exception like when passing null).

The problem is that if we do that there'd be a race condition: what 
happens if the stream is ended between the time the script tests whether 
the stream is ended or not and the time the stream is passed to the 
object? I would rather that not be unreliable.
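
To illustrate the race (stream and peerConnection stand for objects the page 
already holds; readyState, ENDED and addStream() are the names used above):

  if (stream.readyState != Stream.ENDED) {
    // The stream may end right here, after the test but before the call,
    // so a page could still hit the ENDED case despite checking first.
    peerConnection.addStream(stream);
  }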

Actually, the spec doesn't currently say what happens when a stream that 
is being transmitted just ends, either. I guess I should spec that...

...ok, now the spec is clear that an ended stream transmits blackness and 
silence. Same with if some tracks are disabled. (Blackness only if there's 
a video track; silence only if there's an audio track.)


> When a received Stream is removed its readyState is not set to ENDED 
> (and no 'ended' event is dispatched).

I've clarified this so that it is clear that the state change and event do 
happen.
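
With that clarified, a page can rely on something like the following when 
the remote peer removes a stream (removeVideoElementFor() is a hypothetical 
page helper):

  receivedStream.addEventListener('ended', function () {
    // readyState is now ENDED; drop whatever UI was showing this stream.
    removeVideoElementFor(receivedStream);
  });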


> PeerConnection is an EventTarget but it still uses a callback for the 
> signaling messages and this mixture of events and callbacks is a bit 
> awkward in my opinion. If you would like to change the function that 
> handles signaling messages after calling the constructor you would have 
> to wrap a function call inside the callback to the actual signal 
> handling function, instead of just (re-)setting an onsignal (or 
> whatever) attribute listener (the event could reuse the MessageEvent 
> interface).

When would you change the callback?

My concern with making the callback an event handler is that 

Re: [whatwg] Iframe Sandbox Attribute - allow-plugins?

2011-07-13 Thread Julian Reschke

On 2011-07-13 22:58, Adam Barth wrote:
> On Wed, Jul 13, 2011 at 1:55 PM, Julian Reschke wrote:
>> On 2011-07-13 22:31, Adam Barth wrote:
>>> Adding allow-plugins today would defeat the prevention of parent
>>> redirection.
>>>
>>> The short answer is we need an API for informing plugins of the
>>> sandbox flags and a way of confirming that the plugins understand
>>> those bits before we can allow plugins inside sandboxed frames.
>>
>> ...but that API is outside the scope of what the W3C and the WhatWG
>> currently do, so I think it would be great if defining this flag could be
>> decoupled from progress on the plugin API layers.
>
> It is coupled in the sense that we can't implement the flag unless and
> until such a plug-in API exists.

Yes, but we can *define* the flag in HTML and write down what it means 
with respect to plugin APIs.


Best regards, Julian


Re: [whatwg] Iframe Sandbox Attribute - allow-plugins?

2011-07-13 Thread Adam Barth
On Wed, Jul 13, 2011 at 1:55 PM, Julian Reschke  wrote:
> On 2011-07-13 22:31, Adam Barth wrote:
>> Adding allow-plugins today would defeat the prevention of parent
>> redirection.
>>
>> The short answer is we need an API for informing plugins of the
>> sandbox flags and a way of confirming that the plugins understand
>> those bits before we can allow plugins inside sandboxed frames.
>
> ...but that API is outside the scope of what the W3C and the WhatWG
> currently do, so I think it would be great if defining this flag could be
> decoupled from progress on the plugin API layers.

It is coupled in the sense that we can't implement the flag unless and
until such a plug-in API exists.

Adam


Re: [whatwg] Iframe Sandbox Attribute - allow-plugins?

2011-07-13 Thread Julian Reschke

On 2011-07-13 22:31, Adam Barth wrote:
> Adding allow-plugins today would defeat the prevention of parent redirection.
>
> The short answer is we need an API for informing plugins of the
> sandbox flags and a way of confirming that the plugins understand
> those bits before we can allow plugins inside sandboxed frames.

...but that API is outside the scope of what the W3C and the WhatWG 
currently do, so I think it would be great if defining this flag could 
be decoupled from progress on the plugin API layers.


Best regards, Julian


Re: [whatwg] Iframe Sandbox Attribute - allow-plugins?

2011-07-13 Thread Adam Barth
Adding allow-plugins today would defeat the prevention of parent redirection.

The short answer is we need an API for informing plugins of the
sandbox flags and a way of confirming that the plugins understand
those bits before we can allow plugins inside sandboxed frames.

Adam


On Wed, Jul 13, 2011 at 12:53 PM, John Richards wrote:
> http://www.whatwg.org/specs/web-apps/current-work/multipage/the-iframe-element.html#attr-iframe-sandbox
>
> Are there plans to have an 'allow-plugins' value?
>
> I'm assuming there will be use-cases where the only protection that is
> desired is prevention of parent redirection.
>
> Thanks
>


[whatwg] Iframe Sandbox Attribute - allow-plugins?

2011-07-13 Thread John Richards
http://www.whatwg.org/specs/web-apps/current-work/multipage/the-iframe-element.html#attr-iframe-sandbox

Are there plans to have an 'allow-plugins' value?

I'm assuming there will be use-cases where the only protection that is
desired is prevention of parent redirection.

Thanks
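
For concreteness, the use case might look like this (allow-plugins is the 
hypothetical keyword being asked about; the other tokens are existing 
sandbox keywords and the URL is illustrative):

  var frame = document.createElement('iframe');
  // Scripts and same-origin access are re-enabled, but the frame still
  // cannot navigate its parent page.
  frame.setAttribute('sandbox', 'allow-scripts allow-same-origin allow-plugins');
  frame.src = 'https://example.com/widget.html';
  document.body.appendChild(frame);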


Re: [whatwg] Proposal for a MediaSource API that allows sending media data to a HTMLMediaElement

2011-07-13 Thread Aaron Colwell
On Tue, Jul 12, 2011 at 5:05 PM, Robert O'Callahan wrote:

> On Wed, Jul 13, 2011 at 12:00 PM, Aaron Colwell wrote:
>
>> On Tue, Jul 12, 2011 at 4:44 PM, Robert O'Callahan wrote:
>>
>>> I had imagined that this API would let the author feed in the same data
>>> as you would load from some URI. But that can't be what's happening, since
>>> in some element implementations (e.g., Gecko's) loaded data is buffered
>>> internally and seeking might not require any new data to be loaded.
>>>
>>>
>>  No. The idea is to allow JavaScript to manage fetching the media data so
>> various fetching strategies could be implemented without needing to change
>> the browser. My initial motivation is for supporting adaptive streaming with
>> this mechanism, but I think various media mashup and delivery scenarios
>> could be explored with this.
>>
>
> I don't think you can do that with this API without making huge assumptions
> about what the browser's demuxer, internal caching, etc are doing.
>
>
I am open to suggestions. My intent was that the browser would not attempt
to cache any data passed into append(). It would just demux the buffers that
are sent in. When a seek is requested, it flushes whatever it has and waits
for more data from append().  If the web application wants to do caching it
can use the WebStorage or File APIs. If the browser's media engine needs a
certain amount of "preroll" data before it starts playback it can signal
this explicitly through new attributes or just use HAVE_FUTURE_DATA
& HAVE_ENOUGH_DATA readyStates to signal when it has enough.
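
A rough sketch of that pattern, assuming only the append() method discussed 
in this thread; the URL, chunk size, and byteOffsetForTime() helper are 
illustrative, not part of the proposal:

  var video = document.querySelector('video');

  // Plain XHR with a Range request stands in for whatever fetching
  // strategy the page's own adaptive logic chooses.
  function fetchChunk(url, start, end, onData) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url);
    xhr.responseType = 'arraybuffer';
    xhr.setRequestHeader('Range', 'bytes=' + start + '-' + end);
    xhr.onload = function () { onData(new Uint8Array(xhr.response)); };
    xhr.send();
  }

  // The browser only demuxes what it is handed; it keeps no cache of it.
  fetchChunk('movie.webm', 0, 512 * 1024 - 1, function (data) {
    video.append(data);
  });

  // On seek the media engine flushes and waits for more append() calls,
  // so the page re-fetches data for the new position itself.
  video.addEventListener('seeking', function () {
    var offset = byteOffsetForTime(video.currentTime); // hypothetical index lookup
    fetchChunk('movie.webm', offset, offset + 512 * 1024 - 1, function (data) {
      video.append(data);
    });
  });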

Aaron