Re: WS/Service Workers, TLS and future apps - [was Re: HTTP is just fine]

2015-12-02 Thread Aymeric Vitte


Le 01/12/2015 20:41, Brad Hill a écrit :
>> As far as I see it, a "mixed content" has the word "content", which is
> supposed to designate something that can be included in a web page and
> therefore be dangerous.
> 
> "Mixed Content" (and "mixed content blocking") is a term of art that has
> been in use for many years in the browser community.  As such, you are
> correct that it is a bit inadequate in that it's coinage predates the
> widespread use of AJAX-type patterns, but we felt that it was better to
> continue to use and refine a well-known term than to introduce a new
> term. Apologies if this has created any confusion.
> 
> More than just "content", the question boils down to "what does seeing
> the lock promise the user?"  Browsers interpret the lock as a promise to
> the user that the web page / application they are currently interacting
> with is safe according to the threat model of TLS.  That is to say, it
> is protected end-to-end on the network path against attempts to
> impersonate, eavesdrop or modify traffic.  In a modern browser, this
> includes not just fetches of images and script done declaratively in
> HTML, but any kind of potentially insecure network communication.
> 

Then you should follow your rules and apply this policy to WebRTC, ie
allow WebRTC to work only with http.

> In general, we've seen this kind of argument before for building secure
> protocols at layer 7 on top of insecure transports - notably Netflix's
> "Message Security Layer".  Thus far, no browser vendors have been
> convinced that these are good ideas, that there is a way to safely allow
> their use in TLS-protected contexts, or that building these systems with
> existing TLS primitives is not an adequate solution.

Netflix's MSL doc is not clear and just mentions an issue with http (not
ws) inside https, probably they have the very same problem with non
valid certificates.

Browser vendors are convinced that http with ws is better than https
with ws...

Could you please provide an example of serious issue regarding https
with ws?

> 
> In specific, I don't think you'll ever find support for treating Tor
> traffic that is subject to interception and modification after it leaves
> an exit node as equivalent to HTTPS

??? Do you really know the Tor protocol?

And where did you see this in the use cases (something that exits)?

If something has to exit Tor it's obvious that https must be used, the
exit node is by design a potential mitm.

But https inside the Tor protocol.

Using wss to send the Tor traffic does not change anything to this
situation.

This contradiction that you highlight here just shows again that the
current rules are not logical.

, especially since we know there are
> active attacks being mounted against this traffic on a regular basis.
>  (This is why I suggested .onion sites as potentially secure contexts,
> which do not suffer from the same exposure outside of the Tor network.)

Same as above, that's the third or fourth time in this thread that I am
talked about the fb hidden service and its .onion certificate, supposed
to improve some security while it does not (or a very little in case fb
hidden service is not located with the Tor server, but that's more a fb
configuration issue)

That's another proof that https does not improve anything here.

While browsing hidden services could be a use case in the future (once
the "browsing paradox" is solved), I am not talking about this for now,
I am talking about using the Tor protocol or another secure protocole
for multiple services.


> 
> -Brad
> 
> On Tue, Dec 1, 2015 at 5:42 AM Aymeric Vitte <vitteayme...@gmail.com
> <mailto:vitteayme...@gmail.com>> wrote:
> 
> 
> 
> Le 01/12/2015 05:31, Brad Hill a écrit :
> > Let's keep this discussion civil, please.
> 
> Maybe some wording was a little tough below, apologies for this, the
> logjam attack is difficult to swallow, how something that is supposed to
> protect forward secrecy can do quietly the very contrary without even
> having the keys compromised, it's difficult to understand too why TLS
> did not implement a mechanism to protect the DH client public key.
> 
> >
> > The reasons behind blocking of non-secure WebSocket connections from
> > secure contexts are laid out in the following document:
> >
> > http://www.w3.org/TR/mixed-content/
> >
> > A plaintext ws:// connection does not meet the requirements of
> > authentication, encryption and integrity, so far as the user agent is
> > able to tell, so it cannot allow it.
> 
> The spec just mentions to align the behavior of ws to xhr, fetch and
> eventsour

Re: WS/Service Workers, TLS and future apps - [was Re: HTTP is just fine]

2015-12-02 Thread Aymeric Vitte


Le 02/12/2015 13:18, Florian Bösch a écrit :
> On Wed, Dec 2, 2015 at 12:50 PM, Aymeric Vitte <vitteayme...@gmail.com
> <mailto:vitteayme...@gmail.com>> wrote:
> 
> Then you should follow your rules and apply this policy to WebRTC, ie
> allow WebRTC to work only with http.
> 
> 
> Just as a sidenote, WebRTC also does UDP and there's no TLS over UDP.
> Also WebRTC does P2P, and there's no certificates/authorities there (you
> could encrypt, but I don't think it does even when using TCP/IP (which
> it doesn't in case of streaming video over UDP).

See https://github.com/Ayms/node-Tor#security, WebRTC uses DTLS with
self-signed certifcates + a third party mechanism supposed to secure the
connection.

As a matter of fact this is almost exactly the same mechanism used by
the Tor network, where the CERTS cells use the long term ID key of a Tor
node to make sure that you are discussing with that one.

This does not prevent of course from discussing with a malicious node
not identified as such with valid long term ID keys, which is not a
problem for Tor (but is a problem for WebRTC), as long as it behaves as
expected, and if it does not, this will be detected.

The above mechanism is specific to the Tor network, for other uses of
the Tor protocol an alternative is explained here:
https://github.com/Ayms/node-Tor#pieces-and-sliding-window for WebRTC

And again, adding a TLS layer on top of all this is of complete no use.

-- 
Get the torrent dynamic blocklist: http://peersm.com/getblocklist
Check the 10 M passwords list: http://peersm.com/findmyass
Anti-spies and private torrents, dynamic blocklist: http://torrent-live.org
Peersm : http://www.peersm.com
torrent-live: https://github.com/Ayms/torrent-live
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms



Re: WS/Service Workers, TLS and future apps - [was Re: HTTP is just fine]

2015-12-01 Thread Aymeric Vitte


Le 01/12/2015 05:31, Brad Hill a écrit :
> Let's keep this discussion civil, please.  

Maybe some wording was a little tough below, apologies for this, the
logjam attack is difficult to swallow, how something that is supposed to
protect forward secrecy can do quietly the very contrary without even
having the keys compromised, it's difficult to understand too why TLS
did not implement a mechanism to protect the DH client public key.

> 
> The reasons behind blocking of non-secure WebSocket connections from
> secure contexts are laid out in the following document:
> 
> http://www.w3.org/TR/mixed-content/
> 
> A plaintext ws:// connection does not meet the requirements of
> authentication, encryption and integrity, so far as the user agent is
> able to tell, so it cannot allow it.  

The spec just mentions to align the behavior of ws to xhr, fetch and
eventsource without giving more reasons, or I missed them.

Let's concentrate on ws here.

As far as I see it, a "mixed content" has the word "content", which is
supposed to designate something that can be included in a web page and
therefore be dangerous.

WS cannot include anything in a page by itself, it is designed to
discuss with external entities, for other purposes than fetching
resources (images, js, etc) from web servers that are logically tied to
a domain, in that case you can use xhr or other fetching means instead.

Therefore it is logical to envision that those external entities used
with WS are not necessarily web servers and might not have valid
certificates.

WS cannot hurt anything unless the application decides to insert the
results in the page, which is not something problematic with WS only,
the application is loaded via https, so is supposed to be secure but if
it is doing wrong things nothing can save you, even wss.

Unlike usual fetching means, in case of malicious use WS cannot really
hurt like scaning URLs, this is a specific protocol not talked by anybody.

As a result of the current policy, if we want to establish WS with
entities that can't have a valid certificate, we must load the code via
http which is obviously completely insecure.

So forbiding WS with https is just putting the users at risk and prevent
any current or future uses of ws with entities that can't have a valid
certificate, therefore reducing the interest and potential of ws to
something very small.


> 
> If there is a plausible mechanism by which browsers could distinguish
> external communications which meet the necessary security criteria using
> protocols other than TLS or authentication other than from the Web PKI,
> there is a reasonable case to be made that such could be considered as
> potentially secure origins and URLs.  (as has been done to some extent
> for WebRTC, as you have already noted)
> 

To some extent yes... maybe some other solutions could be studied via
something like letsencrypt to get automatically something like temporary
valid certificates (which then might eliminate the main topic of this
discussion if feasible)


> If you want to continue this discussion here, please:
> 
> 1) State your use cases clearly for those on this list who do not
> already know them.  You want to "use the Tor protocol" over websockets? 

Note: I am not affiliated at all to the Tor project

Yes, that's already a reality with projects such as Peersm and Flashproxy.

Peersm has the onion proxy function inside browsers which establishes
Tor circuits with Tor nodes using WS.

Flashproxy connects a censored Tor user to a Tor node and relays the Tor
protocol using WS between both.

> To connect to what?  Why? 

The Tor protocol is just an example, let's see it as another secure
protocol.

If we go further, we can imagine what I described here:
https://mailman.stanford.edu/pipermail/liberationtech/2015-November/015680.html

Which mentions the "browsing paradox" too.

Not usual I believe (last paragraph, the idea is not to proxy only to
URLs as it is today but to proxy to interfaces, such as ws,xhr and
WebRTC), but this will happen one day (maybe I should patent this too...)

Applications are numerous and not restricted to those examples.

 Why is it important to bootstrap an
> application like this over regular http(s) instead of, for example, as
> an extension or modified user-agent like TBB?

The Tor Browser is designed for secure browsing, because for example
solving the "browsing paradox" and having the onion proxy inside
browsers is not enough to insure secure anonymous browsing, Tor
Browser's features will still be required.

Some applications need some modifications of the browser, some others
need extensions but plenty of applications could be installed from the
web sites directly.

The obvious advantages are: no installation, works on any
device/platform, no dev/maintenance of the application for different
platforms with the associated risks (sw integrity) and complexity for
the users (not talking here about the problematic aspect of code loading
for a web 

Re: WS/Service Workers, TLS and future apps - [was Re: HTTP is just fine]

2015-11-30 Thread Aymeric Vitte
Not sure that you know what you are talking about here, maybe influenced
by fb's onion things, or you misunderstood what I wrote.

I am not talking about the Tor network, neither the Hidden services, I
am talking about the Tor protocol itself, that's different and it is
known to be strong, but this is just an example, let's see it as another
secure protocol to connect browsers to other entities that can not have
valid certificates for obvious reasons.

Whatever number of bits are used for RSA/sym crypto/SHA the Tor protocol
is resistant to the logjam trivial DH_export quasi undetectable
downgrade attack that nobody anticipated during years, on purpose or
not, I don't know, but that's obvious that the DH client public key for
TLS could have been protected by the public key of the server, like the
Tor protocol is doing, so maybe you should refrain your compliments
about TLS.

And the Tor protocol have TLS on top of it, so below the right sequence
is ws + TLS + Tor protocol.

And it checks that the one you are connected to is the one with whom you
have established the TLS connection (who can be a MITM again, but you
don't care, you just want to be sure with whom you are discussing with,
like what WebRTC is trying to do)

But again, that's not really the subject of the discussion, the subject
is what is really the problem of letting an interface that has access to
nothing (WS) work with https? Knowing that you can use it with another
protocol that you can estimate better, but could be worse, again what
does it hurt?

Or just deprecate ws because if it has to work only with entities that
own valid certificates, then it's of quasi no use for the future.

Le 30/11/2015 21:00, Brad Hill a écrit :
> I don't think there is universal agreement among browser engineers (if
> anyone agrees at all) with your assertion that the Tor protocol or even
> Tor hidden services are "more secure than TLS".  TLS in modern browsers
> requires RSA 2048-bit or equivalent authentication, 128-bit symmetric
> key confidentiality and SHA-256 or better integrity.If .onion
> identifiers and the Tor protocol crypto were at this level of strength,
> it would be reasonable to argue that a .onion connection represented a
> "secure context", and proceed from there.  In the meantime, with .onion
> site security (without TLS) at 80-bits of truncation of a SHA-1 hash of
> a 1024 bit key, I don't think you'll get much traction in insisting it
> is equivalent to or better than TLS.
> 
> On Mon, Nov 30, 2015 at 7:52 AM Aymeric Vitte <vitteayme...@gmail.com
> <mailto:vitteayme...@gmail.com>> wrote:
> 
> Redirecting this to WebApps since it's probable that we are facing a
> design mistake that might amplify by deprecating non TLS connections. I
> have submitted the case to all possible lists in the past, never got a
> clear answer and was each time redirected to another list (ccing
> webappsec but as a whole I think that's a webapp matter, so please don't
> state only that "downgrading a secure connection to an insecure one is
> insecure").
> 
> The case described below is simple:
> 
> 1- https page loading the code, the code establishes ws + the Tor
> protocol to "someone" (who can be a MITM or whatever, we don't care as
> explained below)
> 
> 2- http page loading the code, the code establishes ws + the Tor
> protocol
> 
> 3- https page loading the code, the code establishes wss + the Tor
> protocol
> 
> 4- https page loading the code, the code establishes normal wss
> connections
> 
> 3 fails because the WS servers have self-signed certificates.
> 
> What is insecure between 1 and 2? Obviously this is 2, because loading
> the code via http.
> 
> Even more, 1 is more secure than 4, because the Tor protocol is more
> secure than TLS.
> 
> It's already a reality that projects are using something like 1 and will
> continue to build systems on the same principles (one can't argue that
> such systems are unsecure or unlikely to happen, that's not true, see
> the Flashproxy project too).
> 
> But 1 fails too, because ws is not allowed inside a https page, so we
> must use 2, which is insecure and 2 might not work any longer later.
> 
> Service Workers are doing about the same, https must be used, as far as
> I understand Service Workers can run any browser instance in background
> even if the spec seems to focus more on the offline aspects, so I
> suppose that having 1 inside a (background) Service Worker will fail
> too.
> 
> Now we have the "new" "progressive Web Apps" which surprisingly present
> as a revolution the possibility to have a web app look like a n

WS/Service Workers, TLS and future apps - [was Re: HTTP is just fine]

2015-11-30 Thread Aymeric Vitte
Redirecting this to WebApps since it's probable that we are facing a
design mistake that might amplify by deprecating non TLS connections. I
have submitted the case to all possible lists in the past, never got a
clear answer and was each time redirected to another list (ccing
webappsec but as a whole I think that's a webapp matter, so please don't
state only that "downgrading a secure connection to an insecure one is
insecure").

The case described below is simple:

1- https page loading the code, the code establishes ws + the Tor
protocol to "someone" (who can be a MITM or whatever, we don't care as
explained below)

2- http page loading the code, the code establishes ws + the Tor protocol

3- https page loading the code, the code establishes wss + the Tor protocol

4- https page loading the code, the code establishes normal wss connections

3 fails because the WS servers have self-signed certificates.

What is insecure between 1 and 2? Obviously this is 2, because loading
the code via http.

Even more, 1 is more secure than 4, because the Tor protocol is more
secure than TLS.

It's already a reality that projects are using something like 1 and will
continue to build systems on the same principles (one can't argue that
such systems are unsecure or unlikely to happen, that's not true, see
the Flashproxy project too).

But 1 fails too, because ws is not allowed inside a https page, so we
must use 2, which is insecure and 2 might not work any longer later.

Service Workers are doing about the same, https must be used, as far as
I understand Service Workers can run any browser instance in background
even if the spec seems to focus more on the offline aspects, so I
suppose that having 1 inside a (background) Service Worker will fail too.

Now we have the "new" "progressive Web Apps" which surprisingly present
as a revolution the possibility to have a web app look like a native app
while it can be done on iOS since the begining, same thing for some
offline caching features that were possible before, but this indeed
brings new things, hopefully we can have one day something like all the
cordova features inside browsers + background/headless browser instances.

So we are talking about web apps here, not about a web page loading
plenty of http/https stuff, web apps that can be used as
independant/native apps or nodes to relay traffic and therefore discuss
with some entities that can't be tied to a domain and can only use
self-signed certificates (like WebRTC peers, why do we have a security
exception here allowing something for WebRTC and not for this case?).

Then 1 must be possible with WS and Service Workers, because there are
no reasons why it should not be allowed and this will happen in the
future under different forms (see the link below), that's not illogical,
if you use wss then you expect it to work as such (ie fail with
self-signed certificates for example), if you use ws (what terrible
things can happen with ws exactly? ws can't access the DOM or whatever)
then you are on your own and should better know what you are doing,
that's not a reason to force you to use much more insecure 2.

Such apps can be loaded while navigating on a web site, entirely (ie the
web site is the app), or for more wide distribution from different sites
than the original app site via an iframe (very ugly way) or extracted as
a component (cool way, does not seem to be foreseen by anybody) with
user prompt/validation ("do you want to install application X?")
possibly running in background when needed in a sandboxed context with
service workers.

Le 25/11/2015 17:43, Aymeric Vitte a écrit :
> 
> 
> Le 20/11/2015 12:35, Richard Barnes a écrit :
>> On Thu, Nov 19, 2015 at 8:40 AM, Hanno Böck <ha...@hboeck.de> wrote:
>>
>>>> It's amazing how the same wrong arguments get repeated again and
>>>> again...
>>>>
>> +1000
>>
>> All of these points have been raised and rebutted several times.  My
>> favorite reference is:
>>
>> https://konklone.com/post/were-deprecating-http-and-its-going-to-be-okay
>>
>>
>>
> 
> You might not break the current internet but its future.
> 
> Example: https://bugzilla.mozilla.org/show_bug.cgi?id=917829
> 
> How do you intend to solve this? ie the case of an entity that just
> cannot have valid certificates and/or implements a secure protocol on
> top of an insecure one (ws here for Peersm project, the other party can
> be by design a "MITM" but we completely don't care per the secure
> protocol used, the MITM will not know what happens next)?
> 
> Like WebRTC too, but there is an exception for that one, self-signed
> certificates are (by some luck) accepted.
> 
> It's obvious that browsers will be used for new services involving those
> mechanisms in the future, like P2P systems as sket

Re: WS/Service Workers, TLS and future apps - [was Re: HTTP is just fine]

2015-11-30 Thread Aymeric Vitte
What are you talking about?

The logjam attack just shows that you (spec security experts of major
internet companies) are incompetent, or just knew about it.

You don't know Tor "plenty well", I am not referring at all to hidden
services, the fb case, or the ridiculous related case of a https cert
over a .onion

And for WebRTC which "requires the website to specify the key
fingerprint of the remote party, so you're secure against any attacker
besides the website", this is a really funny (not to say completely
stupid) solution involving a "website"
(https://github.com/Ayms/node-Tor#security), which shows again that you
are completely missing my point, a "website" is not the future and web
apps must be able to work without a "website", so with entities that
cannot have valid certificates.

Maybe some projects like letsencrypt should study the case.

Let's stop talking about Tor, please just explain why ws cannot be used
with https.

Le 01/12/2015 00:08, Richard Barnes a écrit :
> On Mon, Nov 30, 2015 at 5:52 PM, Aymeric Vitte <vitteayme...@gmail.com
> <mailto:vitteayme...@gmail.com>> wrote:
> 
> You must be kidding, the logjam attack showed the complete failure of
> TLS
> 
> 
> Sure, protocols have bugs, and bugs get fixed.  The things we require
> for HTTPS aren't even design goals of Tor.
> 
>  
> 
> and your 1/2/3 (notwithstanding the useless discussions about CAs &
> co), which does not apply to the Tor protocol that you don't know
> apparently but that fulfills 1/2/3
> 
> 
> I know Tor plenty well.  It's good for what it's designed for (e.g.,
> anonymity), but it's not designed to meet the requirements of HTTPS.
> 
> You may be interested in this Tor blog post that points out some
> advantages of doing HTTPS over Tor:
> 
> https://blog.torproject.org/blog/facebook-hidden-services-and-https-certs
> 
>  
> 
> I am not a Tor advocate, this is just an example illustrating why there
> are no reasons to forbid ws with https, and ws with https with service
> workers, and ws with https with future things, do you think that
> browsers will continue to discuss in the future with good old entities
> tied to a good old domain with a good old certificate?
> 
> Then what about WebRTC and DTLS self-signed certificates that the web is
> trying to secure by some strange ways?
> 
> 
> You seem to be missing the fact that WebRTC has additional security
> layers on top of the certificates.  The WebRTC connection process
> requires the website to specify the key fingerprint of the remote party,
> so you're secure against any attacker besides the website.  And if you
> don't trust that site, there's an identity layer that can provide
> additional authentication.
> 
> https://w3c.github.io/webrtc-pc/#sec.identity-proxy
> 
> --Richard
> 
> 
>  
> 
> 
> Le 30/11/2015 22:45, Richard Barnes a écrit :
> >
> >
> > On Mon, Nov 30, 2015 at 4:39 PM, Aymeric Vitte <vitteayme...@gmail.com 
> <mailto:vitteayme...@gmail.com>
> > <mailto:vitteayme...@gmail.com <mailto:vitteayme...@gmail.com>>>
> wrote:
> >
> > Not sure that you know what you are talking about here, maybe
> influenced
> > by fb's onion things, or you misunderstood what I wrote.
> >
> > I am not talking about the Tor network, neither the Hidden
> services, I
> > am talking about the Tor protocol itself, that's different and
> it is
> > known to be strong, but this is just an example, let's see it
> as another
> > secure protocol to connect browsers to other entities that can
> not have
> > valid certificates for obvious reasons.
> >
> >
> > HTTPS gives you the following essential properties:
> > 1. Authentication: You know that you're talking to who you think
> you're
> > talking to.
> > 2. Confidentiality: Nobody else can see what you're saying
> > 3. Integrity: Nobody else can interfere with your communications
> >
> > Show me another protocol that achieves those properties, and maybe
> we'll
> > have something to talk about.  Tor doesn't.
> >
> > --Richard
> >
> >
> > Whatever number of bits are used for RSA/sym crypto/SHA the
> Tor protocol
> > is resistant to the logjam trivial DH_export quasi undetectable
> > downgrade attack that nobody anticipated during years, on
> purpose or
> > not, I don't know, but that's obvious that the DH client
> 

Re: WS/Service Workers, TLS and future apps - [was Re: HTTP is just fine]

2015-11-30 Thread Aymeric Vitte
You must be kidding, the logjam attack showed the complete failure of
TLS and your 1/2/3 (notwithstanding the useless discussions about CAs &
co), which does not apply to the Tor protocol that you don't know
apparently but that fulfills 1/2/3

I am not a Tor advocate, this is just an example illustrating why there
are no reasons to forbid ws with https, and ws with https with service
workers, and ws with https with future things, do you think that
browsers will continue to discuss in the future with good old entities
tied to a good old domain with a good old certificate?

Then what about WebRTC and DTLS self-signed certificates that the web is
trying to secure by some strange ways?

Le 30/11/2015 22:45, Richard Barnes a écrit :
> 
> 
> On Mon, Nov 30, 2015 at 4:39 PM, Aymeric Vitte <vitteayme...@gmail.com
> <mailto:vitteayme...@gmail.com>> wrote:
> 
> Not sure that you know what you are talking about here, maybe influenced
> by fb's onion things, or you misunderstood what I wrote.
> 
> I am not talking about the Tor network, neither the Hidden services, I
> am talking about the Tor protocol itself, that's different and it is
> known to be strong, but this is just an example, let's see it as another
> secure protocol to connect browsers to other entities that can not have
> valid certificates for obvious reasons.
> 
> 
> HTTPS gives you the following essential properties:
> 1. Authentication: You know that you're talking to who you think you're
> talking to.
> 2. Confidentiality: Nobody else can see what you're saying
> 3. Integrity: Nobody else can interfere with your communications
> 
> Show me another protocol that achieves those properties, and maybe we'll
> have something to talk about.  Tor doesn't.
> 
> --Richard
> 
> 
> Whatever number of bits are used for RSA/sym crypto/SHA the Tor protocol
> is resistant to the logjam trivial DH_export quasi undetectable
> downgrade attack that nobody anticipated during years, on purpose or
> not, I don't know, but that's obvious that the DH client public key for
> TLS could have been protected by the public key of the server, like the
> Tor protocol is doing, so maybe you should refrain your compliments
> about TLS.
> 
> And the Tor protocol have TLS on top of it, so below the right sequence
> is ws + TLS + Tor protocol.
> 
> And it checks that the one you are connected to is the one with whom you
> have established the TLS connection (who can be a MITM again, but you
> don't care, you just want to be sure with whom you are discussing with,
> like what WebRTC is trying to do)
> 
> But again, that's not really the subject of the discussion, the subject
> is what is really the problem of letting an interface that has access to
> nothing (WS) work with https? Knowing that you can use it with another
> protocol that you can estimate better, but could be worse, again what
> does it hurt?
> 
> Or just deprecate ws because if it has to work only with entities that
> own valid certificates, then it's of quasi no use for the future.
> 
> Le 30/11/2015 21:00, Brad Hill a écrit :
> > I don't think there is universal agreement among browser engineers (if
> > anyone agrees at all) with your assertion that the Tor protocol or even
> > Tor hidden services are "more secure than TLS".  TLS in modern browsers
> > requires RSA 2048-bit or equivalent authentication, 128-bit symmetric
> > key confidentiality and SHA-256 or better integrity.If .onion
> > identifiers and the Tor protocol crypto were at this level of strength,
> > it would be reasonable to argue that a .onion connection represented a
> > "secure context", and proceed from there.  In the meantime, with .onion
> > site security (without TLS) at 80-bits of truncation of a SHA-1 hash of
> > a 1024 bit key, I don't think you'll get much traction in insisting it
> > is equivalent to or better than TLS.
> >
> > On Mon, Nov 30, 2015 at 7:52 AM Aymeric Vitte <vitteayme...@gmail.com 
> <mailto:vitteayme...@gmail.com>
> > <mailto:vitteayme...@gmail.com <mailto:vitteayme...@gmail.com>>>
> wrote:
> >
> > Redirecting this to WebApps since it's probable that we are
> facing a
> > design mistake that might amplify by deprecating non TLS
> connections. I
> > have submitted the case to all possible lists in the past,
> never got a
> > clear answer and was each time redirected to another list (ccing
> > webappsec but as a whole I think that's a webapp matte

Re: App-to-App interaction APIs - one more time, with feeling

2015-10-22 Thread Aymeric Vitte
Whatever we call it the "protocol worker" exchanges data between two
apps, and as I suggested the data can be a Web Component itself, part of
it, what is needed by the receiving app, the result can be anything too,
it can be passed back to the calling app, be HTML code, be a customized
Web Component, etc, etc so I believe this is doing the copy and paste
mentionned below, I don't see any limitation and I don't see any
technical difficulty to spec this.

Probably the Intents spec should be restarted based on this instead of
being closed.

Le 21/10/2015 17:33, Daniel Buchner a écrit :
> I believe you may be conflating two rather distinct code activities.
> What this API (or one like it) would provide is the ability for an app
> to form any sort of request, whether it originates via user input or
> just of its own code/needs, and have the user's preferred handler deal
> with the request.
> 
> You keep talking about copy/paste of a math formula as if it's a gotcha
> - but at this point, I have no idea how it affects this API. But to
> better understand, let me address your hypothetical in the way the
> proposed API would deal with it:
> 
> Let's say an app had a text input for math formulas, and the user
> entered one via typing, copy/paste, speech, whatever, it doesn't matter.
> The app itself doesn't know how to handle math formulas, so it creates a
> protocol worker to open a connection to the user's web+math provider.
> Let's imagine the user has preselected Wolfram Alpha. A connection would
> be opened between the app and Wolfram, the app then sends a
> request/payload to Wolfram, and Wolfram does whatever the generally
> agreed upon action is for handling this type of web+math request. Once
> Wolfram is finished, it sends back response data over the protocol
> handler connection, to the app.
> 
> I'm *really* trying to understand what you believe this API lacks for
> handling the flow listed above. To end the status quo of every site on
> the Web hard coding its activities to a few lucky providers who out
> spend other to win the dev marketing Olympics, you must have a mechanism
> in place that allows users to add, select, and manage providers, and
> connect to the user's providers for handling of associated activities -
> full stop.
> 
> Please consider carefully the above detailed explanation and let me know
> if there's anything left that's unclear.
> 
> - Daniel
> 
> 
> 
> 
> On Wed, Oct 21, 2015 at 7:55 AM -0700, "Paul Libbrecht"
> > wrote:
> 
> Hello Daniel,
> 
> Maybe things can be said like this: copy and paste lets you choose where
> you paste and what you paste, protocol handlers don't. Here's a more
> detailed answer.
> 
> With a mathematical formula information at hand, you can do a zillion
> things, assuming there's a single thing is not reasonable, even
> temporarily. For example, a very normal workflow could be the following:
> copy from a web-page, paste into a computation engine, adjust, derive,
> paste into a dynamic geometry tool, then paste one of the outputs into a
> mail.
> Providing configurable protocol handlers, even to the finest grade, is
> not a solution to this workflow I feel.
> 
> Providing dialogs to ask the user where he wants the information at hand
> to be opened gets closer but there's still the idea of selection and
> cursor which protocol handlers do not seem to be ready to perform.
> 
> However, I believe that copy-and-paste (and drag-and-drop) is part of an
> app-to-app interaction APIs.
> 
> Paul
> 
> 
>> Daniel Buchner 
>> 20 octobre 2015 18:36
>>
>> I’m trying to understand exactly why you see your example /(“//there
>> was a person who invented a "math" protocol handler. For him it meant
>> that formulæ be read out loud (because his mission is making the web
>> accessible to people with disabilities including eyes) but clearly
>> there was no way to bring a different target.”) /as something this
>> initiative is blocked by or cannot serve.
>>
>>  
>>
>> If you were to create a custom, community-led protocol definition for
>> math equation handling, like web+math, apps would send a standard
>> payload of semantic data, as defined here: http://schema.org/Code
>> ,
>> and it would be handled by whatever app the user had installed to
>> handle it. Given the handler at the other end is sending back data
>> that would be displayed in the page, there’s no reason JAWS or any
>> other accessibility app would be blocked from reading its output – on
>> either side of the connection.
>>
>>  
>>
>> I can’t really make sense of this part of your email, can you clarify?
>> à“Somehow, I can't really be convinced by such a post except asking
>> the user what 

Re: App-to-App interaction APIs - one more time, with feeling

2015-10-18 Thread Aymeric Vitte
Please stop on your side giving lessons again and stop trying to
isolate/elude my initial answer, and refrain people on this list not to
be insulting first.

This one was not insulting, just a general consideration and you should
consider it.

But indeed, back to the "in-scope" technical discussion copied below,
waiting for comments, unless you cut it or try to distract it again.

"
This approach [1] and [2] looks quite good, simple and can cover all cases.

I don't know if we can call it a Web Component really for all cases but
let's call it as such.

In [2] examples the Bio component could be extracted to be passed to the
editor for example and/or could be shared on fb, and idem from fb be
edited, shared, etc

Or let's imagine that I am a 0 in web programming and even Web
Components are too complicate for me, I put an empty Google map and
edit/customize it via a Google map editor, there is [3] maybe too but
anyway the use cases are legions.

The Intent service would then be a visible or a silent Web Component
discussing with the Intent client using postMessage.

Maybe the process could  be instanciated with something specific in href
(as suggested in [2] again) but an Intent object still looks mandatory.

But in case of visible Intent service, the pop-up style looks very ugly
and old, something should be modified so it seems to appear in the
calling page, then the Intent service needs to have the possibility to
become invisible (after login for example).

I don't see any technical difficulty to spec and implement this (except
maybe how to avoid the horrible pop-up effect) and this covers everything."



Le 18/10/2015 20:49, Chaals McCathie Nevile a écrit :
> On Sun, 18 Oct 2015 19:09:42 +0200, Aymeric Vitte
> <vitteayme...@gmail.com> wrote:
> 
>> Le 17/10/2015 16:19, Anders Rundgren a écrit :
>>> Unless you work for a browser vendor or is generally "recognized" for
>>> some specialty, nothing seems to be of enough interest to even get
>>> briefly evaluated.
>>>
>>
>> Right, that's a deficiency of the W3C/WHATWG/whatever specs process,
>> where people well seated in their big companies/org comfortable chairs
>> lack imagination, innovation, are very long to produce anything and just
>> spec for themselves things that become obsolete as soon as they have
>> released it, or things that just never match the reality and general use
>> cases, and they generally disconsider other opinions, although they
>> recognize usually at the end that they messed up, then they respecc it
>> and the loop starts again.
> 
> To be clear, this is a clear example of the lack of civility that I
> referred to earlier, which is inappropriate as noted in the Workmode
> document: https://github.com/w3c/WebPlatformWG/blob/gh-pages/WorkMode.md
> 
> Please refrain from insulting people (whether individually or as a
> group), and stick to in-scope technical discussion.
> 
> For the chairs
> 
> Chaals
> 

-- 
Get the torrent dynamic blocklist: http://peersm.com/getblocklist
Check the 10 M passwords list: http://peersm.com/findmyass
Anti-spies and private torrents, dynamic blocklist: http://torrent-live.org
Peersm : http://www.peersm.com
torrent-live: https://github.com/Ayms/torrent-live
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms



Re: App-to-App interaction APIs - one more time, with feeling

2015-10-18 Thread Aymeric Vitte


Le 17/10/2015 16:19, Anders Rundgren a écrit :
> Unless you work for a browser vendor or is generally "recognized" for some
> specialty, nothing seems to be of enough interest to even get briefly
> evaluated.
> 

Right, that's a deficiency of the W3C/WHATWG/whatever specs process,
where people well seated in their big companies/org comfortable chairs
lack imagination, innovation, are very long to produce anything and just
spec for themselves things that become obsolete as soon as they have
released it, or things that just never match the reality and general use
cases, and they generally disconsider other opinions, although they
recognize usually at the end that they messed up, then they respecc it
and the loop starts again.

> Regarding App-to-App interaction I'm personally mainly into the
> Web-to-Native variant.

That's a very poor system, I think you are still in your long never
ending quest of seeking for "something in the web that could match what
you want to do" but probably it's not that one.

Do people here mean that we are going forever to exchange text, images,
files, stuff like this only?

That's the vision?

Can't we share Web Components? Which can be any app with the possibility
to interact with it?

That's what for example the Web Intents should do, again you should not
close the group.

-- 
Get the torrent dynamic blocklist: http://peersm.com/getblocklist
Check the 10 M passwords list: http://peersm.com/findmyass
Anti-spies and private torrents, dynamic blocklist: http://torrent-live.org
Peersm : http://www.peersm.com
torrent-live: https://github.com/Ayms/torrent-live
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms



Re: Making progress with items in Web Platform Re: App-to-App interaction APIs - one more time, with feeling

2015-10-17 Thread Aymeric Vitte


Le 17/10/2015 17:58, Chaals McCathie Nevile a écrit :
> Aymeric, that could apply to you to - and in fact the requirement to
> behave courteously is a general one for this list and others of the Web
> Platform WG

Replying only to this for now, you don't know what you are talking about
and don't try to give lessons about things you don't know, courtesy is
something that other people should learn on this list.

And don't change the initial subject of this thread or open a new one.

But don't worry, unlike other people on this list, I will always remain
polite.

Now, please discontinue this thread and let's talk the original one.

I would like to know what inspired W3C folks think about what I wrote,
this is short but everything is in there.

-- 
Get the torrent dynamic blocklist: http://peersm.com/getblocklist
Check the 10 M passwords list: http://peersm.com/findmyass
Anti-spies and private torrents, dynamic blocklist: http://torrent-live.org
Peersm : http://www.peersm.com
torrent-live: https://github.com/Ayms/torrent-live
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms



Re: App-to-App interaction APIs - one more time, with feeling

2015-10-16 Thread Aymeric Vitte
Ccing the authors of [1], [2] and [3] if there is still an interest.

> 
> at this stage we don't have a deliverable for this work - i.e. the W3C
> members haven't approved doing something like this in Web Platform working
> group. Given that people repeatedly attempt to do it, I think the
> conversation is worth having. Personally I think this is something the
> Web needs

Indeed, that will be more than time to do so, but the current view of
the main actors or past specs seems a kind of narrowed and not very
imaginative/innovative, I don't think you should close the web intents
task force [5] but restart it on new bases.

This approach [1] and [2] looks quite good, simple and can cover all cases.

I don't know if we can call it a Web Component really for all cases but
let's call it as such.

In [2] examples the Bio component could be extracted to be passed to the
editor for example and/or could be shared on fb, and idem from fb be
edited, shared, etc

Or let's imagine that I am a 0 in web programming and even Web
Components are too complicate for me, I put an empty Google map and
edit/customize it via a Google map editor, there is [3] maybe too but
anyway the use cases are legions.

That's incredible that nobody can see this and that [1] did not get any
echo (this comment I especially dedicate it to some people that will
recognize themselves about some inappropriate comments, not to say more,
they made regarding the subject related to the last paragraph of this post).

The Intent service would then be a visible or a silent Web Component
discussing with the Intent client using postMessage.

Maybe the process could  be instanciated with something specific in href
(as suggested in [2] again) but an Intent object still looks mandatory.

But in case of visible Intent service, the pop-up style looks very ugly
and old, something should be modified so it seems to appear in the
calling page, then the Intent service needs to have the possibility to
become invisible (after login for example).

I don't see any technical difficulty to spec and implement this (except
maybe how to avoid the horrible pop-up effect) and this covers everything.


> If that happens the next step is to change our charter.
> 
> That is an administrative thing that takes a few weeks (largely to ensure
> we get the IPR protection W3C standards can enjoy, which happens because
> we spend the time to do the admin with legal processes) if there is some
> broad-based support.

Unfortunately, despite of our efforts and patience, due to the lack of
agreement on this matter with the related W3C members, unless people
decide to restrict Intents to some trivial edit, share uses of simple
images, text, files, which looks quite limited (but surprisingly seems
enough for Microsoft, Mozilla and Google) and will necessarily end-up
redoing the spec again several years later, the specs will inevitably
cross again the path of the patent you know [4], for parts related to
the extraction mechanisms that time, which anyway the web will one day
implement.


[1]
https://lists.w3.org/Archives/Public/public-web-intents/2014Oct/0001.html
[2]
http://dev.mygrid.org.uk/blog/2014/10/you-want-to-do-what-theres-an-app-for-that/
[3] https://lists.w3.org/Archives/Public/public-web-intents/2015Feb/
[4] https://lists.w3.org/Archives/Public/public-webapps/2015AprJun/0911.html
[5]
https://lists.w3.org/Archives/Public/public-web-intents/2015Oct/.html

-- 
Get the torrent dynamic blocklist: http://peersm.com/getblocklist
Check the 10 M passwords list: http://peersm.com/findmyass
Anti-spies and private torrents, dynamic blocklist: http://torrent-live.org
Peersm : http://www.peersm.com
torrent-live: https://github.com/Ayms/torrent-live
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms



Re: PSA: Web Components vs Extract Widget patent

2015-06-18 Thread Aymeric Vitte
Sorry Art but the W3C disclosed the story on this list without advising
us (and disclosed/archived the original email containing some private
details without any authorization), we know why, it's probably
understandable (although questionable), why on this list and why we
should not respond here is not, this situation probably makes us look
like kind of trolls, which we hate, so it's done now and we will
continue to follow up on this list if necessary.

Anyway, we don't want to discuss further the details, the gists are
enough, if people still find unthinkable that the components model was
invented before the components model, then make up your mind, it was,
and there is a particular story around this.

I think that the W3C should revert to other actions to resolve the
issue, our initial proposal (which, for everybody's knowledge, is just
to find an agreement that compensate the costs of our pasts projects
related to the patent, this seems fair to us but looks apparently
complicate to appreciate for the W3C members) still stands until we take
further actions, for obvious reasons we cannot give up with this.

Le 17/06/2015 23:36, Arthur Barstow a écrit :
 On 6/17/15 5:29 PM, Aymeric Vitte wrote:
 
 As directed in [1], if anyone wants to reply to this thread, please use
 the public-patent-issues list and do *not* reply on public-webapps.
 
 -Thank you, AB
 
 [1]
 https://lists.w3.org/Archives/Public/public-webapps/2015AprJun/0912.html

-- 
Check the 10 M passwords list: http://peersm.com/findmyass
Anti-spies and private torrents, dynamic blocklist: http://torrent-live.org
Peersm : http://www.peersm.com
torrent-live: https://github.com/Ayms/torrent-live
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms



Re: PSA: Web Components vs Extract Widget patent

2015-06-17 Thread Aymeric Vitte
FYI, the gists have been moved (and somewhere componentized) here:

http://www.peersm.com/gist1.html: Extract Widget Patent FR2962237 -
Process to create an application of gadget type incorporated into a
container of widget type

http://www.peersm.com/gist2.html: Patent FR2962237 - Main claim, scope
and applications

http://www.peersm.com/gist3.html: Polymer vs Extract Widget patent -
Code comparison

http://www.peersm.com/gist4.html: Components model comparison with the
patent

Le 16/06/2015 01:27, Aymeric Vitte a écrit :
 We have made a comparison between the Components model and the patent
 here: https://gist.github.com/Ayms/bf6a9f121e1ebd93cf22
 
 From the end-2011 Components Model we have added some
 properties/thoughts/proposals from the patent and our past projects.
 
 Each time we read something about the components subject that's
 obviously related to the patent, and will end-up filling all of the
 claims of the patent, for example server side rendering (that we
 discovered while writing this gist reading the 'Custom Elements:
 is=' thread), which we called in the gist Virtualization and
 virtual rendering is part of the claims of the patent, that would be
 funny to hear that this was existing before.
 
 So it's useless to continue writing to describe the evidence, the gists
 will be surely more useful for further uses, because, unfortunately, it
 seems like we are not on the way to reach an agreement since six months
 today, we did not get any answer/feedback since the (unexpected and
 curious) public disclosure of this, so we will do what we have to do and
 probably go to litigation.
 

-- 
Check the 10 M passwords list: http://peersm.com/findmyass
Anti-spies and private torrents, dynamic blocklist: http://torrent-live.org
Peersm : http://www.peersm.com
torrent-live: https://github.com/Ayms/torrent-live
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms



Re: PSA: Web Components vs Extract Widget patent

2015-06-15 Thread Aymeric Vitte
We have made a comparison between the Components model and the patent
here: https://gist.github.com/Ayms/bf6a9f121e1ebd93cf22

From the end-2011 Components Model we have added some
properties/thoughts/proposals from the patent and our past projects.

Each time we read something about the components subject that's
obviously related to the patent, and will end-up filling all of the
claims of the patent, for example server side rendering (that we
discovered while writing this gist reading the 'Custom Elements:
is=' thread), which we called in the gist Virtualization and
virtual rendering is part of the claims of the patent, that would be
funny to hear that this was existing before.

So it's useless to continue writing to describe the evidence, the gists
will be surely more useful for further uses, because, unfortunately, it
seems like we are not on the way to reach an agreement since six months
today, we did not get any answer/feedback since the (unexpected and
curious) public disclosure of this, so we will do what we have to do and
probably go to litigation.

-- 
Check the 10 M passwords list: http://peersm.com/findmyass
Anti-spies and private torrents, dynamic blocklist: http://torrent-live.org
Peersm : http://www.peersm.com
torrent-live: https://github.com/Ayms/torrent-live
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms



Re: PSA: Web Components vs Extract Widget patent

2015-05-28 Thread Aymeric Vitte
Please see https://gist.github.com/Ayms/64dfd60ddae05aeaff4e

This is a brief code comparison between the extract widget projects from
the patent and the Polymer project.

Since we knew exactly what we had to look for, we found it, the
conclusion is:

Not seeing obvious similitudes between the Polymer and the EW code,
therefore between the Web Components and the patent, or arguing that
these are standard practices while we see that today it's still
difficult to accomplish and that we still need to hack the entire DOM
all possible ways to do it, as well as performing very strange and not
usual manipulations, could only be the result of very bad faith.

Hopefully the DOM hacking will stop when the specs will be implemented
inside browsers.

Probably the next step is to link to how we have set in 2007 a
template-like element associated with a framework implementing some
HTML5 features, before HTML5.

There is a logic between all our projects and their history, same as the
Web Components logic.

Instead of trying to refute the creativity and innovation of the patent,
maybe we should make it happen the way it was intended to, please see
the Note in the gist.

Maybe it should become obvious too that if we can import and construct a
shadow domed web component, we can extract it the same way, that's the
same thing.

Regards,

Aymeric

Le 22/05/2015 00:01, Aymeric Vitte a écrit :
 Since this is public now for everybody, please let me give some
 additional information.
 
 We think that the extraction mechanisms described in the patent and not
 covered by any spec will happen one day too, and could be integrated in
 the Web Components spec, the purpose being to extract a customized
 custom element from any site, not only from the constructor site, it's
 probably very simple to specify now, if there is some interest we could
 participate to this.
 
 From a technical standpoint, please see below everything we have written
 about the obvious similitudes between the patent (2010) and the Web
 Components (2012), as well as the widget-like projects (2013/2014).
 
 If someone finds some anteriority, then please advise, this is what we
 have been asking for since 6 months, but please read what follows before.
 
 The enormous difference between what describes the patent and all
 existing technologies when we issued it is that all existing
 technologies were producing gadgets:
 
 - that were sandboxed (iframes for example) and could not interact with
 the other elements of the web page where they were injected
 - that were displayed alone
 - that needed some specific format, development skills or tools (like
 browsers, frameworks, apis) to be created and displayed
 - that could not necessarily render or adapt on any devices, like mobiles
 
 To my knowledge, at that time no project never envisioned at any moment
 any gadgets that could be integrated into a web page as normal browser
 elements (ie DOM elements) on any device possibly interacting with the
 other browser elements of the page while keeping their own properties
 not interfering between each others, which is very exactly what the
 patent describes and what the Web Components are about.
 
 One of the reason probably is that this was extraordinarly complicate to
 perform at that time, like for our past projects which were difficult to
 implement, and still is today, except if we use the Web Components or
 widget-like concepts of today helped by the improvements brought by
 ES6/7 and HTML.
 
 The patent describes an universal method to accomplish the above and the
 definition of a gadget in the patent is very clear regarding its
 ability to interact with the rest of the page.
 
 I have tried to detail all this and performed a detailed comparison with
 the Web Components and widget-like projects here:
 
 - Main claim, scope and applications
 https://gist.github.com/Ayms/efc919d6d6381c37dbbe
 
 Which shows that not only Web Components are impacted, but all
 widget-like projects, thousands of projects, and soon or later all projects.
 
 and here:
 
 -Extract Widget Patent FR2962237 - Process to create an application of
 gadget type incorporated into a container of widget type -
 https://gist.github.com/Ayms/ee9f99e5dfabb68bcc27
 
 The second gist is a bit long and the translation of the patent probably
 not perfect, beside the detailed comparison for each claim of the patent
 with the Web Components, the most interesting section is probably: The
 patent vs the traditional approach = Web Components
 https://gist.github.com/Ayms/ee9f99e5dfabb68bcc27#the-patent-vs-the-traditional-approach--the-web-components
 
 Regarding a possible solution, to make it short, since now 6 months
 (realizing at last that all the components projects were infringing the
 patent) we have stated that we would like to find an agreement so we
 transfer the rights of the patent to the W3C members and they sublicense
 it royalty free for everybody via the W3C.
 
 This agreement should cover at least

Re: PSA: Web Components vs Extract Widget patent

2015-05-21 Thread Aymeric Vitte
Since this is public now for everybody, please let me give some
additional information.

We think that the extraction mechanisms described in the patent and not
covered by any spec will happen one day too, and could be integrated in
the Web Components spec, the purpose being to extract a customized
custom element from any site, not only from the constructor site, it's
probably very simple to specify now, if there is some interest we could
participate to this.

From a technical standpoint, please see below everything we have written
about the obvious similitudes between the patent (2010) and the Web
Components (2012), as well as the widget-like projects (2013/2014).

If someone finds some anteriority, then please advise, this is what we
have been asking for since 6 months, but please read what follows before.

The enormous difference between what describes the patent and all
existing technologies when we issued it is that all existing
technologies were producing gadgets:

- that were sandboxed (iframes for example) and could not interact with
the other elements of the web page where they were injected
- that were displayed alone
- that needed some specific format, development skills or tools (like
browsers, frameworks, apis) to be created and displayed
- that could not necessarily render or adapt on any devices, like mobiles

To my knowledge, at that time no project never envisioned at any moment
any gadgets that could be integrated into a web page as normal browser
elements (ie DOM elements) on any device possibly interacting with the
other browser elements of the page while keeping their own properties
not interfering between each others, which is very exactly what the
patent describes and what the Web Components are about.

One of the reason probably is that this was extraordinarly complicate to
perform at that time, like for our past projects which were difficult to
implement, and still is today, except if we use the Web Components or
widget-like concepts of today helped by the improvements brought by
ES6/7 and HTML.

The patent describes an universal method to accomplish the above and the
definition of a gadget in the patent is very clear regarding its
ability to interact with the rest of the page.

I have tried to detail all this and performed a detailed comparison with
the Web Components and widget-like projects here:

- Main claim, scope and applications
https://gist.github.com/Ayms/efc919d6d6381c37dbbe

Which shows that not only Web Components are impacted, but all
widget-like projects, thousands of projects, and soon or later all projects.

and here:

-Extract Widget Patent FR2962237 - Process to create an application of
gadget type incorporated into a container of widget type -
https://gist.github.com/Ayms/ee9f99e5dfabb68bcc27

The second gist is a bit long and the translation of the patent probably
not perfect, beside the detailed comparison for each claim of the patent
with the Web Components, the most interesting section is probably: The
patent vs the traditional approach = Web Components
https://gist.github.com/Ayms/ee9f99e5dfabb68bcc27#the-patent-vs-the-traditional-approach--the-web-components

Regarding a possible solution, to make it short, since now 6 months
(realizing at last that all the components projects were infringing the
patent) we have stated that we would like to find an agreement so we
transfer the rights of the patent to the W3C members and they sublicense
it royalty free for everybody via the W3C.

This agreement should cover at least the costs of our past projects
related to the patent from 2007 to 2012 developed by Naïs' team, which
were all killed when we realized that Google had decided to deprecate
its Search API, but not only Google is in cause here since no Search
APIs were available from any of the major internet companies.

We will not elaborate on this here, this situation leaded us to develop
our current privacy/anonymity oriented projects, which usually everybody
loves until we talk about financing, that's what the potential agreement
will be used for, if any.

Regards,

Aymeric

Le 20/05/2015 15:24, Arthur Barstow a écrit :
 Hi All,
 
 For those interested in Web Components, please note I received a related
 e-mail titled Web Components vs Extract Widget patent. I forwarded
 this e-mail (which has an attachment) to the www-archive list:
 
 https://lists.w3.org/Archives/Public/www-archive/2015May/0008.html
 
 Please do NOT discuss the specifics of the referenced patent on
 public-webapps.
 
 Yves, Xiaoqian - will this information result in the formation of a
 Patent Advisory Group [PAG]? If yes, when will that PAG be launched?
 
 -Thanks, ArtB
 
 [PAG] http://www.w3.org/Consortium/Patent-Policy-20040205/#sec-Exception
 
 
 

-- 
Check the 10 M passwords list: http://peersm.com/findmyass
Anti-spies and private torrents, dynamic blocklist: http://torrent-live.org
Peersm : http://www.peersm.com
torrent-live: https://github.com/Ayms/torrent-live
node-Tor : 

Re: File API: reading a Blob

2014-09-11 Thread Aymeric Vitte
But I suppose that should be one of the first use case for Google to 
introduce streams with MSE, no?


To be more clear about what I mean by back pressure for things coming 
from outside of the browser:


- XHR: the Streams API should define how xhr gets chunks using Range 
according to the flow and adapt accordingly transparently for the users


- WebSockets: use something like bufferedAmount but that can be notified 
to the users, so the users can adapt the flow, currently bufferedAmount 
is not extremely usefull since you might need to do some polling to 
check it.



Le 11/09/2014 08:36, Takeshi Yoshino a écrit :
On Thu, Sep 11, 2014 at 8:47 AM, Aymeric Vitte vitteayme...@gmail.com 
mailto:vitteayme...@gmail.com wrote:


Does your experimentation pipe the XHR stream to MSE? Obviously
that should be the target for yt, this would be a first real
application of the Streams API.


It's not yet updated to use the new Streams. Here's our layout test 
for MSE. responseType = 'legacystream' makes the XHR return the old 
version of the stream.


https://code.google.com/p/chromium/codesearch#chromium/src/third_party/WebKit/LayoutTests/http/tests/media/media-source/mediasource-append-legacystream.htmlq=createMediaXHRsq=package:chromiumtype=csl=12

You can find the following call in the file.

sourceBuffer.appendStream(xhr.response);


Because the Streams API does not define how to apply back pressure
to XHR, but does define how to apply back pressure between XHR and
MSE.

Probably the spec should handle on a per case basis what should be
the behavior in term of back pressure for things coming from
outside of the browser (xhr, websockets, webrtc, etc , not
specified as far as I know) and for things going on inside the
browser (already specified)

Le 08/09/2014 06:58, Takeshi Yoshino a écrit :

On Thu, Sep 4, 2014 at 7:02 AM, Aymeric Vitte
vitteayme...@gmail.com mailto:vitteayme...@gmail.com wrote:

The fact is that most of the W3C groups impacted by streams
(File, indexedDB, MSE, WebRTC, WebCrypto, Workers, XHR,
WebSockets, Media Stream, etc, I must forget a lot here) seem
not to care a lot about it and maybe just expect streams to
land in the right place in the APIs when they are available,
by some unknown magic.

I still think that the effort should start from now for all
the APIs (as well as the implementation inside browsers,
which apparently has started for Chrome, but Chrome was
supposed to have started some implementation of the previous
Streams APIs, so it's not very clear), and that it should be
very clearly synchronized, disregarding vague assumptions
from the groups about low/high level and Vx releases, eluding
the issue.


Chrome has an experimental implementation [1] of the new Streams
API [2] integrated with XHR [3] behind a flag.

We receive data from the browser process over IPC (both network
and blob case). The size of data chunks and arrival timing depend
on various factors. The received chunks are passed to the
XMLHttpRequest class on the same thread as JavaScript runs. We
create a new instance of ReadableStream [4] on arrival of the
first chunk. On every chunk arrival, we create an ArrayBuffer
from the chunk and then call [[enqueue]](chunk) [5] equivalent
C++ function to put it into the ReadableStream.

The ReadableStream is available from the response attribute in
the LOADING and DONE state (if no error). The chunks pushed to
the ReadableStream become available for read immediately.

Any problem occurs while loading data from network/blob, we call
[[error]](e) [6] equivalent C++ function with an exception as
defined in the XHR spec for sync XHR.

Currently, XMLHttpRequest doesn't exert any back pressure. We
plan to do something not to read too much data from disk/network.
It might be worth specifying something about the flow control in
the abstract read from blob/network operation at standard level.

[1]

https://code.google.com/p/chromium/codesearch#chromium/src/third_party/WebKit/LayoutTests/http/tests/xmlhttprequest/response-stream.html
[2] https://github.com/whatwg/streams
https://github.com/whatwg/streams#readablestream
[3] https://github.com/tyoshino/streams_integration/
[4] https://github.com/whatwg/streams#readablestream
[5] https://github.com/whatwg/streams#enqueuechunk
[6] https://github.com/whatwg/streams#errore


-- 
Peersm :http://www.peersm.com

torrent-live:https://github.com/Ayms/torrent-live
node-Tor :https://www.github.com/Ayms/node-Tor
GitHub :https://www.github.com/Ayms




--
Peersm : http://www.peersm.com
torrent-live: https://github.com/Ayms/torrent-live
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms



Re: File API: reading a Blob

2014-09-04 Thread Aymeric Vitte

Arun,

I know you (File API) care about it, I was more refering to other groups 
that seem not to care a lot, leading to absurd situations where we are 
streaming things without streams and have to implement some strange 
inadapted mechanisms for flow control/backpressure for example.


The examples I gave in this thread are just a small subset of what I 
(everybody) need, which involves all the groups listed below.


Despite of the fact that the spec is probably mature enough, I am not 
sure it can really be finalized without parallel field experimentation, 
which to work well needs to involve several groups and browsers from 
now, despite of the efforts of people involved the process is really too 
long.


If we take the File API to address your concern, probably the question 
is not whether the earlier (or whatever) version should be modified 
(because the answer would be obviously yes for me, use cases are legion) 
but to make it work on the field with streams and finalize the spec 
accordingly, same thing for the other APIs.


Regards,

Aymeric

Le 04/09/2014 02:39, Arun Ranganathan a écrit :
On Sep 3, 2014, at 6:02 PM, Aymeric Vitte vitteayme...@gmail.com 
mailto:vitteayme...@gmail.com wrote:




The fact is that most of the W3C groups impacted by streams (File, 
indexedDB, MSE, WebRTC, WebCrypto, Workers, XHR, WebSockets, Media 
Stream, etc, I must forget a lot here) seem not to care a lot about 
it and maybe just expect streams to land in the right place in the 
APIs when they are available, by some unknown magic.



I care about it. Till the API is totally baked, I’m amenable to 
getting the model right. File API now refers to chunks read, which is 
more correct. But I understand that your use cases aren’t catered to 
just yet; FileReader/FileReaderSync don’t do easily extractable partials.


I’d like to see if there’s interest in the earlier proposal, to 
extract a stream straight from Blob.





I still think that the effort should start from now for all the APIs 
(as well as the implementation inside browsers, which apparently has 
started for Chrome, but Chrome was supposed to have started some 
implementation of the previous Streams APIs, so it's not very clear), 
and that it should be very clearly synchronized, disregarding vague 
assumptions from the groups about low/high level and Vx releases, 
eluding the issue.



What issue is being eluded? Seems like another of your main use cases 
is to have URL.createObjectURL or URL.createFor return a streamable 
resource. I agree that’s a good problem to solve.


— A*




--
Peersm : http://www.peersm.com
torrent-live: https://github.com/Ayms/torrent-live
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms



Re: File API: reading a Blob

2014-09-03 Thread Aymeric Vitte
Sorry to interfer then but your discussion with Arun seems to have no 
point if streams are there.


But some points I mentioned have, whether it's really linked to the 
initial subject or not.


The fact is that most of the W3C groups impacted by streams (File, 
indexedDB, MSE, WebRTC, WebCrypto, Workers, XHR, WebSockets, Media 
Stream, etc, I must forget a lot here) seem not to care a lot about it 
and maybe just expect streams to land in the right place in the APIs 
when they are available, by some unknown magic.


I still think that the effort should start from now for all the APIs (as 
well as the implementation inside browsers, which apparently has started 
for Chrome, but Chrome was supposed to have started some implementation 
of the previous Streams APIs, so it's not very clear), and that it 
should be very clearly synchronized, disregarding vague assumptions from 
the groups about low/high level and Vx releases, eluding the issue.



Le 01/09/2014 11:14, Anne van Kesteren a écrit :

On Thu, Aug 28, 2014 at 2:15 AM, Aymeric Vitte vitteayme...@gmail.com wrote:

I have contributed to streams already, of course it will solve most of what
is being discussed here but when? And then what's the point of this
discussion?

This discussion between me and Arun is about how to describe various
algorithms in File API and Fetch, ideally enabling APIs on top of
those low-level descriptions later on. But it is about describing the
behavior of existing APIs in more detail.




--
Peersm : http://www.peersm.com
torrent-live: https://github.com/Ayms/torrent-live
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms




Re: IE - Security error with new Worker(URL.createObjectURL(new Blob([workerjs],{type:'text/javascript'})))

2014-08-30 Thread Aymeric Vitte

Thanks, please let us know when the desktop fix will be available.

Regards

Le 30/08/2014 01:21, Travis Leithead a écrit :

Just a quick followup: this bug fix has been shipped in our recent Windows 
Phone 8.1 update [1]. It will come to a Desktop browser near you soon.

[1] 
http://blogs.msdn.com/b/ie/archive/2014/07/31/the-mobile-web-should-just-work-for-everyone.aspx

-Original Message-
From: Travis Leithead
Sent: Tuesday, June 10, 2014 5:54 PM
To: Aymeric Vitte; Web Applications Working Group WG (public-webapps@w3.org)
Cc: Adrian Bateman
Subject: RE: IE - Security error with new Worker(URL.createObjectURL(new 
Blob([workerjs],{type:'text/javascript'})))

Unfortunately, there is not presently a way for you to track it externally. :-(

MS Connect is the best we have so far, and I know that it is not great. We 
recognize this is a problem and hope to be able to improve the situation soon.
  
-Original Message-

From: Aymeric Vitte [mailto:vitteayme...@gmail.com]
Sent: Tuesday, June 10, 2014 3:00 AM
To: Travis Leithead; Web Applications Working Group WG (public-webapps@w3.org)
Subject: Re: IE - Security error with new Worker(URL.createObjectURL(new 
Blob([workerjs],{type:'text/javascript'})))

Thanks, any way to track/be notified when this will be available?

Regards

Aymeric

Le 06/06/2014 19:42, Travis Leithead a écrit :

Well, in IE's defense, this is not specifically allowed by: 
http://www.w3.org/TR/workers/#dom-worker. Regardless, the product team is 
working to fix this so that it works in IE as well. Stay tuned. I updated the 
Connect bug below.

-Original Message-
From: Aymeric Vitte [mailto:vitteayme...@gmail.com]
Sent: Friday, June 6, 2014 6:25 AM
To: Web Applications Working Group WG (public-webapps@w3.org)
Cc: Travis Leithead
Subject: IE - Security error with new Worker(URL.createObjectURL(new 
Blob([workerjs],{type:'text/javascript'})))

Why IE(11) does not allow this while this is working on FF and Chrome?
[1] seems to suggest that it is by design.

Regards

Aymeric

[1]
https://connect.microsoft.com/IE/feedback/details/779379/unable-to-spawn-worker-from-blob-url



--
Peersm : http://www.peersm.com
torrent-live: https://github.com/Ayms/torrent-live
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms




Re: File API: reading a Blob

2014-08-27 Thread Aymeric Vitte
I have contributed to streams already, of course it will solve most of 
what is being discussed here but when? And then what's the point of this 
discussion?


But maybe it will not solve everything, like my indexedDB example, which 
is not even considered in indexedDB V2 as far as I know


Le 25/08/2014 11:46, Anne van Kesteren a écrit :

On Sun, Aug 24, 2014 at 11:59 PM, Aymeric Vitte vitteayme...@gmail.com wrote:

File, XHR and indexedDB should handle partial data, I thought I understood
from messages last year that it was clearly identified as a mistake.

This will be solved with streams. I recommend contributing to
https://github.com/whatwg/streams because until we have such a
primitive, there's nothing to expose.




--
Peersm : http://www.peersm.com
torrent-live: https://github.com/Ayms/torrent-live
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms




Re: File API: reading a Blob

2014-08-24 Thread Aymeric Vitte

Back in this thread lately too...

I still don't see how you intend to solve the use case I gave:

- load 'something' progressively, get chunks (should be possible via xhr 
but it's not)
- handle it via File augmenting the related Blob (not possible) and 
store it progressively in indexedDB (not possible too), or do the 
contrary, store it progressively and get it as an (uncomplete) Blob from 
indexedDB that will get augmented while it is stored (not possible again)

- URL.createObjectURL(something) (will not work)

where 'something' has a size of 1 GB (video for example)

That's equivalent to do with your browser:

file:/something or http://abcd.com/something where something is a file 
on you computer being downloaded or on a web site, so not complete.


Which works very well since a long time.

File, XHR and indexedDB should handle partial data, I thought I 
understood from messages last year that it was clearly identified as a 
mistake.


Regards,

Aymeric

Le 11/08/2014 13:24, Anne van Kesteren a écrit :

On Fri, Aug 8, 2014 at 5:56 PM, Arun Ranganathan a...@mozilla.com wrote:

I strongly think we should leave FileReaderSync and FileReader alone. Also note 
that FileReaderSync and XHR (sync) are not different, in that both don’t do 
partial data. But we should have a stream api that evolves to read, and it 
might be something off Blob itself.

Seems fair.



Other than “chunks of bytes” which needs some normative backbone, is the basic 
abstract model what you had in mind? If so, that might be worth getting into 
File APi and calling it done, because that’s a reusable abstract model for 
Fetch and for FileSystem.

Yeah that looks good. https://whatwg.github.io/streams/ defines chunks
and such, but is not quite there yet. But it is what we want to build
this upon eventually.



--
Peersm : http://www.peersm.com
torrent-live: https://github.com/Ayms/torrent-live
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms




Re: File API: reading a Blob

2014-07-17 Thread Aymeric Vitte

Arun,

Streams will solve this, assuming they are implemented by all related 
APIs (this is not obvious, most of WGs seem not to care at all now that 
they have implemented their own so-called streams)


Quoting your other answer:

Well, the bug that removed them 
is:https://www.w3.org/Bugs/Public/show_bug.cgi?id=23158  and dates to last year.

Indeed, you removed this right away after I mentioned to the group last 
year that it was not implemented inside browsers and that, designed as 
it was (to display a progress bar), it seemed not very useful, but I 
insisted at that time that what was useful was to have the delta data, 
so I don't really see a consensus of not implementing partial data.


2. Use cases. MOST reads are useful with all data, after which you could use 
Blob manipulation (e.g.*slice*). New result objects each time didn’t seem 
optimal. And, it was something punted to Streams since that seemed like a 
longer term direction. There was also the idea of a Promise-based File API that 
could be consumed by the FileSystem API.

I don't get this, most reads inside browsers are about fetching, and 
fetching does increment a resource while it is loaded/displayed (image, 
video, crypto operations, etc)


And it's question of the fetch API in this thread.

XHR has the same issue, fortunately you can use XHR with Range, now I 
think this issue should stop propagating.


1. Decoding was an issue with*readAsText*. I suppose we could make that method 
alone be all or nothing.

I don't see where the issue can be, it should behave like textdecoder 
and reinject the undecoded bytes to the next chunk, personnaly I find 
really useless to offer the 'text' methods in files/fetching APIs.


So, I would insist again that you go for 1, whether it's promises like, 
append like, filewriter like, we must get access to partial data and we 
must be able to increment a blob (as well as its storage in indexedDB, 
other remark sent to the group some time ago, maybe you can combine both 
for the implementation since I don't really know what gets broken in the 
abstract model you mention)


Regards

Aymeric


Le 17/07/2014 15:10, Arun Ranganathan a écrit :

Aymeric,

On Jul 16, 2014, at 8:20 AM, Aymeric Vitte vitteayme...@gmail.com 
mailto:vitteayme...@gmail.com wrote:



Example:
var myfile=new Blob();

//chunks are coming in
myfile=new Blob([myfile,chunk],...)

//mylink A tag
mylink.href=URL.createObjectURL(myfile)

click on mylink -- does not work

Expected behavior: the file (a video for example) should play as it 
is incremented.


This is inconsistent with the standard files behavior (see [1] for 
example), this example should work without having to use the Media 
Source Extensions API.



This is a great use case, but breaks File API currently (and the 
abstract model, too).


Options to consider might be:

1. Including partial Blob data with the existing FIle API. But we 
already ship as is, so we could forego decoding to get this right, but 
that doesn’t seem optimal either.


2. Use a new API for this, such as Streams. Since we’ll use streams 
for non-file cases, it could be useful here (and might be a better fit 
for the use case, which features heavy video use).


— A*


--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms



Re: File API: reading a Blob

2014-07-16 Thread Aymeric Vitte


Le 10/07/2014 19:05, Arun Ranganathan a écrit :

We agreed some time ago to not have partial data.


I still think that's a big mistake. Even if the Streams API will solve 
this, this should be corrected in the File API.


Unless I am misusing the API, you can not even increment a Blob, you 
have to create a new one each time, this should be corrected too.


Example:
var myfile=new Blob();

//chunks are coming in
myfile=new Blob([myfile,chunk],...)

//mylink A tag
mylink.href=URL.createObjectURL(myfile)

click on mylink -- does not work

Expected behavior: the file (a video for example) should play as it is 
incremented.


This is inconsistent with the standard files behavior (see [1] for 
example), this example should work without having to use the Media 
Source Extensions API.


Regards

Aymeric

[1] https://github.com/Ayms/torrent-live

--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms




Re: IE - Security error with new Worker(URL.createObjectURL(new Blob([workerjs],{type:'text/javascript'})))

2014-06-12 Thread Aymeric Vitte

So, should I ask again in 1, 3 or 6 months (or more?) ?

Regards

Aymeric

Le 11/06/2014 02:53, Travis Leithead a écrit :

Unfortunately, there is not presently a way for you to track it externally. :-(

MS Connect is the best we have so far, and I know that it is not great. We 
recognize this is a problem and hope to be able to improve the situation soon.
  
-Original Message-

From: Aymeric Vitte [mailto:vitteayme...@gmail.com]
Sent: Tuesday, June 10, 2014 3:00 AM
To: Travis Leithead; Web Applications Working Group WG (public-webapps@w3.org)
Subject: Re: IE - Security error with new Worker(URL.createObjectURL(new 
Blob([workerjs],{type:'text/javascript'})))

Thanks, any way to track/be notified when this will be available?

Regards

Aymeric

Le 06/06/2014 19:42, Travis Leithead a écrit :

Well, in IE's defense, this is not specifically allowed by: 
http://www.w3.org/TR/workers/#dom-worker. Regardless, the product team is 
working to fix this so that it works in IE as well. Stay tuned. I updated the 
Connect bug below.

-Original Message-
From: Aymeric Vitte [mailto:vitteayme...@gmail.com]
Sent: Friday, June 6, 2014 6:25 AM
To: Web Applications Working Group WG (public-webapps@w3.org)
Cc: Travis Leithead
Subject: IE - Security error with new Worker(URL.createObjectURL(new 
Blob([workerjs],{type:'text/javascript'})))

Why IE(11) does not allow this while this is working on FF and Chrome?
[1] seems to suggest that it is by design.

Regards

Aymeric

[1]
https://connect.microsoft.com/IE/feedback/details/779379/unable-to-spawn-worker-from-blob-url



--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms




Re: IE - Security error with new Worker(URL.createObjectURL(new Blob([workerjs],{type:'text/javascript'})))

2014-06-10 Thread Aymeric Vitte

Thanks, any way to track/be notified when this will be available?

Regards

Aymeric

Le 06/06/2014 19:42, Travis Leithead a écrit :

Well, in IE's defense, this is not specifically allowed by: 
http://www.w3.org/TR/workers/#dom-worker. Regardless, the product team is 
working to fix this so that it works in IE as well. Stay tuned. I updated the 
Connect bug below.

-Original Message-
From: Aymeric Vitte [mailto:vitteayme...@gmail.com]
Sent: Friday, June 6, 2014 6:25 AM
To: Web Applications Working Group WG (public-webapps@w3.org)
Cc: Travis Leithead
Subject: IE - Security error with new Worker(URL.createObjectURL(new 
Blob([workerjs],{type:'text/javascript'})))

Why IE(11) does not allow this while this is working on FF and Chrome?
[1] seems to suggest that it is by design.

Regards

Aymeric

[1]
https://connect.microsoft.com/IE/feedback/details/779379/unable-to-spawn-worker-from-blob-url



--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms




IE - Security error with new Worker(URL.createObjectURL(new Blob([workerjs],{type:'text/javascript'})))

2014-06-06 Thread Aymeric Vitte
Why IE(11) does not allow this while this is working on FF and Chrome? 
[1] seems to suggest that it is by design.


Regards

Aymeric

[1] 
https://connect.microsoft.com/IE/feedback/details/779379/unable-to-spawn-worker-from-blob-url


--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms




Re: Starting work on Indexed DB v2 spec - feedback wanted

2014-04-18 Thread Aymeric Vitte
I don't see it on the wiki, so resuggesting storing blobs with partial 
blobs, as discussed here 
http://lists.w3.org/Archives/Public/public-webapps/2013OctDec/0657.html


Regards

Aymeric

Le 16/04/2014 20:49, Joshua Bell a écrit :
At the April 2014 WebApps WG F2F [1] there was general agreement that 
moving forward with an Indexed Database v2 spec was a good idea. Ali 
Alabbas (Microsoft) has volunteered to co-edit the spec with me. 
Maintaining compatibility is the highest priority; this will not break 
the existing API.


We've been tracking additional features for quite some time now, both 
on the wiki [2] and bug tracker [3]. Several are very straightforward 
(continuePrimaryKey, batch gets, binary keys, ...) and have already 
been implemented in some user agents, and it will be helpful to 
document these. Others proposals (URLs, Promises, full text search, 
...) are much more complex and will require additional implementation 
feedback; we plan to add features to the v2 spec based on implementer 
acceptance.


This is an informal call for feedback to implementers on what is 
missing from v1:


* What features and functionality do you see as important to include?
* How would you prioritize the features?

If there's anything you think is missing from the wiki [2], or want to 
comment on the importance of a particular feature, please call it out 
- replying here is great. This will help implementers decide what work 
to prioritize, which will drive the spec work. We'd also like to keep 
the v2 cycle shorter than the v1 cycle was, so timely feedback is 
appreciated - there's always room for a v3.


[1] http://www.w3.org/2014/04/10-webapps-minutes.html
[2] http://www.w3.org/2008/webapps/wiki/IndexedDatabaseFeatures
[3] 
https://www.w3.org/Bugs/Public/buglist.cgi?bug_status=RESOLVEDcomponent=Indexed%20Database%20APIlist_id=34841product=WebAppsWGquery_format=advancedresolution=LATER


PS: Big thanks to Zhiqiang Zhang for his Indexed DB implementation 
report, also presented at the F2F.


--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms



Re: Update on Streams API Status

2014-02-10 Thread Aymeric Vitte


Le 07/02/2014 10:52, Anne van Kesteren a écrit :

As for createObjectURL(), it has not been a great success for Blob
objects. I'm not really sure we should widen that experiment. At least
not until the way they are supposed to be implemented for Blob objects
has actually been done in practice.


Do you mean that browsers are not implementing correctly createObjectURL 
right now?


Regards

Aymeric

--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms




Re: Request for feedback: Streams API

2013-12-05 Thread Aymeric Vitte
I am not especially connnected to MediaStream/ WebRTC, so probably it's 
more efficient if Rob/Arthur do it.


I forward it to WebCrypto.

Right now there is still a list of bugs but regarding the current 
edition I would comment what I already said separately to Takeshi/Feras: 
I am not very convinced by the readExact method.


Regards

Aymeric

Le 04/12/2013 22:19, Rob Manson a écrit :

Hi Feras/Takeshi,

thanks for proactively dealing with all our feedback 8)

I'll definitely see if there's any further feedback on the updated 
spec from the people that participated at the FOMS session.


And I'd also be happy to do the same with the Media Capture and 
Streams TF/WG too as this relates directly to the post-processing use 
cases I'm particularly interested in.


roBman


On 5/12/13 8:04 AM, Feras Moussa wrote:

Thanks Art.

We've also had Rob (cc'd) interested from the FOMS (Open Media 
Standards) group. I'll follow up with Rob for further feedback from 
that group.



In the spec, we tried to capture all the various areas we think this 
spec can affect - this is the stream consumers/producers section 
(http://dvcs.w3.org/hg/streams-api/raw-file/tip/Overview.htm#producers-consumers)


In addition to the ones you've outlined,the one that comes to mind 
from the list in the spec would be the web-crypto group.


-Feras



Date: Wed, 4 Dec 2013 12:57:50 -0500
From: art.bars...@nokia.com
To: feras.mou...@hotmail.com; dome...@domenicdenicola.com; 
vitteayme...@gmail.com

CC: public-webapps@w3.org
Subject: Re: Request for feedback: Streams API

Thanks for the update Feras.

Re getting `wide review` of the latest [ED], which groups, lists and
individuals should be asked to review the spec?

In IRC just now, jgraham mentioned TC39, WHATWG and Domenic. Would
someone please ask these two groups to review the latest ED?

Aymeric - would you please ask the WebRTC list(s) to review the latest
ED or provide the list name(s) and I'll ask them.

-Thanks, ArtB

[ED] https://dvcs.w3.org/hg/streams-api/raw-file/tip/Overview.htm

On 12/4/13 11:27 AM, ext Feras Moussa wrote:

The editors of the Streams API have reached a milestone where we feel
many of the major issues that have been identified thus far are now
resolved and incorporated in the editors draft.

The editors draft [1] has been heavily updated and reviewed the past
few weeks to address all concerns raised, including:
1. Separation into two distinct types -ReadableByteStream and
WritableByteStream
2. Explicit support for back pressure management
3. Improvements to help with pipe( ) and flow-control management
4. Updated spec text and diagrams for further clarifications

There are still a set of bugs being tracked in bugzilla. We would like
others to please review the updated proposal, and provide any feedback
they may have (or file bugs).

Thanks.
-Feras


[1] https://dvcs.w3.org/hg/streams-api/raw-file/tip/Overview.htm




--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms




Re: IndexedDB, Blobs and partial Blobs - Large Files

2013-12-04 Thread Aymeric Vitte
OK for the different records but just to understand correctly, when you 
fetch {chunk1, chunk2, etc} or [chunk1, chunk2, etc], does it do 
something else than just keeping references to the chunks and storing 
them again with (new?) references if you didn't do anything with the chunks?


Regards

Aymeric

Le 03/12/2013 22:12, Jonas Sicking a écrit :

On Tue, Dec 3, 2013 at 11:55 AM, Joshua Bell jsb...@google.com wrote:

On Tue, Dec 3, 2013 at 4:07 AM, Aymeric Vitte vitteayme...@gmail.com
wrote:

I am aware of [1], and really waiting for this to be available.

So you are suggesting something like {id:file_id, chunk1:chunk1,
chunk2:chunk2, etc}?

No, because you'd still have to fetch, modify, and re-insert the value each
time. Hopefully implementations store blobs by reference so that doesn't
involve huge data copies, at least.

That's what the Gecko implementation does. When reading a Blob from
IndexedDB, and then store the same Blob again, that will not copy any
of the Blob data, but simply just create another reference to the
already existing data.

/ Jonas


--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms




Re: IndexedDB, Blobs and partial Blobs - Large Files

2013-12-03 Thread Aymeric Vitte

I am aware of [1], and really waiting for this to be available.

So you are suggesting something like {id:file_id, chunk1:chunk1, 
chunk2:chunk2, etc}?


Related to [1] I have tried a workaround (not for fun, because I 
needed to test at least with two different browsers): store the chunks 
as ArrayBuffers in an Array {id:file_id, [chunk1, chunk2,... ]}, after 
testing different methods the idea was to new Blob([chunk1, chunk2,... 
]) on query and avoid creating a big ArrayBuffer on update.


Unfortunately, with my configuration, Chrome crashes systematically on 
update for big files (tested with 250 MB file and chunks of 2 MB, does 
not seem to be something really enormous).


Then I was thinking to use different keys as you suggest but maybe it's 
not very easy to manipulate and you still have to use an Array to 
concatenate, what's the best method?


Regards,

Aymeric

[1] http://code.google.com/p/chromium/issues/detail?id=108012

Le 02/12/2013 23:38, Joshua Bell a écrit :
On Mon, Dec 2, 2013 at 9:26 AM, Aymeric Vitte vitteayme...@gmail.com 
mailto:vitteayme...@gmail.com wrote:


This is about retrieving a large file with partial data and
storing it in an incremental way in indexedDB.

...

This seems not efficient at all, was it never discussed the
possibility to be able to append data directly in indexedDB?


You're correct, IndexedDB doesn't have a notion of updating part of a 
value, or even querying part of a value (other than via indexes). 
We've received developer feedback that partial data update and query 
would both be valuable, but haven't put significant thought into how 
it would be implemented. Conceivably you could imagine an API for 
get or put with an additional keypath into the object. We 
(Chromium) currently treat the stored value as opaque so we'd need to 
deserialize/reserialize the entire thing anyway unless we added extra 
smarts in there, at which point a smart caching layer implemented in 
JS and tuned for the webapp might be more effective.


Blobs are pesky since they're not mutable. So even with the above 
hand-waved API you'd still be paying for a fetch/concatenate/store. 
(FWIW, Chromium's support for Blobs in IndexedDB is still in progress, 
so this is all in the abstract.)


I think the best advice at the moment for dealing with incremental 
data in IDB is to store the chunks under separate keys, and 
concatenate when either all of the data has arrived or lazily on use.





--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms



IndexedDB, Blobs and partial Blobs - Large Files

2013-12-02 Thread Aymeric Vitte

This is related to [1] (Use case) and [2] (Reduced test case)

This is about retrieving a large file with partial data and storing it 
in an incremental way in indexedDB.


Instead of maintaining an incremented Blob, the real use case is 
retrieving the Blob from indexedDB, doing new Blob([Blob, chunk]) and 
storing it again in indexedDB, for each chunk, decreasing performances 
even more.


Of course you could wait for having the complete Blob and store it, but 
that's not something very realistic, if you are streaming a 500 MB file 
and an error occurs most likely you would prefer to resume the download 
rather than restarting from the begining.


This seems not efficient at all, was it never discussed the possibility 
to be able to append data directly in indexedDB?


Regards

Aymeric

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=944918
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=945281

--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms




Re: Thoughts behind the Streams API ED

2013-11-12 Thread Aymeric Vitte

Takeshi,

See discussion here too: https://github.com/whatwg/streams/issues/33

The problem with stop again is that I need to handle myself the clone 
operations, the advantage of stop-eof is:


- clone the operation
- close it
- restart from the clone

And as I mentioned before this would work for any operation that has 
unresolved bytes (TextDecoder, etc) without the need of modifying the 
operation API for explicit clones or options.


Regards

Aymeric


Le 08/11/2013 14:31, Takeshi Yoshino a e'crit :
On Fri, Nov 8, 2013 at 8:54 PM, Aymeric Vitte vitteayme...@gmail.com 
mailto:vitteayme...@gmail.com wrote:


I would expect Poc (stop, keep 0xd1 for the next data) and сия

It can be seen a bit different indeed, while with crypto you
expect the finalization of the operation since the begining (but
only by computing the latest bytes), here you can not expect the
string since the begining of course.

It just depends how the Operation (here TextDecoder) handles
stop but I find it very similar, TextEncoder closes the operation
with the bytes it has and clone its state (ie do nothing here
except clearing resolved bytes and keeping unresolved ones for
data to come).


I'd say more generally that stop() is kinda in-band control signal 
that is inserted between elements of the stream and distinguishable 
from the elements. As you said, interpretation of the stop() symbol 
depends on what the destination is.


One thing I'm still not sure is that I think you can just add stop() 
equivalent method to the destination, and

- pipe() data until the point you were calling stop()
- call the stop() equivalent on e.g. hash
- restart pipe()

At least our spec allows for this. Of course, it's convenient that 
Stream can carry such a signal. But there's trade-off between the 
convenience and API size. Similar to decision whether we include 
abort() to WritableByteStream or not.


Extremely, abort(), close() and stop() can be merged into one method 
(unless abort() method has a functionality to abandon already written 
data). They're all signal inserting methods.


close() - signal(FIN)
stop(info) - signal(CONTROL, info)
abort(error) - signal(ABORT, error)

and the signal is packed and inserted into the stream's internal buffer.


--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms



Re: Thoughts behind the Streams API ED

2013-11-12 Thread Aymeric Vitte
No, see my previous reply, unless I am proven incorrect, I still think 
we should have:


- pause/unpause
- stop/(implicit resume)

Regards,

Aymeric

Le 11/11/2013 22:06, Takeshi Yoshino a écrit :

Aymeric,

Re: pause()/resume(),

I've moved flow control functionality for non-exact read() method to a 
separate attribute pullAmount [1] [2]. pullAmount limits the max size 
of data to be read by read() method. Currently the pipe() method is 
specified not to respect pullAmount but we can choose to have it to 
respect pullAmount, i.e. pausing pipe()'s transfer when pullAmount is 
set to 0. Does this work for your use case?


[1] https://www.w3.org/Bugs/Public/show_bug.cgi?id=23790
[2] https://dvcs.w3.org/hg/streams-api/rev/8a7f99536516


--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms



Re: Thoughts behind the Streams API ED

2013-11-08 Thread Aymeric Vitte
Please see here https://github.com/whatwg/streams/issues/33, I realized 
that this would apply to operations like textDecoder too without the 
need of an explicit stream option, so that's no more WebCrypto only related.


Regards

Aymeric

Le 07/11/2013 11:25, Aymeric Vitte a écrit :


Le 07/11/2013 10:42, Takeshi Yoshino a écrit :
On Thu, Nov 7, 2013 at 6:27 PM, Aymeric Vitte vitteayme...@gmail.com 
mailto:vitteayme...@gmail.com wrote:



Le 07/11/2013 10:21, Takeshi Yoshino a écrit :

On Thu, Nov 7, 2013 at 6:05 PM, Aymeric Vitte
vitteayme...@gmail.com mailto:vitteayme...@gmail.com wrote:

stop/resume:

Indeed as I mentioned this is related to WebCrypto Issue22
but I don't think this is a unique case. Issue22 was closed
because of lack of proposals to solve it, apparently I was
the only one to care about it (but I saw recently some other
messages that seem to be related), and finally this would
involve a public clone method with associated security concerns.

But with Streams it could be different, the application will
internally clone the state of the operation probably
eliminating the security issues, as simple as that.

To describe simply the use case, let's take a progressive
hash computing 4 bytes by 4 bytes:

incoming stream: ABCDE bytes
hash operation: process ABCD, keep E for the next computation
incoming stream: FGHI bytes + STOP-EOF
hash operation: process EFGH, process STOP-EOF: clone the
state of the hash, close the operation: digest hash with I


So, here, partial hash for ABCDEFGH is output


No, you get the digest for ABCDEFGHI and you get a cloned
operation which will restart from ABCDEFGH



OK.



resume:
incoming stream: JKLF
hash operation (clone): process IJKL, keep F for next
computation
etc...


and if we close the stream here we'll get a hash for
ABCDEFGHIJKLFPPP (P is padding). Right?


If you close the stream here you get the digest for ABCDEFGHIJKLF


resume happens implicitly when new data comes in without explicit 
method call say resume()?


Good question, I would say yes, so we don't need resume finally, but 
maybe others have different opinion, let's ask whatwg if they foresee 
this case.





So you do not restart the operation as if it was the first
time it was receiving data, you just continue it from the
state it was when stop was received.

That's not so unusual to do this, it has been requested many
times in node.



-- 
Peersm :http://www.peersm.com

node-Tor :https://www.github.com/Ayms/node-Tor
GitHub :https://www.github.com/Ayms




--
Peersm :http://www.peersm.com
node-Tor :https://www.github.com/Ayms/node-Tor
GitHub :https://www.github.com/Ayms


--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms



Re: Thoughts behind the Streams API ED

2013-11-08 Thread Aymeric Vitte

I would expect Poc (stop, keep 0xd1 for the next data) and сия

It can be seen a bit different indeed, while with crypto you expect the 
finalization of the operation since the begining (but only by computing 
the latest bytes), here you can not expect the string since the begining 
of course.


It just depends how the Operation (here TextDecoder) handles stop but 
I find it very similar, TextEncoder closes the operation with the bytes 
it has and clone its state (ie do nothing here except clearing 
resolved bytes and keeping unresolved ones for data to come).


Regards

Aymeric

Le 08/11/2013 11:33, Takeshi Yoshino a écrit :

Sorry. I've cut the input at wrong position.

textDecoderStream.write(arraybuffer of 0xd0 0xa0 0xd0 0xbe 0xd1 0x81 
0xd1);

textDecoderStream.stop();
textDecoderStream.write(arraybuffer of 0x81 0xd0 0xb8 0xd1 0x8f)


--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms




Re: Thoughts behind the Streams API ED

2013-11-07 Thread Aymeric Vitte

stop/resume:

Indeed as I mentioned this is related to WebCrypto Issue22 but I don't 
think this is a unique case. Issue22 was closed because of lack of 
proposals to solve it, apparently I was the only one to care about it 
(but I saw recently some other messages that seem to be related), and 
finally this would involve a public clone method with associated 
security concerns.


But with Streams it could be different, the application will internally 
clone the state of the operation probably eliminating the security 
issues, as simple as that.


To describe simply the use case, let's take a progressive hash computing 
4 bytes by 4 bytes:


incoming stream: ABCDE bytes
hash operation: process ABCD, keep E for the next computation
incoming stream: FGHI bytes + STOP-EOF
hash operation: process EFGH, process STOP-EOF: clone the state of the 
hash, close the operation: digest hash with I


resume:
incoming stream: JKLF
hash operation (clone): process IJKL, keep F for next computation
etc...

So you do not restart the operation as if it was the first time it was 
receiving data, you just continue it from the state it was when stop was 
received.


That's not so unusual to do this, it has been requested many times in node.

pipe/fork: I think b is better.

Regards

Aymeric



Le 06/11/2013 20:15, Takeshi Yoshino a écrit :
On Wed, Nov 6, 2013 at 7:33 PM, Aymeric Vitte vitteayme...@gmail.com 
mailto:vitteayme...@gmail.com wrote:


I have seen the different bugs too, some comments:

- maybe I have missed some explaination or some obvious thing but
I don't understand very well right now the difference/use between
readable/writablebytestream and bytestream


ReadableByteStream and WritableByteStream are defining interfaces not 
only for ByteStream but more generally for other APIs. For example, we 
discussed how WebCrypto's encryption method should work with Stream 
concept recently, and one idea you showed was making WebCrypto.subtle 
return an object (which I called filter) to which we can pipe data. 
By defining a protocol how to pass data to consumer as the 
WritableByteStream interface, we can reuse it later for defining IDL 
for those filters. Similarly, ReadableByteStream can provide uniform 
protocol how data producing APIs should communicate with consumers.


ByteStream is now a class inheriting both ReadableByteStream and 
WritableByteStream (sorry, I forgot to include inheritance info in the 
IDL).


- pause/unpause: as far as I understand the whatwg spec does not
recommend it but I don't understand the reasons. As I previously
mentionned the idea is to INSERT a pause signal in the stream, you
can not control the stream and therefore know when you are pausing it.


Maybe after decoupling the interface, pause/unpause are things to be 
added to ByteStream? IIUC, pause prevents data from being read from a 
ByteStream, and unpause removes the dam?


- stop/resume: same, see my previous post, the difference is that
the consuming API should clone the state of the operation and
close the current operation as if eof was received, then restart
from the clone on resume


Sorry that I haven't replied to your one.

Your post about those methods: 
http://lists.w3.org/Archives/Public/public-webapps/2013OctDec/0343.html

WebCrypto ISSUE-22: http://www.w3.org/2012/webcrypto/track/issues/22

Maybe I still don't quite understand your ideas. Let me confirm.

stop() tells the consumer API implementing WritableByteStream that it 
should behave as if it received EOF, but when resume() is called, 
restart processing the data written between stop() call and resume() 
call as if the API received data for the first time?


How should stop() work for ByteStream? ByteStream's read() method will 
receive EOF at least once when all data written before stop() call has 
been read, and it keeps returning EOF until resume() tells the 
ByteStream to restart outputting?


I've been feeling that your use case is very specific to WebCrypto. 
Saving state and restoring it sounds more like feature request for 
WebCrypto, not a Stream.


But I'm a bit interested in what your stop()/resume() enables. With 
this feature, ByteStream becomes message stream which is convenient 
for handling WebSocket.


- pipe []/fork: I don't see why the fast stream should wait for
the slow one, so maybe the stream is forked and pause can be used
for the slow one


There could be apps that want to limit memory usage strictly. We think 
there're two strategies fork() can take.

a) wait until the slowest substream consumes
b) grow not to block the fastest substream while keeping data for the 
slowest


a) is useful to limit memory usage. b) is more performance oriented.


- flow control: could it be possible to advertise a maximum
bandwidth rate for a stream?


It's currently communicated as window similar to TCP. Consumer can 
adjust size argument of read() and frequency of read() call to match

Re: Thoughts behind the Streams API ED

2013-11-07 Thread Aymeric Vitte


Le 07/11/2013 10:21, Takeshi Yoshino a écrit :
On Thu, Nov 7, 2013 at 6:05 PM, Aymeric Vitte vitteayme...@gmail.com 
mailto:vitteayme...@gmail.com wrote:


stop/resume:

Indeed as I mentioned this is related to WebCrypto Issue22 but I
don't think this is a unique case. Issue22 was closed because of
lack of proposals to solve it, apparently I was the only one to
care about it (but I saw recently some other messages that seem to
be related), and finally this would involve a public clone method
with associated security concerns.

But with Streams it could be different, the application will
internally clone the state of the operation probably eliminating
the security issues, as simple as that.

To describe simply the use case, let's take a progressive hash
computing 4 bytes by 4 bytes:

incoming stream: ABCDE bytes
hash operation: process ABCD, keep E for the next computation
incoming stream: FGHI bytes + STOP-EOF
hash operation: process EFGH, process STOP-EOF: clone the state of
the hash, close the operation: digest hash with I


So, here, partial hash for ABCDEFGH is output


No, you get the digest for ABCDEFGHI and you get a cloned operation 
which will restart from ABCDEFGH




resume:
incoming stream: JKLF
hash operation (clone): process IJKL, keep F for next computation
etc...


and if we close the stream here we'll get a hash for ABCDEFGHIJKLFPPP 
(P is padding). Right?


If you close the stream here you get the digest for ABCDEFGHIJKLF


So you do not restart the operation as if it was the first time it
was receiving data, you just continue it from the state it was
when stop was received.

That's not so unusual to do this, it has been requested many times
in node.



--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms



Re: Thoughts behind the Streams API ED

2013-11-07 Thread Aymeric Vitte


Le 07/11/2013 10:42, Takeshi Yoshino a écrit :
On Thu, Nov 7, 2013 at 6:27 PM, Aymeric Vitte vitteayme...@gmail.com 
mailto:vitteayme...@gmail.com wrote:



Le 07/11/2013 10:21, Takeshi Yoshino a écrit :

On Thu, Nov 7, 2013 at 6:05 PM, Aymeric Vitte
vitteayme...@gmail.com mailto:vitteayme...@gmail.com wrote:

stop/resume:

Indeed as I mentioned this is related to WebCrypto Issue22
but I don't think this is a unique case. Issue22 was closed
because of lack of proposals to solve it, apparently I was
the only one to care about it (but I saw recently some other
messages that seem to be related), and finally this would
involve a public clone method with associated security concerns.

But with Streams it could be different, the application will
internally clone the state of the operation probably
eliminating the security issues, as simple as that.

To describe simply the use case, let's take a progressive
hash computing 4 bytes by 4 bytes:

incoming stream: ABCDE bytes
hash operation: process ABCD, keep E for the next computation
incoming stream: FGHI bytes + STOP-EOF
hash operation: process EFGH, process STOP-EOF: clone the
state of the hash, close the operation: digest hash with I


So, here, partial hash for ABCDEFGH is output


No, you get the digest for ABCDEFGHI and you get a cloned
operation which will restart from ABCDEFGH



OK.



resume:
incoming stream: JKLF
hash operation (clone): process IJKL, keep F for next computation
etc...


and if we close the stream here we'll get a hash for
ABCDEFGHIJKLFPPP (P is padding). Right?


If you close the stream here you get the digest for ABCDEFGHIJKLF


resume happens implicitly when new data comes in without explicit 
method call say resume()?


Good question, I would say yes, so we don't need resume finally, but 
maybe others have different opinion, let's ask whatwg if they foresee 
this case.





So you do not restart the operation as if it was the first
time it was receiving data, you just continue it from the
state it was when stop was received.

That's not so unusual to do this, it has been requested many
times in node.



-- 
Peersm :http://www.peersm.com

node-Tor :https://www.github.com/Ayms/node-Tor
GitHub :https://www.github.com/Ayms




--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms



Re: Thoughts behind the Streams API ED

2013-11-06 Thread Aymeric Vitte

I have seen the different bugs too, some comments:

- maybe I have missed some explaination or some obvious thing but I 
don't understand very well right now the difference/use between 
readable/writablebytestream and bytestream


- pause/unpause: as far as I understand the whatwg spec does not 
recommend it but I don't understand the reasons. As I previously 
mentionned the idea is to INSERT a pause signal in the stream, you can 
not control the stream and therefore know when you are pausing it.


- stop/resume: same, see my previous post, the difference is that the 
consuming API should clone the state of the operation and close the 
current operation as if eof was received, then restart from the clone on 
resume


- pipe []/fork: I don't see why the fast stream should wait for the slow 
one, so maybe the stream is forked and pause can be used for the slow one


- flow control: could it be possible to advertise a maximum bandwidth 
rate for a stream?


Regards

Aymeric

Le 04/11/2013 18:11, Takeshi Yoshino a écrit :
I'd like to summarize my ideas behind this API surface since the 
overlap thread is too long. We'll put these into bug entries soon.


Feedback on Overlap thread, especially Issac's exhaustive list of 
considerations and conversation with Aymeric were very helpful. In 
reply to his mail, I drafted my initial proposal [2] in past which 
addresses almost all of them. Since the API surface was so big, I 
tried to compact it while incorporating Promises. Current ED [3] 
addresses not all but some of important requirements. I think it's a 
good (re)starting point.


* Flow control
read() and write() in the ED does provide flow control by controlling 
the timing of resolution of the returned Promise. A Stream would have 
a window to limit data to be buffered in it. If a big value is passed 
as size parameter of read(), it may extend the window if necessary.


In reading data as a DOMString, the size param of read() doesn't 
specify exact raw size of data to be read out. It just works as 
throttle to prevent internal buffer from being drained too fast. 
StreamReadResult tells how many bytes were actually consumed.


If more explicit and precise flow control is necessary, we could 
cherry pick some from my old big API proposal [1]. For example, making 
window size configurable.


If it makes sense, size can be generalized to be cost of each element. 
It would be useful when trying to generalize Stream to various objects.


To make window dynamically adjustable, we could introduce methods such 
as drainCapacity(), expandCapacity() to it.


* Passive producer
Thinking of producers like a random number generator, it's not always 
good to ask a producer to prepare data and push it to a Stream using 
write(). This was possible in [2], but not in the ED. This can be 
addressed for example by adding one overload to write().
- write() and write(size) doesn't write data but wait until the Stream 
can accept some amount or the specified size of data.


* Conversion from existing active and unstoppable producer API
E.g. WebSocket invokes onmessage immediately when new data is 
available. For this kind of API, finite size Stream cannot absorb the 
production. So, there'll be need to buffer read data manually. In [2], 
Stream always accepted write() even if buffer is full assuming that if 
necessary the producer should be using onwritable method.


Currently, only one write() can be issued concurrently, but we can 
choose to have Stream queue write() requests in it.


* Sync read if possible
By adding sync flag to StreamReadResult and introducing 
StreamWriteResult to signal if read was done sync (data is the actual 
result) or async (data is a Promise) to save Promise post tasking cost.


I estimated that post tasking overhead should be negligible for bulk 
reading, and when to read small fields, we often read some amount into 
ArrayBuffer and then parse it directly. So it's currently excluded.


* Multiple consumers
pipe() can take multiple destination Streams. This allows for 
mirroring Stream's output into two or more Streams. I also considered 
making Stream itself consumable, but I thought it complicates API and 
implementation.

- Elements in internal buffer need to be reference counted.
- It must be able to distinguish consumers.

If one of consumers is fast and the other is slow, we need to wait for 
the slower one before starting processing the rest in the original 
Stream. We can choose to allow multiple consumers to address this by 
introducing a new concept Cursor that represents reading context. 
Cursor can be implemented as a new interface or Stream that refers to 
(and adds reference count to elements of) the original Stream's 
internal buffer.


Needs some more study to figure out if context approach is really 
better than pipe()-ing to new Stream instance.


* Splitting InputStream (ReadableStream) and OutputStream (WritableStream)
Writing part and reading part of the API can be split into two 

Re: CfC: publish WD of Streams API; deadline Nov 3

2013-11-04 Thread Aymeric Vitte

Yes for the different questions, I first mentioned it in [1]

That's not an invention of mine, node is doing this:

var handle=function(req,res) {
var raw=fs.createReadStream(file);
res.writeHead(200,head);

//and the magic comes here:

raw.pipe(zlib.createGzip()).pipe(res);

};

http.createServer(handle).listen(80,function() {});

If the concept is correct and applicable (with promises), as previously 
mentioned, a good example could be WebCrypto:


crypto.subtle.encrypt(aesAlgorithmEncrypt, aesKey, sourceStream).createStream()


So the WebCrypto API does not have to be modified and just need to 
support the createStream method, so a two lines change in the spec.


Regards,

Aymeric

[1] http://lists.w3.org/Archives/Public/public-webapps/2013JulSep/0593.html

Le 04/11/2013 00:41, Feras Moussa a écrit :

Streams instantiations somewhere make me think to the structured clone
algorithm, as I proposed before there should be a method like a
createStream so you just need to say for a given API that it supports
this method and you don't have to modify the API except for specific
cases (xhr,ws,etc), like for the structured clone algo, and this is missing.

This is an interesting idea. But I'm not entirely clear on your proposal. Is 
[1] where you mentioned it, or is there another thread I've missed?

You're not proposing changing the stream constructor, but rather also defining 
a generic way an API can add support for stream by implementing a 
strongly-defined createStream method?

Is your thinking to have this in order to give users a consistent way to obtain 
a stream from various APIs?
On first thought I like the idea, but I think once we settle on a definition of 
'Stream', we can asses what is really required for other APIs to begin 
supporting it. If so, I can create a bug to track this concept.

[1] http://lists.w3.org/Archives/Public/public-webapps/2013OctDec/0246.html




Date: Sun, 3 Nov 2013 23:16:12 +0100
From: vitteayme...@gmail.com
To: art.bars...@nokia.com
CC: public-webapps@w3.org
Subject: Re: CfC: publish WD of Streams API; deadline Nov 3

Yes, with good results, groups are throwing the ball to others... I
don't know right now all the groups that might need to be involved,
that's the reason of my question.

4 days out without internet connection, usually one email every two
weeks on the subject and suddendly tons of emails, looks like a
conspiracy...

I will reread the threads (still perplex about some issues, a txt stream
is a binary stream that should be piped to textEncoder/Decoder from my
standpoint, making it a special case just complicates everything, maybe
it's too late to revert this) but it looks like the consensus is to wait
for Domenic's proposal, OK but as I mentioned he missed some points in
the current proposal and it's interesting to read carefully the Overlap
thread, and I find it important to have a simple way to handle
ArrayBuffer, View, Blob without converting all the time.

Streams instantiations somewhere make me think to the structured clone
algorithm, as I proposed before there should be a method like a
createStream so you just need to say for a given API that it supports
this method and you don't have to modify the API except for specific
cases (xhr,ws,etc), like for the structured clone algo, and this is missing.

Regards

Aymeric

Le 03/11/2013 19:02, Arthur Barstow a écrit :

Hi Aymeric,

On 10/29/13 7:22 AM, ext Aymeric Vitte wrote:

Who is coordinating each group that should get involved?

I thought you agreed to do that ;).


MediaStream for example should be based on the Stream interface and
all related streams proposals.

More seriously though, this is good to know, and if there is
additional coordination that needs to be done, please let us know.

-Thanks, ArtB



--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms




--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms




Re: CfC: publish WD of Streams API; deadline Nov 3

2013-11-04 Thread Aymeric Vitte
The main difference is that you are explaining in details what you are 
doing and the current draft does not, I will look more closely and 
comment, I am not sure the ES style really helps everybody understanding it.


Regards,

Aymeric

Le 04/11/2013 09:52, Domenic Denicola a écrit :

From: Arthur Barstow [mailto:art.bars...@nokia.com]


Domenic - Mike Smith mentioned you have worked on a related spec. What is the 
URL?

We are working on a streams specification which addresses the appropriate 
requirements at https://github.com/whatwg/streams.

It is still a work in progress, but the most important differences in approach 
and API can be seen. In particular, the extensive Requirements section details 
the problems a streaming API should solve; very few of them are solved by the 
draft this CfC was targeted at.

I will be continuing to work on it throughout the week, as time permits, to 
flesh out more of the ideas that are currently sketches or one-sentence 
summaries, and instead making them complete APIs.


Also, are you interested and willing to work with Feras and Takeshi on a 
joint/converged spec in the context of WebApps?

I welcome any input and help from Feras, Takeshi, or any others who wish to be 
involved. I am already getting great feedback and input from many quarters, 
including the Node.js community, the web developer community, a couple 
implementers, and a few editors of related specifications (such as the serial 
port API, the raw socket API, the XHR standard, and the service worker spec). 
Pull requests or discussion in the issue tracker would definitely be welcome, 
as there is much work left to do!

As for *where* the work is done, I will be working within the context of the 
WHATWG to produce this specification. My understanding is that usually the W3C 
picks some point in time to fork WHATWG specifications into W3C ones, changes 
some minor details (such as removing authorship information and changing the 
genders used in examples), then advancing it through the usual 
ED/WD/LCWD/CR/PR/REC track in order to get patent disclosure. I'm very 
interested in ensuring patent disclosure for the streams specification, so I 
hope someone takes on this work, but I do not think it would be a good use of 
my time to do so, as from what I understand there are people at the W3C who 
have this process down to an art.


--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms




Re: CfC: publish WD of Streams API; deadline Nov 3

2013-11-03 Thread Aymeric Vitte
Yes, with good results, groups are throwing the ball to others... I 
don't know right now all the groups that might need to be involved, 
that's the reason of my question.


4 days out without internet connection, usually one email every two 
weeks on the subject and suddendly tons of emails, looks like a 
conspiracy...


I will reread the threads (still perplex about some issues, a txt stream 
is a binary stream that should be piped to textEncoder/Decoder from my 
standpoint, making it a special case just complicates everything, maybe 
it's too late to revert this) but it looks like the consensus is to wait 
for Domenic's proposal, OK but as I mentioned he missed some points in 
the current proposal and it's interesting to read carefully the Overlap 
thread, and I find it important to have a simple way to handle 
ArrayBuffer, View, Blob without converting all the time.


Streams instantiations somewhere make me think to the structured clone 
algorithm, as I proposed before there should be a method like a 
createStream so you just need to say for a given API that it supports 
this method and you don't have to modify the API except for specific 
cases (xhr,ws,etc), like for the structured clone algo, and this is missing.


Regards

Aymeric

Le 03/11/2013 19:02, Arthur Barstow a écrit :

Hi Aymeric,

On 10/29/13 7:22 AM, ext Aymeric Vitte wrote:
Who is coordinating each group that should get involved? 


I thought you agreed to do that ;).

MediaStream for example should be based on the Stream interface and 
all related streams proposals.


More seriously though, this is good to know, and if there is 
additional coordination that needs to be done, please let us know.


-Thanks, ArtB




--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms




Re: Overlap between StreamReader and FileReader

2013-11-03 Thread Aymeric Vitte

The idea did not come from mimicing WebRTC:

- pause/unpause: insert pause in the stream, stop processing the data 
when pause is reached (but don't close the operation, see below), buffer 
next data coming in, restart from pause on unpause


Use case: flow control, window flow control gets empty, wait signal from 
the receiver to reinitialize the window and restart


- stop/resume : different from close, stop: insert a specific eof-stop 
in the stream, the API closes the operation while receiving it, buffer 
data, restart the operation on resume in the state it was before 
receiving eof-stop


It's more tricky, use case is the one I gave before: specific 
progressive hash, close a hash and resume it from the state it was 
before closing it, the feature has been asked several time to node for 
example.


Whether it's implementable, I don't know, but I don't see why it could 
not be, uses cases are real (myself but I am not the only one)


Regards,

Aymeric

Le 30/10/2013 12:49, Takeshi Yoshino a écrit :
On Wed, Oct 30, 2013 at 8:14 PM, Takeshi Yoshino tyosh...@google.com 
mailto:tyosh...@google.com wrote:


On Wed, Oct 23, 2013 at 11:42 PM, Aymeric Vitte
vitteayme...@gmail.com mailto:vitteayme...@gmail.com wrote:

- pause: pause the stream, do not send eof



Sorry, what will be paused? Output?


http://lists.w3.org/Archives/Public/public-webrtc/2013Oct/0059.html
http://www.w3.org/2011/04/webrtc/wiki/Transport_Control#Pause.2Fresume

So, you're suggesting that we make Stream be a convenient point where 
we can dam up data flow and skip adding methods to pausing data 
producing and consuming to producer/consumer APIs? I.e. we make it 
able to prevent data queued in a Stream from being read. This 
typically means asynchronously suspending ongoing pipe() or read() 
call on the Stream with no-argument or very large argument.


- unpause: restart the stream

And flow control should be back and explicit, not sure right
now how to define it but I think it's impossible for a js app
to do a precise flow control, and for existing APIs like
WebSockets it's not easy to control the flow and avoid in some
situations to overload the UA.



--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms



Re: publish WD of Streams API; deadline Nov 3

2013-10-30 Thread Aymeric Vitte
As you mention streams are existing since the begining of times and it's 
incredible that this does not exist on a web platform, but apparently 
the subject is still not so easy, node changed its stream quite a lot of 
time.


I probably will not be able to answer back in the coming days but your 
judgement is tough, this API is the concatenation of thoughts of the 
Overlap thread. Indeed it must clarify the writable/readable aspects, as 
well as the flow control/congestion, moreover than, despite of what you 
are saying, the API does support multiple consumers, and I/O, it's close 
to node streams.


Of course you must conflate text and binary, you can not spend your 
time converting to text, to ArrayBuffer, to ArrayBufferView, to Blob, 
the user API knows what it is streaming.


Regards,

Aymeric




Le 30/10/2013 19:04, Domenic Denicola a écrit :

From: Arthur Barstow [mailto:art.bars...@nokia.com]


If you have any comments or concerns about this proposal, please reply to
this e-mail by November 3 at the latest.

I have some concerns about this proposal, and do not think it is solving the 
problem at hand in an appropriate fashion. I believe it does not provide the 
appropriate primitives the web platform needs for a streams API. Most 
seriously, I think it has ignored the majority of lessons learned from existing 
JavaScript streaming APIs.

Here are specific critiques, in ascending order of importance.

- It has a read(n) interface, which is not valuable [1] but constraints the API 
in several awkward ways.

- It assumes MIME types are at all relevant to a streaming data abstraction, 
when this is not at all the case.

- In general, is far too backward-looking in attempting to integrate things 
like blobs or object URLs into what is supposed to be a forward-looking 
primitive for the future of the extensible web. Replacing these various 
disparate concepts is what developers want from streams [2].

- It conflates text streams and binary data. As outlined in previous messages 
[1], what type of data the stream contains *must* be an *immutable* property of 
the stream. In contrast, the proposed API actively encourage mixing multiple 
data types within a stream via readType, readEncoding, and the overloaded write 
method.

- It conflates readable and writable streams, which prevents a whole class of 
abstractions and use cases like: read-only file streams; write-only HTTP 
requests; and duplex streams which read to one channel and write to another 
(e.g. a websocket, where writing pushes data to the server and reading reads 
data from the server---not the written data, but data that the server writes to 
you). Indeed, the only use case this proposal supports is transform streams, 
which take data in one end and output new data on the other end.

- It provides no mechanism for backpressure signaling to readable stream data 
sources or from writable stream sinks. As we have heard previously, any stream 
API that does not treat backpressure as a primary issue is not a stream API at 
all. [3]

- More generally, it does not operate at the correct level of abstraction, 
which should be close to the I/O primitives streams are meant to expose. This 
is evident in the general lack of APIs  for handing interaction with and 
signaling to underlying I/O sources or sinks.

- It's pipe mechanism is poorly thought out, and does not build on top of the 
existing primitives; indeed, it seems to install some kind of mutex lock that 
prevents the other primitives from being used until the pipe is complete. The 
primitives do not support multiple consumers, so the pipe mechanism handles 
that case in an ad-hoc way. It is not composable into chains in the fashion of 
traditional stream piping. Its lack of backpressure support prevents expression 
of key use cases such as piping a fast data connection (e.g. disk) to a slow 
data connection (e.g. push to a slow mobile device); piping a slow connection 
to a fast one; piping through encoders/compressers/decoders/decompressors; and 
so on. In general, it appears to take no inspiration from prior art, which is a 
shame since existing stream implementations all agree on how pipe should work.

In light of these critiques, I feel that this API is not worth pursuing and 
should not proceed to Working Draft status. If we are to bring streaming data 
to the web platform, we should instead do it correctly, learning the lessons of 
the JavaScript stream APIs that came before us, and provide a powerful 
primitive that gives developers what they have asked for and serves as 
something we can layer the web's many streaming I/O interfaces on top of.

I have concrete suggestions as to what such an API could look like—and, more 
importantly, how its semantics would significantly differ from this one—which I 
hope to flesh out and share more broadly by the end of this weekend. However, 
since the call for comments phase has commenced, I thought it important to 
voice these objections as soon 

Re: CfC: publish WD of Streams API; deadline Nov 3

2013-10-29 Thread Aymeric Vitte
I have suggested some additions/changes in my latest reply to the 
Overlap thread.


The list of streams producers/consumers is not final but obviously 
WebSockets are missing.


Who is coordinating each group that should get involved? MediaStream for 
example should be based on the Stream interface and all related streams 
proposals.


Regards,

Aymeric

Le 28/10/2013 16:29, Arthur Barstow a écrit :
Feras and Takeshi have begun merging their Streams proposal and this 
is a Call for Consensus to publish a new WD of Streams API using the 
updated ED as the basis:


https://dvcs.w3.org/hg/streams-api/raw-file/tip/Overview.htm

Please note the Editors may update the ED before the TR is published 
(but they do not intend to make major changes during the CfC).


Agreement to this proposal: a) indicates support for publishing a new 
WD; and b) does not necessarily indicate support of the contents of 
the WD.


If you have any comments or concerns about this proposal, please reply 
to this e-mail by November 3 at the latest. Positive response to this 
CfC is preferred and encouraged and silence will be assumed to mean 
agreement with the proposal.


-Thanks, ArtB



--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms




Re: Overlap between StreamReader and FileReader

2013-10-23 Thread Aymeric Vitte
Your filter idea seems to be equivalent to a createStream that I 
suggested some time ago (like node), what about:


var encryptionPromise = crypto.subtle.encrypt(aesAlgorithmEncrypt, 
aesKey, sourceStream).createStream();


So you don't need to modify the APIs where you can not specify the 
responseType.


I was thinking to add stop/resume and pause/unpause:

- stop: insert eof in the stream
Example : finalize the hash when eof is received

- resume: restart from where the stream stopped
Example : restart the hash from the state the operation was before 
receiving eof (related to Issue22 in WebCrypto that was closed without 
any solution, might imply to clone the state of the operation)


- pause: pause the stream, do not send eof

- unpause: restart the stream

And flow control should be back and explicit, not sure right now how to 
define it but I think it's impossible for a js app to do a precise flow 
control, and for existing APIs like WebSockets it's not easy to control 
the flow and avoid in some situations to overload the UA.


Regards,

Aymeric

Le 21/10/2013 13:14, Takeshi Yoshino a écrit :

Sorry for blank of ~2 weeks.

On Fri, Oct 4, 2013 at 5:57 PM, Aymeric Vitte vitteayme...@gmail.com 
mailto:vitteayme...@gmail.com wrote:


I am still not very familiar with promises, but if I take your
preceeding example:


var sourceStream = xhr.response;
var resultStream = new Stream();
var fileWritingPromise = fileWriter.write(resultStream);
var encryptionPromise = crypto.subtle.encrypt(aesAlgorithmEncrypt,
aesKey, sourceStream, resultStream);
Promise.all(fileWritingPromise, encryptionPromise).then(
  ...
);


I made a mistake. The argument of Promise.all should be an Array. So, 
[fileWritingPromise, encryptionPromise].




shoud'nt it be more something like:

var sourceStream = xhr.response;
var encryptionPromise = crypto.subtle.encrypt(aesAlgorithmEncrypt,
aesKey);
var resultStream=sourceStream.pipe(encryptionPromise);
var fileWritingPromise = fileWriter.write(resultStream);
Promise.all(fileWritingPromise, encryptionPromise).then(
  ...
);


Promises just tell the user completion of each operation with some 
value indicating the result of the operation. It's not destination of 
data.


Do you think it's good to create objects representing each encrypt 
operation? So, some objects called filter is introduced and the code 
would be like:


var pipeToFilterPromise;

var encryptionFilter;
var fileWriter;

xhr.onreadystatechange = function() {
  ...
  } else if (this.readyState == this.LOADING) {
if (this.status != 200) {
  ...
}

var sourceStream = xhr.response;

encryptionFilter = 
crypto.subtle.createEncryptionFilter(aesAlgorithmEncrypt, aesKey);

// Starts the filter.
var encryptionPromise = encryptionFilter.encrypt();
// Also starts pouring data but separately from promise creation.
pipeToFilterPromise = sourceStream.pipe(encryptionFilter);

fileWriter = ...;
// encryptionFilter works as data producer for FileWriter.
var fileWritingPromise = fileWriter.write(encryptionFilter);

// Set only handler for rejection now.
pipeToFilterPromise.catch(
  function(result) {
xhr.abort();
encryptionFilter.abort();
fileWriter.abort();
  }
);

encryptionPromise.catch(
  function(result) {
xhr.abort();
fileWriter.abort();
  }
);

fileWritingPromise.catch(
  function(result) {
xhr.abort();
encryptionFilter.abort();
  }
);

// As encryptionFilter will be (successfully) closed only
// when XMLHttpRequest and pipe() are both successful.
// So, it's ok to set handler for fulfillment now.
Promise.all([encryptionPromise, fileWritingPromise]).then(
  function(result) {
// Done everything successfully!
// We come here only when encryptionFilter is close()-ed.
fileWriter.close();
processFile();
  }
);
  } else if (this.readyState == this.DONE) {
if (this.status != 200) {
  encryptionFilter.abort();
  fileWriter.abort();
} else {
  // Now we know that XHR was successful.
  // Let's close() the filter to finish encryption
  // successfully.
  pipeToFilterPromise.then(
function(result) {
  // XMLHttpRequest closes sourceStream but pipe()
  // resolves pipeToFilterPromise without closing
  // encryptionFilter.
  encryptionFilter.close();
}
  );
}
  }
};
xhr.send();

encryptionFilter has the same interface as normal stream but encrypts 
piped data. Encrypted data is readable from it. It has special 
methods, encrypt() and abort().


processFile() is hypothetical function must be called only when all of 
loading, encryption and saving to file were successful.



or

var sourceStream = xhr.response;
var encryptionPromise = crypto.subtle.encrypt

Re: Missing Features: Stream Control

2013-10-18 Thread Aymeric Vitte
Should this not be synchronized with the Streams API? Please see recent 
evolutions [1] and thread [2] (where WebRTC streams are mentioned)


Ccing Webapps and Takeshi/Feras.

Regarding your proposal I was about to propose to add about the same 
thing in Streams: at least a stop method (which would send an EOF) and 
maybe a resume method, or something like your pause/unpause.


For flow control I think I am changing my mind, it looks impossible to 
do a precise flow control with js, so probably the API itself should 
handle it.


Regards,

Aymeric

[1] http://lists.w3.org/Archives/Public/public-webapps/2013OctDec/0152.html
[2] http://lists.w3.org/Archives/Public/public-webapps/2013OctDec/0049.html

Le 18/10/2013 18:24, Adam Roach a écrit :
We've recently had some implementors reach out to ask about how to do 
certain things with the WebRTC interfaces, and I've been pretty well 
unable to turn up anything that makes a lot of sense for most of the 
interesting stream-control interfaces.


Some of this is talked about here, with an implication that the author 
of this wiki page, at least, would be willing to leave some of them 
unspecified for the first version of WebRTC:


http://www.w3.org/2011/04/webrtc/wiki/Transport_Control#Needs_further_discussion.2C_maybe_for_later_versions

Unfortunately, I don't think we can leave stream pause/unpause or 
stream rejection/removal out of the first version of the document. The 
good news is that I think we have all the interfaces necessary to do 
these things; we just need to spell out clear semantics for them.


As a high-level thumbnail sketch, here's what I think makes sense:

 1. To pause a track that you are sending, set enabled=false on a
MediaStreamTrack that you have added to a PeerConnection. This
information will cause the PeerConnection to stop sending the
associated RTP. Maybe this triggers an onnegotiationneeded to set
the corresponding m-line to recvonly or inactive and maybe it just
stops sending the stream; I'm not sure which makes more sense.
  * To unpause, set enabled back to true, and the steps are reversed
  * To pause all the MSTs associated with a MediaStream, use
enabled on the MediaStream itself.

 2. To pause a track that you are receiving, set enabled=false on the
MediaStreamTrack that you received from the PeerConnection via
onaddstream. This will cause the MST to stop providing media to
whatever sink it has been wired to, and trigger an
onnegotiationneeded event. The subsequent CreateOffer sets the
corresponding m-line to sendonly or inactive, as appropriate.
  * As above, this operation can also be performed on a
MediaStream to impact all of its tracks.

 3. To reject a track that has been offered, call stop() on the
corresponding MediaStreamTrack after it has been received via
onaddstream, but before calling CreateAnswer. This will cause it
to be rejected with a port number of zero.

 4. To remove a track from an ongoing session, call stop() on the
corresponding MediaStreamTrack object. This will (a) immediately
stop transmitting associated RTP, and (b) trigger an
onnegotiationneeded event.
  * If both the sending and receiving MST associated with that
m-line have been stop()ed, then the subsequent CreateOffer
sets the port on the corresponding m-line to zero.
  * If only one of the two MSTs associated with that m-line has
been stop()ed, then the subsequent CreateOffer sets the
corresponding m-line to sendonly, receiveonly, or inactive, as
appropriate.

Does this make sense to everyone? It seems pretty clean, easy to 
specify, and reasonable to implement. Best of all, we're not changing 
any currently defined interfaces, just providing clear semantics for 
certain operations.


/a



--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms



Re: Missing Features: Stream Control

2013-10-18 Thread Aymeric Vitte


Le 18/10/2013 19:31, Stefan Håkansson LK a écrit :

I think we're talking about completely different streams, and what Adam
is proposing is applicable for MediaStreamTracks in the context of a
WebRTC PeerConnection.
I don't see why, a stream is a stream, this would be strange that a 
Streams API is defined and WebRTC is using something different, moreover 
that this does not change the spec, the spec just has to support Streams


--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms




Re: Missing Features: Stream Control

2013-10-18 Thread Aymeric Vitte


Le 18/10/2013 19:13, Adam Roach a écrit :
To be clear, the .enabled flag and .stop() method are already there, 
and they already pause/unpause the stream and tear it down, 
respectively. I'm just proposing concrete semantics for how they 
interact with any PeerConnection that the track is associated with.


But they are not in the Streams proposal, see my other answer.

--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms




Re: Missing Features: Stream Control

2013-10-18 Thread Aymeric Vitte
What I am saying here is that there should be an unique and unified 
Streams API supported by all related APIs, each group where it is 
relevant should feel concerned instead of thinking another one might be.


Le 18/10/2013 20:04, Adam Roach a écrit :

On 10/18/13 12:47, Aymeric Vitte wrote:


Le 18/10/2013 19:13, Adam Roach a écrit :
To be clear, the .enabled flag and .stop() method are already there, 
and they already pause/unpause the stream and tear it down, 
respectively. I'm just proposing concrete semantics for how they 
interact with any PeerConnection that the track is associated with.


But they are not in the Streams proposal, see my other answer.



To make my point clearer: this isn't the right mailing list to talk 
about making changes to the API on MediaStream and MediaStreamTrack. 
If I had been proposing to change those APIs, it would have been on 
the public-media-capture list, not the public-webrtc list.


All I was trying to talk about is how the existing API influences the 
behavior of RTCPeerConnection, which is why I was talking about it on 
this list. If you want to propose changes to MediaStream and 
MediaStreamTrack, please take them to public-media-capture.


/a


--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms




Re: [streams-api] Seeking status and plans

2013-10-10 Thread Aymeric Vitte
I think the plan should be more here now: 
http://lists.w3.org/Archives/Public/public-webapps/2013OctDec/0049.html


Regards

Aymeric

Le 02/10/2013 18:32, Arthur Barstow a écrit :

Hi Feras,

If any of the data for the Streams API spec in [PubStatus] is not 
accurate, please provide corrections.


Also, please see the following thread and let us know your plan for 
this spec 
http://lists.w3.org/Archives/Public/public-webapps/2013JulSep/0599.html.


-Thanks, ArtB

[PubStatus] http://www.w3.org/2008/webapps/wiki/PubStatus



--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms




Re: Overlap between StreamReader and FileReader

2013-10-04 Thread Aymeric Vitte
I am still not very familiar with promises, but if I take your 
preceeding example:


var sourceStream = xhr.response;
var resultStream = new Stream();
var fileWritingPromise = fileWriter.write(resultStream);
var encryptionPromise = crypto.subtle.encrypt(aesAlgorithmEncrypt, 
aesKey, sourceStream, resultStream);

Promise.all(fileWritingPromise, encryptionPromise).then(
  ...
);


shoud'nt it be more something like:

var sourceStream = xhr.response;
var encryptionPromise = crypto.subtle.encrypt(aesAlgorithmEncrypt, aesKey);
var resultStream=sourceStream.pipe(encryptionPromise);
var fileWritingPromise = fileWriter.write(resultStream);
Promise.all(fileWritingPromise, encryptionPromise).then(
  ...
);

or

var sourceStream = xhr.response;
var encryptionPromise = crypto.subtle.encrypt(aesAlgorithmEncrypt, aesKey);
var hashPromise = crypto.subtle.digest(hash);
var resultStream = sourceStream.pipe([encryptionPromise,hashPromise]);
var fileWritingPromise = fileWriter.write(resultStream);
Promise.all(fileWritingPromise, resultStream).then(
  ...
);

Regards

Aymeric

Le 03/10/2013 10:27, Takeshi Yoshino a écrit :
Formatted and published my latest proposal at github after 
incorporating Aymeric's multi-dest idea.


http://htmlpreview.github.io/?https://github.com/tyoshino/stream/blob/master/streams.html


On Sat, Sep 28, 2013 at 11:45 AM, Kenneth Russell k...@google.com 
mailto:k...@google.com wrote:


This looks nice. It looks like it should already handle the flow
control issues mentioned earlier in the thread, simply by
performing the read on demand, though reporting the result
asynchronously.


Thanks, Kenneth for reviewing.


--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms



Re: Overlap between StreamReader and FileReader

2013-09-26 Thread Aymeric Vitte

Looks good, comments/questions :

- what's the use of readEncoding?

- StreamReadType: add MediaStream? (and others if existing)

- would it be possible to pipe from StreamReadType to other StreamReadType?

- would it be possible to pipe from a source to different targets (my 
example of encrypt/hash at the same time)?


- what is the link between the API and the Stream 
(responseType='stream')? How do you handle this for APIs where 
responseType does not really apply (mspack, crypto...)


Regards

Aymeric

Le 26/09/2013 06:17, Takeshi Yoshino a écrit :
As we don't see any strong demand for flow control and sync read 
functionality, I've revised the proposal.


Though we can separate state/error signaling from Stream and keep them 
done by each API (e.g. XHR) as Aymeric said, EoF signal still needs to 
be conveyed through Stream.




enum StreamReadType {
  ,
  blob,
  arraybuffer,
  text
};

interface StreamConsumeResult {
  readonly attribute boolean eof;
  readonly any data;
  readonly unsigned long long size;
};

[Constructor(optional DOMString mime)]
interface Stream {
  readonly attribute DOMString type;  // MIME type

  // Rejected on error. No more write op shouldn't be made.
  //
  // Fulfilled when the write completes. It doesn't guarantee that the 
written data has been

  // read out successfully.
  //
  // The contents of ArrayBufferView must not be modified until the 
promise is fulfilled.

  //
  // Fulfill may be delayed when the Stream considers itself to be full.
  //
  // write(), close() must not be called again until the Promise of 
the last write() is fulfilled.

  Promisevoid write((DOMString or ArrayBufferView or Blob)? data);
  void close();

  attribute StreamReadType readType;
  attribute DOMString readEncoding;

  // read(), skip(), pipe() must not be called again until the Promise 
of the last read(), skip(), pipe() is fulfilled.


  // Rejected on error. No more read op shouldn't be made.
  //
  // If size is specified,
  // - if EoF: fulfilled with data up to EoF
  // - otherwise: fulfilled with data of size bytes
  //
  // If size is omitted, (all or part of) data available for read now 
will be returned.

  //
  // If readType is set to text, size of the result may be smaller 
than the value specified for the size argument.

  PromiseStreamConsumeResult read(optional [Clamp] long long size);

  // Rejected on error. Fulfilled on completion.
  //
  // .data of result is not used. .size of result is the skipped amount.
  PromiseStreamConsumeResult skip([Clamp] long long size);  // .data 
is skipped size


  // Rejected on error. Fulfilled on completion.
  //
  // If size is omitted, transfer until EoF is encountered.
  //
  // .data of result is not used. .size of result is the size of data 
transferred.
  PromiseStreamConsumeResult pipe(Stream destination, optional 
[Clamp] long long size);

};



--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms




Re: Overlap between StreamReader and FileReader

2013-09-25 Thread Aymeric Vitte


Le 24/09/2013 21:24, Takeshi Yoshino a écrit :
On Wed, Sep 25, 2013 at 12:41 AM, Aymeric Vitte 
vitteayme...@gmail.com mailto:vitteayme...@gmail.com wrote:


Did you see
http://lists.w3.org/Archives/Public/public-webapps/2013JulSep/0593.html
?


Yes. This example seems to be showing how to connect only 
producer/consumer APIs which support Stream. Right?


Yes but if something like createStream is generic then any API could 
support it, for further APIs or next versions it can be built in.




In such a case, all the flow control stuff would be basically hidden, 
and if necessary each producer/consumer/transformer/filter/etc. may 
expose flow control related parameter in their own form, and configure 
connected input/output streams accordingly. E.g. stream_xhr may choose 
to have large write buffer for performance, or have small one and make 
some backpressure to stream_ws1 for memory efficiency.


Yes



My understanding is that the flow control APIs like mine are intended 
to be used by JS code implementing some converter, consumer, etc. 
while built-in stuff like WebCrypt would be evolved to accept Stream 
directly and handle flow control in e.g. C++ world.




BTW, I'm discussing this to provide data points to decide whether to 
include flow control API or not. I'm not pushing it. I appreciate if 
other participants express opinions about this.


Not sure to get what you mean between your API flow control and built-in 
flow control... I think the main purpose of the Stream API should be to 
handle more efficiently streaming without having to handle ArrayBuffers 
copy, split, concat, etc, to abstract the use of ArrayBuffer, 
ArrayBufferView, Blob, txt so you don't spend your time converting 
things and to connect simply different streams.


--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms



Re: Overlap between StreamReader and FileReader

2013-09-24 Thread Aymeric Vitte
Did you see 
http://lists.w3.org/Archives/Public/public-webapps/2013JulSep/0593.html ?


Attempt to find a link between the data producers APIs and a Streams API 
like yours.


Regards

Aymeric

Le 20/09/2013 15:16, Takeshi Yoshino a écrit :
On Sat, Sep 14, 2013 at 12:03 AM, Aymeric Vitte 
vitteayme...@gmail.com mailto:vitteayme...@gmail.com wrote:



I take this example to understand if this could be better with a
built-in Stream flow control, if so, after you have defined the
right parameters (if possible) for the streams flow control, you
could process delta data while reading the file and restream them
directly via WebSockets, and this would be great but again not
sure that a universal solution can be found.


I think what we can do is just providing helper to make it easier to 
build such an intelligent and app specific flow control logic.


Maybe one of the points of your example is that we're not always be 
able to calculate good readableThreshold. I'm also not so sure how 
much of apps in the world can benefit from this kind of APIs.


For consumers that can do flow control well on receive window basis, 
my API should work well (unnecessary events are not dispatched. chunks 
are consolidated. lazier ArrayBuffer creation). WebSocket has (broken) 
bufferedAmount attribute for window based flow control. Are you using 
it as a hint?




--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms



Re: Overlap between StreamReader and FileReader

2013-09-13 Thread Aymeric Vitte
Here for the examples: 
http://lists.w3.org/Archives/Public/public-webapps/2013JulSep/0453.html


Simple ones leading to a simple Streams interface, I thought this was 
the spirit of the original Streams API proposal.


Now you want a stream interface so you can code some js like mspack on 
top of it.


I am still missing a part of the puzzle or how to use it: as you mention 
the stream is coming from somewhere (File, indexedDB, WebSocket, XHR, 
WebRTC, etc) you have a limited choice of APIs to get it, so msgpack 
will act on top of one of those APIs, no? (then back to the examples above)


How can you get the data another way?

Regards,

Aymeric

Le 13/09/2013 06:36, Takeshi Yoshino a écrit :
On Fri, Sep 13, 2013 at 5:15 AM, Aymeric Vitte vitteayme...@gmail.com 
mailto:vitteayme...@gmail.com wrote:


Isaac said too So, just to be clear, I'm **not** suggesting that
browser streams copy Node streams verbatim..


I know. I wanted to kick the discussion which was stopped for 2 weeks.

Unless you want to do node inside browsers (which would be great
but seems unlikely) I still don't see the relation between this
kind of proposal and existing APIs.


What do you mean by existing APIs? I was thinking that we've been 
discussing what Stream read/write API for manual consuming/producing 
by JavaScript code should be like.


Could you please give an example very different from the ones I
gave already?


Sorry, which mail?

One of what I was imaging is protocol parsing. Such as msgpack, 
protocol buffer. It's good that ArrayBuffers of exact size is obtained.


OTOH, as someone pointed out, Stream should have some flow control 
mechanism not to pull unlimited amount of data from async storage, 
network, etc. readableSize in my proposal is an example of how we make 
the limit controllable by an app.


We could also depend on the size argument of read() call. But thinking 
of protocol parsing again, it's common that we have small fields such 
as 4, 8, 16 bytes. If read(size) is configured to pull size bytes from 
async storage, it's inefficient. Maybe we could have some hard coded 
limit, e.g. 1MiB and use max(hardCodedLimit, requestedReadSize).


I'm fine with the latter.

You have reverted to EventTarget too instead of promises, why?


There was no intention to object against use of Promise. Sorry that I 
wasn't clear. I'm rather interested in receiving sequence of data as 
they become available (corresponds to Jonas's ChunkedData version read 
methods) with just one read call. Sorry that I didn't mention 
explicitly, but listeners on the proposed API came from ChunkedData 
object. I thought we can put them on Stream itself by giving up 
multiple read scenario.


writeable/readableThreshold can be safely removed from the API if we 
agree it's not important. If the threshold stuff are removed, flush() 
and pull() will also be removed.




--
jCore
Email :  avi...@jcore.fr
Peersm : http://www.peersm.com
iAnonym : http://www.ianonym.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms
Web :www.jcore.fr
Extract Widget Mobile : www.extractwidget.com
BlimpMe! : www.blimpme.com



Re: Overlap between StreamReader and FileReader

2013-09-13 Thread Aymeric Vitte


Le 13/09/2013 14:23, Takeshi Yoshino a écrit :
On Fri, Sep 13, 2013 at 6:08 PM, Aymeric Vitte vitteayme...@gmail.com 
mailto:vitteayme...@gmail.com wrote:


Now you want a stream interface so you can code some js like
mspack on top of it.

I am still missing a part of the puzzle or how to use it: as you
mention the stream is coming from somewhere (File, indexedDB,
WebSocket, XHR, WebRTC, etc) you have a limited choice of APIs to
get it, so msgpack will act on top of one of those APIs, no? (then
back to the examples above)

How can you get the data another way?


Do you mean that those data producer APIs should be changed to provide 
read-by-delta-data, and manipulation of data by js code should happen 
there instead of at the output of Stream?


Yes, exactly, except if you/someone see another way of getting the data 
inside the browser and turning the flow into a stream without using 
these APIs.


Regards,

Aymeric

--
jCore
Email :  avi...@jcore.fr
Peersm : http://www.peersm.com
iAnonym : http://www.ianonym.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms
Web :www.jcore.fr
Extract Widget Mobile : www.extractwidget.com
BlimpMe! : www.blimpme.com



Re: Overlap between StreamReader and FileReader

2013-09-13 Thread Aymeric Vitte


Le 13/09/2013 15:11, Takeshi Yoshino a écrit :
On Fri, Sep 13, 2013 at 9:50 PM, Aymeric Vitte vitteayme...@gmail.com 
mailto:vitteayme...@gmail.com wrote:



Le 13/09/2013 14:23, Takeshi Yoshino a écrit :

Do you mean that those data producer APIs should be changed to
provide read-by-delta-data, and manipulation of data by js code
should happen there instead of at the output of Stream?


Yes, exactly, except if you/someone see another way of getting the
data inside the browser and turning the flow into a stream without
using these APIs.


I agree that there're various states and things to handle for each of 
the producer APIs, and it might be judicious not to convey such API 
specific info/signal through Stream.


I don't think it's bad to convert xhr.DONE to stream.close() manually 
as in your example 
http://lists.w3.org/Archives/Public/public-webapps/2013JulSep/0453.html.


But, regarding flow control, as I said in the other mail just posted, 
if we start thinking of flow control more seriously, maybe the right 
approach is to have unified flow control method and the point to 
define such a fine-grained flow control is Stream, not each API.


Maybe, I was not at the start of this thread too so I don't know exactly 
what was the original idea (and hope I am not screwing it up here). But 
I am not sure it's possible to define a universal flow control.


Example: I am currently experiencing some flow control issues for 
project [1], basically the sender reads a file AsArrayBuffer from 
indexedDB where it's stored as a Blob. Since we can not get delta data 
while reading the File for now, the sender waits for having the whole 
ArrayBuffer, then slices it, processes the blocks and sends them via 
WebSockets. If you implement a basic loop, of course you overload the 
sender's UA and connection. So the system makes some calculation in 
order to allow only half of the bandwidth to be used, aggregate the 
blocks until it finds out that the size of the aggregation meets the 
bandwidth requirement for the aggregated blocks to be sent every 50ms.


Then it uses a poor setTimeout to flush the data which screw up all the 
preceding calculations since setTimeout fires whenever it likes. Maybe 
there are smarter ways to do this, I was thinking to use workers so you 
can get a more precise clock via postMessages but I did not try.


In addition to the bandwidth control there is a window for flow control.

I take this example to understand if this could be better with a 
built-in Stream flow control, if so, after you have defined the right 
parameters (if possible) for the streams flow control, you could process 
delta data while reading the file and restream them directly via 
WebSockets, and this would be great but again not sure that a universal 
solution can be found.



If we're not, yes, maybe your proposal (deltaResponse) should be enough.


What is sure is that delta data should be made available instead of 
incremental ones.


[1] http://www.peersm.com

--
jCore
Email :  avi...@jcore.fr
Peersm : http://www.peersm.com
iAnonym : http://www.ianonym.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms
Web :www.jcore.fr
Extract Widget Mobile : www.extractwidget.com
BlimpMe! : www.blimpme.com



Re: Overlap between StreamReader and FileReader

2013-09-12 Thread Aymeric Vitte
Apparently we are not talking about the same thing, while I am thinking 
to a high level interface your interface is taking care of the 
underlying level.


Like node's streams, node had to define it since it was not existing 
(but is someone using node's streams as such or does everybody use the 
higher levels (net, ssl/tls, http, https)?), I have been working since 
quite some time on projects streaming things in all possible ways inside 
browsers or with node and I never felt any need for such a proposal.


So, to understand where the mismatch comes from, could you please 
highlight a web use case/code example based on your proposal?


Regards,

Aymeric

Le 11/09/2013 18:14, Takeshi Yoshino a écrit :
I forgot to add an attribute to specify the max size of backing store. 
Maybe it should be added to the constructor.


On Wed, Sep 11, 2013 at 11:24 PM, Takeshi Yoshino tyosh...@google.com 
mailto:tyosh...@google.com wrote:


  any peek(optional [Clamp] long long size, optional [Clamp] long
long offset);


peek with offset doesn't make sense for text mode reading. Some 
exception should be throw for that case.


- readableSize attribute returns (number of readable bytes as of
the last time the event loop started executing a task) - (bytes
consumed by read() method).


+ (bytes added by write() and transferred to read buffer synchronously)



The concept of this interface is
- to allow bulk transfer from internal asynchronous storage (e.g. 
network, disk based backing store) to JS world but delay conversion 
(e.g. into DOMString, ArrayBuffer).

- not to ask an app to do such transfer explicitly



--
jCore
Email :  avi...@jcore.fr
Peersm : http://www.peersm.com
iAnonym : http://www.ianonym.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms
Web :www.jcore.fr
Extract Widget Mobile : www.extractwidget.com
BlimpMe! : www.blimpme.com



Re: Overlap between StreamReader and FileReader

2013-09-12 Thread Aymeric Vitte
Isaac said too So, just to be clear, I'm *not* suggesting that browser 
streams copy Node streams verbatim..


Unless you want to do node inside browsers (which would be great but 
seems unlikely) I still don't see the relation between this kind of 
proposal and existing APIs.


Could you please give an example very different from the ones I gave 
already?


WebCrypto seems to be waiting for a Streams interface to be able to 
perform simple progressive operations, which have been (unexpectedly) 
removed from the spec, with outstanding features like the stream itself 
being able to predict its end... I don't think it's required and even 
possible, streams inside browsers only need to handle delta data, the 
rest being handled by the APIs using the streams (including end of the 
stream, flow control  co), cf my simple proposal.


You have reverted to EventTarget too instead of promises, why?

Regards

Aymeric

Le 12/09/2013 20:36, Takeshi Yoshino a écrit :
On Thu, Sep 12, 2013 at 10:58 PM, Aymeric Vitte 
vitteayme...@gmail.com mailto:vitteayme...@gmail.com wrote:


Apparently we are not talking about the same thing, while I am
thinking to a high level interface your interface is taking care
of the underlying level.


How much low level stuff to expose would basically affect high level 
interface design, I think.


Like node's streams, node had to define it since it was not
existing (but is someone using node's streams as such or does
everybody use

...snip...

So, to understand where the mismatch comes from, could you please
highlight a web use case/code example based on your proposal?


I'm still thinking how much we should include in the API, too. This 
proposal is just a trial to address the requirements Isaac listed. So, 
each feature should correspond to some of his example code.


--
jCore
Email :  avi...@jcore.fr
Peersm : http://www.peersm.com
iAnonym : http://www.ianonym.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms
Web :www.jcore.fr
Extract Widget Mobile : www.extractwidget.com
BlimpMe! : www.blimpme.com



Overlap between StreamReader and FileReader -[Re: File API - Progress - Question about Partial Blob data]

2013-09-09 Thread Aymeric Vitte

Redirecting this thread to the overlap... thread because it is the same.

For Cyril -- I think the mistake is that XHR does provide incremental 
data on 'loading' instead of delta data.


I have read again: 
http://lists.w3.org/Archives/Public/public-webapps/2013AprJun/0727.html.


And still do not get it quite well, especially the presence of 
eventtarget, as if for example the API was supposed to be able to 
predict the end of a stream.


Again, I don't think that's the job of the API, its only job is to be 
able to provide delta data, then the user implementation takes care of 
the rest.


I read again too 
https://dvcs.w3.org/hg/streams-api/raw-file/tip/Overview.htm. This is 
about the same as the File API.


Then I tried to figure out what would be the use of a Stream and came up 
with the following examples:


var stream=new Stream();
document.getElementById('video').src=URL.createObjectURL(stream);

var ws=new WebSocket(xxx);
ws.onmessage=function(evt) {
stream.append(evt.data)
//or calculate the hash of the stream using delta data
//or encrypt the stream using delta data
//etc
};

or

var xhr_object = new XMLHttpRequest();
xhr_object.open(GET,yyy,true);
client.responseType=stream;
xhr_object.onreadystatechange=function() {
if (xhr_object.readyState===3) {
stream.append(this.deltaresponse);
//or same as above
};
};
xhr_object.send();

or

var pc = new RTCPeerConnection(xxx);
pc.onaddstream = function(evt){
stream.append(evt.stream);
//or same as above
}

//Stream to ArrayBuffer
var arraybuffer;
stream.readBinary().then(function(response) {
arraybuffer=response;
//we should be able here to read the stream per blocks of a given 
size so we don't have to reslice the entire result to process it

//not sure how this can be speced with promises...
});

So unless there are some use cases that are not similar to the above 
examples, maybe the Streams API should just be something like:


partial interface Blob {
  PromiseArrayBuffer readBinary(BlobReadParams); (+block option)
  PromiseDOMString readText(BlobReadTextParams); (+block option)
};

interface Stream : Blob {
  PromiseStream append(ArrayBufferView or Stream or MediaStream or 
Blob or ...);

};

and XHR is modified to add a property returning delta data.

Regards,

Aymeric


Le 05/09/2013 14:43, Cyril Concolato a écrit :

Hi all,

Le 29/08/2013 01:25, Aymeric Vitte a écrit :
The Streams API says for now This event handler should mimic the 
|FileReader.onprogress| 
http://dev.w3.org/2006/webapi/FileAPI/#dfn-onprogress event 
handler


The second proposal is not very explicit for now but there is a read 
resolver.


This discussion seems to be the same as the Overlap between 
StreamReader and FileReader thread.


Now, I don't know what is the plan for the File API V2/Streams API 
(Promises? Schedule?) probably I am missing some details but I don't 
see what's the difficulty to replace the partial Blob as it is today 
by delta data (both for Files and Streams), the API does not have to 
care about non consumed data since the 
reader/parser/whatever_handles_the_data takes care of it (as long as 
delta data passed to the callback are not modified by the read, cf 
the example I gave for the above thread)
I fully agree with Aymeric. Can someone summarizes what's the history 
behind XHR that makes it hard to change (or better give an example 
that would break)?


I would like to see progress on the Stream API (how can I help?) 
because it solves one use case on which I'm working: download and 
aggregation of resources via XHR and in parallel use of the 
aggregation via a media element. This is similar to the MediaSource 
approach but for simpler progressive download cases. This is a bit 
different from the use cases I've seen on this list. The data is not 
consumed by JavaScript calls but by the browser directly. The JS would 
just use a series of StreamBuilder.append calls.


Cyril



Regards,

Aymeric


Le 27/08/2013 01:37, Kenneth Russell a écrit :
On Fri, Aug 23, 2013 at 8:35 AM, Arun Ranganathana...@mozilla.com  
wrote:

On Aug 22, 2013, at 12:07 PM, Jonas Sicking wrote:


I think you might have misunderstood my initial comment.

I agree that the current partial data solution is not good. I 
think we

should remove it.

I'd really like other implementors to weigh in before we remove 
Partial Blob Data.  Cc'ing folks who helped with it.

Eric Urhane asked me to follow up on this thread on behalf of Gregg
Tavares who unfortunately left Google.

The current spec for partial blob data is too inefficient, because it
accumulates all of the data since the beginning of the download. This
is not what's desired for streaming downloads of large data sets.
What's needed is a way to retrieve the data downloaded since the last
query. Several web developers have asked about this recently as
they're trying to stream ever larger 3D data sets into the browser.



I think we should instead create a better

Re: File API - Progress - Question about Partial Blob data

2013-08-28 Thread Aymeric Vitte
The Streams API says for now This event handler should mimic the 
|FileReader.onprogress| 
http://dev.w3.org/2006/webapi/FileAPI/#dfn-onprogress event handler


The second proposal is not very explicit for now but there is a read 
resolver.


This discussion seems to be the same as the Overlap between 
StreamReader and FileReader thread.


Now, I don't know what is the plan for the File API V2/Streams API 
(Promises? Schedule?) probably I am missing some details but I don't see 
what's the difficulty to replace the partial Blob as it is today by 
delta data (both for Files and Streams), the API does not have to care 
about non consumed data since the 
reader/parser/whatever_handles_the_data takes care of it (as long as 
delta data passed to the callback are not modified by the read, cf the 
example I gave for the above thread)


Regards,

Aymeric


Le 27/08/2013 01:37, Kenneth Russell a écrit :

On Fri, Aug 23, 2013 at 8:35 AM, Arun Ranganathan a...@mozilla.com wrote:

On Aug 22, 2013, at 12:07 PM, Jonas Sicking wrote:


I think you might have misunderstood my initial comment.

I agree that the current partial data solution is not good. I think we
should remove it.



I'd really like other implementors to weigh in before we remove Partial Blob 
Data.  Cc'ing folks who helped with it.

Eric Urhane asked me to follow up on this thread on behalf of Gregg
Tavares who unfortunately left Google.

The current spec for partial blob data is too inefficient, because it
accumulates all of the data since the beginning of the download. This
is not what's desired for streaming downloads of large data sets.
What's needed is a way to retrieve the data downloaded since the last
query. Several web developers have asked about this recently as
they're trying to stream ever larger 3D data sets into the browser.



I think we should instead create a better solution in v2 of the API
which is based on something other than FileReader and which has the
ability to deliver data on the form of here's the data that was
loaded since last notification.


I agree that we should do a better way.

Agreed. It would be really good to finally make progress in this area.

It sounds like Microsoft's Streams API proposal at
https://dvcs.w3.org/hg/streams-api/raw-file/tip/Overview.htm or
tyoshino's Streams with Promises propsal at
http://htmlpreview.github.io/?https://github.com/tyoshino/stream/blob/master/streams.html
are two leading contenders. I personally don't care which flavor is
chosen so long as things move forward. Microsoft's proposal does seem
more fully fleshed out. (At least, it contains fewer instances of the
word blah. :) )

-Ken


--
jCore
Email :  avi...@jcore.fr
Peersm : http://www.peersm.com
iAnonym : http://www.ianonym.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms
Web :www.jcore.fr
Extract Widget Mobile : www.extractwidget.com
BlimpMe! : www.blimpme.com



Re: File API - Progress - Question about Partial Blob data

2013-08-22 Thread Aymeric Vitte
Yes, stucked was not the right term, the workaround is to slice the 
blob once the file has been readed and to use workers so the UA is not 
blocked by the operation, both are indeed not great at all, taking a lot 
of time and complicating the implementation while it could be easily 
implemented on a streaming style, I will watch the API handling 
incremental data but I still don't get the use of the partial blob as it 
is in the spec.


Regards

Aymeric

Le 22/08/2013 01:21, Jonas Sicking a écrit :

On Wed, Aug 21, 2013 at 3:56 PM, Aymeric Vitte vitteayme...@gmail.com wrote:

Combination of the File API, indexedDB, etc (and future WebCrypto) is really
great, I did not expect to be stucked by something that looks trivial:
calculate the hash of a file while you are reading it.

Until we add API for incremental data, what you can do is to use
Blob.slice() to read the blob in small parts. Not great, but at least
it'll get you unstuck.

/ Jonas


--
jCore
Email :  avi...@jcore.fr
iAnonym : http://www.ianonym.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms
Web :www.jcore.fr
Extract Widget Mobile : www.extractwidget.com
BlimpMe! : www.blimpme.com




Re: Overlap between StreamReader and FileReader

2013-08-22 Thread Aymeric Vitte



Le 22/08/2013 09:28, Jonas Sicking a écrit :

Does anyone have examples of code that uses the Node.js API? I'd love
to look at how people practically end up consuming data?

I am doing something like this:

var parse=function() {
//process this.stream_
this.queue_.shift();
if (this.queue_.length) {
this.queue_[0]();
};
};
var process=function(data) {
return function() {
this.stream_=[this.stream_,data].concatBuffers();
parse.call(this);
};
};
var on_data=function(data) {
this.queue_=this.queue_||[];
this.queue_.push(process(data).bind(this));
if (this.queue_.length===1) {
this.queue_[0]();
};
};
request.on('data',function(data) {
on_data.call(this,data);
});

I don't remember exactly if it's due to my implementation or node 
(because I am using both node's Buffers and Typed Arrays) but I 
experienced some problems where data was modified while it was being 
processed, that's why this.stream_ is freezing the data received (with 
remaining bytes received earlier, see next sentence) until it is processed.


Coming back to my previous TextEncoder/Decoder remark for utf-8 parsing, 
I don't know how to do this with native node functions.


Regards

Aymeric

--
jCore
Email :  avi...@jcore.fr
iAnonym : http://www.ianonym.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms
Web :www.jcore.fr
Extract Widget Mobile : www.extractwidget.com
BlimpMe! : www.blimpme.com




Re: File API - Progress - Question about Partial Blob data

2013-08-21 Thread Aymeric Vitte


Le 21/08/2013 19:03, Jonas Sicking a écrit :

On Tue, Aug 20, 2013 at 4:13 PM, Aymeric Vitte vitteayme...@gmail.com wrote:

The specs says :

 It can also return partial Blob data. Partial Blob data is the part of the
File or Blob that has been read into memory currently; when processing the
read method readAsText, partial Blob data is a DOMString that is incremented
as more bytes are loaded (a portion of the total) [ProgressEvents], and when
processing readAsArrayBuffer partial Blob data is an ArrayBuffer
[TypedArrays] object consisting of the bytes loaded so far (a portion of the
total)[ProgressEvents]. The list below is normative for the result attribute
and is the conformance criteria for this attribute

What is the rationale for that? The result attribute should better contain
for progress events the latest data read and not the data read from the
begining that you could easily reconstitute while the contrary requires more
work.

Use case: calculate the hash of a file while you are reading it.

Regards

Aymeric

PS: I did not test it in all browsers, but unless I am using it wrongly, the
result attribute is always null for progress events.

I agree that we need a way to read from a File/Blob such that you get
incremental data, i.e. that you only get the data read since the
last data deliver, rather than getting an ever increasing result
representing all data from the beginning of the File/Blob.

However I don't think FileReader is going to be that API. FileReader
was specifically designed after XMLHttpRequest, which I think in
hindsight was a bad idea. However it was the request that we got from
several authors.

I put an initial sketch of what I think the future of File/Blob
reading should look like here:
http://lists.w3.org/Archives/Public/public-webapps/2013AprJun/0727.html


I have commented this thread for utf-8 streaming, the use case is not 
the same but somewhere similar with the hash example.



That proposal supports what you are asking for, though of course there
are plenty of debate needed on various aspects of that proposal.

All that said, I thought that we had tried to avoid the mistake that
XHR did of exposing ever-increasing partial results as data was being
loaded. Exposing partial data can require quadratic memory allocations
since the implementation will have to keep reallocating and copying
data.

I thought that we had decided not to expose partial results during the
loading. And instead only expose a result at the end of the load. But
I see that the spec now calls for exposing partial results in a few
cases. Did that change?

What does implementations do? Looking at Gecko's implementation I
don't think we ever expose partial results.


FF/Nightly does not expose any partial results, but progress events are 
fired. Probably others are doing the same, so the progress events can 
only be used to display a progress bar, ie nobody cares about the 
partial incremented information defined in the spec.


Whether it's the File API or the Stream API, I don't find it very 
different, I don't know about the implementations details but I don't 
find it very extraordinary to expose for progress events partial non 
incremented results not corellated to the final result.


Combination of the File API, indexedDB, etc (and future WebCrypto) is 
really great, I did not expect to be stucked by something that looks 
trivial: calculate the hash of a file while you are reading it.


So, since apparently there is no use for the incremented data maybe the 
spec should be changed to expose delta data instead for progress events.




/ Jonas


--
jCore
Email :  avi...@jcore.fr
iAnonym : http://www.ianonym.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms
Web :www.jcore.fr
Extract Widget Mobile : www.extractwidget.com
BlimpMe! : www.blimpme.com




Re: File API - Progress - Question about Partial Blob data

2013-08-21 Thread Aymeric Vitte


Le 21/08/2013 16:16, Arun Ranganathan a écrit :

On Aug 20, 2013, at 7:13 PM, Aymeric Vitte wrote:


The specs says :

 It can also return /partial Blob data/. Partial Blob data is the 
part of the |File| http://www.w3.org/TR/FileAPI/#dfn-file or |Blob| 
http://www.w3.org/TR/FileAPI/#dfn-Blob that has been read into 
memory /currently/; when processing the read method 
http://www.w3.org/TR/FileAPI/#read-methods |readAsText| 
http://www.w3.org/TR/FileAPI/#dfn-readAsText, partial Blob data is 
a |DOMString| that is incremented as more bytes are |loaded| (a 
portion of the |total|) [ProgressEvents 
http://www.w3.org/TR/FileAPI/#ProgressEvents], and when processing 
|readAsArrayBuffer| 
http://www.w3.org/TR/FileAPI/#dfn-readAsArrayBuffer partial Blob 
data is an |ArrayBuffer| [TypedArrays 
http://www.w3.org/TR/FileAPI/#TypedArrays] object consisting of the 
bytes |loaded| so far (a portion of the |total|)[ProgressEvents 
http://www.w3.org/TR/FileAPI/#ProgressEvents]. The list below is 
normative for the |result| attribute and is the conformance criteria 
for this attribute


What is the rationale for that? The result attribute should better 
contain for progress events the latest data read and not the data 
read from the begining that you could easily reconstitute while the 
contrary requires more work.


Use case: calculate the hash of a file while you are reading it.



The ask for deltas to be sent with progress notifications came up on 
this listserv before -- see the thread starting at 
http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/0069.html


I agree that the deltas model is also useful, but you can see there's 
some implementation history with XHR here as well.  The use case is to 
receive the file bits as they are constituted from the read (just as 
with an HTTP request, where you get the bits so far till the rest 
are constituted).


A good way to solve the use case of meaningful deltas might be with 
the Streams API, still TBD.



PS: I did not test it in all browsers, but unless I am using it 
wrongly, the result attribute is always null for progress events.



But not null for the onload ?  In many cases, a progress might not 
fire, depending on file size.


No, only for progress, which does fire for files of several MB.

I don't know about the XHR history but what is the use case of 
incremented data (which is not implemented currently)?




-- A*


--
jCore
Email :  avi...@jcore.fr
iAnonym : http://www.ianonym.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms
Web :www.jcore.fr
Extract Widget Mobile : www.extractwidget.com
BlimpMe! : www.blimpme.com



Re: Overlap between StreamReader and FileReader

2013-07-31 Thread Aymeric Vitte
I read quickly the thread but it seems like this is exactly the issue I 
had doing [1].


The use case was just decoding utf-8 html chunked buffers and modifying 
the content on the fly to stream it somewhere else.


It had to work inside browsers and with node (which as far as I know 
does not handle correctly this case, but I did not check latest evolutions)


The solution was [2], TextEncoder/Decoder with a super usefull streaming 
option.


[1] https://www.github.com/Ayms/node-Tor
[2] http://code.google.com/p/stringencoding/

Regards

Aymeric

Le 31/07/2013 21:20, Jonas Sicking a écrit :

On Wed, Jul 31, 2013 at 10:17 AM, Domenic Denicola
dome...@domenicdenicola.com wrote:

From: Anne van Kesteren [ann...@annevk.nl]


It seems though that if you can change the way bytes are consumed while reading 
a stream you will end up with problematic scenarios. E.g. you consume 2 bytes 
of a 4-byte utf-8 sequence. Then switch to reading code points... Instantiating 
a ByteStream or TextStream in advance would address that.

Yes, and I think I would actually prefer such an API honestly. But IIRC Jonas 
earlier wanted to be able to do both binary and text in the same stream (did he 
have a specific use case?), and presumably that motivated Node's design as well.

I don't have very concrete use-cases in mind. But basically
consumption of any format that contains both textual and binary data.
If we don't think the world contains enough such formats to worry
about, then maybe my use case isn't strong enough.

I think both pdf and various microsoft document formats fall into this
category though.


I guess you can just say that if you're in binary mode, you should know what 
you're doing, and know precisely when is the correct time to switch to string 
mode. If you switch in the middle of a four-byte sequence, you presumably meant 
to do so, and deserve to get back the mangled characters that result.

To make this work might require some kind of put the bytes back primitive, to avoid a 
situation where you read too far in binary mode and want to back up a bit before you 
engage string mode. I guess this is Node.js's [unshift][1].

Note that the read too far issue isn't text specific. When consuming
any format which uses a terminator (null or any more complicated
pattern) you will have to consume in minimal chunks, often
byte-by-byte, to make sure you don't go past that terminator.


It would be cool to avoid all this though and just read either bytes or 
strings, without allowing switching. (Maybe, feed the byte stream into a string 
decoder transform, and get back a string stream?)

Being able to convert between text and binary streams do work well
when the whole stream is either textual or binary. It's not clear to
me how to do it if you are dealing with a stream that contains both.
Though I'd be interested to see proposals.

/ Jonas



--
jCore
Email :  avi...@jcore.fr
iAnonym : http://www.ianonym.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms
Web :www.jcore.fr
Extract Widget Mobile : www.extractwidget.com
BlimpMe! : www.blimpme.com