Re: Directory Upload Proposal

2015-05-13 Thread Michaela Merz

I strongly support this proposal. However - if it is possible to upload
a directory, it should also be possible to download one, including
exporting it from the download / sandbox into the "real" world.

I would also like to ask for an opinion on letting the developers choose
how they actually want to handle the uploaded data. If possible, the
browser should be able to make the uploaded directory available either as
a single blob (maybe in zip or tar format) or as multiple blobs (individual files).
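As a sketch of what those two variants could look like from a page's point of view - assuming a directory-capable file input (the proposed "directories" attribute or today's non-standard webkitdirectory) and per-file relative paths; the element id and the plain concatenation are only illustrative, and a real zip or tar container would need a packaging library or browser support:

```js
// Illustration only: relies on a directory-capable <input type="file">
// (proposed "directories" attribute / non-standard webkitdirectory) and on
// File.webkitRelativePath being available for entries picked from a directory.
var input = document.querySelector('#dirInput');   // hypothetical element id

input.addEventListener('change', function () {
  var files = Array.prototype.slice.call(input.files);

  // Variant 1: the directory as multiple blobs - one File object per entry.
  files.forEach(function (file) {
    console.log(file.webkitRelativePath || file.name, file.size, 'bytes');
  });

  // Variant 2: the directory as a single blob. Plain concatenation shown
  // here; zip/tar packaging would need a library or browser support.
  var combined = new Blob(files, { type: 'application/octet-stream' });
  console.log('combined size:', combined.size, 'bytes');
});
```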

Michaela


On 05/13/2015 04:19 PM, Ali Alabbas wrote:
> Thank you for the feedback Jonas. After speaking to the OneDrive team at 
> Microsoft, they let me know that their use case for this would involve hiding 
> the file input and just spoofing a click to invoke the file/folder picker. 
> This makes me believe that perhaps there should be less of an emphasis on UI 
> and more on providing the functionality that developers need. On that note, 
> there is actually a 5th option that we can entertain. We could have three 
> different kinds of file inputs: one type for files, another for directories, 
> and yet another for handling both files and directories. This means that if a 
> developer has a use case for only wanting a user to pick a directory (even on 
> Mac OS X), they would have that option. This would also future-proof the 
> design for Windows/Linux if they ever introduce a combined file and folder 
> picker. This kind of solution would be more additive in nature, which would 
> avoid heavy changes that risk breaking sites.
> 
> What do you think about this option?
> 
> Thanks,
> Ali
> 
> On Friday, May 8, 2015 at 3:29 PM, Jonas Sicking  wrote:
> 
>> On Tue, May 5, 2015 at 10:50 AM, Ali Alabbas  wrote:
>>> I recommend that we change the "dir" attribute to "directories" and keep 
>>> "directory" the same as it is now to avoid clashing with the existing "dir" 
>>> attribute on the HTMLInputElement. All in favor?
>>
>> There's no current "directory" attribute, and the current "dir"
>> attribute stands for "direction" and not "directory", so I'm not sure which 
>> clash you are worried about?
>>
>> But I'm definitely fine with "directories". I've used that in the examples 
>> below.
>>
>>> As for the behavior of setting the "directories" attribute on a file input, 
>>> we have the following options:
>>>
>>> 1) Expose two buttons in a file input ("choose files" and "choose 
>>> directory") (see Mozilla's proposal [1])
>>> - To activate the "choose directory" behavior of invoking the 
>>> directory picker there would need to be a method on the 
>>> HTMLInputElement e.g. chooseDirectory()
>>> - To activate the "choose files" behavior of invoking the files 
>>> picker, we continue to use click() on the file input
>>>
>>> 2) Expose two buttons in file input for Windows/Linux ("choose files" 
>>> and "choose directory") and one button for Mac OS X ("choose files and 
>>> directories")
>>> - Allows users of Mac OS X to use its unified File/Directory picker
>>> - Allows users of Windows/Linux to specify if they want to pick files 
>>> or a directory
>>> - However, there would still need to be a method to activate the 
>>> "choose directory" button's behavior of invoking the directory picker 
>>> (e.g. chooseDirectory() on the HTMLInputElement)
>>> - This results in two different experiences depending on the OS
>>>
>>> 3) Expose only one button; Windows/Linux ("choose directory") and Mac 
>>> OS X ("choose files and directories")
>>> - Allows users of Mac OS X to use its unified File/Directory picker
>>> - Windows/Linux users are only able to select a directory
>>> - click() is used to activate these default behaviors (no need for an 
>>> extra method such as chooseDirectory() on the HTMLInputElement 
>>> interface)
>>> - For Windows/Linux, in order to have access to a file picker, 
>>> app/site developers would need to create another file input without 
>>> setting the "directories" attribute
>>> - Can have something like isFileAndDirectorySupported so that 
>>> developers can feature detect and decide if they need to have two 
>>> different file inputs for their app/site (on Windows/Linux) or if they 
>>> just need one (on Mac OS X) that can allow both files and directories
>>>
>>> 4) Expose only one button ("choose directory")
>>> - User must select a directory regardless of OS or browser (this 
>>> normalizes the user experience and the developer design paradigm)
>>> - To make users pick files rather than directories, the developer 
>>> simply does not set the "directories" attribute which would show the 
>>> default file input
>>> - Developers that want to allow users the option to select directory 
>>> or files need to provide two different inputs regardless of OS or 
>>> browser
>>
>> Hi Ali,
>>
>> I think the only really strong requirement that I have is that I'd like to 
>> enable the platform widget on OSX which allows picking a file or a 
>> directory. This might also be useful in the future on other 

Re: The futile war between Native and Web

2015-02-19 Thread Michaela Merz

I am not sure about that. Based on the premise that the browser itself
doesn't leak data, I think it is possible to make a web site safe.  In
order to achieve that, we need to make sure that:

a) the (script) code doesn't misbehave (=CSP);
b) the integrity of the (script) code is secured on the server and while
in transit;

I believe both of these requirements are achievable.
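For illustration, one way to cover (a) and (b) with mechanisms that exist or are being drafted - a CSP policy plus something like Subresource Integrity; the policy string, URL and hash below are placeholders:

```js
// (a) The CSP part is an HTTP response header set by the server, e.g.:
//     Content-Security-Policy: default-src 'self'; script-src 'self'
//
// (b) Integrity of a script on the server / in transit can be pinned with a
//     Subresource Integrity digest (placeholder values below).
var s = document.createElement('script');
s.src = 'https://example.com/app.js';   // placeholder URL
s.integrity = 'sha256-...';             // placeholder digest
s.crossOrigin = 'anonymous';
document.head.appendChild(s);
```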

Michaela


On 02/19/2015 01:43 PM, Jeffrey Walton wrote:
> On Thu, Feb 19, 2015 at 1:44 PM, Bjoern Hoehrmann  wrote:
>> * Jeffrey Walton wrote:
>>> Here's yet another failure that Public Key Pinning should have
>>> stopped, but the browser's rendition of HPKP could not stop because of
>>> the broken security model:
>>> http://arstechnica.com/security/2015/02/lenovo-pcs-ship-with-man-in-the-middle-adware-that-breaks-https-connections/.
>> In this story the legitimate user with full administrative access to the
>> systems is Lenovo. I do not really see how actual user agents could have
>> "stopped" anything here. Timbled agents that act on behalf of someone
>> other than the user might have denied users their right to modify their
>> system as Lenovo did here, but that is clearly out of scope of browsers.
>> --
> Like I said, the security model is broken and browser based apps can
> only handle low value data.
>
> Jeff
>




Re: The futile war between Native and Web

2015-02-16 Thread Michaela Merz

Well - there is no "direct" pathway between the browser and the chip
card terminal. And .. of course can the user manipulate all of
Javascript client-side, eg. patch variables to his liking. But that is
true (though harder) for any code that runs client side. The card
terminal could provide a rest api that would allow a browser to post the
amount to be paid into the terminal. That would be a very safe solution
- not only in regard to web, but in regard to any communications with
the card terminal as there would be now vulnerable code on the client.
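As a sketch, such a call could be as small as the following - the terminal's endpoint and payload format are purely hypothetical:

```js
// Hypothetical terminal endpoint and JSON shape; only the amount is posted,
// no card data ever touches the page.
fetch('https://terminal.local/api/payment', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ amount: 19.99, currency: 'EUR' })
}).then(function (response) {
  if (!response.ok) throw new Error('terminal rejected the request');
  return response.json();               // e.g. a transaction id
}).then(function (result) {
  console.log('terminal accepted the payment request', result);
}).catch(function (err) {
  console.error(err);
});
```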

mm.


On 02/16/2015 10:19 AM, Anders Rundgren wrote:
> On 2015-02-16 16:54, Michaela Merz wrote:
>> This discussion is (in part) superfluous. Because a lot of people and
organizations are using the web even for the most secure applications.
Heck - they even send confidential data via plain old e-mail - they
would even use AOL if that would still be possible - in other words:
Most simply don't care.  The web is THE universal applicable platform
for .. well .. everything.  So - it's the job of the browser vendors in
cooperation with the web-developers to provide an environment that is up
to the task. And I strongly believe that a safe and secure JavaScript
environment is achievable as long as the browsers do their part (strict
isolation between tabs would be such a thing).
>
> On paper it is doable, in reality it is not.
>
> You would anyway end-up with proprietary "AppStores" with granted
"Apps" and then I don't really see the point insisting on using
web-technology anymore.
> General code-signing as used in Windows applications doesn't help; it
is just one more OK button to click before running.
>
> Anders
>
>>
>> I am aware of the old notion, that JavaScript crypto is not "safe".
But I say it *can* be.  CSP is a huge leap forward to make the browser
a safe place for the handling of confidential data.
>>
>> Michaela
>>
>> On 02/16/2015 03:40 AM, Anders Rundgren wrote:
>> > On 2015-02-16 09:34, Anne van Kesteren wrote:
>> >> On Sun, Feb 15, 2015 at 10:59 PM, Jeffrey Walton
 wrote:
>> >>> For the first point, Pinning with Overrides
>> >>> (tools.ietf.org/html/draft-ietf-websec-key-pinning) is a perfect
>> >>> example of the wrong security model. The organizations I work
with did
>> >>> not drink the Web 2.0 koolaide, its its not acceptable to them
that an
>> >>> adversary can so easily break the secure channel.
>> >>
>> >> What would you suggest instead?
>> >>
>> >>
>> >>> For the second point, and as a security architect, I regularly reject
>> >>> browser-based apps that operate on medium and high value data because
>> >>> we can't place the security controls needed to handle the data. The
>> >>> browser based apps are fine for low value data.
>> >>>
>> >>> An example of the lack of security controls is device
provisioning and
>> >>> client authentication. We don't have protected or isolated storage,
>> >>> browsers can't safely persist provisioning shared secrets, secret
>> >>> material is extractable (even if marked non-extractable), browsers
>> >>> can't handle client certificates, browsers are more than happy to
>> >>> cough up a secret to any server with a certificate or public key
(even
>> >>> the wrong ones), ...
>> >>
>> >> So you would like physical storage on disk to be segmented by eTLD+1
>> >> or some such?
>> >>
>> >> As for the certificate issues, did you file bugs?
>> >>
>> >>
>> >> I think there definitely is interest in making the web suitable for
>> >> this over time. It would help if the requirements were documented
>> >> somewhere.
>> >
>> > There are no universal and agreed-upon requirements for dealing with
>> > client-certificates which is why this has been carried out in the past
>> > through proprietary plugins.  These have now been outlawed (for good
>> > reasons), but no replacement has been considered.
>> >
>> > There were some efforts recently
>> > http://www.w3.org/2012/webcrypto/webcrypto-next-workshop/
>> > which though were rejected by Mozilla, Google and Facebook.
>> >
>> > And there we are...which I suggest a "short-cut":
>> >
https://lists.w3.org/Archives/Public/public-web-intents/2015Feb/.html
>> > which initially was pointed out by Ryan Sleevy:
>> >
https://lists.w3.org/Archives/Public/public-webcrypto-comments/2015Jan/.html
>> >
>> > Anders
>> >
>>
>>
>
>




Re: The futile war between Native and Web

2015-02-16 Thread Michaela Merz
This discussion is (in part) superfluous, because a lot of people and
organizations are using the web even for the most secure applications.
Heck - they even send confidential data via plain old e-mail - they
would even use AOL if that were still possible - in other words:
Most simply don't care.  The web is THE universally applicable platform
for .. well .. everything.  So - it's the job of the browser vendors, in
cooperation with the web developers, to provide an environment that is up
to the task. And I strongly believe that a safe and secure JavaScript
environment is achievable as long as the browsers do their part (strict
isolation between tabs would be one such measure).

I am aware of the old notion that JavaScript crypto is not "safe". But
I say it *can* be.  CSP is a huge leap forward to make the browser a
safe place for the handling of confidential data.

Michaela

On 02/16/2015 03:40 AM, Anders Rundgren wrote:
> On 2015-02-16 09:34, Anne van Kesteren wrote:
>> On Sun, Feb 15, 2015 at 10:59 PM, Jeffrey Walton 
wrote:
>>> For the first point, Pinning with Overrides
>>> (tools.ietf.org/html/draft-ietf-websec-key-pinning) is a perfect
>>> example of the wrong security model. The organizations I work with did
>>> not drink the Web 2.0 kool-aid; it's not acceptable to them that an
>>> adversary can so easily break the secure channel.
>>
>> What would you suggest instead?
>>
>>
>>> For the second point, and as a security architect, I regularly reject
>>> browser-based apps that operate on medium and high value data because
>>> we can't place the security controls needed to handle the data. The
>>> browser based apps are fine for low value data.
>>>
>>> An example of the lack of security controls is device provisioning and
>>> client authentication. We don't have protected or isolated storage,
>>> browsers can't safely persist provisioning shared secrets, secret
>>> material is extractable (even if marked non-extractable), browsers
>>> can't handle client certificates, browsers are more than happy to
>>> cough up a secret to any server with a certificate or public key (even
>>> the wrong ones), ...
>>
>> So you would like physical storage on disk to be segmented by eTLD+1
>> or some such?
>>
>> As for the certificate issues, did you file bugs?
>>
>>
>> I think there definitely is interest in making the web suitable for
>> this over time. It would help if the requirements were documented
>> somewhere.
>
> There are no universal and agreed-upon requirements for dealing with
> client-certificates which is why this has been carried out in the past
> through proprietary plugins.  These have now been outlawed (for good
> reasons), but no replacement has been considered.
>
> There were some efforts recently
> http://www.w3.org/2012/webcrypto/webcrypto-next-workshop/
> which, however, were rejected by Mozilla, Google and Facebook.
>
> And there we are... for which I suggest a "short-cut":
> https://lists.w3.org/Archives/Public/public-web-intents/2015Feb/.html
> which initially was pointed out by Ryan Sleevi:
>
https://lists.w3.org/Archives/Public/public-webcrypto-comments/2015Jan/.html
>
> Anders
>




Re: [clipboard] Feature detect Clipboard API support?

2015-02-11 Thread Michaela Merz

AFAIK, you can't trigger a clipboard request without human interaction.

 $('#element').off().on('click', function(e) {
     // Build a synthetic copy event carrying the data to put on the clipboard.
     var clip = new ClipboardEvent('copy');
     clip.clipboardData.setData('text/plain', 'some data');
     clip.preventDefault();
     e.target.dispatchEvent(clip);
 });

This unfortunately won't work in my environment since my code is not
'trusted'.

m.

On 02/11/2015 12:21 PM, James M. Greene wrote:

> The current spec still leaves me a bit unclear about whether 
> implementors must include the ability to feature detect Clipboard 
> API support, which I think is a critical requirement.
> 
> In particular, I /need/ to be able to detect support for the 
> Clipboard API (click-to-copy support, in particular) in advance
> and without the user's interaction in order to know if I need to
> load a Flash fallback or not.
> 
> If this is even /possible/ based on the current spec, the only way 
> I can see that might make that possible is if executing 
> `document.execCommand("copy")` synthetically (without user 
> interaction) MUST still fire the `beforecopy`/`copy` events [but 
> obviously not execute the associated default action since it must 
> not be authorized to inject into the clipboard].  However, I don't 
> feel that the spec defines any clear stance on that.
> 
> Example detection attempt (more verbose version on JSBin [1]):
> 
> ```js
> var execResult, isSupported = false;
> 
> if (typeof window.ClipboardEvent !== "undefined" && window.ClipboardEvent) {
>   var checkSupport = function(e) {
>     isSupported = !!e && e.type === "copy" && !!e.clipboardData &&
>       e instanceof window.ClipboardEvent;
>     document.removeEventListener("copy", checkSupport, false);
>   };
>   document.addEventListener("copy", checkSupport, false);
> 
>   try {
>     execResult = document.execCommand("copy");
>   } catch (e) {
>     execResult = e;
>   }
> 
>   document.removeEventListener("copy", checkSupport, false);
> 
>   // Should I care about the `execResult` value for feature testing?
>   // I don't think so.
> 
>   if (!isSupported) {
>     // Fallback to Flash clipboard usage
>   }
> }
> ```
> 
> 
> This currently yields poor results, as well as arguably false 
> positives for `window.ClipboardEvent` (conforming to an earlier 
> version of the spec, perhaps?) in Firefox 22+ (22-35, currently) 
> and pre-Blink Opera (<15).
> 
> It also causes security dialogs to pop up in IE9-11 when invoking 
> `document.execCommand("copy")` if you do not first verify that 
> `window.ClipboardEvent` is present. That is obviously harmful to 
> user experience.
> 
> Can we agree upon some consistent feature detection technique to 
> add to the spec that can be guaranteed to yield correct results?
> I would love it if it were as simple as verifying that 
> `window.ClipboardEvent` existed but, as I mentioned, that is 
> already yielding false positives today.
> 
> 
> 
> [1]: http://jsbin.com/davoxa/edit?html,js,output
> 
> 
> 
> Sincerely, James Greene




Re: do not deprecate synchronous XMLHttpRequest

2015-02-10 Thread Michaela Merz

That is good news indeed. And I am glad to help.

m.



On 02/10/2015 03:02 PM, Jonas Sicking wrote:
> On Tue, Feb 10, 2015 at 12:43 PM, Michaela Merz 
>  wrote:
>> Blobs are immutable but it would be cool to have blob 'pipes' or
>> FIFOs allowing us to stream from those pipes by feeding them via
>> AJAX.
> 
> Since it sounds like you want to help with this, there's good
> news! There's an API draft available. It would be very helpful to
> have web developers, such as yourself, try it out and find any
> rough edges before it ships in browsers.
> 
> There's a spec and a prototype implementation over at 
> https://github.com/whatwg/streams/
> 
> It would be very helpful if you try it out and report any problems 
> that you find or share any cool demos that you create.
> 
> / Jonas
> 
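For reference, a small sketch of such a "blob pipe" against the whatwg/streams draft linked above; the draft was still in flux, so the constructor and reader methods shown here may differ from what eventually ships, and the URLs are placeholders:

```js
// Feed a ReadableStream from successive fetches (the "pipe"), then consume
// it chunk by chunk without ever materializing one big immutable blob.
var pipe = new ReadableStream({
  start: function (controller) {
    var urls = ['/chunk1.bin', '/chunk2.bin'];          // placeholders
    return urls.reduce(function (p, url) {
      return p.then(function () {
        return fetch(url)
          .then(function (r) { return r.arrayBuffer(); })
          .then(function (buf) { controller.enqueue(new Uint8Array(buf)); });
      });
    }, Promise.resolve()).then(function () { controller.close(); });
  }
});

var reader = pipe.getReader();
(function pump() {
  return reader.read().then(function (result) {
    if (result.done) return;
    console.log('got chunk of', result.value.byteLength, 'bytes');
    return pump();
  });
})();
```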




Re: do not deprecate synchronous XMLHttpRequest

2015-02-10 Thread Michaela Merz
Marc:

It's not about getting rid of badly designed APIs. It's about the feeling
of not being taken seriously. Web developers are the people who have
to use the available browser technologies to provide what users want.
And often we can't oblige because - well, browsers don't implement it,
for whatever reason.

Examples: Safari doesn't allow the export of arbitrary data blobs into
the file system. This is a major problem and has been reported numerous
times on their bug tracker - to no avail. What good is a "maybe" for
canplay on media files? Why can we still not paste directly into the
clipboard? Blobs are immutable, but it would be cool to have blob
'pipes' or FIFOs allowing us to stream from those pipes by feeding them
via AJAX.

It would really be great if browser developers were more open to
suggestions from the web developer community. We are a team, and both
groups should cooperate better for the benefit of all web users.

m.


On 02/10/2015 02:01 PM, Marc Fawzi wrote:
> <<
> Reminds me on the days when
> Microsoft was trying to tell me what's good and what's not good.
>>>
> 
> At least Microsoft didn't put a backdoor in Windows that can divulge
> your local IP (within a LAN) to the outside world. They call it WebRTC.
> If you want something to complain about there are far more troubling
> things than the well intended effort to rid the web of APIs that are
> simply badly designed...
> 
> 
> 
> On Tue, Feb 10, 2015 at 11:51 AM, Michaela Merz
> wrote:
> 
> 
> Interesting notion. Thanks for sharing. Reminds me on the days when
> Microsoft was trying to tell me what's good and what's not good.
> 
> m.
> 
> 
> 
> On 02/10/2015 12:10 PM, Florian Bösch wrote:
> > On Tue, Feb 10, 2015 at 4:24 PM, Glenn Adams  wrote:
> >
> > Morality should not be legislated!
> >
> >
> > Browser vendors can (and do) do whatever they please. You're free
> > to start your own browser and try getting it among the people.
> > Legislation doesn't enter the picture, you have free choice in
> > every respect. It's everybody's prerogative to publish software
> > in source or compiled form however they see fit. Hyperbole much?
> 
> 
> 




Re: do not deprecate synchronous XMLHttpRequest

2015-02-10 Thread Michaela Merz

Interesting notion. Thanks for sharing. Reminds me of the days when
Microsoft was trying to tell me what's good and what's not good.

m.



On 02/10/2015 12:10 PM, Florian Bösch wrote:
> On Tue, Feb 10, 2015 at 4:24 PM, Glenn Adams  wrote:
> 
> Morality should not be legislated!
> 
> 
> Browser vendors can (and do) do whatever they please. You're free
> to start your own browser and try getting it among the people.
> Legislation doesn't enter the picture, you have free choice in
> every respect. It's everybody's prerogative to publish software
> in source or compiled form however they see fit. Hyperbole much?




Re: do not deprecate synchronous XMLHttpRequest

2015-02-10 Thread Michaela Merz

LOL .. good one. But it's not only about whether or not sync XHR should be
deprecated. It's also about the focus and scope of the browser teams' work.

m.


On 02/10/2015 11:28 AM, Marc Fawzi wrote:
> Here is a really bad idea:
> 
> Launch an async xhr and monitor its readyState in a while loop and don't exit 
> the loop till it has finished.
> 
> Easier than writing charged emails. Less drain on the soul. 
> 
> Sent from my iPhone
> 
>> On Feb 10, 2015, at 8:48 AM, Michaela Merz  
>> wrote:
>>
>> No argument in regard to the problems that might arise from using sync
>> calls.  But it is IMHO not the job of the browser developers to decide
>> who can use what, when and why. It is up to the guys (or gals) coding a
>> web site to select an appropriate AJAX call to get the job done.
>>
>> Once again: Please remember that it is your job to make my (and
>> countless other web developers) life easier and to give us more
>> choices, more possibilities to do cool stuff. We appreciate your work.
>> But most of us don't need hard-coded education in regard to the way we
>> think that web apps and services should be created.
>>
>> m.
>>
>>> On 02/10/2015 08:47 AM, Ashley Gullen wrote:
>>> I am on the side that synchronous AJAX should definitely be
>>> deprecated, except in web workers where sync stuff is OK.
>>>
>>> Especially on the modern web, there are two really good
>>> alternatives: - write your code in a web worker where synchronous
>>> calls don't hang the browser - write async code which doesn't hang
>>> the browser
>>>
>>> With modern tools like Promises and the new Fetch API, I can't
>>> think of any reason to write a synchronous AJAX request on the main
>>> thread, when an async one could have been written instead with
>>> probably little extra effort.
>>>
>>> Alas, existing codebases rely on it, so it cannot be removed
>>> easily. But I can't see why anyone would argue that it's a good
>>> design principle to make possibly seconds-long synchronous calls on
>>> the UI thread.
>>>
>>>
>>>
>>>
>>> On 9 February 2015 at 19:33, George Calvert  wrote:
>>>
>>> I third Michaela and Gregg.
>>>
>>>
>>>
>>> It is the app and site developers' job to decide whether the user 
>>> should wait on the server — not the standard's and, 99.9% of the 
>>> time, not the browser's either.
>>>
>>>
>>>
>>> I agree a well-designed site avoids synchronous calls.  BUT —
>>> there still are plenty of real-world cases where the best choice is
>>> having the user wait: Like when subsequent options depend on the
>>> server's reply.  Or more nuanced, app/content-specific cases where
>>> rewinding after an earlier transaction fails is detrimental to the
>>> overall UX or simply impractical to code.
>>>
>>>
>>>
>>> Let's focus our energies elsewhere — dispensing with browser 
>>> warnings that tell me what I already know and with deprecating 
>>> features that are well-entrenched and, on occasion, incredibly 
>>> useful.
>>>
>>>
>>>
>>> Thanks, George Calvert
>>
>>




Re: do not deprecate synchronous XMLHttpRequest

2015-02-10 Thread Michaela Merz
No argument in regard to the problems that might arise from using sync
calls.  But it is IMHO not the job of the browser developers to decide
who can use what, when and why. It is up to the guys (or gals) coding a
web site to select an appropriate AJAX call to get the job done.

Once again: Please remember that it is your job to make my (and
countless other web developers) life easier and to give us more
choices, more possibilities to do cool stuff. We appreciate your work.
But most of us don't need hard-coded education in regard to the way we
think that web apps and services should be created.

m.

On 02/10/2015 08:47 AM, Ashley Gullen wrote:
> I am on the side that synchronous AJAX should definitely be
> deprecated, except in web workers where sync stuff is OK.
> 
> Especially on the modern web, there are two really good alternatives:
> - write your code in a web worker where synchronous calls don't hang the browser
> - write async code which doesn't hang the browser
> 
> With modern tools like Promises and the new Fetch API, I can't
> think of any reason to write a synchronous AJAX request on the main
> thread, when an async one could have been written instead with
> probably little extra effort.
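A sketch of the async pattern described above, chaining dependent requests sequentially with promises and fetch instead of blocking the main thread; the URLs, JSON fields and element id are placeholders:

```js
// Sequential, dependent loads without sync XHR: each step only starts once
// the previous response has arrived. All names below are placeholders.
fetch('/config.json')
  .then(function (r) { return r.json(); })
  .then(function (config) {
    return fetch(config.headersUrl).then(function (r) { return r.json(); });
  })
  .then(function (headers) {
    return fetch(headers.contentUrl).then(function (r) { return r.text(); });
  })
  .then(function (content) {
    document.querySelector('#output').textContent = content;
  })
  .catch(function (err) {
    console.error('load failed:', err);
  });
```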
> 
> Alas, existing codebases rely on it, so it cannot be removed
> easily. But I can't see why anyone would argue that it's a good
> design principle to make possibly seconds-long synchronous calls on
> the UI thread.
> 
> 
> 
> 
> On 9 February 2015 at 19:33, George Calvert  wrote:
> 
> I third Michaela and Gregg.
> 
>
> 
> It is the app and site developers' job to decide whether the user 
> should wait on the server — not the standard's and, 99.9% of the 
> time, not the browser's either.
> 
>
> 
> I agree a well-designed site avoids synchronous calls.  BUT —
> there still are plenty of real-world cases where the best choice is
> having the user wait: Like when subsequent options depend on the
> server's reply.  Or more nuanced, app/content-specific cases where
> rewinding after an earlier transaction fails is detrimental to the
> overall UX or simply impractical to code.
> 
>
> 
> Let's focus our energies elsewhere — dispensing with browser 
> warnings that tell me what I already know and with deprecating 
> features that are well-entrenched and, on occasion, incredibly 
> useful.
> 
>
> 
> Thanks, George Calvert
> 
> 




Re: Violation of mail list policy [Was: Fwd: Re: do not deprecate synchronous XMLHttpRequest]

2015-02-09 Thread Michaela Merz
I do apologize for the unfortunate selection of some words in this
posting. It was not my intention to attack, to insult or to offend anybody.

Michaela



On 02/09/2015 12:53 PM, Arthur Barstow wrote:
> Michaela,
>
> Some of the language you used in [1] is offensive. Per the group's
mail list etiquette policy ([2]), please do not use such language on
WebApps' mail lists.
>
> -Regards, AB
>
> [1]
https://lists.w3.org/Archives/Public/public-webapps/2015JanMar/0538.html
> [2]
https://www.w3.org/2008/webapps/wiki/WorkMode#Mail_List_Policy.2C_Usage.2C_Etiquette.2C_etc.
>
>




Re: do not deprecate synchronous XMLHttpRequest

2015-02-06 Thread Michaela Merz
Florian:

I ain't got a problem with synchronous calls. It's just that I had the
need to rant because the rift between you guys and simple developer
folks is getting deeper every day. If somebody fucks up his web site
because he doesn't get the differences between asynchronous and
synchronous calls, that's his prerogative.

m.






On 02/06/2015 12:50 PM, Florian Bösch wrote:
> On Fri, Feb 6, 2015 at 7:38 PM, Michaela Merz  wrote:
>
> it would be the job of the browser development community to find a
way to make such calls less harmful.
>
> If there was a way to make synchronous calls less harmful, it'd have
been implemented a long time ago. There isn't.
>
> You could service synchronous semantics with co-routine based
schedulers. It wouldn't block the main thread, but there'd still be
nothing going on while your single-threaded code waits for the XHR to
complete, and so it's still bad UX. Solving the bad UX would require you
to deal with the scheduler (spawn microthreads that do other things so
it's not bad UX). Regardless, ES-discuss isn't fond of co-routines, so
that's not gonna happen.




Re: do not deprecate synchronous XMLHttpRequest

2015-02-06 Thread Michaela Merz
Ryosuke:

I understand the reasoning behind the thought. But it is IMHO not the
job of browser implementers to educate web developers or to tell
them how things should (not) be done. All I am asking is to keep in
mind that it is us who actually make the content - the very reason for
browsers to be developed and improved. And - seeing the e-mail address
and hoping that you have some influence on the development of "Safari":
Please make the necessary improvements so that Safari can be used in a
highly complex script environment. That includes IndexedDB/FileHandle
and the possibility to export and store arbitrary blobs of data into
the file system (e.g. createObjectURL for any kind of data). Thanks.

m.



On 02/06/2015 12:30 PM, Ryosuke Niwa wrote:
>
>> On Feb 6, 2015, at 9:27 AM, Michaela Merz
 wrote:
>>
>> Well .. may be some folks should take a deep breath and think what
they are doing. I am 'just' coding web services and too often I find
myself asking: Why did the guys think that this would make sense?
Indexeddb is such a case. It might be a clever design, but it's horrible
from a coders perspective.
>>
>> Would it have been the end of the world to stick with some kind of
database language most coders already are familiar with? Same with (sand
boxed) file system access. Google went the right way with functions
trying to give us what we already knew: files, dirs, read, write,
append.  But that's water under the bridge.
>>
>> I have learned to code my stuff in a way that I have to invest time
and work so that my users don't have to. This is IMHO a good approach.
>
> In that regard, I'm on the same boat.  I still write simple web apps
in PHP with PostgreSQL instead of Scala/Ruby and a non-schema database
today.  So I totally understand your sentiment.  However,
>
>> Unfortunately - some people up the chain have a different approach.
Synchronous calls are bad. Get rid of them. Don't care if developers
have a need for it. Why bother. Our way or highway. Frankly - I find
that offensive.  If you believe that synchronous calls are too much of a
problem for the browser, find a way for the browser to deal with it.
>
> The problem isn't so much that it causes a problem for browser
implementors but rather it results in poor user experience.  While a
synchronous XHR is in flight, the user cannot interact with the page at
all.  With a spinner or similar UI indicating that the user has to wait,
the user can at least still click on hyperlinks and so forth.
>
> Since browser vendors (and I hope so are other participants of the
working group) are interested in providing better user experience, we
would like websites to use asynchronous XHR.
>
> Having said that, don't worry.  Synchronous XHR isn't going away
anytime soon.  As long as real websites are using synchronous XHR,
browser vendors aren't going to remove/unsupport it.
>
> - R. Niwa
>
>




Re: do not deprecate synchronous XMLHttpRequest

2015-02-06 Thread Michaela Merz
Well yeah. But the manufacturer of your audio equipment doesn't come
into your home to yank the player out of your setup. But that's not
really the issue here. We're talking about technology that is being
developed so that people like me can build good content. As long as
there are a lot of people out there using synchronous calls, it would be
the job of the browser development community to find a way to make such
calls less harmful.

Michaela


On 02/06/2015 12:28 PM, Marc Fawzi wrote:
> I have several 8-track tapes from the early-to-mid 70s that I'm really
> fond of. They are bigger than my iPod. Maybe I can build an adapter
> with mechanical parts, magnetic reader and A/D convertor etc. But
> that's my job, not Apple's job.
>
> The point is, old technologies die all the time, and people who want
> to hold on to old content and have it play on the latest player
> (browser) need to either recode the content or build
> adapters/hosts/wrappers such that the old content will think it's
> running in the old player.  
>
> As far as stuff we build today, we have several options for waiting
> until an ajax response comes back, and I'm not sure why we'd want to block
> everything until it does. It sounds unreasonable. There are legitimate
> scenarios for blocking the event loop but not when it comes to
> fetching data from a server. 
>
>
>
>
>
> On Fri, Feb 6, 2015 at 9:27 AM, Michaela Merz
> wrote:
>
>
> Well .. may be some folks should take a deep breath and think what
> they are doing. I am 'just' coding web services and too often I
> find myself asking: Why did the guys think that this would make
> sense? Indexeddb is such a case. It might be a clever design, but
> it's horrible from a coders perspective.
>
> Would it have been the end of the world to stick with some kind of
> database language most coders already are familiar with? Same with
> (sand boxed) file system access. Google went the right way with
> functions trying to give us what we already knew: files, dirs,
> read, write, append.  But that's water under the bridge.
>
> I have learned to code my stuff in a way that I have to invest
> time and work so that my users don't have to. This is IMHO a good
> approach. Unfortunately - some people up the chain have a
> different approach. Synchronous calls are bad. Get rid of them.
> Don't care if developers have a need for it. Why bother. Our way
> or highway. Frankly - I find that offensive.  If you believe that
> synchronous calls are too much of a problem for the browser, find
> a way for the browser to deal with it.
>
> Building browsers and adding functionality is not and end in
> itself. The purpose is not to make cool stuff. We don't need
> people telling us what we are allowed to do. Don't get me wrong: I
> really appreciate your work and I am exited about what we can do
> in script nowadays. But please give more thought to the folks
> coding web sites. We are already dealing with a wide variety of
> problems: From browser incompatibilities, to responsive designs,
> server side development, sql, memcached, php, script - you name
> it. Try to make our life easier by keeping stuff simple and
> understandable even for those, who don't have the appreciation or
> the understanding what's going on under the hood of a browser.
>
> Thanks.
>
> Michaela
>
>
>
>
>
> On 02/06/2015 09:54 AM, Florian Bösch wrote:
> >
> > I had an Android device, but now I have an iPhone. In
> addition to the popup problem, and the fake "X" on ads, the iPhone
> browsers (Safari, Chrome, Opera) will start to show a site, then
> they will lock up for 10-30 seconds before finally becoming
> responsive.
> >
> >
> > Via. Ask Slashdot:
> 
> http://ask.slashdot.org/story/15/02/04/1626232/ask-slashdot-gaining-control-of-my-mobile-browser
> >
> > Note: Starting with Gecko 30.0 (Firefox 30.0 / Thunderbird
> 30.0 / SeaMonkey 2.27), synchronous requests on the main thread
> have been deprecated due to the negative effects to the user
> experience.
> >
> >
> > 
> > Via
> 
> https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/Synchronous_and_Asynchronous_Requests
> >
> > Heads up! The XMLHttpRequest2 spec was recently changed to
> prohibit sending a synchronous request when xhr.responseType is
> set. The idea behind the change is to help mitigate further usage
> of synchronous xhrs wherever possible.
> >
> >
> > Via
> http://updates.html5rocks.com/2012/01/Getting-Rid-of-Synchronous-XHRs
> >
> > 
>
>
>



Re: do not deprecate synchronous XMLHttpRequest

2015-02-06 Thread Michaela Merz

Well .. maybe some folks should take a deep breath and think about what
they are doing. I am 'just' coding web services and too often I find myself
asking: Why did the guys think that this would make sense? IndexedDB is
such a case. It might be a clever design, but it's horrible from a
coder's perspective.

Would it have been the end of the world to stick with some kind of
database language most coders are already familiar with? Same with
(sandboxed) file system access. Google went the right way with functions
trying to give us what we already knew: files, dirs, read, write,
append.  But that's water under the bridge.

I have learned to code my stuff in a way that I have to invest time and
work so that my users don't have to. This is IMHO a good approach.
Unfortunately - some people up the chain have a different approach.
Synchronous calls are bad. Get rid of them. Don't care if developers
have a need for it. Why bother. Our way or the highway. Frankly - I find
that offensive.  If you believe that synchronous calls are too much of a
problem for the browser, find a way for the browser to deal with it.

Building browsers and adding functionality is not an end in itself. The
purpose is not to make cool stuff. We don't need people telling us what
we are allowed to do. Don't get me wrong: I really appreciate your work
and I am excited about what we can do in script nowadays. But please give
more thought to the folks coding web sites. We are already dealing with
a wide variety of problems: from browser incompatibilities to
responsive design, server-side development, SQL, memcached, PHP, script
- you name it. Try to make our life easier by keeping stuff simple and
understandable even for those who don't have the appreciation or the
understanding of what's going on under the hood of a browser.

Thanks.

Michaela




On 02/06/2015 09:54 AM, Florian Bösch wrote:
>
> I had an Android device, but now I have an iPhone. In addition to
the popup problem, and the fake "X" on ads, the iPhone browsers (Safari,
Chrome, Opera) will start to show a site, then they will lock up for
10-30 seconds before finally becoming responsive.
>
>
> Via. Ask Slashdot:
http://ask.slashdot.org/story/15/02/04/1626232/ask-slashdot-gaining-control-of-my-mobile-browser
>
> Note: Starting with Gecko 30.0 (Firefox 30.0 / Thunderbird 30.0 /
SeaMonkey 2.27), synchronous requests on the main thread have been
deprecated due to the negative effects to the user experience.
>
>
> 
> Via
https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/Synchronous_and_Asynchronous_Requests
>
> Heads up! The XMLHttpRequest2 spec was recently changed to
prohibit sending a synchronous request when xhr.responseType is set. The
idea behind the change is to help mitigate further usage of synchronous
xhrs wherever possible.
>
>
> Via http://updates.html5rocks.com/2012/01/Getting-Rid-of-Synchronous-XHRs
>
> 




Re: do not deprecate synchronous XMLHttpRequest

2015-02-06 Thread Michaela Merz

I second Gregg's suggestion. It should be up to the developer to decide
whether he wants to block or not.


On 02/05/2015 08:58 PM, Gregg Tracton wrote:
> I disagree with deprecating synchronous XMLHttpRequest:
>
> 1) it is not upward compatible & so can break numerous sites.
> Many websites do not have active development, and framework updates
> that fix this are even slower to roll out to web apps.  Many web
> app clients would much prefer a sub-optimal experience than a
> broken website.
>
> 2) A better way to approach this might be to respect the async=false
> setting but have the browser move the script thread to another thread which
> is blocked until the jax (not ajax anymore) completes.  Make the browser do
> the heavy lifting so scripts remain simple.
>
> 3) Loading long chains of on-demand content becomes unnecessarily complex.
> Example: a config file that specifies URLs for column headers which specify
>  URLs for content requires 3 nested .success handlers.  With async=false,
>  one can simply write those sequentially.
>
> 4) Has it been considered whether jQuery can create a work-around to simulate
> async=false?  If not, do not deprecate, as there will be even more
> browser-specific code splintering.
>
> 5) When data loads slowly, many sites will show a "please wait"
> view anyway, which disables useful interactions, so how much value
> does this deprecation add to usability?
>
> 6) Do you really want script writers to deal with scroll events while
> an ajax is outstanding?  That seems to be beyond the ability of a plug-in
> to handle in the general case. async=false really simplifies some tasks.
>
> --Gregg Tracton, Chapel Hill, NC, USA
>
>
>
>




Re: Security use cases for packaging

2015-01-29 Thread Michaela Merz

Pardon my French, but the whole idea is ridiculous. Web development is
fluid and flexible. While I most certainly understand the idea and the
need for secured loadable code (AFAIK I brought up this issue about 2
months ago), packaging and complicated signing is counterproductive.
What about external scripts like jQuery? Do I really need to download a
complete package because I fixed a stupid typo in one of the scripts?

Maybe I am completely on the wrong track here (please correct me if I
am) - but I think signed code should be handled completely differently -
thus preserving the flexibility of the LAMP/Script environment as we
know it.

Michaela



On 01/30/2015 03:22 AM, Daniel Kahn Gillmor wrote:
> On Thu 2015-01-29 20:14:59 -0500, Yan Zhu wrote:
>> A signed manifest-like package description that lists the hash and
>> location of every resource seems fine as long as all the resources are
>> downloaded and verified before running the app. Perhaps this kills
>> some of the performance benefits motivating packaging in the first
>> place. :(
> Why would you need to fetch all the pieces before running the app?
> Consider a manifest that includes an integrity check covering resources X, Y,
> and Z, but X is the only bit of code that runs first, and Y and Z aren't
> loaded.
>
> If you can validate the manifest, then you know you only run X if you've
> verified the manifest and X's integrity.  If the user triggers an action
> that requires resource Y, then you fetch it but don't use it unless it
> matches the integrity check.
>
> (i haven't developed webapps myself for ages, and the idea of a signed
> webapp is relatively new to me, so feel free to explain any obvious part
> that i'm missing)
>
> --dkg
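A rough sketch of the verify-before-use flow described above, assuming a manifest that simply maps resource URLs to SHA-256 digests; the digests are placeholders and the signature check over the manifest itself is left out:

```js
// Hypothetical manifest: URL -> expected SHA-256 digest (hex). In a real
// package this mapping would itself be covered by the signature.
var manifest = {
  '/app/x.js': 'placeholder-hex-digest-for-x',
  '/app/y.js': 'placeholder-hex-digest-for-y'
};

function fetchVerified(url) {
  return fetch(url)
    .then(function (r) { return r.arrayBuffer(); })
    .then(function (body) {
      return crypto.subtle.digest('SHA-256', body).then(function (digest) {
        var hex = Array.prototype.map.call(new Uint8Array(digest), function (b) {
          return ('0' + b.toString(16)).slice(-2);
        }).join('');
        if (hex !== manifest[url]) {
          throw new Error('integrity check failed for ' + url);
        }
        return body;   // only handed to the app once it matches the manifest
      });
    });
}

// Y is fetched lazily, only when the user triggers the action that needs it:
// fetchVerified('/app/y.js').then(function (buf) { /* use it */ });
```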





Re: What I am missing

2014-11-19 Thread Michaela Merz
Yes - it establishes provenance and protects against unauthorized 
manipulation. CSP is only as good as the content it protects. If the 
content has been manipulated server side - e.g. by unauthorized access - 
CSP is worthless.


Michaela



On 11/19/2014 10:03 AM, ☻Mike Samuel wrote:

Browser signature checking gives you nothing that CSP doesn't as far
as the security of pages composed from a mixture of content from
different providers.

As Florian points out, signing only establishes provenance, not any
interesting security properties.

I can always write a page that runs an interpreter on data loaded from
a third-party source even if that data is not loaded as script, so a
signed application is always open to confused deputy problems.

Signing is a red herring which distracts from the real problem: the
envelope model, in which secrets are scoped to the whole content of the
envelope, makes it hard to decompose a document into multiple trust
domains that reflect the fact that the document is really composed of
content with multiple different provenances.

By the envelope model, I mean the assumption that the user-agent
receives a document and a wrapper, and makes trust decisions for the
whole of the document based on the content of the wrapper.

The envelope does all of:
1. establishing a trust domain, the origin
2. bundling secrets, usually in cookies and headers
3. bundling content, possibly from multiple sources.

Ideally our protocols would be written in such a way that secrets can
be scoped to content with a way to allow nesting without inheritance.

This can be kludged on top of iframes, but only at the cost of a lot
of engineering effort.

On Wed, Nov 19, 2014 at 10:27 AM, Michaela Merz
 wrote:

I don't disagree. But what is wrong with the notion of introducing an
_additional_ layer of certification? Signed script and/or html would most
certainly make it way harder to de-face a website or sneak malicious code
into an environment.  I strongly believe that just for this reason alone, we
should think about signed content - even without additional potentially
unsafe functionality.

Michaela



On 11/19/2014 09:21 AM, Pradeep Kumar wrote:

Michaela,

As Josh said earlier, signing the code (somehow) will not enhance security.
It will open doors for more threats. It's better and more open, transparent
and in sync with the spirit of open web to give the control to end user and
not making them to relax today on behalf of other signing authorities.

On 19-Nov-2014 8:44 pm, "Michaela Merz"  wrote:

You are correct. But all those services are (thankfully) sand boxed or
read only. In order to make a browser into something even more useful, you
have to relax these security rules a bit. And IMHO that *should* require
signed code - in addition to the users consent.

Michaela



On 11/19/2014 09:09 AM, Pradeep Kumar wrote:

Even today, browsers ask for permission for geolocation, local storage,
camera etc... How it is different from current scenario?

On 19-Nov-2014 8:35 pm, "Michaela Merz" 
wrote:


That is relevant and also not so. Because Java applets silently grant
access to a out of sandbox functionality if signed. This is not what I am
proposing. I am suggesting a model in which the sandbox model remains intact
and users need to explicitly agree to access that would otherwise be
prohibited.

Michaela





On 11/19/2014 12:01 AM, Jeffrey Walton wrote:

On Wed, Nov 19, 2014 at 12:35 AM, Michaela Merz
 wrote:

Well .. it would be a "all scripts signed" or "no script signed" kind
of a
deal. You can download malicious code everywhere - not only as scripts.
Signed code doesn't protect against malicious or bad code. It only
guarantees that the code is actually from the the certificate owner ..
and
has not been altered without the signers consent.

Seems relevant: "Java’s Losing Security Legacy",
http://threatpost.com/javas-losing-security-legacy and "Don't Sign
that Applet!", https://www.cert.org/blogs/certcc/post.cfm?EntryID=158.

Dormann advises "don't sign" so that the code can't escape its sandbox
and it stays restricted (malware regularly signs to do so).








Re: What I am missing

2014-11-19 Thread Michaela Merz
How would an unsigned script be able to exploit functionality from a 
signed script if it's an either/or case - you either have all scripts 
signed or no extended features? And think about this: a website can be 
totally safe today and deliver exploits tomorrow without the user even 
noticing. It has happened before and it will happen again. Signed content 
would prevent this by warning the user about missing or wrong signatures 
- even if signed scripts did not add a single extended function. I 
understand that signing code is not the solution to all evils. 
But it would add another layer that needs to be broken if somebody gains 
access to a website and starts to modify code.


Michaela

On 11/19/2014 11:14 AM, Marc Fawzi wrote:

<<

So there is no way for an unsigned script to exploit security
holes in a signed script?

Of course there's a way. But by the same token, there's a way a signed 
script can exploit security holes in another signed script. Signing 
itself doesn't establish any trust, or security.

>>

Yup, that's also what I meant. Signing does not imply secure, but to 
the average non-technical user a "signed app from a trusted party" may 
convey both trust and security, so they wouldn't think twice about 
installing such a script even if it asked for some powerful 
permissions that can be exploited by another script.


<<

Funny you mention crypto currencies as an idea to get inspiration
from..."Trust but verify" is detached from that... a browser can
monitor what the signed scripts are doing and if it detects a
potentially malicious pattern it can halt the execution of the
script and let the user decide if they want to continue...

That's not working for a variety of reasons. The first reason is that 
identifying what a piece of software does intelligently is one of 
those really hard problems. As in Strong-AI hard.

>>

Well, the user can set up the rules for what is considered a malicious 
action, and there would be ready-made configurations (best 
practices codified in config) that would be the default in the 
browser. And then they can exempt certain scripts.


I realize this is an open ended problem and no solution is going to 
address it 100% ... It's the nature of open systems to be open to 
attacks but it's how the system deals with the attack that 
differentiates it. It's a wide open area of research I think, or 
should be.


But do we want a security model that's not extensible and not 
flexible? The answer is most likely NO.






On Tue, Nov 18, 2014 at 11:03 PM, Florian Bösch  wrote:


On Wed, Nov 19, 2014 at 7:54 AM, Marc Fawzi  wrote:

So there is no way for an unsigned script to exploit security
holes in a signed script?

Of course there's a way. But by the same token, there's a way a
signed script can exploit security holes in another signed script.
Signing itself doesn't establish any trust, or security.

Funny you mention crypto currencies as an idea to get
inspiration from..."Trust but verify" is detached from that...
a browser can monitor what the signed scripts are doing and if
it detects a potentially malicious pattern it can halt the
execution of the script and let the user decide if they want
to continue...

That's not working for a variety of reasons. The first reason is
that identifying what a piece of software does intelligently is
one of those really hard problems. As in Strong-AI hard. Failing
that, you can monitor what APIs a piece of software makes use of,
and restrict access to those. However, that's already satisfied
without signing by sandboxing. Furthermore, it doesn't entirely
solve the problem, as any Android user will know. You get a
ginormous list of permissions a given piece of software would
like to use and the user just clicks "yes". Alternatively, you get
malware that's not trustworthy, that nobody managed to properly
review, because the untrusted part was buried/hidden by the
author somewhere deep down, to activate only long after trust
extension by fiat has happened.

But even if you'd assume that this somehow would be an acceptable
model, what do you define as "malicious"? Reformatting your
machine would be malicious, but so would be posting on your
Facebook wall. What constitutes a malicious pattern is actually
more of a social than a technical problem.






Re: What I am missing

2014-11-19 Thread Michaela Merz


First: You don't have to sign your code. Second: We rely on 
"centralization" for TLS as well. Third: Third-party verification can be 
done within the community itself 
(https://www.eff.org/deeplinks/2014/11/certificate-authority-encrypt-entire-web).


Michaela


On 11/19/2014 09:41 AM, Anne van Kesteren wrote:

On Wed, Nov 19, 2014 at 4:27 PM, Michaela Merz
 wrote:

I don't disagree. But what is wrong with the notion of introducing an
_additional_ layer of certification?

Adding an additional layer of centralization.







Re: What I am missing

2014-11-19 Thread Michaela Merz


I don't disagree. But what is wrong with the notion of introducing an 
_additional_ layer of certification? Signed script and/or HTML would 
most certainly make it way harder to deface a website or sneak 
malicious code into an environment.  I strongly believe that, just for 
this reason alone, we should think about signed content - even without 
additional, potentially unsafe functionality.


Michaela


On 11/19/2014 09:21 AM, Pradeep Kumar wrote:


Michaela,

As Josh said earlier, signing the code (somehow) will not enhance 
security. It will open doors for more threats. It's better and more 
open, transparent and in sync with the spirit of open web to give the 
control to end user and not making them to relax today on behalf of 
other signing authorities.


On 19-Nov-2014 8:44 pm, "Michaela Merz"  wrote:


You are correct. But all those services are (thankfully) sand
boxed or read only. In order to make a browser into something even
more useful, you have to relax these security rules a bit. And
IMHO that *should* require signed code - in addition to the users
consent.

Michaela



On 11/19/2014 09:09 AM, Pradeep Kumar wrote:


Even today, browsers ask for permission for geolocation, local
storage, camera etc... How it is different from current scenario?

On 19-Nov-2014 8:35 pm, "Michaela Merz"  wrote:


That is relevant and also not so. Because Java applets
silently grant access to a out of sandbox functionality if
signed. This is not what I am proposing. I am suggesting a
model in which the sandbox model remains intact and users
need to explicitly agree to access that would otherwise be
prohibited.

Michaela





On 11/19/2014 12:01 AM, Jeffrey Walton wrote:

    On Wed, Nov 19, 2014 at 12:35 AM, Michaela Merz
wrote:

Well .. it would be a "all scripts signed" or "no
script signed" kind of a
deal. You can download malicious code everywhere -
not only as scripts.
Signed code doesn't protect against malicious or bad
code. It only
guarantees that the code is actually from the the
certificate owner .. and
has not been altered without the signers consent.

Seems relevant: "Java's Losing Security Legacy",
http://threatpost.com/javas-losing-security-legacy and
"Don't Sign
that Applet!",
https://www.cert.org/blogs/certcc/post.cfm?EntryID=158.

Dormann advises "don't sign" so that the code can't
escape its sandbox
and it stays restricted (malware regularly signs to do so).









Re: What I am missing

2014-11-19 Thread Michaela Merz
You are correct. But all those services are (thankfully) sandboxed or 
read-only. In order to make a browser into something even more useful, 
you have to relax these security rules a bit. And IMHO that *should* 
require signed code - in addition to the user's consent.


Michaela



On 11/19/2014 09:09 AM, Pradeep Kumar wrote:


Even today, browsers ask for permission for geolocation, local 
storage, camera etc... How it is different from current scenario?


On 19-Nov-2014 8:35 pm, "Michaela Merz"  wrote:



That is relevant and also not so. Because Java applets silently
grant access to a out of sandbox functionality if signed. This is
not what I am proposing. I am suggesting a model in which the
sandbox model remains intact and users need to explicitly agree to
access that would otherwise be prohibited.

Michaela





On 11/19/2014 12:01 AM, Jeffrey Walton wrote:

On Wed, Nov 19, 2014 at 12:35 AM, Michaela Merz
wrote:

Well .. it would be a "all scripts signed" or "no script
signed" kind of a
deal. You can download malicious code everywhere - not
only as scripts.
Signed code doesn't protect against malicious or bad code.
It only
guarantees that the code is actually from the the
certificate owner .. and
has not been altered without the signers consent.

Seems relevant: "Java's Losing Security Legacy",
http://threatpost.com/javas-losing-security-legacy and "Don't Sign
that Applet!",
https://www.cert.org/blogs/certcc/post.cfm?EntryID=158.

Dormann advises "don't sign" so that the code can't escape its
sandbox
and it stays restricted (malware regularly signs to do so).







Re: What I am missing

2014-11-19 Thread Michaela Merz


That is relevant and also not, because Java applets silently grant 
access to out-of-sandbox functionality if signed. This is not what I 
am proposing. I am suggesting a model in which the sandbox model remains 
intact and users need to explicitly agree to access that would otherwise 
be prohibited.


Michaela





On 11/19/2014 12:01 AM, Jeffrey Walton wrote:

On Wed, Nov 19, 2014 at 12:35 AM, Michaela Merz
 wrote:

Well .. it would be an "all scripts signed" or "no script signed" kind of a
deal. You can download malicious code everywhere - not only as scripts.
Signed code doesn't protect against malicious or bad code. It only
guarantees that the code is actually from the certificate owner .. and
has not been altered without the signer's consent.

Seems relevant: "Java’s Losing Security Legacy",
http://threatpost.com/javas-losing-security-legacy and "Don't Sign
that Applet!", https://www.cert.org/blogs/certcc/post.cfm?EntryID=158.

Dormann advises "don't sign" so that the code can't escape its sandbox
and it stays restricted (malware regularly signs to do so).





Re: What I am missing

2014-11-19 Thread Michaela Merz
I am not sure if I understand your question. Browsers can't be code 
servers, at least not today.


Michaela



On 11/19/2014 08:43 AM, Pradeep Kumar wrote:


How can browsers be code servers? Could you please explain a 
little more...


On 19-Nov-2014 7:51 pm, "Michaela Merz" <michaela.m...@hermetos.com> wrote:


Thank you Jonas. I was actually thinking about the security model of
FirefoxOS or Android apps. We write powerful "webapps" nowadays. And
with "webapps" I mean regular web pages with a lot of script/html5
functionality. The browsers are fast enough to do a variety of things:
from running a linux kernel, to playing dos-games, doing crypto,
decoding and streaming mp3. I understand a browser to be an operating
system on top of an operating system. But the need to protect the user
is a problem if you want to go beyond what is possible today.

I am asking to consider a model where a signed script package notifies
a user about its origin and signer and may even ask the user for special
permissions like direct file system access or raw networking sockets or
anything else that would, for safety reasons, not be possible today.

The browser would remember the origin IP and the signature of the script
package and would re-ask for permission if something changes. It would
refuse to run if the signature isn't valid or has expired.

It wouldn't change a thing in regard to updates. You would just have to
re-sign your code before you make it available. I used to work a lot
with java applets (signed and un-signed) in the old days, and I am
working with android apps today. Signing is just another step in the
work chain.

Signed code is the missing last element in the CSP/TLS environment.
Let's make the browser into something that can truly be seen as an
alternative operating system on top of an operating system.

Michaela

On 11/19/2014 08:33 AM, Jonas Sicking wrote:
> On Tue, Nov 18, 2014 at 7:40 PM, Boris Zbarsky <bzbar...@mit.edu> wrote:
>> On 11/18/14, 10:26 PM, Michaela Merz wrote:
>>> First: We need signed script code.
>> For what it's worth, Gecko supported this for a while.  See
>> <http://www-archive.mozilla.org/projects/security/components/signed-scripts.html>.
>> In practice, people didn't really use it, and it made the security model a
>> _lot_ more complicated and hard to reason about, so the feature was dropped.
>>
>> It would be good to understand how proposals along these lines differ from
>> what's already been tried and failed.
> The way we did script signing back then was nutty in several ways. The
> signing we do in FirefoxOS is *much* simpler. Simple enough that no
> one has complained about the complexity that it has added to Gecko.
>
> Sadly, enhanced security models that use signing by a trusted party
> inherently lose a lot of the advantages of the web. It means that
> you can't publish a new version of your website by simply uploading
> files to your webserver whenever you want. And it means that you can't
> generate the script and markup that make up your website dynamically
> on your webserver.
>
> So I'm by no means arguing that FirefoxOS has the problem of signing solved.
>
> Unfortunately, no one has been able to solve the problem of how to
> grant web content access to capabilities like raw TCP or UDP sockets
> in order to access legacy hardware and protocols, or how to get
> read/write access to your photo library in order to build a photo
> manager, without relying on signing.
>
> Which has meant that the web so far is unable to "compete with native"
> in those areas.
>
> / Jonas
>







Re: What I am missing

2014-11-19 Thread Michaela Merz

Perfect is the enemy of good. I understand the principles and problems
of cryptography. And in the same way we rely on TLS and its security
model today we would be able to put some trust into the same
architecture for signing script.

FYI: Here's how signing works for java applets: You need to get a
software signing key from a CA who will do some form of verification in
regard to your organization. You use that key to sign your code package.

Will this forever prevent the spread of malware? Of course not. Yes -
you can distribute malware if you get somebody's signing key into your
possession, or even buy a signature key and sign your malware yourself.
But as long as the user is made aware of the fact that the code has
permission to access the file system, he can decide for himself if he
wants to allow that or not. Much in the same way that he decides to
download a possibly malicious piece of software today. There will never
be absolute security - especially not for users who don't give a damn
about what they are doing. Should that prevent us from continuing the
evolution of the web environment? We might as well kill ourselves
because we are afraid to die ;)

One last thing: Most Android malware was spread on systems where owners
disabled the security of the system, e.g. by rooting their devices.

Michaela

> Suppose you get a piece of signed content, over whatever way it was
> delivered. Suppose also that this content you got has the ability to
> read all your private data, or reformat your machine. So it's
> basically about trust. You need to establish a secure channel of
> communication to obtain a public key that matches a signature, in such
> a way that an attacker's attempt to self-sign malicious content is
> foiled. And you need to have a way to discover (after having
> established that the entity is the one who was intended and that the
> signature is correct), that you indeed trust that entity.
>
> These are two very old problems in cryptography, and they cannot be
> solved by cryptography. There are various approaches to this problem
> in use today:
>
>   * TLS and its web of trust: The basic idea being that there is a
> hierarchy of signatories. It works like this. An entity provides a
> certificate for the connection, signing it with their private key.
> Since you cannot establish a connection without a public key that
> matches the private key, verifying the certificate is easy. This
> entity in turn, refers to another entity which provided the
> signature for that private key. They refer to another one, and so
> forth, until you arrive at the root. You implicitly trust root.
> This works, but it has some flaws. At the edge of the web, people
> are not allowed to self-sign, so they obtain their (pricey) key
> from the next tier up. But the next tier up can't go and bother
> the next tier up every time they need to provide a new set of keys
> to the edge. So they get blanket permission to self-sign, implying
> that it's possible for the next tier up to establish and maintain
> a trust relationship to them. As is easily demonstrable, this
> can, and often does, go wrong, where some CA gets compromised.
> This is always bad news to whomever obtained a certificate from
> them, because now a malicious party can pass themselves off as them.
>   * App-stores and trust royalty: This is really easy to describe, the
> app store you obtain something from signs the content, and you
> trust the app-store, and therefore you trust the content. This
> can, and often does, go wrong, as android/iOS malware amply
> demonstrates.
>
> TLS cannot work perfectly, because it is built on implied trust along
> the chain, and this can get compromised. App-stores cannot work
> perfectly because the ability to review content is quickly exceeded by
> the flood of content. Even if app-stores where provided with the full
> source, they would have no time to perform a proper review, and so
> time and time again malware slips through the net.
>
> You can have a technical solution for signing, and you still haven't
> solved any bit of how to trust a piece of content. About the only way
> that'd be remotely feasible is if you piggyback on an existing
> implementation of a trust mechanism/transport security layer, to
> deliver the signature. For instance, many websites that allow you to
> d/l an executable provide you with a checksum of the content. The idea
> being that if the page is served up over TLS, then you've established
> that the checksum is delivered by the entity, which is also supposed
> to deliver the content. However, this has a hidden trust model that
> established trust without a technical solution. It means a user
> assumes he trusts that website, because he arrived there of his own
> volition. The same cannot be said about side-channel delivered pieces
> of content that composite into a larger whole (like signed scripts).
> For instan
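As an aside, the checksum-over-TLS pattern mentioned at the end can at
least be checked client-side with today's Web Crypto API. A minimal
sketch, assuming the page publishes a SHA-256 digest next to the
download link (the URL and expected value below are placeholders):

    (async () => {
      // Fetch a file and compare it against a SHA-256 digest published
      // on the same TLS-served page.
      const resp = await fetch("https://example.com/downloads/tool.bin");
      const buf = await resp.arrayBuffer();
      const digest = await crypto.subtle.digest("SHA-256", buf);
      const hex = Array.from(new Uint8Array(digest))
        .map(b => b.toString(16).padStart(2, "0"))
        .join("");
      const expected = "…";   // the digest shown next to the download link
      console.log(hex === expected ? "checksum matches" : "checksum mismatch");
    })();

This only verifies integrity relative to whatever the page claims; the
trust question Florian raises is untouched by it.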

Re: What I am missing

2014-11-19 Thread Michaela Merz
Thank you Jonas. I was actually thinking about the security model of
FirefoxOS or Android apps. We write powerful "webapps" nowadays. And
with "webapps" I mean regular web pages with a lot of script/html5
functionality. The browsers are fast enough to do a variety of things:
from running a linux kernel, to playing dos-games,  doing crypto,
decoding and streaming mp3. I understand a browser to be an operating
system on top of an operating system. But the need to protect the user
is a problem if you want to go beyond what is possible today.

I am asking to consider a model where a signed script package notifies
a user about its origin and signer and may even ask the user for special
permissions like direct file system access or raw networking sockets or
anything else that would, for safety reasons, not be possible today.

The browser would remember the origin IP and the signature of the script
package and would re-ask for permission if something changes. It would
refuse to run if the signature isn't valid or has expired.
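Nothing like this is specified anywhere, but the verification step
itself can be expressed with today's Web Crypto API. A minimal sketch,
assuming a detached signature delivered with the package and a signer
public key that the browser or user has already pinned (the function
name and key algorithm are illustrative only, not a proposed API):

    // Hypothetical helper -- pkgBytes and signatureBytes are ArrayBuffers,
    // publicKey is a CryptoKey imported elsewhere (e.g. RSASSA-PKCS1-v1_5 / SHA-256).
    async function packageSignatureIsValid(pkgBytes, signatureBytes, publicKey) {
      return crypto.subtle.verify(
        { name: "RSASSA-PKCS1-v1_5" },
        publicKey,
        signatureBytes,
        pkgBytes
      );
    }

The parts this snippet does not touch - key distribution, revocation and
the permission UI - are exactly the parts the rest of this thread argues
about.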

It wouldn't change a thing in regard to updates. You would just have to
re-sign your code before you make it available. I used to work a lot
with java applets (signed and un-signed) in the old days, and I am working
with android apps today. Signing is just another step in the work chain.

Signed code is the missing last element in the CSP/TLS environment.
Let's make the browser into something that can truly be seen as an
alternative operating system on top of an operating system.

Michaela

On 11/19/2014 08:33 AM, Jonas Sicking wrote:
> On Tue, Nov 18, 2014 at 7:40 PM, Boris Zbarsky  wrote:
>> On 11/18/14, 10:26 PM, Michaela Merz wrote:
>>> First: We need signed script code.
>> For what it's worth, Gecko supported this for a while.  See
>> <http://www-archive.mozilla.org/projects/security/components/signed-scripts.html>.
>> In practice, people didn't really use it, and it made the security model a
>> _lot_ more complicated and hard to reason about, so the feature was dropped.
>>
>> It would be good to understand how proposals along these lines differ from
>> what's already been tried and failed.
> The way we did script signing back then was nutty in several ways. The
> signing we do in FirefoxOS is *much* simpler. Simple enough that no
> one has complained about the complexity that it has added to Gecko.
>
> Sadly, enhanced security models that use signing by a trusted party
> inherently lose a lot of the advantages of the web. It means that
> you can't publish a new version of your website by simply uploading
> files to your webserver whenever you want. And it means that you can't
> generate the script and markup that make up your website dynamically
> on your webserver.
>
> So I'm by no means arguing that FirefoxOS has the problem of signing solved.
>
> Unfortunately, no one has been able to solve the problem of how to
> grant web content access to capabilities like raw TCP or UDP sockets
> in order to access legacy hardware and protocols, or how to get
> read/write access to your photo library in order to build a photo
> manager, without relying on signing.
>
> Which has meant that the web so far is unable to "compete with native"
> in those areas.
>
> / Jonas
>





Re: What I am missing

2014-11-18 Thread Michaela Merz

TLS doesn't protect you against code that has been altered server-side
without the signer's consent. Signing would alert the user if unsigned
updates were made available.

Ajax downloads still require a download link (with the blob URL) to be
displayed, requiring an additional click. The user clicks download, ajax
fetches the data and creates a blob URL as the link's target, which the
user then has to click to 'copy' the blob onto the local drive. It would
be better to be able to skip that final step.
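For reference, the pattern being described looks roughly like this today
(a minimal sketch; the URL and filename are placeholders):

    const xhr = new XMLHttpRequest();
    xhr.open("GET", "/files/report.pdf");
    xhr.responseType = "blob";
    xhr.onload = () => {
      const link = document.createElement("a");
      link.href = URL.createObjectURL(xhr.response); // blob: URL for the fetched data
      link.download = "report.pdf";                  // suggested filename for saving
      link.textContent = "Save report.pdf";
      document.body.appendChild(link);               // the user still has to click this
    };
    xhr.send();

Some browsers also honor a programmatic link.click() on an anchor with
the download attribute, which removes the extra click, but support for
that has been uneven.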

In regard to accept: I wasn't aware of the fact that I can accept a
socket on port 80 to serve an HTTP session. You're saying I could with
what's available today?

Michaela



On 11/19/2014 06:34 AM, Florian Bösch wrote:
> On Wed, Nov 19, 2014 at 4:26 AM, Michaela Merz
> <michaela.m...@hermetos.com> wrote:
>
> First: We need signed script code. We are doing a lot of stuff with
> script - we could safely do even more, if we would be able to safely
> deliver script that has some kind of a trust model.
>
> TLS exists.
>  
>
> I am thinking about
> signed JAR files - just like we did with java applets not too long
> ago.
> Maybe as an extension to the CSP enviroment .. and a nice frame around
> the browser telling the user that the site is providing trusted /
> signed
> code.
>
> Which is different than TLS how?
>  
>
> Signed code could allow more openness, like true full screen, 
>
> Fullscreen is possible today:
> https://developer.mozilla.org/en-US/docs/Web/Guide/API/DOM/Using_full_screen_mode
>  
>
> or simpler ajax downloads.
>
> Simpler how?
>  
>
> Second: It would be great to finally be able to accept incoming
> connections.
>
> WebRTC allows the browser to accept incoming connections. The WebRTC
> data channel covers both TCP and UDP connectivity.
>  
>
> There's access to cameras and microphones - why not allow
> us the ability to code servers in the browser?
>
> You can. There's even P2P overlay networks being done with WebRTC.
> Although they're mostly hampered by the existing support for WebRTC
> data channels, which isn't great yet.
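To make Florian's point about incoming connections concrete: with
WebRTC, one peer can accept a data channel opened by another. A minimal
in-page loopback sketch (no signaling server; both peers live in the
same script, which is an artificial setup used only for illustration):

    const a = new RTCPeerConnection();
    const b = new RTCPeerConnection();

    // In-page "signaling": hand ICE candidates straight to the other peer.
    a.onicecandidate = e => { if (e.candidate) b.addIceCandidate(e.candidate); };
    b.onicecandidate = e => { if (e.candidate) a.addIceCandidate(e.candidate); };

    // Peer b effectively "listens": it accepts whatever channel a opens.
    b.ondatachannel = e => {
      e.channel.onmessage = msg => console.log("b received:", msg.data);
    };

    // The channel must exist before the offer so it is included in the SDP.
    const channel = a.createDataChannel("demo");
    channel.onopen = () => channel.send("hello from a");

    (async () => {
      const offer = await a.createOffer();
      await a.setLocalDescription(offer);
      await b.setRemoteDescription(offer);
      const answer = await b.createAnswer();
      await b.setLocalDescription(answer);
      await a.setRemoteDescription(answer);
    })();

In a real deployment the offer/answer and ICE candidates would travel
over some signaling channel (for example a WebSocket), which is where
the comparison with a conventional listening socket breaks down.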



Re: What I am missing

2014-11-18 Thread Michaela Merz
Well .. it would be an "all scripts signed" or "no script signed" kind of
a deal. You can download malicious code everywhere - not only as
scripts. Signed code doesn't protect against malicious or bad code. It
only guarantees that the code is actually from the certificate owner
.. and has not been altered without the signer's consent.

Michaela
 


On 11/19/2014 06:14 AM, Marc Fawzi wrote:
> "Allowing this script to run may open you to all kinds of malicious
> attacks by 3rd parties not associated with the party whom you're
> trusting." 
>
> If I give App XYZ super power to do anything, and XYZ gets
> compromised/hacked then I'll be open to all sorts of attacks.
>
> It's not an issue of party A trusting party B. It's an issue of
> trusting that party B has no security holes in their app whatsoever,
> and that is one of the hardest things to guarantee.
>
>
> On Tue, Nov 18, 2014 at 8:00 PM, Michaela Merz
> <michaela.m...@hermetos.com> wrote:
>
>
> Yes Boris - I know. As long as it doesn't have advantages for the user
> or the developer - why bother with it? If signed code would allow
> special features - like true fullscreen or direct file access  - it
> would make sense. Signed code would make script much more resistant to
> manipulation and therefore would help in environments where trust
> and/or
> security is important.
>
> We use script for much, much more than we did just a year or so ago.
>
> Michaela
>
>
>
> On 11/19/2014 04:40 AM, Boris Zbarsky wrote:
> > On 11/18/14, 10:26 PM, Michaela Merz wrote:
> >> First: We need signed script code.
> >
> > For what it's worth, Gecko supported this for a while.  See
> >
> 
> <http://www-archive.mozilla.org/projects/security/components/signed-scripts.html>.
> >  In practice, people didn't really use it, and it made the security
> > model a _lot_ more complicated and hard to reason about, so the
> > feature was dropped.
> >
> > It would be good to understand how proposals along these lines
> differ
> > from what's already been tried and failed.
> >
> > -Boris
> >
>
>
>
>



Re: What I am missing

2014-11-18 Thread Michaela Merz

Yes Boris - I know. As long as it doesn't have advantages for the user
or the developer - why bother with it? If signed code would allow
special features - like true fullscreen or direct file access  - it
would make sense. Signed code would make script much more resistant to
manipulation and therefore would help in environments where trust and/or
security is important.

We use script for much, much more than we did just a year or so ago.

Michaela



On 11/19/2014 04:40 AM, Boris Zbarsky wrote:
> On 11/18/14, 10:26 PM, Michaela Merz wrote:
>> First: We need signed script code.
>
> For what it's worth, Gecko supported this for a while.  See
> <http://www-archive.mozilla.org/projects/security/components/signed-scripts.html>.
>  In practice, people didn't really use it, and it made the security
> model a _lot_ more complicated and hard to reason about, so the
> feature was dropped.
>
> It would be good to understand how proposals along these lines differ
> from what's already been tried and failed.
>
> -Boris
>





What I am missing

2014-11-18 Thread Michaela Merz

Hi there:

Though I am not part of the browser developing community, I have been
doing web development since before the days of Marc Andreessen - when we
had neither script nor even text flowing around images. So you may
understand how much I enjoy what you are doing and that I can't wait for
new functionality to emerge. WebRTC, WebSockets, file systems, IndexedDB -
you name it - cool stuff that enables folks like me to change the web
into something truly awesome. Thanks a lot for your work.

But .. there's something missing. Actually: There are two things
missing. Maybe .. just maybe .. I am able to convince you to at least
think about what I am suggesting.

First: We need signed script code. We are doing a lot of stuff with
script - we could safely do even more if we were able to safely
deliver script that has some kind of a trust model. I am thinking about
signed JAR files - just like we did with java applets not too long ago.
Maybe as an extension to the CSP environment .. and a nice frame around
the browser telling the user that the site is providing trusted / signed
code. Signed code could allow more openness, like true full screen, or
simpler ajax downloads.

Second: It would be great to finally be able to accept incoming
connections. There's access to cameras and microphones - why not allow
us the ability to code servers in the browser? Maybe in combination with
my suggestion above? Websites would be able to offer WebDAV simply by
'mounting' the browser (no pun intended) and the browser would do
caching/forwarding/encrypting .. Imagine being able to directly access
files on a web site without a web download.

I could go on for another hour or two. But I don't want to sound
ungrateful.

Thank you for reading this.

Michaela






 





Re: ZIP archive API?

2013-05-06 Thread Michaela Merz
I second that. Thanks Florian.




On 05/03/2013 02:52 PM, Florian Bösch wrote:
> I'm interested in a JS API that does the following:
>
> Unpacking:
> - Receive an archive from a Dataurl, Blob, URL object, File (as in
> filesystem API) or Arraybuffer
> - List its content and metadata
> - Unpack members to Dataurl, Blob, URL object, File or Arraybuffer
>
> Packing:
> - Create an archive
> - Put in members passing a Dataurl, Blob, URL object, File or Arraybuffer
> - Serialize archive to Dataurl, Blob, URL object, File or Arraybuffer
>
> To avoid the whole worker/proxy thing and to allow authors to
> selectively choose how they want to handle the data, I'd like to see
> synchronous and asynchronous versions of each. I'd make synchronicity
> an argument/flag or something to avoid API clutter like packSync,
> packAsync, writeSync, writeAsync, and rather like write(data,
> callback|boolean).
>
> - Python's zipfile API is ok, except the getinfo/setinfo stuff is a bit
> over the top: http://docs.python.org/3/library/zipfile.html
> - Python's tarfile API is less cluttered and easier to
> use: http://docs.python.org/3/library/tarfile.html
> - zip.js isn't really usable as it doesn't support the full range of
> types (Dataurl, Blob, URL object, File or Arraybuffer) and for
> asynchronous operation needs to rely on a worker, which is bothersome
> to setup: http://stuk.github.io/jszip/
>
> My own implementation of the tar format only targets array buffers and
> works synchronously, as in:
>
> var archive = new TarFile(arraybuffer);
> var memberArrayBuffer = archive.get('filename');
>
>
>
> On Fri, May 3, 2013 at 2:37 PM, Anne van Kesteren wrote:
>
> On Thu, May 2, 2013 at 1:15 AM, Paul Bakaus wrote:
> > Still waiting for it as well. I think it'd be very useful to
> > transfer sets of assets etc.
>
> Do you have anything in particular you'd like to see happen first?
> It's pretty clear we should expose more here, but as with all things
> we should do it in baby steps.
>
>
> --
> http://annevankesteren.nl/
>
>
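
To make the proposed shape more concrete, usage might look something
like the following. This is purely hypothetical - no such Archive
constructor exists anywhere; it only restates the unpack/pack operations
and the write(data, callback|boolean) sync/async convention described
above, and zipBlob is assumed to be a Blob obtained elsewhere:

    // Unpacking -- the constructor would accept a Blob, ArrayBuffer, File or URL object.
    const archive = new Archive(zipBlob);
    console.log(archive.list());                           // member names and metadata
    archive.get("readme.txt", blob => console.log(blob));  // async via callback
    const data = archive.get("readme.txt", true);          // sync via flag

    // Packing -- members can likewise be added from any of the supported types.
    const out = new Archive();
    out.add("data.json", new Blob(['{"a":1}']));
    out.write("zip", blob => console.log(blob.size));      // serialize asynchronously to a Blob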


