Hi Tom,

My answers are below; this is a nice discussion :-)

It will probably take me a couple of emails to cover this topic; I love your long email!

On 2/25/12 1:39 AM, Tom Ritter wrote:
[snip]
> I'm critical of javascript cryptography because it's _dangerous_.  I'm
> not unconvinced it's possible to do 'securely' (for most values of
> securely) - I am however made way more nervous by claims like "Keep it
> strong and resist objectively against claims and critics, as for many
> years JS has been considered crap-stuff, but in the HTML5 world things
> are changing and crypto-veterans must update their vision!".
> Crypto-veterans usually have a reason for their skepticism.

Well, what I just realized is that in most discussions you engage in, the
starting point on JS crypto is "it doesn't work".

While there is still a lot of work and research to do, most problems can
be fixed and managed; many people just "feel" that it's not possible to
do JS crypto securely.

A statement like "JS crypto doesn't work" is an absolute statement
repeated by many people.
Meanwhile, we all realize here that, despite the many obstacles to doing
it securely, several browser improvements have recently been introduced
and more will come this year, in a process of continuous improvement.

So my point in saying "resist objectively" is that it's a matter of
stimulating research and discussing all the elements of a specific threat
scenario, without letting the "general feeling" that "JS crypto doesn't
work" act, as it very often does, as a limit on that research.

> I tried to create a taxonomy of problems that javascript cryptography
> must overcome:
That's cool and very useful; we should probably discuss it and dump the
resulting knowledge on the OpenPGP.js wiki.

I would also invite you to consider, within the "taxonomy of problems",
the possible deployment models of JS crypto:

- Mobile application
JavaScript can be used to build mobile applications (e.g. with PhoneGap),
where the code is delivered packaged inside a normal application.

- Self-contained "offline" HTML5 application
A single self-contained HTML5 application can be used without necessarily
being downloaded from the web every time.
These can be online (interacting with external services via REST) or
offline (interacting only with local data).
Two examples are the "online" TiddlyWiki (http://www.tiddlywiki.com/) and
the "offline" SelfDecryptEmail (http://leemon.com/crypto/SelfDecrypt.html).

- Pure web application (web-delivered)
That's the most common context we think about, and the one for which most
security risks are considered/perceived.

- Browser plug-in application
That's the option typically provided and suggested to give the user a
verification method for the JS code (code verification and/or code
delivery); a minimal sketch of this idea follows below.

In each of these JS-crypto deployment models the taxonomy of problems
behaves differently, so I would suggest also including those elements in
the resulting matrix.
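
As an illustration of the "code verification" point above, here is a
minimal, hypothetical sketch of how a plug-in or a packaged application
could check a web-delivered script against a digest pinned out of band
before executing it. It uses the W3C Web Crypto API; the names
(EXPECTED_SHA256, loadVerifiedScript), the URL and the placeholder digest
are my own illustrative assumptions, not an OpenPGP.js API.

// Hypothetical sketch: verify a web-delivered script against a pinned digest
// before running it. EXPECTED_SHA256 must be shipped with the plug-in or
// packaged app, not fetched from the same server as the script.
const EXPECTED_SHA256 = "PINNED_HEX_DIGEST_GOES_HERE"; // placeholder value

async function sha256Hex(buffer) {
  // Web Crypto API: hash the raw bytes and render them as lowercase hex.
  const digest = await crypto.subtle.digest("SHA-256", buffer);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

async function loadVerifiedScript(url) {
  // Fetch the script as raw bytes, check the digest, and only then run it.
  const response = await fetch(url);
  const bytes = await response.arrayBuffer();
  const actual = await sha256Hex(bytes);
  if (actual !== EXPECTED_SHA256) {
    throw new Error("Script digest mismatch: refusing to run " + url);
  }
  // Digest matched: inject the verified code into the page.
  const script = document.createElement("script");
  script.textContent = new TextDecoder("utf-8").decode(bytes);
  document.head.appendChild(script);
}

loadVerifiedScript("https://example.org/openpgp.min.js").catch(console.error);

Of course this only moves the trust anchor from the web server to whatever
delivered the pinned digest, which is exactly why the deployment model
matters when evaluating the taxonomy of problems.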

> I suggest using this as a baseline for the sanity check.  (It may not
> cover everything though!) For each item: how can OpenPGP.js mitigate
> the risk in code and in deployment guidelines.  What is the threat
> model for each: do you attempt to defend against it, or not.

I would add, as an additional element to be considered, *the context of use*.

While in some contexts you may need a "0/1" approach (secure or not
secure), in several others you may need a "0.......1" approach (a
gradient of security ranging from very secure to not secure) where a
value somewhere in the middle could be good enough.

In the crypto and surveillance arms race, the ability to use tactical
encryption and communication technologies that can be deployed,
distributed and used easily, rapidly and in the field is a very important
element.

This also applies to the military field, where you don't always need
something 100% secure; instead you trade security for flexibility of use
according to your tactical requirements.

Flexibility of use may be required precisely because of those "tactical
requirements". For example, if you're organizing protests in Syria, time
and tool availability are important factors; you may not care about being
100% secure because you have time and resource constraints.

In that case the "0/1" (secure / not secure) approach doesn't work well,
and the added security provided even by "wrongly used JS crypto" may
still be of extremely high value.

So I would also suggest considering this real-world scenario, where
"tactical use" may require trading some security in exchange for
flexibility, while the other options are simply "less valid in the
context" (for example, where downloading GPG.exe lets your enemy infect
it with a trojan, which may represent an even higher risk).

In our matrix of considerations, after having assessed each "problem" for
each "specific deployment method", we would need to obtain as a result
the "residual risk" for that specific deployment method.

A post-processing analysis of the "residual risk", considered in the
"context of use", can then tell us, for example, whether it is acceptable
or not (considering the alternatives).
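
To make this post-processing idea concrete, here is a minimal sketch of
how the "problems x deployment methods" matrix and the per-context
acceptance could be represented; all problem names, risk scores and
tolerance thresholds below are purely illustrative assumptions on my
side, not agreed values.

// Hypothetical residual-risk matrix: deployment method -> problem -> score.
// Scores are on a 0 (fully mitigated) .. 1 (unmitigated) scale; the numbers
// are illustrative placeholders, not a real assessment.
const residualRisk = {
  "web-delivered":   { "code substitution": 0.9, "weak RNG": 0.4, "key storage": 0.7 },
  "browser plug-in": { "code substitution": 0.3, "weak RNG": 0.4, "key storage": 0.5 },
  "packaged mobile": { "code substitution": 0.2, "weak RNG": 0.3, "key storage": 0.3 },
};

// Each context of use tolerates a different maximum residual risk
// ("0.......1" rather than "0/1"): a tactical, time-constrained scenario
// accepts more risk than a long-term high-security one.
const contexts = {
  "tactical / time-constrained": 0.8,
  "long-term high security": 0.3,
};

// Post-processing: for a given context, list the deployment methods whose
// worst residual risk stays within the tolerated level.
function acceptableDeployments(tolerance) {
  return Object.keys(residualRisk).filter((method) => {
    const worst = Math.max(...Object.values(residualRisk[method]));
    return worst <= tolerance;
  });
}

for (const [context, tolerance] of Object.entries(contexts)) {
  console.log(context, "->", acceptableDeployments(tolerance));
}

With these placeholder numbers the tactical context would accept the
plug-in and packaged deployments, while the high-security context would
accept only the packaged one; that is exactly the kind of answer the
matrix should give us.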

> There's a lot to think about.  A sanity check is a very good idea.
Yeah, it's quite a complicated environment.
> 
> -tom

-naif

P.S. The Cryptocat (JS-implemented) chat encryption protocol has been
published: https://crypto.cat/about/spec-rev1.pdf