(not an S/MIME thread, I promise)

tl;dr summary: threshold signed software updates are useful, you should 
consider desktop java for your next crypto project (yes really)


-----


For people who live in Switzerland, this evening I will be giving a talk at 
Rackspace Zürich about the development of Lighthouse, a Bitcoin crowdfunding 
wallet app:

http://www.meetup.com/Bitcoin-Meetup-Switzerland/events/219667761/

If you’re local, why not come along? I’ll upload the slides tomorrow.

Bitcoin wallets and e2e messaging apps have some things in common - they both 
involve key management and end to end usage of crypto technologies. 
Decentralisation is an attractive property in both areas. So I think it’s worth 
sharing experiences, as there is a lot of overlap.

Online updates in crypto apps

Modern users expect silent and continuous improvement in their software. They 
don’t want to manually approve updates and many won’t do so even when prompted. 
One reason for people to prefer web apps is that updates are invisible. It’s no 
surprise that Google chose the web model for its own client apps. Chrome 
technically has versions, but they are never shown to the user and the product 
evolves silently.

When you combine a threat model that includes provider coercion with the desire 
for modern UX, it’s misleading to claim that client-side key management is 
better than hosted webmail, because in both cases the provider can still 
obtain your data. By simply pushing an online update that steals the unencrypted data, the
entire scheme is undone. HushMail is an example of this principle in practice. 
They advertised encrypted email, but when they were served with a court order 
as part of a drugs investigation they were forced to back door their own 
software - and it worked:

http://www.wired.com/2007/11/encrypted-e-mai/

Being open source doesn’t automatically fix this. Whilst a tiny minority of 
enthusiastic and technical users might compare the source code to the downloads 
once, they are unlikely to do so on a rolling basis, year after year, all for 
free. And they probably won’t audit it. And there is no requirement that every 
user get the same software as the rest. A targeted back door could be pushed 
only to the targeted users.

When it comes to information, dodgy governments might be the primary threat, but 
for Bitcoin we also have to worry about plain old hackers and thieves. So far there
has only been one case where money was taken from a wallet through a bogus 
software update. The maker of StrongCoin discovered that hacked/stolen coins 
were being held in a wallet on his service. It was a “web wallet” in which the keys
were managed client side, but the software of course was updated every time the 
browser refreshed the page. He pushed an update that automatically grabbed the 
stolen money, so he could return it to the rightful owner. A modern day Robin 
Hood. It won’t surprise you to hear that this was controversial.

https://bitcoinmagazine.com/4273/ozcoin-hacked-stolen-funds-seized-and-returned-by-strongcoin/


Threshold signed updates in Lighthouse

When writing my own wallet I thought about this issue a lot. I don’t want to 
find I got hacked and someone pushed a steal-all-the-coins software update. So 
I wrote a framework that supports:

- Reproducible builds
- Threshold multi-signature signing of delta updates
- The ability for users to upgrade/downgrade at will, from within the app UI
- An update UI that can be as silent or as noisy as the app developer wishes

… and the key thing is, doing this is cheap and takes little effort. The 
framework has already been adopted by a few other projects.
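
To make the idea concrete, here is a rough sketch of what the k-of-n check 
boils down to, using the standard JCA Signature API. This is illustrative 
only, not the framework’s actual code, and the class/method names are made up 
for the example: an update is accepted only if at least `threshold` of the 
known signer keys have signed its bytes.

    import java.security.PublicKey;
    import java.security.Signature;
    import java.util.List;

    public class ThresholdVerifier {
        /**
         * Accepts an update only if at least `threshold` distinct trusted
         * signers have produced a valid signature over the update bytes.
         * Invalid or malformed signatures are ignored rather than fatal.
         */
        static boolean verifyUpdate(byte[] updateBytes,
                                    List<byte[]> signatures,
                                    List<PublicKey> trustedSigners,
                                    int threshold) throws Exception {
            int validCount = 0;
            for (PublicKey signer : trustedSigners) {
                for (byte[] sig : signatures) {
                    Signature verifier = Signature.getInstance("SHA256withECDSA");
                    verifier.initVerify(signer);
                    verifier.update(updateBytes);
                    try {
                        if (verifier.verify(sig)) {
                            validCount++;   // count each signer at most once
                            break;
                        }
                    } catch (Exception malformed) {
                        // not a valid signature for this key; keep looking
                    }
                }
            }
            return validCount >= threshold;
        }
    }

The point is that no single signing key (including mine) is enough to push an 
update on its own.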

I think the model whereby users place faith in a handful of trusted auditors is 
going to work a lot better than the “open source and pray” approach that’s 
standard today. Saying “the source is open so we are trustworthy” is 
meaningless to non programmers. Saying “every version you run is audited by 
respected companies spread across America, Europe and Russia” allows people to 
use their human knowledge of reputation and politics to evaluate security 
claims, and have ongoing confidence in them. This model can also be compatible 
with proprietary software.

Unfortunately I haven’t found someone who is willing to do audit work for 
Lighthouse yet. I hope if the app gets popular enough I’ll find someone. As it 
involves real ongoing work though, it might require incentivisation.

Implementing crypto apps using the JVM

Lighthouse, like a few other modern Bitcoin apps, is internally a desktop Java 
app. This is not as crazy as it sounds. Java has changed a lot in recent years 
and I think it’s now a highly suitable platform for development of end-to-end 
crypto apps …. possibly more so than HTML5/Chrome.

A few points in its favour:

- The latest versions can produce self-contained packages for each platform 
(exe/msi on Windows, DMG on MacOS, DEB/RPM or tarball on Linux). The user does 
not need to have the JRE installed and won’t even know Java is involved. This 
turns the JVM into a big runtime library and removes most of the deployment 
headaches.
- There is a totally new, written-from-scratch GUI toolkit that is somewhat 
similar to Cocoa. The UI is a GPU accelerated scene graph using OpenGL or 
Direct3D as appropriate. It’s easy to do fancy animations, fades, blurs, etc. 
It has a full data binding framework built in (there’s a small sketch of this 
below).
- The new UI toolkit is inspired by web technologies. You style the UI with a 
dialect of CSS and lay it out with a vaguely HTML-ish XML dialect, though there 
is a graphical designer tool as well. You can use web fonts like FontAwesome 
and can easily imitate the look of frameworks like Bootstrap (and I do). You 
can implement your UI logic in JavaScript if you like. You can embed videos or 
a full WebKit if you need real HTML5.
- You are not restricted to using Java, which by now is an old and mediocre 
language. You can use almost any modern language that isn’t C++ or Go, like 
Python, Ruby, JavaScript (with performance close to V8), mixed OOP/functional 
languages like Scala or Kotlin …. there are even dialects of Haskell, Lisp and 
Erlang available!
- Backend code is reusable immediately on Android, and via a translation layer 
like RoboVM or J2ObjC on iOS. The Google Inbox product shares >50% of its code 
across all platforms this way.

... and of course, threshold signed software updates.

One advantage to doing things this way is you don’t spend much time fighting 
the framework, because it was designed for apps from the ground up. You also 
get access to mature crypto libraries like Bouncy Castle.
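
To give a flavour of the data binding mentioned in the list above, here is a 
tiny JavaFX example (class and file names are illustrative, not taken from 
Lighthouse): a label is bound so its text tracks a text field automatically, 
with no listener plumbing.

    import javafx.application.Application;
    import javafx.scene.Scene;
    import javafx.scene.control.Label;
    import javafx.scene.control.TextField;
    import javafx.scene.layout.VBox;
    import javafx.stage.Stage;

    public class BindingDemo extends Application {
        @Override
        public void start(Stage stage) {
            TextField address = new TextField();
            Label preview = new Label();

            // Data binding: the label's text now follows the text field.
            preview.textProperty().bind(address.textProperty());

            VBox root = new VBox(10, address, preview);
            Scene scene = new Scene(root, 400, 120);

            // Styling uses a CSS dialect; "style.css" is a made-up file name.
            // scene.getStylesheets().add(getClass().getResource("style.css").toExternalForm());

            stage.setScene(scene);
            stage.show();
        }

        public static void main(String[] args) {
            launch(args);
        }
    }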

There’s a video of the app on the website here:

https://www.vinumeris.com/lighthouse

Sandboxing for minimizing audit overhead

One issue with accountable crypto is keeping the auditing cost manageable. 
One-off audits are incredibly expensive. Rolling audits of every change going 
into a codebase to catch developer backdoors are prohibitively expensive.

For apps built on the JVM there is an interesting possibility. We can use the 
platform’s sandboxing features to isolate code modules from each other, meaning 
that the auditor can focus their effort on reviewing changes to the core 
security code (and the sandbox itself of course, but that should rarely change).

For example in an email app, the compose UI and encryption module could be 
sandboxed from things like the address book code, app preferences, code that 
speaks IMAP etc. If you know that malicious code in the IMAP parser can’t 
access the user’s private keys, you can refocus your audit elsewhere.
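
As a rough sketch of the mechanism (illustrative only, using the standard 
SecurityManager/AccessController machinery rather than any real app’s code, 
and with a made-up "wallet.dat" file name): code evaluated under a restricted 
access control context simply cannot touch the file system, regardless of 
what the rest of the process is allowed to do.

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.security.AccessControlContext;
    import java.security.AccessController;
    import java.security.CodeSource;
    import java.security.Permissions;
    import java.security.PrivilegedAction;
    import java.security.ProtectionDomain;
    import java.security.cert.Certificate;

    public class SandboxDemo {
        public static void main(String[] args) {
            // Permission checks only happen once a SecurityManager is installed.
            System.setSecurityManager(new SecurityManager());

            // A protection domain with an empty permission set: code evaluated
            // under it cannot open files or sockets, use reflection, etc.
            ProtectionDomain noPermissions = new ProtectionDomain(
                    new CodeSource(null, (Certificate[]) null), new Permissions());
            AccessControlContext restricted =
                    new AccessControlContext(new ProtectionDomain[] { noPermissions });

            AccessController.doPrivileged((PrivilegedAction<Void>) () -> {
                // Pretend this lambda is the untrusted IMAP parser: trying to
                // read the wallet now fails with a SecurityException.
                try (FileInputStream in = new FileInputStream("wallet.dat")) {
                    System.out.println("sandbox failed: file was readable");
                } catch (SecurityException expected) {
                    System.out.println("sandboxed code was denied file access");
                } catch (IOException e) {
                    System.out.println("access was permitted (no wallet file present)");
                }
                return null;
            }, restricted);
        }
    }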

But wait! Isn’t the JVM sandbox a famously useless piece of Swiss cheese?!?

Yes, but also no. It certainly was riddled with exploits and zero days, back in 
2012/2013. But Oracle has spent enormous sums of money on auditing the JVM in 
recent years. In 2014 there were no zero-days at all. There were sandbox 
escapes, and I expect them to continue surfacing, but they were all found by 
whitehat auditing efforts. What’s more, many of those exploits were via modules 
like movie playback or audio handling - things that a typical crypto sandbox 
would just lock off access to entirely.

So it’s starting to look like, in practice, as long as the VM itself is kept up 
to date and the sandboxed code isn’t given access to the full range of APIs, 
the sandbox would be strong enough that a typical software company wouldn’t be 
able to break out of it even under duress. The cost of finding a working 
exploit would be too high.

Type safety and crypto code

The trend towards usage of JavaScript crypto worries me. We’ve seen this cause 
big problems in the Bitcoin world, with several private key compromises 
directly caused by JavaScript’s weak type safety. I wrote an article on this 
problem here:

https://medium.com/@octskyward/type-safety-and-rngs-40e3ec71ab3a

Using stricter, more type safe languages can help avoid real security exploits 
and (imo) the benefits are so strong that it’s worth avoiding web based app 
stacks just for this reason.
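
As a tiny illustration (not taken from the article, and with made-up names) of 
what static typing buys you here: the RNG parameter below is a concrete type, 
so you simply cannot pass a string, a number or some other object where a 
CSPRNG is expected - the mistake is rejected at compile time rather than 
silently coerced at runtime.

    import java.security.SecureRandom;

    public class KeyGen {
        // The RNG parameter is statically typed: nothing that isn't a
        // SecureRandom can be passed in by accident.
        static byte[] newPrivateKey(SecureRandom rng) {
            byte[] key = new byte[32];
            rng.nextBytes(key);
            return key;
        }

        public static void main(String[] args) {
            byte[] key = newPrivateKey(new SecureRandom());
            System.out.println("generated " + key.length + " key bytes");

            // In a dynamically typed language the equivalent of the line below
            // might silently "work" and produce predictable keys; here it
            // simply does not compile:
            // byte[] bad = newPrivateKey("not a random source");
        }
    }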

