Re: [cryptography] Meet the groundbreaking new encryption app set to revolutionize privacy...

2013-02-09 Thread Andreas Bürki


Am 09.02.2013 08:06, schrieb Jon Callas:

 But I think there has to be fun with security. We talk a lot about how 
 security has to be usable, but I think fun is up there, too. If it's fun, 
 people will use it. They make their mistakes cheaply, and in a reasonably 
 safe environment. Most of all, they'll actually use it. That's been the 
 challenge of the last couple decades, getting people to use it. People use 
 things that they play with. I think thus that play is part of security, too. 
 What's groundbreaking in what we're doing is that we're having fun and 
 encouraging others to do so, too.

+1 - A mixture of fun *and* educational elements would attract my
interest. Definitely.


hugi (a *dumb average User* - No joke)


___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography

-- 

Andreas Bürki

abue...@anidor.com
S/MIME certificate - SHA1 fingerprint:
ED:A5:F3:60:70:8B:4C:16:44:18:96:AE:67:B9:CA:77:AE:DA:83:11
GnuPG - GPG fingerprint:
5DA7 5F48 25BD D2D7 E488 05DF 5A99 A321 7E42 0227





Re: [cryptography] Meet the groundbreaking new encryption app set to revolutionize privacy...

2013-02-08 Thread ianG

On 7/02/13 23:56 PM, Thierry Moreau wrote:

ianG wrote:


[Hushmail design]  isn't
perfect but it was a whole lot better than futzing around with OpenPGP
keys and manual decrypting.  And it was the latter 'risk' view that
won, Hushmail filled that niche between the hard core pgp community,
and the people who did business and needed an easy tool.

Don't be suspicious, be curious -- this is where security is at.
Human rights reporters already put their life on the line.  Your
mission is not to protect their life absolutely,


One design aspect seems missing from the high-level discussion: how do
you define the security mechanism failure mode? You have basically two
options: connect with an insecure protocol, or do not connect at all.

If it's a life-preserving application, this question should be addressed
explicitly. A fail safe system may be either way, but stakeholders
should know which way. Airplane pilots are trained according to the
failure mode of each aircraft subsystem. E.g. if two-way radio fails,
the pilot may remain confident (from an indication in the cockpit) that
the air traffic controller (ATC) still sees the aircraft identifier on
the radar (see Wikipedia entry for transponder) during the emergency
landing. Thus the decision to land at the major airport (instead of a
secondary airport with less traffic in conflict but lower grade
facilities) is taken based on the fail-safe property of the
aircraft-to-ATC communications subsystem.



A fine puzzle.  From those assumptions -- training, indicators, 
redundancy -- here's my answer to the question.


In typical Internet user security situations, those things either don't 
exist or aren't reliable.  Consider users with SSL, padlocks, etc. 
Faced with difficulty in ensuring the efficacy of these assumptions, 
what does a rational designer do?  To my mind, it is this:  there is 
only one mode, and it is secure.  If circumstances are that your packets 
(secure, monocular) are not getting thru, then there is no connection.
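iang's "there is only one mode, and it is secure" stance is a fail-closed connection policy. A minimal sketch in Python (hypothetical function name, not Hushmail or Silent Circle code) makes the point: the only outcomes are a verified TLS channel or an exception, never a plaintext fallback.

```python
import socket
import ssl

def connect_fail_closed(host: str, port: int = 443,
                        timeout: float = 10.0) -> ssl.SSLSocket:
    """Open a verified TLS connection, or no connection at all.

    There is deliberately no plaintext fallback: if the secure
    handshake cannot complete, the caller gets an exception,
    never an insecure channel.
    """
    context = ssl.create_default_context()  # verifies cert chain + hostname
    raw = socket.create_connection((host, port), timeout=timeout)
    try:
        return context.wrap_socket(raw, server_hostname=host)
    except ssl.SSLError:
        raw.close()
        raise  # fail closed: surface the failure, never downgrade
```

The design choice is exactly the one under discussion: there is no code path that yields an insecure-but-connected state, so there is nothing for an "insecurity indicator" to indicate.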


Another way of looking at this is to ask how the indication is driven. 
If it is possible to show a good indication of potential insecurity, why 
isn't it possible to fix the problem?  In security protocols, we 
generally strive to fix everything we can, so that the model is perfect. 
Typically, our society is cannibalistic and seizes on any weakness as 
a chance to feed.  Only a complete security model is any good in our 
market.  Whilst very annoying at times, it does rather stress that we do 
not have a good understanding of how to deliver a half service.


my 2c.

iang



Re: [cryptography] Meet the groundbreaking new encryption app set to revolutionize privacy...

2013-02-08 Thread Jon Callas

Thanks for your comments, Ian. I think they're spot on.

At the time that the so-called Arab Spring was going on, I was invited to a 
confab where there were a bunch of activists and it's always interesting to 
talk to people who are on the ground. One of the things that struck me was 
their commentary on how we can help them.

A thing that struck me was one person who said, "Don't patronize us. We know 
what we're doing; we're the ones risking our lives." Actually, I lied. That 
person said, "Don't fucking patronize us," so as to make the point stronger. One 
example this person gave was that they had talked to people providing some social 
meet-up service, and they wanted that service to use SSL. They got a lecture about 
how SSL was flawed and that's why they weren't doing it. In my opinion, this was 
just an excuse -- they didn't want to do SSL for whatever reason (very likely 
just the cost and annoyance of the certs), and the imperfection was an excuse. 
The activists saw it as being patronizing and were very, very angry. They had 
people using this service, and it would be safer with SSL. Period.

This resonates with me because of a number of my own peeves. I have at times 
called this "the security cliff." The gist is that it's a long way from no
security to the top -- what we'd all agree on as adequate security. The cliff 
is the attitude that you can't stop in the middle. If you're not going to go 
all the way to the top, then you might as well not bother. So people don't 
bother.

This effect is also the same thing as the best being the enemy of the good, and 
so on. We're all guilty of it. It's one of my major peeves about security, and 
I sometimes fall into the trap of effectively arguing against security because 
something isn't perfect. Every one of us has at one time said that some 
imperfect security is worse than nothing because it might lull people into 
thinking it's perfect -- or something like that. It's a great rhetorical 
flourish when one is arguing against some bit of snake oil or cargo-cult 
security. Those things really exist and we have to argue against them. However, 
this is precisely being patronizing to the people who really use them to 
protect themselves.

Note how post-DigiNotar, no one is arguing any more for SSL Everywhere. Nothing 
helps the surveillance state more than blunting security everywhere.

Jon




Re: [cryptography] Meet the groundbreaking new encryption app set to revolutionize privacy...

2013-02-08 Thread Jon Callas

I am separating this from my previous as I went into a rant.

As we were designing Silent Text, we talked to a lot of people about what they 
needed. I don't remember who told me this anecdote, but this person went over 
to a colleague's office after they'd been texting, to just talk. They walked 
into the colleague's office and noticed the colleague's phone open, with a 
conversation with someone else plainly visible. A third party, a mutual 
colleague, was texting about that meeting.

In short: Alice goes to Bob's office for a meeting and sees texts from Charlie 
about that meeting, including comments about Alice.

There wasn't anything untoward about the texting. No insults about Alice or 
anything, but there was an obvious privacy loss here. What if it *had* 
included an intemperate comment about our Alice? Alice said nothing about it to 
Bob, but I got an earful. That earful included the opinion that the threat of 
accidental disclosure of messages within a group of people is greater than 
either the messages being plucked out of the air or seizure and forensic 
groveling over the device. Alice's opinion was that when people have a secure 
communications channel, they loosen up and say things that are more dramatic 
than they would be otherwise. It's not that they're more honest; they're less 
honest. They're exaggerated to the point of hyperbole at times. Alice said 
that she knew that she'd texted some things to Bob that she really wouldn't 
want the person she'd said them about to see them. They were said quickly, in 
frustration, and so on. It's not that they'd be taken out of context, it's 
 that they'd be taken *in* context.

It's interesting that, underlying the story, Alice suddenly saw Bob not as an ally 
in snark, but as a threat -- the sort of person who leaves their phone unlocked on 
their desk. Bob, of course, would say that if the texts had been 
potentially offensive, he'd have locked his phone. This explanation would thus 
convince Alice that Bob is *really* not to be trusted with snark.

This is incredibly perceptive, that the greatest security threat is not the 
threat from outside, it's the threat from inside. It is exactly Douglas Adams's 
point about the Babel fish: that by removing barriers to communication, it 
created more and bloodier wars than anything else.

That's where Burn Notice came from. It's a safety net so that when Charlie 
texts Bob, "I'm tired of Alice always..." it goes away.

What I find amusing is the reaction to it all around. There's a huge 
manic-depressive, bimodal reaction. Lots of people get ahold of this and 
they're like girls who've gotten ahold of makeup for the first time. "ZOMG! You 
mean my eyelids can be PURPLE and SPARKLY?" This is the same thing that happens 
when people discover font libraries or text-to-speech systems. For a couple of 
days that someone gets the new app, there's nothing but text messages that are 
self-destructing, purple, sparkly eyelids with font-laden Tourette's Syndrome 
with the Mission Impossible theme song playing in the background. (Note, if you 
are using Silent Text, you can't actually make the text purple, nor sparkly, 
nor change fonts. You need to put all of that in a PDF or an animated GIF -- 
and you will. This is a metaphor, not a requirements document.)

The next thing that happens is that they are so impressed with some 
particularly inspired bit of self-destructing childishness that they take a 
screen shot. As they gaze at the screen shot, or sometimes just as they take 
the screen shot, light dawns. "Oh. You mean... Oh." Then the depressive phase 
kicks in.

Back in the dark ages, PGP had the For Your Eyes Only feature. This is pretty 
much the ancestor of Burn Notice. Simultaneously useful and worthless. It's 
useful because it signals to your partner that this is not only secret but 
sensitive and does something to stop accidental disclosure. It is utterly 
ineffective against a hostile partner for many of the same reasons. We did all 
sorts of silly things with FYEO that included an anti-TEMPEST/Van Eck font, and 
other things. Silent Text actually has an FYEO feature that isn't exposed, 
thank heavens.

I mention all of that because once you're in the depressive phase, it's easy to 
go down the same rathole we did with FYEO. I spent time researching if you can 
prevent screen shots on iOS (you can't). I did this while telling people that 
it was dumb because I can take a picture of my iPhone with my iPad. I held up 
my phone to video chat and said, "Here, see this? This is what you can do!"

Sanity prevailed, but I think that fifteen years of FYEO helped a lot. When you 
stare into self-destructing messages, trying to figure out how to make them really 
go away flawlessly, they stare back. You will end up trying to figure out how 
to do a destructive two-phase commit, and which class libraries need to be patched 
so that non-mutable strings inherit from 

Re: [cryptography] Meet the groundbreaking new encryption app set to revolutionize privacy...

2013-02-07 Thread ianG

On 7/02/13 02:35 AM, Jeffrey Walton wrote:

On Wed, Feb 6, 2013 at 7:17 AM, Moti m...@cyberia.org.il wrote:

Interesting read.
Mostly because the people behind this project.
http://www.slate.com/articles/technology/future_tense/2013/02/silent_circle_s_latest_app_democratizes_encryption_governments_won_t_be.html


No offense to folks like Mr. Zimmermann, but I'm very suspect of his
claims. I still remember the antithesis of the claims reported at
http://www.wired.com/threatlevel/2007/11/encrypted-e-mai/.



When we [0] were building the original Hushmail applet, we knew the flaw 
- the company could switch the applet on the customer.  The response to 
that was to publish the applet, and then the customer could check the 
applet wasn't switched.
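The check described above -- publish the applet, let the customer confirm the served copy matches -- amounts to comparing the download against an out-of-band digest. A sketch (hypothetical function name; SHA-256 is an anachronism for the original Hushmail applet, used here only for illustration):

```python
import hashlib

def applet_matches_published(served_applet: bytes, published_sha256: str) -> bool:
    """Compare the applet the server actually served against a digest
    published out of band; a switched applet no longer matches."""
    return hashlib.sha256(served_applet).hexdigest() == published_sha256

# The vendor computes and announces the digest once:
published = hashlib.sha256(b"applet-v1 bytecode").hexdigest()

# A curious customer re-downloads the applet and checks it:
assert applet_matches_published(b"applet-v1 bytecode", published)
assert not applet_matches_published(b"switched applet", published)
```

The weakness iang concedes stands regardless: the check only helps the customers who actually run it.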


Now, you can look at this two ways:  one is that it isn't perfect as 
nobody would bother to check their applet.  Another is that it isn't 
perfect but it was a whole lot better than futzing around with OpenPGP 
keys and manual decrypting.  And it was the latter 'risk' view that won, 
Hushmail filled that niche between the hard core pgp community, and the 
people who did business and needed an easy tool.


This is also the same thing that is the achilles heel of Skype.  It 
turns out (rumour has it) that the attack kit for Skype that circulated 
in the late 00s amongst the TLAs was simply a PC breach kit that 
captured the Skype externals - keystrokes, voice, screen etc.  Once the 
TLAs had that, they were happy and they shut up.  It was easier for them 
to breach the PC, slip in the wrapper attack kit, and listen in than 
seriously hack the skype model.  And, then, media perception that Skype 
was unhackable worked again, everyone was happy.


Same will be true of Silent Circle, and they will already know this 
(note that I have nothing to do with them, I just read the model like 
anyone else).  The security requirement here is that they don't need it 
to be completely unbreakable, they just have to push 99% of the attacks 
onto the next easy thing -- the phone itself.  Security is lowest common 
denominator, not highest uncommon numerator.  See below.


FWIW, their security model looks pretty damn good, in that it is nicely 
balanced to their business model (the only metric that matters) and they 
trialled this through several iterations (ZRTP, I think).  They are the 
right team.  Even their business customer looks fantastic (hints 
abound).  If you're looking for an investment tip, this wouldn't be so 
far off ;)




I'm also suspect of ... the sender of the file can set it [the
program?] on a timer so that it will automatically “burn” - deleting
it [encrypted file] from both devices after a set period of, say,
seven minutes. Apple does not allow arbitrary background processing - 
it's usually limited to about 20 minutes. So the process probably won't 
run on schedule or it will likely be prematurely terminated. In 
addition, flash drives and SSDs make it notoriously difficult to wipe an 
unencrypted secret.



Don't be suspicious, be curious -- this is where security is at. 
Remember:  The threat is always on the node, it is never on the wire.


Looking back at that Hushmail app, another anecdote.  When I was doing 
business with a guy who was security paranoid, he used an unpublished 
nym, encrypted his messages with PGP, and then sent them via Hushmail to 
me.  Life then turned aggressive, and we ended up in court.  His side 
demanded discovery.  I took all his untraceable, pgp-encrypted and 
Hushmail-protected mails and filed them in as cleartext discovery, as I 
was severely told to do by the court.  Oops.  From there they entered 
into the transcript as evidence, and from there, others were able to 
acquire the roadmap via subpoena.


The threat is always on the node.  Never the wire.

Your node, your partner's node, your partner's friend's node...  It is 
this that the Mission Impossible deletion feature is aimed at, and it is 
this real world node threat that it viably addresses.  This is what 
people want.  The fact that it is theoretically imperfect doesn't make 
it unreasonable.




Perhaps a properly scoped PenTest with published results would allay my
suspicions. It would be really bad if people died: ... a handful of
human rights reporters in Afghanistan, Jordan, and South Sudan have
tried Silent Text’s data transfer capability out, using it to send
photos, voice recordings, videos, and PDFs securely.



Nah, this again is the wrong approach.  Instead think of it this way: 
of 100 human rights reporters, if 99 are protected by this tool, and one 
dies, that is probably a positive.  If 100 human rights reporters are 
scared away by media geeks that say it is unlikely to be perfect, and 
instead they use gmail, and 99 are caught (remember Petraeus) then this 
is probably a negative.


Human rights reporters already put their life on the line.  Your mission 
is not to protect their life absolutely, as if we are analysing the need 
for a neighbour's swimming pool 

Re: [cryptography] Meet the groundbreaking new encryption app set to revolutionize privacy...

2013-02-07 Thread Thierry Moreau

ianG wrote:


[Hushmail design]  isn't
perfect but it was a whole lot better than futzing around with OpenPGP 
keys and manual decrypting.  And it was the latter 'risk' view that won, 
Hushmail filled that niche between the hard core pgp community, and the 
people who did business and needed an easy tool.


Don't be suspicious, be curious -- this is where security is at. 

Human rights reporters already put their life on the line.  Your mission 
is not to protect their life absolutely,


One design aspect seems missing from the high-level discussion: how do 
you define the security mechanism failure mode? You have basically two 
options: connect with an insecure protocol, or do not connect at all.


If it's a life-preserving application, this question should be addressed 
explicitly. A fail safe system may be either way, but stakeholders 
should know which way. Airplane pilots are trained according to the 
failure mode of each aircraft subsystem. E.g. if two-way radio fails, 
the pilot may remain confident (from an indication in the cockpit) that 
the air traffic controller (ATC) still sees the aircraft identifier on 
the radar (see Wikipedia entry for transponder) during the emergency 
landing. Thus the decision to land at the major airport (instead of a 
secondary airport with less traffic in conflict but lower grade 
facilities) is taken based on the fail-safe property of the 
aircraft-to-ATC communications subsystem.


Regards,

--
- Thierry Moreau



Re: [cryptography] Meet the groundbreaking new encryption app set to revolutionize privacy...

2013-02-06 Thread Ben Laurie
On 6 February 2013 23:35, Jeffrey Walton noloa...@gmail.com wrote:
 On Wed, Feb 6, 2013 at 7:17 AM, Moti m...@cyberia.org.il wrote:
 Interesting read.
 Mostly because the people behind this project.
 http://www.slate.com/articles/technology/future_tense/2013/02/silent_circle_s_latest_app_democratizes_encryption_governments_won_t_be.html

 No offense to folks like Mr. Zimmermann, but I'm very suspect of his
 claims. I still remember the antithesis of the claims reported at
 http://www.wired.com/threatlevel/2007/11/encrypted-e-mai/.

 I'm also suspect of ... the sender of the file can set it [the
 program?] on a timer so that it will automatically “burn” - deleting
 it [encrypted file] from both devices after a set period of, say,
 seven minutes. Apple does not allow arbitrary background processing -
 it's usually limited to about 20 minutes. So the process probably won't
 run on schedule or it will likely be prematurely terminated. In
 addition, flash drives and SSDs make it notoriously difficult to wipe an
 unencrypted secret.

And there's also the issue that there's really no way you can force
the recipient to delete the file. A point they even illustrate later
in the article:

A few weeks ago, it was used in South Sudan to transmit a video of
brutality that took place at a vehicle checkpoint. Once the recording
was made, it was sent encrypted to Europe using Silent Text, and
within a few minutes, it was burned off of the sender’s device. Even
if authorities had arrested and searched the person who transmitted
it, they would never have found the footage on the phone. Meanwhile,
the film, which included location data showing exactly where it was
taken, was already in safe hands thousands of miles away—without
having been intercepted along the way—where it can eventually be used
to build a case documenting human rights abuses.

i.e., _not_ burned by the recipient.

So, is the claim that they've invented delete?

 Perhaps a properly scoped PenTest with published results would allay my
 suspicions. It would be really bad if people died: ... a handful of
 human rights reporters in Afghanistan, Jordan, and South Sudan have
 tried Silent Text’s data transfer capability out, using it to send
 photos, voice recordings, videos, and PDFs securely.

This sounds just like step one in another Haystack-like fiasco.


Re: [cryptography] Meet the groundbreaking new encryption app set to revolutionize privacy...

2013-02-06 Thread Jon Callas


On Feb 6, 2013, at 3:35 PM, Jeffrey Walton wrote:

 On Wed, Feb 6, 2013 at 7:17 AM, Moti m...@cyberia.org.il wrote:
 Interesting read.
 Mostly because the people behind this project.
 http://www.slate.com/articles/technology/future_tense/2013/02/silent_circle_s_latest_app_democratizes_encryption_governments_won_t_be.html
 
 No offense to folks like Mr. Zimmermann, but I'm very suspect of his
 claims. I still remember the antithesis of the claims reported at
 http://www.wired.com/threatlevel/2007/11/encrypted-e-mai/.
 
 I'm also suspect of ... the sender of the file can set it [the
 program?] on a timer so that it will automatically “burn” - deleting
 it [encrypted file] from both devices after a set period of, say,
 seven minutes. Apple does not allow arbitrary background processing -
 it's usually limited to about 20 minutes. So the process probably won't
 run on schedule or it will likely be prematurely terminated. In
 addition, flash drives and SSDs make it notoriously difficult to wipe an
 unencrypted secret.
 
 Perhaps a properly scoped PenTest with published results would allay my
 suspicions. It would be really bad if people died: ... a handful of
 human rights reporters in Afghanistan, Jordan, and South Sudan have
 tried Silent Text’s data transfer capability out, using it to send
 photos, voice recordings, videos, and PDFs securely.

No offense is taken. You don't even need a pen test. I'll tell you how it works.

There's no magic there. Every message that we send has metadata on it that is a 
timeout. The timer starts when you get the message. So if I send you a seven 
minute timeout while you're on an airplane, the seven minutes starts when you 
receive the message.
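The burn timeout Jon describes -- metadata carried with the message, with the clock starting at receipt rather than at send -- can be sketched as follows (hypothetical class and field names, not Silent Text's actual implementation):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BurnMessage:
    body: str
    burn_after_s: float                  # timeout carried as message metadata
    received_at: Optional[float] = None  # set on receipt, not on send

    def receive(self, now: float) -> None:
        # The clock starts when the message is *received*: a seven-minute
        # timer set while the recipient is on an airplane only begins
        # once delivery actually happens.
        self.received_at = now

    def expired(self, now: float) -> bool:
        if self.received_at is None:
            return False  # undelivered: the timer has not started
        return now - self.received_at >= self.burn_after_s
```

Under the iOS limitation described below, `expired` would simply be re-evaluated when the app returns to the foreground, so the delete can be late but still happens.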

And you are correct, the iOS app model doesn't allow background tasks, so if 
you switch away from the app for an hour, the delete doesn't happen until you 
switch back to the app. Until Apple lets us do something in the background, 
we're stuck with that limitation. It's that simple. We hope to do better on 
Android. And if someone from Apple happens to be listening in, we'd love to be 
able to schedule some deletions.

Deleting the things, however, is trivial. This is a place that iOS shines. 
Every file is encrypted with a unique key and if you delete the file, it is 
cryptographically erased. You're correct that it *is* notoriously 
difficult to wipe unencrypted secrets from flash. Fortunately for us, all the 
flash on iOS is encrypted and the crypto management is easy to use.
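The cryptographic-erasure idea -- one key per file, so destroying the key destroys the file even when the ciphertext can't be scrubbed from flash -- can be modelled in a few lines. This is a toy (the keystream below is NOT real crypto; iOS uses hardware AES with per-file keys), meant only to show the mechanism:

```python
import hashlib
import os

def _keystream(key: bytes, n: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. Illustration only;
    # a real system uses a vetted cipher such as AES.
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n])

class CryptoEraseStore:
    """Per-file keys: deleting a file's key cryptographically erases it,
    even if the ciphertext lingers on flash that is hard to wipe."""

    def __init__(self) -> None:
        self._keys: dict = {}
        self._blobs: dict = {}

    def write(self, name: str, data: bytes) -> None:
        key = os.urandom(32)  # unique random key per file
        self._keys[name] = key
        ks = _keystream(key, len(data))
        self._blobs[name] = bytes(a ^ b for a, b in zip(data, ks))

    def read(self, name: str) -> bytes:
        key = self._keys[name]  # raises KeyError once erased
        blob = self._blobs[name]
        ks = _keystream(key, len(blob))
        return bytes(a ^ b for a, b in zip(blob, ks))

    def erase(self, name: str) -> None:
        # Only the key is destroyed; the now-unreadable ciphertext may remain.
        del self._keys[name]
```

The point of the design: "delete" becomes a tiny, fast operation on the key store, independent of how stubbornly the underlying flash retains the ciphertext.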

Jon

