Re: [liberationtech] Addressing Imbalances in Communications via Cryptographic Redaction

2017-06-25 Thread Seth David Schoen
Brian Dickens writes:

> The concept is a HTML5 "jQuery" widget you can put on web forms (any
> number of them) which gives the author a redaction pen, to mark out
> sensitive portions.  The sensitive portions are never sent to the
> server, but the rest of it can be.  Then a certificate is generated
> allowing selective revelation to which parties you wish.

Hi Brian,

I'm not sure that you ought to allow people to see the number of
redacted characters.  I know this looks like a nice user experience,
but in other contexts, people have been able to use this information
to more readily guess the content of what was redacted.  For example,
suppose that what's redacted is the name of a person (a witness, victim,
or suspect in a crime, for instance).  Then a third party can test a
hypothesis about the person's identity by seeing if the length of their
name matches the length of the redaction.  That could be especially
damaging if the person's name is unusually short or unusually long.
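
As a rough illustration of one mitigation (a sketch of mine in Python, not
anything your widget necessarily does -- the bucket size and function name
here are made up), you could only ever expose a padded length, rounded up to
a fixed bucket, so short and long names become indistinguishable:

    # Rough sketch (not from the widget): round each redacted span up to a
    # fixed-size bucket so its true length isn't revealed.  BUCKET and the
    # function name are hypothetical.

    BUCKET = 32  # reveal only multiples of 32 characters

    def padded_redaction_length(secret_text):
        """Return a length to display that hides the exact length."""
        n = len(secret_text)
        # round up to the next multiple of BUCKET (so "Bo" and "Bartholomew"
        # both appear as 32 redacted characters)
        return ((n // BUCKET) + 1) * BUCKET

    print(padded_redaction_length("Bo"))           # 32
    print(padded_redaction_length("Bartholomew"))  # 32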

You might also want to encourage people to think about other
language-based information leaks when redacting.  For example, they
may want to redact additional words to avoid revealing whether redacted
words start with vowels, and to avoid revealing grammatical categories.

-- 
Seth Schoen  
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [liberationtech] unmonitored international communication?

2016-03-03 Thread Seth David Schoen
Carolyn Santo writes:

> The recent talk about video games made me wonder about using them as
> a communication channel that might not be monitored by repressive
> governments.

I've heard this idea is interesting to anti-censorship campaigners as
well as to spy agencies.

A disadvantage is that historically a lot of video game network
protocols haven't even been transport-encrypted, let alone end-to-end
encrypted.  So someone monitoring the network could likely even search
for text strings in the traffic and find them, or in any case could
develop software to interpret the game traffic.  This could change if
more game protocols ran over TLS or DTLS.

A further disadvantage is that the game operators themselves could
monitor in-game communications and many of them probably have tools to
do this, not least because multiplayer online games have been plagued by
harassment and griefing and the game operators may want to have an easy
way to review users' communications (which in turn can be applied to
consensual communications too).  Jurisdictions that impose surveillance
capability mandates (like the U.S.) may try to apply these to some kinds
of in-game communications.

An advantage is that some, but not all, surveillance systems may
have been programmed to systematically discard most gaming-related
traffic as uninteresting.  And any given game, especially one that's
not super-popular, might be far down the list of platforms for which a
particular surveillance system or organization develops analysis tools.

-- 
Seth Schoen  
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [liberationtech] secure voice options for china?

2015-02-17 Thread Seth David Schoen
Tim Libert writes:

> thanks all for the many good suggestions!  however, in absence of a clear
> consensus, I will advise my friend to avoid voice and stick to encrypted
> email.  my understanding is that the new leadership in china isn’t f#cking
> around, so the risk/reward equation here suggests heightened caution -
> especially as I cannot make assumptions on technical know-how of parties
> involved.

A countervailing point is that encrypted e-mail with the mainstream
technologies used for that purpose never provides forward secrecy, while
most voice encryption techniques do.  So with the use of encrypted e-mail,
there is an ongoing risk into the future (assuming that a recipient's
private key still exists somewhere), while with the voice encryption,
the risk may be time-limited -- assuming that the implementations were
correct enough, and that the key exchange was based on a mathematical
problem that will remain hard for an attacker.
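
As a rough sketch of what forward secrecy buys here (my own illustration
using the pyca/cryptography library; authentication of the exchange is
omitted, and this isn't a description of any particular voice app):

    # Minimal sketch of ephemeral (forward-secret) key agreement, roughly the
    # property modern voice crypto has and classic PGP-style e-mail lacks.
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    # Each party generates a fresh key pair for this one call...
    alice_priv = X25519PrivateKey.generate()
    bob_priv   = X25519PrivateKey.generate()

    # ...exchanges only the public halves, and derives the same shared secret.
    shared_a = alice_priv.exchange(bob_priv.public_key())
    shared_b = bob_priv.exchange(alice_priv.public_key())
    assert shared_a == shared_b

    # When the call ends, the private keys are simply discarded; a recording
    # of the encrypted traffic can't be decrypted later by seizing a long-term
    # key, because no long-term decryption key ever existed.
    del alice_priv, bob_priv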

As a simple analogy, sometimes people prefer to have a phone call about
sensitive matters because it doesn't create records, while writing a
letter would make a paper trail.  The technical reasons behind the
analogy don't transfer at all, but there might still be something to the
intuition that the encrypted phone call can be more ephemeral than the
encrypted mail.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107

Re: [liberationtech] Tailored Crypto Workshops in Brussels

2014-09-02 Thread Seth David Schoen
Piotr Chmielnicki writes:

> I'm a bit shocked by the content of this email.
>
> Securing data of persons as important as the European Commission
> Officials should be the full time work of a dedicated elite infosec
> crew. I would be very surprised if there were no such things in place.

When I went to Washington to lobby staff of U.S. legislators about
surveillance issues last fall, it appeared that most U.S. legislative
offices had little or no official information security resources, plans,
tech support, etc.  There are legislative committees that officially
deal with classified information, and those committees get official
information security support (including SCIFs in which to hold classified
conversations), but for the ordinary legislative office where the member
of the legislature works on a day-to-day basis, not so much.

Clearly there are some people who investigate particular cases
of espionage and try to detect or punish it, but in terms of giving
resources to the legislators and their staff members in order to protect
themselves, not much, from what I heard during the lobbying meetings.
The staff members do receive official Blackberries, but they and the
legislators also conduct legislative business over ordinary e-mail and
telephone calls, including mobile calls from ordinary smartphones.

I also remember talking to a junior diplomat from a Western European
country at a conference last year.  My impression from him was that
he _did_ have official information security briefings and resources,
but found them fairly rudimentary, and that they didn't really stop
his colleagues from doing appreciable amounts of work over unencrypted
channels.

One challenge for both the legislators and the diplomat was that, even
if they did have some kind of encrypted communications channel to use
for internal communications within their organizations, they often had
to have discussions with people from _other_ organizations and countries,
and nobody had made arrangements to secure those communications.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107



Re: [liberationtech] Snakeoil and suspicious encryption services

2014-07-21 Thread Seth David Schoen
Aymeric Vitte writes:

> You obviously don't know what you are talking about or just did not
> get what I explained or just do not understand http versus https or
> the contrary, or just do not understand the web, what's on client
> side (browser) or on server side, or don't get that your extension
> can be mitmed too including its signature.
>
> So unfortunately I have to stop this discussion right here with you,
> not to waste the time of serious people on this list, if you want to
> restart with another tone, then please go, but first checkout what
> is writen on Peersm site, everything is explained, including your
> focus on elementary mitm issue, your arguments and judgement are so
> basic that I am wondering why I am answering it, you should do some
> reading, and if you can trivially defeat Peersm, then just show us
> how

The Peersm homepage is loaded over HTTP and right at the bottom says

</div><script type="text/javascript" id="script"
src="https://peersm.com/node-browser.js"
nonce="90f64442274ffb89dd6a1c409f28404e35d514f6"></script>

Well, that nonce is probably different for different users.

An attacker (like the barista) can make that get loaded as

</div><script type="text/javascript" id="script"
src="https://peeersm.com/node-browser.js"
nonce="90f64442274ffb89dd6a1c409f28404e35d514f6"></script>

The version of the node-browser.js file there can be slightly changed to
leak the user's crypto keys by synthesizing an HTTP GET to some other
host with the user's private keys as part of the URL.  The security of
your crypto protocol is not relevant to this attack because substituting
a modified client leaks key material _outside of your protocol_, much
as redirecting http://www.mozilla.com/ via a captive portal and then
giving users a backdoored download of Firefox would allow leaking TLS
session keys (without breaking TLS).

It's true that the user could detect the change if they view source,
but the change may be a very small percentage of the real code, so
it's a pretty significant practical question how the user will detect
the change.  (And will the user be willing to check what is currently
456 kB of Javascript every time they use Peersm?)
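
For what it's worth, a crude check might look like the sketch below (my own
illustration in Python; the reference hash is a placeholder, and where a user
would get a trustworthy reference hash in the first place is exactly the open
question):

    # Crude illustration only: fetch the script and compare it to a hash you
    # obtained out of band.  The value below is a placeholder, not a real hash.
    import hashlib, urllib.request

    URL = "https://peersm.com/node-browser.js"
    KNOWN_GOOD_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

    with urllib.request.urlopen(URL) as resp:
        body = resp.read()

    digest = hashlib.sha256(body).hexdigest()
    if digest != KNOWN_GOOD_SHA256:
        print("WARNING: node-browser.js does not match the expected hash:", digest)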

There have also been some tricks to make it hard for the user to view
source (or to make the source that appears look wrong).  Hopefully future
browsers will reliably show authentic source code, but even if they
do, we're left with the "how does the user know that this 456 kB of
Javascript is really right?" problem.  The fact that it was apparently
loaded from the right domain is not very satisfactory by itself, both
because of small typos, attempts to make it look like the Javascript was
loaded over a CDN for efficiency reasons, and maybe homoglyph attacks,
while even if the user is sure that it was genuinely loaded over HTTPS
from peersm.com, there is still a risk that you as the software developer
could somehow be forced to give particular users a fake version (based on
their client IP address), or that the peersm.com server could be hacked
and could give some users a fake version without your even realizing it.

I would be happy to accept that browser extensions only partially address
these threats -- especially threats of attacks by the browser extension
developer, or involving attacks against the extension developer's
infrastructure.  We have better assurances that the network operator can't
substitute Javascript code for the Javascript code that we wanted (by
installing the browser extension only over HTTPS, and not re-downloading
it every time).  But currently, we don't have a good assurance that
the browser extension we got is the same one that everyone else got
(and that may have been audited), nor that we get the same updates as
all other users.  I think it's quite important for browser developers
and browser extension developers to address these limitations.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107



Re: [liberationtech] when you are using Tor, Twitter will blocked your acc

2014-06-09 Thread Seth David Schoen
Griffin Boyce writes:

>   I'd recommend reaching out formally (perhaps to privacy@ ?) and
> proposing a whitelist or other special consideration for Tor users.

It seems obviously crazy to me for Twitter to prevent people from
accessing it over Tor, both in light of widespread censorship of Twitter
on different networks and in light of governments' attempts to find out
where users of services are connecting from.

On the other hand, if a service is viewing anomalous originating IP
address as an indicator of compromise, then using Tor destroys that
information source.  For example, if Twitter whitelists Tor exit nodes
and says that connecting from them is never viewed as suspicious, then
anybody who knows this and compromises a Twitter user's account can
just use the stolen account over Tor and never get detected or blocked.

I guess there are some people who try to compromise Twitter accounts
who wouldn't learn about this policy and take advantage of it, but
that seems like a significant assumption.  So, should Twitter just
stop enforcing the compromise detection entirely when users connect
via anonymity services?  It seems like that would significantly
undermine the compromise detection.

One alternative idea is to have a flag on people's accounts that says
"OK to connect via anonymity services"; then a question is how people
can get that flag (ideally, without getting the account blocked even
once) and how someone who hijacks an account can be prevented from
setting the flag maliciously.
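
Purely as a hypothetical sketch (none of these field or function names are
Twitter's; the logic is just to illustrate the idea of the flag):

    # Hypothetical sketch of how such a per-account flag might interact with
    # IP-reputation checks; all names here are invented.
    def login_is_suspicious(account, source_ip, tor_exit_ips):
        if source_ip in tor_exit_ips:
            # Users who opted in are not penalized for Tor; everyone else
            # still triggers the anomalous-origin heuristics.
            return not account.get("allow_anonymity_networks", False)
        return source_ip not in account.get("usual_networks", set())

    account = {"allow_anonymity_networks": True, "usual_networks": {"198.51.100.7"}}
    print(login_is_suspicious(account, "192.0.2.10", tor_exit_ips={"192.0.2.10"}))  # False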

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [liberationtech] PGP WOT

2014-03-23 Thread Seth David Schoen
Jonathan Wilkes writes:

> Furthermore, couldn't I periodically query every publicly accessible
> PGP keyserver (maybe do it in a distributed manner) to see who
> signed what, and then mirror that web of trust with the keys I
> control?
>
> Furthermore, couldn't I also upload keys with same name/email
> addresses for any keys that existed before I started, lie about the
> creation date, and work those into my hall of mirrors?

Micah Lee's OHM talk addressed these problems:

https://program.ohm2013.org/event/113.html

https://github.com/micahflee/trollwot

https://github.com/micahflee/trollwot/blob/master/trollwot.pdf

(It doesn't really propose solutions, just highlights the problems very
well.)

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [liberationtech] Amazing New Privacy Product for Webcams

2014-03-02 Thread Seth David Schoen
Guido Witmond writes:

> Blocking a camera (and muting it's microphone) are wise things to do,
> but here Yahoo had 'forgotten' to implement end-to-end encryption.

... or even client-server encryption between the user and Yahoo.

(Disclosure: my employer has a competing webcam privacy tool.)

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [liberationtech] Privus?

2014-02-28 Thread Seth David Schoen
Hisham writes:

> Hello LibTech crowd,
>
> Sorry if this has been discussed here before but is anybody here familiar
> with a software called Privus?
> https://www.kickstarter.com/projects/857935876/175768761?token=bbfb88ac
>
> Its developers promote it as an encryption service that offers absolutely
> unbreakable security.
> It uses OTP encryption technology, that developers claim is harder to break
> that PGP.

OTPs can be absolutely unbreakable, but you have to generate the pads in
an absolutely random manner, distribute them over an absolutely secure
channel, store them with an absolutely secure storage method, and then
only use each one once.

Governments have, from World War II to today, tried to actually follow
these rules (with physical distribution of key material).  It's been
expensive and cumbersome because each pair of potential communicating
parties needs to have -- in advance! -- as much key material as the total
amount of communication that they may ever do.  They can't send any
more new key material electronically (unless they want to burn some
other existing key material); effectively, it's subject to a
conservation law.

Tools that claim to use an OTP that don't involve physical key material
distribution (like, meet the person in person and give them a key that
they have to keep physically secure, and make sure that the key is as
long as all the messages that you may exchange before you next see them
again) are doing it wrong.

A lot of people hear about the use of XOR in OTPs and think of some way
to create the pad based on a smaller amount of information that can be
exchanged in another way.  If you do this, the pad is actually a stream
cipher and the absolute security guarantees are lost.  (The goal of a
stream cipher is to make an encryption keystream from a short key in such
a way that someone who doesn't know the key can't determine the
keystream, nor detect any regularities in it.  The keystream plays the
role of a one-time pad key, but it is not truly random because it's
produced by a deterministic means.)  There are many stream ciphers out
there, and some of them are thought to offer good security, but none is
provably unbreakable and some have been broken in practice.
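
A toy illustration of that distinction (my own sketch, not a usable cipher;
Python's random module is deliberately used here as an example of a
*non*-cryptographic keystream generator):

    # The first XOR uses a truly random pad as long as the message: a
    # one-time pad.  The second expands a short seed with a PRNG into a
    # keystream: that is a stream cipher, and here a deliberately weak one.
    import os, random

    def xor(data, key):
        return bytes(a ^ b for a, b in zip(data, key))

    msg = b"attack at dawn"

    # One-time pad: truly random, as long as the message, used once.
    pad = os.urandom(len(msg))
    otp_ciphertext = xor(msg, pad)

    # "OTP" generated from a short seed: really a (weak) stream cipher.
    rng = random.Random(0xC0FFEE)
    keystream = bytes(rng.randrange(256) for _ in range(len(msg)))
    stream_ciphertext = xor(msg, keystream)

    assert xor(otp_ciphertext, pad) == msg
    assert xor(stream_ciphertext, keystream) == msg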

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [liberationtech] Secure Email Survey

2013-11-25 Thread Seth David Schoen
carlo von lynX writes:

> Hm, federation is so commonly expected to be the normality that
> any distributed system is filed under p2p even if, like Tor, it
> runs on thousands of servers, thus rather distant from what p2p
> was supposed to mean. Tor started as P2P, but I think it isn't
> anymore.

I don't think Tor was ever peer-to-peer.  It has a directory listing
all of the public routers; originally the directory was maintained
by hand by the Tor developers, rather than by automated announcement
notices from new routers to the directory servers.

I think the "you should make every Tor user be a relay" question has
been in the FAQ all along:

https://www.torproject.org/docs/faq.html.en#EverybodyARelay

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [liberationtech] Google Unveils Tools to Access Web From Repressive Countries | TIME.com

2013-10-21 Thread Seth David Schoen
Jillian C. York writes:

> Since I already have more skepticism of Google Ideas and Jared Cohen than I
> need, let me pose this question:
>
> With the understanding that uProxy provides no anonymity protections, *is
> it providing anything that other circumvention tools do not already?*
> What's unique about it?

It seems to me that there's a larger pool of IP addresses that are potentially
less convenient to blacklist (although I'm concerned that the colo/residential
IP address distinction beloved of antispam activists could lead some
governments to try completely blocking connections to overseas residential
addresses by default -- pushing a norm that there should just not be any
international pure peer-to-peer services).

If you do the circumvention in a more peer-to-peer fashion, perhaps there aren't
many people who know about the proxy and there aren't such a statistically
remarkable number of people connecting to it.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [liberationtech] scrambler

2013-08-29 Thread Seth David Schoen
Michael Hicks writes:

> ok so I guess I just send u guys the links and u check out my software and
> Vet it? This was made for people to be able to protect their privacy and the
> NSA can't hack it No One can it's impossible. all the information is at
> scrambler.webs.com

It's true that no one can crack a one-time pad, which your software
claims to implement.  A one-time pad might be useful for some people,
though it's possible that they shouldn't then use a computer to encrypt
and decrypt, because using a computer introduces new vulnerabilities
(like radiofrequency emanations and remote software exploits).

There might still be cryptographic vulnerabilities in the random number
generation that your software uses.  There was recently a high-profile
vulnerability in the random number generation provided by the Java
implementation on Android, which allowed keys to be compromised.  If
there were a similar vulnerability in the Java implementations people
use with your software, it might have similar consequences -- which
might not be the fault of your software, but might still undermine its
security.

A one-time pad is probably not very useful to most people who need to
communicate securely because they have to find a safe way, ahead of
time, to distribute and store the key material with each potential
party that they may communicate with.  That's a pretty heavy burden,
especially when people are meeting new contacts and wanting to
communicate with those contacts (without having been able to arrange
a prior physical key distribution).

It also doesn't integrate easily with any form of communications
other than exchanging files, although it would be possible to extend
it to other things like e-mail or IM if you could manage the sequence
numbers properly to avoid reusing key material (something our existing
protocols don't really help with).

If you read _Between Silk and Cyanide_, there's a good and interesting
historical account of wartime military use of one-time pads.  One of
the messages seems to be that it was quite expensive and cumbersome,
though perhaps well worth it for the particular application.  It's hard
to imagine many audiences prepared to actually bear these costs for
many of their communications today.  We already see people complaining
about the effort and overhead of things like PGP merely because some
aspects of the key management are made explicit to the user.  For
one-time pads _every_ aspect of key management is made explicit -- and
manual, and requiring the exchange of physical objects!

My intuition is that people who feel that one-time pads are necessary
should probably learn to operate them by hand, the way the SOE agents
in that book did.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [liberationtech] [guardian-dev] An email service that requires GPG/PGP?

2013-08-09 Thread Seth David Schoen
Tim Prepscius writes:

> We want to get to a state where an e-mail server is easy to set up.
> And runs with *non governmental* issued ssl certificates.

I think this might reflect a misperception of the threat model around
misissuance of certificates.

If you think governments are likely to use their own CAs for spying by
issuing fraudulent certificates, you want to remove trust for those
CAs _in your web browser_.  Having a valid, correct, and publicly issued
certificate from such a CA does not make the CA operator any more able
to spy on you.

There was a lot of concern when CNNIC became a root CA in mainstream
browsers because of the perception that the Chinese government could
force CNNIC to misissue certificates to facilitate surveillance.  But
this risk would be a reason for users not to trust the CNNIC root in
their browsers, not directly a reason for sites to avoid getting certs
from CNNIC.  The cert isn't some kind of poison for private
communications that use it, it's just a way of telling browsers that your
key is OK to use.  If you have a cert that tells browsers that your key
is OK to use and the browsers will accept it and you agree with the
contents of that cert, the cert is fine for you to use on your site.

The risk to me from, say, CNNIC is that even though I use a cert from
StartCom, CNNIC will secretly misissue a different cert for my site
containing a public key controlled by the Chinese government, and then
the government can use that to spy on some users who communicate with
my site.  The risk is not that I would ask CNNIC's CA for a cert for my
site containing my actual public key and that they would say yes and give
it to me. :-)

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [liberationtech] Internet blackout

2013-06-13 Thread Seth David Schoen
Rich Kulawiec writes:

> Usenet has long since demonstrated the ability to route around
> amazing amounts of damage and flakiness and to maintain communications
> over very slow (including sneakernet) links.
>
> Arguably, that sentence describes the normal operational state of the
> network on a typical summer day just like this one, 30 years ago. ;-)
>
> Usenet has some very nice properties for applications like this:
>
> 1. There is no centralization.  Thus there is no single target to
> shut down or block.
>
> 2. Messages are not addressed to individuals.  This frustrates
> some traffic analysis.
>
> 3. It's transport-agnostic.  Messages can be passed via IP, via UUCP,
> by USB stick, CD, DVD, etc.
>
> 4. It's highly delay-tolerant.
>
> 5. It's content-agnostic.
>
> 6. It's highly fault-tolerant.
>
> 7. It doesn't require real-time IP connectivity.  In areas where
> IP connectivity is scarce, expensive, intermittment, wiretapped
> or blocked, this is a big plus.
>
> 8. It's standardized.
>
> 9. Mature open-source software already exists for it.
>
> 10. Peering relationships can be ad-hoc.

These properties are really awesome.  One thing that I'm concerned
about is that classic Usenet doesn't really do authenticity.  It
was easy for people to spoof articles, although there would be
_some_ genuine path information back to the point where the spoofed
article originated.  It seems like if we're talking about using
Usenet in an extremely hostile environment, spoofing and forgery
are pretty significant threats (including classic problems like
spoofed control messages! but also cases of nodes modifying
message content).  A lot of the great properties you've mentioned
above that Usenet has already demonstrated have more to do with
performing well over slow or unreliable network links, but perhaps
not over actively hostile ones.

Some Usenet clients support PGP signing, but that may be of limited
use unless most users can verify and generate signatures.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [liberationtech] NSA, FBI, Verizon caught red handed spying on US citizens in the US

2013-06-07 Thread Seth David Schoen
Anthony Papillion writes:

> It's up to us to protect ourselves and, thankfully, we have the
> technology to do just that.

(As I suggested in a previous message, I strongly support greater use
of privacy-enhancing technologies, and finding tactics to increase the
demand for them.)

I think it's become clear that traffic and location data is much harder to
protect technologically than content.  Advocates for privacy-enhancing
technology sometimes don't appreciate or don't effectively communicate
the scope of this problem.  I've seen a lot of people in the last day
or so referring to the need to encrypt everything.

Encrypting everything is surely of tremendous benefit for privacy, but
in low-latency packet-switched networks, it has no effect at all on the
ability to perform traffic analysis.  In order to get networks that we
don't control to deliver our communications to the parties we choose, we
have to tell the intermediaries who run the networks where to send the
communications, affixing identifiers like IP addresses and PSTN numbers.
Then the network operators can record and disclose all of that
information.  And the implications of that information are significant,
especially when it includes or implies location data.

We just recently had a discussion here that touched on how difficult
it might be to make a mobile phone that doesn't allow location
tracking.  I think it's possible with a significant engineering
effort, but the easiest ways to design and deploy mobile communications
networks all automatically make users' locations trackable.

The best widely-used tool to defend against traffic analysis is Tor,
but Tor's developers readily concede that it has a lot of important
limitations and that there's no obvious path around many of them.
Two of these important limitations (not the only ones) are:

① Anonymization adds latency to communications.  Better anonymization
usually adds more latency.  Everywhere else, communications engineers
are struggling to take the latency out of people's communications.
At least in some systems, anonymity engineers are struggling to put
it in.

② Network adversaries can notice that things coming out of a system
correspond to things going in.

Here's one of many statements of these two issues as they relate to
systems like Tor:

   Furthermore, Onion Routing makes no attempt to stop timing attacks
   using traffic analysis at the network endpoints. They assume that
   the routing infrastructure is uniformly busy, thus making passive
   intra-network timing difficult. However, the network might not
   be statistically uniformly busy, and attackers can tell if two
   parties are communicating via increased traffic at their respective
   endpoints. This endpoint-linkable timing attack remains a difficulty
   for all low-latency networks.

http://www.freehaven.net/src/related-comm.thtml

These issues are less severe if people are using e-mail or (maybe
better yet) forum posting, over an encrypted channel to a popular
service that many people use.  But they're quite serious for voice
calls, video conferencing, and even instant messaging.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107

Re: [liberationtech] Cell phone tracking

2013-05-31 Thread Seth David Schoen
Eugen Leitl writes:

> There might be use cases for using end-to-end encrypting
> VoIP phones on Mifi over 3G/4G (assuming you can penetrate
> the double NAT), as here both security compartments are
> separate.

That seems to have some clear potential privacy and security benefits,
but if you use a MiFi with a 3G account registered in your own name,
the carrier will still be able to track the location of the MiFi
device itself and associate it with your identity.

We could imagine 3G interfaces with frequently randomized IMEIs and the
use of blinded signatures to pay for service, so that the carrier will
know that someone has paid but not who the device owner is.  (Refilling
a prepaid account with that kind of mechanism needn't be much more
complicated than prepaid refills today, especially when the user tops
up their account at a kiosk with an electronic terminal as opposed to
with an online credit card payment or by buying a scratch-off card.)  I
think this gets us back to the political problem that some governments
have already made the use of these mechanisms _illegal_*.
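
For anyone curious what the blinded-signature idea looks like, here's a toy
textbook RSA (Chaum-style) sketch of mine with absurdly small numbers and no
hashing or padding -- purely illustrative, not how any carrier actually
implements payment:

    # Toy textbook RSA blind signature, only to illustrate "the carrier knows
    # someone paid, but not which token belongs to whom".  Real systems need
    # proper key sizes, hashing, and padding; none of that is shown here.

    # Carrier's RSA key (absurdly small demo primes).
    p, q = 1009, 1013
    n, e = p * q, 65537
    d = pow(e, -1, (p - 1) * (q - 1))

    token = 424242            # the customer's prepaid token (normally a hash)
    r = 12345                 # customer's blinding factor, coprime to n

    blinded = (token * pow(r, e, n)) % n          # customer blinds the token
    blind_sig = pow(blinded, d, n)                # carrier signs without seeing it
    sig = (blind_sig * pow(r, -1, n)) % n         # customer unblinds

    assert pow(sig, e, n) == token % n            # signature verifies on the token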

A pretty common challenge for situations like this is that if a telco
wanted to actively cooperate in order to deliberately know less about
its customers, we might be able to figure out a way to make it work
technically.  But telcos generally don't want to do that and governments
don't want the telcos to do it either.  And this applies to other kinds
of service providers too; there's great research from the academic
cryptography world about privacy-protective ways of providing many
services but today's service providers are mostly reluctant to make use
of this research or other crypto tools to reduce what they know about
users (with a couple of shining exceptions).

Arvind Narayanan has just pushed a two-part paper in _IEEE Security &
Privacy_ about exactly this point:

http://randomwalker.info/publications/crypto-dream-part1.pdf
http://randomwalker.info/publications/crypto-dream-part2.pdf

Narayanan argues that a mis-alignment of incentives frequently occurs
to discourage the use of cryptography to protect privacy (particularly
in the strongest end-to-end sense) and that there is minimal demand for
protecting data against intermediaries and service providers.

(I find this paper extremely depressing, but it does describe actual
events.  If I were writing this paper, I would continue to ask how
we can increase demand for cryptographic privacy mechanisms rather
than declaring defeat.)


* To pick up on Narayanan's argument, even if this kind of service is
  legal and even if carriers thought it was a reasonable service for
  them to offer, we might expect problems with demand for it.  One
  problem for the level of demand for blinded e-cash payments for
  telecommunications services is that if users lose their mobile
  devices and don't have suitable backups, they lose all of their
  prepaid account value (because it existed only in the form of e-cash
  on the devices).  This is different from the status quo where prepaid
  balances can be associated with an account that persists and can be
  claimed by a user even if they lose a particular device.  Methods of
  paying for services that have cash-like privacy properties could be
  unpopular because they expose customers to cash-like risks.  And many
  people now prefer to pay for point-of-sale
  transactions with credit cards despite the major privacy losses
  compared to cash; probably people who regularly accept that trade-off
  would be skeptical that totally anonymous prepaid service accounts are
  a benefit.  I've recently done some research and writing about anonymous
  payments for transportation services and seen that transportation
  agencies expect very few users to prefer unregistered cash-equivalent
  payment methods that are purchased in cash.  That might be partly a
  self-fulfilling prophecy (if the agencies don't promote the idea that
  it's good to pay for transportation in a way that leaves fewer records,
  and don't do more to make this convenient, clearly fewer people will do
  it), but it's also surely based in part on their observations from
  customers' behavior.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [liberationtech] A tool for encrypted laptops

2013-05-30 Thread Seth David Schoen
Tom Ritter writes:

> On 25 March 2013 11:57, Tom Ritter t...@ritter.vg wrote:
> > It the moment it only supports Bitlocker, but support for Truecrypt is
> > coming[0].  \
>
> Due to some internal confusion, this happened a little bit ago, but I
> didn't know about it.  You can now tell it "I'm smarter than you and
> have FDE you don't know about"[0].  This will let it work with
> Truecrypt.
>
> Mac and Linux support are still stalled.  Julian Oliver posted a quick
> script for Linux that emulates some amount of the functionality last
> March, I'm reposting:

Jacob Appelbaum and I have some data sources for doing the whole thing
in the thread at

https://github.com/iSECPartners/yontma/issues/2

I'm not sure how fancy we want to make this.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [liberationtech] Cell phone tracking

2013-05-24 Thread Seth David Schoen
Yosem Companys writes:

> From: Dan Gillmor d...@gillmor.com
>
> Given the vanishingly small likelihood that companies or governments
> will do anything about cell phone tracking, I'm interested in what
> countermeasures we can take individually. The obvious one is to turn
> off GPS except on rare occasions.
>
> I'll be discussing all this in an upcoming book, and in my Guardian
> column soon. So I'd welcome ideas.

As other people have said, GPS isn't necessary for cell phone tracking;
it can be used in tracking, but tracking also works well by
triangulation alone.  The tracking of Malte Spitz

https://www.ted.com/speakers/malte_spitz.html
http://www.zeit.de/digital/datenschutz/2011-03/data-protection-malte-spitz

used this process.

I'm curious whether people in some countries have had success using
wifi-only phones, including to make and receive calls by VoIP.  There
are ways that wifi can be more private in some ways in some situations
compared to the GSM network, but it's also much, much less ubiquitous.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [liberationtech] Encrypted smartphone addressbook/contact list?

2013-05-06 Thread Seth David Schoen
Bernard Tyers - ei8fdb writes:

> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> Hello all,
>
> Has anyone come across an encrypted address book / contact list application
> for smartphone devices?

Note that some (or many) of these don't work very well against a
sophisticated attacker.

http://www.elcomsoft.com/WP/BH-EU-2012-WP.pdf

I'm still working on a longer analysis of this problem based on an
earlier question on this list, but this point is relevant here too. :-)

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [liberationtech] Android Full-Disk Encryption Cracked

2013-04-29 Thread Seth David Schoen
Griffin Boyce writes:

> Hashkill can now determine the master password for Android's full-disk
> encryption scheme.
>
> image showing the process: http://i.imgur.com/bFUf7lR.png
> script: https://github.com/gat3way/hashkill
>
> Thoughts?

It seems like this is just a tool for doing dictionary and
brute force attacks against these passwords, not a class-break
that is inherently able to decrypt every single Android device.

So, if your Android FDE passphrase is long and unpredictable
enough, this tool should still not be able to crack it.

There are a lot of problems about disk encryption on small
mobile devices.  One that was highlighted by Belenko and
Sklyarov at Black Hat EU 2012 is that mobile device CPUs are
relatively slow, so it's difficult to do very large numbers of
iterations of key derivation functions, which would make
brute-force cracking slower.

http://www.elcomsoft.com/WP/BH-EU-2012-WP.pdf
https://en.wikipedia.org/wiki/Key_derivation_function

The more KDF iterations that are used, the slower _both_
unlocking by the legitimate authenticated user and offline
cracking will be.  But if the legitimate user's device has
a slow CPU, the user may not accept the human-perceptible
delays that would result from using a lot of iterations.

This tradeoff is a pretty fundamental problem.  The user
wants to unlock their device using a very short, easy-to-
remember code.  They want the device to be able to unlock
quickly when this code is entered, using information that
can be calculated from the code in a short time on a
comparatively slow mobile CPU.  Then they also want someone
with a very fast cracking device like a desktop GPU not to
be able to brute-force that same code quickly.

Belenko and Sklyarov also observed that some mobile crypto
applications were just not using KDFs at all or were using
them improperly, but I don't know of an indication that
that's true of the official Android FDE.  Another problem
is that, especially if people are using touchscreens, they
may want a very short unlock PIN rather than a long
passphrase, which will inherently favor cracking.  (For
example, if you imagine a system with a 5-digit numeric
PIN, you can quickly conclude that there is no number of
KDF iterations that will be acceptable to the mobile device
user and be a practical deterrent to a brute-force attacker
with even a single desktop GPU, at least for KDFs that can
be implemented efficiently on a GPU.)
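
A back-of-the-envelope sketch of the tradeoff (my own illustration with
PBKDF2; the iteration count and the 5-digit PIN space are just examples, and
real Android FDE parameters and GPU crackers differ):

    # Measure how long one PBKDF2 derivation takes here, then estimate the
    # worst case for a 5-digit PIN (100,000 guesses) on this same hardware.
    import hashlib, os, time

    salt = os.urandom(16)
    iterations = 10_000        # illustrative; raise it and unlocking gets slower too

    start = time.perf_counter()
    hashlib.pbkdf2_hmac("sha256", b"12345", salt, iterations)
    per_guess = time.perf_counter() - start

    print(f"{per_guess:.3f} s per unlock attempt on this machine")
    print(f"~{per_guess * 100_000 / 3600:.1f} hours to exhaust a 5-digit PIN here")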

I don't think this problem is very well appreciated by
mobile device crypto users!

Two ways to address this that come to mind would be using
tamper-resistant hardware (which apparently Apple is doing
for crypto in iOS devices) to store or generate the
decryption keys using cryptographic secrets kept inside
the particular device itself, and finding some way for
the user to somehow input a much higher entropy unlock
password.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [liberationtech] Android Full-Disk Encryption Cracked

2013-04-29 Thread Seth David Schoen
Nathan of Guardian writes:

> Yubikey combined with a short user password is a potential option for the
> second idea, with devices that have USB Host mode:
>
> https://guardianproject.info/2012/01/04/strong-mobile-passwords-with-yubikey-usb-token/

That's pretty awesome, and very creative.

I hope people will pay attention to this sentence in your post:

  By combining the long password from the Yubikey with a short memorized
  version, a certain amount of security is preserved even if the key is
  physically stolen along with your mobile device.

So users shouldn't skip the short memorized password part!  (In
that scenario, the security level is probably reduced to the
length of the user password.  One could imagine a future Yubikey
using NFC in an interactive protocol in a way where this is no
longer true, but maybe tamper-resistant key storage inside phones
is likely to come about sooner.)

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [liberationtech] suggestions for a remote wipe software for Windows?

2013-04-03 Thread Seth David Schoen
Griffin Boyce writes:

>   Well, http://preyproject.com/ would be better for a layperson who doesn't
> have the time/interest to encrypt.  But it's not impossible to disable or
> anything.  And in the meantime the thief would have access to your data.
>  Depends on whether you are more looking to get it back (no guarantees), or
> protect your info (all but guaranteed if encrypted).

I think Prey is a pretty compelling choice for a lot of cases, but looking
briefly at the documentation it seems that their remote wipe functionality
for laptops is currently quite limited.  And that's confirmed by looking at
the secure module in the Prey source code.

https://github.com/prey/prey-bash-client-modules

https://github.com/prey/prey-bash-client-modules/blob/master/secure/platform/windows/functions
https://github.com/prey/prey-bash-client-modules/blob/master/secure/core/functions
https://github.com/prey/prey-bash-client-modules/blob/master/secure/core/run

I've suggested Prey to people before for tracking stolen devices in order to
recover them, but I don't think I could recommend it for remote wipe.  It seems
to mainly use plain rm to delete the contents of a small number of directories,
and to call an API to clear MSIE browser history data.  For many users, this is
a pretty incomplete notion of wipe, and most of the content deleted this way
will be recoverable by forensics.

A further problem that comes to mind is that sending a signal to a phone (that
uses 3G networks) to wipe itself is going to be easier in a lot of cases than
to a laptop (that uses mainly wifi, and maybe not opportunistically).  The
laptop will likely be offline by default if someone removes it from its normal
environment, so it won't hear the wipe signal.  Solutions like Prey for laptops
mainly work because thieves or downstream purchasers may voluntarily connect
stolen laptops to networks to use them without reinstalling them (at least if
the laptops don't require, or seem not to require, a login password!).

Mike Cardwell actually uses a decoy operating system (with Prey) on his laptop
in order to tempt thieves to use it:

https://grepular.com/Protecting_a_Laptop_from_Simple_and_Sophisticated_Attacks

I'm quite impressed with his setup, which took him a great deal of time and
thought.  He relies entirely on encryption to get the equivalent of remote
wiping; his Prey install is there just to increase his chances of finding the
laptop if it's taken by common thieves.

This is some ways away from the original poster's question about remote wiping
a Windows installation.   I guess I want to agree with Eugen Leitl (and Mike
Cardwell) that disk encryption ultimately does that job better, mainly since a
sophisticated or targeted attacker wouldn't connect the laptop to a network
before making a copy of the hard drive.  For Windows users who've been denied
BitLocker by Microsoft's price discrimination, there's TrueCrypt.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
454 Shotwell Street, San Francisco, CA  94110   +1 415 436 9333 x107


Re: [liberationtech] An encryption project

2013-01-28 Thread Seth David Schoen
Cooper Quintin writes:

> Paul,
> If you, as you say, do not have much experience in breaking/testing
> encryption or the details of modern methods, I must assume that you are
> not, in fact a professional cryptographer. (That's okay! Neither am I!)
>  That being the case, I must ask you to PLEASE, PLEASE, PLEASE not
> implement any sort of cryptographic solution yourself. ESPECIALLY if it
> is intended to be used under  circumstances that the senders life may
> depend on it being secure.

It seems even people with quite a bit of expertise often get crypto
wrong in ways that can seriously undermine the resulting system's
security.  There are lots of examples.  Sometimes the mistakes are a
matter of not understanding the state of the art or the purposes for
which a cryptographic operation is meant to be used.  Sometimes
they're about very subtle issues of information leakage that would
have been hard to anticipate; two neat examples are timing attacks
(where the amount of time that it takes for security-related code to
run may reveal information including secret keys and passwords!) and
recent attacks using information leakage from compression (whether
compression succeeds in reducing the length of a message, and by how
much, reveals information about the message's content).
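
Here's a tiny self-contained illustration of the compression leak (my own
sketch; the "secret" page content and the guesses are made up):

    # The compressed length is smaller when a guessed secret actually appears
    # in the message, so ciphertext length alone can confirm guesses (the idea
    # behind the CRIME/BREACH attacks).
    import zlib

    secret_page = b"session_token=swordfish; ...lots of other page content..."

    for guess in (b"session_token=aardvark", b"session_token=swordfish"):
        n = len(zlib.compress(secret_page + guess))
        print(guess.decode(), "->", n, "bytes compressed")
    # The matching guess compresses noticeably smaller because zlib can reuse
    # the repeated substring.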

Application implementers wouldn't have thought that it's bad for
different operations to take slightly different amounts of time
(it turns out that even things like testing whether
submitted_password == correct_password can be bad, because the
amount of time it takes to fail can reveal which character of
the submitted_password was wrong!); nor would they have thought
that it could be bad to compress messages before encrypting them.
But these simple decisions could lead to a complete compromise of
the system.
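
A small sketch of that specific pitfall (mine, in Python; hmac.compare_digest
is the standard-library way to avoid the early-exit comparison):

    # The naive check returns as soon as the first character differs, so the
    # response time leaks how much of the guess was right.  hmac.compare_digest
    # takes time independent of where the mismatch is.
    import hmac

    def naive_check(submitted, correct):
        if len(submitted) != len(correct):
            return False
        for a, b in zip(submitted, correct):
            if a != b:
                return False     # early exit: timing depends on the first wrong byte
        return True

    def constant_time_check(submitted, correct):
        return hmac.compare_digest(submitted, correct)

    print(naive_check(b"hunter1", b"hunter2"))          # False, but "fast-failing"
    print(constant_time_check(b"hunter1", b"hunter2"))  # False, timing-independent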

Another striking example about the need to understand the
properties of one's crypto tools is the simple failure modes
of ECB (traditionally the default block cipher mode!):

https://en.wikipedia.org/wiki/Block_cipher_modes_of_operation#Electronic_codebook_.28ECB.29

Check out the shadow penguin in the encrypted image. :-)

I'm optimistic that systems like NaCl are going to help with some
of this in the long run by providing useful abstractions that have
most of the security properties that application developers would
like, with immunity against many of the potential traps for the
unwary.  I'm not sure we're there yet, though.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
454 Shotwell Street, San Francisco, CA  94110   +1 415 436 9333 x107


Re: [liberationtech] fossjobs - first job platform exclusively for FOSS jobs

2012-11-17 Thread Seth David Schoen
Tianay Pulphus writes:

> What's the story behind the name? What's a foss? Is it a play on boss?

It's Icelandic for waterfall :-þ, but in this case it refers to free
and open source software.

Free and open source software are historically different names for the
same software, but each name is preferred by different people who have
different emphasis.

(A common misconception is that free software must be reciprocally
licensed and that open source software must be permissively licensed.
This misconception comes about partly because people assume that the
two terms must refer to different software, or because people who prefer
to talk about free software often prefer reciprocal licenses, while
people who prefer permissive licenses usually prefer to talk about
open source.  In fact, both terms are properly applied to both kinds
of licenses, and the difference is primarily one of connotation, not
denotation.)

FOSS (as well as FLOSS, adding libre) is an umbrella term that
acknowledges both names without choosing between them.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
454 Shotwell Street, San Francisco, CA  94110   +1 415 436 9333 x107


Re: [liberationtech] Bitly Safety (was Stanford Bitly Enterprise Account)

2012-11-16 Thread Seth David Schoen
Nick Daly writes:

> On Fri, Nov 16, 2012 at 4:41 PM, Griffin Boyce griffinbo...@gmail.com wrote:
> > All URL shorteners have the problem of not being transparent with
> > destination. The risk of this is amplified on places like Twitter,
> > where the shortened version can be copied and pasted numerous times.
> >
> > So I would recommend using a site like unshorten.it (or bit.ly itself)
> > to actually see where a link leads.
>
> Someone should register long.er for stretching links :)

There's no er top-level domain, though you could have

stret.ch
increa.se
maximi.se (Commonwealth English)

and perhaps puns like

magni.fi

A great exercise for my Unix book would be to use regular expressions to
figure out which dictionary words can potentially be registered as domain
names!
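
A rough sketch of that exercise (assuming the usual /usr/share/dict/words and
a hard-coded sample of TLDs, so it's illustrative rather than exhaustive):

    # Find dictionary words whose ending is a TLD, so the rest can be
    # registered as a "domain hack" (stretch -> stret.ch).
    import re

    TLDS = ["ch", "se", "fi", "er", "it", "ly", "me"]
    pattern = re.compile(r"^([a-z]{2,})(" + "|".join(TLDS) + r")$")

    with open("/usr/share/dict/words") as f:
        for word in f:
            m = pattern.match(word.strip().lower())
            if m:
                print(f"{m.group(1)}.{m.group(2)}")   # e.g. stret.ch, increa.se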

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
454 Shotwell Street, San Francisco, CA  94110   +1 415 436 9333 x107


Re: [liberationtech] Bitly Safety (was Stanford Bitly Enterprise Account)

2012-11-16 Thread Seth David Schoen
Parker Higgins writes:

> On 11/16/12 3:03 PM, Seth David Schoen wrote:
> > There's no "er" top-level domain
>
> I understand I'm getting a bit afield, but there is a .er ccTLD, for
> Eritrea:
>
> https://en.wikipedia.org/wiki/.er
>
> Granted, there's no known registry. And you can't get a domain at the
> second level. So you're unlikely to get long.er anyway.

Whoops, I need to actually look at the ccTLD list when making such an
assertion.  Thanks for the correction.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
454 Shotwell Street, San Francisco, CA  94110   +1 415 436 9333 x107
--
Unsubscribe, change to digest, or change password at: 
https://mailman.stanford.edu/mailman/listinfo/liberationtech


Re: [liberationtech] Privacy in Ubuntu 12.10

2012-11-08 Thread Seth David Schoen
Micah Lee writes:

> Before 12.10 the Ubuntu GUI installer only let you set up home directory
> encryption using encryptfs, which is different than full disk
> encryption.

For anyone hoping to read about the details of this technology, you
probably want the (possibly counterintuitive) spelling "eCryptfs".

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
454 Shotwell Street, San Francisco, CA  94110   +1 415 436 9333 x107
--
Unsubscribe, change to digest, or change password at: 
https://mailman.stanford.edu/mailman/listinfo/liberationtech


Re: [liberationtech] Silent Circle to publish source code?

2012-10-11 Thread Seth David Schoen
Nathan writes:

> Like "organic", "open-source" is a term that is easily claimed but
> not often truly fulfilled. Nadim should be given more credit for the
> completely transparent and engaged open-source project he runs, and for
> defending an approach and philosophy that he is completely living up to.

Further to that, I hope people in situations like this won't be sloppy
with the distinction between "open source" and "viewable source code".
Publishing source code gives some of the important benefits of open
source, but not all of them.

"Open source" doesn't just mean access to the source code.
http://opensource.org/osd.html

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
454 Shotwell Street, San Francisco, CA  94110   +1 415 436 9333 x107
--
Unsubscribe, change to digest, or change password at: 
https://mailman.stanford.edu/mailman/listinfo/liberationtech


Re: [liberationtech] Revised Liberationtech Mailing List Guidelines

2012-08-04 Thread Seth David Schoen
Greg Norcie writes:

> This is a good logic, but there is still a problem even if Google scans
> uploads.
>
> Both state and nonstate actors often use zero day vulnerabilities. Since
> a zero day has never been seen before, there is no signature for it in
> any virus database.

This is totally true in general, and of course these zero days have been
used in real attacks, and of course Google can't necessarily recognize
zero-day vulnerabilities.

In the particular case of text documents shared through Google Docs -- as
opposed to Word files hosted for download with some sort of file sharing
site! -- I think malware is a comparatively minor risk.  The reason is that
when you upload a document to Google Docs, Google imports the content of
the document into Google's own internal format.  When you then download a
document from Google Docs, Google is generating _a new document from
scratch_ with the same text and formatting content as the original, but
the result is not the same file that was originally uploaded.

If someone mails you an attachment, or hosts a document file of their own
creation on a web site, your word processor could be compromised if there
are software vulnerabilities that the document exploits, like a buffer
overflow.  And this is also true of, say, a PDF document that you're going
to open in a PDF reader; we know that there have been exploits used in the
wild against PDF readers.

By contrast, if you were to import some Microsoft Word file into Google
Docs and then export the resulting Google Docs document in Microsoft Word
format, what you'd get back would _not_ be the original file or any
modified form of the original file.  Instead, you would get a completely
new Microsoft Word file, generated from scratch by Google, with essentially
the same textual content as the original.  (And if you were to export the
Google Docs document as a PDF, what you'd get would be a PDF that Google
generated from scratch.)

Since these documents are being generated by Google in this way, using
its own internally-developed software, Google will presumably create safe
and valid documents for its users, not ones that contain exploits and
malware.
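
The same "generate a new document from scratch" idea can be sketched
outside Google Docs too.  This is only an illustration of the concept,
not Google's actual pipeline; it assumes the python-docx library and
made-up file names, and it discards all formatting, which Google's
importer of course preserves:

    # Content regeneration sketch: never pass the original file along;
    # extract the text you understand and emit a brand-new file, so
    # byte-level exploit payloads in the upload don't survive.
    # Assumes python-docx (pip install python-docx); names are made up.
    from docx import Document

    def regenerate(untrusted_path, clean_path):
        text = [p.text for p in Document(untrusted_path).paragraphs]
        fresh = Document()            # a new file, built from scratch
        for line in text:
            fresh.add_paragraph(line)
        fresh.save(clean_path)

    regenerate("upload.docx", "regenerated.docx")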



We might still worry that someone could _upload_ a malicious document to
Google in order to attack Google's import process (and perhaps attack the
Google Docs servers in various ways, whether to disable other security
features or access private information), but I presume Google's security
folks have been very cautious about this aspect and Google Docs import
is probably much less vulnerable to malware and exploits than the file
import features in popular desktop word processors like Microsoft Word,
OpenOffice, and LibreOffice.  (Also, attackers can study the binary code
of Microsoft Word -- as well as Microsoft's security patches to it! --
or the source code of OpenOffice and LibreOffice -- as well as their
developers' security patches to them! -- in order to try to find specific
vulnerabilities.  It's harder for attackers to speculate usefully about
what vulnerabilities may exist in Google Docs import functionality because
the attackers probably don't have access to any of the Google Docs code,
whether source or binary.  So even if there are exploitable vulnerabilities
in the way Google Docs parses documents, it will be much harder for
attackers to find and exploit them than it would be for published desktop
software.)

(How do I square this with my observation that Google can't necessarily
recognize vulnerabilities?  I think the main point is that the zero-day
vulnerabilities we're likely to encounter are vulnerabilities in
desktop software.  Google may not be able to detect these, but it may not
be vulnerable to them either!  And with cautious programming, it can also
default to rejecting files that are suspicious in some general ways, even
if it doesn't know exactly what's bad about them.  For instance, Andreas
Bogk gave a talk last year at the CCC Camp about a PDF security scanner
he's been developing which is able to reject several kinds of invalid PDFs
automatically.  Some of those invalid PDFs may be innocent and not contain
any malware or exploits, but Google could still use a scanner like this to
reject them and refuse to import them out of an abundance of caution.)
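
As a toy illustration of that "reject anything that isn't clearly
well-formed" posture -- nothing like Andreas's scanner, which actually
parses the object structure, and with a made-up file name -- you could
refuse to import files that lack even the basic PDF header and trailer
markers:

    # Toy sketch: err on the side of rejecting structurally odd PDFs.
    # A real scanner parses the whole object graph; this checks only
    # the "%PDF-" header and a trailing "%%EOF" marker.
    def looks_like_a_pdf(path):
        with open(path, "rb") as f:
            data = f.read()
        return data.startswith(b"%PDF-") and b"%%EOF" in data[-1024:]

    if not looks_like_a_pdf("upload.pdf"):
        raise ValueError("rejected out of an abundance of caution")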

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
454 Shotwell Street, San Francisco, CA  94110   +1 415 436 9333 x107
___
liberationtech mailing list
liberationtech@lists.stanford.edu

Should you need to change your subscription options, please go to:

https://mailman.stanford.edu/mailman/listinfo/liberationtech

If you would like to receive a daily digest, click yes (once you click above) 
next to would you like to receive list mail batched in a daily digest?

You will need the user name and password you receive from the list moderator in 
monthly reminders.

Re: [liberationtech] Fwd: Re: secure wipe of flash memory

2012-07-15 Thread Seth David Schoen
oli writes:

> take the liberty...

So I think there are a couple of interesting questions about how well you
can clear flash storage by simple overwriting of free space.  Remember
that you have several layers in between your write operation and the
actual flash blocks.  Wei et al. say from experiments that overwriting
free space is _not_ very effective.

https://www.usenix.org/events/fast11/tech/full_papers/Wei.pdf

One issue I wonder about is whether a regular user program can succeed in
filling the whole flash device.  On Linux filesystems in the ext2 series,
there is a notion of blocks reserved for the superuser.  E.g., from
tune2fs(8):

 Set the percentage of the filesystem which may only be allocated
 by privileged processes.  Reserving some number of filesystem
 blocks for use by privileged processes is done to avoid filesystem
 fragmentation, and to allow system daemons, such as syslogd(8), to
 continue to function correctly after non-privileged processes are
 prevented from writing to the filesystem.  Normally, the default
 percentage of reserved blocks is 5%.

Some Android systems may use these filesystems on their flash storage; I
don't know if the same concept exists in other filesystems.  (Due to wear
leveling, I guess you would have a different 5% of the underlying blocks
that you fail to overwrite each time.)
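
For what it's worth, the naive "fill up the free space" approach looks
something like the sketch below (the file name is made up).  Run
unprivileged on an ext filesystem with default settings, it stops about
5% short, and per Wei et al. it never reaches the controller's hidden
spare blocks in any case:

    # Naive free-space overwrite: append random data until the
    # filesystem reports ENOSPC, sync, then delete the file.  As noted
    # above, this can miss root-reserved blocks and the controller's
    # spare area.
    import errno, os

    def fill_free_space(path="overwrite.tmp", chunk=1024 * 1024):
        f = open(path, "wb", buffering=0)  # unbuffered raw writes
        try:
            while True:
                f.write(os.urandom(chunk))
        except OSError as e:
            if e.errno != errno.ENOSPC:    # only "disk full" is expected
                raise
        finally:
            os.fsync(f.fileno())           # push written data to the device
            f.close()
        os.remove(path)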

The other is whether the controller actually keeps some blocks in reserve
relative to those that it reports the existence of to software.  My
understanding is that for magnetic storage, there are more blocks on the
physical disk than are reported to the ATA layer, and the controller uses
the extra blocks for transparent remapping in case of physical errors, and
maybe for other purposes.  If I understand Wei et al. correctly, they found
this issue was even _more_ pronounced on flash devices and is a major reason
that overwriting free space is not so effective.  They say that "[t]he SSDs
we tested contain between 6 and 25% more physical flash storage than they
advertise as their logical capacity."

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
454 Shotwell Street, San Francisco, CA  94110   +1 415 436 9333 x107
___
liberationtech mailing list
liberationtech@lists.stanford.edu

Should you need to change your subscription options, please go to:

https://mailman.stanford.edu/mailman/listinfo/liberationtech

If you would like to receive a daily digest, click yes (once you click above) 
next to would you like to receive list mail batched in a daily digest?

You will need the user name and password you receive from the list moderator in 
monthly reminders. You may ask for a reminder here: 
https://mailman.stanford.edu/mailman/listinfo/liberationtech

Should you need immediate assistance, please contact the list moderator.

Please don't forget to follow us on http://twitter.com/#!/Liberationtech