Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-03 Thread Ryan Lane
On Fri, Aug 2, 2013 at 7:23 PM, Anthony wikim...@inbox.org wrote:

 On Fri, Aug 2, 2013 at 10:07 PM, Anthony wikim...@inbox.org wrote:

 
  Anthony wrote:
  
   How much padding is already inherent in HTTPS?
 
  None, which is why Ryan's Google Maps fingerprinting example works.
 
 
  Citation needed.
 

 Also please address
 https://en.wikipedia.org/wiki/Block_cipher_modes_of_operation#Padding

 It seems that the ciphers which run in CBC mode, at least, are padded.
  Wikipedia currently seems to be set to use RC4 128.  I'm not sure what, if
 any, padding is used by that cipher.  But presumably Wikipedia will switch
 to a better cipher if Wikimedia cares about security.


We currently have both RC4 and AES ciphers in our list, but RC4 is listed
first and a server-side preference order is enforced to combat BEAST. TLS
1.1/1.2 are enabled, and I'll be adding the GCM ciphers to the beginning of
the list either during Wikimania or as soon as I get back.
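For reference, a server-side preference of that sort might look as follows in Apache's mod_ssl. The directives are real, but this exact cipher string is only an illustrative sketch, not the configuration actually deployed:

```apache
SSLProtocol all -SSLv2
# Enforce the server's cipher ordering rather than the client's:
SSLHonorCipherOrder on
# RC4 first as a BEAST mitigation for TLS 1.0 clients; GCM suites would be
# prepended to this list once TLS 1.2 clients are common.
SSLCipherSuite RC4-SHA:AES128-SHA:AES256-SHA:HIGH:!aNULL:!MD5
```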

- Ryan
___
Wikimedia-l mailing list
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 
mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-03 Thread Anthony
On Sat, Aug 3, 2013 at 4:19 AM, Ryan Lane rl...@wikimedia.org wrote:

 On Fri, Aug 2, 2013 at 7:23 PM, Anthony wikim...@inbox.org wrote:
  It seems that the ciphers which run in CBC mode, at least, are padded.
   Wikipedia currently seems to be set to use RC4 128.  I'm not sure what,
 if
  any, padding is used by that cipher.  But presumably Wikipedia will
 switch
  to a better cipher if Wikimedia cares about security.
 

 We currently have both RC4 and AES ciphers in our list, but RC4 is listed
 first and a server-side preference order is enforced to combat BEAST. TLS
 1.1/1.2 are enabled, and I'll be adding the GCM ciphers to the beginning of
 the list either during Wikimania or as soon as I get back.


Rereading that it looks like I might have implied that Wikimedia didn't
care about security.  That was absolutely not my intended implication.
 Sorry about that.

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-03 Thread Tyler Romeo
On Sat, Aug 3, 2013 at 4:19 AM, Ryan Lane rl...@wikimedia.org wrote:

 We currently have both RC4 and AES ciphers in our list, but RC4 is listed
 first and a server-side preference order is enforced to combat BEAST. TLS
 1.1/1.2 are enabled, and I'll be adding the GCM ciphers to the beginning of
 the list either during Wikimania or as soon as I get back.


If possible, could a quick announcement be made (either here or on wikitech
or on bug 52496), when we start supporting GCM? Much appreciated.

--
Tyler Romeo
Stevens Institute of Technology, Class of 2016
Major in Computer Science
www.whizkidztech.com | tylerro...@gmail.com

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-02 Thread George Herbert



On Aug 1, 2013, at 10:07 PM, Ryan Lane rl...@wikimedia.org wrote:

 Also,
 our resources are delivered from a number of urls (upload, bits, text)
 making it easier to identify resources. Even with padding you can take the
 relative size of resources being delivered, and the order of those sizes
 and get a pretty good idea of the article being viewed. If there's enough
 data you may be able to identify multiple articles and see if the
 subsequent article is a link from the previous article, making guesses more
 accurate. It only takes a single accurate guess for an edit to identify an
 editor and see their entire edit history.
 
 Proper support of pipelining in browsers or multiplexing in protocols like
 SPDY would help this situation. There's probably a number of things we can
 do to improve the situation without pipelining or newer protocols, and
 we'll likely put some effort into this front. I think this takes priority
 over PFS as PFS isn't helpful if decryption isn't necessary to track
 browsing habits.


This needs some proper crypto expert vetting, but...

It would be trivial (both in effort and impact on customer bandwidth) to pad 
everything to a 1k boundary on https transmission once we get there.  A 
variable length non-significant header field can be used.  Forcing such size 
counts into very large bins will degrade fingerprinting significantly.

It would also not be much more effort or customer impact to pad to the next 
larger 1k size for a random large fraction of transmissions.  One could imagine 
a user setting where one could opt in or out of that, for example, and perhaps 
a set of relative inflation scheme sizes one could choose from (10% inflated, 
25% inflated, 50%, 50% plus 10% get 1-5 more k of padding, ...).
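A minimal sketch of the first scheme, in Python; the names here are hypothetical, and real padding would be emitted by the server itself, e.g. as a variable-length, non-significant header the client ignores:

```python
import secrets

BIN = 1024  # pad every HTTPS response up to the next 1 KiB boundary

def pad_to_bin(body: bytes, bin_size: int = BIN) -> bytes:
    """Append non-significant filler so len(result) is a multiple of
    bin_size, forcing transfer sizes into coarse 1 KiB bins."""
    shortfall = (-len(body)) % bin_size
    return body + secrets.token_bytes(shortfall)

padded = pad_to_bin(b"x" * 2500)
assert len(padded) == 3072  # 2500 bytes rounds up to the next 1 KiB boundary
```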

Even the slightest of these options (under HTTPS everywhere) starts to give 
plausible deniability to someone's browsing; the greater ones would make 
fingerprinting quite painful, though running a statistical exercise on such 
options, to see how much harder each would make it, seems useful for 
understanding the effects...

The question is, what is the point of this?  Provide very strong user 
obfuscation?  Provide at least minimal individual evidentiary obfuscation from 
the level of what a US court (for example) might consider scientifically 
reliable, to block use of that history in trials (even if educated guesses 
still might be made by law enforcement as to the articles)?

Countermeasures are responses to attain specific goals.  What are the goals 
people care about for such a program, and what is the Foundation willing to 
consider worth supporting with bandwidth $$ or programmer time?  How do we come 
up with a list of possible goals and prioritize amongst them, in both a 
technical and a policy/goals sense?

I believe that PFS will come out higher here, as its cost is really only CPU 
crunchies and already-existent software settings to choose from, and its 
benefits to long-term total obscurability are significant if done right.

No quantity of countermeasures beats inside info, and out-of-band compromise of 
our main keys ends up being attractive enough as the only logical attack once 
we start down this road at all past HTTPS-everywhere.  One-time key compromise 
is far more likely than realtime compromise of PFS keys as they rotate, though 
even that is possible given sufficiently motivated, successful stealthy 
subversion.  The credible ability, in the end, to be confident that's not 
happening is arguably the long-term ceiling for how high we can realistically 
go with countermeasures, and its primary limits are operational security and 
intrusion detection rather than in-band behavior.

At some point the ops team would need a security team, an IDS team, and a 
counterintelligence team to watch the other teams, and I don't know if the 
Foundation cares that much or would find operating that way to be a more 
comfortable moral and practical stance...


George William Herbert
Sent from my iPhone



Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-02 Thread James Salsman
George William Herbert wrote:
...
 It would also not be much more effort or customer impact
 to pad to the next larger 1k size for a random large fraction
 of transmissions.

Padding each transmission with a random number of bytes, up to say 50
or 100, might provide a greater defense against fingerprinting while
saving massive amounts of bandwidth.
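One way to sanity-check a small random pad like this: an observer who sees the same page fetched repeatedly can average the randomness away. A rough simulation, with a hypothetical page size:

```python
import random

random.seed(0)
TRUE_SIZE = 30_356   # hypothetical article size near the thread's median
MAX_PAD = 100        # random padding of 0..100 bytes added per response

def observed(n: int) -> float:
    """Mean observed transfer size over n fetches of the same page."""
    return sum(TRUE_SIZE + random.randint(0, MAX_PAD) for _ in range(n)) / n

# Over many observations the mean converges to TRUE_SIZE + MAX_PAD/2, so
# uniform random padding shifts the size fingerprint rather than hiding it.
estimate = observed(1000) - MAX_PAD / 2
assert abs(estimate - TRUE_SIZE) < 10
```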

... At some point the ops team would need a security team,
 an IDS team, and a counterintelligence team to watch the
 other teams, and I don't know if the Foundation cares that
 much or would find operating that way to be a more
 comfortable moral and practical stance...

I'm absolutely sure that they do care enough to get it right, but I
think that approach might be overkill. Just one or two cryptology
experts to make the transition to HTTPS, PFS, and whatever padding is
prudent would really help. I also hope that, if there is an effort to
spread disinformation about the value of such techniques, the
Foundation might consider joining with e.g. the EFF to help fight it.
I think a single cryptology consultant would likely be able to make
great progress on both fronts. Getting cryptography right isn't so much
a time-intensive task as one that is sensitive to experience and
training.

Setting up and monitoring with ongoing auditing can often be
automated, but does require the continued attention of at least one
highly skilled expert, and preferably more than one in case the first
one gets hit by a bus.


Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-02 Thread Anthony
On Fri, Aug 2, 2013 at 1:32 PM, James Salsman jsals...@gmail.com wrote:

 George William Herbert wrote:
 ...
  It would also not be much more effort or customer impact
  to pad to the next larger 1k size for a random large fraction
  of transmissions.

 Padding each transmission with a random number of bytes, up to say 50
 or 100, might provide a greater defense against fingerprinting while
 saving massive amounts of bandwidth.


Or it might provide virtually no defense and not save any bandwidth.

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-02 Thread Marc A. Pelletier
On 08/02/2013 01:32 PM, James Salsman wrote:
 Padding each transmission with a random number of bytes, up to say 50
 or 100, might provide a greater defense against fingerprinting while
 saving massive amounts of bandwidth.

It would slightly change the algorithm used to make the fingerprint, not
make fingerprinting significantly harder; and you'd want some fuzz in
the match process anyway, since you wouldn't necessarily want to have to
fiddle with your database at every edit.

The combination of "at least this size" with "at least that many
secondary documents of at least those sizes in that order" is probably
sufficient to narrow the match to a very tiny minority of articles.
You'd also need to randomize delays, shuffle load order, load blinds,
etc.  A minor random increase in document size wouldn't even slow
down the process.
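That fuzz in the match process might look like the following sketch, where both the article names and the resource sizes are invented:

```python
# Hypothetical fingerprint database: total page size plus the sizes of
# secondary resources (images, scripts) in load order.
FINGERPRINTS = {
    "Article_A": [30356, 4120, 9800],
    "Article_B": [30390, 4120, 2250],
    "Article_C": [51200, 880, 9800],
}

def matches(observed: list[int], reference: list[int], fuzz: int = 64) -> bool:
    """Accept a candidate when every transfer size agrees within +/- fuzz
    bytes, so minor edits or small random padding don't break the match."""
    return len(observed) == len(reference) and all(
        abs(o - r) <= fuzz for o, r in zip(observed, reference)
    )

def identify(observed: list[int]) -> list[str]:
    return [name for name, ref in FINGERPRINTS.items() if matches(observed, ref)]

assert identify([30360, 4100, 9790]) == ["Article_A"]
```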

-- Marc



Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-02 Thread Anthony
How much padding is already inherent in HTTPS?  Does the protocol pad to
the size of the blocks in the block cipher?

Seems to me that any amount of padding is going to give little bang for the
buck, at least without using some sort of pipelining.  You could probably
do quite a bit if you redesigned MediaWiki from scratch using all those
newfangled asynchronous JavaScript techniques, but that's not exactly an
easy task.  :)


On Fri, Aug 2, 2013 at 3:45 PM, Marc A. Pelletier m...@uberbox.org wrote:

 On 08/02/2013 01:32 PM, James Salsman wrote:
  Padding each transmission with a random number of bytes, up to say 50
  or 100, might provide a greater defense against fingerprinting while
  saving massive amounts of bandwidth.

 It would slightly change the algorithm used to make the fingerprint, not
 make fingerprinting significantly harder; and you'd want some fuzz in
 the match process anyway, since you wouldn't necessarily want to have to
 fiddle with your database at every edit.

 The combination of "at least this size" with "at least that many
 secondary documents of at least those sizes in that order" is probably
 sufficient to narrow the match to a very tiny minority of articles.
 You'd also need to randomize delays, shuffle load order, load blinds,
 etc.  A minor random increase in document size wouldn't even slow
 down the process.

 -- Marc




Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-02 Thread James Salsman
Marc A. Pelletier wrote:
...
 A minor random increase in document size wouldn't even slow
 down [fingerprinting.]

That's absolutely false. The last time I measured the sizes of all
9,625 vital articles, there was only one at the median length of
30,356 bytes but four articles up to 50 bytes larger. Scale that up to
4,300,000 articles, and are you suggesting anyone is seriously going
to try fingerprinting secondary characteristics for buckets of 560
articles? It would not only slow them down, it would make their false
positive rate useless.
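The size of such buckets can be estimated directly from a list of article sizes, if one were available; the sizes below are invented for illustration:

```python
from collections import Counter

def bucket_counts(sizes, bin_size):
    """Round each size up to its padded bucket and count collisions;
    larger buckets mean a larger anonymity set per observed transfer."""
    return Counter((s + bin_size - 1) // bin_size * bin_size for s in sizes)

# Invented sizes: with byte granularity every article is unique, while
# 1 KiB bins collapse several articles into one bucket.
sizes = [30356, 30401, 30399, 29990, 31020]
assert max(bucket_counts(sizes, 1).values()) == 1
assert bucket_counts(sizes, 1024)[30720] == 4
```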

This is why we need cryptography experts instead of laypeople making
probabilistic inferences on Boolean predicates.

Marc, I note that you have recommended not keeping the Perl CPAN
modules up to date on Wikimedia Labs:
http://www.mediawiki.org/w/index.php?title=Wikimedia_Labs/Tool_Labs/Needed_Toolserver_features&diff=678902&oldid=678746
saying that out-of-date packages are "the best tested", when in fact
almost all CPAN packages have their own unit tests. That sort of
reasoning is certain to allow known security vulnerabilities to
persist when they could easily be avoided.

Anthony wrote:

 How much padding is already inherent in HTTPS?

None, which is why Ryan's Google Maps fingerprinting example works.

... Seems to me that any amount of padding is going to give little
 bang for the buck

Again, can we please procure expert opinions instead of relying on the
existing pool of volunteer and staff opinions, especially when there
is so much prevalent FUD discouraging the kinds of encryption which
would most likely strengthen privacy?


Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-02 Thread Matthew Flaschen
On 08/02/2013 05:06 PM, James Salsman wrote:
 Marc, I note that you have recommended not keeping the Perl CPAN
 modules up to date on Wikimedia Labs:
 http://www.mediawiki.org/w/index.php?title=Wikimedia_Labs/Tool_Labs/Needed_Toolserver_features&diff=678902&oldid=678746
 saying that out-of-date packages are "the best tested", when in fact
 almost all CPAN packages have their own unit tests. That sort of
 reasoning is certain to allow known security vulnerabilities to
 persist when they could easily be avoided.

Besides being from a few months ago, and unrelated to this conversation,
I think that's a mis-characterization of what he said.

He said in general he would lean towards keeping the distribution's
versions since "those are the better tested ones", but noted it should be
looked at on a package-by-package basis, and that "there may well be
good reasons to bump up to a more recent version" (a security
vulnerability that the distro isn't fixing rapidly enough would be such
a reason).

It seems from the context that "better tested" meant something like "people
are using this in practice in real environments", not only automated
testing.

Matt Flaschen


Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-02 Thread Marc A. Pelletier
On 08/02/2013 05:50 PM, Matthew Flaschen wrote:
 It seems from the context that "better tested" meant something like "people
 are using this in practice in real environments", not only automated
 testing.

And, indeed, given the constraints and objectives of the Tool Labs
(i.e.: no secrecy, all open source and data, high reliability), the more
important concern is "tested to be robust"; I'd deviate from
distribution packaging in a case where a security issue could lead to
escalation, but concerns about data leaks are not an issue.

And whilst I am not a cryptography expert (depending, I suppose, on how
you define "expert"), I happen to be very well versed in security protocol
design and zero-information analysis (but lack the math acumen for
cryptography proper, so I have to trust the Blums and Shamirs of this
world at their word).

For what concerns us here in traffic analysis, TLS is almost entirely
worthless *on its own*.  It is a necessary step, and has a great number
of /other/ benefits that justify its deployment without having anything
to do with the NSA's snooping.  I was not making an argument against it.

What I /am/ saying, OTOH, is that random padding without (at least)
pipelining and placards *is* worthless to protect against traffic
analysis since any reliable method to do it would be necessarily robust
against deviation in size.  Given that it has a cost to implement and
maintain, and consumes resources, it would be counterproductive to do
that.  It would give false reassurance of higher security without
actually bringing any security benefit.  I.e.: theatre.

-- Marc



Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-02 Thread James Salsman
... random padding without (at least) pipelining and
 placards *is* worthless to protect against traffic analysis

No, that is not true, and
http://www.ieee-security.org/TC/SP2012/papers/4681a332.pdf
explains why. Padding makes it difficult but not impossible to distinguish
between two HTTPS destinations. 4,300,000 destinations is right out.

 since any reliable method to do it would be necessarily robust
 against deviation in size

That's like saying any reliable method to solve satisfiability in
polynomial time would be necessarily robust against variations in the
number of terms per expression. It's not even wrong.

When is the Foundation going to obtain the expertise to protect readers
living under regimes which completely forbid HTTPS access to Wikipedia,
like China? I suppose I'd better put that bug about steganography for the
surveillance triggers from TOM-Skype in Bugzilla. I wish that could have
happened before everyone goes to Hong Kong.

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-02 Thread Marc A. Pelletier
On 08/02/2013 08:15 PM, James Salsman wrote:
 No, that is not true, and
 http://www.ieee-security.org/TC/SP2012/papers/4681a332.pdf
 explains why. Padding makes it difficult but not impossible to distinguish
 between two HTTPS destinations. 4,300,000 destinations is right out.

... have you actually /read/ that paper? Not only does it discuss how
naive countermeasures like you suggest aren't even able to protect
against identification at that coarse level, they are presuming much
*less* available data to make a determination than what is readily
available from visiting /one/ article (let alone what extra information
you can extract from one or two consecutive articles because of the
correlation provided by the links).

Traffic analysis is a hard attack to protect against, and just throwing
random guesses at what makes it harder is not useful (and yes, padding
is just a random guess that is /well known/ in the literature not to
help against TA despite its benefits in certain kinds of known-plaintext
and feedback ciphers).

I recommend you read ''Secure Transaction Protocol Analysis: Models and
Applications'', by Chen et al (ISBN 9783540850731).  It's already a
little out of date and a bit superficial, but will give you a good basic
working knowledge of the problem set and some viable approaches to the
subject.

-- Marc



Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-02 Thread James Salsman
Marc A. Pelletier wrote:
...
 http://www.ieee-security.org/TC/SP2012/papers/4681a332.pdf
...
 have you actually /read/ that paper?

Of course I have. Have you read the conclusions at the bottom right of page
344? What kind of an adversary trying to infer our readers' article
selections is going to be able to use accuracy 10% better than a coin flip?
The National Pointless Trial Attorney's Employment Security Agency?

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-02 Thread Anthony
 Anthony wrote:
 
  How much padding is already inherent in HTTPS?

 None, which is why Ryan's Google Maps fingerprinting example works.


Citation needed.


 ... Seems to me that any amount of padding is going to give little
  bang for the buck

 Again, can we please procure expert opinions instead of relying on the
 existing pool of volunteer and staff opinions, especially when there
 is so much prevalent FUD discouraging the kinds of encryption which
 would most likely strengthen privacy?


Feel free.  But don't talk about what is most likely if you're not
interested in being told that you're wrong.

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-02 Thread Anthony
On Fri, Aug 2, 2013 at 10:07 PM, Anthony wikim...@inbox.org wrote:


 Anthony wrote:
 
  How much padding is already inherent in HTTPS?

 None, which is why Ryan's Google Maps fingerprinting example works.


 Citation needed.


Also please address
https://en.wikipedia.org/wiki/Block_cipher_modes_of_operation#Padding

It seems that the ciphers which run in CBC mode, at least, are padded.
 Wikipedia currently seems to be set to use RC4 128.  I'm not sure what, if
any, padding is used by that cipher.  But presumably Wikipedia will switch
to a better cipher if Wikimedia cares about security.

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-02 Thread James Salsman
 please address
https://en.wikipedia.org/wiki/Block_cipher_modes_of_operation#Padding

Sure. As soon as someone creates
http://en.wikipedia.org/wiki/Sunset_Shimmer so I can use an appropriate
example.

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-02 Thread James Salsman
Anthony, padding in this context means adding null or random bytes to the
end of encrypted TCP streams in order to obscure their true length. The
process of adding padding is entirely independent of the choice of
underlying cipher.

In this case, however, we have been discussing perfect forward secrecy,
which does depend on the particular cipher suite. ECDHE-RSA-RC4-SHA is an
example of a cipher suite and TLS key-exchange choice that provides PFS
and is widely supported by Apache.

The English Wikipedia articles on these subjects are all mostly
start-class, so please try Google, Google Scholar, and WP:RX for more
information.

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-02 Thread Anthony
On Fri, Aug 2, 2013 at 11:09 PM, James Salsman jsals...@gmail.com wrote:

 Anthony, padding in this context means adding null or random bytes to the
 end of encrypted TCP streams in order to obscure their true length. The
 process of adding padding is entirely independent of the choice of
 underlying cipher.


My point is that if the stream is encrypted using a block cipher (at least,
in CBC mode), then it's already padded to the block size of the cipher.

That's the more complete answer to my question of How much padding is
already inherent in HTTPS?  HTTPS itself does not have any inherent
padding, but when used with certain block ciphers, it does.
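For concreteness, here is a simplified PKCS#7-style sketch of that block padding; TLS's actual CBC padding differs in byte values but has the same block granularity:

```python
BLOCK = 16  # AES block size in bytes (128 bits)

def cbc_pad(plaintext: bytes, block: int = BLOCK) -> bytes:
    """Append 1..block bytes, each holding the pad length, so the total
    length becomes a multiple of the block size (PKCS#7 style)."""
    n = block - len(plaintext) % block
    return plaintext + bytes([n]) * n

def cbc_unpad(padded: bytes) -> bytes:
    return padded[: -padded[-1]]

msg = b"GET /wiki/Example"   # 17 bytes
padded = cbc_pad(msg)
assert len(padded) == 32     # rounded up to the next 16-byte block
assert cbc_unpad(padded) == msg
# Note: an observer still learns the plaintext length to within one block,
# which is the limit of the "inherent padding" being discussed.
```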

By the way, for most hours it's around 2.1-2.3 million, not 4.3 million.
 Wikimedia has been kind enough to give us a list of which pages are viewed
each hour of the day, along with the size of each page:
http://dumps.wikimedia.org/other/pagecounts-raw/

In this case, however, we have been discussing perfect forward secrecy,
 which does depend on the particular cipher suite. ECDHE-RSA-RC4-SHA is an
 example of a cipher suite and TLS key-exchange choice that provides PFS
 and is widely supported by Apache.


PFS is the method of key exchange.  You can use it with various different
ciphers.  From what I'm reading it can be used with AES and CBC, which
would be a block cipher which pads to 128 or 256 bytes.

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-02 Thread Anthony
On Fri, Aug 2, 2013 at 11:33 PM, Anthony wikim...@inbox.org wrote:

 AES and CBC, which would be a block cipher which pads to 128 or 256 bytes.


I mean bits, of course.

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-01 Thread Ryan Lane
On Thu, Aug 1, 2013 at 1:33 PM, James Salsman jsals...@gmail.com wrote:

 With the NSA revelations over the past months, there has been some very
 questionable information starting to circulate suggesting that trying to
 implement perfect forward secrecy for https web traffic isn't worth the
 effort. I am not sure of the provenance of these reports, and I would like
 to see a much more thorough debate on their accuracy or lack thereof. Here
 is an example:

 http://tonyarcieri.com/imperfect-forward-secrecy-the-coming-cryptocalypse

 As my IETF RFC coauthor Harald Alvestrand told me: "The stuff about 'have
 to transmit the session key in the clear' is completely bogus, of course.
 That's what Diffie-Hellman is all about."

 Ryan Lane tweeted yesterday: It's possible to determine what you've been
 viewing even with PFS. And no, padding won't help. And he wrote on today's
 Foundation blog post, Enabling perfect forward secrecy is only useful if
 we also eliminate the threat of traffic analysis of HTTPS, which can be
 used to detect a user’s browsing activity, even when using HTTPS, citing
 http://blog.ioactive.com/2012/02/ssl-traffic-analysis-on-google-maps.html

 It is not at all clear to me that discussion pertains to PFS or Wikimedia
 traffic in any way.

 I strongly suggest that the Foundation contract with well-known independent
 reputable cryptography experts to resolve these questions. Tracking and
 correcting misinformed advice, perhaps in cooperation with the EFF, is just
 as important.


Well, my post was reviewed by quite a number of tech staff and no one
rebutted my claim.

Assuming traffic analysis can be used to determine your browsing habits as
they are occurring (which is likely not terribly hard for Wikipedia) then
there's no point in forward secrecy because there's no point in decrypting
the traffic. It would protect passwords, but people should be changing
their passwords occasionally anyway, right?

Using traffic analysis, it's likely also possible to correlate edits with
users, based on the timings of requests and the public data available
for revisions.

I'm not saying that PFS is worthless, but I am saying that implementing PFS
without first solving the issue of timing and traffic-analysis
vulnerabilities is a waste of our servers' resources.

- Ryan

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-01 Thread James Salsman
Ryan Lane wrote:
...
 Assuming traffic analysis can be used to determine your browsing
 habits as they are occurring (which is likely not terribly hard for Wikipedia)

The Google Maps example you linked to works by building a huge
database of the exact byte sizes of satellite image tiles. Are you
suggesting that we could fingerprint articles by their sizes and/or
the sizes of the images they load?

But if so, in your tweet you said padding wouldn't help. But padding
would completely obliterate that size information, wouldn't it?


Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-01 Thread Ryan Lane
On Thursday, August 1, 2013, James Salsman wrote:

 Ryan Lane wrote:
 ...
  Assuming traffic analysis can be used to determine your browsing
  habits as they are occurring (which is likely not terribly hard for
 Wikipedia)

 The Google Maps example you linked to works by building a huge
 database of the exact byte sizes of satellite image tiles. Are you
 suggesting that we could fingerprint articles by their sizes and/or
 the sizes of the images they load?


Of course. They can easily crawl us, and we provide everything for
download. Unlike sites like Facebook or Google, our content is delivered
exactly the same to nearly every user.



 But if so, in your tweet you said padding wouldn't help. But padding
 would completely obliterate that size information, wouldn't it?


Only Opera has pipelining enabled, so resource requests are serial. Also,
our resources are delivered from a number of urls (upload, bits, text)
making it easier to identify resources. Even with padding you can take the
relative size of resources being delivered, and the order of those sizes
and get a pretty good idea of the article being viewed. If there's enough
data you may be able to identify multiple articles and see if the
subsequent article is a link from the previous article, making guesses more
accurate. It only takes a single accurate guess for an edit to identify an
editor and see their entire edit history.
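A sketch of that observation, with invented hosts and sizes: even after padding each transfer to a 1 KiB bin, the ordered per-host size sequence can stay distinctive.

```python
PAD = 1024
pad = lambda n: -(-n // PAD) * PAD  # round up to the next 1 KiB bin

# Hypothetical articles and the (host, size) pairs of their serial requests.
ARTICLES = {
    "Article_A": [("text", 30356), ("upload", 204811), ("bits", 31007)],
    "Article_B": [("text", 30390), ("upload", 99250), ("bits", 31007)],
}

def padded_signature(resources):
    """The sequence an eavesdropper sees: request order is preserved
    (no pipelining), and each size is coarsened to its padded bin."""
    return tuple((host, pad(size)) for host, size in resources)

sigs = {name: padded_signature(r) for name, r in ARTICLES.items()}
# The two articles remain distinguishable after padding, because their
# large image transfers fall into different bins.
assert sigs["Article_A"] != sigs["Article_B"]
```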

Proper support of pipelining in browsers or multiplexing in protocols like
SPDY would help this situation. There's probably a number of things we can
do to improve the situation without pipelining or newer protocols, and
we'll likely put some effort into this front. I think this takes priority
over PFS as PFS isn't helpful if decryption isn't necessary to track
browsing habits.

Of course the highest priority is simply to enable HTTPS by default, as it
forces the use of traffic analysis or decryption, which is likely a high
enough bar to hinder tracking efforts for a while.

- Ryan