Re: [Wikimedia-l] Why the WP will never be a real encyclopaedia

2013-08-02 Thread Peter Southwood
Rui, his point is valid. You have a valid point, but use an invalid argument
to support it.

Cheers,
Peter
- Original Message - 
From: Rui Correia <correia@gmail.com>

To: Wikimedia Mailing List <wikimedia-l@lists.wikimedia.org>
Sent: Thursday, August 01, 2013 11:19 PM
Subject: Re: [Wikimedia-l] Why the WP will never be a real encyclopaedia



Asaf

So you mostly agree with me, but prefer to come out knee-jerking first and
only after that show that you somehow agree.

The elephant in the room is so big that there isn't even enough room to
breathe properly and get enough oxygen to our brains.

Rui

On 1 August 2013 23:10, Asaf Bartov <abar...@wikimedia.org> wrote:

Your disqualification of Wikipedia from being called an encyclopedia is, of
course, equally (indeed, more) applicable to _all other encyclopedias,
ever_.  It is therefore incumbent on you either to agree that there has
never yet been an encyclopedia, or that your bar for what constitutes an
encyclopedia is not a useful one.

We all agree the Khoi, and African topics in general (but also Vietnamese,
and Guatemalan, and Albanian, and... [1]) are underrepresented in the
volunteer-built encyclopedia we all cherish.

What _would_ be useful are realistic ideas about how to address this
underrepresentation.

   A.

[1] Two years ago, I spent 5 minutes preparing a presentation that makes
this point, when someone suggested that the English Wikipedia is... kinda
done?  It's at http://prezi.com/szjdvdbtl0j_/is-wikipedia-done/


On Thu, Aug 1, 2013 at 1:22 PM, Rui Correia <correia@gmail.com> wrote:


 Dear Colleagues at the Foundation

 I just came across an article called "White Africans of European
 ancestry". What is that even supposed to mean? Who else would white
 people be, if not of European ancestry? What other white people (yes,
 WP has a definition of "white people") could these be? Especially as it
 already says on the talk page that Arabs don't count.

 When we have 'white people' creating every conceivable article about
 'white people', but we have no 'Khoi' people writing about 'Khoi
 people', then we can't call the WP an encyclopaedia. But then the rules
 do say - somewhere - that "just because ..." And those "just because"
 rules are all over the place - you can't use what was done in one case
 to justify another similar case, because someone is bound to throw a
 "just because" rule at you. But the "just because ..." rule applies
 only when it is convenient - the corollary of the "just because ..." is
 "I know more rules and tricks than you and I will win this"/"I will not
 allow you to have your way even if I have to break all the rules and
 make new ones as I go along".

 So, just because there isn't an article about Khoi people living in
 Denmark is no reason not to have an article about "White Europeans of
 European descent living in Patagonia" or "White Europeans of European
 descent living in Timbuktu". We have allowed ourselves to fall victim
 to the digital divide - the Khoi don't have computers and internet,
 white Europeans do.

 That is not an encyclopaedia.

 Why don't we have a page on "Black Americans of African ancestry"?
 Or "Black Europeans of African ancestry"? Strangely enough, type "Black
 African" and you get redirected to "Black people", BUT the redirect
 actually takes you all the way down to Africa - yes, the article on
 Black people does not start with Africa, but with the United States,
 then Brazil ...

 Like I said, when we have 'white people' creating every conceivable
 article about 'white people', but we have no 'Khoi' people writing
 about 'Khoi people', ...

 The same goes for the so-called Biographies of Living People. I had my
 first clash on WP on the issue of the dual nationality of Nelly
 Furtado. Two hundred million people see her as Portuguese; three - yes,
 3 - editors disagree and BRAG they will NEVER ALLOW it. The rationale
 changes, as can be seen from the talk pages and archives. They go as
 far as 'challenging' editors to show that NF sees herself as
 Portuguese, and then dismiss all the evidence as not good enough - even
 Nelly HERSELF saying she is PORTUGUESE was thrown out! Why? Obvious!
 She doesn't count, she is not a NEUTRAL source!!! We have become a
 joke!

 How about being constructive?

 If we can come up with every conceivable script in the world, why has
 nobody come up with a script for controversial articles that would
 appear on the edit page - like the script that says the article is
 protected - ALERTING unsuspecting editors to the fact that said article
 is controversial for x and y reasons, and that if the edit the editor
 is about to make falls under that theme, to please first read the talk
 page, with a direct link ALSO to an explanation of BLP and the issue of
 ethnic background/present nationality. It would save lots of wasted
 time and effort, and the three editors who spend sleepless nights
 reverting the article might actually do something constructive ...

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-02 Thread George Herbert



On Aug 1, 2013, at 10:07 PM, Ryan Lane <rl...@wikimedia.org> wrote:

 Also,
 our resources are delivered from a number of URLs (upload, bits, text),
 making it easier to identify resources. Even with padding you can take
 the relative size of resources being delivered, and the order of those
 sizes, and get a pretty good idea of the article being viewed. If there's
 enough data you may be able to identify multiple articles and see if the
 subsequent article is a link from the previous article, making guesses
 more accurate. It only takes a single accurate guess for an edit to
 identify an editor and see their entire edit history.

 Proper support of pipelining in browsers or multiplexing in protocols
 like SPDY would help this situation. There are probably a number of
 things we can do to improve the situation without pipelining or newer
 protocols, and we'll likely put some effort into this front. I think this
 takes priority over PFS, as PFS isn't helpful if decryption isn't
 necessary to track browsing habits.


This needs some proper crypto expert vetting, but...

It would be trivial (both in effort and impact on customer bandwidth) to pad
everything to a 1k boundary on HTTPS transmissions once we get there.  A
variable-length, non-significant header field can be used.  Forcing such
size counts into very large bins will degrade fingerprinting significantly.

It would also not be much more effort or customer impact to pad to the next
larger 1k size for a random large fraction of transmissions.  One could
imagine a user setting where one could opt in or out of that, for example,
and perhaps a set of relative inflation scheme sizes one could choose from
(10% inflated, 25% inflated, 50%, 50% plus 10% getting 1-5 more k of
padding, ...).

Even the slightest of these options (under HTTPS everywhere) starts to give
plausible deniability to someone's browsing; the greater ones would make
fingerprinting quite painful.  Running a statistical exercise over such
options, to see just how hard each would make it, seems useful for
understanding the effects...

The question is, what is the point of this?  To provide very strong user
obfuscation?  To provide at least minimal individual evidentiary
obfuscation, at the level of what a US court (for example) might consider
scientifically reliable, to block use of that history in trials (even if
educated guesses might still be made by law enforcement as to the articles)?

Countermeasures are responses to attain specific goals.  What are the goals
people care about for such a program, and what is the Foundation willing to
consider worth supporting with bandwidth $$ or programmer time?  How do we
come up with a list of possible goals and prioritize amongst them, in both a
technical and a policy/goals sense?

I believe that PFS will come out higher here, as its cost is really only CPU
crunchies and already-existent software settings to choose from, and its
benefits to long-term total obscurability are significant if done right.

No quantity of countermeasures beats inside info, and out-of-band compromise
of our main keys ends up being attractive enough as the only logical attack
once we start down this road at all past HTTPS-everywhere.  One-time key
compromise is far more likely than realtime compromise of PFS keys as they
rotate, though even that is possible given sufficiently motivated,
successful, stealthy subversion.  The credible ability to be confident, in
the end, that that is not happening is arguably the long-term ceiling for
how high we can realistically go with countermeasures, and has operational
security and intrusion detection as its primary limits, rather than in-band
behavior.

At some point the ops team would need a security team, an IDS team, and a 
counterintelligence team to watch the other teams, and I don't know if the 
Foundation cares that much or would find operating that way to be a more 
comfortable moral and practical stance...


George William Herbert
Sent from my iPhone


___
Wikimedia-l mailing list
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 
mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

Re: [Wikimedia-l] Why the WP will never be a real encyclopaedia

2013-08-02 Thread Peter Southwood

Journalist = professional troll
Explains but does not justify.
Peter
- Original Message - 
From: Rui Correia <correia@gmail.com>

To: Wikimedia Mailing List <wikimedia-l@lists.wikimedia.org>
Sent: Thursday, August 01, 2013 10:55 PM
Subject: Re: [Wikimedia-l] Why the WP will never be a real encyclopaedia



Denny

If you are going to shoot me down as a troll, then I can say only that you
are one of those who refuse to see the elephant in the room. I am a
journalist (and a journalism trainer); I know that if I want others to read
what I have to say, I need to come up with a headline that will attract
attention, while at the same time abiding by age-old ethical standards -
and I have done so.

Who controls what is said has become a big problem on the English and, to a
degree, the Portuguese WPs. Be fair to yourself, step back and just look at
some articles to see how many times a day they get reverted. The rot has
become endemic - there are so many people who do nothing but revert the
whole day without EVER contributing anything. Yes, I know that a lot of the
reverting is to undo the work of vandals with nothing better to do, but
most of it is done to preserve the view that a specific article has
'acquired' through time.

Can you honestly tell me that you have not come across articles that are
'untouchable'? That you know they convey a view that is not entirely right,
but YOU and I cannot change it? Can you tell me that you have not come
across editors who are hell-bent on preserving this or that article just as
it is?

Rui

On 1 August 2013 22:40, Denny Vrandečić <denny.vrande...@wikimedia.de> wrote:



Rui,

if your basic assumption is that Wikipedia will never be a real
encyclopedia because of the lack of diversity among its contributors, I
would like to know of any other encyclopedia that is anywhere close to the
diversity among contributors that Wikipedia has (just a side note: the
original Encyclopédie had an even worse bias towards aristocratic, male
French contributors than Wikipedia does, as surprising as it sounds). So,
which encyclopedia do you consider a real encyclopedia at all?

Also, never mind the fact that we already sport such diversity -- we are
actively aiming and striving for even more diversity, and we are not
comparing ourselves to the usually abysmal record of other encyclopedias,
but merely to our own high, maybe even unreachable, ideals.

So, whereas I fully agree that there is a lot about Wikipedia that can be
improved, I am not sure that a mail that starts with the statement "Why the
Wikipedia will never be a real encyclopedia" deserves even the
consideration that I offered you here, or is to be considered anything
beyond trolling.

All the best,
Denny



2013/8/1 Rui Correia <correia@gmail.com>

 Dear Colleagues at the Foundation

 I just came across an article called "White Africans of European
 ancestry". What is that even supposed to mean? Who else would white
 people be, if not of European ancestry? What other white people (yes,
 WP has a definition of "white people") could these be? Especially as it
 already says on the talk page that Arabs don't count.

 When we have 'white people' creating every conceivable article about
 'white people', but we have no 'Khoi' people writing about 'Khoi
 people', then we can't call the WP an encyclopaedia. But then the rules
 do say - somewhere - that "just because ..." And those "just because"
 rules are all over the place - you can't use what was done in one case
 to justify another similar case, because someone is bound to throw a
 "just because" rule at you. But the "just because ..." rule applies
 only when it is convenient - the corollary of the "just because ..." is
 "I know more rules and tricks than you and I will win this"/"I will not
 allow you to have your way even if I have to break all the rules and
 make new ones as I go along".

 So, just because there isn't an article about Khoi people living in
 Denmark is no reason not to have an article about "White Europeans of
 European descent living in Patagonia" or "White Europeans of European
 descent living in Timbuktu". We have allowed ourselves to fall victim
 to the digital divide - the Khoi don't have computers and internet,
 white Europeans do.

 That is not an encyclopaedia.

 Why don't we have a page on "Black Americans of African ancestry"?
 Or "Black Europeans of African ancestry"? Strangely enough, type "Black
 African" and you get redirected to "Black people", BUT the redirect
 actually takes you all the way down to Africa - yes, the article on
 Black people does not start with Africa, but with the United States,
 then Brazil ...

 Like I said, when we have 'white people' creating every conceivable
 article about 'white people', but we have no 'Khoi' people writing
 about 'Khoi people', ...

 The same goes for the so-called Biographies of Living People. I had my
 first clash on WP on the issue of the dual nationality of Nelly
 Furtado. Two hundred million people see her as Portuguese, ...

Re: [Wikimedia-l] Why the WP will never be a real encyclopaedia

2013-08-02 Thread Mathieu Stumpf
Hey, what about writing the "White people self-centered writings"
article? ;P


On 2013-08-01 22:22, Rui Correia wrote:

Dear Colleagues at the Foundation

I just came across an article called "White Africans of European
ancestry". What is that even supposed to mean? Who else would white
people be, if not of European ancestry? What other white people (yes,
WP has a definition of "white people") could these be? Especially as it
already says on the talk page that Arabs don't count.

When we have 'white people' creating every conceivable article about
'white people', but we have no 'Khoi' people writing about 'Khoi
people', then we can't call the WP an encyclopaedia. But then the rules
do say - somewhere - that "just because ..." And those "just because"
rules are all over the place - you can't use what was done in one case
to justify another similar case, because someone is bound to throw a
"just because" rule at you. But the "just because ..." rule applies
only when it is convenient - the corollary of the "just because ..." is
"I know more rules and tricks than you and I will win this"/"I will not
allow you to have your way even if I have to break all the rules and
make new ones as I go along".

So, just because there isn't an article about Khoi people living in
Denmark is no reason not to have an article about "White Europeans of
European descent living in Patagonia" or "White Europeans of European
descent living in Timbuktu". We have allowed ourselves to fall victim
to the digital divide - the Khoi don't have computers and internet,
white Europeans do.

That is not an encyclopaedia.

Why don't we have a page on "Black Americans of African ancestry"?
Or "Black Europeans of African ancestry"? Strangely enough, type "Black
African" and you get redirected to "Black people", BUT the redirect
actually takes you all the way down to Africa - yes, the article on
Black people does not start with Africa, but with the United States,
then Brazil ...

Like I said, when we have 'white people' creating every conceivable
article about 'white people', but we have no 'Khoi' people writing
about 'Khoi people', ...

The same goes for the so-called Biographies of Living People. I had my
first clash on WP on the issue of the dual nationality of Nelly
Furtado. Two hundred million people see her as Portuguese; three - yes,
3 - editors disagree and BRAG they will NEVER ALLOW it. The rationale
changes, as can be seen from the talk pages and archives. They go as
far as 'challenging' editors to show that NF sees herself as
Portuguese, and then dismiss all the evidence as not good enough - even
Nelly HERSELF saying she is PORTUGUESE was thrown out! Why? Obvious!
She doesn't count, she is not a NEUTRAL source!!! We have become a
joke!

How about being constructive?

If we can come up with every conceivable script in the world, why has
nobody come up with a script for controversial articles that would
appear on the edit page - like the script that says the article is
protected - ALERTING unsuspecting editors to the fact that said article
is controversial for x and y reasons, and that if the edit the editor
is about to make falls under that theme, to please first read the talk
page, with a direct link ALSO to an explanation of BLP and the issue of
ethnic background/present nationality. It would save lots of wasted
time and effort, and the three editors who spend sleepless nights
reverting the article might actually do something constructive for a
change.

In closing, of the nine people featured in photos on that page, I know
(have met) 5 and correspond with 2 - I can guarantee that all five of
them (and most likely all 9 [or the descendants of those no longer with
us]) would object to being featured in such a racist article.

I will write to them about this. I know that each one is not a valid
source about him/herself, and therefore their objecting will probably
not count. Just as an aside, in case you didn't know, the census in
Brazil is done on the basis of how people see themselves - white,
black, green, pink - and then we carry those figures here in the WP.
Ah, sorry, those figures are credible because they come from the CIA
Factbook; people speaking for themselves are not.


Best regards,

Rui
--
Rui Correia
Advocacy, Human Rights, Media and Language Consultant


--
Association Culture-Libre
http://www.culture-libre.org/

___
Wikimedia-l mailing list
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 
mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-02 Thread James Salsman
George William Herbert wrote:
...
 It would also not be much more effort or customer impact
 to pad to the next larger 1k size for a random large fraction
 of transmissions.

Padding each transmission with a random number of bytes, up to say 50
or 100, might provide a greater defense against fingerprinting while
saving massive amounts of bandwidth.
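
For scale, a sketch of this cheaper variant (helper names hypothetical):
the average overhead is only MAX_FILLER/2 bytes per response, but two pages
whose true sizes differ by more than the cap stay distinguishable by length
alone:

    import secrets

    MAX_FILLER = 100  # upper bound on random filler per response

    def random_filler_len():
        # 0..MAX_FILLER filler bytes; average overhead is MAX_FILLER/2.
        return secrets.randbelow(MAX_FILLER + 1)

    def confusable(size_a, size_b):
        # Padded lengths fall within [size, size + MAX_FILLER], so two
        # pages can be confused from length alone only if those
        # intervals overlap.
        return abs(size_a - size_b) <= MAX_FILLER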

... At some point the ops team would need a security team,
 an IDS team, and a counterintelligence team to watch the
 other teams, and I don't know if the Foundation cares that
 much or would find operating that way to be a more
 comfortable moral and practical stance...

I'm absolutely sure that they do care enough to get it right, but I
think that approach might be overkill. Just one or two cryptology
experts to make the transition to HTTPS, PFS, and whatever padding is
prudent would really help. I also hope that, if there is an effort to
spread disinformation about the value of such techniques, the
Foundation might consider joining with e.g. the EFF to help fight it.
I think it's likely that a single cryptology consultant would be able
to make great progress on both. Getting cryptography right isn't so
much a time-intensive task as it is one sensitive to experience and
training.

Setting up and monitoring with ongoing auditing can often be
automated, but does require the continued attention of at least one
highly skilled expert, and preferably more than one in case the first
one gets hit by a bus.

___
Wikimedia-l mailing list
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 
mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-02 Thread Anthony
On Fri, Aug 2, 2013 at 1:32 PM, James Salsman <jsals...@gmail.com> wrote:

 George William Herbert wrote:
 ...
  It would also not be much more effort or customer impact
  to pad to the next larger 1k size for a random large fraction
  of transmissions.

 Padding each transmission with a random number of bytes, up to say 50
 or 100, might provide a greater defense against fingerprinting while
 saving massive amounts of bandwidth.


Or it might provide virtually no defense and not save any bandwidth.
___
Wikimedia-l mailing list
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 
mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-02 Thread Marc A. Pelletier
On 08/02/2013 01:32 PM, James Salsman wrote:
 Padding each transmission with a random number of bytes, up to say 50
 or 100, might provide a greater defense against fingerprinting while
 saving massive amounts of bandwidth.

It would slightly change the algorithm used to make the fingerprint, not
make it significantly harder, and you'd want to have some fuzz in the
match process anyway, since you wouldn't necessarily want to have to
fiddle with your database at every edit.

The combination of "at least this size" with "at least that many
secondary documents, of at least those sizes, in that order" is probably
sufficient to narrow the match to a very tiny minority of articles.
You'd also need to randomize delays, shuffle load order, load blinds,
etc.  A minor random increase in document size wouldn't even slow
down the process.
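
A toy sketch of the matcher being described here, taking an article's
fingerprint to be the ordered list of its resource sizes with a per-entry
fuzz (all names hypothetical):

    def matches(observed_sizes, fingerprint, fuzz=100):
        # The fingerprint is the ordered list of sizes of the page and
        # its secondary resources; allowing +/- fuzz bytes per entry
        # means a minor random increase in size is simply absorbed.
        return len(observed_sizes) == len(fingerprint) and all(
            abs(seen - known) <= fuzz
            for seen, known in zip(observed_sizes, fingerprint))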

-- Marc


___
Wikimedia-l mailing list
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 
mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-02 Thread Anthony
How much padding is already inherent in HTTPS?  Does the protocol pad to
the size of the blocks in the block cipher?

Seems to me that any amount of padding is going to give little bang for the
buck, at least without using some sort of pipelining.  You could probably
do quite a bit if you redesigned MediaWiki from scratch using all those
newfangled asynchronous JavaScript techniques, but that's not exactly an
easy task.  :)


On Fri, Aug 2, 2013 at 3:45 PM, Marc A. Pelletier <m...@uberbox.org> wrote:

 On 08/02/2013 01:32 PM, James Salsman wrote:
  Padding each transmission with a random number of bytes, up to say 50
  or 100, might provide a greater defense against fingerprinting while
  saving massive amounts of bandwidth.

 It would slightly change the algorithm used to make the fingerprint, not
 make it significantly harder, and you'd want to have some fuzz in the
 match process anyway, since you wouldn't necessarily want to have to
 fiddle with your database at every edit.

 The combination of "at least this size" with "at least that many
 secondary documents, of at least those sizes, in that order" is probably
 sufficient to narrow the match to a very tiny minority of articles.
 You'd also need to randomize delays, shuffle load order, load blinds,
 etc.  A minor random increase in document size wouldn't even slow
 down the process.

 -- Marc


 ___
 Wikimedia-l mailing list
 Wikimedia-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
 mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

___
Wikimedia-l mailing list
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 
mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-02 Thread James Salsman
Marc A. Pelletier wrote:
...
 A minor random increase in document size wouldn't even slow
 down [fingerprinting.]

That's absolutely false. The last time I measured the sizes of all
9,625 vital articles, there was only one at the median length of
30,356 bytes, but four articles up to 50 bytes larger. Scale that up to
4,300,000 articles, and are you suggesting anyone is seriously going
to try fingerprinting secondary characteristics for buckets of 560
articles? It would not only slow them down, it would make their false
positive rate useless.

This is why we need cryptography experts instead of laypeople making
probabilistic inferences on Boolean predicates.

Marc, I note that you have recommended not keeping the Perl CPAN
modules up to date on Wikimedia Labs:
http://www.mediawiki.org/w/index.php?title=Wikimedia_Labs/Tool_Labs/Needed_Toolserver_features&diff=678902&oldid=678746
saying that out-of-date packages are "the best tested", when in fact
almost all CPAN packages have their own unit tests. That sort of
reasoning is certain to allow known security vulnerabilities to
persist when they could easily be avoided.

Anthony wrote:

 How much padding is already inherent in HTTPS?

None, which is why Ryan's Google Maps fingerprinting example works.

... Seems to me that any amount of padding is going to give little
 bang for the buck

Again, can we please procure expert opinions instead of relying on the
existing pool of volunteer and staff opinions, especially when there
is so much prevalent FUD discouraging the kinds of encryption which
would most likely strengthen privacy?

___
Wikimedia-l mailing list
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 
mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-02 Thread Matthew Flaschen
On 08/02/2013 05:06 PM, James Salsman wrote:
 Marc, I note that you have recommended not keeping the Perl CPAN
 modules up to date on Wikimedia Labs:
 http://www.mediawiki.org/w/index.php?title=Wikimedia_Labs/Tool_Labs/Needed_Toolserver_features&diff=678902&oldid=678746
 saying that out-of-date packages are "the best tested", when in fact
 almost all CPAN packages have their own unit tests. That sort of
 reasoning is certain to allow known security vulnerabilities to
 persist when they could easily be avoided.

Besides being from a few months ago and unrelated to this conversation,
I think that's a mischaracterization of what he said.

He said in general he would lean towards keeping the distribution's
versions, since those are the better-tested ones, but noted it should be
looked at on a package-by-package basis, and that there may well be
good reasons to bump up to a more recent version (a security
vulnerability that the distro isn't fixing rapidly enough would be such
a reason).

It seems from the context that "better tested" meant something like
"people are using this in practice in real environments", not only
automated testing.

Matt Flaschen

___
Wikimedia-l mailing list
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 
mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-02 Thread Marc A. Pelletier
On 08/02/2013 05:50 PM, Matthew Flaschen wrote:
 It seems from the context that "better tested" meant something like
 "people are using this in practice in real environments", not only
 automated testing.

And, indeed, given the constraints and objectives of the Tool Labs
(i.e.: no secrecy, all open source and data, high reliability), the more
important concern is "tested to be robust"; I'd deviate from
distribution packaging in a case where a security issue could lead to
escalation, but concerns about data leaks are not an issue.

And whilst I am not a cryptography expert (depending, I suppose, on how
you define "expert"), I happen to be very well versed in security protocol
design and zero-information analysis (but lack the math acumen for
cryptography proper, so I have to trust the Blums and Shamirs of this
world at their word).

For what concerns us here in traffic analysis, TLS is almost entirely
worthless *on its own*.  It is a necessary step, and has a great number
of /other/ benefits that justify its deployment without having anything
to do with the NSA's snooping.  I was not making an argument against it.

What I /am/ saying, OTOH, is that random padding without (at least)
pipelining and placards *is* worthless to protect against traffic
analysis, since any reliable method of analysis would necessarily be
robust against deviations in size.  Given that it has a cost to implement
and maintain, and consumes resources, it would be counterproductive to do
it.  It would give false reassurance of higher security without
actually bringing any security benefit.  I.e.: theatre.

-- Marc


___
Wikimedia-l mailing list
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 
mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-02 Thread James Salsman
... random padding without (at least) pipelining and
 placards *is* worthless to protect against traffic analysis

No, that is not true, and
http://www.ieee-security.org/TC/SP2012/papers/4681a332.pdf
explains why. Padding makes it difficult but not impossible to distinguish
between two HTTPS destinations. 4,300,000 destinations is right out.

 since any reliable method of analysis would necessarily be robust
 against deviations in size

That's like saying any reliable method to solve satisfiability in
polynomial time would necessarily be robust against variations in the
number of terms per expression. It's not even wrong.

When is the Foundation going to obtain the expertise to protect readers
living under regimes which completely forbid HTTPS access to Wikipedia,
like China? I suppose I had better put that bug about steganography for the
surveillance triggers from TOM-Skype in Bugzilla. I wish that could have
happened before everyone goes to Hong Kong.
___
Wikimedia-l mailing list
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 
mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-02 Thread Marc A. Pelletier
On 08/02/2013 08:15 PM, James Salsman wrote:
 No, that is not true, and
 http://www.ieee-security.org/TC/SP2012/papers/4681a332.pdf
 explains why. Padding makes it difficult but not impossible to distinguish
 between two HTTPS destinations. 4,300,000 destinations is right out.

... have you actually /read/ that paper? Not only does it discuss how
naive countermeasures like the ones you suggest aren't even able to
protect against identification at that coarse level, it also presumes much
*less* available data for making a determination than what is readily
available from visiting /one/ article (let alone the extra information
you can extract from one or two consecutive articles, because of the
correlation provided by the links).

Traffic analysis is a hard attack to protect against, and just throwing
random guesses at what makes it harder is not useful (and yes, padding
is just a random guess that is /well known/ in the literature not to
help against TA, despite its benefits in certain kinds of known-plaintext
and feedback ciphers).

I recommend you read ''Secure Transaction Protocol Analysis: Models and
Applications'', by Chen et al. (ISBN 9783540850731).  It's already a
little out of date and a bit superficial, but it will give you a good
basic working knowledge of the problem set and some viable approaches to
the subject.

-- Marc


___
Wikimedia-l mailing list
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 
mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-02 Thread James Salsman
Marc A. Pelletier wrote:
...
 http://www.ieee-security.org/TC/SP2012/papers/4681a332.pdf
...
 have you actually /read/ that paper?

Of course I have. Have you read the conclusions at the bottom right of page
344? What kind of adversary trying to infer our readers' article
selections is going to be able to use accuracy only 10% better than a coin
flip? The National Pointless Trial Attorney's Employment Security Agency?
___
Wikimedia-l mailing list
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 
mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-02 Thread Anthony
 Anthony wrote:
 
  How much padding is already inherent in HTTPS?

 None, which is why Ryan's Google Maps fingerprinting example works.


Citation needed.


 ... Seems to me that any amount of padding is going to give little
  bang for the buck

 Again, can we please procure expert opinions instead of relying on the
 existing pool of volunteer and staff opinions, especially when there
 is so much prevalent FUD discouraging the kinds of encryption which
 would most likely strengthen privacy?


Feel free.  But don't talk about what is most likely if you're not
interested in being told that you're wrong.
___
Wikimedia-l mailing list
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 
mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-02 Thread Anthony
On Fri, Aug 2, 2013 at 10:07 PM, Anthony <wikim...@inbox.org> wrote:


 Anthony wrote:
 
  How much padding is already inherent in HTTPS?

 None, which is why Ryan's Google Maps fingerprinting example works.


 Citation needed.


Also please address
https://en.wikipedia.org/wiki/Block_cipher_modes_of_operation#Padding

It seems that the ciphers which run in CBC mode, at least, are padded.
Wikipedia currently seems to be set to use RC4-128.  I'm not sure what, if
any, padding is used by that cipher.  But presumably Wikipedia will switch
to a better cipher if Wikimedia cares about security.
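
For what it's worth, a sketch of the arithmetic under TLS's CBC padding
rules (ignoring MAC bytes for simplicity); RC4, being a stream cipher, adds
no padding at all:

    def cbc_padded_len(plaintext_len, block_size=16):
        # TLS CBC padding adds at least one byte (the padding-length
        # byte) and rounds up to a whole block; AES blocks are 16 bytes.
        return (plaintext_len // block_size + 1) * block_size

    # A 30,356-byte payload becomes 30,368 bytes of ciphertext: at most
    # one block's worth of "free" obfuscation.
    print(cbc_padded_len(30356))  # -> 30368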
___
Wikimedia-l mailing list
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 
mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-02 Thread James Salsman
 please address
https://en.wikipedia.org/wiki/Block_cipher_modes_of_operation#Padding

Sure. As soon as someone creates
http://en.wikipedia.org/wiki/Sunset_Shimmer so I can use an appropriate
example.
___
Wikimedia-l mailing list
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 
mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-02 Thread James Salsman
Anthony, padding in this context means adding null or random bytes to the
end of encrypted TCP streams in order to obscure their true length. The
process of adding padding is entirely independent of the choice of
underlying cipher.

In this case, however, we have been discussing perfect forward secrecy,
which is dependent on the particular cipher. ECDHE-RSA-RC4-SHA is an
example of a PFS cipher suite and TLS key-exchange choice widely
supported by Apache.
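
As a concrete illustration, a minimal sketch using Python's ssl module
(rather than Apache's configuration) to restrict a server context to
forward-secret ECDHE key exchange; the certificate paths are hypothetical
placeholders, and modern OpenSSL builds have since dropped the RC4 suites:

    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")
    # "ECDHE" is an OpenSSL cipher-string keyword selecting only suites
    # with ephemeral elliptic-curve Diffie-Hellman key exchange, i.e.
    # the ones that provide forward secrecy.
    ctx.set_ciphers("ECDHE")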

The English Wikipedia articles on these subjects are all mostly
start-class, so please try Google, Google Scholar, and WP:RX for more
information.
___
Wikimedia-l mailing list
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 
mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-02 Thread Anthony
On Fri, Aug 2, 2013 at 11:09 PM, James Salsman <jsals...@gmail.com> wrote:

 Anthony, padding in this context means adding null or random bytes to the
 end of encrypted TCP streams in order to obscure their true length. The
 process of adding padding is entirely independent of the choice of
 underlying cipher.


My point is that if the stream is encrypted using a block cipher (at least
in CBC mode), then it's already padded to the block size of the cipher.

That's the more complete answer to my question of "How much padding is
already inherent in HTTPS?"  HTTPS itself does not have any inherent
padding, but when used with certain block ciphers, it does.

By the way, for most hours it's around 2.1-2.3 million, not 4.3 million.
 Wikimedia has been kind enough to give us a list of which pages are viewed
each hour of the day, along with the size of each page:
http://dumps.wikimedia.org/other/pagecounts-raw/
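
A sketch of reading those dumps (field meanings as documented on that page;
the sample line is made up):

    def parse_pagecounts_line(line):
        # Each pagecounts-raw line has four space-separated fields:
        # project code, URL-encoded page title, hourly request count,
        # and total bytes transferred.
        project, title, count, size = line.split(" ")
        return project, title, int(count), int(size)

    # parse_pagecounts_line("en Barack_Obama 997 123456789")
    #   -> ("en", "Barack_Obama", 997, 123456789)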

 In this case, however, we have been discussing perfect forward secrecy,
 which is dependent on the particular cipher. ECDHE-RSA-RC4-SHA is an
 example of a PFS cipher suite and TLS key-exchange choice widely
 supported by Apache.


PFS is the method of key exchange.  You can use it with various different
ciphers.  From what I'm reading, it can be used with AES in CBC mode, which
would be a block cipher which pads to 128 or 256 bytes.
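
Continuing the hypothetical ssl sketch from earlier in the thread, the two
properties combine in a single suite (AES-128's block being 128 bits, i.e.
16 bytes):

    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # ECDHE key exchange (forward secrecy) combined with AES-128 in CBC
    # mode; records get padded to the cipher's 16-byte (128-bit) block.
    ctx.set_ciphers("ECDHE-RSA-AES128-SHA")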
___
Wikimedia-l mailing list
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 
mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

Re: [Wikimedia-l] Disinformation regarding perfect forward secrecy for HTTPS

2013-08-02 Thread Anthony
On Fri, Aug 2, 2013 at 11:33 PM, Anthony <wikim...@inbox.org> wrote:

 AES in CBC mode, which would be a block cipher which pads to 128 or 256 bytes.


I mean bits, of course.
___
Wikimedia-l mailing list
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 
mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe

[Wikimedia-l] Another example of encryption FUD

2013-08-02 Thread James Salsman
http://www.technologyreview.com/news/517781/math-advances-raise-the-prospect-of-an-internet-security-crisis/
is another example of a very highly placed secondary news source casting
fear, uncertainty, and doubt on the value of industry-standard encryption
practices - one which is not only based on the unchecked, alleged
reliability of mere primary sources, but on sources who willingly refer to
themselves as "black hats", meaning malicious actors.

That is preposterous.

Elliptic curve-based cryptography is profoundly secure and has been
considered such ever since it came into vogue in the 1980s, well before
its predecessors were exposed as flawed.

Although some mathematicians have suggested that they are not completely
invulnerable to advances in quantum computing, the discrete logarithm
problem and elliptic curve-based trapdoor functions are both completely
impenetrable to any currently known applications of qubit-based attacks.

Shame on those who promulgate such FUD.
___
Wikimedia-l mailing list
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 
mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe