Re: [Wikitech-l] dnschain

2014-04-30 Thread Tyler Romeo
This isn't really relevant to MediaWiki, and the proposal is so ridiculous
I can only assume it is some sort of joke project.

For others seeing this thread, I found all the good quotes for you:

 DNSChain stops the NSA

 .dns is a meta-TLD because unlike traditional TLDs, it is not meant to
globally resolve to a specific IP [...] you cannot register a meta-TLD
because you already own them!

I think ICANN might take issue with that. (Also, a good read of RFC 3686 is
necessary here.)

 // hijack and record all HTTPS communications to this site
 function do_TLS_MITM(connection) {
     if (
         // let's not get caught by pinning, shall we?
         isPinnedSite(connection.website, connection.userAgent)
         // never hijack those EFF nuisances, they're annoying
         || isOnBlacklist(connection.ip)
         // hijack only 5% of connections to avoid detection
         || randomIntBetween(1, 100) > 5
     ) {
         return false;
     }
     return mitm_and_store_in_database(connection);
 }

I'd *love* to see the implementation of mitm_and_store_in_database.

Also, fun to note that the entire application is written in CoffeeScript.

--
Tyler Romeo
Stevens Institute of Technology, Class of 2016
Major in Computer Science


On Wed, Apr 30, 2014 at 1:41 AM, James Salsman jsals...@gmail.com wrote:

 Would someone please review this DNS proposal for secure HTTPS?

 https://github.com/okTurtles/dnschain
 http://okturtles.com/other/dnschain_okturtles_overview.pdf
 http://okturtles.com/

 It is new but it appears to be the most correct secure DNS solution for
 HTTPS security at present. Thank you.

 Best regards,
 James Salsman
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] dnschain

2014-04-30 Thread Martijn Hoekstra
On Apr 30, 2014 8:21 AM, Tyler Romeo tylerro...@gmail.com wrote:

 This isn't really relevant to MediaWiki, and the proposal is so ridiculous
 I can only assume it is some sort of joke project.

 For others seeing this thread, I found all the good quotes for you:

  DNSChain stops the NSA

  .dns is a meta-TLD because unlike traditional TLDs, it is not meant to
 globally resolve to a specific IP [...] you cannot register a meta-TLD
 because you already own them!

 I think ICANN might take issue with that. (Also, a good read of RFC 3686
is
 necessary here.)

  // hijack and record all HTTPS communications to this site
  function do_TLS_MITM(connection) {
      if (
          // let's not get caught by pinning, shall we?
          isPinnedSite(connection.website, connection.userAgent)
          // never hijack those EFF nuisances, they're annoying
          || isOnBlacklist(connection.ip)
          // hijack only 5% of connections to avoid detection
          || randomIntBetween(1, 100) > 5
      ) {
          return false;
      }
      return mitm_and_store_in_database(connection);
  }

 I'd *love* to see the implementation of mitm_and_store_in_database.

 Also, fun to note that the entire application is written in CoffeeScript.

 --
 Tyler Romeo
 Stevens Institute of Technology, Class of 2016
 Major in Computer Science

In the PDF I read that this is safer than lower-level languages, because it
has no null pointer exceptions.

I for one would prefer not to take safety advice from this person.

--Martijn



 On Wed, Apr 30, 2014 at 1:41 AM, James Salsman jsals...@gmail.com wrote:

  Would someone please review this DNS proposal for secure HTTPS?
 
  https://github.com/okTurtles/dnschain
  http://okturtles.com/other/dnschain_okturtles_overview.pdf
  http://okturtles.com/
 
  It is new but it appears to be the most correct secure DNS solution for
  HTTPS security at present. Thank you.
 
  Best regards,
  James Salsman
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Mailman 3

2014-04-30 Thread Martijn Hoekstra
I just stumbled across http://terriko.dreamwidth.org/151005.html

Since I frequently hear that Mailman is a spawn of Satan, if not Satan
incarnate himself, it might be of interest to take a preliminary look.

--Martijn
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] dnschain

2014-04-30 Thread Daniel Friesen
I'll start with their comparison against Convergence, since I was pretty
enthusiastic about Convergence when I read up on how it works.
The okTurtles/DNSChain authors don't seem to understand how Convergence
works at all (well, it's either that or they do understand it and are
maliciously misrepresenting it).
They make ridiculous statements like "It depends on group consensus,
but the group might not be very bright. What happens then?", lies like
"It also does not provide MITM protection on first-visit", and then say
"all of the notary info appears to be stored locally to the computer, or even
the browser. That is rather inconvenient for most users", glossing over
how that's pretty much the same as how they handle which DNSChain server
you trust. Likewise their statement "the website claims that it's simple to
use, which we have to disagree with because users are asked to manage a list
of notaries" (besides making trusting notaries sound harder
than it is) skips the part where DNSChain recommends that everyone run
their own DNSChain daemon, or at least maintain a reference to a public
DNSChain service, which is basically the same as a reference to a
Convergence notary.

For a bit of reference, the basic idea of Convergence, what they're calling
"group consensus", is this:
You have a list of notaries you trust; we'll call them W, X, Y, and Z.
When you visit a new website over HTTPS, you get an SSL certificate from
it and you need to know whether you can trust it.
To figure this out, you talk to all your notaries and ask them whether you
can trust this certificate.
Each of your notaries looks up the site itself and tells you what
certificate it sees.
If all or most of them report the same certificate, then it's safe.
Each of these notaries would be run by a different organization in a
different location; they can be in different countries under different
governments, and you can include notaries from whatever organizations you
trust (you could include Mozilla and the EFF if they ran notaries).
The basic idea is that in order to compromise your connection, a MITM
would have to collude with, or intercept all outgoing traffic from,
nearly all of the notaries you trust.

((You could even run notaries which, instead of looking at certificates
themselves, used some other method, such as DNSSEC, pinning, or the EFF's
SSL Observatory, to test a certificate.))
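
To make that concrete, here is a toy Python sketch of the consensus check
described above. ask_notary() is just a stand-in for the real notary query
(Convergence actually uses an HTTP API and certificate fingerprints), and
the two-thirds threshold is an arbitrary illustration, not Convergence's
actual policy:

from collections import Counter

# Hypothetical notaries the user has chosen to trust.
NOTARIES = ["notary-w.example", "notary-x.example",
            "notary-y.example", "notary-z.example"]

def ask_notary(notary, hostname):
    # Stand-in: ask this notary which certificate fingerprint it sees
    # when it connects to hostname from its own vantage point.
    raise NotImplementedError("placeholder for the real notary query")

def certificate_trusted(hostname, observed_fingerprint, threshold=2/3):
    # Trust the certificate we saw only if enough notaries saw the same one.
    votes = Counter()
    for notary in NOTARIES:
        try:
            votes[ask_notary(notary, hostname)] += 1
        except Exception:
            continue  # an unreachable notary simply doesn't get a vote
    return votes[observed_fingerprint] >= threshold * len(NOTARIES)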

The best explanation of Convergence is probably this video:
https://www.youtube.com/watch?v=Z7Wl2FW2TcA

~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://danielfriesen.name/]

On 2014-04-29, 10:41 PM, James Salsman wrote:
 Would someone please review this DNS proposal for secure HTTPS?

 https://github.com/okTurtles/dnschain
 http://okturtles.com/other/dnschain_okturtles_overview.pdf
 http://okturtles.com/

 It is new but it appears to be the most correct secure DNS solution for
 HTTPS security at present. Thank you.

 Best regards,
 James Salsman
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] dnschain

2014-04-30 Thread Daniel Friesen
((In a separate email since my first on Convergence was long))
Now on DNSChain itself.

The authors like to put down other methods that attempt to secure
communications to the major websites we use today on .com domains:
methods that try to ensure that when we want to talk to the website of
Facebook, Google, Wikimedia, Mozilla, the EFF, or that guy over in the
corner who can't afford a certificate from a CA, we know we're talking to
the real site instead of having to rely on the broken CA system.
So how does DNSChain secure your communication to a .com site without
relying on CA certificates?
Simple: it doesn't.

When visiting .bit sites, DNSChain looks the domain up in Namecoin
using Namecoin's standard for the DNS of .bit domains, which includes
the site's TLS fingerprints.
So on https://*.bit/ you can get a fairly nice secure connection
(assuming you're not being phished, and that you're either running a
dnschain daemon on your local machine, or running dnscrypt-proxy on your
local machine and connecting to a server running dnschain that you trust
100%).

When visiting the rest of the web... well, it just proxies to whatever
normal public DNS you tell it to, ;) which defaults to 8.8.8.8, without
even supporting the 8.8.4.4 fallback.
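
In rough Python pseudocode (not DNSChain's actual CoffeeScript;
namecoin_lookup() and forward_to_upstream() are placeholders I'm making up
for illustration), the dispatch amounts to:

UPSTREAM_DNS = "8.8.8.8"  # the hard-coded default mentioned above

def namecoin_lookup(name):
    # Placeholder: resolve foo.bit via the d/foo entry in the Namecoin
    # blockchain, which is also where the .bit TLS fingerprints live.
    raise NotImplementedError("stand-in for the Namecoin lookup")

def forward_to_upstream(name, server=UPSTREAM_DNS):
    # Placeholder: an ordinary recursive DNS query against a public resolver.
    raise NotImplementedError("stand-in for a plain DNS forwarder")

def resolve(name):
    # Only .bit names get the blockchain treatment; everything else is
    # simply proxied to whatever public DNS the operator configured.
    if name.endswith(".bit"):
        return namecoin_lookup(name)
    return forward_to_upstream(name)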

Their answer for the rest of the web is to install the okTurtles
plugin into your browser and encrypt communication PGP-style. This of
course assumes that both you and the other party have set up an id/* entry
for yourselves in Namecoin (presumably through okTurtles) and are both using
DNSChain+okTurtles. It relies on hardcoded per-site hacks to figure out who
you're talking to (or has you manually enter the other party's
namecoin/okTurtles id), it will probably bypass any WYSIWYG editor or
feature the site has (otherwise the site could intercept the communication
you're encrypting), and it assumes the site you're connecting to, which it
hasn't secured, isn't tricking it into selecting the wrong Namecoin user.
As for the current web, making sure you're sending your password to
the right person, that no one is intercepting your credit card details,
that who you're talking to isn't being tracked by anyone but the site
itself, etc... well, okTurtles just leaves that up to the same certificate
authorities they don't trust.


Of course, before we go try to set up wikipedia.bit (whoopsie, looks like
someone already swiped it), we'll probably want to support a .onion domain.

~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://danielfriesen.name/]

On 2014-04-29, 10:41 PM, James Salsman wrote:
 Would someone please review this DNS proposal for secure HTTPS?

 https://github.com/okTurtles/dnschain
 http://okturtles.com/other/dnschain_okturtles_overview.pdf
 http://okturtles.com/

 It is new but it appears to be the most correct secure DNS solution for
 HTTPS security at present. Thank you.

 Best regards,
 James Salsman
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] dnschain

2014-04-30 Thread James Salsman
Daniel Friesen wrote:

... The okTurtles/DNSChain authors...
 make ridiculous statements like "It depends on group
 consensus, but the group might not be very bright. What
 happens then?"

While I agree with much of Daniel's analysis, that part was actually
the most compelling of all the arguments against convergence.io,
except for the part about okTurtles/dnschain accepting multiple
passwords which decrypt the same cyphertext to different data sets,
because http://xkcd.com/538/

And that part is more than compelling enough for me to remain
convinced that okTurtles/dnschain is superior to Convergence.

I enjoyed the https://www.youtube.com/watch?v=Z7Wl2FW2TcA
video because I used to sit 10 meters from Kipp Hickman at Netscape
when he was adding certificate authorities to SSL. I remember him
joking about it in the hallway, right next to the letter from the NSA
which said Netscape would be in trouble if they didn't comply with
various demands which someone had pinned up across from Dan Mosedale's
cube. Five years later I was reviewing CALEA compliance documents at
Cisco. I wonder what Mosedale wants to do for DNS these days.

Best regards,
James Salsman

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] dnschain

2014-04-30 Thread James Salsman
 it just proxies whatever normal public dns you tell it to

Presumably they seed the namecoin table with DNS records and use those
instead when they exist? I don't know whether those can be expired
efficiently.

 As for on the current web making sure you're sending
 your password to the right person, no one is intercepting
 your credit card details, who you're talking to isn't being
 tracked by anyone but the site itself, etc... well okTurtles
 just leaves that up to the same certificate authorities
 they don't trust

It seems like they would want to take the next logical step and verify
namecoin-cached public key fingerprints of both the site and the
certificate before initiating a traditional SSL connection (and/or
add better revocation support).
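
A rough Python sketch of that extra check (namecoin_cached_fingerprint() is
hypothetical; Namecoin doesn't provide such a lookup today), layered on top
of the normal TLS handshake rather than replacing it:

import hashlib
import socket
import ssl

def namecoin_cached_fingerprint(hostname):
    # Hypothetical lookup of a fingerprint published in a blockchain record.
    raise NotImplementedError("not a real DNSChain/Namecoin API")

def fingerprint_matches(hostname, port=443):
    # Fetch the certificate the server actually presents...
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            der_cert = tls.getpeercert(binary_form=True)
    # ...and compare its SHA-256 digest against the pinned value.
    seen = hashlib.sha256(der_cert).hexdigest()
    return seen == namecoin_cached_fingerprint(hostname)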

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] dnschain

2014-04-30 Thread Daniel Friesen
On 2014-04-30, 4:55 AM, James Salsman wrote:
 it just proxies whatever normal public dns you tell it to
 Presumably they seed the namecoin table with DNS records and use those
 instead when they exist? I don't know whether those can be expired
 efficiently.
Nope,
https://github.com/okTurtles/dnschain/blob/master/src/lib/dns.coffee#L172

 As for on the current web making sure you're sending
 your password to the right person, no one is intercepting
 your credit card details, who you're talking to isn't being
 tracked by anyone but the site itself, etc... well okTurtles
 just leaves that up to the same certificate authorities
 they don't trust
 It seems like they would take the next logical step and verify
 namecoin-cached public key fingerprints of both the site and the
 certificate before initiating a traditional SSL connection (and/or
 better revocation support.)
You may be misunderstanding something. id/* and d/* entries (foo.bit =
d/foo in Namecoin) are part of the Namecoin core software itself, and
Namecoin has no support for carrying any DNS or TLS fingerprints besides
the d/* entries for .bit domains. The people behind okTurtles/DNSChain
did not create Namecoin; neither of the two authors of DNSChain has
contributed a single line of code to it. They can't add new features to
Namecoin, only use the ones that already exist. All they're doing with
DNSChain is building DNS plus an HTTP API on top of Namecoin, an
implementation which (as far as the public link pages and wiki I can
find) the Namecoin community doesn't even recognize. The Namecoin
community appears to be working on implementing DNS, etc. for
Namecoin itself.

Oh and the actual Namecoin community is using Convergence as the base
for one of the ways they're implementing .bit support, lol.

~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://danielfriesen.name/]


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Mailman 3

2014-04-30 Thread Sumana Harihareswara
On 04/30/2014 04:20 AM, Martijn Hoekstra wrote:
 I just stumbled across http://terriko.dreamwidth.org/151005.html
 
 Since I frequently hear that mailman is a spawn of Satan if not Satan
 Incarnate himself, it might be of interest to take a preliminary look.
 
 --Martijn

Martijn, thanks for mentioning Oda's post and bringing up Mailman 3. A
bug to upgrade at least some small Wikimedia lists to Mailman 3 is here
https://bugzilla.wikimedia.org/show_bug.cgi?id=64547 . There I link to
some samples so people can check out the new archiver (a replacement for
pipermail); I mostly like it.

-- 
Sumana Harihareswara
Senior Technical Writer
Wikimedia Foundation

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Image scaling proposal: server-side mip-mapping

2014-04-30 Thread Brion Vibber
There've been some issues reported lately with image scaling, where
resource usage on very large images has been huge (problematic for batch
uploads from a high-resolution source). Even scaling time for typical
several-megapixel JPEG photos can be slower than desired when loading up
into something like the MMV extension.

I've previously proposed limiting the generatable thumb sizes and
pre-generating those fixed sizes at upload time, but this hasn't been a
popular idea because of the lack of flexibility and potentially poor
client-side scaling or inefficient network use sending larger-than-needed
fixed image sizes.

Here's an idea that blends the performance benefits of pre-scaling with the
flexibility of our current model...


A classic technique in 3D graphics is mip-mapping
(https://en.wikipedia.org/wiki/Mip-mapping),
where an image is pre-scaled to multiple resolutions, usually each 1/2 the
width and height of the next level up.

When drawing a textured polygon on screen, the system picks the most
closely-sized level of the mipmap to draw, reducing the resources needed
and avoiding some classes of aliasing/moiré patterns when scaling down. If
you want to get fancy you can also use trilinear filtering
(https://en.wikipedia.org/wiki/Trilinear_filtering),
where the next-size-up and next-size-down mip-map levels are combined --
this further reduces artifacting.
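
For illustration, the level pick boils down to something like this Python
snippet (the function name and level count are made up, not from any
existing renderer):

import math

def pick_mip_level(full_width, target_width, num_levels):
    # Level 0 is the full-size image; each successive level halves the width.
    if target_width >= full_width:
        return 0
    # Fractional "ideal" level, rounded to the nearest stored level.
    ideal = math.log2(full_width / target_width)
    return min(num_levels - 1, max(0, round(ideal)))

# e.g. an 8000px-wide original with 6 levels, asked for ~500px: level 4.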


I'm wondering if we can use this technique to help with scaling of very
large images:
* at upload time, perform a series of scales to produce the mipmap levels
* _don't consider the upload complete_ until those are done! a web uploader
or API-using bot should probably wait until it's done before uploading the
next file, for instance...
* once upload is complete, keep on making user-facing thumbnails as
before... but make them from the smaller mipmap levels instead of the
full-scale original


This would avoid changing our external model -- where server-side scaling
can be used to produce arbitrary-size images that are well-optimized for
their target size -- while reducing resource usage for thumbs of huge
source images. We can also still do things like applying a sharpening
effect on photos, which people sorely miss when it's missing.
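
As a concrete sketch of that flow in Python with Pillow (the function names,
the 256px cutoff, and the file naming are made up for illustration, not an
actual MediaWiki interface):

from PIL import Image

MIN_LEVEL_WIDTH = 256  # arbitrary cutoff for the smallest pre-scaled level

def generate_mip_levels(src_path, dest_pattern):
    # At upload time: produce successively halved copies of the original,
    # e.g. dest_pattern = "Foo.jpg.mip%d.jpg". Returns (width, path) pairs,
    # largest first, so the thumbnailer can pick a source later.
    levels = []
    img = Image.open(src_path)
    width, height = img.size
    level = 0
    while width // 2 >= MIN_LEVEL_WIDTH:
        width, height = width // 2, height // 2
        level += 1
        path = dest_pattern % level
        img = img.resize((width, height), Image.LANCZOS)
        img.save(path)
        levels.append((width, path))
    return levels

def pick_thumb_source(levels, src_path, src_width, target_width):
    # Use the smallest pre-scaled level that is still at least as wide as
    # the requested thumbnail, so the final scale is always a downscale.
    best = (src_width, src_path)
    for width, path in levels:  # largest first
        if width >= target_width:
            best = (width, path)
        else:
            break
    return best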

If there's interest in investigating this scenario I can write up an RfC
with some more details.


(Properly handling multi-page files like PDFs, DjVu, or paged TIFFs could
complicate this by making the initial rendering extraction pretty slow,
though, so that needs consideration.)

-- brion
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Image scaling proposal: server-side mip-mapping

2014-04-30 Thread Gabriel Wicke
On 04/30/2014 12:51 PM, Brion Vibber wrote:
 * at upload time, perform a series of scales to produce the mipmap levels
 * _don't consider the upload complete_ until those are done! a web uploader
 or API-using bot should probably wait until it's done before uploading the
 next file, for instance...
 * once upload is complete, keep on making user-facing thumbnails as
 before... but make them from the smaller mipmap levels instead of the
 full-scale original

 If there's interest in investigating this scenario I can write up an RfC
 with some more details.
Yes, please do! This is very close to what Aaron and I have been discussing
recently on the ops list as well.

Gabriel

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l