Re: [2016] client-side signature checking of Debian archives (Re: When should we https our mirrors?)

2023-06-01 Thread Andrey Rakhmatullin
On Thu, Jun 01, 2023 at 07:07:04AM -0400, Michael Lazin wrote:
> I realize it is work but it would be good if apt had an option for https.
It does.

> You can still update with FTP mirrors.  Wouldn't it be a good idea to allow
> using https and keep http as a fallback for those who need an http mirror?
It's allowed.
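
For example, an https mirror works directly in sources.list (apt 1.5 and
later ship https support in apt itself; older releases need the separate
apt-transport-https package):

    deb https://deb.debian.org/debian stable main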



Re: [2016] client-side signature checking of Debian archives (Re: When should we https our mirrors?)

2023-06-01 Thread Michael Lazin
I realize it is work but it would be good if apt had an option for https.
You can still update with FTP mirrors.  Wouldn't it be a good idea to allow
using https and keep http as a fallback for those who need an http mirror?


Thank you,

Michael Lazin

.. τὸ γὰρ αὐτὸ νοεῖν ἐστίν τε καὶ εἶναι.


On Thu, Jun 1, 2023 at 5:05 AM James Addison  wrote:

> On Thu, Jun 1, 2023, 02:08 Simon Richter  wrote:
>
>>
>> The reason for the change is that it reduces user confusion. Users are
>> learning that unencrypted HTTP has neither integrity nor
>> confidentiality, and that they should actively check that web sites use
>> HTTPS, so we have gotten several inquiries why apt uses an "insecure"
>> protocol.
>>
>
> That's fair.  If I remember correctly, Debian's use of unencrypted HTTP by
> default for apt sources was confusing to me too, and is the reason I
> learned that integrity can be provided over an insecure digital channel
> without requiring encryption.  I didn't write a mailing list message to
> mention that confusion and the resulting understanding at the time however
> (and I acknowledge that HTTPS can be beneficial not only for integrity but
> to increase the cost of other attacks).
>
> I'm OK with the documentation change although I can't promise to stop
> grumbling about it in future (and/or possibly changing my mind about it).
>
>>


Re: [2016] client-side signature checking of Debian archives (Re: When should we https our mirrors?)

2023-06-01 Thread James Addison
On Thu, Jun 1, 2023, 02:08 Simon Richter  wrote:

>
> The reason for the change is that it reduces user confusion. Users are
> learning that unencrypted HTTP has neither integrity nor
> confidentiality, and that they should actively check that web sites use
> HTTPS, so we have gotten several inquiries why apt uses an "insecure"
> protocol.
>

That's fair.  If I remember correctly, Debian's use of unencrypted HTTP by
default for apt sources was confusing to me too, and is the reason I
learned that integrity can be provided over an insecure digital channel
without requiring encryption.  I didn't write a mailing list message to
mention that confusion and the resulting understanding at the time however
(and I acknowledge that HTTPS can be beneficial not only for integrity but
to increase the cost of other attacks).

I'm OK with the documentation change although I can't promise to stop
grumbling about it in future (and/or possibly changing my mind about it).

>


Re: [2016] client-side signature checking of Debian archives (Re: When should we https our mirrors?)

2023-05-31 Thread Simon Richter

Hi,


   - when you use switches, the local network segment has no other nodes
   - if there were other nodes, they would likely miss some packets in
the conversation, which means they cannot generate checksums
   - there is no software that can perform this inspection



Yep, there are limitations to work within, and the result would be the
groundwork for some kind of 'integrity appliance', either within the
local network or deliberately outside of it.


There are commercial security solutions that aim to implement this, and 
many of these also contain solutions to inspect the contents of TLS 
traffic, usually by creating their own CA and requiring clients to 
install their certificate as trusted.


These are then rolled out on the edge of company networks. As you can 
imagine, that comes with its own set of downsides, and I'd argue it 
makes things a lot less secure, especially if you treat it as a product 
that can be installed once and then forgotten about.


For the specific case of package downloads, I'd also argue that it is 
unnecessary, because apt already verifies integrity on the client, and 
such an appliance would have no additional information to act on that 
the client does not have.



An alternative (that may
already exist, I would be surprised if not) would be to run
two or more instances of the same computing hardware with key I/O
interfaces between all of them.


This is being done already, but systems like these are generally not 
connected to the Internet.



I think it's fine if we're making a change that is broadly agreed
upon; I wasn't able to find a decision that would be suitable to link
to from the documentation (or commit log) when searching the mailing
lists earlier, and since it's a change of debatable merit with
potential impact on users and infrastructure (albeit in a gradual way),
I wanted to raise some awareness and also check where the reasoning
for the change originates.


I'd also have to search the archives, but I remember the last discussion 
as I was part of it.


The reason for the change is that it reduces user confusion. Users are 
learning that unencrypted HTTP has neither integrity nor 
confidentiality, and that they should actively check that web sites use 
HTTPS, so we have gotten several inquiries why apt uses an "insecure" 
protocol.


It takes time to explain that apt performs its own integrity checks, and 
we cannot and should not explain away the missing confidentiality, 
because we're happy that they want to protect their information and are 
insisting on using TLS -- that is far from perfect, but already a vast 
improvement.


Infrastructure-wise, Docker is the far bigger problem, because people 
keep downloading the same packages over and over again, and the 
traditional mirror network donated by universities already had trouble 
keeping up. Two large CDNs are also donating mirrors now, and we use 
these preferentially.


The CDN mirrors also come with caching infrastructure and can terminate 
TLS, which the traditional network cannot because there is no 
key/certificate distribution infrastructure (and such a thing can only 
exist within a single organization), so the last strong technical reason 
for not using HTTPS went away.


The remaining technical reasons are weak, so a non-technical reason 
prompted the change in default.


   Simon



Re: [2016] client-side signature checking of Debian archives (Re: When should we https our mirrors?)

2023-05-31 Thread James Addison
Hi Simon - thanks for the response.  Please find my reply inline below:

On Wed, 31 May 2023 at 11:07, Simon Richter  wrote:
>
> On 5/31/23 05:42, James Addison wrote:
>
> >* It allows other devices on the local network segment to inspect the
> >  content that other nodes are sending and receiving.
>
> That is very theoretical:
>
>   - when you use switches, the local network segment has no other nodes
>   - if there were other nodes, they would likely miss some packets in
> the conversation, which means they cannot generate checksums
>   - there is no software that can perform this inspection

Yep, there are limitations to work within, and the result would be the
groundwork for some kind of 'integrity appliance', either within the
local network or deliberately outside of it.  An alternative (that may
already exist, I would be surprised if not) would be to run
two or more instances of the same computing hardware with key I/O
interfaces between all of them.

> >* As another thread participant mentioned, if you don't trust a global
> >  passive adversary, then it may be sensible to question whether you can
> >  trust their certificate issuers (I admit that your HPKP comments
> >  partially address this concern).  If you don't trust either, you might
> >  choose to save some CPU cycles (both for yourself and those who may be
> >  gathering your data).
>
> The reason we're even having this debate is that the use of TLS is of
> little consequence from an integrity point of view, because we're doing
> our own verification with our own PKI that is independent from TLS
> certificate issuers, and that isn't going to change.
>
> Because the use of TLS is inconsequential, it is a trade-off between
> various weak reasons:
>
> 1. unencrypted HTTP can be cached with a caching proxy
> 2. unencrypted HTTP can be redirected to a local mirror
> 3. encrypted HTTPS does not let a listener determine quite as easily
> what software you are running on your machine
> 4. encrypted HTTPS requires CA certificates to be deployed to clients
> and marked as trusted, or apt will fail.

Thanks for articulating these much more clearly than I did.

I think it's fine if we're making a change that is broadly agreed
upon; I wasn't able to find a decision that would be suitable to link
to from the documentation (or commit log) when searching the mailing
lists earlier, and since it's a change of debatable merit with
potential impact on users and infrastructure (albeit in a gradual way),
I wanted to raise some awareness and also check where the reasoning
for the change originates.

Thanks,
James



Re: [2016] client-side signature checking of Debian archives (Re: When should we https our mirrors?)

2023-05-31 Thread Simon Richter

Hi,

On 5/31/23 05:42, James Addison wrote:


   * It allows other devices on the local network segment to inspect the
 content that other nodes are sending and receiving.


That is very theoretical:

 - when you use switches, the local network segment has no other nodes
 - if there were other nodes, they would likely miss some packets in 
the conversation, which means they cannot generate checksums

 - there is no software that can perform this inspection

A place where such an inspection would be possible is a local proxy, but 
this is also not being done because even a caching proxy wants to 
forward data as it arrives, and there is no gain in security as the 
proxy can only use the same verification method as the client.



   * As another thread participant mentioned, if you don't trust a global
 passive adversary, then it may be sensible to question whether you can
 trust their certificate issuers (I admit that your HPKP comments partially
 address this concern).  If you don't trust either, you might choose to save
 some CPU cycles (both for yourself and those who may be gathering your
 data).


The reason we're even having this debate is that the use of TLS is of 
little consequence from an integrity point of view, because we're doing 
our own verification with our own PKI that is independent from TLS 
certificate issuers, and that isn't going to change.


Because the use of TLS is inconsequential, it is a trade-off between 
various weak reasons:


1. unencrypted HTTP can be cached with a caching proxy
2. unencrypted HTTP can be redirected to a local mirror
3. encrypted HTTPS does not let a listener determine quite as easily 
what software you are running on your machine
4. encrypted HTTPS requires CA certificates to be deployed to clients 
and marked as trusted, or apt will fail.
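
Point 1, for what it's worth, is a single configuration line on the
client -- a minimal sketch, assuming a local caching proxy such as
apt-cacher-ng (the host name and port below are placeholders):

    // /etc/apt/apt.conf.d/01proxy -- placeholder host and port
    Acquire::http::Proxy "http://apt-cache.example.internal:3142/";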


Whether a caching proxy and/or redirection to an unofficial nearest 
mirror are desirable or undesirable also depends on your stance -- my 
Docker machines download the same packages sometimes thirty times a day, 
so being able to redirect deb.debian.org to a local mirror is a massive 
performance boost, but also means sometimes images are built with stale 
packages.


To everyday users in developed countries, this change is completely 
irrelevant: they get a default configuration with all CAs trusted either 
way, they get packages from a content delivery network that has HTTPS 
acceleration so that is not a bottleneck, and modern CPUs have 
cryptographic primitives so this uses less CPU time than decompression.


I've raised all of these points in a previous debate already, so I 
believe they have been taken into account in the current decision. No 
users in developing countries have chimed in, so it does not seem to be 
a priority there either, and the Docker users can help themselves with a 
sed invocation as part of the image build.
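
A minimal sketch of that sed approach, assuming a Dockerfile and a
reachable local mirror (the mirror host name is a placeholder; on newer
images the sources may live in /etc/apt/sources.list.d/debian.sources
instead):

    RUN sed -i 's|https\?://deb.debian.org|http://mirror.internal|g' \
            /etc/apt/sources.list \
     && apt-get update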


So far, I'm not seeing a reason to restart this debate; for that to 
happen, we'd need to find at least someone for whom that change is more 
than a slight inconvenience.


   Simon



Re: [2016] client-side signature checking of Debian archives (Re: When should we https our mirrors?)

2023-05-30 Thread James Addison
In follow-up to: https://lists.debian.org/debian-devel/2016/10/msg00592.html

As an update here: the Debian release notes now recommend[1] HTTPS instead
of HTTP by default.

Despite the validity of many of the theoretical concerns about APT over HTTP,
I reckon that there are a few (maybe unusual) reasons that could be argued in
favour of plain HTTP:

  * It allows other devices on the local network segment to inspect the
content that other nodes are sending and receiving.  When you pay for a
drink or meal at a bar, typical etiquette is _not_ to place the banknotes
inside a sealed envelope (TLS) during the handover.  In other words:
integrity can increase as the number of potential viewers increases. (I
seem to remember reading a similar phrase about the depth of bugs in code)

In that kind of scenario, an integrity token is provided inline as part of
the message (EURion constellation or similar).

  * In the context of machine learning -- where data gathered can be used to
inform and train other processes -- some individuals and organizations may
in fact _want_ to share their workflows with all, as opposed to with only
one other, potentially culturally-distant, entity.

  * As another thread participant mentioned, if you don't trust a global
passive adversary, then it may be sensible to question whether you can
trust their certificate issuers (I admit that your HPKP comments partially
address this concern).  If you don't trust either, you might choose to save
some CPU cycles (both for yourself and those who may be gathering your
data).

Reflections have been published[2] about progress and change as it has occurred
over the past decade or so.  As someone who definitely tends paranoid, despite
some of the reassurances written there, I don't fully trust that the migration
from "your traffic was mostly snoopable in transit" to "your traffic is mostly
encrypted (but to endpoints that we could lean on)" is a true shift for most
affected parties, other than creating some new social dynamics and reallocating
equipment and personnel.

Perhaps that's all an unusual perspective, and/or can be refuted with public
information - I certainly don't have any private information to prove it.

I like privacy, and I think I've been more of an advocate for software privacy
than against (in fact, some of the arguments I've developed in this message
are relatively new to me).  But I do begin to wonder whether the required
overheads for it -- especially given the limitations of the practical, human
systems that operate them -- really benefit the everyday person, or instead
only a (questionably trustworthy) few who want privacy for nefarious reasons.

[1] - 
https://www.debian.org/releases/testing/amd64/release-notes/ch-upgrading.en.html#network

[2] - https://www.ietf.org/archive/id/draft-farrell-tenyearsafter-00.html



Re: client-side signature checking of Debian archives (Re: When should we https our mirrors?)

2016-11-11 Thread Henrique de Moraes Holschuh
On Fri, 11 Nov 2016, Christoph Biedl wrote:
> a proof of concept for all this (I can resist, though). The apt programs
> could obfuscate their request behaviour, the TLS layer could add random
> padding of data and time, but I doubt this would help much.

AFAIK, the TLS layer *does* bit-stuffing and random padding, but it
cannot do that to the point it would help the problem at hand, and still
be usable.

Bitstuffing TLS to the point it could (maybe) deal with the Debian
archive is the wrong solution for the problem anyway, so I won't expand
on this.

-- 
  Henrique Holschuh



Re: client-side signature checking of Debian archives (Re: When should we https our mirrors?)

2016-11-10 Thread Christoph Biedl
Henrique de Moraes Holschuh wrote...

> There are some relevant issues, here.
> 
> 1. It does protect against passive snooping *from non-skilled
> attackers*.

Well, yes and no. The tools keep getting better, so thinking a few years
into the future, sophisticated programs for that purpose might be available
to everyone. Imagine there was a time before wireshark/ethereal, and how
much work pcap analysis was back then.

> 2. It is unknown how much it can protect against passive snooping from
> skilled attackers capable of passive TCP metadata snooping and basic
> traffic analysis *FOR* something like the Debian archive and APT doing
> an update run against the Debian archive

The logical answer is pretty obvious: not at all. It's a question of the
effort required, and my gut feeling tells me it's not very much.

> Do not dismiss (2). TLS is not really designed to be able to fully
> protect object retrieval from a *fully known* *static* object store
> against traffic metadata analysis.   And an apt update run would be even
> worse to protect, as the attacker can [after a small time window from
> the mirror pulse] fully profile the more probable object combinations
> that would be retrieved depending on what version of Debian the user
> has.

Things are worse: There's a small set of clients, and their request
behaviour is quite deterministic. Another snooping aid is usage of
pdiff.

In total, I would not be surprised if, given just the frame metadata
(direction, high-res timestamp, payload size), it were possible to
reconstruct the actual data transmitted with high accuracy. Even a dget/apt-get
source should have a pretty unique pattern; and I feel tempted to create
a proof of concept for all this (I can resist, though). The apt programs
could obfuscate their request behaviour, the TLS layer could add random
padding of data and time, but I doubt this would help much.
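
To illustrate how little is needed, a toy sketch that matches an observed
transfer size against a mirror's own Packages index (assuming one download
per connection and ignoring TLS record overhead):

    # print packages whose compressed size is within 64 bytes of an
    # observed transfer of $OBS bytes
    awk -v obs="$OBS" '/^Package:/ { p = $2 }
        /^Size:/ && $2 >= obs - 64 && $2 <= obs + 64 { print p, $2 }' Packages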

Another "wasn't surprised", applicances might already have that. If not,
the vendors could implement this easily.

> Now, hopefully I got all of that wrong and someone will set me straight.
>  It would make me sleep better at night...

Sorry Dorothy.

Christoph




Re: client-side signature checking of Debian archives (Re: When should we https our mirrors?)

2016-11-10 Thread David Kalnischkies
On Thu, Nov 10, 2016 at 12:39:40PM -0200, Henrique de Moraes Holschuh wrote:
> I'd prefer if we enhanced apt transports to run a lot more protected
> (preferably under seccomp strict) before any such push for enabling
> https transports in apt.  It would reduce the security impact a great
> deal.

I am helplessly optimistic, so I will say it again even if the past
tells me it is pointless: anyone in any way interested in improvements is
more than welcome to join deity@l.d.o / #debian-apt.

Very few things get done by just talking on d-d@ about nice-to-haves.


> Mind you, at first look it seems like apt transports will *run as root*
> in Debian jessie.  HOWEVER I didn't do a deep check, just "ps aux" while
> apt was running.  And I didn't check in unstable.  So, I (hopefully)
> could be wrong about this.

For jessie you are right. The few of us took an awful lot of time to
basically reimplement many parts of the acquire subsystem in the last
few years. You can watch Michael talk about it at DC14, me at DC15 and
Julian at DC16 if you like, but the very basic summary is that from
stretch onwards all apt methods run effectively as _apt:nogroups (and
with no-new-privs), and apt itself requires repositories to be signed and
expects more than just SHA1 or MD5 (as usual, that applies to everything
related to apt, like aptitude, synaptic, packagekit, apt-file, …).
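
A quick way to see the privilege separation on a stretch or later system,
while an `apt update` runs in another terminal (a rough check, not an
exact description of apt internals):

    ps -eo user,comm | grep '^_apt'   # transport methods run as _apt, not root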

There is still much we want to do, but for now we are actually happy that
we seem to have managed to satisfy all the people who responded to those
changes: the army complaining that it breaks their firewalls, strange
so-called sneaker-net abominations, or other interesting workflows[0] …


> Can you imagine trying to contain an exploit in the wild that will take
> advantage of people trying to update their systems against said exploit
> to spread even further?  Well, this is exactly what would happen.  We

Let the code with no bugs cast the first stone – you could just as well
say that any http bug is less critical if wrapped in https. libcurl
depends on a crapload of stuff we don't actually need, because we use it
just for https and not for ftp, samba, …. And most TLS exploits tend to
involve tricking it into considering a cert valid while it isn't, which
is a big problem for most things, but for apt those kinds of bugs are
a lot less critical, as we don't trust the transport layer anyhow (we
treat it more as a MITM annoyance than as the one-and-only security).
As such – completely non-empirical, of course – I think it would be
a net benefit to have https sources available for use by default even if
https is overrated in this context – but you will only die very tired if
you try to explain why https-everywhere is a requirement for your browser
and even most (language-specific) package managers, to add a tiny layer
of security to them, while our beloved apt doesn't strictly need it for
security (but happily accepts the tiny layer as an addition).


As already said, we are open to considering a replacement of libcurl with
a suitable alternative, e.g. using libgnutls directly – but see the
optimistic paragraph above; I still hope that a volunteer will show up…
(as the biggest TLS exploit is usually the implementor who hasn't worked
with the API before, and I haven't).

And I would still like to have some help for a-t-tor, too. The package is
way smaller than even the smallest node packages [SCNR] nowadays, and
someone with an eye for detail, integration and documentation could do
wonders… but I start to digress.


Best regards

David Kalnischkies

[0] https://xkcd.com/1172/




Re: client-side signature checking of Debian archives (Re: When should we https our mirrors?)

2016-11-10 Thread Henrique de Moraes Holschuh
On Mon, Oct 24, 2016, at 00:28, Russ Allbery wrote:
> The value of HTTPS lies in its protection against passive snooping. 

There are some relevant issues here.

1. It does protect against passive snooping *from non-skilled
attackers*.  And this is not being made anywhere near clear enough.

2. It is unknown how much it can protect against passive snooping from
skilled attackers capable of passive TCP metadata snooping and basic
traffic analysis *FOR* something like the Debian archive and APT doing
an update run against the Debian archive (i.e. this comment is NOT valid
for ANY OTHER USE of https).

Do not dismiss (2). TLS is not really designed to be able to fully
protect object retrieval from a *fully known* *static* object store
against traffic metadata analysis.   And an apt update run would be even
worse to protect, as the attacker can [after a small time window from
the mirror pulse] fully profile the more probable object combinations
that would be retrieved depending on what version of Debian the user
has.

apt-transport-https really exists to help people bypass half-hearted
filtering and logging by corporate or ISP web proxies, and it is very
good for that. It is a valid use case, and one we do want to support. 
But it shouldn't be sold as a strong privacy defense, *ever*.

There wouldn't be a reason not to enable apt-transport-https at this
point of the analysis: it is still not really making things any worse,
so you'd likely have a net gain, since it does raise the bar for the
attackers.

However, there are these little real world details called "attack
surface" and "implementation complexity" as sources of exploitable
security vulnerabilities, and the picture changes a lot when you start
looking at that angle:

(up-to-date Debian stable/jessie amd64 system):

ldd /usr/lib/apt/methods/http | cut -d '(' -f 1
linux-vdso.so.1
libapt-pkg.so.4.12 => /usr/lib/x86_64-linux-gnu/libapt-pkg.so.4.12
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6
libutil.so.1 => /lib/x86_64-linux-gnu/libutil.so.1
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1
libbz2.so.1.0 => /lib/x86_64-linux-gnu/libbz2.so.1.0
liblzma.so.5 => /lib/x86_64-linux-gnu/liblzma.so.5
/lib64/ld-linux-x86-64.so.2

compared with:

ldd /usr/lib/apt/methods/https | cut -d '(' -f 1
linux-vdso.so.1
libapt-pkg.so.4.12 => /usr/lib/x86_64-linux-gnu/libapt-pkg.so.4.12
libcurl-gnutls.so.4 => /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6
libutil.so.1 => /lib/x86_64-linux-gnu/libutil.so.1
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1
libbz2.so.1.0 => /lib/x86_64-linux-gnu/libbz2.so.1.0
liblzma.so.5 => /lib/x86_64-linux-gnu/liblzma.so.5
libidn.so.11 => /usr/lib/x86_64-linux-gnu/libidn.so.11
librtmp.so.1 => /usr/lib/x86_64-linux-gnu/librtmp.so.1
libssh2.so.1 => /usr/lib/x86_64-linux-gnu/libssh2.so.1
libnettle.so.4 => /usr/lib/x86_64-linux-gnu/libnettle.so.4
libgnutls-deb0.so.28 => /usr/lib/x86_64-linux-gnu/libgnutls-deb0.so.28
libgssapi_krb5.so.2 => /usr/lib/x86_64-linux-gnu/libgssapi_krb5.so.2
libkrb5.so.3 => /usr/lib/x86_64-linux-gnu/libkrb5.so.3
libk5crypto.so.3 => /usr/lib/x86_64-linux-gnu/libk5crypto.so.3
libcom_err.so.2 => /lib/x86_64-linux-gnu/libcom_err.so.2
liblber-2.4.so.2 => /usr/lib/x86_64-linux-gnu/liblber-2.4.so.2
libldap_r-2.4.so.2 => /usr/lib/x86_64-linux-gnu/libldap_r-2.4.so.2
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0
/lib64/ld-linux-x86-64.so.2
libhogweed.so.2 => /usr/lib/x86_64-linux-gnu/libhogweed.so.2
libgmp.so.10 => /usr/lib/x86_64-linux-gnu/libgmp.so.10
libgcrypt.so.20 => /lib/x86_64-linux-gnu/libgcrypt.so.20
libp11-kit.so.0 => /usr/lib/x86_64-linux-gnu/libp11-kit.so.0
libtasn1.so.6 => /usr/lib/x86_64-linux-gnu/libtasn1.so.6
libkrb5support.so.0 => /usr/lib/x86_64-linux-gnu/libkrb5support.so.0
libkeyutils.so.1 => /lib/x86_64-linux-gnu/libkeyutils.so.1
libresolv.so.2 => /lib/x86_64-linux-gnu/libresolv.so.2
libsasl2.so.2 => /usr/lib/x86_64-linux-gnu/libsasl2.so.2
libgpg-error.so.0 => /lib/x86_64-linux-gnu/libgpg-error.so.0
libffi.so.6 => /usr/lib/x86_64-linux-gnu/libffi.so.6


I assume I don't have to remind anyone of the security history of the
above library 

Re: client-side signature checking of Debian archives (Re: When should we https our mirrors?)

2016-11-09 Thread Adrian Bunk
On Sun, Nov 06, 2016 at 12:03:03AM +0100, Philipp Kern wrote:
> On 2016-11-05 22:23, Adrian Bunk wrote:
> > The solution you are trying to sell is apt-transport-https as default.
> [...]
> > Your solution would be a lot of work with relatively little improvement.
> 
> Well, the client-side exists and works.
>...

Yes and no.

It works, but there is much work left if you want to make that the 
default.

David already mentioned in this discussion where apt-transport-https 
needs improvements.

I already mentioned that the footprint of adding
apt-transport-https to the installer and small base filesystems
is currently pretty large.
As an example, the installer would require two different TLS libraries
if you just add apt-transport-https.

I would guess there are also other areas that have to be looked at
if that should become the default, like how certificate errors will
be handled in the installer.

> > BTW: The "possible low-effort improvement without tradeoff" is:
> > 
> > Is apt-transport-tor working reliably enough for general usage?
> > Are security updates available immediately through apt-transport-tor?
> > Is there a good reason why apt-transport-tor is not mentioned
> > at the frontpage of http://www.debian.org/security/ ?
> > 
> > My current impression (that might be wrong) is that the technical side
> > would be available, only documentation and perhaps PR (e.g. email to
> > debian-security-announce) are missing.
> 
> If we are limiting ourselves to mirrors run by DSA (which is what happens
> for the backends of the onion balancer), we could have the same with an
> HTTPS-based solution just fine. It'd likely raise the same scalability and
> operational questions as HTTPS. Your proposal here simply has different
> tradeoffs, not none as you claim.

Russ and I were discussing one specific tradeoff.

Let me repeat the relevant problem:
  By discouraging users from using mirrors for security.debian.org,
  Debian is presenting a nearly complete list of all computers in
  the world running Debian stable and their security update status
  and policies on a silver plate to the NSA.

Russ answered:
  It's a tradeoff with freshness of security updates.

With HTTP this tradeoff between "not giving information about Debian 
users on a silver plate to the NSA" and "providing security updates
as soon as possible" exists.

This tradeoff still exists with HTTPS.

Tor offers a solution for this specific problem that does not have
this specific tradeoff.

> Kind regards
> Philipp Kern

cu
Adrian

-- 

   "Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
   "Only a promise," Lao Er said.
   Pearl S. Buck - Dragon Seed



Re: client-side signature checking of Debian archives (Re: When should we https our mirrors?)

2016-11-05 Thread Philipp Kern

On 2016-11-05 22:23, Adrian Bunk wrote:

The solution you are trying to sell is apt-transport-https as default.

[...]
Your solution would be a lot of work with relatively little 
improvement.


Well, the client-side exists and works. Then it boils down to whether the
mirror sponsors would be willing to offer HTTPS in general, and from there
to the operational challenges of having the right certs available. Something
like httpredir would have had it easier because it redirected to
canonical names owned by the sponsors, for which they could offer HTTPS.



(Not limited to security) it is usually worth the effort to start by
properly formulating the problem(s) that should be solved, instead of
limiting yourself to some solutions.


While I generally agree with that statement and see it violated quite 
often myself, you should also give others in the discussion the benefit 
of the doubt here.



BTW: The "possible low-effort improvement without tradeoff" is:

Is apt-transport-tor working reliably enough for general usage?
Are security updates available immediately through apt-transport-tor?
Is there a good reason why apt-transport-tor is not mentioned
at the frontpage of http://www.debian.org/security/ ?

My current impression (that might be wrong) is that the technical side
would be available, only documentation and perhaps PR (e.g. email to
debian-security-announce) are missing.


If we are limiting ourselves to mirrors run by DSA (which is what 
happens for the backends of the onion balancer), we could have the same 
with an HTTPS-based solution just fine. It'd likely raise the same 
scalability and operational questions as HTTPS. Your proposal here 
simply has different tradeoffs, not none as you claim.


Kind regards
Philipp Kern



Re: client-side signature checking of Debian archives (Re: When should we https our mirrors?)

2016-11-05 Thread Adrian Bunk
On Tue, Oct 25, 2016 at 11:06:23AM -0700, Russ Allbery wrote:
> Adrian Bunk  writes:
>...
> So, I'm not quite sure how to put this, since I don't know how much work
> you've done professionally in computer security, and I don't want to
> belittle that.  It's entirely possible that we have equivalent levels of
> experience and you just disagree with me, and I want to acknowledge that.

I have some knowledge here and there, but I am definitely not
a Security Expert.

When something is so well-known or obvious that even someone like me 
knows it or is able to figure it out, it has to be pretty well-known
or obvious.

>...
> In the specific case of retrieval of apt packages, use of HTTPS raises the
> bar for the attacker who wants to know what packages you probably have on
> your system from simply parsing the GET commands (information that would
> be captured in any dragnet surveillance database as a matter of course)
> for package names and versions to writing Debian-specific analysis code to
> make (fallible) guesses from traffic analysis.  This is a fairly
> substantial change in the required resources, and makes various things
> (such as retroactive data mining when this particular use case wasn't
> anticipated in advance) considerably harder.

I hope I am not too rude when I state the general problem I have with 
the way you are arguing:
You are not trying to solve a problem,
you are trying to sell a solution.

The solution you are trying to sell is apt-transport-https as default.

The problem would be something like "more security/privacy for users"
or "make it harder for the NSA".

Your solution would be a lot of work with relatively little improvement.

>...
> Yes, you're not going to get absolute security against the NSA with cheap
> wire encryption, but you *do* change the resource battle.

If something makes it more costly for the NSA, the US taxpayer will
just give them another billion every year - no party in the US Congress
would oppose that.

Debian resources are far more limited.

No one is stopping you from doing the client-side work for making
apt-transport-https the default (work that no one seems to be doing) if
this is important to you - it is your time.

But for any properly formulated problem and a fixed amount of resources,
I doubt the best solution available would include apt-transport-https.

>...
> > By discouraging users from using mirrors for security.debian.org, Debian
> > is presenting a nearly complete list of all computers in the world
> > running Debian stable and their security update status and policies on a
> > silver plate to the NSA.
> 
> It's a tradeoff with freshness of security updates.  Personally, I usually
> use an in-house mirror of security.debian.org for various reasons, and
> it's worth noting that our "discouraging" isn't particularly aggressive.

You are drawing your conclusions from just looking at some solutions.

When I go back one step and think of ways to improve the situation for 
users, I do also see a possible low-effort improvement that does not 
have the tradeoff you are talking about and that does not require a 
local mirror. And when even I am able to come up with something, there 
is likely more.

(Not limited to security) it is usually worth the effort to start by 
properly formulating the problem(s) that should be solved, instead of 
limiting yourself to some solutions.

cu
Adrian


BTW: The "possible low-effort improvement without tradeoff" is:

Is apt-transport-tor working reliably enough for general usage?
Are security updates available immediately through apt-transport-tor?
Is there a good reason why apt-transport-tor is not mentioned
at the frontpage of http://www.debian.org/security/ ?

My current impression (that might be wrong) is that the technical side
would be available, only documentation and perhaps PR (e.g. email to
debian-security-announce) are missing.

apt-transport-tor could also be taken into consideration in the 
unattended-upgrades discussion as a (perhaps non-default) option 
offered for receiving the security updates.
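
For reference, switching a source to Tor is only a scheme change once
apt-transport-tor is installed and a local tor daemon is running (a sketch
using the jessie-era security suite name):

    deb tor+http://security.debian.org/ jessie/updates main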

Oh, and when I look at http://www.debian.org/security/ I see
a link "For more information about security issues in Debian, please 
... and a manual called Securing Debian."

The manual says "Version: 3.17, built on Sun, 08 Apr 2012".

I know that writing documentation is less sexy than implementing
technical solutions, but for the problem "more security for users"
proper user documentation is actually more important than many of
the technical solutions.


-- 

   "Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
   "Only a promise," Lao Er said.
   Pearl S. Buck - Dragon Seed



Re: client-side signature checking of Debian archives (Re: When should we https our mirrors?)

2016-10-25 Thread Russ Allbery
Adrian Bunk  writes:

> If I were looking at the apt traffic, the most interesting for me would
> be the traffic to security.debian.org that a computer running Debian
> stable usually produces.

> Just collecting data on when and how much HTTPS traffic is happening
> should be sufficient to determine information like the following:
>   What Debian release is running on that computer?
>   Which security relevant packages are installed in that computer?
>   Are security updates downloaded automatically or manually?
>   In the latter case, are they installed in a timely manner?

Again, this requires targeting.  It requires someone go to the effort of
building the software that does this sort of analysis, and then keeping it
up-to-date with changes in the Debian archive structure, transport layer,
package sizes, etc.  (And knowing whether the packages are installed is
quite a bit harder to figure out without doing more active things like
fingerprinting, and even then it's not always possible.)

> When your adversary is powerful enough that he is capable of monitoring
> your traffic with security.debian.org, then apt-transport-https is just 
> snake oil.

So, I'm not quite sure how to put this, since I don't know how much work
you've done professionally in computer security, and I don't want to
belittle that.  It's entirely possible that we have equivalent levels of
experience and you just disagree with me, and I want to acknowledge that.

But, that said, this feeling, which comes across roughly as "this doesn't
completely fix the problem, so why bother with half measures?", is a trap
I see people fall into when they don't have a good feel for the dynamics
of practical security measures.  It's *so* tempting to let the best be the
enemy of the good because you, as an experienced engineer with a complete
mental model of the system being protected, can see flaws in the
protective mitigations and feel like it would be easy to counter them.

This is why, in computer security, it's usually a bad idea to talk about
"fixes" except for specific software-bug vulnerabilities.  Security
measures are often described as "mitigations," and rather than determining
whether or not something is secure, it's usually better to talk about
making things easier or harder for an attacker.

You should generally assume that if a sufficiently funded and motivated
adversary wants to break into your Internet-connected system in
particular, they're going to succeed.  But that doesn't mean that computer
security is useless.  There are multiple other factors in play: a lot of
adversaries care a great deal about not being *detected* (which is a much
easier problem), most attacks are not targeted at all (they're either
spray-and-pray automated attacks or they're dragnet data gathering with no
predetermined goal in mind), and time and resources matter.  To give a
specific example, there are many situations where you don't have to keep a
government out of your stuff permanently.  You just have to keep them out
for long enough that you can get your lawyer involved.

In the specific case of retrieval of apt packages, use of HTTPS raises the
bar for the attacker who wants to know what packages you probably have on
your system from simply parsing the GET commands (information that would
be captured in any dragnet surveillance database as a matter of course)
for package names and versions to writing Debian-specific analysis code to
make (fallible) guesses from traffic analysis.  This is a fairly
substantial change in the required resources, and makes various things
(such as retroactive data mining when this particular use case wasn't
anticipated in advance) considerably harder.

> The NSA might actually be very grateful that there are people who are
> promoting such snake oil as solution, since this lowers the probability
> of people looking for solutions that could make it harder for the NSA.

I truly don't believe this worry makes any sense.

Ubiquitous encryption makes things considerably harder for the NSA because
they have to expend more resources.  There is substantial confirmation of
this from the hissy fits that the NSA has thrown in the past about public
use of encryption, and their repeated attempts to get people to adopt
crypto with back doors.  I know some people think this is all an elaborate
smoke screen, but I think that level of paranoia is unjustified.  The
people working for the NSA aren't superhuman.  Encryption matters, even if
you can still recover metadata with traffic analysis.

Yes, you're not going to get absolute security against the NSA with cheap
wire encryption, but you *do* change the resource battle.  The NSA can
bring almost unlimited resources to bear *on single, high-value targets*,
and those require substantially different precautions.  But I'm just as
worried about the implications of huge databases of information about
people's on-line activities sitting in government databases.  (I'm
actually 

Re: client-side signature checking of Debian archives (Re: When should we https our mirrors?)

2016-10-25 Thread Alexander Wirt
On Tue, 25 Oct 2016, Ian Jackson wrote:

> Adrian Bunk writes ("Re: client-side signature checking of Debian archives 
> (Re: When should we https our mirrors?)"):
> > snake oil
> > snake oil
> 
> The phrase "snake oil" is very insulting.  I have asked you several
> times to stop.
> 
> Publicly CCing listmaster this time.
You are free to do that, but we are not babysitters. Unless things escalate,
we won't step in. And sorry, I fail to see how a phrase like "snake oil"
would justify an intervention from the listmaster side.

Alex - Debian Listmaster

P.S. This is my personal opinion as listmaster and not a coordinated
statement.



Re: client-side signature checking of Debian archives (Re: When should we https our mirrors?)

2016-10-25 Thread Ian Jackson
Adrian Bunk writes ("Re: client-side signature checking of Debian archives (Re: 
When should we https our mirrors?)"):
> snake oil
> snake oil

The phrase "snake oil" is very insulting.  I have asked you several
times to stop.

Publicly CCing listmaster this time.

Ian.

-- 
Ian Jackson <ijack...@chiark.greenend.org.uk>   These opinions are my own.

If I emailed you from an address @fyvzl.net or @evade.org.uk, that is
a private address which bypasses my fierce spamfilter.



Re: client-side signature checking of Debian archives (Re: When should we https our mirrors?)

2016-10-25 Thread Adrian Bunk
On Mon, Oct 24, 2016 at 04:33:57PM -0700, Russ Allbery wrote:
> Adrian Bunk  writes:
>...
> > I would assume this can be pretty automated, and that by NSA standards
> > this is not a hard problem.
> 
> Since the entire exchange is encrypted, it's not completely trivial to map
> size of data transferred to a specific package (of course, it's even
> harder if we reuse connections).  But the point I'm making is more that
> it's not something that just falls out of an obvious surveillance
> technique that has wide-ranging uses.  It requires someone to write code
> to *specifically* target Debian mirrors, which I think is much less likely
> than just collecting all the data and deciding to analyze it afterwards.
>...

If I were looking at the apt traffic, the most interesting for me would 
be the traffic to security.debian.org that a computer running Debian 
stable usually produces.

Just collecting data on when and how much HTTPS traffic is happening
should be sufficient to determine information like the following:
  What Debian release is running on that computer?
  Which security relevant packages are installed in that computer?
  Are security updates downloaded automatically or manually?
  In the latter case, are they installed in a timely manner?

When your adversary is powerful enough that he is capable of monitoring
your traffic with security.debian.org, then apt-transport-https is just 
snake oil.

The NSA might actually be very grateful that there are people who are 
promoting such snake oil as solution, since this lowers the probability 
of people looking for solutions that could make it harder for the NSA.

I would assume it is unlikely that the NSA is monitoring the connection 
between me and my nearest Debian mirror. This does of course depend on 
your geographical location.

I would assume it is likely that the NSA is monitoring the connection 
between me and security.debian.org.

By discouraging users from using mirrors for security.debian.org,
Debian is presenting a nearly complete list of all computers in
the world running Debian stable and their security update status
and policies on a silver plate to the NSA.

cu
Adrian

-- 

   "Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
   "Only a promise," Lao Er said.
   Pearl S. Buck - Dragon Seed



Re: client-side signature checking of Debian archives (Re: When should we https our mirrors?)

2016-10-24 Thread Russ Allbery
Paul Wise  writes:

> Debian has Tor onion service frontends to various Debian services,
> including several Debian machines with archive mirrors, this is
> implemented in an automated way using Puppet and onionbalance. So we do
> not rely on Tor exit nodes, just relays and the onion service system.

> https://onion.debian.org/
> https://anonscm.debian.org/cgit/mirror/dsa-puppet.git/tree/modules/onion

Oh, interesting, thank you.  I hadn't realized that.  That definitely
makes Tor more attractive.

-- 
Russ Allbery (r...@debian.org)   



Re: client-side signature checking of Debian archives (Re: When should we https our mirrors?)

2016-10-24 Thread Paul Wise
On Tue, Oct 25, 2016 at 7:33 AM, Russ Allbery wrote:

> Tor is easier for us as a project, since we don't really have to do
> anything (assuming we just rely on existing exit nodes).

Debian has Tor onion service frontends to various Debian services,
including several Debian machines with archive mirrors, this is
implemented in an automated way using Puppet and onionbalance. So we
do not rely on Tor exit nodes, just relays and the onion service
system.

https://onion.debian.org/
https://anonscm.debian.org/cgit/mirror/dsa-puppet.git/tree/modules/onion

> SSL is much harder for us as a project

For most debian.org services run by DSA, enabling SSL on a service is
one git commit away, thanks to Let's Encrypt.

Some things like snapshot are harder due to software or other issues.
For snapshot, varnish is the frontend and it doesn't support SSL.

All the debian.org mirrors except ftp.d.o are not actually run by DSA
and DSA occasionally need to change which domain points to which
mirror, so SSL for them is much more complicated.

-- 
bye,
pabs

https://wiki.debian.org/PaulWise



Re: client-side signature checking of Debian archives (Re: When should we https our mirrors?)

2016-10-24 Thread Russ Allbery
Adrian Bunk  writes:

> The government operating or having access to the mirror you are using is
> a lot more realistic and easier than the MITM with a fake certificate
> you were talking about.

Both of those were also in the category of things that I think are
unlikely attacks unless the government is specifically targeting you, so
that leaves me confused by what point you're trying to make here.

For the record, MITM with a fake certificate is certainly realistic for a
targeted attack, and is something that one has to think about.  There's a
reason why certificate pinning is a standard part of the defenses in a
higher-security SSL configuration.  I do agree (and in fact this was my
whole point) that it's unlikely a government would burn their certificate
capability this way just to figure out what Debian packages you're
downloading, though.
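
(In apt's case, a rough equivalent of pinning is to restrict the https
method to a specific CA bundle per host -- a sketch using the
apt-transport-https options of that era, with placeholder names:)

    // /etc/apt/apt.conf.d/02pin -- trust only this CA for this mirror
    Acquire::https::mirror.example.org::CaInfo "/etc/ssl/mirror-ca.pem";
    Acquire::https::mirror.example.org::Verify-Peer "true";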

> I would assume this can be pretty automated, and that by NSA standards
> this is not a hard problem.

Since the entire exchange is encrypted, it's not completely trivial to map
size of data transferred to a specific package (of course, it's even
harder if we reuse connections).  But the point I'm making is more that
it's not something that just falls out of an obvious surveillance
technique that has wide-ranging uses.  It requires someone to write code
to *specifically* target Debian mirrors, which I think is much less likely
than just collecting all the data and deciding to analyze it afterwards.

(That said, it would be possible to reconstruct the necessary data to do
the analysis later, I suppose.  But here too someone has to care at a
level deep enough to go write code to do it, as opposed to just doing ad
hoc queries.)

> Let me try to summarize my point:

> apt-transport-https makes it slightly harder to determine what packages 
> are being transferred, and this is good.

> When someone is seriously worried about a nation-state actor determining 
> what packages he downloads then apt-transport-https is not a solution, 
> and the proper solution is apt-transport-tor.

> I assume you will disagree regarding the word "slightly".
> Are we in agreement regarding the rest of my summary?

I disagree that either of them is a solution.  They're both mitigations,
and if a nation-state attacker is targeting you specifically, I wouldn't
want to rely entirely on either of them.

I agree with you that Tor is a stronger mitigation.

Tor is easier for us as a project, since we don't really have to do
anything (assuming we just rely on existing exit nodes).  SSL is much
harder for us as a project, but is simpler for the end user (and doesn't
require them to take a stance on the merits of Tor as a project; I don't
really want to get into a philosophical argument about the merits of
Internet anonymity, but this is not something everyone thinks is an ideal
to strive for as opposed to a situational fix for very specific risks).
In an ideal world we'd offer both.

> When someone is worried about the confidentiality of the information
> about what packages are installed on a system, only looking at the download
> side is also missing other areas that might be even more problematic.

This is true.  But the set of problems are largely independent.  While
solving the download side doesn't fix all the other problems, it also
doesn't interfere with fixing all the other problems, and is still needed
to make the overall system more secure.

The reason why addressing the download side is appealing to me is that we
know dragnet surveillance is common and ongoing, so the risk it's
addressing is tangible and we have some reasonably solid information about
how it works.  So while it may not be the most serious problem, it's one
of the problems we understand the best, and therefore are in a good
position to do something about.

> I would be a lot more worried about what reportbug does when a package
> suggests libdvdcss2 - in some jurisdictions this might just be enough
> when the government is looking for a reason to raid your home.

I'm a great believer in ubiquitous encryption, even if it seems silly,
just on the grounds of "why not?".  We should encrypt reportbug traffic
too, if we can.  Yes, a lot of the details get exposed at the other end
anyway (although not necessarily), but it's usually fairly trivial to
encrypt links, and if it is, there's basically no reason not to.

-- 
Russ Allbery (r...@debian.org)   



Re: client-side signature checking of Debian archives (Re: When should we https our mirrors?)

2016-10-24 Thread Adrian Bunk
On Mon, Oct 24, 2016 at 09:22:39AM -0700, Russ Allbery wrote:
> Adrian Bunk  writes:
> > On Sun, Oct 23, 2016 at 07:28:23PM -0700, Russ Allbery wrote:
> 
> >>...
> >> The value of HTTPS lies in its protection against passive snooping.  Given
> >> the sad state of the public CA infrastructure, you cannot really protect
> >> against active MITM with HTTPS without certificate pinning.
> 
> > You are implicitly assuming that mirrors can be trusted, 
> > and even that is not true.
> 
> > Who is operating ftp.cn.debian.org, and who has access to the logfiles 
> > on that server?
> 
> So, here, I think you're making the best the enemy of the good, which is
> always tempting in security.
> 
> Everyone (myself included) wants to create a system that's actually
> secure, the same way that we want to create software that works properly.
> But in security, even more so than most software, that's rarely possible.
> There's always another attack.  It's a sliding scale of tradeoffs, and
> just because you're not preventing all attacks doesn't mean that making
> things harder for the attacker isn't useful.
> 
> Looking at this worry in particular, note that you're now assuming that a
> nation-state has compromised a specific mirror (in some way, maybe just by
> running it without regard to the privacy of its users).  As pointed out,
> this only gets partial information, but more importantly, this requires
> the nation-state to actively do something.  They have to now either
> compromise the mirror or be running it in a way that may be discoverable.
>...

The government operating or having access to the mirror you are using is 
a lot more realistic and easier than the MITM with a fake certificate 
you were talking about.

> > When a nation-state actor analyzes all the traffic on a network
> > connection that also happens to carry the traffic between you and the
> > Debian mirror you are using, HTTPS won't make a difference.
> 
> Well, I disagree, largely because I think you're overestimating the
> willingness to do deep data analysis on something that's only a tertiary
> (at best) security objective.
> 
> Dragnet surveillance is the most common nation-state approach at the
> moment primarily because it's *really easy*.  Get a backbone tap
> somewhere, dredge all the unencrypted data, run simple analysis across it
> (pulling out URLs accessed is obviously trivial for unencrypted HTTP),
> throw it into some sort of huge database, and decide what you want to
> query for later.

Pulling out the dates and amounts transferred for HTTP traffic is
also trivial.

> Doing traffic analysis that requires analyzing encrypted data by object
> size or the like is still certainly *possible* (that was the point that I
> made in my first message to this thread), but it's not *trivial*.  It
> requires someone go write code, go think about the problem, and regularly
> gather information from a mirror to correlate encrypted object sizes with
> packages.  It requires a *human* actually *think* about the problem, and
> then set up and maintain an automated system that requires some level of
> ongoing babysitting.

I would assume this can be pretty automated, and that by NSA standards 
this is not a hard problem.

> Can nation-states do this?  Obviously, yes.  But the goal here isn't to
> prevent them from launching these sorts of attacks.  The goal is to make
> it expensive, to require that they justify a budget to hire people to do
> all this additional work that requires custom automation, and to raise the
> bar across the board to make simplistic dragnet surveillance unrewarding.
> This forces them into making some hard decisions about resource allocation
> and attempting more visible intrusions.  Hard decisions are expensive, not
> just in resources but also in politics.
> 
> It *is* a tradeoff, not perfect security, so we should obviously also take
> into account how much effort it takes on our side and whether we're
> expending less effort than we're forcing from our adversary.  But, in
> general, ubiquitous encryption is totally worth it from that sort of
> effort analysis.  Once you get good systems in place (Let's Encrypt
> seriously changed the calculation here), adding encryption is usually far
> easier than the work required on the snooping side to recover the sort of
> metadata they had access to unencrypted.  I think that's the case here.

Let me try to summarize my point:

apt-transport-https makes it slightly harder to determine what packages 
are being transferred, and this is good.

When someone is seriously worried about a nation-state actor determining 
what packages he downloads then apt-transport-https is not a solution, 
and the proper solution is apt-transport-tor.

I assume you will disagree regarding the word "slightly".
Are we in agreement regarding the rest of my summary?


When someone is worried about the confidentiality of the information about 
what packages are installed on a system, only looking at 

signature checking in libcupt (Re: client-side signature checking of Debian archives (Re: When should we https our mirrors?))

2016-10-24 Thread Eugene V. Lyubimkin
Hi Kristian,

To one of your side questions,

On 24.10.2016 02:33, Kristian Erik Hermansen wrote:
>> 1) The checking chain (e.g. gpgv and its callers) can have bugs. True, and 
>> the same holds: the checking layer for secure transports can also have bugs.
> 
> Agreed. Please let me know of a good test case to validate that your
> tools, which are not APT (?), are doing the right things. You said you
> maintained a tool which "downloads and validates Debian archives in a
> similar way APT does", which means not exactly the way APT does. Let
> me know the name of your tool and how to setup some test cases to
> validate your tool is doing things properly. Glad to spend some time
> on it and contribute any potential findings for the community benefit.

The tool I maintain is a minor, not widely used package manager, which may or 
may not be worth your time. It's called Cupt; the sources are at [1a] or [1b], 
namely the checking code at [2] and tests for common situations at [3]. One can 
play with those test cases, or install the tool and point it [4] at malicious 
servers. (A minimal sketch of the core signature check appears after the 
links below.)

There might be other packages in Debian which access repos not through libapt.


[1a] 
https://alioth.debian.org/plugins/scmgit/cgi-bin/gitweb.cgi?p=cupt/cupt.git;a=tree
[1b] https://github.com/jackyf/cupt
[2] cpp/lib/src/internal/cachefiles.cpp:verifySignature()
[3] test/t/query/repo-signatures/*
[4] same as APT, via /etc/apt/sources.list
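
For the curious: the heart of such a check in any APT-like client is a
gpgv run over the Release file and its detached signature. A minimal
sketch (illustration only, not Cupt's actual code; the keyring path is
the one shipped by debian-archive-keyring):

import subprocess, sys

def verify_release(release, release_gpg,
                   keyring="/usr/share/keyrings/debian-archive-keyring.gpg"):
    # gpgv exits non-zero on a bad, expired, revoked or missing signature.
    result = subprocess.run(["gpgv", "--keyring", keyring,
                             release_gpg, release],
                            capture_output=True, text=True)
    return result.returncode == 0

sys.exit(0 if verify_release("Release", "Release.gpg") else 1)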




Re: client-side signature checking of Debian archives (Re: When should we https our mirrors?)

2016-10-24 Thread Russ Allbery
Adrian Bunk  writes:
> On Sun, Oct 23, 2016 at 07:28:23PM -0700, Russ Allbery wrote:

>>...
>> The value of HTTPS lies in its protection against passive snooping.  Given
>> the sad state of the public CA infrastructure, you cannot really protect
>> against active MITM with HTTPS without certificate pinning.

> You are implicitly assuming that mirrors can be trusted, 
> and even that is not true.

> Who is operating ftp.cn.debian.org, and who has access to the logfiles 
> on that server?

So, here, I think you're making the best the enemy of the good, which is
always tempting in security.

Everyone (myself included) wants to create a system that's actually
secure, the same way that we want to create software that works properly.
But in security, even more so than most software, that's rarely possible.
There's always another attack.  It's a sliding scale of tradeoffs, and
just because you're not preventing all attacks doesn't mean that making
things harder for the attacker isn't useful.

Looking at this worry in particular, note that you're now assuming that a
nation-state has compromised a specific mirror (in some way, maybe just by
running it without regard to the privacy of its users).  As pointed out,
this only gets partial information, but more importantly, this requires
the nation-state to actively do something.  They have to now either
compromise the mirror or be running it in a way that may be discoverable.

Security is both technical and politics, and on the political side,
actively doing something is much riskier.  Nation-state surveillance teams
are generally not fond of headlines, and there are political consequences
to being caught with your hand in the cookie jar.  Therefore, technical
measures that force them to take an *active* measure to get data, as
opposed to just passively tapping and siphoning data, are, in practice,
quite effective.  Not because they don't have the technical capabilities
to do that, but because it's much riskier politically to do so, and often
that risk isn't worth the reward from their perspective.

It's true that tapping a single mirror could still be passive, but it's a
much trickier kind of passive than just snooping on a cable.  They have to
be passively collecting the actual server logs in some way, which means
presence on the box or access to some network where those logs are being
sent in the clear.  That does raise the bar substantially.

> Debian would accept debian.nsa.gov as a mirror, and the NSA might already 
> operate or have access to some current mirrors.

I'm a little dubious that this is true (I would recommend against it), but
even if so, Debian could also easily revoke that acceptance at any point
if we decided we were uncomfortable with it, which makes it an unreliable
way to gather this sort of data.

> When a nation-state actor analyzes all the traffic on a network
> connection that also happens to carry the traffic between you and the
> Debian mirror you are using, HTTPS won't make a difference.

Well, I disagree, largely because I think you're overestimating the
willingness to do deep data analysis on something that's only a tertiary
(at best) security objective.

Dragnet surveillance is the most common nation-state approach at the
moment primarily because it's *really easy*.  Get a backbone tap
somewhere, dredge all the unencrypted data, run simple analysis across it
(pulling out URLs accessed is obviously trivial for unencrypted HTTP),
throw it into some sort of huge database, and decide what you want to
query for later.

Doing traffic analysis that requires analyzing encrypted data by object
size or the like is still certainly *possible* (that was the point that I
made in my first message to this thread), but it's not *trivial*.  It
requires someone go write code, go think about the problem, and regularly
gather information from a mirror to correlate encrypted object sizes with
packages.  It requires a *human* actually *think* about the problem, and
then set up and maintain an automated system that requires some level of
ongoing babysitting.

Can nation-states do this?  Obviously, yes.  But the goal here isn't to
prevent them from launching these sorts of attacks.  The goal is to make
it expensive, to require that they justify a budget to hire people to do
all this additional work that requires custom automation, and to raise the
bar across the board to make simplistic dragnet surveillance unrewarding.
This forces them into making some hard decisions about resource allocation
and attempting more visible intrusions.  Hard decisions are expensive, not
just in resources but also in politics.

It *is* a tradeoff, not perfect security, so we should obviously also take
into account how much effort it takes on our side and whether we're
expending less effort than we're forcing from our adversary.  But, in
general, ubiquitous encryption is totally worth it from that sort of
effort analysis.  Once you get good systems in place (Let's Encrypt
seriously changed the calculation here), adding encryption is usually far
easier than the work required on the snooping side to recover the sort of
metadata they had access to unencrypted.  I think that's the case here.

Re: client-side signature checking of Debian archives (Re: When should we https our mirrors?)

2016-10-24 Thread Kristian Erik Hermansen
On Mon, Oct 24, 2016 at 2:33 AM, Adrian Bunk  wrote:
> You are implicitly assuming that mirrors can be trusted,
> and even that is not true.

No, not actually. Just presuming that the NSA doesn't operate ALL mirrors.
Of course they can operate single servers or a number of servers, but
that increases costs and makes it harder to passively surveil
ALL users.

> Who is operating ftp.cn.debian.org, and who has access to the logfiles
> on that server?
>
> Debian would accept debian.nsa.gov as a mirror, and the NSA might already
> operate or have access to some current mirrors.

Right, but that's a much smaller subset of ALL.

> When a nation-state actor analyzes all the traffic on a network
> connection that also happens to carry the traffic between you and
> the Debian mirror you are using, HTTPS won't make a difference.

If it doesn't make a difference, send me a PCAP of all your private
traffic captured from an intermediary node :) I mean, if you don't
seem to care, you won't mind me looking through your stuff. And I also
encourage you to configure your browsers and email clients to utilize
only plaintext HTTP / SMTP / IMAP / POP, perhaps on public wifi too,
so we can all read it. You know, I mean, if it "doesn't make a
difference to you" if you use HTTP or HTTPS or other unencrypted
protocols. The reason it matters so much with SecureAPT is that these
are critical tools running with root privileges on your system, leaking
a large amount of data about your system configuration and its security.
I don't think I need to belabor that point. HTTPS does make a huge
difference, and the entire Internet would not be using it if it "didn't
make a difference".

We can probably end the thread here because numerous respected @debian
contributors have confirmed the issues with confidentiality and seem
to be making efforts in that direction (hopefully for the next release).

-- 
Regards,

Kristian Erik Hermansen
https://www.linkedin.com/in/kristianhermansen



Re: client-side signature checking of Debian archives (Re: When should we https our mirrors?)

2016-10-24 Thread Adrian Bunk
On Sun, Oct 23, 2016 at 07:28:23PM -0700, Russ Allbery wrote:
>...
> The value of HTTPS lies in its protection against passive snooping.  Given
> the sad state of the public CA infrastructure, you cannot really protect
> against active MITM with HTTPS without certificate pinning.

You are implicitly assuming that mirrors can be trusted, 
and even that is not true.

Who is operating ftp.cn.debian.org, and who has access to the logfiles 
on that server?

Debian would accept debian.nsa.gov as a mirror, and the NSA might already 
operate or have access to some current mirrors.

> But that's
> fine; active attackers are a much, much rarer attack profile.  The most
> likely attack, and the one we're able to protect against here, is passive
> observation of mirror traffic used to build a database of who is using
> what package and at what version.  HTTPS doesn't *prevent* this, but it
> requires the attacker to do much more sophisticated traffic analysis, or
> take the *much* more expensive and *far* riskier step of moving to active
> interference with traffic, neither of which nation-state attackers want to
> do and neither of which they have the resources to do *routinely*.
> 
> It won't help if a nation-state actor is targeting you *in particular*.
> But it helps immensely against dragnet surveillance.

No, it does not help much.

When a nation-state actor analyzes all the traffic on a network 
connection that also happens to carry the traffic between you and
the Debian mirror you are using, HTTPS won't make a difference.

cu
Adrian

-- 

   "Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
   "Only a promise," Lao Er said.
   Pearl S. Buck - Dragon Seed



Re: client-side signature checking of Debian archives (Re: When should we https our mirrors?)

2016-10-24 Thread Eugene V. Lyubimkin
Hi Russ, Kristian,

On 24.10.2016 07:19, Kristian Erik Hermansen wrote:
> On Sun, Oct 23, 2016 at 7:28 PM, Russ Allbery  wrote:
>> The idea is to *add* HTTPS protection on top of the protections we already
>> have.  You're correct that it doesn't give you authentication of the
>> packages without a bunch of work, and we should assume that the general
>> public CA system is compromised.  But that actually doesn't matter much
>> for our purposes, since the point is to greatly increase the cost of
>> gathering data about what packages people have installed.
>>
>> The value of HTTPS lies in its protection against passive snooping.  Given
> 
> Exactly! Much better said than how I originally phrased these issues.
> 
>> what package and at what version.  HTTPS doesn't *prevent* this, but it
>> requires the attacker to do much more sophisticated traffic analysis, or
>> take the *much* more expensive and *far* riskier step of moving to active
>> interference with traffic, neither of which nation-state attackers want to
>> do and neither of which they have the resources to do *routinely*.
>>
>> It won't help if a nation-state actor is targeting you *in particular*.
>> But it helps immensely against dragnet surveillance.
> 
> Again, exactly right and well stated. We can never stop targeted
> attacks, but we can make passive data collection more expensive and
> increase the chances that a targeted attack is detected.

Yes, thank you for the explanations. I now get the point: this improves 
confidentiality more than integrity, and wards off most passive data 
gatherers.



Re: client-side signature checking of Debian archives (Re: When should we https our mirrors?)

2016-10-23 Thread Kristian Erik Hermansen
On Sun, Oct 23, 2016 at 7:28 PM, Russ Allbery  wrote:
> The idea is to *add* HTTPS protection on top of the protections we already
> have.  You're correct that it doesn't give you authentication of the
> packages without a bunch of work, and we should assume that the general
> public CA system is compromised.  But that actually doesn't matter much
> for our purposes, since the point is to greatly increase the cost of
> gathering data about what packages people have installed.
>
> The value of HTTPS lies in its protection against passive snooping.  Given

Exactly! Much better said than how I originally phrased these issues.

> what package and at what version.  HTTPS doesn't *prevent* this, but it
> requires the attacker to do much more sophisticated traffic analysis, or
> take the *much* more expensive and *far* riskier step of moving to active
> interference with traffic, neither of which nation-state attackers want to
> do and neither of which they have the resources to do *routinely*.
>
> It won't help if a nation-state actor is targeting you *in particular*.
> But it helps immensely against dragnet surveillance.

Again, exactly right and well stated. We can never stop targeted
attacks, but we can make passive data collection more expensive and
increase the chances that a targeted attack is detected.

-- 
Regards,

Kristian Erik Hermansen
https://www.linkedin.com/in/kristianhermansen



Re: client-side signature checking of Debian archives (Re: When should we https our mirrors?)

2016-10-23 Thread Russ Allbery
"Eugene V. Lyubimkin"  writes:

> I'm not sure that the benefits outweigh the costs. HTTPS requires that I
> trust third parties -- the mirror provider and a CA.  Gpgv doesn't require
> third parties.

It's critical here that we do not drop GPG.  We continue using GPG for the
integrity and authentication part of package retrieval.  If anyone has
proposed replacing the GPG signatures, well, I completely disagree with
that.

The idea is to *add* HTTPS protection on top of the protections we already
have.  You're correct that it doesn't give you authentication of the
packages without a bunch of work, and we should assume that the general
public CA system is compromised.  But that actually doesn't matter much
for our purposes, since the point is to greatly increase the cost of
gathering data about what packages people have installed.

The value of HTTPS lies in its protection against passive snooping.  Given
the sad state of the public CA infrastructure, you cannot really protect
against active MITM with HTTPS without certificate pinning.  But that's
fine; active attackers are a much, much rarer attack profile.  The most
likely attack, and the one we're able to protect against here, is passive
observation of mirror traffic used to build a database of who is using
what package and at what version.  HTTPS doesn't *prevent* this, but it
requires the attacker to do much more sophisticated traffic analysis, or
take the *much* more expensive and *far* riskier step of moving to active
interference with traffic, neither of which nation-state attackers want to
do and neither of which they have the resources to do *routinely*.
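
To illustrate what pinning would mean here, a minimal sketch; the mirror
host is only an example, and the pinned fingerprint is a placeholder you
would have recorded out of band beforehand:

import hashlib, ssl, sys

HOST = "deb.debian.org"   # example mirror, not a recommendation
PINNED = "0" * 64         # placeholder SHA-256 fingerprint, recorded earlier

# Fetch the certificate the server presents and compare its fingerprint
# against the locally stored pin, instead of trusting the public CA system.
pem = ssl.get_server_certificate((HOST, 443))
der = ssl.PEM_cert_to_DER_cert(pem)
if hashlib.sha256(der).hexdigest() != PINNED:
    sys.exit("certificate changed: possible MITM, or a routine key rotation")
print("pin matched")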

It won't help if a nation-state actor is targeting you *in particular*.
But it helps immensely against dragnet surveillance.

-- 
Russ Allbery (r...@debian.org)   



Re: client-side signature checking of Debian archives (Re: When should we https our mirrors?)

2016-10-23 Thread Kristian Erik Hermansen
On Sun, Oct 23, 2016 at 10:03 AM, Eugene V. Lyubimkin  wrote:
> Thank you for the long list of examples of what could go wrong. I'm happy I 
> don't have urgent fixes to apply.

Well, I would say the privacy issues are rather concerning. Security
is generally broken down into at least the following three categories:

* Confidentiality (X)
* Integrity (✓)
* Availability (✓)

Confidentiality is not currently a default, out-of-the-box benefit of
Secure APT. All of a user's software choices are exposed. You can even
uniquely identify users by which unpopular packages they fetch updates
for as they move their laptop around to various networks.

Availability is provided by the vast number of mirrors, so that is covered well.

Integrity is presumed from the GPG / hash verifications. Sure, you have
no publicly known vectors to fix. Presumably there are unknown vectors
that the NSA and others have developed but not made public. There are
really smart people out there who keep their vulnerabilities private
with ill intent. This is the concern with leaving clear pathways for
exploitation via unencrypted HTTP channels.

It is also highly likely that at least one member of the Debian
security team has leaked, or will at some point leak, a private GPG
repo signing key, most likely through unnoticed operational errors or
inadvertent actions. As such, without HTTPS, signing rogue release /
package files to target a specific network can take place in secret
(the targeted organization would have to detect the tainting at their
endpoint), because the NSA could target that network with a rogue
"Entity-in-the-Middle" (EitM) attack using the stolen private key. If
HTTPS+HPKP were utilized, pulling off the attack would be much more
costly: they would then have to somehow get the mirrors the target
network is using to sync the rogue release / package files without
anyone noticing (which is more easily detectable by mirror system
admins).

And I want to reiterate this attack:

Given a global view of the Internet, the NSA could craft queries like:

"Show us all German systems that originate from network ranges
belonging to <organization X> that have never requested a patch
for <CVE Y>"

In the above example, they don't need to directly attack APT or
Debian. They can automatically build up a database of CVE attacks that
will work both remotely and locally against the organization, because
they can see that the organization never installed those packages.
The unencrypted nature of HTTP thus leaks your entire
organizational security posture to government players for selective
targeting. We know the NSA is doing this, from the Snowden
revelations. See the XKeyscore program for more info. There are other
programs that specifically target Linux system administrators. So,
just being on this list might actually put you into an NSA selector
group for targeting as well (be warned, if you didn't already know).

> 1) The checking chain (e.g. gpgv and its callers) can have bugs. True, and 
> the same holds: the checking layer for secure transports can also have bugs.

Agreed. Please let me know of a good test case to validate that your
tools, which are not APT (?), are doing the right things. You said you
maintained a tool which "downloads and validates Debian archives in a
similar way APT does", which means not exactly the way APT does. Let
me know the name of your tool and how to setup some test cases to
validate your tool is doing things properly. Glad to spend some time
on it and contribute any potential findings for the community benefit.

> 1a) If a user downloads keys not via packages, downloading may go wrong. 
> True. If I use debian-archive-keyring, I'm fine.

Right, but there is still a bootstrap problem for obtaining trusted keys in
various circumstances. The SecureAPT wiki even documents these cases
and instructs the user to use out-of-band methods for obtaining the
key [1], opening up such attacks (a sketch of such a check follows the
link below).

[1] https://wiki.debian.org/SecureApt#How_to_find_and_add_a_key
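
A minimal sketch of the kind of out-of-band check that implies; the
expected fingerprint and the key file name are placeholders, and
GnuPG >= 2.1 is assumed for the show-only import:

import subprocess

EXPECTED = "0000000000000000000000000000000000000000"  # placeholder, obtained out of band

# Ask gpg to display (not import) the downloaded key in machine-readable form.
out = subprocess.run(
    ["gpg", "--with-colons", "--import-options", "show-only",
     "--import", "downloaded-key.asc"],
    capture_output=True, text=True).stdout

# "fpr:" records carry the fingerprint in field 10.
fprs = [line.split(":")[9] for line in out.splitlines() if line.startswith("fpr:")]
print("fingerprint matches" if EXPECTED in fprs
      else "fingerprint mismatch: do not import")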

> 3) The content of packages themselves may be insecure if they trust some
> third-party HTTP hosts for downloading their own stuff. True, but not
> relevant to the topic of checking package integrity.

Again, I think the problem is more about CONFIDENTIALITY than
integrity, even though integrity attacks are likely possible and have
been demonstrated in the past. Surely they will surface publicly again
in the future.

> 4) If a user gets one malicious key, the game is over. True, but so it is if 
> a user gets one malicious repo server -- maintainer scripts from any package 
> have root access to all of your system.

I think the point is that the third party may not necessarily be
malicious, just that they are not as operationally savvy about
protecting their GPG signing keys as the Debian security team is, meaning
that the NSA could attack them more easily. This could even be an
older repo that is no longer operational, but whose key is still trusted
on the system from a prior installation. The NSA could 

Re: client-side signature checking of Debian archives (Re: When should we https our mirrors?)

2016-10-23 Thread Eugene V. Lyubimkin
Hi,

[ please don't CC me directly ]

On 23.10.2016 17:20, Kristian Erik Hermansen wrote:
> On Sun, Oct 23, 2016 at 7:23 AM, Eugene V. Lyubimkin  
> wrote:
>> I'm a developer of a tool which downloads and validates Debian archives
>> in a similar way APT does.
>>
>> As you use the word "theoretically", that suggests that practically
>> one can bypass the validation. Could you please list all the numerous ways
>> to bypass it, so we'd fix our software?
> 
> I will detail more recent issues in the future, but just to start:

Thank you for the long list of examples of what could go wrong. I'm happy I don't 
have urgent fixes to apply.

Things you mentioned seem to fall into one of the following categories:

1) The checking chain (e.g. gpgv and its callers) can have bugs. True, and the 
same holds: the checking layer for secure transports can also have bugs.

1a) If a user downloads keys not via packages, downloading may go wrong. True. 
If I use debian-archive-keyring, I'm fine.

2) Downloading clients have bugs. True.

3) The content of packages themselves may be insecure if they trust some 
third-party HTTP hosts for downloading their own stuff. True, but not relevant 
to the topic of checking package integrity.

4) If a user gets one malicious key, the game is over. True, but so it is if a 
user gets one malicious repo server -- maintainer scripts from any package 
have root access to all of your system.


I'm not sure that the benefits outweigh the costs. HTTPS requires that I trust 
third parties -- the mirror provider and a CA. Gpgv doesn't require third 
parties. To me, that makes HTTPS (even with HPKP) weaker in principle than 
offline, medium-agnostic cryptographic content checks. Or am I wrong here -- 
will the suggested HTTPS+HPKP+... scheme protect me from government players?



Re: client-side signature checking of Debian archives (Re: When should we https our mirrors?)

2016-10-23 Thread Kristian Erik Hermansen
Hi :)

On Sun, Oct 23, 2016 at 7:23 AM, Eugene V. Lyubimkin  wrote:
> I'm a developer of a tool which downloads and validates Debian archives
> in a similar way APT does.
>
> As you use the word "theoretically", that suggests that practically
> one can bypass the validation. Could you please list all the numerous ways
> to bypass it, so we'd fix our software?

I will detail more recent issues in the future, but just to start:

* Numerous attacks against the downloading client itself (buffer
management flaws to gain code exec, etc)

* Attacks against a GnuPG client parsing public keys or validating signatures
** CVE-2014-4617
** CVE-2013-4402
** CVE-2013-4351
** CVE-2006-6235
** CVE-2006-0455
** CVE-2006-0049
** CVE-2001-0071

* Attacks against weak pre/post-install scripts that retrieve network
data over HTTP

* If your GPG private key is compromised (factored or stolen) without
your knowledge, HTTPS+HPKP helps make arbitrary targeting / infecting
of users much more difficult.

* Potentially trick APT into thinking the install is coming from a media
disc and bypass Release file signatures

* If a user adds a GPG key for another third-party product repository
(e.g. oracle-java) and that key was generated by the NSA to appear as a
legitimate one, HTTPS+HPKP prevents the NSA from tampering with core
packages too easily (since they hold a private key that likely many
users have trusted).

* If a user downloads a GPG public key by searching the public key
servers, the NSA can potentially exploit CVE-2012-6085 to overwrite the
local store of trusted GPG keys with a key store of their choosing.
See also CVE-2008-1530.

* CVE-2012-3587 / CVE-2012-0954 -- APT 0.7/0.8, when using the apt-key
net-update to import keyrings, relies on GnuPG argument order and does
not check GPG subkeys, which might allow remote attackers to install
Trojan horse packages via a man-in-the-middle (MITM) attack.
HTTPS+HPKP would have made this much harder.

* More PitM attacks via CVE-2009-1358 -- "apt-get in apt before 0.7.21
does not check for the correct error code from gpgv, which causes apt
to treat a repository as valid even when it has been signed with a key
that has been revoked or expired, which might allow remote attackers
to trick apt into installing malicious repositories."

* Potentially attack NTP to change the date on a targeted Debian
client system in order to cause exploitable exceptions for use in a
package tampering attack shortly thereafter (one client-side mitigation
is sketched at the end of this message).

There are other attacks too, but this should suffice for now. I think
the upshot here is to leverage HTTPS to minimize the exposure from
APT, GPG, and related parsing bugs... unless you truly believe that
programs interpreting untrusted data over HTTP are 100% securely
constructed (quite unlikely).
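
On the NTP/date item above: one client-side mitigation is to refuse
archive metadata that is past its Valid-Until date, the way APT's
Check-Valid-Until logic does. A minimal sketch, assuming a Release file
that has already been signature-verified:

from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def release_still_valid(release_path):
    # Release files carry an RFC 2822-style UTC date in "Valid-Until:".
    with open(release_path, encoding="utf-8") as f:
        for line in f:
            if line.startswith("Valid-Until:"):
                deadline = parsedate_to_datetime(line.split(":", 1)[1].strip())
                return datetime.now(timezone.utc) <= deadline
    return False  # no Valid-Until field: treat as suspect, for this sketch

print(release_still_valid("Release"))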

-- 
Regards,

Kristian Erik Hermansen
https://www.linkedin.com/in/kristianhermansen
https://profiles.google.com/kristianerikhermansen



client-side signature checking of Debian archives (Re: When should we https our mirrors?)

2016-10-23 Thread Eugene V. Lyubimkin
Hello Kristian,

On 23.10.2016 15:04, Kristian Erik Hermansen wrote:
> [...]
> Although APT theoretically protects tampering of packages in transit
> over HTTP based on the signing key, there are numerous ways to exploit
> the plaintext HTTP protocol in transit and the way APT handles some
> aspects of validation. [...]

I'm a developer of a tool which downloads and validates Debian archives
in a similar way APT does.

As you use the word "theoretically", that suggests that practically
one can bypass the validation. Could you please list all the numerous ways
to bypass it, so we'd fix our software?