Control: tag -1 wontfix

On Tue, 2018-03-13 at 13:47:10 -0400, Konstantin Ryabitsev wrote:
> On 03/13/18 05:33, Uwe Kleine-König wrote:
> >>> But it also has an impact on security: As long as the signature isn't
> >>> verified I have to consider the .tar.xz "untrusted" and the less tools I
> >>> have to make operate on it the better.  This scheme allows an attacker
> >>> that has control over a mirror to provide a .tar.xz that makes unxz do
> >>> undesirable things, see https://en.wikipedia.org/wiki/Zip_bomb for an
> >>> attack idea.
> >>
> >> Which is why we provide sha256sums.asc in each directory.
> > 
> > That would be worth to point out more prominently on the above webpage
> > then IMHO.
> 
> We do, we have a whole section about sha256sums.asc files:
> https://www.kernel.org/signature.html#kernel-org-checksum-autosigner-and-sha256sums-asc
> 
> > When you recompress files you have to resign your sha256sum file, so I
> > don't see the advantage "without needing to re-sign all files" you
> > mentioned above. 
> 
> Released tarballs carry signatures from developers, not from any
> automated infrastructure, so you can see how that complicates the
> picture if we have to ask Linus or Greg KH to create new signatures for
> all tarballs they've ever created.

That's not incompatible with also generating signatures on the
receiving side. In Debian we distinguish between two roles: the
developers prepare releases and sign the source and binary packages
they upload to the Debian archive, which verifies those signatures
against the current list of allowed uploaders and checks that their
keys are not weak, expired, etc.; and the archive then signs the
metaindices that users download and verify. Those archive signatures
can use keys that are rotated continuously, expired, and re-issued at
any time, without bothering developers.

This also means that the archive has to deploy and practice key
rotation, which helps if a disaster ever strikes.

> > (Also recompressing has the immediate downside to break
> > third-party tools that rely on unchanged files from upstream, which IMHO
> > outweighs the disk space gained from recompressing.)
> 
> I would say such tools are wrong because they expect non-normalized
> formats to remain constant. I appreciate that I'm basically saying
> "everyone is doing it wrong and we're the only ones who are shiny and
> smell nice," but I do believe there's at least solid technical reasoning
> behind our decision to sign .tar archives and not the compressed versions.

I understand the apparent appeal of treating these artifacts as
generated output. But IMO they are things that get released and should
be considered immutable (like a crafted object), not regenerated after
the fact with different tools; as mentioned earlier in the thread,
that would break tons of expectations.

> I think it will help you understand the reasoning more if I explain the
> workflow behind how releases are currently produced (you can see my talk
> about the entirety of the release process here:
> https://www.youtube.com/watch?v=vohrz14S6JE):
> 
> 1. Linus creates a .tar archive locally on his laptop, using "git
> archive" -- which is always deterministic
> 2. Linus creates a detached signature of that tar archive
> 3. Linus sends us a very small request with just three things in it
> 
>    a. the tag
>    b. the prefix to use with "git archive"
>    c. the detached signature
> 
> 4. We use these to create the .tar archive from our version of the tree,
> verify it against Linus's detached signature, and if (and only if) the
> signature verifies, we create .gz and .xz archives and upload them to
> the frontends.
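
The workflow above rests on "git archive" being deterministic for a
given commit and prefix. That property is easy to check locally; the
following is just a sketch with a throwaway repository, and all names
in it are made up:

```shell
# Sketch: confirm "git archive" emits byte-identical tarballs for the
# same commit and prefix (throwaway repo; all names are made up).
set -e
rm -rf /tmp/archive-demo && mkdir /tmp/archive-demo && cd /tmp/archive-demo
git init -q repo && cd repo
git config user.email demo@example.org
git config user.name Demo
echo 'hello' > file && git add file
git commit -q -m 'initial'
# Two independent invocations for the same commit and prefix:
git archive --prefix=demo-1.0/ HEAD > ../a.tar
git archive --prefix=demo-1.0/ HEAD > ../b.tar
cmp ../a.tar ../b.tar && echo 'archives are identical'
```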

> There are the following major benefits behind this process:
> 
> 1. Linus only has to upload a few KB of data to produce a release,
> instead of 200+ MB of combined .xz and .gz archive data. Since he
> routinely produces releases while at conferences and remote diving
> locations, he greatly appreciates not having to do that.
> 2. More importantly, if Linus's laptop is compromised and someone tries
> to sneak in a trojaned tarball between the time when "git archive"
> finishes and "gpg --detach-sign" fires off, the signature verification
> will fail when we try to generate the tarball on our end. Any trojans
> would need to exist in the git tree, where they have a much greater
> chance of being discovered than in a one-off tarball. By using our
> current procedure we significantly reduce the risks of workstation
> compromises resulting in trojaned tarballs carrying legitimate developer
> signatures.

Sure, that makes sense from a developer-to-archive PoV, but…

> It's true, we could ask Linus to generate signatures for the .xz archive
> on his end, but this would require that he runs "xz -9" on a 600MB
> tarball and wait half an hour for it to finish -- and then hope we
> produce the same resulting .xz on our end, which is not at all
> guaranteed between different xz versions, whereas git has tests that
> specifically check that git-archive generated .tar archives are
> identical between git releases.
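
The fragility of the compressed layer is easy to demonstrate even with
gzip, which by default embeds the input file's name and mtime in its
header, so compressing identical content can yield different bytes.
This is only a sketch (temporary files, made-up names), not the
kernel.org tooling:

```shell
# Sketch: identical content, different compressed bytes.
# gzip by default stores the input's name and mtime in the header, so
# two compressions of the same data can differ; "gzip -n" omits both.
set -e
cd "$(mktemp -d)"
echo 'same content' > a
cp a b
touch -d '2001-01-01' a           # give the copies different mtimes
touch -d '2002-02-02' b
gzip -k a b                       # default: name/mtime land in the header
cmp -s a.gz b.gz || echo 'default gzip output differs'
gzip -kn -S .ngz a b              # -n: reproducible, no name/mtime
cmp -s a.ngz b.ngz && echo 'gzip -n output is identical'
```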

…as mentioned above, that does not mean additional signatures could
not be generated for the archive-to-user transfers.

> > Also for the attack vector against the decompressor, this effectively
> > renders the developer's signature useless because I have to trust the
> > sha256sum.asc (and so the archive key) before handing the compressed
> > file to the decompressor.
> > (Yeah I know, this is harder to exploit than introducing changes to the
> > tar archive, but IMHO this is no reason to keep this flaw unfixed.)
> 
> I hope I've demonstrated how "fixing" this attack vector opens up a
> whole another one that is much, much worse.

Not really. :)

> > Would it be possible to provide signatures on the compressed archives
> > using the same key that today signs sha256sums? I imagine this wouldn't
> > result in a significant retooling issue on your side and in return it
> > would simplify the handling of signature verification for all those who
> > care.
> 
> No, because this would pretty much guarantee that people will not bother
> checking developer signatures and would just rely on automatically
> generated ones. This would violate our grand principle of "trust the
> developer, not infrastructure."

Well, I guess this is where we'll disagree. It means that once a
developer's key is compromised, lost, or expired, old releases can no
longer be verified, and fixing that requires the same amount of
re-signing and key rotation, but in an ad-hoc way, with tons of work
for the developers involved. This does not seem very optimal TBH.

In the end you need to trust the infrastructure, because not everyone
knows the lore of who maintained what during which periods, who is still
active and who is out (or even dead!), or which developer keys have been
compromised. You need to fetch that trusted and curated keyring from
somewhere anyway. So trusting the developer only works if you know the
innards of the social structure; otherwise it will always be inferior
to trusting the infrastructure. :)

> I believe our approach has merit and results in better security
> protections. To verify the validity of any release you should:
> 
> 1. Download the tarball and sha256sums.asc
> 2. Verify the signature on sha256sums.asc using a trusted keyring
> 3. Verify the tarball hash in sha256sums.asc
> 4. Decompress the tarball (in a jailed environment, if you like)
> 5. Verify the developer signature on the .tar file against a trusted keyring

I guess I just do not understand why, given that sha256sums.asc is
automatically generated, a signature per compressed tarball could not
be generated in addition.
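
For reference, step 3 of the procedure quoted above can be exercised
locally; the gpg steps (2 and 5) need a populated keyring, so they
appear only as comments. All filenames here are made up, and the
"tarball" is a local dummy file:

```shell
# Sketch of the kernel.org check steps; filenames are made up and the
# gpg invocations are shown as comments, since they need real keyrings:
#   gpg --verify sha256sums.asc                        # step 2
#   gpg --verify linux-4.16.tar.sign linux-4.16.tar    # step 5
# Step 3, the checksum verification, simulated with a dummy file:
set -e
cd "$(mktemp -d)"
echo 'pretend tarball contents' > linux-4.16.tar.xz
sha256sum linux-4.16.tar.xz > sha256sums
sha256sum --check sha256sums
```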

In any case, the current kernel archive practices seem rather
impractical, so as it stands I'm not planning to add support for
signatures on uncompressed tarballs: on its own that's just insecure
and awkward to handle (either wasted disk or CPU), and the
sha256sums.asc flow would be rather awkward to handle as well.

I'm thus marking this wontfix and will be closing shortly.

Thanks,
Guillem
