Kent Fredric posted on Sun, 21 Sep 2014 09:14:36 +1200 as excerpted:

> That is to say: without gpg, you can just create some random commit with
> some arbitrary content and push it somewhere, and you can pretend you're
> a gentoo dev and pretend you're writing commits as them.
> 
> GPG sufficiently prevents that from happening, and takes it from amateur
> grade imposter requirements to NSA grade imposter requirements. And
> that's not a bad compromise for being imperfect.

I've seen this idea repeated several times in this thread and it bothers 
me.

In practice, gpg doesn't take it to NSA grade, even if in theory it might.

The problem is this.  A gpg signature does *NOT* ensure that the person 
whose name is attached to a public/private key pair actually did the 
signature.  *ALL* it ensures is that someone with access to the 
particular private key in question signed the content.

Gpg doesn't know or care whether the person with that signing key is who 
they say they are.  All it knows or cares about is that whoever they are, 
they have that key.  If the person who owned that key didn't keep the 
private half secure and someone else got ahold of it, game over.  Until 
the compromise is caught and the key revoked, that person can act with 
impunity as the person-in-possession of that key.
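To make that concrete, here's a minimal sketch using a throwaway keyring 
(the key, name, and email are purely hypothetical, for the demo only).  
It shows that gpg verification only ever establishes *which key* signed 
something, never *who* was at the keyboard:

```shell
# Throwaway keyring so this demo doesn't touch a real one.
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"

# Generate an unprotected demo key (identity purely hypothetical; a real
# signing key would have a passphrase -- omitted here so the demo runs
# non-interactively).
gpg --batch --quiet --gen-key <<'EOF'
%no-protection
Key-Type: RSA
Key-Length: 2048
Name-Real: Some Gentoo Dev
Name-Email: somedev@example.org
Expire-Date: 0
%commit
EOF

# A file standing in for commit content.
echo 'arbitrary commit content' > "$GNUPGHOME/commit.txt"

# Anyone with read access to this $GNUPGHOME can now sign as
# "Some Gentoo Dev" -- gpg never asks who is at the keyboard.
gpg --batch --quiet --detach-sign "$GNUPGHOME/commit.txt"

# Verification reports a good signature from "Some Gentoo Dev"
# regardless of who actually ran the signing command above.
gpg --verify "$GNUPGHOME/commit.txt.sig" "$GNUPGHOME/commit.txt"
```

The math checks out either way; the binding from key to human being is 
exactly the part gpg cannot see.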

Now realistically, gentoo has ~250 devs working in all sorts of different 
situations.  What is the chance that NONE of those 250 people EVER lets 
someone else have access, whether by letting them borrow the machine and 
then stepping away, or by losing a laptop in a taxi or something, or via 
malware?

IIRC the number of folks with kernel.org access was something similarly 
close to ~250 or so, before someone got their access creds stolen and 
kernel.org got hacked.

And as far as we know, that was *NOT* the NSA.  It was just some cracker 
group wanting access to good network bandwidth for their botnet, and they 
either didn't realize what they had or didn't know what to do with it 
once they realized it.

Basically, with ~250 devs, we pretty much must assume that somebody's 
secret key is compromised at any point in time.  We don't know whose, and 
we don't know whether it's even in the hands of bad guys rather than some 
innocent who doesn't have the foggiest.  We might get lucky, but at any 
particular point someone's key either is compromised or relatively soon 
will be.  With 250 devs out there living life, it's foolish to assume 
otherwise.


With 250 devs with signing keys and all of them having access to the 
entire tree, their humanity is the weak link, not SHA1.  Worrying about 
SHA1 is a major exercise in unnecessary pain compared to this weak link.  
No NSA-grade resources needed, and with 250 people out there spinning the 
roulette wheel of life every day, betting that they aren't going to forget 
their laptop in a taxi somewhere, it's either already happened or it WILL 
happen.  That's a given.

Plus, even the NSA has its Edward Snowdens.  Perhaps it won't be some 
bad guy getting ahold of a key.  It's just as likely to be a "good" 
gentoo dev either turning bad, or one that was never "good" in the first 
place.

So at least from here, all this worry about SHA1 is much ado about 
nothing.  The real worry is elsewhere.  Someone has or will have 
unauthorized access to a signing key, and once they do, it's simply a 
matter of chance whether they're a bad guy who knows what to do with 
it.  The real question is what systems we have in place to catch that, 
and to stop the losses once we do detect it.  Because now or later, it 
either has already happened or WILL happen.  We'd be foolish to assume 
anything else.

And git's not going to change that one bit.  Neither will all the signing 
and secure hashes in the world.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman

