On Oct 29, 2015, at 5:32 PM, Eduard wrote:
>
> On 10/29/2015 06:50 PM, Warren Young wrote:
>> On Oct 29, 2015, at 3:40 PM, Eduard wrote:
>>
>>>> most of the attacks on SHA-1 only apply to standalone blob cases
>>>
>>> And individual files (that are part of commits). That won't show up
>>> in the timeline.
>>
>> Do you mean newly-added files?
>
> I'm talking about generating collisions for non-control artifacts
> (actual files), not control artifacts.

Oh, I see what you mean. You’re making the same point Ron W did: if
you replace the file blob data at the tip of a branch, you don’t get a
timeline entry for that change. (You can do it farther up the tree,
too, but that’s useless unless someone checks out an old version of
the software.)

I assume the Fossil sync algorithm won’t allow a remote Fossil to
replace an existing artifact. If so, that attack only works if you
have control of the server hosting a Fossil repo that others sync
from. That would bypass the problem of not being able to spoof those
who already have an existing clone of the repo, since the evil file
hashes to the same value as the one they already have.

But by the same token, I don’t see how to get those with existing
copies of that file to download the new one. The sync protocol should
skip the “unchanged” file, since the client already has an artifact
with that ID.
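To make that concrete, here’s a little Python sketch. It isn’t
Fossil’s actual sync code, just the generic behavior of any
content-addressed store, which is all my argument assumes: artifacts
are keyed by their hash, so an ID the client already holds is never
requested again, and an attempted overwrite is a no-op.

    import hashlib

    class ArtifactStore:
        def __init__(self):
            self.artifacts = {}        # artifact ID -> blob content

        def artifact_id(self, blob):
            return hashlib.sha1(blob).hexdigest()

        def insert(self, blob):
            aid = self.artifact_id(blob)
            if aid in self.artifacts:  # already have this ID: skip it,
                return False           # even if the content differs
            self.artifacts[aid] = blob
            return True

        def wanted(self, remote_ids):
            # During sync, a client asks only for IDs it lacks.
            return set(remote_ids) - set(self.artifacts)

    clone = ArtifactStore()
    clone.insert(b"original file contents")
    # A rooted server swaps in a colliding blob, but it advertises the
    # same artifact IDs, so an existing clone requests nothing new:
    print(clone.wanted(clone.artifacts.keys()))   # -> set()

Only a fresh clone, which starts with no artifacts at all, would ever
pull the evil blob.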
I also wonder what happens if someone with an existing checkout checks
in a diff against the changeling file, and the diff overlaps the evil
bits. I assume either the server will try to apply the patch and fail,
or the next person to clone the repo will get a clone that fails to
open.

>> Where can you put such a root of trust in the Fossil case? There is
>> no central presumed-secure site with Fossil. Remember, you were just
>> positing that the central repo's server got rooted.
>
> There is more than one answer, but one is that the root of trust is
> the PGP private keys on the individual developers' personal
> computers. The developers' private keys should never ever reach the
> public central repo server.

Ah: you’re presupposing the existence of a PGP PKI that everyone’s
willing to use. Observe how PGP email has completely failed to take
over the world, even after a quarter of a century.

Yes, I know about keyservers. I also know there’s more than one, and
that you get a lot of resistance from most people when you tell them
to go get your public key.

TLS works because there’s a financial motivation for people to pay one
of the trusted CAs for a database record that costs maybe $1, max,
over its valid lifetime to generate and store. Financial arguments
work within a company, but not in an open source project.

>> Plus, you can bypass Gatekeeper for $99: get a code signing cert
>> from Apple and sign your evil packages with it. It’ll work until
>> Apple catches you and revokes your cert. Almost no one checks *who*
>> signed the package; all they know is that the OS let them install it
>> when they double-clicked it.
>
> That's actually kind of depressing.

Every commercial code signing system I’ve used (OS X, Windows, and
Adobe Flex/AIR) works this way. They’re basically variants on the TLS
certificate scheme, which is why Verisign is the certificate provider
for so many of these schemes:

http://www.symantec.com/products-solutions/families/?fid=code-signing

iOS throws in an additional wrinkle: un-rooted iOS devices won’t
install an app that isn’t co-signed by Apple. In that way, it is more
like a Debian package. The Apple App Store for OS X has the same
restriction, but unlike with iOS, there’s nothing in OS X forcing you
to get your apps from the App Store. I believe Android is the same
way, except that it has a Gatekeeper-like exception path that lets you
install unsigned apps.

>>> - Every artifact must be hashed by every known algorithm.
>>
>> I’m assuming it’s possible to change from one algorithm to another
>> mid-stream, as long as the client knows all of the algorithms in use
>> and is told where the change points occur.
>>
>> Do you know for a fact that you cannot do this?
>
> I'm not sure what you mean nor why it would be necessary.

I mean that I think it’s possible to replace a Fossil server and
client pair with new ones that understand two different hash
algorithms, and for those two to use the old hash algorithm on old
artifacts and the new one on new artifacts. At the transition point,
you’ll have a manifest containing new-style M cards and old-style P
cards. Why can’t that work? The only reason to recompute old hashes
would be to prevent replacement of old file artifacts, which is not
very useful.
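Here’s a sketch of the sort of dual-algorithm client I have in mind,
again in Python rather than anything Fossil actually contains.
“BetterHash” is just this thread’s placeholder name, so SHA-256 stands
in for it here; and rather than recording change points explicitly,
this version lets the length of the artifact ID itself say which
algorithm produced it:

    import hashlib

    OLD_HEX_LEN = 40   # old-style IDs: SHA-1, 40 hex digits
    NEW_HEX_LEN = 64   # new-style IDs: SHA-256 stand-in, 64 hex digits

    def new_id(blob):
        # Everything committed after the transition gets a new name.
        return hashlib.sha256(blob).hexdigest()

    def verify(artifact_id, blob):
        # A post-transition client knows both algorithms, and the ID
        # itself says which one to check against, so old artifacts
        # never need rehashing.
        if len(artifact_id) == OLD_HEX_LEN:
            return hashlib.sha1(blob).hexdigest() == artifact_id
        if len(artifact_id) == NEW_HEX_LEN:
            return hashlib.sha256(blob).hexdigest() == artifact_id
        raise ValueError("unrecognized artifact ID")

    blob = b"new file contents"
    assert verify(new_id(blob), blob)

A transition-point manifest can then mix the two schemes freely: its
new-style cards carry 64-digit IDs while its old-style P cards still
name 40-digit parents, and verify() handles both.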
> I'm mostly referring to colliding on non-control artifacts (i.e.
> actual files).

Oh, I see: by successfully executing a preimage attack, you can
replace a file blob without rewriting the manifest that refers to it.
I thought the file blobs were also chained somehow, but I can’t back
that up by skimming the Fossil file format wiki article. It looks like
only the manifests are chained.

Unless I’m missing something, that puts me back in the “time to plan
Fossil’s SHA-1 exodus” camp.

>>> I personally don't think we'll ever need to go past BetterHash-512.
>>
>> I’m not sure if you’re saying that 512 bits will be enough forever,
>> or that we already have the last hash algorithm we will ever need.
>
> http://www.mail-archive.com/[email protected]/msg21704.html

Yes, I know about the heat-death-of-the-universe arguments. I’m just
saying that you’re assuming no one can knock BetterHash-512’s
complexity down from 2^256 to the 2^dozens range we’ve seen with MD5
and SHA-1.

I feel more confident about such observations when it comes to things
like address space sizes. We just need to get to 256-bit addressing so
we can store all the relevant parameters of every fundamental particle
in the universe, and thus have a perfect simulation of the universe.
That’ll end the current universe and start the next one. :)

