On Monday, June 5, 2017 at 9:54:03 PM UTC+2, Peter Todd wrote:
>
> On Mon, Jun 05, 2017 at 11:15:33AM -0700, Axel wrote: 
> > > Bitcoin block hashes are a chain, so it doesn't make any sense to
> > > include more than one, unless you're worried about reorgs.
> >
> > Agree. Reorgs are the only reason to include more than one, but 10 seems
> > like overkill. Reorgs aren't even an ultimate threat in this case; we
> > don't have a double-spending problem or similar. The only reason for us
> > to worry about reorgs is future availability of the blockchain data.
> > Three blocks is generally considered very secure, so maybe we can go
> > with that.
>
> Ah, but see, since Bitcoin blocks are in a chain, a block hash ten blocks
> deep is only invalidated in the event of an (extremely) rare re-org more
> than ten blocks deep.
>

> > > If you are worried about reorgs, for the purpose of time-related
> > > proofs, remember that Bitcoin block header times are *very* imprecise,
> > > as the consensus doesn't tightly bound the claimed block time for a
> > > bunch of reasons. So as a conservative rule of thumb, I would only
> > > consider a block header timestamp to have a precision within about a
> > > day. With careful analysis you can get tighter bounds than that, but
> > > in practice that's rarely very useful anyway.
> >
> > I agree that we don't need more precision than a day. As I said above,
> > the only worry is that a reorg would make the block data unavailable in
> > the future, rendering the proof unverifiable.
>
> Right, but if you don't need better precision, just picking a *single*
> blockhash, say, ten blocks back is totally fine. On average that'd be a
> block a bit over an hour and a half old.
>

Your point is valid; this idea had escaped me.
 

>
> The only reason you'd want to include *more* than one block hash is to
> try to get better precision, with a fallback in case of a reorg. If
> you're picking a blockhash 10 blocks back, on average that'd be a bit
> over an hour and a half old, which is well within the timing precision
> you can expect from Bitcoin anyway, and certainly good enough for our
> application. So my advice would be to keep it simple and just include a
> single block hash.
>

Agree.
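Just to spell out the arithmetic behind "a bit over an hour and a half"
(ten blocks at Bitcoin's 10-minute target interval):

```python
# Back-of-envelope: expected age of a block N confirmations deep,
# given Bitcoin's 10-minute target block interval.
def expected_age_minutes(depth, interval_min=10):
    return depth * interval_min

print(expected_age_minutes(10))  # 100 minutes, i.e. a bit over 1.5 hours
```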
 

>
>
> FWIW, the actual consensus rules for Bitcoin block times require the
> following two conditions to be true:
>
> 1) nTime > the median nTime of the past 11 blocks
>
> This rule ensures nTime will move forward, eventually, though miners have
> a lot of leeway to backdate their blocks. A miner with negligible hashing
> power can, at no cost to themselves, backdate a block they create by
> something like an hour on average, assuming the block times of all other
> miners are honest.
>
> If the backdating hashing power is non-negligible - say 50% - it's quite
> plausible they'd be able to create backdated blocks with block times that
> are multiple hours behind what they should be. If 100% of hashing power
> is backdating blocks, the only thing limiting the attack is that at some
> point they'll cause difficulty to increase, but that'd be a scenario
> where block times have been backdated by multiple *days* at least.
>
> Fortunately, by the time we're talking about multiple hours/days of
> backdating, it's a very public attack that probably would get noticed.
> But backdating an hour or two is something miners could definitely get
> away with.
>
>
> 2) nTime < now + 2 hours
>
> This attempts to prevent miners from producing forward-dated blocks,
> which among other things could be used to artificially drive difficulty
> down. But because there's no universal and reliable notion of time, it's
> a really dodgy rule that itself can be used as an attack vector - miners
> can be in a position where they *want* part of the network to (initially)
> reject their blocks. If miners do start forward-dating their blocks for
> whatever reason, I'd expect enforcement of this rule to break down; what
> happens then is hard to know.
>
> Again, the only good thing is that a forward-dating attack is pretty
> public, so we'd at least find out it had happened and could respond
> accordingly.
>
>
> Finally, I should point out that providing incentives for miners to mess
> with block times is a potential threat to Bitcoin as a whole; we'd all be
> better off if people design systems that are robust to such attacks, to
> avoid giving attackers incentives to do them in the first place.
>
> tl;dr: Please round off nTime to the nearest day. :)
>
>
Again, agree.
Also, these rules were new to me. Thank you for the insights!
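In case it helps anyone following along, here's how I understand those two
checks in code (a simplified sketch; real validation uses network-adjusted
time and the exact median-time-past computation, which I'm glossing over):

```python
import statistics
import time

MAX_FUTURE_SECONDS = 2 * 60 * 60  # rule 2: at most 2 hours ahead of "now"

def block_time_valid(ntime, prev_11_ntimes, now=None):
    """Check a candidate block's nTime against the two consensus rules.

    prev_11_ntimes: nTime values of the previous 11 blocks.
    """
    if now is None:
        now = time.time()
    # Rule 1: must exceed the median nTime of the past 11 blocks.
    median_time_past = statistics.median(prev_11_ntimes)
    # Rule 2: must not be more than 2 hours in the future.
    return ntime > median_time_past and ntime < now + MAX_FUTURE_SECONDS
```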
 

>
> > > What exactly are you trying to prevent here?
> > >
> > > Timestamping commits and canary signatures to prove they existed in
> > > the past makes a lot of sense if they're signed, as that allows you
> > > to establish that those signatures were created prior to a
> > > compromise.
> > >
> > > Additionally, proving that canaries were created after a certain
> > > point in time is useful to prevent forward dating.
> > >
> > > But what attack does the latter type of proof prevent when applied
> > > to git commits?
> > >
> > 
> > Using proof of freshness to prevent forward dating can be applied to
> > more than just canaries. Imagine an attacker that has temporary
> > use-access but not read-access to a private key, e.g. by compromising
> > one side of split GPG in a setup where the vault VM does not show the
> > user what is about to be signed. In this case, the attacker might be
> > interested in signing a malicious Qubes .iso image, and in order to
> > prevent the developers from repudiating the .iso, the attacker also
> > signs a key revocation that is sufficiently forward-dated to not raise
> > suspicion about the .iso.
> >
> > I admit this example is somewhat far-fetched, but the point is that
> > more than just canaries can benefit from forward-dating prevention.
> > The stronger argument for doing this is that it is the conservative
> > thing to do: nothing seems to become less secure by preventing
> > forward-dating, and it essentially removes an attack surface. It
> > should be standard procedure to prove the timing of everything that is
> > released, meaning both proof of freshness and proof of existence.
>
> While it's somewhat niche, I agree that you make a good case there for
> the utility of proof-of-freshness when applied to PGP signatures in
> general. In addition to split-GPG, that'd also be useful for the quite
> common case of signing with a PGP smartcard.
>
> I will make the point, though, that the proof-of-freshness is only valid
> for proving that the *signature* was freshly created, not the Git commit
> itself. The reason is simple: the proof-of-freshness is only useful
> because it's difficult to recreate the signature; the git commit *by
> itself* doesn't have that property.
>

In terms of usefulness, yes.
 

>
> > A proof of freshness only proves anything for things that depend on
> > it. For example, a key revocation that does not include a proof of
> > freshness isn't proven to be created after that point in time. The
> > best thing would be if a proof of freshness could be part of every
> > signed message, including .iso digests and key revocations. If this
> > seems like too much work, I believe the next best thing would be to
> > adopt a standard procedure that everything that is signed MUST also be
> > part of a signed git commit that includes a proof of freshness. With
> > this rule, users that trust the key owners (which we must do anyway)
> > can trust that a digest published in a signed git commit was created
> > approximately around the point in time of its corresponding proof of
> > freshness.
>
> If you put the proof-of-freshness in the git commit, what you've
> actually done is made a proof-of-freshness for the *signature* on the
> git commit, just with one step of indirection.
>

> I think that indirection just confuses the issue, so better to put the
> proof-of-freshness in the signature itself. Fortunately the OpenPGP
> standard has something called "signature notation data" that allows you
> to add arbitrary (signed) data to OpenPGP signatures. In fact, I used to
> have a cronjob that periodically set my gpg.conf to include a recent
> blockhash as a proof-of-freshness with the notation
> [email protected]=<blockhash>
>

Agree. As I said, the best thing would be if a proof of freshness could be
part of every signed message. I only suggested proof of freshness as part
of the commit as a fallback, since I didn't know about the notation data
support.
 

>
> Even better would be if GPG had the ability to run a command on demand
> to get the value a notation should be set to, but as far as I know that
> feature doesn't exist yet. That said, I could easily add it to the
> OpenTimestamps Client as part of the git commit timestamping support.
>

What exactly would you add to OTS git support?
Did you mean adding proof-of-freshness functionality to the git tag signing 
override provided by OTS? I think that would be very useful.

Moreover, I think it would be useful to add something to
qubes-secpack/utils to help make GPG signatures with a suitably old block
hash as notation data.
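As a starting point, such a helper might look like this (a hypothetical
sketch; the explorer API paths are assumptions, and the fetch function is
injected so it can be swapped for any source of block data):

```python
def blockhash_n_back(fetch, n=10):
    """Return the hash of the block n blocks behind the current tip.

    fetch: callable taking an explorer API path and returning the response
    body as a string (injected so any block-data source can be used).
    """
    tip_height = int(fetch("/blocks/tip/height"))
    return fetch(f"/block-height/{tip_height - n}")
```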
 

>
> -- 
> https://petertodd.org 'peter'[:-1]@petertodd.org 
>

Btw, nice anti-spam measure!

-- 
You received this message because you are subscribed to the Google Groups 
"qubes-devel" group.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-devel/33744842-910f-4531-90e2-e48f118f52d0%40googlegroups.com.