Re: [bitcoin-dev] Design approaches for Signature Aggregation

2018-01-30 Thread Russell O'Connor via bitcoin-dev
On Tue, Jan 30, 2018 at 2:12 PM, Russell O'Connor 
wrote:

>
> and there are probably other designs for signature aggregation beyond the
> two designs I'm discussing here.
>

For example, in private communication Pieter suggested putting the
aggregate signature data at the top of the first segwit v1+ input witness
(and popping it off before evaluation of the input script), whether or not
that input is participating in the aggregation.  This makes the
canonical choice of position independent of the runtime behaviour of other
scripts and also prevents the script from accessing the aggregate signature
data itself, while still fitting it into the existing witness data
structure. (It doesn't let us toy with the weight of the aggregated signature,
but I hope people will still be motivated to use taproot over P2WPKH
based solely on having the option to perform aggregation.)
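A rough sketch of that suggestion, with illustrative names and structure
(this is not a real Bitcoin Core API, just a toy model of the idea):

```python
# Hypothetical sketch of Pieter's suggestion: the aggregate signature is
# carried at the top of the witness stack of the first segwit v1+ input,
# and is popped off before that input's script is evaluated.  The carrier
# position is fixed before any script runs, whether or not the carrier
# input itself participates in the aggregation.

def split_aggregate_signature(inputs):
    """inputs: list of (witness_version, witness_stack) pairs.

    Returns (aggregate_sig, inputs_for_evaluation)."""
    agg_sig = None
    stripped = []
    found_carrier = False
    for version, stack in inputs:
        if version >= 1 and not found_carrier:
            found_carrier = True
            agg_sig = stack[-1]   # top of the witness stack
            stack = stack[:-1]    # pop it before script evaluation
        stripped.append((version, stack))
    return agg_sig, stripped
```

Because the carrier is simply the first v1+ input, the choice never depends
on what any script does at run time, and no script ever sees the aggregate
signature data on its own stack.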

Being able to allow aggregation to be compatible with future script or
opcode upgrades is still very difficult to design.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] Design approaches for Signature Aggregation

2018-01-30 Thread Russell O'Connor via bitcoin-dev
On Sat, Jan 27, 2018 at 12:23 PM, Matt Corallo 
wrote:

> Gah, please no. I see no material reason why cross-input signature
> aggregation shouldn't have the signatures in the first n-1 inputs replaced
> with something like a single-byte push where a signature is required to
> indicate aggregation, and the combined signature in the last input at
> whatever position the signature is required.
>

That would be the expedient approach.

I want to preface what I'm about to write by first stating that I think the
cross-input signature aggregation is the most important forthcoming
development for Bitcoin and I would be very happy to have any solution for
it deployed in any workable form.  Also, it is difficult to discuss the pros
and cons of various designs without concrete proposals, but perhaps we can
still say something useful about the various design approaches.

I think there are some issues with the expedient proposal for signature
aggregation.  The problems begin with the arbitrary choice of which input
witness will be the canonical choice for holding the aggregated signature.
We want to strictly define which input is the canonical choice for holding
the aggregated signature because we wish to avoid introducing new witness
malleability vectors.  However, the definition of the canonical input is
somewhat complicated.  Because not all inputs necessarily participate in
the aggregation, the canonical choice of input depends on the run-time
behavior of all the other input Scripts in the transaction.  This
complicates the specification and makes the implementation somewhat
error-prone.

Furthermore, designing the canonical choice of input for the aggregated
signature to support future extensions of new script versions or new
opcodes that may want to participate in signature aggregation (for example,
adding CHECKSIGFROMSTACK later) is going to be extraordinarily difficult, I
think.  I don't know how it could even be done.

On the other hand, the extended-transaction approach supports a clean model
of script semantics whereby the signature aggregation is supported via a
new writer (aka logging) side-effect for Script[1].  In this model, rather
than the semantics of Script returning only failure or success, Script
instead results in either failure or conditional success plus a log of
additional constraints that need to be satisfied for the transaction to be
valid.  In the case of signature aggregation, these constraints are of the
form "I require cryptographic evidence that there is a signature on message
M from public key P".  The aggregated signature in the extension of the
transaction provides a witness that demonstrates all the constraints
emitted by all the scripts are satisfied.
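The writer side-effect model can be sketched as a toy evaluator (names and
types here are illustrative, not a concrete proposal):

```python
# Toy model of the writer (logging) side-effect semantics: each input
# script returns either failure, or conditional success together with a
# log of constraints of the form "a signature on message M from public
# key P must exist".  The aggregate signature in the transaction
# extension must then witness every logged constraint.

from typing import NamedTuple

class SigConstraint(NamedTuple):
    pubkey: bytes
    message: bytes

def run_transaction(input_results, verify_aggregate):
    """input_results: list of (success, [SigConstraint]) per input script.
    verify_aggregate: callable over the combined constraint log; stands in
    for verification of the aggregate signature witness."""
    log = []
    for ok, constraints in input_results:
        if not ok:
            return False          # any script failure invalidates the tx
        log.extend(constraints)   # collect emitted constraints
    return verify_aggregate(log)  # aggregate witness must cover them all
```

Note that in this model no individual script ever touches the aggregate
signature; scripts only emit constraints, and verification happens once,
over the whole log.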

Even in the extended-transaction approach, supporting future extensions of
new script versions or new opcodes that may want to participate in
signature aggregation is going to be very difficult.  However, I do have
some half-baked ideas (that you will probably like even less) on how we
could support new script versions and new opcodes based on this idea of a
writer side-effect model of Script semantics.  I hope that designing
support for extendable signature aggregation isn't infeasible.

I think that the cleaner semantic model of the extended-transaction
approach is by itself enough reason to prefer it over the expedient
approach, but reasonable people can disagree about this.  However, there
are even larger issues lurking which appear when we start looking for
unintended semantic consequences of the expedient design.  This is a common
problem with expedient approaches.  It is hard enough to come up with a
design that enables a new feature, but it is even harder to come up with a
design that enables a new feature without enabling other, unintended
"features".  I worry that people do not pay enough attention to the later,
after achieving the former. This sort of thing happened with OP_EVAL in bip
12.  In that situation, the goal was to create a design that enabled pay to
script hash, and OP_EVAL does achieve that in a very straightforward way.
However, the unintended semantic consequences was that bip 12 also enable
unbounded recursion[2] and extended the class of functions definable by
script all the way to the entire class of all computable functions.

We can find unintended semantic consequences of the expedient approach to
signature aggregation by looking at the ways it fails to fit into the
writer side-effect model for signature aggregation.

A. Firstly, we notice that scripts can determine whether or not they are in
the canonical position by checking the length of their signature data.
This is an effect that goes beyond merely allowing signature
aggregation.  We can build scripts that can only be redeemed when they are,
or when they are not, the one holding the aggregated signature.
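A toy illustration of observation A (this is not real Script, and the
lengths are illustrative assumptions):

```python
# Because the carrier input's signature field holds the full aggregate
# signature while other participating inputs hold only a one-byte marker,
# a script can branch on the length of its own signature data and thereby
# learn whether it sits in the canonical position -- an effect beyond
# plain signature aggregation.

AGG_MARKER_LEN = 1   # the single-byte "I am aggregated" push
FULL_SIG_LEN = 64    # e.g. a Schnorr-style aggregate signature

def redeemable_only_as_carrier(sig_data):
    # Succeeds only when this input holds the aggregate signature.
    return len(sig_data) == FULL_SIG_LEN

def redeemable_only_as_non_carrier(sig_data):
    # Succeeds only when this input does NOT hold it.
    return len(sig_data) == AGG_MARKER_LEN
```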

B. In the presence of sufficient computation power[3], I expect that

Re: [bitcoin-dev] Blockchain Voluntary Fork (Split) Proposal (Chaofan Li)

2018-01-30 Thread Chaofan Li via bitcoin-dev
Hi ZmnSCPxj,


On Mon, Jan 29, 2018 at 9:32 PM, ZmnSCPxj wrote:
>What ensures that a paper money with "10 Dollar" on it, is same as 10
coins each with "1 Dollar" on it?
>This is the principle of fungibility, and means I can exchange a paper
with "10 Dollar" on it for 10 coins with "1 Dollar" on it, because by
government fiat, such an exchange is valid for all cases.
>What ensures that btc.0 and btc.1 are indistinguishable from a human
perception?

This is a good question. Has anyone thought about why the bitcoins generated
in different blocks have the same value? Some of them are still
distinguishable (if they have not been combined with others and sent out).
Would the bitcoins that can be traced back to the block in which they were
generated be worth a different amount than others?  If one day Satoshi
released his/her/their bitcoins, would the bitcoins from the first several
blocks mined by Satoshi be worth more?

I think fungibility is not all-or-nothing: it is not that a coin either has
fungibility or has none. There should be a degree of fungibility (e.g. from
0 to 1) that can be measured or evaluated.

Chaofan


Re: [bitcoin-dev] Blockchain Voluntary Fork (Split) Proposal (Chaofan Li)

2018-01-30 Thread ZmnSCPxj via bitcoin-dev
Good Morning Chaofan Li,

> The human perception of difference will be eliminated.
> Will your bank tell you whether your balance means coins or paper money?
> If wallets and exchanges only show the total amount of btc rather than btc.0 
> and btc.1, there is no human perception difference.

This returns my initial question.

What ensures that a paper money with "10 Dollar" on it, is same as 10 coins 
each with "1 Dollar" on it?

This is the principle of fungibility, and means I can exchange a paper with "10 
Dollar" on it for 10 coins with "1 Dollar" on it, because by government fiat, 
such an exchange is valid for all cases.

What ensures that btc.0 and btc.1 are indistinguishable from a human perception?

> Also note that one valid address is automatically valid on the other chain, 
> which means you can send money through any one chain. As long as one has the 
> private key, he/she can get the money anyway. So there is no difference 
> between number of merchants. The merchant ‘s address is valid on both chains.
>
> The exchange cost would be trivial. People don’t need to exchange two same 
> thing.

You are talking about sidechains.  In every sidechain proposal, there is always 
some mechanism (SPV proof-of-work, drivechain proof-of-voting, 
proof-of-mainstake...) that ensures that a sidechain coin is exchangeable for a 
mainchain coin, and from there, that every sidechain coin is exchangeable for 
every other sidechain coin.  I.e. that a smart contract with "1 BTC" on it is 
exchangeable for a mainchain UTXO of value "1 BTC".

A mere split is not enough.  As I brought up, what makes your proposal 
different from 2X, BCash, etc.?

Regards,
ZmnSCPxj


Re: [bitcoin-dev] How accurate are the Bitcoin timestamps?

2018-01-30 Thread Neiman via bitcoin-dev
On Mon, Jan 29, 2018 at 10:54 PM, Gregory Maxwell via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Mon, Jan 29, 2018 at 9:40 PM, Tier Nolan via bitcoin-dev
>  wrote:
>
>  if there were tighter time requirements in the protocol
> miners would address them by running NTP, which has an _astounding_ lack
> of security in terms of how it is commonly deployed.
>

Could you say a few more words about this lack of security? Or share a link
if you have one. I know very little about NTPs.




Re: [bitcoin-dev] How accurate are the Bitcoin timestamps?

2018-01-30 Thread Neiman via bitcoin-dev
On Mon, Jan 29, 2018 at 10:40 PM, Tier Nolan via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

>
> Much of Bitcoin operates on the assumption that a majority of miners are
> honest.  If 50%+ of miners set their timestamp reasonably accurately (say
> within 10 mins), then the actual timestamp will move forward at the same
> rate as real time.
>

Thank you for replying. I agree that under the 50%+ assumption, timestamps
are reasonably accurate, but I fail to see a reason to make this
assumption.

I'm comfortable with the 50%+ assumption regarding ledger manipulation
(double-spending, deletion of transactions etc.). I'm much less comfortable
with it regarding timestamps manipulation.

Consider the following situation:
(1) miners are selfish,
(2) miners have a financial incentive to be dishonest.

(1) is a common assumption about how miners function nowadays. (2) is the
case that interests us in this analysis.

In the case of ledger manipulation, the 50%+ assumption is not because we
assume that miners are good-hearted (this violates (1)). It is there due to
an assumption that the financial damage to a miner would be bigger than the
gain in (2). This happens since a ledge manipulation may cause miners to
lose block rewards, and certainly will devaluate Bitcoin, an asset which
they possess.

In the case of timestamps manipulation, I don't see any financial damage
caused to miners. Timestamps manipulation (besides the 2016*n blocks) won't
harm the function of Bitcoin, and may even go undetected (it seems to me
that the main blockchain explorers don't track it). I don't see a
justification for the 50%+ assumption here.


>
> Dishonest miners could set their timestamp as low as possible, but the
> median would move forward if more than half of the timestamps move forward.
>
>
>> If we want to be pedantic, the best lower bound for a block timestamp is
>> the timestamp of the block that closes the adjustment interval in which it
>> resides.
>>
>
> If you are assuming that the miners are majority dishonest, then they can
> set the limit to anything as long as they don't move it more than 2 hours
> into the future.
>
> The miners could set their timestamps so that they increase 1 week fake
> time every 2 weeks real time and reject any blocks more than 2 hours ahead
> of their fake time.  The difficulty would settle so that one block occurs
> every 20 mins.
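The equilibrium in that quoted scenario can be checked with a bit of
arithmetic (a toy sketch with illustrative names, not consensus code):

```python
# If miners advance their timestamps at half the real rate, the reported
# timespan over a 2016-block retarget window is half the real timespan.
# Under Bitcoin's retarget rule,
#   new_difficulty = old_difficulty * target_timespan / reported_timespan,
# the system settles where the *reported* block spacing equals the
# 10-minute target, i.e. real blocks arrive every 20 minutes.

def settled_real_interval(fake_rate, target_spacing=600):
    """fake_rate: rate of fake time relative to real time (0.5 means fake
    time advances 1 week per 2 weeks of real time).  Returns the real
    block interval, in seconds, at equilibrium."""
    return target_spacing / fake_rate
```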
>
>
>>
>> Possible improvement:
>> -
>> We may consider exchanging average with standard deviation in the
>> difficulty adjustment formula. It both better mirrors changes in the hash
>> power along the interval, and disables the option to manipulate timestamps
>> without affecting the difficulty.
>>
>> I'm aware that this change requires a hardfork, and won't happen any time
>> soon. But does it make sense to add it to a potential future hard fork?
>>
>
> For check locktime, the median of the last 11 blocks is used as an
> improved indicator of what the actual real time is.  Again, it assumes that
> a majority of the miners are honest.
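The median-of-the-last-11-blocks rule mentioned here (the median-time-past
used for lock-time comparisons) can be sketched as follows (an illustrative
sketch, not the Bitcoin Core implementation):

```python
# Median-time-past: the timestamp used for lock-time checks is the median
# of the previous 11 block timestamps.  A dishonest minority cannot pull
# this value around as long as 6 of the 11 timestamps are honest.

def median_time_past(timestamps):
    """timestamps: the previous 11 block header timestamps (any order)."""
    s = sorted(timestamps[-11:])
    return s[len(s) // 2]
```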