Re: [bitcoin-dev] reviving op_difficulty

2020-08-17 Thread Tier Nolan via bitcoin-dev
On Mon, Aug 17, 2020 at 6:04 AM ZmnSCPxj  wrote:

> Taproot MAST to the rescue.
>

Another option would be a binary payout

You pay 128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 as outputs.  The outputs are
enabled/disabled based on the diff value.  This would require division and
also binary operators.

D = (int) ((100 * diff) / (1 trillion))

Output 0: 1.28:  If (D & 128) then pay Alice otherwise Bob
Output 1: 0.64:  If (D & 64) then pay Alice otherwise Bob
Output 2: 0.32:  If (D & 32) then pay Alice otherwise Bob
Output 3: 0.16:  If (D & 16) then pay Alice otherwise Bob
Output 4: 0.08:  If (D & 8) then pay Alice otherwise Bob
Output 5: 0.04:  If (D & 4) then pay Alice otherwise Bob
Output 6: 0.02:  If (D & 2) then pay Alice otherwise Bob
Output 7: 0.01:  If (D & 1) then pay Alice otherwise Bob

This has logarithmic cost in the number of ticks, like the MAST
solution.
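
For illustration, here is a rough off-chain settlement sketch in Python (the
variable names and the 8-bit cap are my own assumptions, not part of the
proposal):

# Sketch: settle the binary-decomposed difficulty future off-chain.
TRILLION = 10**12

def settle(difficulty):
    d = min((100 * difficulty) // TRILLION, 255)   # D, capped at 8 bits
    outputs = []
    for bit in range(7, -1, -1):                   # bits 128 down to 1
        value = (1 << bit) / 100.0                 # 1.28, 0.64, ..., 0.01
        winner = "Alice" if d & (1 << bit) else "Bob"
        outputs.append((value, winner))
    return outputs

for value, winner in settle(1_250_000_000_000):    # difficulty = 1.25 trillion
    print(f"{value:5.2f} -> {winner}")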


Re: [bitcoin-dev] reviving op_difficulty

2020-08-16 Thread Tier Nolan via bitcoin-dev
On Sun, Aug 16, 2020 at 4:50 PM Thomas Hartman via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> My understanding is that adding a single op_difficulty operation as
> proposed would enable not true difficulty futures but binary options
> on difficulty.
>
> https://en.wikipedia.org/wiki/Binary_option


Any kind of opcode is a binary option.  Either the output can be spent or
it can't.

You could get a pseudo-continuous future by having lots of outputs with
different thresholds.

Alice and Bob create a transaction with 100 outputs and each having 1% of
the future's value.

Output 0:  Pay Alice if diff < 1.00 trillion else Bob
Output 1:  Pay Alice if diff < 1.01 trillion else Bob
...
Output 98:  Pay Alice if diff < 1.98 trillion else Bob
Output 99:  Pay Alice if diff < 1.99 trillion else Bob

If the difficulty is 1.25 trillion, then Bob gets outputs 0-25 and Alice
gets outputs 26-99.  The future has a tick size of 1%.  It isn't very
efficient though.
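
For what it's worth, a rough Python sketch of that settlement (purely
illustrative; the thresholds follow the example above):

# Sketch: settle 100 threshold outputs approximating a continuous future.
TRILLION = 10**12

def settle_outputs(difficulty, n=100, start=TRILLION, tick=TRILLION // 100):
    alice, bob = [], []
    for i in range(n):
        threshold = start + i * tick               # 1.00, 1.01, ..., 1.99 trillion
        (alice if difficulty < threshold else bob).append(i)
    return alice, bob

alice, bob = settle_outputs(1_250_000_000_000)
print(len(bob), len(alice))                        # 26 outputs to Bob, 74 to Alice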

It would be good to have the option to specify a block height for the
future too.  If it triggered on block time, then miners have an incentive
to give false block times.

> I am not clear if there is a way to solve the accounting for the
> payouts, but perhaps there is a way to do this with covenants.
>

I agree you would need covenants or something similar.

There needs to be a way to check the outputs (value and script) of the
spending transaction.  You also need a way for Alice and Bob to create
their spending transaction in sequence.

Output 0: Pay Alice if [output value 0] <= Diff / 1 trillion AND [output
value 1] >= (2 trillion - diff)  / (1 trillion) AND [output 1 pays to Bob]

To spend her output, Alice has to create a transaction which pays Bob and
assigns the coins in the right ratio.  [output value x] means the output
value of the spending transaction for output x.

To get it to work Alice creates a transaction with these restrictions

Output 0:
Script: Anything (Alice gets it to pay herself)
Value: <= Diff / 1 trillion

Output 1:
Script: Must pay to Bob
Value: >= (2 trillion - Diff) / 1 trillion

You also need to handle overflows with the calculations.

Bob can then spend output 1 and get his money.

There is a hold-up risk if Alice doesn't spend her money.  You can make the
output script so either of them can spend their coins to avoid that.

Output 0:
Pay Alice if [output value 0] <= Diff / 1 trillion AND [output value 1]
>= (2 trillion - diff)  / (1 trillion) AND [output 1 pays to Bob]
  OR
Pay Bob if [output value 0] <= (2 trillion - Diff) / 1 trillion AND
[output value 1] >= Diff / (1 trillion) AND [output 1 pays to Alice]

You would need a covenant-like instruction to check the output values and
scripts and the diff opcode to get the difficulty.


Re: [bitcoin-dev] Chain width expansion

2019-10-15 Thread Tier Nolan via bitcoin-dev
On Tue, Oct 15, 2019 at 7:29 AM Braydon Fuller via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> So I don't think you can use the height in the coinbase for that
> purpose, as it's not possible to validate it without the previous
> headers. That's common for more than just the height.
>

It is a property of blockchains that the lowest digest for a chain
represents the total chainwork.

Estimate total hash count = N * (2^256) / (Nth lowest (i.e. strongest)
digest over all headers)
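
As a sketch (Python, treating each 32-byte header digest as a big-endian
integer; the function name is illustrative):

# Sketch: estimate total hash count from the N lowest (strongest) digests.
def estimate_total_hashes(header_digests, n=10):
    values = sorted(int.from_bytes(h, "big") for h in header_digests)
    nth_lowest = values[n - 1]                     # the Nth strongest digest
    return n * (2**256) // nth_lowest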

To produce a fake set of 10 headers that give a higher work estimate than
the main chain would require around the same effort as went into the main
chain in the first place.  You might as well completely build an
alternative chain.

Working backwards for one of those headers, you have to follow the actual
chain back to genesis.


Re: [bitcoin-dev] Chain width expansion

2019-10-04 Thread Tier Nolan via bitcoin-dev
Are you assuming no network protocol changes?

At root, the requirement is that peers can prove their total chain POW.

Since each block has the height in the coinbase, a peer can send a short
proof of height for a disconnected header and could assert the POW for that
header.

Each peer could send the N strongest headers (lowest digest/most POW)
for their main chain and prove the height of each one.

The total chain work can be estimated as N times the POW for the lowest in
the list.  This is an interesting property of how POW works.  The 10th best
POW block will have about 10% of the total POW.

The N blocks would be spread along the chain and the peer could ask for all
headers between any 2 of them and check the difference in claimed POW.  If
dishonesty is discovered, the peer can be banned and all info from that
peer wiped.

You can apply the rule hierarchically.  The honest peers would have a much
higher POW chain.  You could ask the peer to give you the N strongest
headers between 2 headers that they gave for their best chain.  You can
check that their height is between the two limits.

The peer would effectively be proving their total POW recursively.

This would require a new set of messages so you can request info about the
best chain.

It also has the nice feature that it allows you to see if multiple peers
are on the same chain, since they will have the same best blocks.

The most elegant approach would be something like using SNARKs to directly prove
that your chain tip has a particular POW.  The download would go tip to
genesis, unlike now when it is in the other direction.



In regard to your proposal, I think the key is to limit things by peer,
rather than globally.

The limit to header width should be split between peers.  If you have N
outgoing peers, they get 1/N of your header download resources each.

You store the current best/most POW header chain and at least one
alternative chain per outgoing peer.

You could still prune old chains based on POW, but the best chain and the
current chain for each outgoing peer should not be pruned.

The security assumption is that a node is connected to at least one honest
node.

If you split resources between all peers, then it prevents the dishonest
nodes from flooding and wiping out the progress for the honest peer.

- Message Limiting -

I have the same objection here.  The message limiting should be per peer.

An honest peer that has just connected shouldn't suffer a penalty.

Your point that it is only a few minutes anyway may make this moot though.


Re: [bitcoin-dev] How accurate are the Bitcoin timestamps?

2018-01-29 Thread Tier Nolan via bitcoin-dev
On Mon, Jan 29, 2018 at 1:34 PM, Neiman via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

>
> *2.* Timestamps are not necessary to avoid double-spending. A simple
> ordering of blocks is sufficient, so exchanging timestamps with enumeration
> would work double-spending wise. Permissioned consensus protocols, such as
> hyperledger, indeed have no timestamps (in version 1.0).
>

The timestamps simply need to be reasonably accurate.  Their main purpose
is to allow difficulty updates.

They can also be used to check that the node has caught up.


> It uses a simple average of block time in the last 2016 blocks. But such
> averages ignore any values besides the first and last one in the interval.
> Hence, if the difficulty is constant, the following sequence is valid from
> both the protocol and the miners incentives point of views:
>
> 1, 2, 3,…., 2015, 1209600 (time of two weeks), 2017, 2018, 2019,….,
> 4031, 1209600*2, 4033, 4044, …
>

Much of Bitcoin operates on the assumption that a majority of miners are
honest.  If 50%+ of miners set their timestamp reasonably accurately (say
within 10 mins), then the actual timestamp will move forward at the same
rate as real time.

Dishonest miners could set their timestamps as low as possible, but the
median would move forward if more than half of the timestamps move forward.


> If we want to be pedantic, the best lower bound for a block timestamp is
> the timestamp of the block that closes the adjustment interval in which it
> resides.
>

If you are assuming that the miners are majority dishonest, then they can
set the timestamps to almost anything, as long as they don't move them more
than 2 hours into the future.

The miners could set their timestamps so that they increase by 1 week of fake
time every 2 weeks of real time and reject any blocks more than 2 hours ahead
of their fake time.  The difficulty would settle so that one block occurs
every 20 mins.
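
A quick sanity check of that 20-minute figure, as a sketch assuming the
standard 2016-block retarget aiming for two weeks:

# Sketch: equilibrium block interval when fake time advances at half real speed.
TARGET_TIMESPAN_MINUTES = 14 * 24 * 60             # two weeks
BLOCKS_PER_PERIOD = 2016
FAKE_TIME_RATE = 0.5                               # 1 fake week per 2 real weeks

# Difficulty settles when the fake timespan of 2016 blocks equals two weeks:
#   BLOCKS_PER_PERIOD * real_interval * FAKE_TIME_RATE == TARGET_TIMESPAN_MINUTES
real_interval = TARGET_TIMESPAN_MINUTES / (BLOCKS_PER_PERIOD * FAKE_TIME_RATE)
print(real_interval)                               # 20.0 minutes per block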


>
> Possible improvement:
> -
> We may consider exchanging average with standard deviation in the
> difficulty adjustment formula. It both better mirrors changes in the hash
> power along the interval, and disables the option to manipulate timestamps
> without affecting the difficulty.
>
> I'm aware that this change requires a hardfork, and won't happen any time
> soon. But does it make sense to add it to a potential future hard fork?
>

For checklocktime, the median timestamp of the last 11 blocks is used as an
improved indicator of what the actual real time is.  Again, it assumes that a
majority of the miners are honest.
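
For reference, a minimal sketch of that median-of-the-last-11-blocks
calculation:

# Sketch: median time past over the last 11 block timestamps.
def median_time_past(block_timestamps):
    last11 = sorted(block_timestamps[-11:])
    return last11[len(last11) // 2]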



Re: [bitcoin-dev] Merge of protocol

2018-01-24 Thread Tier Nolan via bitcoin-dev
If the communities behind two coins wanted to merge, it would be possible,
but difficult and risky.

It represents a hard fork on both chains.  Not only does each coin's
community need to agree, the two communities need to agree with each other.

They would both have to agree on the join point.  The merge block would have
2 parents.


A <- B <- C <- D
                \
                 J1 <- J2 <- J3 <- J4
                /
w <- x <- y <- z


In the above example, A, B, C, D is one chain and w, x, y, z is the other.
They combine and then J1, J2, J3, J4 is the combined chain.

Since block "J1" has 2 parents, it commits to the state of the 2 legacy
chains.  If you have coins on either chain at D or z, then you get coins in
the joint chain.

They would both need to agree on what the rules are for their new chain.
Since it is a (double) hard fork, they can do pretty much anything they
want.

The combined chain could continue as before.  It would be a combined chain
and each user's coin total would be unaffected.  The advantage of doing
that is that it causes minimum economic disruption to users.  The mining
power for both chains would be applied to the joint chain, so they combine
their security.

Alternatively, they could agree on an exchange rate.  Users would be given
joint-coins in exchange for their coins on the 2 legacy chains.

For something like Bitcoin Cash and Bitcoin, they could have a
re-combination rule.  1 Bitcoin-Recombined = 1 BTC + 1 BCH.  That doesn't
seem very likely though and also there are more BCH coins than BTC coins.

It might be worth moving this to bitcoin-discuss, since it isn't really
Bitcoin protocol discussion.


On Wed, Jan 24, 2018 at 11:56 AM, Ilan Oh via bitcoin-dev wrote:

> 2017 was fork year,
>
> Is it technically possible to merge two protocoles ? And thus bringing the
> strength of both into one resulting coin.
>
> I would not be surprized to see a lot of altcoin wanting to merge with
> bitcoin or between them, especially with LN current development, if it is
> possible,
>
> If anyone has ideas or ressources on this,
>
> Thanks
>


Re: [bitcoin-dev] "Compressed" headers stream

2017-12-11 Thread Tier Nolan via bitcoin-dev
On Mon, Dec 11, 2017 at 9:56 PM, Jim Posen via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Omitting nBits entirely seems reasonable, I wrote up a possible
> implementation here
> .
> The downside is that it is more complex because it leaks into the
> validation code. The extra 4 byte savings is certainly nice though.
>

A compromise would be to have 1 byte indicating the difference since the
last header.

Since the exponent doesn't use the full range you could steal bits from
there to indicate mode.

- no change
- mantissa offset (for small changes)
- full difficulty

This would support any nBits rule and would save 3 of the 4 bytes.
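
One possible byte layout for such a delta encoding, sketched in Python (the
mode bits and the 6-bit offset width are my own assumptions, not a spec):

# Sketch: 1-byte nBits delta encoding.  Top 2 bits select the mode:
#   0 = unchanged, 1 = small signed mantissa offset, 2 = full 4-byte nBits follows.
def encode_nbits(prev_nbits, nbits):
    if nbits == prev_nbits:
        return bytes([0x00])
    delta = (nbits & 0x007FFFFF) - (prev_nbits & 0x007FFFFF)
    if (nbits >> 24) == (prev_nbits >> 24) and -32 <= delta < 32:
        return bytes([0x40 | (delta & 0x3F)])
    return bytes([0x80]) + nbits.to_bytes(4, "little")

def decode_nbits(prev_nbits, data):
    # Returns (nbits, bytes consumed).
    mode = data[0] >> 6
    if mode == 0:
        return prev_nbits, 1
    if mode == 1:
        delta = data[0] & 0x3F
        if delta >= 32:
            delta -= 64                            # sign-extend the 6-bit offset
        return prev_nbits + delta, 1
    return int.from_bytes(data[1:5], "little"), 5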


> Can you elaborate on how parallel header fetching might work? getheaders
> requests could probably already be pipelined, where the node requests the
> next 2,000 headers before processing the current batch (though would make
> sense to check that they are all above min difficulty first).
>

I suggest adding a message where you can ask for the lowest N hashes
between 2 heights on the main chain.

The reply is an array of {height, header} pairs for the N headers with the
lowest hash in the specified range.

All peers should agree on which headers are in the array.  If there is
disagreement, then you can at least narrow down which segment the
disagreement is in.

It works kind of like cut-and-choose.  You recursively pick one of the
segments the peer gave you.

You can ask a peer for a proof for a segment between 2 headers of the form:

- first header + coinbase with merkle branch
- all headers in the segment

This proves the segment has the correct height and that all the headers
link up.

There is a method called "high hash highway" that allows compact proofs of
total POW.


Re: [bitcoin-dev] hypothetical: Could soft-forks be prevented?

2017-09-15 Thread Tier Nolan via bitcoin-dev
On Fri, Sep 15, 2017 at 10:14 AM, Adam Back via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> True however in principle a soft-fork can also be soft-forked out. Eg say
> a publicly known soft-fork done by miners only that user node software did
> not upgrade for first by opt-in adoption.
>

It depends on what software the general user-base is running (especially
exchanges).  If a majority of miners have deployed a hidden soft fork, then
the soft fork will only last as long as they can maintain their majority.

If they drop below 50%, then the rest of the miners (now the majority) will
eventually make and then build on a block that is invalid according to the
hidden soft fork rules.

If the userbase doesn't support a censorship soft fork, then it will only
last as long as a majority of miners support it.  Once the cartel loses its
majority, there is a strong incentive for members to disable their soft
fork rule.  Any that don't will end up mining a lower POW, but valid, chain.

Users updating their nodes to enforce the soft fork is what makes the soft
fork irreversible (without a hard fork).


> A censorship soft-fork is harder, that's a standard hard-fork to bypass
> with current fungibility mechanisms.
>

It's only a hard fork to reverse if the community is enforcing the soft
fork.  Forking off a minority of miners doesn't make it a hard fork.


>
> Adam
>
> On Sep 15, 2017 08:12, "ZmnSCPxj via bitcoin-dev" <bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> Good morning Dan,
>>
>> My understanding is that it is impossible for soft forks to be prevented.
>>
>> 1.  Anyone-can-spend
>>
>> There are a very large number of anyone-can-spend scripts, and it would
>> be very impractical to ban them all.
>>
>> For example, the below output script is anyone-can-spend
>>
>>   OP_TRUE
>>
>> So is the below:
>>
>>   OP_SIZE  OP_EQUAL
>>
>> Or:
>>
>>   OP_1ADD  OP_EQUAL
>>
>> Or:
>>
>>   OP_BOOLAND
>>
>> Or:
>>
>>   OP_BOOLOR
>>
>> And so on.
>>
>> So no, it is not practically possible to ban anyone-can-spend outputs, as
>> there are too many potential scriptPubKey that anyone can spend.
>>
>> It is even possible to have an output that requires a proof-of-work, like
>> so:
>>
>>  OP_HASH256  OP_LESSTHAN
>>
>> All the above outputs are disallowed from propagation by IsStandard, but
>> a miner can put them validly in a block, and IsStandard is not consensus
>> code and can be modified.
>>
>> 2.  Soft fork = restrict
>>
>> It is possible (although unlikely) for a majority of miners to run soft
>> forking code which the rest of us are not privy to.
>>
>> For example, for all we know, miners are already blacklisting spends on
>> Satoshi's coins.  We would not be able to detect this at all, since no
>> transaction that spends Satoshi's coins have been broadcast, ever.  It is
>> thus indistinguishable from a world where Satoshi lost his private keys.
>> Of course, the world where Satoshi never spent his coins and miners are
>> blacklisting Satoshi's coins, is more complex than the world where Satoshi
>> never spent his coins, so it is more likely that miners are not
>> blacklisting.
>>
>> But the principle is there.  We may already be in a softfork whose rules
>> we do not know, and it just so happens that all our transactions today do
>> not violate those rules.  It is impossible for us to know this, but it is
>> very unlikely.
>>
>> Soft forks apply further restrictions on Bitcoin.  Hard forks do not.
>> Thus, if everyone else is entering a soft fork and we are oblivious, we do
>> not even know about it.  Whereas, if everyone else is entering a hard fork,
>> we will immediately see (and reject) invalid transactions and blocks.
>>
>> Thus the only way to prevent soft fork is to hard fork against the new
>> soft fork, like Bcash did.
>>
>> Regards,
>> ZmnSCPxj
>>
>>  Original Message 
>> Subject: [bitcoin-dev] hypothetical: Could soft-forks be prevented?
>> Local Time: September 13, 2017 5:50 PM
>> UTC Time: September 13, 2017 9:50 AM
>> From: bitcoin-dev@lists.linuxfoundation.org
>> To: Bitcoin Protocol Discussion 
>>
>> Hi, I am interested in the possibility of a cryptocurrency software
>> (future bitcoin or a future altcoin) that strives to have immutable
>> consensus rules.
>>
>> The goal of such a cryptocurrency would not be to have the latest and
>> greatest tech, but rather to be a long-term store of value and to offer
>> investors great certainty and predictability... something that markets
>> tend to like. And of course, zero consensus rule changes also means
>> less chance of new bugs and attack surface remains the same, which is
>> good for security.
>>
>> Of course, hard-forks are always possible. But that is a clear split
>> and something that people must opt into. Each party has to make a
>> choice, and inertia is on the side of the status quo. Whereas
>> soft-forks sort of drag people along with them, even those who oppose
>> the changes and never 

Re: [bitcoin-dev] 2 softforks to cut the blockchain and IBD time

2017-09-13 Thread Tier Nolan via bitcoin-dev
On Tue, Sep 12, 2017 at 11:58 PM, michele terzi via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

>
> Pros:
>
> you gain a much faster syncing for new nodes.
> full non pruning nodes need a lot less HD space.
> dropping old history results in more difficult future chainanalysis (at
> least by small entities)
> freezing old history in one new genesis block means the chain can no
> longer be reorged prior to that point
>

Current nodes allow pruning so you can save disk space that way.  Users
still need to download/verify the new blocks though.

Under your scheme, you don't need to throw the data away.  Nodes can decide
how far back they want to go.

"Fast" IBD

- download header chain from genesis (~4MB per year)
- check headers against "soft" checkpoints (every 50k blocks)
- download the UTXO set of the most recent soft checkpoint (and verify
against hash)
- download blocks starting from the most recent soft checkpoint
- node is now ready to use
- [Optional] Slowly download the remaining blocks

This requires some new protocol messages to allow requesting and sending the
UTXO set, though the inv and getdata messages could be used.

If you add a new services bit, NODE_NETWORK_RECENT, then nodes can find
other nodes that have the most recent blocks.  This indicates that you have
all blocks since the most recent snapshot.

The slow download doesn't have to download the blocks in order.  It can
just check against the header chain.  Once a node has all the blocks, it
would switch from NODE_NETWORK_RECENT to NODE_NETWORK.

(Multiple bits could be used to indicate that the node has 2 or more recent
time periods).

"Soft" checkpoints mean that re-orgs can't cause a network partition.  Each
soft checkpoint is a mapping of {block_hash: utxo_hash}.
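
A sketch of how a node might check a downloaded snapshot against such a
checkpoint (the helper names are hypothetical and the UTXO set serialization
is left unspecified):

# Sketch: verify a UTXO snapshot against a {block_hash: utxo_hash} soft checkpoint.
import hashlib

def sha256d(data):
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_snapshot(soft_checkpoints, block_hash, serialized_utxo_set):
    expected = soft_checkpoints.get(block_hash)    # dict: block_hash -> utxo_hash
    if expected is None:
        return False                               # not a checkpointed block
    return sha256d(serialized_utxo_set) == expected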

A re-org of 1 year or more would be devastating so it is probably
academic.  Some people may object to centralized checkpointing and soft
checkpoints cover that objection.

> full nodes with old software can no longer be fired up and sync with the
> existing network
> full nodes that went off line prior to the second fork cannot sync back
> once they turn back on line again.
>
>
This is why having archive nodes (and a way to find them) is important.

You could have a weaker requirement that nodes shouldn't delete blocks
unless they are at least 3 time periods (~3 years) old.

The software should have a setting which allows the user to specify maximum
disk space.  Disk space is cheap, so it is likely that a reasonable number
of people will leave that set to infinite.

This automatically results in lots of archive nodes.  Another setting could
decide how many time periods to download.  2-3 seem reasonable as a default
(or maybe infinite too).


> Addressing security concerns:
>
> being able to write a new genesis block means that an evil core has the
> power to steal/destroy/censor/whatever coins.
>
> this is possible only in theory, but not in practice. right now devs can
> misbehave with every softfork, but the community tests and inspects every
> new release.
>

Soft forks are inherently backward compatible.  Coins cannot be stolen
using a soft fork.  It has nothing to do with inspecting new releases.

It is possible for a majority of miners to re-write history, but that is
separate to a soft fork.

A soft fork can lock coins away.  This effectively destroys the coins, but
doesn't steal them.  It could be part of an extortion scheme I guess, but if
a majority of miners did that, then I think Bitcoin has bigger problems.


> the 2 forks will be tested and inspected as well so they are no more risky
> than other softforks.
>
>
For it to be a soft fork, you need to maintain archive nodes.  That is the
whole point.  The old rules and the new rules both agree that blocks valid
under the new rules are valid (and miners only mine blocks that are valid
under the new rules).  If IBD is impossible for old nodes, then that counts
as a network split.


Re: [bitcoin-dev] SF proposal: prohibit unspendable outputs with amount=0

2017-09-07 Thread Tier Nolan via bitcoin-dev
You could have a timelocked transaction that has a zero value input (and
other non-zero inputs).  If the SF happened, that transaction would become
unspendable.

The keys to the outputs may be lost or the co-signer may refuse to
cooperate.

There seem to be some objections to long term timelocked transactions.

If someone asked me about it, I would recommend that any timelocked
transactions should be careful to use only transaction forms that are
popular.

I think the fairest rule would be that any change which makes some
transactions invalid should be opt-in and only apply to new transaction
version numbers.

If you create a timelocked transaction with an undefined version number,
then you have little to complain about.

If the version number is defined and in-use, then transactions should not
suddenly lose validity.

A refusal to commit to that makes long term locktime use much more risky.

On Thu, Sep 7, 2017 at 12:54 AM, CryptAxe via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> As long as an unspendable outputs (OP_RETURN outputs for example) with
> amount=0 are still allowed I don't see it being an issue for anything.
>
> On Sep 5, 2017 2:52 PM, "Jorge Timón via bitcoin-dev" <bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> This is not a priority, not very important either.
>> Right now it is possible to create 0-value outputs that are spendable
>> and thus stay in the utxo (potentially forever). Requiring at least 1
>> satoshi per output doesn't really do much against a spam attack to the
>> utxo, but I think it would be slightly better than the current
>> situation.
>>
>> Is there any reason or use case to keep allowing spendable outputs
>> with null amounts in them?
>>
>> If not, I'm happy to create a BIP with its code, this should be simple.


Re: [bitcoin-dev] SF proposal: prohibit unspendable outputs with amount=0

2017-09-06 Thread Tier Nolan via bitcoin-dev
On Tue, Sep 5, 2017 at 10:51 PM, Jorge Timón via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Is there any reason or use case to keep allowing spendable outputs
> with null amounts in them?
>

Someone could have created a timelocked transaction that depends on a zero
value output.

This could be protected by requiring a tx version number change.  Only
zero-value outputs in the new version would be affected.

I am not sure how strictly people are sticking to that rule though.


Re: [bitcoin-dev] how to disable segwit in my build?

2017-07-14 Thread Tier Nolan via bitcoin-dev
On Fri, Jul 14, 2017 at 12:20 AM, Dan Libby via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On 07/13/2017 03:50 PM, Hampus Sjöberg wrote:
> > 2. Avoid any chain of transaction that contains a SegWit transaction
>
> sounds good, though I'm unclear on how exactly to achieve (2) given that
> any party I have ever transacted with (or otherwise knows an address of
> mine) can send me coins at any time.  So it seems the only possible way
> to be certain is to run a node that has never published an address to a
> 3rd party.  Is that accurate?
>

You would also have to ensure that everyone you give your addresses to
follows the same rule.  As time passes, there would be fewer and fewer
people who have "clean" outputs.

From the perspective of old nodes, segwit looks like lots of people are
transferring money to "anyone-can-spend" outputs.  These outputs are
completely unprotected.  Literally, anyone can spend them.  (In practice,
miners would spend them, since why would they include a transaction that
sends "free money" to someone else).

If you run an old node, then someone could send you a transaction that only
spends segwit outputs and you would think it is a valid payment.

Imagine that there are only 3 UTXOs (Alice, Bob and Carl have all the
Bitcoins).

UTXO-1:  Requires signature by Alice (legacy output)

UTXO-2: Anyone can pay (but is actually a segwit output that needs to be
signed by Bob)

UTXO-3: Anyone can pay (but is actually a segwit output that needs to be
signed by Carl)

Only Bob can spend UTXO-2, since it needs his signature.

Anyone could create a transaction that spends UTXO-2 and it would look good
to all legacy nodes.  It is an "anyone can spend" output after all.

However, if they submit the transaction to the miners, then it will be
rejected, because according to the new rules, it is invalid (it needs to be
signed by Bob).

Once a soft fork goes through, then all miners will enforce the new rules.

A miner who added the transaction to one of his blocks (since it is valid
under the old rules) would find that no other miners would accept his block
and he would get no fees for that block.  This means that all miners have
an incentive to upgrade once a soft fork activates.

His block would be accepted by legacy nodes, for a short while.  However,
since 95% of the miners are on the main chain, their chain (which rejects
his block) would end up the longest.

If you are running a legacy client when a soft fork comes in, then you can
be tricked with "zero confirm" transactions.  The transaction will look
good to you, but will be invalid under the new rules.  This makes your
client think you have received (a lot of) money, but in practice, the
transaction will not be accepted by the miners.


> Another thing that could be done is to modify my own node so that it
> actually rejects such tx, but then I have modified consensus rules
> myself, thus defeating the goal of remaining with status-quo rules, and
> anyway the rest of the network would accept the tx.  I guess the benefit
> is that I could be certain of the remaining funds I have.
>

If you wanted, you could mark any transaction that has a segwit looking
output as "dirty" and then all of its descendants as dirty.

However, pretty quickly, only a tiny fraction of all bitcoins would be
clean.
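
A sketch of that marking rule (the data layout is hypothetical): a transaction
is dirty if it creates a segwit-looking output or spends any dirty transaction.

# Sketch: propagate a "dirty" flag from segwit-looking transactions to descendants.
def mark_dirty(transactions):
    # transactions: (txid, parent_txids, has_segwit_output), in topological order
    dirty = set()
    for txid, parent_txids, has_segwit_output in transactions:
        if has_segwit_output or any(p in dirty for p in parent_txids):
            dirty.add(txid)
    return dirty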

> I suppose that it would be possible without modifying any rule to
> construct a "certain balance" and an "uncertain balance".
>

Right.

I think a reasonable compromise would be to assume that all transactions
buried more than a few hundred blocks deep are probably ok.  Only
segwit-looking outputs would be marked as "uncertain".


Re: [bitcoin-dev] Drivechain -- Request for Discussion

2017-05-25 Thread Tier Nolan via bitcoin-dev
On Wed, May 24, 2017 at 6:32 PM, CryptAxe via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Also the block number can only change by +1 or -1, so when a new h* is
> added to the
> queue it must be compared to the most recent h* in the queue.
> std::abs(queue.back().nHeight - ToAdd.nHeight) must equal 1.
>

I think it is better to have it locked to a particular bitcoin height and
if it doesn't get included in that block, the sidechain miner can re-claim
it.

This could be taken to the extreme where the sidechain miner specifies a
particular parent of the claiming block.

The output should have a standard template, so miners can easily find bids.

The template on my previous post was:

IF
  <block height> <sidechain id> <critical hash h*> OP_BRIBE_VERIFY
ELSE
  <sidechain miner pubkey> OP_CHECKSIG
ENDIF


If the output is spent by the miner for block <block height>, then the
sidechain miner has spent the funds.

Otherwise, the sidechain miner can use the else branch to reclaim his money.

The sidechain miner could also reclaim his money if the transaction was
included in an earlier block.  That would defeat the purpose of the bribe.
Bitcoin miners would have a (justified) incentive to not allow Bribe
outputs to be spent "early".

The bribe transactions could be created with no fees.  This would mean that
it is pointless for bitcoin miners to include them in blocks unless they
are claiming the outputs.

The relay rules would need to be modified to handle that.  Pools could
allow bids to be made directly, but that is less decentralized.

> Here's what I'm testing right now as I'm working on BMM:
>
> script << OP_RETURN << CScriptNum::serialize(nSidechain) <<
> CScriptNum(nSidechainHeight) << ToByteVector(sidechain blinded block hash
> h*)
>

I don't think OP_BRIBE should care about info for the side chain.  The only
thing that is necessary is to indicate which sidechain.

You could just define the critical hash as

Hash( SideChainHeight | blinded h* )
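
For example (a sketch; double-SHA256 and a 4-byte little-endian height are my
own illustrative choices):

# Sketch: combine the sidechain height and blinded block hash into one critical hash.
import hashlib

def critical_hash(sidechain_height, blinded_hash):
    data = sidechain_height.to_bytes(4, "little") + blinded_hash
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()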

For bribe payout release, it needs to give that particular miner an
advantage over all competitors, so their block forms the longest chain on
the sidechain (assuming their block is actually valid).

> One other thing I want to make sure is clear enough is that the block
> number in the critical hash script is
> a sidechain block number, not a mainchain block number.
>
The sidechain miner is saying that they will pay the bribe but only if
their block is included in the main chain.  That means that main chain
height is important.

They are paying for their block to be placed ahead of all competing blocks
for their chain.

It does mean that the side-chain can have at most the same number of blocks
as bitcoin.

>
> We were thinking about making bribe outputs have a maturity period like
> generated coins. You
> think that they should be locked for >100 blocks by having OP_BRIBE also
> check the lock time?
>

Well, it depends on the exact rules for OP_BRIBE.

The process I see is:

- sidechain miner submits a bribe transaction which pays to op bribe
- bitcoin miner includes that transaction in his block (or it could be
included in a previous block)
- bitcoin miner includes a claim transaction in his block

The claim transaction spends the outputs from the bribe transaction.  If
the claim transaction is block height locked, then it violates the rules
that previous soft-forks have followed.

For previous opcode changes there was a requirement that if a transaction
was accepted into block N, then it must also be acceptable in block (N+1).

The only (unavoidable) exceptions were double spends and coinbase outputs.

This means that the same protection should be added to your claim
transaction.

You could do it by requiring all outputs of the claim transaction to start
with

<100> CHECK_SEQUENCE_VERIFY DROP ...

This is only a few bytes extra at the start of the output script.

This means you can't use witness or P2SH output types for any of the
outputs, but that isn't that important.  The point of the transaction is to
make a payment.

An alternative would be to just add the rule as part of soft-fork
definition.  You could define a claim transaction as one that spends at
least one OP_BRIBE output and therefore, all its outputs have a 100 block
delay.


Re: [bitcoin-dev] Drivechain -- Request for Discussion

2017-05-24 Thread Tier Nolan via bitcoin-dev
On Wed, May 24, 2017 at 9:50 AM, Tier Nolan  wrote:

> OP_BRIBE_VERIFY could then operate as follows
>
> <block height> <sidechain id> <critical hash> OP_BRIBE_VERIFY
>
> This causes the script to fail if
>   <block height> does not match the block height, or
>   <critical hash> is not the hash for the sidechain with <sidechain id>, or
>   there is no hash for that sidechain in the block's coinbase
>
>
I was thinking more on the process for these transactions.

I assume that the process is

- sidechain miner broadcasts transaction with OP_BRIBE output
- this transaction ends up in the memory pool of miners
- Miners add the transaction to their next block
- Miners add a transaction which spends the output to one of their own
addresses

I think you need an additional rule that the OP_BRIBE check fails unless the
output is locked for 100 or more blocks.

The output script would end up something like

IF
  <block height> <sidechain id> <critical hash h*> OP_BRIBE_VERIFY
ELSE
  <sidechain miner pubkey> OP_CHECKSIG
ENDIF

This output acts like "anyone can spend" for the one block height.
Otherwise, only the sidechain miner can spend the output.

This allows the sidechain miner to reclaim their coins if the transaction
ends up in a different block.

OP_BRIBE_VERIFY would have an additional rule

The script fails if
  one or more of the transaction outputs starts with something other than
the template, or
  <block height> does not match the block height, or
  <critical hash> is not the hash for the sidechain with <sidechain id>, or
  there is no hash for that sidechain in the block's coinbase

The template is
  <100> OP_CHECKSEQUENCE_VERIFY


Re: [bitcoin-dev] Drivechain -- Request for Discussion

2017-05-24 Thread Tier Nolan via bitcoin-dev
On Tue, May 23, 2017 at 3:22 PM, Paul Sztorc  wrote:

>
> If you haven't seen http://www.truthcoin.info/blog/drivechain/ , that is
> probably the most human-readable description.
>

I guess I was looking for the detail you get in the code, but without
having to read the code.

My quick reading suggests that the sidechain codes (critical hashes) are added
when a coinbase is processed.

Any coinbase output that has the form "OP_RETURN <32 byte push>" counts as
a potential critical hash.

When the block is processed, the key value pair (hash, block_height) is
added to a hash map.

The OP_BRIBE opcode checks that the given hash is in the hash map and
replaces the top element on the stack with the pass/fail result.

It doesn't even check that the height matches the current block, though
there is a comment that that is a TODO.

I agree with ZmnSCPxj, when updating a nop, you can't change the stack.  It
has to fail the script or do nothing.

OP_BRIBE_VERIFY would cause the script to fail if the hash wasn't in the
coinbase, and do nothing otherwise.

Another concern is that you could have multiple bribes for the same chain
in a single coinbase.  That isn't fair and arguably what the sidechain
miner is paying for is to get his hash exclusively into the block.

I would suggest that the output is

OP_RETURN <sidechain id> <critical hash>

Then add the rule that only the first hash with a particular sidechain id
actually counts.

This forces the miner to only accept the bribe from 1 miner for each
sidechain for each block.  If he tries to accept 2, then only the first one
counts.
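
A sketch of that "first one counts" rule when scanning a coinbase (the parsed
output representation is hypothetical):

# Sketch: accept at most one critical hash per sidechain id from a coinbase.
def collect_critical_hashes(parsed_outputs):
    # parsed_outputs: (sidechain_id, critical_hash) pairs in output order
    accepted = {}
    for sidechain_id, h in parsed_outputs:
        if sidechain_id not in accepted:           # only the first hash counts
            accepted[sidechain_id] = h
    return accepted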

OP_BRIBE_VERIFY could then operate as follows

<block height> <sidechain id> <critical hash> OP_BRIBE_VERIFY

This causes the script to fail if
  <block height> does not match the block height, or
  <critical hash> is not the hash for the sidechain with <sidechain id>, or
  there is no hash for that sidechain in the block's coinbase

If you want to reduce the number of drops, you could serialize the info into a
single push.

This has the advantage that a sidechain miner only has to pay if his block
is accepted in the next bitcoin block.  Since he is the only miner for that
sidechain that gets into the main bitcoin block, he is pretty much
guaranteed to form the longest chain.

Without that rule, sidechain miners could end up having to pay even though
it doesn't make their chain the longest.

How are these transactions propagated over the network?  For relaying, you
could have the rule that the opcode passes as long as <block height> is
near the current block height.  Maybe require that they are in the future.
They should be removed from the memory pool once the block height has
arrived, so losing miners can re-spend those outputs.

This opcode can be validated without needing to look at other blocks, which
is good for validating historical blocks.

I am still looking at the deposit/withdrawal code.


Re: [bitcoin-dev] Drivechain -- Request for Discussion

2017-05-23 Thread Tier Nolan via bitcoin-dev
On Mon, May 22, 2017 at 9:00 PM, Paul Sztorc  wrote:

> I would replace "Bitcoins you manage to steal" with "Bitcoins you manage
> to double-spend". Then, it still seems the same to me.
>
>
With double spending, you can only get ownership of coins that you owned at
some point in the past.  Coins whose entire history, from coinbase to their
current owner, belongs to someone else cannot be stolen by a re-org (though
they can be moved around).

With BMM, you can take the entire reserve.  Creating a group of double
spenders can help increase the reward.


>
> It may destroy great value if it shakes confidence in the sidechain
> infrastructure. Thus, the value of the stolen BTC may decrease, in addition
> to the lost future tx fee revenues of the attacked chain.
>
> http://www.truthcoin.info/blog/drivechain/#drivechains-security
>
>
That is a fair point.  If sidechains are how Bitcoin is scaled, then
shaking confidence in a side-chain would shake confidence in Bitcoin's
future.

I wasn't thinking of a direct miner 51% attack.  It is enough to assume
that a majority of the miners go with the highest bidder each time.

If (average fees) * (timeout) is less than the total reserves, then it is
worth it for a 3rd party to just bid for his theft fork.  Miners don't have
to be assumed to be coordinating, they just have to be assumed to take the
highest bid.

Again, I don't really think it is that different. One could interchange
> "recent txns" (those which could be double-spent within 2-3 weeks) with
> "sidechain deposit tnxs".
>

It is not "recent txns", it is recent txns that you (or your group) have
the key for.  No coordination is required to steal the entire reserve from
the sidechain.

Recent txns and money on the sidechain have the property that they are
riskier than money deep on the main chain.  This is the inherent point
about sidechains, so maybe not that big a deal.

My concern is that you could have a situation where an attack is possible
and only need to assume that the miners are indifferent.

If the first attacker who tries it fails (say after creating a fork that is
90% of the length required, so losing a lot of money), then it would
discourage others.   If he succeeds, then it weakens sidechains as a
concept and that creates the incentive for miners to see that he fails.

I wonder how the incentives work out.  If a group had 25% of the money on
the sidechain, they could try to outbid the attacker.

In fact, since the attacker, by definition, creates an illegal fork, the
effect is that he reduces the block rate for the side chain (possibly to
zero, if he wins every auction).  This means that there are more
transactions per block, if there is space, or more fees per transaction, if
the blocks are full.

In both cases, this pushes up the total fees per block, so he has to pay
more per block, weakening his attack.  This is similar to where transaction
spam on Bitcoin is self-correcting by increasing the fees required to keep
the spam going.

Is there a description of the actual implementation you decided to go with,
other than the code?


Re: [bitcoin-dev] Drivechain -- Request for Discussion

2017-05-22 Thread Tier Nolan via bitcoin-dev
On Mon, May 22, 2017 at 5:19 PM, Paul Sztorc via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

>
> In the future, when there is no block subsidy, a rich attacker can also do
> that in mainchain Bitcoin.
>

I don't think they are the same.

With Bitcoin, you only get to reverse recent transactions.  If you actually
reversed 2-3 weeks of transactions, then the Bitcoin price would fall,
destroying the value of the additional coins you managed to obtain.  Even
if there was no price fall, you can only get a fraction of the total.

With BMM, you can "buy" the entire reserve of the sidechain by paying
(timeout) * (average tx fees).  If you destroy a side-chain's value, then
that doesn't affect the value of the bitcoins you manage to steal.

The incentive could be eliminated by restricting the amount of coin that
can be transferred from the side chain to the main chain to a fraction of
the transaction fees paid to the bitcoin miners.

If the side chain pays x in fees, then at most x/10 can be transferred from
the side chain to the main chain.  This means that someone who pays for
block creation can only get 10% of that value transferred to the main chain.

Main-chain miners could support fraud proofs.  A pool could easily run an
archive node for the side chain in a different data center.

This wouldn't harm the performance of their main operations, but would
guarantee that the side chain data is available for side chain validators.

The sidechain to main-chain timeout would be more than enough for fraud
proofs to be constructed.

This means that the miners would need to know what the rules are for the
side chain, so that they can process the fraud proofs.  They would also
need to run SPV nodes for the side chain, so they know which sidechain
headers to blacklist.


> In point of fact, the transactions *are* validated...by sidechain full
> nodes, same as Bitcoin proper.
>
>
The big difference is that Bitcoin holds no assets on another chain.  A
side-chain's value is directly linked to the fact that it has 100% reserves
on the Bitcoin main chain.  That can be targeted for theft.


> Paul
>
>
> Regards,
> ZmnSCPxj
>
>


Re: [bitcoin-dev] Treating ‘ASICBOOST’ as a Security Vulnerability

2017-05-18 Thread Tier Nolan via bitcoin-dev
On Thu, May 18, 2017 at 2:44 PM, Cameron Garnham via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> 1. Significant deviations from the Bitcoin Security Model have been
> acknowledged as security vulnerabilities.
>
> The Bitcoin Security Model assumes that every input into the Proof-of-Work
> function should have the same difficulty of producing a desired output.
>

This isn't really that clear.

Arguably as long as the effort to find a block is proportional to the block
difficulty parameter, then it isn't an exploit.  It is just an optimisation.

A quantum computer, for example, could find a block with effort
proportional to the square root of the difficulty parameter, so that would
count as an attack.  Though in that case, the fix would likely be to tweak
the difficulty parameter update calculation.

A better definition would be something like "when performing work, each
hash should be independent".

ASICBOOST does multiple checks in parallel, so would violate that.


Re: [bitcoin-dev] Small Nodes: A Better Alternative to Pruned Nodes

2017-04-18 Thread Tier Nolan via bitcoin-dev
This has been discussed before.

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-May/008101.html

including a list of nice to have features by Maxwell

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-May/008110.html

You meet most of these rules, though you do have to download blocks from
multiple peers.

The suggestion in that thread was for a way to compactly indicate which
blocks a node has.  Each node would then store a sub-set of all the
blocks.  You just download the blocks you want from the node that has them.

Each node would be recommended to store the last few days worth anyway.


Re: [bitcoin-dev] Forcenet: an experimental network with a new header format

2016-12-14 Thread Tier Nolan via bitcoin-dev
On Wed, Dec 14, 2016 at 3:45 PM, Johnson Lau <jl2...@xbt.hk> wrote:

> I think that’s too much tech debt just for softforkability.
>
> The better way would be making the sum tree as an independent tree with a
> separate commitment, and define a special type of softfork (e.g. a special
> BIP9 bit).
>

One of the problems with fraud proofs is withholding by miners.  It is
important that proof of publication/archive nodes check that the miners are
actually publishing their blocks.

If you place the data in another tree, then care needs to be taken that the
merkle path information can be obtained for that tree.

If an SPV node asks for a run of transactions from an archive node, then
the archive node can give the merkle branch for all of those transactions.
The archive node inherently has to check that tree.

The question is if there is a way to show that data is not available, but
without opening up the network to DOS.  If enough people run full nodes
then this isn't a problem.

>
> When the softfork is activated, the legacy full node will stop validating
> the sum tree. This doesn’t really degrade the security by more than a
> normal softfork, as the legacy full node would still validate the total
> weight and nSigOp based on its own rules. The only purpose of the sum tree
> is to help SPV nodes to validate. This way we could even completely
> redefine the structure and data committed in the sum tree.
>

Seems reasonable.  I think the soft-fork would have to have a timeout
before actually activating.  That would give SPV clients time to switch
over.

That could happen before the vote though, so it isn't essential.  The SPV
clients would have to support both trees and then switch mode.  Ensuring
that SPV nodes actually bother would be helped by proving that the network
actually intends to soft fork.

The SPV client just has to check that every block has at least one of the
commitments that it accepts so that it can understand fraud proofs.


>
> I’d like to combine the size weight and sigOp weight, but not sure if we
> could. The current size weight limit is 4,000,000 and sigop limit is
> 80,000. It’s 50:1. If we maintain this ratio, and define
> weight = n * (total size +  3 * base size) + sigop , with n = 50
> a block may have millions of sigops which is totally unacceptable.
>

You multiplied by the wrong term.

weight = total size +  3 * base size + n * sigop , with n = 50

weight for max block = 8,000,000

That gives a maximum of 8,000,000 / 50 = 160,000 sigops.

To get that you would need zero transaction length.  You could get close if
you have transactions that just repeat OP_CHECKSIG over and over (or maybe
something with OP_CHECKMULTISIG).
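
A quick numerical sketch of the difference between the two formulas, using the
limits quoted above:

# Sketch: sigop ceiling under each way of folding sigops into the weight limit.
MAX_WEIGHT = 8_000_000
N = 50

# weight = N * (total size + 3 * base size) + sigops  (n on the size term):
# with near-zero-size transactions, up to MAX_WEIGHT sigops fit in a block.
print(MAX_WEIGHT)          # 8,000,000 sigops, which is far too many

# weight = total size + 3 * base size + N * sigops  (n on the sigop term):
print(MAX_WEIGHT // N)     # 160,000 sigops at most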


>
> On the other hand, if we make n too low, we may allow either too few
> sigop, or a too big block size.
>
> Signature aggregation will make this a bigger problem as one signature may
> spend thousands of sigop
>
>
>
> On 14 Dec 2016, at 20:52, Tier Nolan <tier.no...@gmail.com> wrote:
>
>
>
> On Wed, Dec 14, 2016 at 10:55 AM, Johnson Lau <jl2...@xbt.hk> wrote:
>
>> In a sum tree, however, since the nSigOp is implied, any redefinition
>> requires either a hardfork or a new sum tree (and the original sum tree
>> becomes a placebo for old nodes. So every softfork of this type creates a
>> new tree)
>>
>
> That's a good point.
>
>
>> The only way to fix this is to explicitly commit to the weight and
>> nSigOp, and the committed value must be equal to or larger than the real
>> value. Only in this way we could redefine it with softfork. However, that
>> means each tx will have an overhead of 16 bytes (if two int64 are used)
>>
>
> The weight and sigop count could be transmitted as variable length
> integers.  That would be around 2 bytes for the sigops and 3 bytes for the
> weight, per transaction.
>
> It would mean that the block format would have to include the raw
> transaction, "extra"/tree information and witness data for each transaction.
>
> On an unrelated note, the two costs could be combined into a unified
> cost.  For example, a sigop could have equal cost to 250 bytes.  This would
> make it easier for miners to decide what to charge.
>
> On the other hand, CPU cost and storage/network costs are not completely
> interchangeable.
>
> Is there anything that would need to be summed fees, raw tx size, weight
> and sigops that the greater or equal rule wouldn't cover?
>
> On 12 Dec 2016, at 00:40, Tier Nolan via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>
> On Sat, Dec 10, 2016 at 9:41 PM, Luke Dashjr <l...@dashjr.org> wrote:
>
>> On Saturday, December 10, 2016 9:29:09 PM Tier Nolan via bitcoin-d

Re: [bitcoin-dev] Forcenet: an experimental network with a new header format

2016-12-14 Thread Tier Nolan via bitcoin-dev
On Wed, Dec 14, 2016 at 10:55 AM, Johnson Lau <jl2...@xbt.hk> wrote:

> In a sum tree, however, since the nSigOp is implied, any redefinition
> requires either a hardfork or a new sum tree (and the original sum tree
> becomes a placebo for old nodes. So every softfork of this type creates a
> new tree)
>

That's a good point.


> The only way to fix this is to explicitly commit to the weight and nSigOp,
> and the committed value must be equal to or larger than the real value.
> Only in this way we could redefine it with softfork. However, that means
> each tx will have an overhead of 16 bytes (if two int64 are used)
>

The weight and sigop count could be transmitted as variable length
integers.  That would be around 2 bytes for the sigops and 3 bytes for the
weight, per transaction.

It would mean that the block format would have to include the raw
transaction, "extra"/tree information and witness data for each transaction.

On an unrelated note, the two costs could be combined into a unified cost.
For example, a sigop could have equal cost to 250 bytes.  This would make
it easier for miners to decide what to charge.

On the other hand, CPU cost and storage/network costs are not completely
interchangeable.

Is there anything that would need to be summed (fees, raw tx size, weight
and sigops) that the greater-or-equal rule wouldn't cover?

On 12 Dec 2016, at 00:40, Tier Nolan via bitcoin-dev
<bitcoin-dev@lists.linuxfoundation.org> wrote:


On Sat, Dec 10, 2016 at 9:41 PM, Luke Dashjr <l...@dashjr.org> wrote:

> On Saturday, December 10, 2016 9:29:09 PM Tier Nolan via bitcoin-dev wrote:
> > Any new merkle algorithm should use a sum tree for partial validation and
> > fraud proofs.
>
> PR welcome.
>

Fair enough.  It is pretty basic.

https://github.com/luke-jr/bips/pull/2

It sums up sigops, block size, block cost (that is "weight" right?) and
fees.


Re: [bitcoin-dev] Forcenet: an experimental network with a new header format

2016-12-11 Thread Tier Nolan via bitcoin-dev
On Sat, Dec 10, 2016 at 9:41 PM, Luke Dashjr <l...@dashjr.org> wrote:

> On Saturday, December 10, 2016 9:29:09 PM Tier Nolan via bitcoin-dev wrote:
> > Any new merkle algorithm should use a sum tree for partial validation and
> > fraud proofs.
>
> PR welcome.
>

Fair enough.  It is pretty basic.

https://github.com/luke-jr/bips/pull/2

It sums up sigops, block size, block cost (that is "weight" right?) and
fees.


Re: [bitcoin-dev] Forcenet: an experimental network with a new header format

2016-12-10 Thread Tier Nolan via bitcoin-dev
On Sun, Dec 4, 2016 at 7:34 PM, Johnson Lau via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Something not yet done:
> 1. The new merkle root algorithm described in the MMHF BIP
>

Any new merkle algorithm should use a sum tree for partial validation and
fraud proofs.
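
A minimal sketch of a sum-tree combine step (the field set and serialization
are illustrative), where each parent commits to its children's hashes and to
the sums of their totals:

# Sketch: combining two sum-tree nodes (hash plus summed sigops/size/weight/fees).
import hashlib
from dataclasses import dataclass

@dataclass
class SumNode:
    digest: bytes
    sigops: int
    size: int
    weight: int
    fees: int

def combine(left, right):
    sums = (left.sigops + right.sigops, left.size + right.size,
            left.weight + right.weight, left.fees + right.fees)
    payload = left.digest + right.digest + b"".join(
        v.to_bytes(8, "little") for v in sums)
    digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
    return SumNode(digest, *sums)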

Is there something special about 216 bits?  I guess at most 448 bits total
means only one round of SHA256.  16 bits for flags would give 216 for each
child.

Even better would be to make the protocol extendable.  Allow blocks to
indicate new trees and legacy nodes would just ignore the extra ones.  If
Bitcoin supported that then the segregated witness tree could have been
added as an easier soft fork.

The sum-tree could be added later as an extra tree.


> 3. Communication with legacy nodes. This version can’t talk to legacy
> nodes through the P2P network, but theoretically they could be linked up
> with a bridge node
>

The bridge would only need to transfer the legacy blocks which are coinbase
only, so very little data.


> 5. Many other interesting hardfork ideas, and softfork ideas that works
> better with a header redesign
>

That is very true.


Re: [bitcoin-dev] BIP30 and BIP34 interaction (was Re: [BIP Proposal] Buried Deployments)

2016-11-17 Thread Tier Nolan via bitcoin-dev
On Thu, Nov 17, 2016 at 12:43 AM, Eric Voskuil  wrote:

> > This means that all future transactions will have different txids...
> rules do guarantee it.
>
> No, it means that the chance is small, there is a difference.
>

I think we are mostly in agreement then?  It is just terminology.

In terms of discussing the BIP, barring a hash collision, it does make
duplicate txids impossible.

Given that a hash collision is so unlikely, the qualifier should be added
to those making claims that require hash collisions rather than those who
assume that they aren't possible.

You could have said "However nothing precludes different txs from having
the same hash, but it requires a hash collision".

Thinking about it, a re-org to before the enforcement height could allow
it.  The checkpoints protect against that though.


> As such this is not something that a node
> can just dismiss.


The security of many parts of the system is based on hash collisions not
being possible.


Re: [bitcoin-dev] BIP30 and BIP34 interaction (was Re: [BIP Proposal] Buried Deployments)

2016-11-16 Thread Tier Nolan via bitcoin-dev
On Thu, Nov 17, 2016 at 12:10 AM, Eric Voskuil via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Both of these cases resulted from exact duplicate txs, which BIP34 now
> precludes. However nothing precludes different txs from having the same
> hash.
>

The only way to have two transactions have the same txid is if their
parents are identical, since the txids of the parents are included in a
transaction.

Coinbases have no parents, so it used to be possible for two of them to be
identical.

Duplicate outputs weren't possible in the database, so the later coinbase
transaction effectively overwrote the earlier one.

This happened for two coinbases.  That is what the exceptions are for.

Neither of those coinbases was spent before the overwrite happened.  I
don't even think those coinbases were spent at all.

This means that every active coinbase transaction has a unique hash and
all new coinbases will be unique.

This means that all future transactions will have different txids.

There might not be an explicit rule that says that txids have to be unique,
but barring a break of the hash function, the rules do guarantee it.


Re: [bitcoin-dev] [BIP Proposal] Buried Deployments

2016-11-16 Thread Tier Nolan via bitcoin-dev
On Wed, Nov 16, 2016 at 1:58 PM, Eric Voskuil via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Are checkpoints good now? Are hard forks okay now?
>

I think that at least one checkpoint should be included.  The assumption is
that no 50k re-orgs will happen, and that assumption should be directly
checked.

Checkpointing only needs to happen during the headers-first part of the
download.

If the block at the BIP-65 height is checkpointed, then the comparisons for
the other ones are automatically correct.  They are unnecessary, since the
checkpoint protects all earlier blocks, but many people would like to be
able to verify the legacy chain.

This makes the change a soft-fork rather than a hard fork.  Chains that
don't go through the checkpoint are rejected, but nothing that was
previously invalid becomes valid.


Re: [bitcoin-dev] BIP Number Request: Addresses over Audio

2016-08-11 Thread Tier Nolan via bitcoin-dev
On Thu, Aug 11, 2016 at 2:55 PM, Erik Aronesty via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Sorr, I thought there was some BIP for a public seed such that someone can
> generate new random addresses, but cannot trivially verify whether an
> address was derived from the seed.
>

If you take a public key and multiply it by k, then the recipient can work
out the private key by multiplying their master private key by k.

If k is random, then the recipient wouldn't be able to work it out, but if
it is non-random, then everyone else can work it out.  You need some way to
get k to the recipient without others figuring it out.

This means either the system is interactive or you use a shared secret.

The info about the shared secret is included in the scriptPubKey (or the
more socially conscientious option, an OP_RETURN).

The address would indicate the master public key.

master_public = master_private * G

The transaction contains k*G.

Both sides can compute the shared secret.

secret = k*master_private*G = master_private*k*G

<k*G> DROP DUP HASH160 <hash160 of the derived public key> EQUALVERIFY
CHECKSIG

This adds 34 bytes to the scriptPubKey.
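As a toy illustration of the derivation (my own sketch, with made-up scalar
values and a bare-bones secp256k1 implementation; not production key
handling):

P = 2**256 - 2**32 - 977
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(a, b):
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and (a[1] + b[1]) % P == 0: return None
    if a == b:
        lam = 3 * a[0] * a[0] * pow(2 * a[1], -1, P) % P
    else:
        lam = (b[1] - a[1]) * pow(b[0] - a[0], -1, P) % P
    x = (lam * lam - a[0] - b[0]) % P
    return (x, (lam * (a[0] - x) - a[1]) % P)

def point_mul(k, point):
    result, addend = None, point
    while k:
        if k & 1:
            result = point_add(result, addend)
        addend = point_add(addend, addend)
        k >>= 1
    return result

master_private = 12345                        # toy values only
master_public = point_mul(master_private, G)
k = 67890                                     # sender's one-time value
k_G = point_mul(k, G)                         # included in the transaction

# Both sides compute the same shared secret point.
assert point_mul(k, master_public) == point_mul(master_private, k_G)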

This is pretty heavy for scanning for transactions sent to you.  You have
to check every transaction output to see if it is the given template.  Then
you have to do an ECC multiply to compute the shared secret.  Once you have
the shared secret, you need to do an ECC addition and a hash to figure out
if it matches the public key hash in the output.

This is approx one ECC multiply per output and is similar CPU load to what
you would need to do to actually verify a block.


Re: [bitcoin-dev] BIP Number Request: Addresses over Audio

2016-08-10 Thread Tier Nolan via bitcoin-dev
Have you considered CDMA?  This has the nice property that it just sounds
like noise.  The codes would take longer to send, but you could send
multiple bits at once and have the codes orthogonal.


Re: [bitcoin-dev] BIP clearing house addresses

2016-08-08 Thread Tier Nolan via bitcoin-dev
With channels and the exchange acting as hub, you can do instant trades
between altcoins.

This doesn't work with fiat accounts.  A "100% reserve" company could issue
fiat tokens.  The exchange could then trade those tokens.

This eliminates the counter-party risk for the exchange.  If the exchange
dies, you still have your (alt)coins and also fiat tokens.

There is still risk that the token company could go bankrupt though.  This
could be mitigated by that company requiring only "cashing out" tokens to
accounts which have been verified.

The company could set up a blockchain where it signed the blocks rather
than mining and could get money from transaction fees and also minting fees
(say it charges 1% for minting new tokens).

I wonder how the law would work for that.  It isn't actually doing
trading, it is just issuing tokens and redeeming them.


Re: [bitcoin-dev] BIP clearing house addresses

2016-08-08 Thread Tier Nolan via bitcoin-dev
On Mon, Aug 8, 2016 at 1:48 AM, Matthew Roberts  wrote:

> Not everyone who uses centralized exchanges are there to obtain the
> currency though. A large portion are speculators who need to be able to
> enter and exit complex positions in milliseconds and don't care about
> decentralization, security, and often even the asset that they're buying.
>

Centralized exchanges also allow for things like limit orders.  You don't
even have to be logged in and they can execute trades.  This couldn't be
done with channels.

> Try telling everyone who currently uses Btc-e to go do their margin
> trading over lightning channels, for example.
>

Using channels and a centralized exchange gets many of the benefits of a
distributed exchange.

The channel allows instant funding while allowing the customer to have full
control over the funds.  The customer could fund the channel and then move
money to the exchange when needed.

Even margin account holders might like the fact that it is clear which
funds are under their direct control and which funds are held by the
exchange.

If they are using bitcoin funds as collateral for a margin trade, then
inherently the exchange has to have control over those funds.  A 2 of 3
system where the customer, exchange and a 3rd party arbitration agency
holds keys might be acceptable to the exchange.


Re: [bitcoin-dev] BIP clearing house addresses

2016-08-03 Thread Tier Nolan via bitcoin-dev
On Wed, Aug 3, 2016 at 7:16 PM, Matthew Roberts via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> The reason why I bring this up is existing OP codes and TX types don't
> seem suitable for a secure clearing mechanism;
>

I think reversing transactions is not likely to be acceptable.  You could
add an opcode that requires that an output be set to something.

[target script] SPENDTO

This would require that [target script] is the script for the corresponding
output.  This is a purely local check.

For example, if SPENDTO executes as part of the script for input 3, then it
checks that output 3 uses the given script as its scriptPubKey.  The value
of input 3 and output 3 would have to be the same too.
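A minimal sketch of that local check, using a hypothetical in-memory
transaction structure (my own pseudocode for the idea, not an implemented
opcode):

def check_spendto(tx, input_index, required_script):
    # SPENDTO for input n: the spending transaction must have an output n
    # whose scriptPubKey equals the supplied script and whose value equals
    # the value of the coin being spent by input n.
    if input_index >= len(tx["outputs"]):
        return False
    out = tx["outputs"][input_index]
    coin = tx["inputs"][input_index]["spent_coin"]
    return out["script"] == required_script and out["value"] == coin["value"]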

This allows check sequence verify to be used to lock the spending script
for a while.  This doesn't allow general reversal, but it would give a 24
hour window during which the transaction can be clawed back.

[IF <1 day> CSV DROP <live public key> CHECKSIG ELSE <recovery public key> CHECKSIG] SPENDTO <live public key> CHECKSIG

Someone with the live public key can create a transaction that spends the
funds to the script in the square brackets.

Once that transaction hits the blockchain, then someone with the <recovery
key> has 24 hours to spend the output before the person with the live keys
can send the funds onward.


Re: [bitcoin-dev] Reasons to add sync flags to Bitcoin

2016-07-26 Thread Tier Nolan via bitcoin-dev
On Tue, Jul 26, 2016 at 9:58 PM, Martijn Meijering via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Is there a reason miners would be more likely to engage in selfish
> mining of sync flags than they are now with ordinary blocks?
>


This proposal has the same effect as adding mandatory empty blocks.

POW targeted at 2 minutes means that the POW for the flag is 25% of the
block POW.  That gives a flag every 2 minutes and a block every 8 minutes.

It has the feature that the conversion rate from hashing power to reward is
the same for the flags and the blocks.  A flag gets 25% of the reward for
25% of the effort.

A soft fork to add this rule would have a disadvantage relative to a
competing chain.  It would divert 20% of its hashing power to the flag
blocks, which would be ignored by legacy nodes.  The soft fork would need
55% of the hashing power to win the race.

This isn't that big a deal if a 75% activation threshold is used.  It might
be worth bumping it up to 80% in that case.

This rule would mean that headers-first clients would have to download more
information to verify the longest chain.  If they only download the
headers, they are missing 20% of the POW.


Re: [bitcoin-dev] Making AsicBoost irrelevant

2016-05-10 Thread Tier Nolan via bitcoin-dev
The various chunks in the double SHA256 are

Chunk 1: 64 bytes
version
previous_block_digest
merkle_root[31:4]

Chunk 2: 64 bytes (16 bytes of header data plus padding)
merkle_root[3:0]
timestamp
target
nonce

Chunk 3: 64 bytes
digest from first sha pass

Their improvement requires that all data in Chunk 2 is identical except for
the nonce.  With 4 bytes, the birthday paradox means collisions can be
found reasonably easily.

If hard forks are allowed, then moving more of the merkle root into the 2nd
chunk would make things harder.  The timestamp and target could be moved
into chunk 1.  This increases the merkle root to 12 bytes in the 2nd
chunk.  Finding collisions would be made much more difficult.

If ASIC limitations mean that the nonce must stay where it is, this would
mean that the merkle root would be split into two pieces.
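For reference, a quick sketch of how the 80 byte header splits across the
SHA256 message chunks (placeholder field values; the padding that fills out
the second chunk is left implicit):

import struct

def header_chunks(version, prev_hash, merkle_root, timestamp, bits, nonce):
    header = (struct.pack("<I", version) + prev_hash + merkle_root +
              struct.pack("<III", timestamp, bits, nonce))
    assert len(header) == 80
    return header[:64], header[64:]   # chunk 1, and the data part of chunk 2

chunk1, chunk2 = header_chunks(0x20000000, b"\x00" * 32, b"\x11" * 32,
                               1500000000, 0x1d00ffff, 0)
print(len(chunk1), len(chunk2))   # 64 16
print(chunk2[:4] == b"\x11" * 4)  # the last 4 bytes of the merkle root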

On Tue, May 10, 2016 at 7:57 PM, Peter Todd via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> As part of the hard-fork proposed in the HK agreement(1) we'd like to make
> the
> patented AsicBoost optimisation useless, and hopefully make further similar
> optimizations useless as well.
>
> What's the best way to do this? Ideally this would be SPV compatible, but
> if it
> requires changes from SPV clients that's ok too. Also the fix this should
> be
> compatible with existing mining hardware.
>
>
> 1)
> https://medium.com/@bitcoinroundtable/bitcoin-roundtable-consensus-266d475a61ff
>
> 2)
> http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-April/012596.html
>
> --
> https://petertodd.org 'peter'[:-1]@petertodd.org
>


Re: [bitcoin-dev] p2p authentication and encryption BIPs

2016-03-23 Thread Tier Nolan via bitcoin-dev
There is probably not much loss due to per message encryption.  Even if a
MITM determined that a message was an inv message (or bloom filter
message), it wouldn't be able to extract much information.  Since the
hashes in those messages are fixed size, there is very little leakage.

You could make it so that the encryption messages effectively create a
second data stream and break/weaken the link between message size and
wrapped message size.  This requires state though, so there is a complexity
tradeoff.

There is no real need to include an IV, since you are including a 32 byte
context hash.  The first 16 bytes of the context hash could be used as IV.

In terms of generating the context hash, it would be easier to make it
linear.

context_hash_n = SHA256(context_hash_(n-1) | message_(n-1))

Otherwise, as the session gets longer, both nodes would have to do more and
more hashing to re-compute the hash of the entire conversation.
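A minimal sketch of the linear scheme (my own illustration, not a format
from the BIP):

import hashlib

class RollingContext:
    # Each message is hashed together with the previous context hash, so
    # neither side ever needs to re-hash the whole session.
    def __init__(self, session_id=b""):
        self.context = hashlib.sha256(session_id).digest()

    def update(self, message):
        self.context = hashlib.sha256(self.context + message).digest()
        return self.context

    def iv(self):
        # The first 16 bytes of the context hash double as the next IV.
        return self.context[:16]

ctx = RollingContext(b"example session")
ctx.update(b"message 1")
ctx.update(b"message 2")
print(ctx.iv().hex())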


Re: [bitcoin-dev] Services bit for xthin blocks

2016-03-09 Thread Tier Nolan via bitcoin-dev
On Wed, Mar 9, 2016 at 6:11 PM, G. Andrew Stone via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Thanks for your offer Luke, but we are happy with our own process and,
> regardless of historical provenance, see this mailing list and the BIP
> process as very Core specific for reasons that are too numerous to describe
> here but should be obvious to anyone who has been aware of the last year of
> Bitcoin history.
>

One of the advantages with the BIP process is that it means that there are
hashlocked descriptions of the specs available for people to implement
against.

The BIP process is not the same as getting a PR accepted into core.  It is
not a veto based process.  If you write the BIP and it doesn't have any
serious technical problems, then it will be accepted into the BIP repo.

Getting it marked as "final" is harder but I don't think that matters
much.  I don't think that core would actually use a service bit that was
claimed in a BIP, even if the BIP wasn't final.  Maybe in 20 years if thin
blocks aren't being used, they might recycle it.  It would be pretty
obviously an aggressive act otherwise.

The NODE_GETUTXO bit is a perfect example of that.  They don't think it is
a good idea, but they still accepted the claim on the bit, because there
are nodes actually using it.

On the other hand, the BIP git repository is hosted on the /bitcoin github
site, so in that context it can be seen as linked with core.  I wouldn't be
surprised if that specific objection was raised when it was moved from the
wiki to github.  Luke may be willing to change that if you think that would
be worth changing?

With regards to the proposal, the description on the forum link isn't
sufficient for an alternative client to implement it.  I had a look at the
thread and I think that this is the implementation?

https://github.com/ptschip/bitcoinxt/commit/7ea5854a3599851beffb1323544173f03d45373b

Is the intention here to simply reserve the bit for thin blocks usage or to
define the specification for inter-operation with other clients?

Perhaps there could be a process for claiming service bits as it can be
useful to claim a bit in advance of actually finalizing the feature.

- Claim bit with a reasonable justification (good faith intent to implement
and the bit is useful for the feature)
- Within 3 months have a finalized description of the feature that lets
other clients implement it
- Within 6 months have working software that deploys the feature
- After 6 months of it actually being in active use, the bit is "locked"
and stays assigned to that feature

There could be an expiry process if it ends up not being used after all.
Requiring a public description of the feature seems like a reasonable
requirement in exchange for the community assigning the service bit, but we
don't want to go too far.  There is no point in having lots of free bits
that end up never being used.  Worst case, the addr message could be
updated to add more bits.


Re: [bitcoin-dev] Hardfork to fix difficulty drop algorithm

2016-03-02 Thread Tier Nolan via bitcoin-dev
On Wed, Mar 2, 2016 at 4:27 PM, Paul Sztorc via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> For example, it is theoretically possible that 100% of miners (not 50%
> or 10%) will shut off their hardware. This is because it is revenue
> which ~halves, not profit.


It depends on how much is sunk costs and how much is marginal costs too.

If hashing costs are 50% capital and 50% marginal, then the entire network
will be able to absorb a 50% drop in subsidy.

50% capital costs means that the cost of the loan to buy the hardware
represents half the cost.

Assume that for every $100 of income, you have to pay $49 for the loan and
$49 for electricity giving 2% profit.  If the subsidy halves, then you only
get $50 of income, so lose $48.

But if the bank repossesses the operation, they might as well keep things
running for the $1 in marginal profit (or sell on the hardware to someone
who will keep using it).

Since this drop in revenue is well known in advance, businesses will spend
less on capital.  That means that there should be less mining hardware than
otherwise.

A 6 month investment with 3 months on the high subsidy and 3 months on low
subsidy would not be made if it only generated a small profit for the first
3 and then massive losses for the 2nd period of 3 months.  For it to be
made, there needs to be large profit during the first period to compensate
for the losses in the 2nd period.


Re: [bitcoin-dev] Hardfork to fix difficulty drop algorithm

2016-03-02 Thread Tier Nolan via bitcoin-dev
If a hard-fork is being considered, the easiest is to just step the
difficulty down by a factor of 2 when the adjustment happens.

This means that miners still get paid the same minting fee per hash as
before.  There isn't that much risk.  If the hashing power stays constant,
then there will be 5 minute blocks for a while until everything readjusts.

Nearly the same can be accomplished by a soft fork.

Proposal:

If 900 of the last 1000 blocks are block version X or above, then the
smooth change rule applies.

The adjustment is as follows

big_number get_new_target(int height, big_number old_target) {
    if (height < 405000)
        return old_target;
    else if (height < 420000)
        return (old_target * 15000) / (height - 390000);
    else
        return old_target;
}

What this does is ramp up the difficulty slowly from 405,000 to 420,000.
It ends up with a target that is 50% of the value stored in target bits.
These blocks are valid since they have twice as much POW as normally
required.
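For illustration, the same ramp in Python, assuming the reconstructed
405,000 / 420,000 / 390,000 constants above:

def new_target_multiplier(height):
    # Fraction of the stored target that a block at this height must meet.
    if height < 405000 or height >= 420000:
        return 1.0
    return 15000.0 / (height - 390000)

for h in (405000, 409000, 412500, 417000, 419999):
    print(h, round(new_target_multiplier(h), 3))
# 405000 1.0, 409000 0.789, 412500 0.667, 417000 0.556, 419999 0.5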

For block 420,000, the difficulty drops by 2 and the reward drops by 2 at
the same time.  This means that miners still get paid the same BTC per
hash.  It would mean 5 minute blocks until the next adjustment though.

If 90% of the network are mining the artificially hard blocks, then a  10%
fork still loses.  The 90% has an effective hash rate of 45% vs the 10%.

It is unlikely that miners would accept the fork, since they lose minting
fees.  It effectively brings the subsidy reduction forward in time.


Re: [bitcoin-dev] BIP CPRKV: Check private key verify

2016-02-29 Thread Tier Nolan via bitcoin-dev
On Mon, Feb 29, 2016 at 10:58 AM, Mats Jerratsch  wrote:

> This is actually very useful for LN too, see relevant discussion here
>
>
> http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-November/011827.html
>

Is there much demand for trying to code up a patch to the reference
client?  I did a basic one, but it would need tests etc. added.

I think that segregated witness is going to be using up any potential
soft-fork slot for the time being anyway.


Re: [bitcoin-dev] Fast bootstrapping with a pre-generated UTXO-set database

2016-02-29 Thread Tier Nolan via bitcoin-dev
One of the proposals was to build the UTXO set backwards.  You start from
the newest block and work backwards.

The database contains UTXOs (unspent transaction outputs) and "UFTXI"
(unfunded transaction inputs).

The procedure would be

For each transaction (last to first ordering)
For each output
- check if it is in the UFTXI set
-- If so, validate the signatures
-- If not, add it to the UTXO set

For each input
- Add to the UFTXI set
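A minimal sketch of that backwards scan (toy data structures of my own, with
signature validation stubbed out):

def scan_backwards(blocks):
    # blocks is ordered newest-first; each tx is (txid, spent_outpoints, n_outputs)
    utxo = set()    # outputs never seen spent so far
    uftxi = set()   # inputs whose funding output hasn't been seen yet
    for block in blocks:
        for txid, inputs, n_outputs in block:
            for n in range(n_outputs):
                if (txid, n) in uftxi:
                    uftxi.discard((txid, n))   # validate the spend's signatures here
                else:
                    utxo.add((txid, n))
            for outpoint in inputs:
                uftxi.add(outpoint)
    return utxo, uftxi

# Newest block first: tx "b" spends output 0 of the older tx "a".
blocks = [[("b", [("a", 0)], 1)],
          [("a", [], 2)]]
utxo, unfunded = scan_backwards(blocks)
print(sorted(utxo))      # [('a', 1), ('b', 0)]
print(sorted(unfunded))  # []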

When you receive a transaction, it checks all the inputs
-- If all inputs are in the UTXO set, it says confirmed
-- Otherwise, gets marked as "unknown inputs"

There would also be a counter indicating how many blocks it has validated.

A transaction with an unfunded input counts as validated back to the block
it was included in.  Transactions count as confirmed to their ancestor that
has the newest validation time.

Assume that the node had validated the last 10,000 blocks and you had a
transaction with one input.  Assume the input transaction was included 5000
blocks ago and its input was included 50,000 blocks ago.

TX-A) input (TX-B:0), included in a block 6 blocks ago
TX-B) input (TX-C:0), included in a block 5000 blocks ago
TX-C) included in a block 50,000 blocks ago

TX-C would not be known to the node since it has only gone back 10,000
blocks.

TX-A would have confirms 6 / 5000.  This means that its outputs have been
confirmed by 6 blocks (confirms work as currently) and that its inputs have
been confirmed by 5000 blocks.

The reference client could mark transactions with 6+ output confirms and
1000+ input confirms as confirmed.

Once it hits the genesis block, the input side is validated all the way back
for every transaction, so it could drop the second number.


On Mon, Feb 29, 2016 at 10:29 AM, Jonas Schnelli via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> Hi
>
> I’ve been thinking around a solution to reduce nodes bootstrap time
> (IBD) as well as a way to reduce the amount of bandwidth/network usage
> per node.
> Not sure if this idea was/is already discussed, haven’t found anything
> in a quick research.
>
>
> ==Title==
> Fast bootstrapping with a pre-generated UTXO-set database.
>
> ==Abstract==
> This documents describes a way how bitcoin nodes can bootstrap faster
> by loading a pre-generated UTXO-set datafile with moderate reduction
> of the security model.
>
> ==Specification==
> Bitcoin-core or any other full node client will need to provide a
> feature to "freeze" the UTXO-set at a specified height (will require a
> reindex). The frozen UTXO-set – at a specific height – will be
> deterministic linearized in a currently not specified
> data-serializing-format.
> Additionally, a serialized form of the current chain-index (chain
> containing all block-headers) up to the specified height will be
> appended to the pre-generated UTXO-set-datafile.
> The datafile will be hashed with a double SHA256.
>
> The corresponding hash will be produced/reproduced and signed (ECDSA)
> by a group of developers, ideally the same group of developers who are
> also signing deterministic builds (binary distribution).
>
> Full node client implementations that supports bootstrapping from a
> pre-generated UTXO-set, need to include...
> 1.) a set of pubkeys from trusted developers
> 2.) the hash (or hashes) of the pre-generated UTXO-set-datafile(s)
> 3.) n signatures of the hash(es) from 2) from a subset of developers
> defined in 1)
>
> To guarantee the integrity of developers pubkeys & signatures, methods
> like the current gitian build, used in bitcoin-core, must be used.
>
> New nodes could download a copy of the pre-generated UTXO-set, hash
> it, verify the hash against the allowed UTXO-sets, verify the ECDSA
> signatures from various developers, and continue bootstrapping from
> the specified height if the users accepts the amount of valid signatures
> .
>
> Sharing of the pre-generated UTXO-set can be done over CDNs,
> bit-torrent or any other file hosting solution. It would also be
> possible to extend the bitcoin p2p layer with features to
> distribute/share a such pre-generated UTXO-set, in chunks and with the
> according hashes to detect invalidity before downloading the whole
> content (but would probably end up in something very similar to
> bit-torrent).
>
>

Re: [bitcoin-dev] The first successful Zero-Knowledge Contingent Payment

2016-02-26 Thread Tier Nolan via bitcoin-dev
On Fri, Feb 26, 2016 at 11:45 PM, Gregory Maxwell  wrote:

> Why not use the single-show-signature scheme I came up with a while
> back on the Bitcoin side to force the bitcoin side to reveal a private
> key?
>
>
> http://lists.linuxfoundation.org/pipermail/lightning-dev/2015-November/000344.html
>

Thanks for the info, I will give it a look.


Re: [bitcoin-dev] The first successful Zero-Knowledge Contingent Payment

2016-02-26 Thread Tier Nolan via bitcoin-dev
That is very interesting.

There has been some recent discussion about atomic cross chain transfers
between Bitcoin and legacy altcoins.  For this purpose a legacy altcoin is
one that has strict IsStandard() rules and none of the advanced script
opcodes.

It has a requirement that Bob sends Alice a pair [hash_of_bob_private_key,
bob_public_key].  Bob has to prove that the hash is actually the result of
hashing the private key that matches bob_public_key.

This can be achieved with a cut-and-choose scheme.  It uses a fee so that
an attacker loses money on average.  It is vulnerable to an attacker who
doesn't mind losing money as long as the target loses money too.

Bob would have to prove that he has an x such that

xG = bob_public_key
hash(x) = hash_of_bob_private_key

Is the scheme fast enough such that an elliptic curve multiply would be
feasible?  You mention 20 seconds for 5 SHA256 operations, so I am guessing
no?



On Fri, Feb 26, 2016 at 11:06 PM, Sergio Demian Lerner via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Congratulations!
>
> It a property of the SKCP system that the person who performed the trusted
> setup cannot extract any information from a proof?
>
> In other words, is it proven hard to obtain information from a proof by
> the buyer?
>
> On Fri, Feb 26, 2016 at 6:42 PM, Gregory Maxwell via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> I am happy to announce the first successful Zero-Knowledge Contingent
>> Payment (ZKCP) on the Bitcoin network.
>>
>> ZKCP is a transaction protocol that allows a buyer to purchase
>> information from a seller using Bitcoin in a manner which is private,
>> scalable, secure, and which doesn’t require trusting anyone: the
>> expected information is transferred if and only if the payment is
>> made. The buyer and seller do not need to trust each other or depend
>> on arbitration by a third party.
>>
>> Imagine a movie-style “briefcase swap” (one party with a briefcase
>> full of cash, another containing secret documents), but without the
>> potential scenario of one of the cases being filled with shredded
>> newspaper and the resulting exciting chase scene.
>>
>> An example application would be the owners of a particular make of
>> e-book reader cooperating to purchase the DRM master keys from a
>> failing manufacturer, so that they could load their own documents on
>> their readers after the vendor’s servers go offline. This type of sale
>> is inherently irreversible, potentially crosses multiple
>> jurisdictions, and involves parties whose financial stability is
>> uncertain–meaning that both parties either take a great deal of risk
>> or have to make difficult arrangement. Using a ZKCP avoids the
>> significant transactional costs involved in a sale which can otherwise
>> easily go wrong.
>>
>> In today’s transaction I purchased a solution to a 16x16 Sudoku puzzle
>> for 0.10 BTC from Sean Bowe, a member of the Zcash team, as part of a
>> demonstration performed live at Financial Cryptography 2016 in
>> Barbados. I played my part in the transaction remotely from
>> California.
>>
>> The transfer involved two transactions:
>>
>> 8e5df5f792ac4e98cca87f10aba7947337684a5a0a7333ab897fb9c9d616ba9e
>> 200554139d1e3fe6e499f6ffb0b6e01e706eb8c897293a7f6a26d25e39623fae
>>
>> Almost all of the engineering work behind this ZKCP implementation was
>> done by Sean Bowe, with support from Pieter Wuille, myself, and Madars
>> Virza.
>>
>>
>> Read more, including technical details at
>>
>> https://bitcoincore.org/en/2016/02/26/zero-knowledge-contingent-payments-announcement/
>>
>> [I hope to have a ZKCP sudoku buying faucet up shortly. :) ]


Re: [bitcoin-dev] Multi-Stage Merge-Mine Headers Hard-Fork BIP

2016-02-24 Thread Tier Nolan via bitcoin-dev
You need more detail for it to be a BIP.

New Header

new_header.prev = hash of previous header's bitcoin header
new_header.small_nonce = 4 byte nonce
new_header.big_nonce = 8 byte nonce

new_header (Can contain any new fields desired)

Fake Block

block.version = 4
block.prev = new_header.prev
block.merkle = calculate_merkle(coinbase)
block.timestamp = block.getPreviousBlock().median_time_past + 1
block.bits = calculate_bits()
block.nonce = new_header.small_nonce
block.tx_count = 1

Coinbase

coinbase.version = 1
coinbase.tx_in_count = 0
coinbase.tx_out_count = 1
coinbase.tx_out[0].value = 0
coinbase.tx_out[0].pk_script = "OP_RETURN"

This is a "nuclear option" attack that knocks out the main chain.  The
median time past will increase very slowly.  It only needs to increase by 1
every 6 blocks.  That gives an increase of 336 seconds for every
difficulty update.  The adjustment therefore always hits the cap, so the
difficulty increases by the maximum 4X at every retarget.

The new headers will end up not meeting the difficulty, so they will
presumably just repeat the last header?

If the bitcoin chain stays at constant difficulty, then each quadrupling
will take more time.

After 2 weeks: 4XDiff   (2 weeks per diff period)
After 10 weeks: 16XDiff (8 weeks per diff period)
After 42 weeks: 64XDiff (32 weeks per diff period)
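The arithmetic behind those numbers, assuming every retarget hits the 4X cap
and the real hashrate stays constant:

difficulty, weeks_elapsed, period_weeks = 1, 0, 2
for _ in range(3):
    weeks_elapsed += period_weeks
    difficulty *= 4
    print(f"after {weeks_elapsed} weeks: {difficulty}x difficulty "
          f"({period_weeks} weeks for that period)")
    period_weeks *= 4
# after 2 weeks: 4x, after 10 weeks: 16x, after 42 weeks: 64x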


On Wed, Feb 24, 2016 at 5:52 AM, James Hilliard via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> https://github.com/bitcoin/bips/pull/340
>
> BIP: ?
> Title: 2016 Multi-Stage Merge-Mine Headers Hard-Fork
> Author: James Hilliard 
> Status: Draft
> Type: Standards Track
> Created: 2016-02-23
>
> ==Abstract==
>
> Use a staged hard fork to implement a headers format change that is
> merge mine incompatible along with a timewarp to kill the previous
> chain.
>
> ==Specification==
>
> We use a block version flag to activate this fork when 3900 out of the
> previous 4032 blocks have this the version flag set. This flag locks
> in both of the below stages at the same time.
>
> Merge Mine Stage: The initial hard fork is implemented using a merge
> mine which requires that the original pre-fork chain be mined with a
> generation transaction that creates no new coins in addition to not
> containing any transactions. Additionally we have a consensus rule
> that requires that ntime be manipulated on the original chain to
> artificially increase difficulty and hold back the original chain so
> that all non-upgraded clients can never catch up with current time.
> The artificial ntime is implemented as a consensus rule for blocks in
> the new chain.
>
> Headers Change Stage: This is the final stage of the hard fork where
> the header format is made incompatible with merge mining, this is
> activated ~50,000 blocks after the Merge Mine Stage and only at the
> start of the 2016 block difficulty boundary.
>
> ==Motivation==
>
> There are serious issues with pooled mining such as block withhold
> attacks that can only be fixed by making major changes to the headers
> format.
>
> There are a number of other desirable header format changes that can
> only be made in a non-merge mine compatible way.
>
> There is a high risk of there being two viable chains if we don't have
> a way to permanently disable the original chain.
>
> ==Rationale==
>
> Our solution is to use a two stage hard fork with a single lock in period.
>
> The first stage is designed to kill off the previous chain by holding
> back ntime to artificially increase network difficulty on the original
> chain to the point where it would be extremely difficult to mine the
> 2016 blocks needed to trigger a difficulty adjustment. This also makes
> it obvious to unupgraded clients that they are not syncing properly
> and need to upgrade.
>
> By locking in both stages at the same time we ensure that any clients
> merge mining are also locked in for the headers change stage so that
> the original chain is dead by the time the headers change takes place.
>
> We timewarp over a year of merge mining to massively increase the
> difficulty on the original chain to the point that it would be
> incredibly expensive to reduce the difficulty enough that the chain
> would be able to get caught up to current time.
>
> ==Backward Compatibility==
>
> This hardfork will permanently disable all nodes, both full and light,
> which do not explicitly add support for it.
> However, their security will not be compromised due to the implementation.
> To migrate, all nodes must choose to upgrade, and miners must express
> supermajority support.
>
> ==Reference Implementation==
>
> TODO


[bitcoin-dev] Sig-Witness and legacy outputs

2016-02-18 Thread Tier Nolan via bitcoin-dev
I wrote a bip last year about extended transaction information.  The idea
was to include the scriptPubKey that was being spent along with
transactions.

https://github.com/TierNolan/bips/blob/extended_transactions/bip-etx.mediawiki

This makes it possible to verify the transactions locally.  An
extended transaction would contain the current transaction and also the
CTxOuts that are being spent.

For each entry in the UTXO set, a node could store

UTXO_hash = hash(txid_parent | n | CTxOut)
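A sketch of what the stored digest could look like; the serialization here
is purely illustrative, not a defined format:

import hashlib, struct

def utxo_hash(parent_txid, n, value, script_pubkey):
    # hash(txid_parent | n | CTxOut), with CTxOut as value + script length + script
    ctxout = struct.pack("<q", value) + bytes([len(script_pubkey)]) + script_pubkey
    return hashlib.sha256(parent_txid + struct.pack("<I", n) + ctxout).digest()

script = bytes.fromhex("76a914") + b"\x00" * 20 + bytes.fromhex("88ac")
print(utxo_hash(b"\x00" * 32, 1, 50000, script).hex())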

Witness transactions will do something similar.  I wonder if it would be
possible to include the CTxOut for each input that isn't a segregated
witness output, as part of the witness data.  Even for witness data, it
would be good to commit to the value of the output as part of the witness.

There was a suggestion at one of the conferences to have the witness data
include info about the block height/index of the output that each input is
spending.

The effect of this change is that nodes would only have to store the
UTXO_hashes for each UTXO value in the database.  This would make it much
more efficient.

It would also make it easier to create a simple consensus library.  You
give the library the transaction and the witness and it returns the
UTXO_hashes that are spent, the UTXO_hashes that are created, the fee,
sigops and anything that needs to be summed.

Validating a block would mostly (famous last words) mean validating the
transactions in the block and then adding up the totals.

The advantage of including the info with the transactions is that it saves
each node having to include a lookup table to find the data.


Re: [bitcoin-dev] BIP CPRKV: Check private key verify

2016-02-12 Thread Tier Nolan via bitcoin-dev
On Fri, Feb 12, 2016 at 5:02 AM,  wrote:

> Seems it could be done without any new opcode:
>

The assumption was that the altcoin would only accept standard output
scripts.  Alice's payment in step 2 pays to a non-standard script.

This is an improvement over the cut and choose, but it will only work for
coins which allow non-standard scripts (type 2 in the BIP).

I guess I was too focused on maintaining standard scripts on the altcoin.


Re: [bitcoin-dev] Soft fork fix for block withholding attacks

2016-02-12 Thread Tier Nolan via bitcoin-dev
If clients were designed to warn their users when a soft fork happens, then
it could be done reasonably safely.  The reference client does this (or is
it just for high POW softforks?), but many SPV clients don't.

If there was a delay between version number changing and the rule
activation, at least nodes would get a warning recommending that they
update.

* At each difficulty interval, if 950 of the last 1000 blocks have the new
version number, reject the old version blocks from then on.

* Start new target at 255, the least significant byte must be less than or
equal to the target

* Update target at each difficulty re-targetting

T = ((T << 3) - T) >> 3

This increases the difficulty by around 12.5% per fortnight.   After 64
weeks, the target would reach 0 and stay there meaning that the difficulty
would be 256 times higher than what is given in the header.

An attacker with 2% of the network power could create 5 blocks for every
block produced by the rest of the network.
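A quick check of that schedule (my own sketch; it just iterates the update
rule above):

T, fortnights = 255, 0
while T > 0:
    T = ((T << 3) - T) >> 3
    fortnights += 1
print(fortnights, "retargets, roughly", fortnights * 2,
      "weeks, until the target reaches 0")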


[bitcoin-dev] BIP CPRKV: Check private key verify

2016-02-11 Thread Tier Nolan via bitcoin-dev
There was some discussion on the bitcointalk forums about using CLTV for
cross chain transfers.

Many altcoins don't support CLTV, so transfers to those coins cannot be
made secure.

I created a protocol.  It uses on cut and choose to allow commitments to
publish private keys, but it is clunky and not entirely secure.

I created a BIP draft for an opcode which would allow outputs to be locked
unless a private key was published that matches a given public key.

https://github.com/TierNolan/bips/blob/cpkv/bip-cprkv.mediawiki


Re: [bitcoin-dev] BIP CPRKV: Check private key verify

2016-02-11 Thread Tier Nolan via bitcoin-dev
On Thu, Feb 11, 2016 at 10:20 PM, Thomas Kerin 
wrote:

> I wonder if this is possible as a soft fork without using segwit?
> Increasing the sigop count for a NOP would be a hard fork, but such a
> change would be fine with a new segwit version. It might require specific
> support in the altcoin, which might be troublesome..

It is a soft fork since it makes things that were previously allowed
disallowed.  If it decreased the sigop count, then you could create a block
that had too many sigops under the old rules.

With this rule, it increases the count.  If the sigop count is valid under
the new rules, it is also valid under the old rules.

There is no need for specific support on the altcoin.  It allows the
Bitcoin network act as trusted 3rd party so that you can do channels safely
on the altcoin, even though the altcoin still suffers from malleability and
doesn't have OP_CHECKLOCKTIMEVERIFY.

With regards to seg-witness, ideally the opcode would work in both old and
new scripts by re-purposing OP_NOP3.


Re: [bitcoin-dev] BIP proposal: Increase block size limit to 2 megabytes

2016-02-10 Thread Tier Nolan via bitcoin-dev
On Wed, Feb 10, 2016 at 6:14 AM, David Vorick via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> I'm not clear on the utility of more nodes. Perhaps there is significant
> concern about SPV nodes getting enough bandwidth or the network struggling
> from the load?
>

It is unfortunate that when pruning is activated, the NODE_NETWORK bit is
cleared.  This means that supporting SPV clients means running full nodes
without pruning.  OTOH, a pruning node could support SPV clients that sync
more often than once every few days, especially if it stores a few GB of
block data.


Re: [bitcoin-dev] BIP proposal: Increase block size limit to 2 megabytes

2016-02-07 Thread Tier Nolan via bitcoin-dev
On Sun, Feb 7, 2016 at 7:03 PM, Patrick Strateman via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> I would expect that custodians who fail to produce coins on both sides
> of a fork in response to depositor requests will find themselves in
> serious legal trouble.
>

If the exchange uses a UTXO from before the fork to pay their clients,
then they are guaranteed to count as paying on all forks.  The exchange
doesn't need to specifically pay out for each fork.

As long as the exchange doesn't accidentally double spend an output, even
change addresses are valid.

It is in handling post-fork deposits that the problem can occur.  If they
receive coins on only one fork, the client could end up being credited with
funds on both forks.

The easiest thing would be to refuse to accept deposits for a while
before/after the fork happens.


Re: [bitcoin-dev] Time to worry about 80-bit collision attacks or not?

2016-01-11 Thread Tier Nolan via bitcoin-dev
On Fri, Jan 8, 2016 at 3:46 PM, Gavin Andresen via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> How many years until we think a 2^84 attack where the work is an ECDSA
> private->public key derivation will take a reasonable amount of time?
>

I think the EC multiply is not actually required.  With compressed public
keys, the script selection rule can just be a sha256 call instead.

V is the public key of the victim, and const_pub_key is the attacker's
public key.

if prev_hash % 2 == 0:
    script = "2 V 0x02%s 2 CHECKMULTISIG" % sha256(prev_hash)
else:
    script = "%s CHECKSIG %s OP_DROP" % (const_pub_key, prev_hash)

next_hash = ripemd160(sha256(script))

If a collision is found, there is a 50% chance that the two scripts have
different parity and there is a 50% chance that a compressed key is a valid
key.

This means that you need to run the algorithm 4 times instead of 2.

The advantage is that each step is 2 sha256 calls and a ripemd160 call.  No
EC multiply is required.
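A runnable sketch of one step of that iteration using hashlib (the script
strings are stand-ins for serialized scripts, and it assumes your hashlib
build exposes ripemd160):

import hashlib

def candidate_hash(prev_hash_int, victim_pubkey_hex, attacker_pubkey_hex):
    prev_bytes = prev_hash_int.to_bytes(20, "big")
    if prev_hash_int % 2 == 0:
        # Fabricate a "compressed key" from the previous hash.
        fake_key = "02" + hashlib.sha256(prev_bytes).hexdigest()
        script = "2 %s %s 2 CHECKMULTISIG" % (victim_pubkey_hex, fake_key)
    else:
        script = "%s CHECKSIG %s OP_DROP" % (attacker_pubkey_hex, prev_bytes.hex())
    inner = hashlib.sha256(script.encode()).digest()
    return int.from_bytes(hashlib.new("ripemd160", inner).digest(), "big")

x = 1
for _ in range(5):   # iterate, feeding each 160-bit result back in
    x = candidate_hash(x, "02" + "11" * 32, "03" + "22" * 32)
print(hex(x))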


Re: [bitcoin-dev] New BIP editor, and request for information

2016-01-11 Thread Tier Nolan via bitcoin-dev
On Thu, Jan 7, 2016 at 5:10 PM, Luke Dashjr  wrote:

> - BIP 46 is missing from the repository, but apparently self-soft-assigned
> by
> Tier Nolan in
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2014-April/005545.html
> ; if this was later assigned official, or if he is still
> interested in pursuing this, it seems logical to just keep it at BIP 46.
>

I was never officially assigned any number for this.

Subsequent P2SH changes give the required functionality in an alternative
way.  This renders the BIP obsolete.

I suggest marking the number as nonassignable, in order to prevent
confusion with archive searches.  I assume that new BIP numbers will be
greater than 100 anyway.

As was pointed out at the time, I shouldn't have used a number in the
original git branch before being assigned it officially.


Re: [bitcoin-dev] A new payment address format for segregated witness or not?

2015-12-21 Thread Tier Nolan via bitcoin-dev
On Mon, Dec 21, 2015 at 5:14 AM, jl2012 via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> The SW in P2SH is worse in terms of:
> 1. It requires an additional push in scriptSig, which is not prunable in
> transmission, and is counted as part of the core block size
>

"Prunable in transmission" means that you have to include it when not
sending the witnesses?

That is a name collision with UTXO set prunable.  My initial thought when
reading that was "but scriptSigs are inherently prunable, it is
scriptPubKeys that have to be held in the UTXO database" until I saw the
"in transmission" clarification.


Re: [bitcoin-dev] We need to fix the block withholding attack

2015-12-20 Thread Tier Nolan via bitcoin-dev
On Sun, Dec 20, 2015 at 5:12 AM, Emin Gün Sirer <
bitcoin-dev@lists.linuxfoundation.org> wrote:

>  An attacker pool (A) can take a certain portion of its hashpower,
> use it to mine on behalf of victim pool (B), furnish partial proofs of work
> to B, but discard any full blocks it discovers.
>

I wonder if part of the problem here is that there is no pool identity
linked to mining pools.

If the mining protocols were altered so that miners had to indicate their
identity, then a pool couldn't forward hashing power to their victim.

If the various mining protocols were updated, they could allow checking
that the work has the domain name of the pool included.  Pools would have
to include their domain name in the block header.

A pool which provides this service is publicly saying that they will not
use the block withholding attack.  Any two pools which are doing it cannot
attack each other (since they have different domain names).  This creates
an incentive for pools to start supporting the feature.

Owners of hashing power also have an incentive to operate with pools which
offer this identity.  It means that they can ensure that they get a payout
from any blocks found.

Hosted mining is weaker, but even then, it is possible for mining hosts to
provide proof that they performed mining.  This proof would include the
identity of the mining pool.  Even if the pool was run by the host, it
would still need to have the name embedded.

Mining hosts might be able to figure out which of their customers actually
check the identity info, and then they could redirect the mining power of
those who generally don't check.  If customers randomly ask for all of the
hashing power, right back to when they joined, then this becomes expensive.

Mining power directly owned by the pool is also immune to this effect.


Re: [bitcoin-dev] We need to fix the block withholding attack

2015-12-20 Thread Tier Nolan via bitcoin-dev
On Sun, Dec 20, 2015 at 12:42 PM, Natanael  wrote:

> If total difficulty is X and the ratio for full blocks to candidate blocks
> shared with the pool is Y, then the candidate block PoW now has to meet X/Y
> while hashing the candidate block PoW + the pool's commitment hash must
> meet Y, which together makes for X/Y*Y and thus the same total difficulty.


This gives the same total difficulty but miners are throwing away otherwise
valid blocks.

This means that it is technically a soft fork.  All new blocks are valid
according to the old rule.

In practice, it is kind of a hard fork.  If Y is 10, then all upgraded
miners are throwing away 90% of the blocks that are valid under the old
rules.

From the perspective of non-upgraded clients, the upgraded miners operate
at a 10X disadvantage.

This means that someone with 15% of the network power has a majority of the
effective hashing power, since 15% is greater than 8.5% (85% * 0.1).

The slow roll-out helps mitigate this though.  It gives non-upgraded
clients time to react.  If there is only a 5% difference initially, then
the attacker doesn't get much benefit.

> The main differences are that there's a public key identifier the miners
> are told about in advance and expect to see in block templates, and that
> that now the pool has to publish this commitment value together with the
> block that also contains the commitment hash, and that this is verified
> together with the PoW.


I don't think public keys are strictly required.  Registering them with
DNSSEC is way over the top.  They can just publish the key on their website
and then use that for their identity.


Re: [bitcoin-dev] Increasing the blocksize as a (generalized) softfork.

2015-12-20 Thread Tier Nolan via bitcoin-dev
This is essentially the "nuclear option".  You are destroying the current
chain (converting it to a chain of coinbases) and using the same POW to
start the new chain.  You are also giving everyone credit in the new chain
equal to their credit in the old chain.

It would be better if the current chain wasn't destroyed.

This could be achieved by adding the hash of an extended block into the
coinbase but not requiring the coinbase to be the only transaction.

The new block is the legacy block plus the associated extended block.

Users would be allowed to move money to the extended block by spending it
to a specific output template.

<script hash> OP_1 OP_TO_EXTENDED OP_TRUE

OP_1 is the extended block index and initially, only one level is available.

This would work like P2SH.  Users could spend the money on the extended
block chain exactly as they could on the main chain.

Money can be brought back the same way.

<txid 1> <txid 2> ... <txid n> OP_0 OP_UNLOCK OP_TRUE

The txids are for transactions that have been locked in the root chain.  The
transaction is only valid if they are all fully funded.  The effective fee
for the transaction would be its raw fee minus the cost of funding the
unlocked txids.  A transaction with a negative effective fee would be invalid.

This has the advantage that it keeps the main chain operating.  People can
still send money with their un-upgraded clients.  There is also an
incentive to move funds to the extended block(s).  The new extended blocks
are more complex, but potentially have lower fees.  Nobody is forced to
change.  If the large blocks aren't needed, nobody will bother to use them.

The rule could be

Now:
0) 1 MB

After change over
0) 1 MB
1) 2 MB

After 2 years
0) 1 MB
1) 2 MB
2) 4MB

After 4 years
0) 1 MB
1) 2 MB
2) 4MB
3) 8MB


Re: [bitcoin-dev] Block size: It's economics & user preparation & moral hazard

2015-12-18 Thread Tier Nolan via bitcoin-dev
On Thu, Dec 17, 2015 at 7:44 PM, Peter Todd  wrote:

> If Bitcoin remains decentralized, miners have veto power over any
> blocksize increases. You can always soft-fork in a blocksize reduction
> in a decentralized blockchain that actually works.
>

The actual users of the system have significant power, if they (could)
choose to use it.  There are "chicken" effects though.  They can impose
costs on the other participants but using those options harms themselves.
If the cost of inaction is greater than the costs of action, then the
chicken effects go away.

In the extreme, they could move away from decentralisation and the concept
of miners and have a centralised checkpointing system.  This would bankrupt
the miners, but it would cost the users the decentralised nature of the
system.

At a lower extreme, they could change the mining hash function.  This would
devalue all of the miner's investments.  A whole new program of ASIC
investments would have to happen and the new miners would be significantly
different.  It would also establish that merchants and users are not to be
ignored.  On the other hand, bankrupting miners would make it harder to
convince new miners to make the actual investments in ASICs required to
establish security.

As a gesture, if merchants and exchanges wanted to get their "seat" at the
table, they could create a representative group that insists on a trivial
soft fork.  For example, they could say that they will not accept any block
from block N to block N + 5000 that doesn't have a specific bit set in the
version.

Miners have an advantage where they can say that they have the majority of
the hashing power.  As part of the public action problem that merchants
face, there is no equivalent metric.


Re: [bitcoin-dev] Block size: It's economics & user preparation & moral hazard

2015-12-17 Thread Tier Nolan via bitcoin-dev
On Wed, Dec 16, 2015 at 9:11 PM, Pieter Wuille via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> We are not avoiding a choice. We don't have the authority to make a choice.
>

This is really the most important question.

Bitcoin is kind of like a republic where there is separation of powers
between various groups.

The power blocs in the process include

- Core Devs
- Miners
- Exchanges
- Merchants
- Customers

Complete agreement is not required for a change.  If merchants and their
customers were to switch to different software, then there is little any of
the other groups could do.

Consensus is nice, certainly, and it is a good social norm to seek
widespread agreement before committing to a decision over objections.
Committing to no block increase is also committing to a decision against
objections.

Having said that, each of the groups are not equal in power and
organisation.

Merchants and their customers have potentially a large amount of power, but
they are disorganised.  There is little way for them to formally express a
view, much less put their power behind making a change.  Their potential
power is crippled by collective action problems.

On the other extreme is the core devs. Their power is based on legitimacy
due to having a line of succession starting with Satoshi and respect gained
due to technical and political competence.  Being a small group, they are
organised and they are also more directly involved.

The miners are less centralised, but statements supported by the majority
of the hashing power are regularly made.  The miners' position is that they
want dev consensus.  This means that they have delegated their decision
making to the core devs.

This means that the two most powerful groups in Bitcoin have given the core
devs the authority to make the decision.  They don't have carte blanche
from the miners.

If the core devs made the 2MB hard-fork with a 75% miner threshold, it is
highly likely that the other groups would accept it.

That is the only authority that exists in Bitcoin.  The check is that if
the authority is abused, the other groups can simply leave (or use
checkpointing)
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Forget dormant UTXOs without confiscating bitcoin

2015-12-13 Thread Tier Nolan via bitcoin-dev
On Sun, Dec 13, 2015 at 6:11 PM, jl2012--- via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Back to the topic, I would like to further elaborate my proposal.
>
> We have 3 types of full nodes:
>
> Archive nodes: full nodes that store the whole blockchain
> Full UTXO nodes: full nodes that fully store the latest UTXO state, but
> not the raw blockchain
> Lite UTXO nodes: full nodes that store only UTXOs created in the past
> 420,000 blocks
>

There is a risk that miners would eventually react by just refusing to
accept blocks that spend dormant outputs.  This is a risk even without the
protocol change, but I think that if there are already lots of UTXO-lite nodes
deployed, it would be much easier to just define their behaviour as the new
(soft-forked) consensus rule.

There is a precedent for things to be disabled rather than fixed when
security problems arise.

Imagine a crisis caused by a security related bug with the revival proofs.
Disabling them is much lower risk than trying to find/fix the bug and then
deploy the fix.  The longer it takes, the longer the security problem
remains.


>
> What extra information is needed?
>
> (1) If your UTXO was generated in block Y, you first need to know the TXO
> state (spent / unspent) of all outputs in block Y at block (Y + 420,000).
> Only UTXOs at that time are relevant.
>
> (2) You also need to know if there was any spending of any block Y UTXOs
> after block (Y + 420,000).
>

Is this how it works?

Source transaction is included in block Y.

If the output is spent before Y + 420,000, then no further action is taken.

The miner for block Y + 420,000 will include a commitment to
merkle_hash(Block Y's unspent outputs).

It is possible for someone to prove that they didn't spend their
transaction before Y + 420,000.

I think the miners have to remember the "live" UTXO merkle root for every
block?

With the path to the UTXO, the miner can recalculate the root for that
block.

If there were 20 dormant outputs being spent, then the miner would have to
commit to 20 updates.
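
To make the bookkeeping concrete, here is a rough Python sketch of the
commitment as I understand it, assuming a simple pair-wise sha256 tree and a
sentinel leaf for outputs that were already spent; none of the names come from
the actual proposal.

    import hashlib

    def dhash(data):
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def merkle_root(leaves):
        # plain pair-wise tree, duplicating the last entry on odd levels
        if not leaves:
            return dhash(b"")
        level = [dhash(leaf) for leaf in leaves]
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])
            level = [dhash(level[i] + level[i + 1])
                     for i in range(0, len(level), 2)]
        return level[0]

    SPENT = b"SPENT"  # sentinel for outputs spent before Y + 420,000

    def block_y_commitment(outpoints, spent_flags):
        # the miner of block Y + 420,000 commits to this root
        leaves = [SPENT if spent else out
                  for out, spent in zip(outpoints, spent_flags)]
        return merkle_root(leaves)

    def spend_dormant(outpoints, spent_flags, index):
        # spending a dormant output later forces a commitment to the updated
        # root (one update per dormant output spent in the block)
        spent_flags[index] = True
        return block_y_commitment(outpoints, spent_flags)
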
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-08 Thread Tier Nolan via bitcoin-dev
On Tue, Dec 8, 2015 at 5:41 PM, Mark Friedenbach via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> A far better place than the generation transaction (which I assume means
> coinbase transaction?) is the last transaction in the block. That allows
> you to save, on average, half of the hashes in the Merkle tree.
>

This trick can be improved by only using certain tx counts.  If the number
of transactions is limited to a power of 2 (other than the extra
transactions), then you get a path of length zero.

The number of non-zero bits in the tx count determines how many digests are
required.

https://github.com/TierNolan/bips/blob/aux_header/bip-aux-header.mediawiki

This gets the benefit of a soft-fork, while also keeping the proof lengths
small.  The linked bip has a 105 byte overhead for the path.

The cost is that only certain transaction counts are allowed.  In the worst
case, 12.5% of transactions would have to be left in the memory pool.  This
means around 7% of transactions would be delayed until the next block.

Blank transactions (or just transactions with low latency requirements)
could be used to increase the count so that it is raised to one of the
valid numbers.
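
A small Python sketch of the counting; for illustration I assume a "valid"
count is one with at most three set bits, which is not a number taken from
the linked BIP.

    def digests_required(tx_count):
        # the aux header path needs roughly one digest per set bit in the count
        return bin(tx_count).count("1")

    def round_down_to_valid(tx_count, max_bits=3):
        # clearing low-order set bits corresponds to leaving some transactions
        # in the memory pool until the count has at most max_bits set bits
        n = tx_count
        while bin(n).count("1") > max_bits:
            n &= n - 1  # clear the lowest set bit
        return n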

Managing the UTXO set to ensure that there is at least one output that pays
to OP_TRUE is also a hassle.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Dealing with OP_IF and OP_NOTIF malleability

2015-11-06 Thread Tier Nolan via bitcoin-dev
One and zero should be defined as arrays of length one.  Otherwise, it is
still possible to mutate the transaction by changing the length of the
array.

They should also be minimally encoded but that is covered by previous rules.

On Fri, Nov 6, 2015 at 8:13 AM, jl2012 via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> I have a new BIP draft for fixing OP_IF and OP_NOTIF malleability. Please
> comment:
> https://github.com/jl2012/bips/blob/master/opifmalleability.mediawiki
>
> Copied below:
>
> BIP: x
>   Title: Dealing with OP_IF and OP_NOTIF malleability
>   Author: jl2012 
>   Status: Draft
>   Type: Standards Track
>   Created: 2015-11-06
>
> Abstract
>
> As a supplement to BIP62, this document specifies proposed changes to the
> Bitcoin transaction validity rules in order to make malleability of
> transactions with OP_IF and OP_NOTIF impossible.
>
> Motivation
>
> OP_IF and OP_NOTIF are flow control codes in the Bitcoin script system.
> The programme flow is decided by whether the top stack value is 0 or not.
> However, this behavior opens a source of malleability as a third party may
> alter a non-zero flow control value to any other non-zero value without
> invalidating the transaction.
>
> As of November 2015, OP_IF and OP_NOTIF are not commonly used in the
> blockchain. However, as more sophisticated functions such as
> OP_CHECKLOCKTIMEVERIFY are being introduced, OP_IF and OP_NOTIF will become
> more popular and the related malleability should be fixed. This proposal
> serves as a supplement to BIP62 and should be implemented with other
> malleability fixes together.
>
> Specification
>
> If the transaction version is 3 or above, the flow control value for OP_IF
> and OP_NOTIF must be either 0 or 1, or the transaction fails.
>
> This is to be implemented with BIP62.
>
> Compatibility
>
> This is a softfork. To ensure OP_IF and OP_NOTIF transactions created
> before the introduction of this BIP will still be accepted by the network,
> the new rules only apply to transactions of version 3 or above.
>
> For people who want to preserve the original behaviour of OP_IF and
> OP_NOTIF, an OP_0NOTEQUAL could be  used before the flow control code to
> transform any non-zero value to 1.
>
> Reference
>
> BIP62: https://github.com/bitcoin/bips/blob/master/bip-0062.mediawiki
>
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Dealing with OP_IF and OP_NOTIF malleability

2015-11-06 Thread Tier Nolan via bitcoin-dev
I meant not to use the OP_PUSH opcodes to do the push.

Does OP_0 give a zero length byte array?

Would this script return true?

OP_0
OP_PUSHDATA1 (length = 1, data = 0)
OP_EQUAL

The easiest definition is that OP_0 and OP_1 must be used to push the data
and not any other push opcodes.
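
Expressed as a check on the stack element (using the empty-array / single 0x01
convention suggested in the quoted reply below), something like this Python
sketch:

    def valid_if_argument(top):
        # false must be the empty array, true must be the single byte 0x01;
        # any other encoding would make the transaction invalid
        return top == b"" or top == b"\x01"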


On Fri, Nov 6, 2015 at 9:32 AM, Oleg Andreev  wrote:

>
> > One and zero should be defined as arrays of length one. Otherwise, it is
> still possible to mutate the transaction by changing the length of the
> array.
> >
> > They should also be minimally encoded but that is covered by previous
> rules.
>
> These two lines contradict each other. Minimally-encoded "zero" is an
> array of length zero, not one. I'd suggest defining this explicitly here as
> "IF/NOTIF argument must be either zero-length array or a single byte 0x01".
>
>
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Compatibility requirements for hard or soft forks

2015-11-01 Thread Tier Nolan via bitcoin-dev
On Mon, Nov 2, 2015 at 12:23 AM, Justus Ranvier via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Are there actually any OP_CAT scripts currently in the utxo set?
>

A locked transaction could pay to an OP_CAT script with the private key
being lost.

Even if it is only in theory, it is still worth trying to prevent rule
changes which permanently prevent outputs being spendable.


> It's a lot easier to justify the position: "nobody has the right to
> change the meaning of someone else's outputs", than it is to justify,
> "some small group of people gets to decide what's standard and what
> isn't, and if you choose to use the network in a valid but nonstandard
> way, that group of people might choose to deny you access to your money
> in the future"
>

If at least one year's notice were given, then people aren't going to lose
their money unexpectedly.

Locked transactions could have a different expectation from non-locked
ones.


> In other words, how close to the shores of "administrators of a virtual
> currency" do Bitcoin developers want to sail?
>

Miners can collectively vote to disable specific UTXOs and change the
acceptance rules.


>
>
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Compatibility requirements for hard or soft forks

2015-11-01 Thread Tier Nolan via bitcoin-dev
On Sun, Nov 1, 2015 at 5:28 PM, jl2012 via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> I think it is very important to make it clear that non-standard txs and
> non-standard scripts may become invalid in the future
>

There can be unavoidable situations which cause locked coins to become
unspendable.

In an ideal world, soft forks that make UTXOs unspendable should increase
the tx version number.  BIP-13 should have done that.  That would make the
change opt-in.

The disabled opcodes like OP_CAT were a DOS/network security change.

Invalidating locked coins is another reason that they shouldn't have been
disabled permanently.

It would have been better to disable them for six months, so at least
people can get their coins back after that.  Inherently, protecting the
network required some limitations being added so that nodes couldn't be
crashed.

For guidelines

* Transaction version numbers will be increased, if possible
* Transactions with unknown/large version numbers are unsafe to use with
locktime
* Reasonable notice is given that the change is being contemplated
* Non-opt-in changes will only be to protect the integrity of the network

Locked transaction that can be validated without excessive load on the
network should be safe to use, even if non-standard.

An OP_CAT script that requires TBs of RAM to validate crosses the threshold
of reasonableness.



>
> Gavin Andresen via bitcoin-dev wrote on 2015-10-28 10:06:
>
>> I'm hoping this fits under the moderation rule of "short-term changes
>> to the Bitcoin protcol" (I'm not exactly clear on what is meant by
>> "short-term"; it would be lovely if the moderators would start a
>> thread on bitcoin-discuss to clarify that):
>>
>> Should it be a requirement that ANY one-megabyte transaction that is
>> valid
>> under the existing rules also be valid under new rules?
>>
>> Pro:  There could be expensive-to-validate transactions created and
>> given a
>> lockTime in the future stored somewhere safe. Their owners may have no
>> other way of spending the funds (they might have thrown away the
>> private
>> keys), and changing validation rules to be more strict so that those
>> transactions are invalid would be an unacceptable confiscation of
>> funds.
>>
>> Con: It is extremely unlikely there are any such large, timelocked
>> transactions, because the Core code has had a clear policy for years
>> that
>> 100,000-byte transactions are standard and are relayed and
>> mined, and
>> larger transactions are not. The requirement should be relaxed so that
>> only
>> valid 100,000-byte transaction under old consensus rules must be valid
>> under new consensus rules (larger transactions may or may not be
>> valid).
>>
>> I had to wrestle with that question when I implemented BIP101/Bitcoin
>> XT
>> when deciding on a limit for signature hashing (and decided the right
>> answer was to support any "non-attack"1MB transaction; see
>> https://bitcoincore.org/~gavin/ValidationSanity.pdf [1] for more
>> details).
>>
>> --
>>
>> --
>> Gavin Andresen
>>
>>
>> Links:
>> --
>> [1] https://bitcoincore.org/~gavin/ValidationSanity.pdf
>>
>> ___
>> bitcoin-dev mailing list
>> bitcoin-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>
>
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [BIP] Normalized transaction IDs

2015-10-19 Thread Tier Nolan via bitcoin-dev
On Mon, Oct 19, 2015 at 3:01 PM, Christian Decker via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> As with the previous version, which was using a hard-fork, the normalized
> transaction ID is computed only considering the non-malleable parts of a
> transaction, i.e., stripping the signatures before computing the hash of
> the transaction.
> 


Is this proposal recursive?


*Coinbase transaction *

* n-txid = txid


*Non-coinbase transactions*
* replace sigScripts with empty strings
* replace txids in TxIns with n-txid for parents

The 2nd step is recursive starting from the coinbases.

In effect, the rule is that txids are what they would have been if n-txids
had been used right from the start.
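
A minimal Python sketch of that recursion; the transaction structure (fields,
copy(), serialize(), lookup by txid) is made up for illustration and is not
taken from the BIP.

    import hashlib

    def dhash(data):
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def normalized_txid(tx, parent_by_txid, cache=None):
        if cache is None:
            cache = {}
        txid = dhash(tx.serialize())
        if txid in cache:
            return cache[txid]
        if tx.is_coinbase:
            ntxid = txid                       # coinbase: n-txid equals txid
        else:
            stripped = tx.copy()
            for txin in stripped.inputs:
                txin.script_sig = b""          # strip the signatures
                parent = parent_by_txid[txin.prev_txid]
                txin.prev_txid = normalized_txid(parent, parent_by_txid, cache)
            ntxid = dhash(stripped.serialize())
        cache[txid] = ntxid
        return ntxid
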
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Liquid

2015-10-13 Thread Tier Nolan via bitcoin-dev
It is interesting that someone is trying the sidechain approach.

I guess having trusted third parties to manage the chain was not a short
term thing?  It looks like there is no POW for the Liquid sidechain.

This is an area where bitcoin could benefit by adding a way to transfer
money to/from a sidechain without requiring third parties.

On Tue, Oct 13, 2015 at 3:27 PM, Benjamin via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> p.s. the links
>
> [1] https://blockstream.com/2015/10/12/introducing-liquid/
> [2] www.coindesk.com/blockstream-commercial-sidechain-bitcoin-exchanges/
>
> On Tue, Oct 13, 2015 at 4:25 PM, Benjamin 
> wrote:
>
>> Hello all,
>>
>> I was very surprised to learn that Blockstream will implement Sidechains
>> for exchanges [1], [2] and has been working on this privately. Can somebody
>> explain this “announcement”? Just a few comments on this “proposal”.
>>
>> “This new construction establishes a security profile inherently superior
>> to existing methods of rapid transfer and settlement, and is directly
>> applicable to other problems within existing financial institutions.”
>>
>> First of all, what does Bitcoin have to do with existing financial
>> institutions? Secondly, what in do you mean by “rapid transfer” and
>> "settlement"? Bitcoin is anonymous, digital cash. There is no such thing as
>> settlement, there is only the transfer of digital cash and that's it
>> (settlement is a bad word for this kind of transfer of property). If you
>> make up new terms define them accurately and don't play the
>> crypto-buzzword-bingo game.
>>
>> “This, in addition to increasing the security of funds normally subject
>> to explicit counterparty risk, fosters conditions that increase market
>> liquidity and reduce capital requirements for on-blockchain business
>> models.”
>>
>> Again – what does Bitcoin have to do with “market liquidity” and “capital
>> requirements”?
>>
>> “Blockstream's innovative solutions are definitely a game changer for the
>> Bitcoin industry.”
>>
>> Does Blockstream have commercial products now?
>>
>> "These initial launch partners include Bitfinex, BTCC, Kraken, Unocoin,
>> and Xapo, and discussions are underway with another dozen major
>> institutional traders and licensed exchanges. "
>>
>> ??? so many questions and no answers.
>>
>> Regards,
>> Benjamin
>>
>
>
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Let's deploy BIP65 CHECKLOCKTIMEVERIFY!

2015-09-28 Thread Tier Nolan via bitcoin-dev
On Mon, Sep 28, 2015 at 11:48 AM, Mike Hearn via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> 1) Drop the "everyone must agree to make changes" idea that people here
> like to peddle, and do it loudly, so everyone in the community is correctly
> informed
>

There never was a rule that soft-forks require total consensus.  It is
desirable but not mandatory.

A majority of miners can inherently implement a soft fork against the
wishes of the rest of the users.

Merchant/exchange/user checkpointing is the defense and therefore is a
perfectly valid response to miners taking such an action.  If a soft fork
is opposed by a large section of the users, then threatening (and
implementing) a checkpoint is the correct response.

No group can force through a hard fork, it inherently requires buy-in from
a large portion of the userbase.  That is where the "total consensus"
requirement comes from.  Naturally, absolute total consensus isn't actually
required but you do need very large consensus and also consensus across the
various sub-groups.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Weak block thoughts...

2015-09-27 Thread Tier Nolan via bitcoin-dev
On Sun, Sep 27, 2015 at 2:39 AM, Gregory Maxwell via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Unless the weak block transaction list can be a superset of the block
> transaction list size proportional propagation costs are not totally
> eliminated.
>

The POW threshold could be dynamic.  The first weak block that builds on a
new block could be forwarded with an easier POW requirement.

This reduces the window size until at least one weak block is propagated.

The change in threshold could be time based (for the first 30 seconds or
so).  This would cause a surge of traffic when a new block once a new block
has propagated, so perhaps not so good an idea.


> As even if the weak block criteria is MUCH lower than the block
> criteria (which would become problematic in its own right at some
> point) the network will sometimes find blocks when there hasn't been
> any weak block priming at all (e.g. all prior priming has made it into
> blocks already).
>

If there is a transaction backlog, then miners could forward merkle
branches with transactions in the memory pool with a commitment in the
coinbase.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Torrent-style new-block propagation on Merkle trees

2015-09-24 Thread Tier Nolan via bitcoin-dev
On Thu, Sep 24, 2015 at 12:12 AM, Jonathan Toomim (Toomim Bros) via
bitcoin-dev  wrote:

>
>
> As I understand it, the current block propagation algorithm is this:
>
> 1. A node mines a block.
> 2. It notifies its peers that it has a new block with an *inv*. Typical
> nodes have 8 peers.
> 3. The peers respond that they have not seen it, and request the block
> with *getdata* [hash].
> 4. The node sends out the block in parallel to all 8 peers simultaneously.
> If the node's upstream bandwidth is limiting, then all peers will receive
> most of the block before any peer receives all of the block. The block is
> sent out as the small header followed by a list of transactions.
> 5. Once a peer completes the download, it verifies the block, then enters
> step 2.
>

Mining pools currently connect to the "fast relay network".  This is
optimised for fast block distribution.  It does no validation and is only
for low latency propagation.  The normal network is used as a fallback.

My understanding is that it works as follows:

Each miner runs a normal full node and a relay node on the same computer.

The full node tells the relay node whenever it receives a new transaction
via the inv message and the node requests the full transaction.

The relay node tells its relay peers that it knows about the transaction
(hash only) and its 4 byte key. This is not forwarded onwards, since the
relay peer only gets the hash of the transaction and doesn't do validation
anyway.  The key is just a 4 byte counter.

Each relay node keeps a mapping of txid to key for each of its peers.  There
is some garbage collection and entries are removed once the transaction is
included in a block (there might be a confirm threshold).

When a block is found, the local node sends it to the relay node.  The
relay node then forwards it to all of its peers in a compact form.

The block is sent as a list of keys for that peer and full transactions are
only sent for unknown transactions.

When a relay node receives a block, it just verifies the POW, checks that
it is new and recent.  It does not do tx validation.  It forwards the block
to its local full node, which does the validation.  Since the relay node is
on localhost, it never gets kicked due to sending invalid blocks.  This
prevents a DOS attack where you could send invalid blocks to the relay node
and cause the local full node to kick it.

If all the transactions are already known, then it can forward a block for
only 4 bytes per transaction.  I think it has an optimisation, so that is
compressed to 1 byte per tx.
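
A rough Python sketch of the per-peer key table and the compact block encoding
described above; the class and message shapes are mine and are not the actual
relay network protocol.

    class RelayPeer:
        def __init__(self):
            self.next_key = 0
            self.key_by_txid = {}          # txid -> 4-byte counter value

        def announce_tx(self, txid):
            # remember the key this peer will use to refer to the transaction
            key = self.next_key
            self.next_key = (self.next_key + 1) & 0xFFFFFFFF
            self.key_by_txid[txid] = key
            return key

        def encode_block(self, block_txids, tx_by_id):
            # a 4-byte key for transactions the peer already knows,
            # the full transaction otherwise
            encoded = []
            for txid in block_txids:
                if txid in self.key_by_txid:
                    encoded.append(("key", self.key_by_txid[txid]))
                else:
                    encoded.append(("tx", tx_by_id[txid]))
            return encoded
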
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [BIP Proposal] New "sendheaders" p2p message

2015-09-24 Thread Tier Nolan via bitcoin-dev
On Thu, Sep 24, 2015 at 7:02 PM, Suhas Daftuar via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi,
>
> I'm proposing the addition of a new, optional p2p message to help improve
> the way blocks are announced on the network.  The draft BIP is available
> here and pasted below:
> https://gist.github.com/sdaftuar/465bf008f0a4768c0def
>
> The goal of this p2p message is to facilitate nodes being able to
> optionally announce blocks with headers messages rather than with inv's,
> which is particularly beneficial since the introduction of headers-first
> download in Bitcoin Core 0.10.  In particular, this allows for more
> efficient propagation of reorgs as it would eliminate a round trip in
> network communication.
>

Is there actually a requirement for the new message?  New nodes could just
unilaterally switch to sending headers and current nodes would be
compatible.

It looks like the only DOS misbehaving penalty is if the header is invalid
or if the headers don't form a chain.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Weak block thoughts...

2015-09-23 Thread Tier Nolan via bitcoin-dev
On Wed, Sep 23, 2015 at 4:43 PM, Gavin Andresen via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Imagine miners always pre-announce the blocks they're working on to their
> peers, and peers validate those 'weak blocks' as quickly as they are able.
>
> Because weak blocks are pre-validated, when a full-difficulty block based
> on a previously announced weak block is found, block propagation should be
> insanely fast-- basically, as fast as a single packet can be relayed across
> the network the whole network could be mining on the new block.
>
> I don't see any barrier to making accepting the full-difficulty block and
> CreateNewBlock() insanely fast, and if those operations take just a
> microsecond or three, miners will have an incentive to create blocks with
> fee-paying transactions that weren't in the last block, rather than mining
> empty blocks.
>

You can create these blocks in advance too.

- receive weak block
- validate
- create child block

It becomes a pure array lookup to get the new header that builds on top of
that block.  The child blocks would need to be updated as the memory pool
changes though.


> A miner could try to avoid validation work by just taking a weak block
> announced by somebody else, replacing the coinbase and re-computing the
> merkle root, and then mining. They will be at a slight disadvantage to
> fully validating miners, though, because they WOULD have to mine empty
> blocks between the time a full block is found and a fully-validating miner
> announced their next weak block.
>

This also speeds up propagation for the miner.  The first weak block that
is broadcast could end up being copied by many other miners.

A miner who is copying a block could send coinbase + original header if he
hits a block.  Weak blocks that are just coinbase + header could have lower
POW requirements, since they use up much less bandwidth.

Miners would mostly copy other miners once they had verified their blocks.
The IBLT system works well here.  A miner could pick a weak block that is
close to what it actually wants to broadcast.


> Weak block announcements are great for the network; they give transaction
> creators a pretty good idea of whether or not their transactions are likely
> to be confirmed in the next block.
>

Aggregator nodes could offer a service to show/prove how many weak blocks
the transaction has been accepted in.


> And if we're smart about implementing them, they shouldn't increase
> bandwidth or CPU usage significantly, because all the weak blocks at a
> given point in time are likely to contain the same transactions.
>

This assumes other compression systems for handling block propagation.

>
>
> --
> --
> Gavin Andresen
>
>
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [BIP Proposal] Version bits with timeout and delay.

2015-09-17 Thread Tier Nolan via bitcoin-dev
On Wed, Sep 16, 2015 at 11:52 PM, Eric Lombrozo  wrote:

> The exact numbers (95% vs. 75% etc) don't need to be completely specified
> to start working on an implementation. What really matters for now is
> defining the states and trigger mechanisms. I'd rather we not argue over
> the optimal values for supermajority requirement at this point.
>

The discussion was about what each state means, not the thresholds
exactly.  I agree that can be set later.

On Wed, Sep 16, 2015 at 10:03 PM, Jorge Timón  wrote:

> I understand your proposal, but I don't see what it accomplishes compared
to applying the new rule from the start (in your own blocks)

> and wait for 95% for consensus activation (which is my preference and
it's much simpler to implement).
> What are the disadvantages of my approach? What are the advantages of
yours?
I agree that miners should apply the rule from the start in their own
blocks.


*defined*
Miners set bit
Miners apply rule to their own blocks
If 75% of blocks of last 2016 have bit set, goto tentative


*tentative*
Miners set bit
Miners apply rule to their own blocks
Miners enforce rule in blocks with bit set (reject invalid blocks)
If 95% of blocks of last 2016 have bit set, goto locked-in


*locked-in*

Point of no return
Miners set bit
Miners apply rule to their own blocks
Miners enforce rule in blocks with bit set (reject invalid blocks)
After 2016 blocks goto activated


*activated*

Miners don't set bit
Reject any block that has the bit set for 10080 blocks (5 diff periods)
Reject blocks that don't follow new rule

The advantage of enforcing the rule when 75% is reached (but only for
blocks with the bit set) is that miners get early notification that they
have implemented the rule incorrectly.  They might produce blocks that
they think are fine, but which aren't.
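
The same transitions as a small Python sketch, evaluated once per 2016-block
window (the exact thresholds can be settled later):

    DEFINED, TENTATIVE, LOCKED_IN, ACTIVATED = range(4)

    def next_state(state, blocks_with_bit, window=2016):
        if state == DEFINED and blocks_with_bit >= window * 0.75:
            return TENTATIVE
        if state == TENTATIVE and blocks_with_bit >= window * 0.95:
            return LOCKED_IN
        if state == LOCKED_IN:
            return ACTIVATED           # one further 2016-block period later
        return state

    def rule_enforced_for(state, block_has_bit):
        # from 75% onwards the rule is enforced only on blocks that set the
        # bit; once activated it is enforced on every block
        if state in (TENTATIVE, LOCKED_IN):
            return block_has_bit
        return state == ACTIVATED
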
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [BIP Proposal] Version bits with timeout and delay.

2015-09-16 Thread Tier Nolan via bitcoin-dev
On Sun, Sep 13, 2015 at 7:56 PM, Rusty Russell via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> '''States'''
> With every softfork proposal we associate a state BState, which begins
> at ''defined'', and can be ''locked-in'', ''activated'',
> or ''failed''.  Transitions are considered after each
> retarget period.
>

I think the 75% rule should be maintained.  It confirms that miners who are
setting the bit are actually creating blocks that meet the new rule (though
it doesn't check if they are enforcing it).

What is the reason for aligning the update to the difficulty window?


*defined*
Miners set bit
If 75% of blocks of last 2016 have bit set, goto tentative


*tentative*
Miners set bit
Reject blocks that have bit set that don't follow new rule
If 95% of blocks of last 2016 have bit set, goto locked-in


*locked-in*

Point of no return
Miners still set bit
Reject blocks that have bit set that don't follow new rule
After 2016 blocks goto activated


*activated*

Miners don't set bit for at least 10080 blocks
Reject blocks that don't follow new rule

'''Failure: Timeout'''
> A soft fork proposal should include a ''timeout''.
>

I think counting in blocks makes it easier to be exact here.

If two bits were allocated per proposal, then miners could vote against
forks to recover the bits.  If 25% of the miners vote against, then that
kills it.

In the rationale, it would be useful to discuss effects on SPV clients and
buggy miners.

SPV clients should be recommended to actually monitor the version field.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [BIP Proposal] Version bits with timeout and delay.

2015-09-16 Thread Tier Nolan via bitcoin-dev
On Wed, Sep 16, 2015 at 9:19 PM, Rusty Russell 
wrote:

> I couldn't see a use for it, since partial enforcement of a soft fork is
> pretty useless.
>

It isn't useful for actually using the feature, but some miners might set
the bit but not actually create blocks that comply with the new rule.

This would cause their blocks to be orphaned until they fixed it.

OK, *that* variant makes perfect sense, and is no more complex, AFAICT.
>
> So, there's two weeks to detect bad implementations, then you everyone
> stops setting the bit, for later reuse by another BIP.
>

It could be more than two weeks if the support stays between 80% and 90%
for a while.

75%+ checks that blocks with the bit set follow the rule.

95%+ enters lock-in and has the same rules as 75%+, but is irreversible at
that point.


> You need a timeout: an ancient (non-mining, thus undetectable) node
> should never fork itself off the network because someone reused a failed
> BIP bit.
>

I meant if the 2nd bit was part of the BIP.  One of the 2 bits is "FOR" and
the other is "AGAINST".  If against hits 25%, then it is deemed a failure.

The 2nd bit wouldn't be used normally.  This means that proposals can be
killed quickly if they are obviously going to fail.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [BIP Proposal] Version bits with timeout and delay.

2015-09-16 Thread Tier Nolan via bitcoin-dev
On Wed, Sep 16, 2015 at 9:38 PM, Jorge Timón  wrote:

> No, 95% is safer and will produce less orphaned blocks.
>
The point of the 75% threshold is just as a test run.  Enforcement wouldn't happen
until 95%.

At 75%, if someone sets the bit, then they should be creating valid blocks
(under the rule).
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [BIP Proposal] Version bits with timeout and delay.

2015-09-16 Thread Tier Nolan via bitcoin-dev
On Wed, Sep 16, 2015 at 9:54 PM, Jorge Timón <jti...@jtimon.cc> wrote:

>
> On Sep 16, 2015 4:49 PM, "Tier Nolan via bitcoin-dev" <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
> > At 75%, if someone sets the bit, then they should be creating valid
> blocks (under the rule).
>
> You shouldn't rely on that, some may start applying the restrictions in
> their own blocks at 0% and others only at 90%. Until it becomes a consensus
> rule it is just part of the standard policy (and we shouldn't rely on nodes
> following the standard policy).
>

It would be a consensus rule.  If >75% of the blocks in the last 2016
window have the bit set, then reject all blocks that have the bit set and
fail to meet the rule.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP 100 specification

2015-09-03 Thread Tier Nolan via bitcoin-dev
On Thu, Sep 3, 2015 at 8:57 AM, jl2012 via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

>
> 1. hardLimit floats within the range 1-32M, inclusive.
>
Does the 32MB limit actually still exist anywhere in the code?  In effect,
it is re-instating a legacy limitation.

The message size limit is to minimize the storage required per peer.  If a
32MB block size is required, then each network input buffer must be at
least 32MB. This makes it harder for a node to support a large number of
peers.

There is no reason why a single message is used for each block.  Using the
merkleblock message (or a different dedicated message), it would be
possible to send messages which only contain part of a block and have a
limited maximum size.

This would allow receiving parts of a block from multiple sources.

This is a separate issue but should be considered if moving past 32MB block
sizes (or maybe as a later protocol change).


>
>1. Changing hardLimit is accomplished by encoding a proposed value
>within a block's coinbase scriptSig.
>   1. Votes refer to a byte value, encoded within the pattern
>   "/BV\d+/" Example: /BV8000000/ votes for 8,000,000 byte hardLimit. If
>   there is more than one match with the pattern, the first match is
> counted.
>
> Is there a need for byte resolution?  Using MB resolution would use up
far fewer bytes in the coinbase.

Even with the +/- 20% rule, miners could vote for the nearest MB.  Once the
block size exceeds 5MB, then there is enough resolution anyway.


>1. Absent/invalid votes and votes below minimum cap (1M) are counted
>   as 1M votes. Votes above the maximum cap (32M) are counted as 32M votes.
>
>
I think abstains should count for the status quo.  Votes which are out of
range should be clamped.

Having said that, if core supports the change, then most miners will
probably vote one way or another.

> New hardLimit is the median of the following:
> min(current hardLimit * 1.2, 20-percentile)
> max(current hardLimit / 1.2, 80-percentile)
> current hardLimit

I think this is unclear, though mathematically exact.

Sort the votes for the last 12,000 blocks from lowest to highest.

Blocks which don't have a vote are considered a vote for the status quo.

Votes are limited to +/- 20% of the current value.  Votes that are out of
range are considered to vote for the nearest in range value.

The raise value is defined as the vote of the 2400th lowest block (the 20th
percentile).
The lower value is defined as the vote of the 9600th lowest block (the 80th
percentile).

If the raise value is higher than the status quo, then the new limit is set
to the raise value.
If the lower value is lower than the status quo, then the new limit is set
to the lower value.
Otherwise, the size limit is unchanged.
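
A Python sketch of that calculation, assuming the 12,000-block window and the
1 MB / 32 MB caps from the draft:

    def new_hard_limit(current, votes, lo=1_000_000, hi=32_000_000):
        # votes: one value per block in the window, with missing votes
        # already mapped to `current` (the status quo)
        floor_, cap = current / 1.2, current * 1.2
        clamped = sorted(min(max(v, lo, floor_), hi, cap) for v in votes)
        raise_value = clamped[len(clamped) // 5 - 1]      # 2,400th lowest
        lower_value = clamped[len(clamped) * 4 // 5 - 1]  # 9,600th lowest
        if raise_value > current:
            return raise_value
        if lower_value < current:
            return lower_value
        return current
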
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [BIP-draft] CHECKSEQUENCEVERIFY - An opcode for relative locktime

2015-08-25 Thread Tier Nolan via bitcoin-dev
On Tue, Aug 25, 2015 at 11:08 PM, Mark Friedenbach via bitcoin-dev 
bitcoin-dev@lists.linuxfoundation.org wrote:

 Assuming a maximum of 1-year relative lock-times. But what is an
 appropriate maximum to choose? The use cases I have considered have only
 had lock times on the order of a few days to a month or so. However I would
 feel uncomfortable going less than a year for a hard maximum, and am having
 trouble thinking of any use case that would require more than a year of
 lock-time. Can anyone else think of a use case that requires 1yr relative
 lock-time?


The main advantage of relative locktime over absolute locktime is in
situations when it is not possible to determine when the clock should
start.   This inherently means lower delays.

As a workaround, you could chain transactions to extend the relative
locktime.

Transaction B has to be 360 days after transaction A and then transaction C
has to be 360 days after transaction B and C must be an input into the
final transaction.

The chain could be built up with multi-sig, like the refund transaction
system, so no one person can create an alternative chain.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Economic majority vote by splitting coins

2015-08-21 Thread Tier Nolan via bitcoin-dev
On Fri, Aug 21, 2015 at 4:22 AM, odinn odinn.cyberguerri...@riseup.net
wrote:

 That's interesting.  But in all honesty I don't see most users being
 able to pull off what you are describing.


The idea assumes that it is a BIP + soft fork.  This means that most
wallets would support/recognise the encumbered coins.

Even if only some wallets support it, you can still move your coins
around.  Only the people who are trading between XT and Core would need to
have wallets that support it.

If you consolidate x BTC-Core and x BTC-XT into a single output, then you
can convert it back to a normal output.


 If they are convinced that it is needed its

 use will grow but they won't realize how bad they will be misled until
 later, at which point it will be...

 .. Too Late


That is the point, this gives a sneak preview.  At minimum, it shows which
choice will give the highest BTC value.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Bitcoin is an experiment. Why don't we have an experimental hardfork?

2015-08-19 Thread Tier Nolan via bitcoin-dev
On Wed, Aug 19, 2015 at 4:22 PM, jl2012 via bitcoin-dev 
bitcoin-dev@lists.linuxfoundation.org wrote:

 Will the adoption of BitcoinXT lead by miners? No, it won't. Actually,
 Chinese miners who control 60% of the network has already said that they
 would not adopt XT. So they must not be the leader in this revolution.
 Again, miners need to make sure they could sell their bitcoin in a good
 price, and that's not possible without support of exchanges and investors.


So, the exchanges get together to encourage the miners to start running
bitcoin-XT.  What would they do?

One scheme would be to create a taint system.  All non-XT coinbase outputs
are marked as tainted.  All outputs are tainted if any of the inputs into a
transaction are tainted.  Tainted coins can only be un-tainted by sending
0.5% of their value to the public address of one of the participating
exchanges (or to OP_RETURN).  They could slowly ratchet up the surcharge.

Exchanges in the cartel agree not to exchange tainted coins.  Even if some
still do, the tainted coins are still inherently less valuable, since fewer
exchanges accept them.

Schemes like that are the main way for non-miners to flex their muscles,
even if they seem unsavory.

Taint tracking would allow merchants to participate.  They could give less
credit for tainted bitcoins, even if the exchanges are trying to remain
neutral.  If that happens, the exchanges could run 2 prices, BTC and
BTC-tainted.

On the other hand, implementing taint machinery is a bad thing for
fungibility.

It can also be accomplished with checkpointing.  They need to create 1 big
block and then agree to checkpoint it.

A less strict rule could be that blocks after the first big block
count as double POW.  That means that the big block chain only needs 34% of
the hashing power to win.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] CLTV/CSV/etc. deployment considerations due to XT/Not-BitcoinXT miners

2015-08-19 Thread Tier Nolan via bitcoin-dev
On Wed, Aug 19, 2015 at 6:25 PM, Btc Drak btcd...@gmail.com wrote:

 In our case for Bitcoin Core, option 2 we use nVersion=8, apply a
 bitmask of 0xdff8 thus:

 if ((block.nVersion & ~0x2007) >= 4 &&
 CBlockIndex::IsSuperMajority(...)) { //...}

 With nVersion=8, but using comparison >= 4 allows us to recover the
 bit later, assuming we want it (otherwise we use version >= 8).


That is the 75% activation rule portion?  The 95% rule has to apply to
all blocks.

The supermajority applies to unmasked blocks?

I think you want it so that a sequence of blocks with version 8 can be
followed by version 4 blocks?

If 950 of the last 1000 blocks have bit 0x08 set, then reject any block
with a version less than 4.
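
In Python, a sketch of that rule would be:

    def reject_low_version(block_version, last_1000_versions):
        # once 950 of the last 1000 blocks set bit 0x08, reject any block
        # whose version is below 4
        signalling = sum(1 for v in last_1000_versions if v & 0x08)
        return signalling >= 950 and block_version < 4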

This means transitioning to the version bits BIP just requires dropping the
version back to 4 and adding a rule enforcing the BIPs for version 4 and
higher blocks.

This would be part of the version bits BIP enforcement.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Dynamically Controlled Bitcoin Block Size Max Cap

2015-08-17 Thread Tier Nolan via bitcoin-dev
On Mon, Aug 17, 2015 at 12:57 PM, Rodney Morris via bitcoin-dev 
bitcoin-dev@lists.linuxfoundation.org wrote:

 I haven't run any statistics or simulations, but I'm concerned that the
 interplay between the random distribution of transaction arrival and the
 random distribution of block times may lead to false signals.


You could just take the average of all the block sizes for the last 2016
window.

If average of last 2016 > 50% of the limit, then increase by 6.25%
Otherwise, decrease by 6.25%

This means that the average would be around 50% of the limit.  This gives
margin to create larger blocks when blocks are happening slowly.
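
As a Python sketch (the 6.25% steps are the ones suggested above, applied once
per 2016-block window):

    def next_limit(current_limit, block_sizes_last_2016):
        # raise the cap by 6.25% if blocks averaged more than half full,
        # otherwise lower it by 6.25%
        average = sum(block_sizes_last_2016) / len(block_sizes_last_2016)
        if average > current_limit * 0.5:
            return current_limit * 1.0625
        return current_limit * 0.9375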

A majority of miners could force the limit upwards by creating spam but
full blocks.

It could be coupled with a hard limit that grows at whatever is seen as the
maximum reasonable.  This would be both a maximum and a minimum.

All of these schemes add state to the system.  If the schedule is
predictable, then you can determine the maximum block size purely
from the header and coinbase.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Draft BIP : fixed-schedule block size increase

2015-08-17 Thread Tier Nolan via bitcoin-dev
One of the comments made by the mining pools is that they won't run XT
because it is experimental.

Has there been any consideration to making available a version of XT with
only the blocksize changes?

The least experimental version would be one that makes the absolute
minimum changes to core.

The MAX_BLOCK_SIZE parameter could be overwritten whenever the longest tip
changes.  This saves creating a new function.

Without the consensus measuring code, the patch would be even easier.
Satoshi's proposal was just a block height comparison (a year in advance).

The state storing code is also another complication.  If the standard
counting upgrade system was used, then no state would need to be stored
in the database.

On Wed, Jul 1, 2015 at 11:49 PM, odinn odinn.cyberguerri...@riseup.net
wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 (My replies below)

 On 06/26/2015 06:47 AM, Tier Nolan wrote:
  On Thu, Jun 25, 2015 at 3:07 PM, Adam Back a...@cypherspace.org
  mailto:a...@cypherspace.org wrote:
 
  The hard-cap serves the purpose of a safety limit in case our
  understanding about the economics, incentives or game-theory is
  wrong worst case.
 
 
  True.

 Yep.

 
  BIP 100 and 101 could be combined.  Would that increase consensus?

 Possibly ~ In my past message(s), I've suggested that Jeff's BIP 100
 is a better alternative to Gavin's proposal(s), but that I didn't
 think that this should be taken to mean that I am saying one thing is
 superior to Gavin's work, rather, I emphasized that Gavin work with
 Jeff and Adam.

 At least, at this stage the things are in a BIP process.

 If the BIP 100 and BIP 101 would be combined, what would that look
 like on paper?

 
  - Miner vote threshold reached - Wait notice period or until
  earliest start time - Block size default target set to 1 MB - Soft
  limit set to 1MB - Hard limit set to 8MB + double every 2 years -
  Miner vote to decide soft limit (lowest size ignoring bottom 20%
  but 1MB minimum)
 
  Block size updates could be aligned with the difficulty setting
  and based on the last 2016 blocks.
 
  Miners could leave the 1MB limit in place initially.  The vote is
  to get the option to increase the block size.
 
  Legacy clients would remain in the network until 80% of miners
  vote to raise the limit and a miner produces a >1MB block.
 
  If the growth rate over-estimates hardware improvements, the devs
  could add a limit into the core client.  If they give notice and
  enough users update, then miners would have to accept it.
 
  The block size becomes min(miner's vote, core devs).  Even if 4
  years notice is given, blocks would only be 4X optimal.
 
 
  ___ bitcoin-dev mailing
  list bitcoin-dev@lists.linuxfoundation.org
  https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
 

 - --
 http://abis.io ~
 a protocol concept to enable decentralization
 and expansion of a giving economy, and a new social good
 https://keybase.io/odinn
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1

 iQEcBAEBAgAGBQJVlG5oAAoJEGxwq/inSG8C0r4H/0eklB9GxgHdl4LK7UoLeYYb
 hlCiIJZ1+sRhTRIHrBtZO+nb2Uy3jLdqO9eOL4z9OXk3TCRBFwSdWrwsZXbzy3tC
 5TmYlHvLSpfjiUxpP9JcO5E2VwFvB80pKkjPuUhwFVngh0HHsTA1IinUt52ZW1QP
 wTdgKFHw3QL9zcfEXljVa3Ih9ssqrl5Eoab8vE2yr3p3QHR7caRLY1gFyKKIRxVH
 YQangx6D33JcxyAcDNhYqavyt02lHxscqyZo6I4XUvE/aZVmSVTlm2zg7xdR7aCZ
 0PlDwzpMD6Zk2QO/5qPPPos/5VETT0ompFK62go/hY2uB4cm+yZw3FFxR+Kknog=
 =rtTH
 -END PGP SIGNATURE-

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP draft: Hardfork bit

2015-07-23 Thread Tier Nolan via bitcoin-dev
On Thu, Jul 23, 2015 at 5:23 PM, jl2012 via bitcoin-dev 
bitcoin-dev@lists.linuxfoundation.org wrote:

 2) Full nodes and SPV nodes following original consensus rules may not be
 aware of the deployment of a hardfork. They may stick to an
 economic-minority fork and unknowingly accept devalued legacy tokens.


This change means that they are kicked off the main chain immediately when
the fork activates.

The change is itself a hard fork.  Clients have be updated to get the
benefits.

3) In the case which the original consensus rules are also valid under the
 new consensus rules, users following the new chain may unexpectedly reorg
 back to the original chain if it grows faster than the new one. People may
 find their confirmed transactions becoming unconfirmed and lose money.


I don't understand the situation here.  Is the assumption that a group of
miners suddenly switches (for example, because they realise that they didn't
intend to support the new rules)?


 Flag block is constructed in a way that nodes with the original consensus
 rules must reject. On the other hand, nodes with the new consensus rules
 must reject a block if it is not a flag block while it is supposed to be.
 To achieve these goals, the flag block must 1) have the hardfork bit
 setting to 1, 2) include a short predetermined unique description of the
 hardfork anywhere in its coinbase, and 3) follow any other rules required
 by the hardfork. If these conditions are not fully satisfied, upgraded
 nodes shall reject the block.


Ok, so set the bit and then include BIP-GIT-HASH of the canonical BIP on
github in the coinbase?

Since it is a hard fork, the version field could be completely
re-purposed.  Set the bit and add the BIP number as the lower bits in the
version field.  This lets SPV clients check if they know about the hard
fork.
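
As a Python sketch of what the SPV check could look like; the bit position,
the mask and the known-BIP set are assumptions for illustration only.

    HARDFORK_BIT = 1 << 31          # assumed position of the hardfork bit
    BIP_NUMBER_MASK = 0x00FFFFFF    # assumed range for the BIP number

    KNOWN_HARDFORKS = {123}         # hypothetical set of BIPs this client understands

    def check_version(version):
        # returns (is_hardfork_flagged, bip_number, understood)
        if not version & HARDFORK_BIT:
            return False, None, True
        bip = version & BIP_NUMBER_MASK
        return True, bip, bip in KNOWN_HARDFORKS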

The network protocol could be updated to add getdata support for asking
for a coinbase only merkleblock.  This would allow SPV clients to obtain
the coinbase.

Automatic warning system: When a flag block is found on the network, full
 nodes and SPV nodes should look into its coinbase. They should alert their
 users and/or stop accepting incoming transactions if it is an unknown
 hardfork. It should be noted that the warning system could become a DoS
 vector if the attacker is willing to give up the block reward. Therefore,
 the warning may be issued only if a few blocks are built on top of the flag
 block in a reasonable time frame. This will in turn increase the risk in
 case of a real planned hardfork so it is up to the wallet programmers to
 decide the optimal strategy. Human warning system (e.g. the emergency alert
 system in Bitcoin Core) could fill the gap.


If the rule was that hard forks only take effect 100 blocks after the flag
block, then this problem is eliminated.

Emergency hard forks may still have to take effect immediately though, so
it would have to be a convention rather than a rule.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev