Re: [Bitcoin-development] Hard fork via miner vote

2015-06-20 Thread Tier Nolan
I agree giving notice that the change is going to happen is critical for a
hard fork.  If miners vote in favor, they need to give people time to
upgrade (or to decide to reject the fork).

The BIP 100 proposal is that no change will happen until a timestamp is
reached.  It isn't clear exactly how it would work.

Testnet: Sep 1st 2015
Mainnet: Jan 11th 2016

It suggests 90% of 12000 blocks (~83 days).

This means that if 10800 of the last 12000 blocks are the updated version,
then the change is considered locked in.

I think having an earlier fail threshold would be a good idea too.  This
guarantees notice.

Assuming version 3 is the old rule and version 4 is the new rule:

If the median of the last 11 timestamps is after 1st Sep 2015 and fewer than
10800 of the last 12000 blocks are version 4+, then reject version 4 blocks.
If the median of the last 11 timestamps is after 1st Nov 2015 and at least
10800 of the last 12000 blocks are version 4+, then reject version 3 blocks
(lock-in).
If the median of the last 11 timestamps is after 1st Jan 2016 and at least
10800 of the last 12000 blocks are version 4+, then allow the new rule.

This means that if the 90% threshold is lost at any time between 1st Sep
and 1st Nov, then the fork is rejected.  Otherwise, after the 1st Nov, it
is locked in, but the new rules don't activate until 1st Jan.
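
To make the schedule concrete, here is a minimal sketch of the three rules
as a state function (the names, types and constants are illustrative
assumptions, not code from any client):

#include <cstdint>

enum ForkState { PENDING, REJECTED, LOCKED_IN, ACTIVE };

// Median-of-11 timestamps for the thresholds (UTC midnight).
const int64_t T_START  = 1441065600; // 1st Sep 2015
const int64_t T_LOCKIN = 1446336000; // 1st Nov 2015
const int64_t T_ALLOW  = 1451606400; // 1st Jan 2016

// newVersionCount = number of version 4+ blocks among the last 12000.
ForkState GetForkState(int64_t medianTime, int newVersionCount)
{
    bool supermajority = newVersionCount >= 10800; // 90% of 12000
    if (medianTime < T_START)
        return PENDING;
    if (!supermajority)
        return REJECTED;   // version 4 blocks are now rejected, so the count
                           // cannot recover: rejection is effectively sticky
    if (medianTime < T_LOCKIN)
        return PENDING;    // still voting
    if (medianTime < T_ALLOW)
        return LOCKED_IN;  // version 3 blocks now rejected
    return ACTIVE;         // new rule allowed
}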

For block size, miners could still soft fork back to 1MB after 1st Nov, if
there is a user/merchant revolt (maybe that would be version 5 blocks).


On Sat, Jun 20, 2015 at 6:13 PM, Pieter Wuille pieter.wui...@gmail.com
wrote:

 Hello all,

 I've seen ideas around hard fork proposals that involve a block version
 vote (a la BIP34, BIP66, or my more recent versionbits BIP draft). I
 believe this is a bad idea, independent of what the hard fork itself is.

 Ultimately, the purpose of a hard fork is asking the whole community to
 change their full nodes to new code. The purpose of the trigger mechanism
 is to establish when that has happened.

 Using a 95% threshold implies the fork can happen when at least 5% of
 miners have not upgraded, which implies some full nodes have not (as miners
 are nodes), and in addition, means the old chain can keep growing too,
 confusing old non-miner nodes as well.

 Ideally, the fork should be scheduled when one is certain nodes will have
 upgraded, and the risk for a fork will be gone. If everyone has upgraded,
 no vote is necessary, and if nodes have not, it remains risky to fork them
 off.

 I understand that, in order to keep humans in the loop, you want an
 observable trigger mechanism, and a hashrate vote is an easy way to do
 this. But at least, use a minimum timestamp you believe to be reasonable
 for upgrade, and a 100% threshold afterwards. Anything else guarantees that
 your forking change happens *knowingly* before the risk is gone.

 You may argue that miners would be asked - and have it in their best
 interest - not to actually make blocks that violate the changed rule before
 they are reasonably sure that everyone has upgraded. That is possible, but
 it does not gain you anything over just using a 100% threshold, as how
 would they be reasonably sure everyone has upgraded, while blocks created
 by non-upgraded miners are still being created?

 TL;DR: use a timestamp switchover for a hard fork, or add a block voting
 threshold as a means to keep humans in the loop, but if you do, use 100% as
 threshold.

 --
 Pieter




Re: [Bitcoin-development] F2Pool has enabled full replace-by-fee

2015-06-19 Thread Tier Nolan
On Fri, Jun 19, 2015 at 5:42 PM, Eric Lombrozo elombr...@gmail.com wrote:

 If we want a non-repudiation mechanism in the protocol, we should
 explicitly define one rather than relying on “prima facie” assumptions.
 Otherwise, I would recommend not relying on the existence of a signed
 transaction as proof of intent to pay…


Outputs could be marked as locked.  If you are performing a zero
confirmation spend, then the recipient could insist that you flag the
output for them as non-reducible.

This reduces privacy since it would be obvious which output was change.  If
both are locked, then the fee can't be increased.

This would be information that miners could ignore though.

Creating the right incentives is hard though.  Blocks could be discouraged
if they contain a double spend that has been known about for a while and
that reduces the payment to a locked output.


Re: [Bitcoin-development] New attack identified and potential solution described: Dropped-transaction spam attack against the block size limit

2015-06-09 Thread Tier Nolan
On Tue, Jun 9, 2015 at 2:36 PM, Gavin Andresen gavinandre...@gmail.com
wrote:

 How about this for mitigating this potential attack:

 1. Limit the memory pool to some reasonable number of blocks-worth of
 transactions (e.g. 11)
 2. If evicting transactions from the memory pool, prefer to evict
 transactions that are part of long chains of unconfirmed transactions.
 3. Allow blocks to grow in size in times of high transaction demand.


I think 2 should just be fee per kB.  If the pool is full and a transaction
arrives, it has to have a fee per kB that is higher than the lowest
transaction in the pool.

The effect is that the fee per kB threshold for getting a transaction into
the memory pool increases as the attack proceeds.  This means that the cost
to maintain the attack increases.

With replace-by-fee, the new transaction would have to pay a fee that
exceeds the lowest fee already in the pool by at least a fixed amount.  I
think the replace-by-fee code already does this.  This prevents
transactions whose fees increase by 1 satoshi at a time from being relayed.
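
A sketch of those two admission rules (the container choice and the
constants are assumptions for illustration, not existing mempool code):

#include <cstdint>
#include <map>

std::multimap<double, uint64_t> pool;   // fee-per-kB -> txid, sorted ascending
const size_t POOL_MAX = 100000;         // illustrative pool size
const double MIN_STEP_PER_KB = 0.00001; // fixed replace-by-fee increment

bool AcceptToPool(double feePerKB, uint64_t txid)
{
    if (pool.size() >= POOL_MAX) {
        // Full pool: must beat the cheapest transaction already in it.
        if (feePerKB <= pool.begin()->first)
            return false;
        pool.erase(pool.begin()); // evict the lowest fee-per-kB transaction
    }
    pool.insert(std::make_pair(feePerKB, txid));
    return true;
}

bool AcceptReplacement(double feePerKB)
{
    // Replacement must exceed the pool minimum by a fixed step, so fees
    // can't be walked up one satoshi at a time.
    return pool.empty() || feePerKB >= pool.begin()->first + MIN_STEP_PER_KB;
}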

For allowing large blocks when block space is in high demand, you could
limit the average block size.

If the average was set to 1MB, the rule could be that blocks must be 2MB or
lower and the total size of a block and the previous 99 must be 100MB or
lower.  This gives an average of 1MB per block, but allows bursts.
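
As a sketch, the burst rule is just a per-block cap plus a rolling window
check (constants per the example above; names are illustrative):

#include <cstdint>
#include <deque>
#include <numeric>

const uint64_t MAX_BLOCK_BYTES    = 2000000;   // each block 2MB or lower
const uint64_t WINDOW_TOTAL_BYTES = 100000000; // any 100 consecutive: 100MB

// last99 holds the sizes of the previous 99 blocks.
bool CheckBurstLimit(uint64_t newBlockSize, const std::deque<uint64_t>& last99)
{
    if (newBlockSize > MAX_BLOCK_BYTES)
        return false;
    uint64_t total = std::accumulate(last99.begin(), last99.end(), newBlockSize);
    return total <= WINDOW_TOTAL_BYTES;
}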


Re: [Bitcoin-development] [BIP draft] Consensus-enforced transaction replacement signalled via sequence numbers

2015-06-02 Thread Tier Nolan
I am glad to see the transaction version number increase.  The commit
doesn't update the default transaction version though.  The node would
still produce version 1 transactions.

Does the reference client already produce transactions with final sequence
numbers?  If so, then they will be valid version 2 transactions.  If it
sets the sequence to all zeros, then it won't trigger the new code either.
I think simply bumping the default version number to 2 would be safe.

For the timestamp locktime, median block time would be better than raw
block time.  Median time is the median timestamp of the previous 11
blocks.  This reduces the incentive to mess with the timestamp.  Median
time is earlier than block time, but since things are relative, it should
balance out.
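
For reference, a minimal sketch of the median-of-11 calculation (the real
client computes this over the previous 11 block headers):

#include <algorithm>
#include <cstdint>
#include <vector>

int64_t GetMedianTimePast(std::vector<int64_t> times) // last 11 timestamps
{
    std::sort(times.begin(), times.end());
    return times[times.size() / 2]; // the 6th of 11, i.e. the median
}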

Miners have around 2 hours worth of flexibility when setting the
timestamps, so it may not be that big a deal.



On Tue, Jun 2, 2015 at 5:34 AM, Stephen Morse stephencalebmo...@gmail.com
wrote:

 I see, so OP_SEQUENCEVERIFY will have a value pushed on the stack right
 before, and then check that the input spending the prevout has an nSequence
 corresponding to at least the sequence specified by the stack value. Good
 idea! Keeps the script code from depending on external chain specific data,
 which is nice.

 Hopefully we can repurpose one of the OP_NOPs for CHECKLOCKTIMEVERIFY and
 one for OP_CHECKSEQUENCEVERIFY. Very complementary.

 Best,
 Stephen


 On Tue, Jun 2, 2015 at 12:16 AM, Mark Friedenbach m...@friedenbach.org
 wrote:

 You are correct! I am maintaining a 'checksequenceverify' branch in my
 git repository as well, an OP_RCLTV using sequence numbers:

 https://github.com/maaku/bitcoin/tree/checksequenceverify

 Most of the interesting use cases for relative lock-time require an RCLTV
 opcode. What is interesting about this architecture is that it is possible to
 cleanly separate the relative lock-time (sequence numbers) from the RCLTV
 opcode (OP_CHECKSEQUENCEVERIFY) both in concept and in implementation. Like
 CLTV, the CSV opcode only checks transaction data and requires no
 contextual knowledge about block headers, a weakness of the other RCLTV
 proposals that violate the clean separation between libscript and
 libconsensus. In a similar way, this BIP proposal only touches the
 transaction validation logic without any impact to script.

 I would like to propose an additional BIP covering the
 CHECKSEQUENCEVERIFY opcode and its enabling applications. But, well, one
 thing at a time.

 On Mon, Jun 1, 2015 at 8:45 PM, Stephen Morse 
 stephencalebmo...@gmail.com wrote:

 Hi Mark,

 Overall, I like this idea in every way except for one: unless I am
 missing something, we may still need an OP_RCLTV even with this being
 implemented.

 In use cases such as micropayment channels where the funds are locked up
 by multiple parties, the enforcement of the relative locktime can be done
 by the first-signing party. So, while your solution would probably work in
 cases like this, where multiple signing parties are involved, there may be
 other, seen or unforeseen, use cases that require putting the relative
 locktime right into the spending contract (the scriptPubKey itself).
 When there is only one signer, there's nothing that enforces using an
 nSequence and nVersion=2 that would prevent spending the output until a
 certain time.

 I hope this is received as constructive criticism, I do think this is an
 innovative idea. In my view, though, it seems to be less fully-featured
 than just repurposing an OP_NOP to create OP_RCLTV. The benefits are
 obviously that it saves transaction space by repurposing unused space, and
 would likely work for most cases where an OP_RCLTV would be needed.

 Best,
 Stephen

 On Mon, Jun 1, 2015 at 9:49 PM, Mark Friedenbach m...@friedenbach.org
 wrote:

 I have written a reference implementation and BIP draft for a soft-fork
 change to the consensus-enforced behaviour of sequence numbers for the
 purpose of supporting transaction replacement via per-input relative
 lock-times. This proposal was previously discussed on the mailing list in
 the following thread:

 http://sourceforge.net/p/bitcoin/mailman/message/34146752/

 In short summary, this proposal seeks to enable safe transaction
 replacement by re-purposing the nSequence field of a transaction input to
 be a consensus-enforced relative lock-time.

 The advantages of this approach is that it makes use of the full range
 of the 32-bit sequence number which until now has rarely been used for
 anything other than a boolean control over absolute nLockTime, and it does
 so in a way that is semantically compatible with the originally envisioned
 use of sequence numbers for fast mempool transaction replacement.

 The disadvantages are that external constraints often prevent the full
 range of sequence numbers from being used when interpreted as a relative
 lock-time, and re-purposing nSequence as a relative lock-time precludes its
 use in other contexts. The latter point has 

Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-29 Thread Tier Nolan
On Fri, May 29, 2015 at 12:26 PM, Mike Hearn m...@plan99.net wrote:

 IMO it's not even clear there needs to be a size limit at all. Currently
 the 32mb message cap imposes one anyway


If the plan is to fix this once and for all, then that should be changed
too.  It could be set so that it is at least some multiple of the max block
size allowed.

Alternatively, the merkle block message already incorporates the required
functionality.

Send
- headers message (with 1 header)
- merkleblock messages (max 1MB per message)

The transactions for each merkleblock could be sent directly before each
merkleblock, as is currently the case.

That system can send a block of any size.  It would require a change to the
processing of any merkleblocks received.


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-29 Thread Tier Nolan
On Fri, May 29, 2015 at 1:39 PM, Gavin Andresen gavinandre...@gmail.com
wrote:

 But if there is still no consensus among developers but the bigger blocks
 now movement is successful, I'll ask for help getting big miners to do the
 same, and use the soft-fork block version voting mechanism to (hopefully)
 get a majority and then a super-majority willing to produce bigger blocks.
 The purpose of that process is to prove to any doubters that they'd better
 start supporting bigger blocks or they'll be left behind, and to give them
 a chance to upgrade before that happens.


How do you define that the movement is successful?


 Because if we can't come to consensus here, the ultimate authority for
 determining consensus is what code the majority of merchants and exchanges
 and miners are running.


The measure is miner consensus.  How do you intend to measure
exchange/merchant acceptance?


Re: [Bitcoin-development] Proposed alternatives to the 20MB stepfunction

2015-05-29 Thread Tier Nolan
On Fri, May 29, 2015 at 5:39 PM, Raystonn . rayst...@hotmail.com wrote:

   Regarding Tier’s proposal: The lower security you mention for extended
 blocks would delay, possibly forever, the larger maximum block size that we
 want for the entire network.  That doesn’t sound like an optimal solution.


I don't think so.  The lower security is the potential centralisation
risk.  If you have your money in the root chain, then you can watch it.
You can probably also watch it in a 20MB chain.

Full nodes would still verify the entire block (root + extended).  It is a
nuclear option, since you can make any changes you want to the rules for
the extended chain.  The only safeguard is that people have to voluntarily
transfer coins to the extended block.

The extended block might have 10-15% of the total bitcoins, but still be
useful, since they would be the ones that move the most.  If you want to
store your coins long term, you move them back to the root block where you
can watch them more closely.

It does make things more complex though.  Wallets would have to list 2
balances.


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-29 Thread Tier Nolan
On Fri, May 29, 2015 at 3:09 PM, Tier Nolan tier.no...@gmail.com wrote:



 On Fri, May 29, 2015 at 1:39 PM, Gavin Andresen gavinandre...@gmail.com
 wrote:

 But if there is still no consensus among developers but the bigger
 blocks now movement is successful, I'll ask for help getting big miners to
 do the same, and use the soft-fork block version voting mechanism to
 (hopefully) get a majority and then a super-majority willing to produce
 bigger blocks. The purpose of that process is to prove to any doubters that
 they'd better start supporting bigger blocks or they'll be left behind, and
 to give them a chance to upgrade before that happens.


 How do you define that the movement is successful?


Sorry again, I keep auto-sending from gmail when trying to delete.

In theory, using the nuclear option, the block size can be increased via
soft fork.

Version 4 blocks would contain the hash of a valid extended block in the
coinbase.

<block height> <32-byte extended hash>

To send coins to the auxiliary block, you send them to some template.

OP_P2SH_EXTENDED <scriptPubKey hash> OP_TRUE

This transaction can be spent by anyone (under the current rules).  The
soft fork would lock the transaction output unless it transferred money
from the extended block.

To unlock the transaction output, you need to include the txid of
transaction(s) in the extended block and signature(s) in the scriptSig.

The transaction output can be spent in the extended block using P2SH
against the scriptPubKey hash.
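
A rough sketch of the upgraded validation rule (OP_P2SH_EXTENDED is the
hypothetical opcode above, and the structures here are purely illustrative):

// Old nodes see the output as anyone-can-spend; upgraded nodes also demand
// proof of a matching transfer out of the extended block.
struct ExtendedSpend {
    bool fromExtendedTemplate; // output matched "OP_P2SH_EXTENDED <hash> OP_TRUE"
    bool hasExtendedProof;     // scriptSig carries txid(s) + signature(s)
                               // from the extended block
};

bool SpendAllowed(const ExtendedSpend& spend, bool nodeUpgraded)
{
    if (!nodeUpgraded)
        return true; // pre-fork rule: anyone can spend
    return !spend.fromExtendedTemplate || spend.hasExtendedProof;
}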

This means that people can choose to move their money to the extended
block.  It might have lower security than leaving it in the root chain.

The extended chain could use the updated script language too.

This is obviously more complex than just increasing the size though, but it
could be a fallback option if no consensus is reached.  It has the
advantage of giving people a choice.  They can move their money to the
extended chain or not, as they wish.


Re: [Bitcoin-development] Consensus-enforced transaction replacement via sequence numbers

2015-05-28 Thread Tier Nolan
Can you update it so that it only applies to transactions with version
number 3 and higher?  Changing the meaning of a field is exactly what the
version numbers are for.

You could even decode version 3 transactions like that.

Version 3 transactions have a sequence number of 0xFFFFFFFF and the
sequence number field is re-purposed for relative lock time.

This means that legacy transactions that have already been signed but have
a locktime in the future will still be able to enter the blockchain
(without having to wait significantly longer than expected).

On Thu, May 28, 2015 at 10:56 AM, Mark Friedenbach m...@friedenbach.org
wrote:

 I have no problem with modifying the proposal to have the most significant
 bit signal use of the nSequence field as a relative lock-time. That leaves
 a full 31 bits for experimentation when relative lock-time is not in use. I
 have adjusted the code appropriately:

 https://github.com/maaku/bitcoin/tree/sequencenumbers

 On Wed, May 27, 2015 at 10:39 AM, Mike Hearn m...@plan99.net wrote:

 Mike, this proposal was purposefully constructed to maintain as well as
 possible the semantics of Satoshi's original construction. Higher sequence
 numbers -- chronologically later transactions -- are able to hit the chain
 earlier, and therefore it can be reasonably argued will be selected by
 miners before the later transactions mature. Did I fail in some way to
 capture that original intent?


 Right, but the original protocol allowed for e.g. millions of revisions
 of the transaction, hence for high frequency trading (that's actually how
 Satoshi originally explained it to me - as a way to do HFT - back then the
 channel concept didn't exist).

 As you point out, with a careful construction of channels you should only
 need to bump the sequence number when the channel reverses direction. If
 your app only needs to do that rarely, it's a fine approach.And your
 proposal does sounds better than sequence numbers being useless like at the
 moment. I'm just wondering if we can get back to the original somehow or at
 least leave a path open to it, as it seems to be a superset of all other
 proposals, features-wise.






Re: [Bitcoin-development] Consensus-enforced transaction replacement via sequence numbers

2015-05-28 Thread Tier Nolan
On Thu, May 28, 2015 at 3:59 PM, Mark Friedenbach m...@friedenbach.org
wrote:

 Why 3? Do we have a version 2?

I meant whatever the next version is, so you are right, it's version 2.

 As for doing it in serialization, that would alter the txid making it a
 hard fork change.

The change is backwards compatible (since there are no restrictions on
sequence numbers).   This makes it a soft fork.

That doesn't change the fact that you are changing what a field in the
transaction represents.

You could say that the sequence number is no longer encoded in the
serialization, it is assumed to be 0xFFFFFFFF for all version 2+
transactions and the relative locktime is a whole new field that is the
same size (and position).

I think keeping some of the bytes for other uses is a good idea.  The
entire top 2 bytes could be ignored when working out relative locktime
verify.  That leaves them fully free to be set to anything.

It could be that if the MSB of the bottom 2 bytes is set, then that
activates the rule and the top 2 bytes are ignored.
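
A sketch of that layout (the 13-bit mask is an assumption chosen to match
the 8191-block figure in the next paragraph; bits 13-14 of the bottom two
bytes would remain free):

#include <cstdint>

const uint32_t SEQ_RLT_FLAG = 0x00008000; // MSB of the bottom 2 bytes
const uint32_t SEQ_RLT_MASK = 0x00001FFF; // 13 bits -> up to 8191 blocks

// Returns true and sets blocksOut if nSequence activates relative
// lock-time.  The top 2 bytes are ignored entirely.
bool GetRelativeLockTime(uint32_t nSequence, uint32_t& blocksOut)
{
    if (!(nSequence & SEQ_RLT_FLAG))
        return false;
    blocksOut = nSequence & SEQ_RLT_MASK;
    return true;
}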

Are there any use-cases which need an RLTV of more than 8191 blocks of
delay (that can't be covered by the absolute version)?


Re: [Bitcoin-development] Version bits proposal

2015-05-27 Thread Tier Nolan
I think it would be better to have the deadlines set as block counts.  That
eliminates the need to use the median time mechanism.

The deadline could be matched to a start-line.  The definition would then
be something like

BIP 105
Start block: 325000
End block: 35
Activation: 750 of 1000
Implication: 950 of 1000
Bit: 9

This would allow creation of a simple table of known BIPs.  It also keeps
multiple users of the bit strictly separate.

The alternative to the start time is that it is set equal to the deadline
or implication time of the previous user of the bit.

Was the intention to change the 95% rule?  You need 750 of the last 1000 to
activate and then must wait at least 1000 for implication?


On Wed, May 27, 2015 at 4:51 AM, Jorge Timón jti...@jtimon.cc wrote:

 It would also help to see the actual code changes required, which I'm sure
 will be much shorter than the explanation itself.
 On May 27, 2015 5:47 AM, Luke Dashjr l...@dashjr.org wrote:

 On Wednesday, May 27, 2015 1:48:05 AM Pieter Wuille wrote:
  Feel free to comment. As the gist does not support notifying
 participants
  of new comments, I would suggest using the mailing list instead.

 I suggest adding a section describing how this interacts with and changes
 GBT.

 Currently, the client tells the server what the highest block version it
 supports is, and the server indicates a block version to use in its
 template,
 as well as optional instructions for the client to forcefully use this
 version
 despite its own maximum version number. Making the version a bitfield
 contradicts the increment-only assumption of this design, and since GBT
 clients are not aware of overall network consensus state, reused bits can
 easily become confused. I suggest, therefore, that GBT clients should
 indicate
 (instead of a maximum supported version number) a list of softforks by
 identifier keyword, and the GBT server respond with a template indicating:
 - An object of softfork keywords to bit values, that the server will
 accept.
 - The version number, as presently conveyed, indicating the preferred
 softfork
 flags.

 Does this sound reasonable, and/or am I missing anything else?

 Luke




Re: [Bitcoin-development] Version bits proposal

2015-05-27 Thread Tier Nolan
On Wed, May 27, 2015 at 11:15 AM, Peter Todd p...@petertodd.org wrote:

 The median time mechanism is basically a way for hashing power to show
 what time they think it is. Equally, the nVersion soft-fork mechanism is
 a way for hashing power to show what features they want to support.


Fair enough.  It means slightly more processing, but the median time could
be cached in the header index, so no big deal.

Block counts are inconvenient for planning, as there's no guarantee
 they'll actually happen in any particular time frame, forward and back.


I don't think the deadline needs to be set that accurately.  A roughly 6
month deadline should be fine, but as you say a majority of miners is
needed to abuse the median time and it is already a miner poll.

Perhaps the number of blocks used in the median could be increased to
reduce noise.

The median time could be the median of the last 144 blocks plus 12 hours.


 If you assume no large reorganizations, your table of known BIPs can
 just as easily be a list of block heights even if the median time
 mechanism is used.


I think it makes it easier to write the code.  It reduces the state that
needs to be stored per BIP.  You don't need to check if the previous BIPs
were all accepted.

Each bit is assigned to a particular BIP for a particular range of times
(or blocks).

If block numbers were used for the deadline, you just need to check the
block index for the deadline block.

enum {
    BIP_INACTIVE = 0,
    BIP_ACTIVE,
    BIP_LOCKED,
    BIP_INVALID_BLOCK,
};

int GetBIPState(block, bip)
{
    if (block.height == bip.deadline)  // Bit must be set to match
                                       // locked/unlocked at deadline
    {
        int bipState = check_supermajority(...);

        if (bipState == BIP_LOCKED && (block.nVersion & bip.bit))
            return BIP_LOCKED;

        if (bipState != BIP_LOCKED && !(block.nVersion & bip.bit))
            return BIP_INACTIVE;

        return BIP_INVALID_BLOCK;
    }

    if (block.height > deadline) // Look at the deadline block to determine
                                 // if the BIP is locked
        return (block_index[deadline].nVersion & bip.bit) != 0 ? BIP_LOCKED
                                                               : BIP_INACTIVE;

    if (block.height < startline + I) // BIP cannot activate/lock until
                                      // startline + implicit window size
        return BIP_INACTIVE;

    return check_supermajority(...); // Check supermajority of bit
}

The block at height deadline would indicate if the BIP was locked in.

Block time could still be used as long as the block height was set after
that.  The deadline_time could be in six months.  The startline height
could be the current block height and the deadline_height could be
startline + 35000.

That gives roughly

start time = now
deadline time = now + six months
deadline height = now + eight months

The deadline height is the block height when the bit is returned to the
pool but the deadline time is when the BIP has to be accepted.

It also helps with the warning system.  For each block height, there is a
set of known BIP bits that are allowed.  Once the final deadline is passed,
the expected mask is zeros.

On Wed, May 27, 2015 at 11:15 AM, Jorge Timón jti...@jtimon.cc wrote:

 On May 27, 2015 11:35 AM, Tier Nolan tier.no...@gmail.com wrote:

  Was the intention to change the 95% rule.  You need 750 of the last 1000
 to activate and then must wait at least 1000 for implication?

 You need 75% to start applying it, 95% to start rejecting blocks that
 don't apply it.


I think the phrasing is ambiguous.  I was just asking for clarification.

Whenever I out of any W *subsequent* blocks (regardless of the block
itself) have bit B set,

That suggests that the I of W blocks for the 95% rule must happen after
activation.  This makes the rule checking harder.  Easier to use the
current system, where blocks that were part of the 750 rule also count
towards the 95% rule.


Re: [Bitcoin-development] Consensus-enforced transaction replacement via sequence numbers

2015-05-27 Thread Tier Nolan
This could cause legacy transactions to become unspendable.


A new transaction version number should be used to indicate the change of
the field from sequence number to relative lock time.

Legacy transactions should not have the rule applied to them.

On Wed, May 27, 2015 at 9:18 AM, Gregory Maxwell gmaxw...@gmail.com wrote:

 On Wed, May 27, 2015 at 7:47 AM, Peter Todd p...@petertodd.org wrote:
  Equally this proposal is no more consensus enforcement than simply
  increasing the fee (and possibly decreasing the absolute nLockTime) for

 You've misunderstood it, I think-- Functionally nlocktime but relative
 to each txin's height.

 But the construction gives the sequence numbers a rational meaning,
 they count down the earliest position a transaction can be included.
 (e.g. the highest possible sequence number can be included any time
 the inputs are included) the next lower sequence number can only be
 included one block later than the input its assigned to is included,
 the next lower one block beyond that. All consensus enforced.   A
 miner could opt to not include the higher sequence number (which is
 the only one of the set which it _can_ include) in the hopes of
 collecting more fees later on the next block, similar to how someone
 could ignore an eligible locked transaction in the hopes that a future
 double spend will be more profitable (and that it'll enjoy that
 profit) but in both cases it must take nothing at all this block, and
 risk being cut off by someone else (and, of course, nothing requires
 users use sequence numbers only one apart...).

 It makes sequence numbers work exactly like you'd expect-- within the
 bounds of whats possible in a decentralized system.  At the same time,
 all it is ... is relative nlocktime.




Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-19 Thread Tier Nolan
On Mon, May 18, 2015 at 2:42 AM, Rusty Russell ru...@rustcorp.com.au
wrote:

 OK.  Be nice if these were cleaned up, but I guess it's a sunk cost.


Yeah.

On the plus side, as people spend their money, old UTXOs would be used up
and then they would be included in the cost function.  It is only people
who are storing their money long term that wouldn't.

They are unlikely to have consumed their UTXOs anyway, unless miners
started paying for UTXOs.

We could make it a range.

UTXOs from below 355,000 and above 375,000 are included.  That can create
incentive problems for the next similar change; I think a future threshold
is better.


  He said utxo_created_size not utxo_created so I assumed scriptlen?


Maybe I misread.


 But you made that number up?  The soft cap and hard byte limit are
 different beasts, so there's no need for soft cost cap < hard byte
 limit.


I was thinking about it being a soft-fork.

If it was combined with the 20MB limit change, then it can be anything.

I made a suggestion somewhere (here or on the forums, not sure) that
transactions should be allowed to store bytes.

For example, a new opcode could be added: <byte_count> OP_LOCK_BYTES.

This makes the transaction seem byte_count larger.  However, when
spending the UTXO, that transaction counts as byte_count smaller, even
against the hard-cap.

This would be useful for channels.  If channels were 100-1000X the
blockchain volume and someone caused lots of channels to close, there
mightn't be enough space for all the channel-close transactions.  Some
people might be able to get their refund transactions included in the
blockchain because the timeout expires before the close transactions can
all be confirmed.

If transactions could store enough space to be spent, then a mass channel
close would cause some very large blocks, but then they would have to be
followed by lots of tiny blocks.

The block limit would be an average, not fixed per block.  There would be 3
limits:

Absolute hard limit (max bytes no matter what): 100MB
Hard limit (max bytes after stored bytes offset): 30MB
Soft limit (max bytes equivalents): 10MB

Blocks larger than ~32MB require a new network protocol, which makes the
hard fork even harder.  The protocol change could just be that messages can
now be 150MB max, though, so maybe not so complex.
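
A sketch of the byte-banking arithmetic (OP_LOCK_BYTES is the hypothetical
opcode above; names are illustrative):

#include <cstdint>

// A transaction that locks bytes counts as larger now; a transaction that
// spends such outputs counts as smaller later, even against the hard cap.
int64_t EffectiveSize(int64_t realSize, int64_t bytesLocked,
                      int64_t bytesReleased)
{
    return realSize + bytesLocked - bytesReleased;
}
// e.g. a 300-byte channel open locking 400 bytes counts as 700 bytes;
// a later 350-byte close releasing them counts as -50 bytes.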



  This requires that transactions include scriptPubKey information when
  broadcasting them.

 Brilliant!  I completely missed that possibility...


I have written a BIP about it.  It is still in the draft stage.  I had a
look into writing up the code for the protocol change.

https://github.com/TierNolan/bips/blob/extended_transactions/bip-etx.mediawiki
https://github.com/TierNolan/bips/blob/extended_transactions/bip-etx-fork.mediawiki


Re: [Bitcoin-development] [BIP] Normalized Transaction IDs

2015-05-19 Thread Tier Nolan
On Tue, May 19, 2015 at 9:28 AM, Christian Decker 
decker.christ...@gmail.com wrote:

 Thanks Stephen, I hadn't thought about BIP 34 and we need to address this
 in both proposals. If we can avoid it I'd like not to have one
 transaction hashed one way and other transactions in another way.


The normalized TXID cannot depend on height for other transactions.
Otherwise, it gets mutated when being added to the chain, depending on
height.

An option would be that the height is included in the scriptSig for all
transactions, but for non-coinbase transactions, the height used is zero.

I think if height has to be an input into the normalized txid function, the
specifics of inclusion don't matter.

The previous txid for coinbases is required to be all zeros, so the
normalized txid could be to add the height to the txids of all inputs.
Again, non-coinbase transactions would have heights of zero.


 Is there a specific reason why that was not chosen at the time?


I assumed that since the scriptSig in the coinbase is specifically intended
to be random bytes/extra nonce, putting a restriction on it was guaranteed
to be backward compatible.


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-16 Thread Tier Nolan
On Sat, May 16, 2015 at 1:22 AM, Rusty Russell ru...@rustcorp.com.au
wrote:

 Some tweaks:

 1) Nomenclature: call tx_size tx_cost and real_size tx_bytes?


Fair enough.


 2) If we have a reasonable hard *byte* limit, I don't think that we need
the MAX().  In fact, it's probably OK to go negative.


I agree, we want people to compress the UTXO space and a transaction with
100 inputs and one output is great.

It may have a privacy problem though.



 3) ... or maybe not, if any consumed UTXO was generated before the soft
fork (reducing Tier's perverse incentive).


The incentive problem can be fixed by excluding UTXOs from blocks before a
certain count.

UTXOs in blocks before 375000 don't count.



 4) How do we measure UTXO size?  There are some constant-ish things in
there (eg. txid as key, height, outnum, amount).  Maybe just add 32
to scriptlen?


They can be stored as a fixed digest.  That can be any size, depending on
security requirements.

Gmaxwell's cost proposal is 3-4 bytes per UTXO change.  It isn't
4*UTXO.size - 3*UTXO.size.

It is only a small nudge.  With only 10% of the block space to play with it
can't be massive.

This requires that transactions include scriptPubKey information when
broadcasting them.



 5) Add a CHECKSIG cost.  Naively, since we allow 20,000 CHECKSIGs and
1MB blocks, that implies a cost of 50 bytes per CHECKSIG (but counted
correctly, unlike now).

 This last one implies that the initial cost limit would be 2M, but in
 practice probably somewhere in the middle.

   tx_cost = 50*num-CHECKSIG
 + tx_bytes
 + 4*utxo_created_size
 - 3*utxo_consumed_size

  A 250 byte transaction with 2 inputs and 2 outputs would have an adjusted
  size of 252 bytes.

 Now cost == 352.


That is too large a cost for a 10% block change.  It could be included in
the block size hard fork though.  I think having one combined cost for
transactions is good.  It means far fewer spread-out transaction checks.
The code for the cost formula would be in one place.
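
Putting the quoted formula in one place, as suggested (a sketch; names are
illustrative):

#include <cstdint>

int64_t TxCost(int64_t numChecksigs, int64_t txBytes,
               int64_t utxoCreatedSize, int64_t utxoConsumedSize)
{
    return 50 * numChecksigs + txBytes
         + 4 * utxoCreatedSize - 3 * utxoConsumedSize;
}
// Rusty's example: 50*2 + 250 + 4*2 - 3*2 == 352, treating each output and
// input as size 1, as the 252-byte adjusted-size example above does.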


Re: [Bitcoin-development] Block Size Increase Requirements

2015-05-16 Thread Tier Nolan
On Sat, May 9, 2015 at 4:08 AM, Peter Todd p...@petertodd.org wrote:

  I wonder if having a miner flag would be good for the network.

 Makes it trivial to find miners and DoS attack them - a huge risk to the
 network as a whole, as well as the miners.


To mitigate against this, two chaintips could be tracked.  The miner tip
and the client tip.

Miners would build on the miner tip.  When performing client services, like
wallets, they would use the client tip.

The client would act exactly the same as any node; the only change would be
that it hands out mining work based on the miner tip.

If the two tips end up significantly forking, there would be a warning to
the miner, and perhaps eventually the node would refuse to give out new work.

That would happen when there was a miner level hard-fork.


 That'd be an excellent way to double-spend merchants, significantly
 increasing the chance that the double-spend would succeed as you only
 have to get sufficient hashing power to get the lucky blocks; you don't
 need enough hashing power to *also* ensure those blocks don't become the
 longest chain, removing the need to sybil attack your target.


To launch that attack, you need to produce fake blocks.  That is
expensive.

Stephen Cale's suggestion to wait more than one block before counting a
transaction as confirmed would also help mitigate.


Re: [Bitcoin-development] Block Size Increase Requirements

2015-05-16 Thread Tier Nolan
On Sat, May 16, 2015 at 5:39 AM, Stephen stephencalebmo...@gmail.com
wrote:

 I think this could be mitigated by counting confirmations differently. We
 should think of confirmations as only coming from blocks following the
 miners' more strict rule set. So if a merchant were to see payment for the
 first time in a block that met their own size restrictions but not the
 miners', then they would simply count it as unconfirmed.


In effect, there is a confirm penalty for less strict blocks.  Confirms =
max(miner_confirms, merchant_confirms - 3, 0)

Merchants who don't upgrade end up having to wait longer to hit
confirmations.

If they get deep enough in the chain, though, the client should probably
 count them as being confirmed anyway, even if they don't meet the client
 nodes' expectation of the miners' block size limit. This happening probably
 just means that the client has not updated their software (or
 -minermaxblocksize configuration, depending on how it is implemented) in a
 long time.


That is a good idea.  Any parameters that have miner/merchant differences
should be modifiable (but only upwards) in the command line.

"Why are my transactions taking longer to confirm?"

"There was a soft fork to make the block size larger and your client is
being careful.  You need to add minermaxblocksize=4MB to your bitcoin.conf
file."

Hah, it could be called a semi-hard fork?


Re: [Bitcoin-development] [BIP] Normalized Transaction IDs

2015-05-15 Thread Tier Nolan
On Fri, May 15, 2015 at 10:54 AM, s7r s...@sky-ip.org wrote:

 Hello,

 How will this exactly be safe against:
 a) the malleability of the parent tx (2nd level malleability)


The signature signs everything except the signature itself.  The normalized
txid doesn't include that signature, so mutations of the signature don't
cause the normalized txid to change.

If the refund transaction refers to the parent using the normalised txid,
then it doesn't matter if the parent has a mutated signature.  The
normalized transaction ignores the mutation.

If the parent is mutated, then the refund doesn't even have to be modified,
it still refers to it.

If you want a multi-level refund transaction, then all refund transactions
must use the normalized txids to refer to their parents.  The root
transaction is submitted to the blockchain and locked down.


 b) replays


If there are 2 transactions which are mutations of each other, then only
one can be added to the block chain, since the other is a double spend.

The normalized txid refers to all of them, rather than a specific
transaction.


 If you strip just the scriptSig of the input(s), the txid(s) can still
 be mutated (with higher probability before it gets confirmed).


Mutation is only a problem if it occurs after signing.  The signature signs
everything except the signature itself.


 If you strip both the scriptSig of the parent and the txid, nothing can
 any longer be mutated but this is not safe against replays.


Correct, but normalized txids are safe against replays, so are better.

I think the new signature opcode fixes things too.  The question is a hard
fork with a clean solution vs a soft fork with a little more hassle.


Re: [Bitcoin-development] Long-term mining incentives

2015-05-13 Thread Tier Nolan
On Wed, May 13, 2015 at 10:49 AM, Thomas Voegtlin thom...@electrum.org
wrote:


 The reason I am asking that is, there seems to be no consensus among
 core developers on how Bitcoin can work without miner subsidy. How it
 *will* work is another question.


The position seems to be that it will continue to work for the time being,
so there is still time for more research.

Proof of stake has problems with handling long term reversals.  The main
proposal is to slightly weaken the security requirements.

With POW, a new node only needs to know the genesis block (and network
rules) to fully determine which of two chains is the strongest.

Penalties for abusing POS inherently create a time horizon.  A suggested
POS security model would assume that a full node is a node that resyncs
with the network regularly (every N blocks).  N would depend on the network
rules of the coin.

The alternative is that 51% of the holders of coins at the genesis block
can rewrite the entire chain.  The genesis block might not be the first
block; a POS coin might still use POW for minting.

https://blog.ethereum.org/2014/11/25/proof-stake-learned-love-weak-subjectivity/


Re: [Bitcoin-development] Long-term mining incentives

2015-05-13 Thread Tier Nolan
On Wed, May 13, 2015 at 11:31 AM, Alex Mizrahi alex.mizr...@gmail.com
wrote:


 But this matters if a new node has access to the globally strongest chain.


A node only needs a path of honest nodes to the network.

If a node is connected to 99 dishonest nodes and 1 honest node, it can
still sync with the main network.


 In practice, Bitcoin already embraces weak subjectivity e.g. in form of
 checkpoints embedded into the source code. So it's hard to take PoW purists
 seriously.


That isn't why checkpoints exist.  They are to prevent a disk consumption
DOS attack.

They also allow verification to go faster.  Signature operations in blocks
before the last checkpoint are assumed to be correct and are not checked.

They do protect against multi-month forks though, even if not the reason
that they exist.

If releases happen every 6 months, and the checkpoint is 3 months deep at
release, then for the average node, the checkpoint is 3 to 9 months old.

A 3 month reversal would be devastating, so the checkpoint isn't adding
much extra security.

With headers-first downloading, the checkpoints could be removed.  They
could still be used for speeding up verification of historical blocks.
Blocks behind the last checkpoint wouldn't need their signatures checked.

Removing them could cause a hard-fork though, so maybe they could be
defined as legacy artifacts of the blockchain.  Future checkpoints could be
advisory.


Re: [Bitcoin-development] Proposed additional options for pruned nodes

2015-05-13 Thread Tier Nolan
On Wed, May 13, 2015 at 6:19 AM, Daniel Kraft d...@domob.eu wrote:

 2) Divide the range of all blocks into intervals with exponentially
 growing size.  I. e., something like this:

 1, 1, 2, 2, 4, 4, 8, 8, 16, 16, ...


Interesting.  This can be combined with the system I suggested.

A node broadcasts 3 pieces of information

Seed (16 bits): This is the seed
M_bits_lsb (1 bit):  Used to indicate M during a transition
N (7 bits):  This is the count of the last range held (or partially held)

M = 1 << M_bits

M should be set to the lowest power of 2 greater than double the block
chain height

That gives M = 1 million at the moment.  During changing M, some nodes will
be using the higher M and others will use the lower M.

The M_bits_lsb field allows those to be distinguished.

As the block height approaches 512k, nodes can begin to upgrade.  For a
period around block 512k, some nodes could use M = 1 million and others
could use M = 2 million.

Assuming M is around 3 times higher than the block height, the odds of a
start being less than the block height are around 35%.  Since the run sizes
grow by 25% each step, each hit roughly doubles the amount stored.

Size(n) = ((4 + (n & 0x3)) << (n >> 2)) * 2.5MB

This gives an exponential increase, but groups of 4 are linearly
interpolated.


*Size(0) = 10 MB*
Size(1) = 12.5MB
Size(2) = 15 MB
Size(3) = 17.5MB
Size(4) = 20MB

*Size(5) = 25MB*
Size(6) = 30MB
Size(7) = 35MB

*Size(8) = 40MB*

Start(n) = Hash(seed + n) mod M
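
As a sketch (Hash below is a stand-in mixer, not the actual digest the
scheme would use, and 2.5MB is taken as a decimal 2,500,000 bytes):

#include <cstdint>

static uint64_t Hash(uint64_t x) // stand-in, not a real cryptographic hash
{
    x ^= x >> 33; x *= 0xff51afd7ed558ccdULL; x ^= x >> 33;
    return x;
}

uint64_t RunSizeBytes(unsigned n) // n = 0..127, since N is a 7-bit field
{
    // ((4 + (n & 0x3)) << (n >> 2)) * 2.5MB: groups of 4 interpolate linearly
    return ((uint64_t)(4 + (n & 0x3)) << (n >> 2)) * 2500000ULL;
}

uint64_t RunStart(uint64_t seed, unsigned n, uint64_t M)
{
    return Hash(seed + n) % M;
}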

A node should store as much of its last start as possible.  Assume starts
0, 5, and 8 were hits but the node has a max size of 60MB.  It can store
runs 0 and 5 and have 25MB left.  That isn't enough to store all of run 8,
but it should store 25MB of the blocks in run 8 anyway.

Size(127) = pow(2, 31) * 17.5MB = 35,840 TB

Decreasing N only causes previously accepted runs to be invalidated.

When a node approaches a transition point for N, it would select a block
height within 25,000 of the transition point.  Once it reaches that block,
it will begin downloading the new runs that it needs.  When updating, it
can set N to zero.  This spreads out the upgrade (over around a year), with
only a small number of nodes upgrading at any time.

New nodes should use the higher M, if near a transition point (say within
100,000).


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-13 Thread Tier Nolan
On Sat, May 9, 2015 at 4:36 AM, Gregory Maxwell gmaxw...@gmail.com wrote:

 An example would
 be tx_size = MAX(real_size >> 1, real_size + 4*utxo_created_size -
 3*utxo_consumed_size).


This could be implemented as a soft fork too.

* 1MB hard size limit
* 900kB soft limit

S = block size
U = UTXO_adjusted_size = S + 4 * outputs - 3 * inputs

A block is valid if S < 1MB and U < 1MB.

A 250 byte transaction with 2 inputs and 2 outputs would have an adjusted
size of 252 bytes.
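
A sketch of the check (a block's U sums the adjustment over all its
transactions, so outputs and inputs here are block totals):

#include <cstdint>

const int64_t LIMIT = 1000000; // 1MB

bool BlockValid(int64_t S /* block bytes */, int64_t outputs, int64_t inputs)
{
    int64_t U = S + 4 * outputs - 3 * inputs; // UTXO-adjusted size
    return S < LIMIT && U < LIMIT;
}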

The memory pool could be sorted by fee per adjusted_size.

 Coin selection could be adjusted so it tries to have at least 2 inputs
when creating transactions, unless the input is worth more than a threshold
(say 0.001 BTC).

This is a pretty weak incentive, especially if the block size is
increased.  Maybe it will cause a nudge.


Re: [Bitcoin-development] [BIP] Normalized Transaction IDs

2015-05-13 Thread Tier Nolan
I think this is a good way to handle things, but as you say, it is a hard
fork.

CHECKLOCKTIMEVERIFY covers many of the use cases, but it would be nice to
fix malleability once and for all.

This has the effect of doubling the size of the UTXO database.  At minimum,
there needs to be a legacy txid to normalized txid map in the database.

An addition to the BIP would eliminate the need for the 2nd index.  You
could require an SPV proof of the spending transaction to be included with
legacy transactions.  This would allow clients to verify that the
normalized txid matched the legacy id.

The OutPoint would be {LegacyId | SPV Proof to spending tx  | spending tx |
index}.  This allows a legacy transaction to be upgraded.  OutPoints which
use a normalized txid don't need the SPV proof.

The hard fork would be followed by a transitional period, in which both
txids could be used.  Afterwards, legacy transactions have to have the SPV
proof added.  This means that old transactions with locktimes years in the
future can be upgraded for spending, without nodes needing to maintain two
indexes.


Re: [Bitcoin-development] Long-term mining incentives

2015-05-13 Thread Tier Nolan
On Wed, May 13, 2015 at 1:26 PM, Alex Mizrahi alex.mizr...@gmail.com
wrote:

 He tries to investigate, and after some time discovers that his router (or
 his ISP's router) was hijacked. His Bitcoin node couldn't connect to any of
 the legitimate nodes, and thus got a complete fake chain from the attacker.
 Bitcoins he received were totally fake.

 Bitcoin Core did a shitty job and confirmed some fake transactions.


I don't really see how you can protect against total isolation of a node
(POS or POW).  You would need to find an alternative route for the
information.

Even encrypted connections are pointless without authentication of who you
are communicating with.

Again, it is part of the security model that you can connect to at least
one honest node.

Someone tweeted all the bitcoin headers at one point.  The problem is that
if everyone uses the same check, then that source can be compromised.

 WIthout checkpoints an attacker could prepare a fork for $10.
 With checkpoints, it would cost him at least $1000, but more likely
upwards of $10.
 That's quite a difference, no?

Headers-first means that you can't knock a synced node off the main chain
without winning the POW race.

Checkpoints can be replaced with a minimum amount of POW for initial sync.
This prevents spam of low POW blocks.  Once a node is on a chain with at
least that much POW, it considers it the main chain.


Re: [Bitcoin-development] [BIP] Normalized Transaction IDs

2015-05-13 Thread Tier Nolan
After more thought, I think I came up with a clearer description of the
recursive version.

The simple definition is that the hash for the new signature opcode should
simply assume that the normalized txid system was used since the
beginning.  All txids in the entire blockchain should be replaced with the
correct values.

This requires a full re-index of the blockchain.  You can't work out what
the TXID-N of a transaction is without knowing the TXID-N of its parents,
in order to do the replacement.

The non-recursive version can only handle refunds one level deep.

A:
from: IN
sigA: based on hash(...)

B:
from A
sig: based on hash(from: TXID-N(A) | )  // sig removed

C:
from B
sig: based on hash(from: TXID-N(B) | )  // sig removed

If A is mutated before being added into the chain, then B can be modified
to a valid transaction (B-new).

A-mutated:
from: IN
sig_mutated: based on hash(...) with some mutation

B has to be modified to B-new to make it valid.

B-new:
from A-mutated
sig: based on hash(from: TXID-N(A-mutated), )

Since TXID-N(A-mutated) is equal to TXID-N(A), the signature from B is
still valid.

However, C-new cannot be created.

C-new:
from B-new
sig: based on hash(from: TXID-N(B-new), )

TXID-N(B-new) is not the same as TXID-N(B).  Since the from field is not
removed by the TXID-N operation, differences in that field mean that the
TXIDs are different.

This means that the signature for C is not valid for C-new.

The recursive version repairs this problem.

Rather than simply deleting the scriptSig from the transaction, all txids
must also be replaced with their TXID-N versions.

Again, A is mutated before being added into the chain and B-new is produced.

A-mutated:
from: IN
sig_mutated: based on hash(...) with some mutation
TXID-N: TXID-N(A)

B has to be modified to B-new to make it valid.

B-new:
from A-mutated
sig: based on hash(from: TXID-N(A-mutated), )
TXID-N: TXID-N(B)

Since TXID-N(A-mutated) is equal to TXID-N(A), the signature from B is
still valid.

Likewise the TXID-N(B-new) is equal to TXID-N(B).

The from field is replaced by the TXID-N from A-mutated which is equal to
TXID-N(A) and the sig is the same.

C-new:
from B-new
sig: based on hash(from: TXID-N(B-new), )

The signature is still valid, since TXID-N(B-new) is the same as TXID-N(B).

This means that multi-level refunds are possible.


Re: [Bitcoin-development] [BIP] Normalized Transaction IDs

2015-05-13 Thread Tier Nolan
On Wed, May 13, 2015 at 9:31 PM, Pieter Wuille pieter.wui...@gmail.com
wrote:


 This was what I was suggesting all along, sorry if I wasn't clear.


That's great.  So, basically the multi-level refund problem is solved by
this?


Re: [Bitcoin-development] [BIP] Normalized Transaction IDs

2015-05-13 Thread Tier Nolan
On Wed, May 13, 2015 at 4:24 PM, Christian Decker 
decker.christ...@gmail.com wrote

 It does and I should have mentioned it in the draft, according to my
 calculations a mapping legacy ID -> normalized ID is about 256 MB in size,
 or at least it was at height 330'000, things might have changed a bit and
 I'll recompute that. I omitted the deprecation of legacy IDs on purpose
 since we don't know whether we will migrate completely or leave keep both
 options viable.


There are around 20 million UTXOs.  At 2*32 bytes per entry, that is more
than 1GB.  There are more UTXOs than transactions, but 256MB seems a little
low.

I think both IDs can be used in the merkle tree, since we lookup an ID in
 both indices we can use both to address them and we will find them either
 way.


The id that is used to sign should be used in the merkle tree.  The hard
fork should simply be to allow transactions that use the normalized
transaction hash.


 As for the opcodes I'll have to check, but I currently don't see how they
 could be affected.


Agreed, the transaction is simply changed and all the standard rules apply.


 We can certainly split the proposal should it get too large, for now it
 seems manageable, since opcodes are not affected.


Right, it is just a database update.  The undo info also needs to be changed
so that both txids are included.


 Bloom-filtering is resolved by adding the normalized transaction IDs and
 checking for both IDs in the filter.


Yeah, if a transaction spends with a legacy txid, it should still match if
the normalized txid is included in the filter.

 Since you mention bundling the change with other changes that require a
hard-fork it might be a good idea to build a separate proposal for a
generic hard-fork rollout mechanism.

That would be useful.  On the other hand, we don't want to make them too
easy.

I think this is a good choice for a hard fork test, since it is
uncontroversial.  With a time machine, it would have been done this way at
the start.

What about the following:

The reference client is updated so that it uses version 2 transactions by
default (but it can be changed by user).  A pop-up could appear for the GUI.

There is no other change.

All transactions in blocks 375000 to 385000 are considered votes and
weighted by bitcoin days destroyed (max 60 days).

If >75% of the transactions by weight are version 2, then the community is
considered to support the hard fork.
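
As a rough sketch of that weighting (my own illustration; the names, units
and blocks-per-day figure are assumptions, not part of the proposal):

#include <algorithm>
#include <cstdint>

// Weight one vote by bitcoin days destroyed, with age capped at 60 days.
uint64_t VoteWeight(uint64_t valueSatoshis, uint32_t ageBlocks) {
    const uint32_t BLOCKS_PER_DAY = 144;  // ~10 minute blocks
    uint32_t ageDays = ageBlocks / BLOCKS_PER_DAY;
    return valueSatoshis * std::min<uint32_t>(ageDays, 60);  // 60-day cap
}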

There would need to be a way to protect against miners censoring
transactions/votes.

Users could submit their transactions directly to a p2p tallying system.
The coin would be aged based on the age in block 375000 unless included in
the blockchain.  These votes don't need to be ordered and multiple votes
for the same coin would only count once.

In fact, votes could just be based on holding in block X.

This is an opinion poll rather than a referendum though.

Assuming support of the community, the hard fork can then proceed in a
similar way to the way a soft fork does.

Devs update the reference client to produce version 4 blocks and version 3
transactions.  Miners could watch version 3 transactions to gauge user
interest and use that to help decide if they should update.

If 750 of the last 1000 blocks are version 4 or higher, reject blocks with
transactions of less than version 3 in version 4 blocks

This means that legacy clients will be slow to confirm their
transactions, since their transactions cannot go into version 4 blocks.
This is encouragement to upgrade.

If 950 of the last 1000 blocks are version 4 or higher, reject blocks with
transactions of less than version 3 in all blocks

This means that legacy nodes can no longer send transactions but can
still receive.  Transactions received from other legacy nodes would remain
unconfirmed.

If 990 of the last 1000 blocks are version 4 or higher, reject version 3 or
lower blocks

This is the point of no return.  Rejecting version 3 blocks means that
the next rule is guaranteed to activate within the next 2016 blocks.
Legacy nodes remain on the main chain, but cannot send.  Miners mining with
legacy clients are (soft) forked off the chain.

If 1000 of the last 1000 blocks are version 4 or higher and the difficulty
retarget has just happened, activate hard fork rule

This hard forks legacy nodes off the chain.  99% of miners support this
change and users have been encouraged to update.  The block rate for the
non-forked chain is at most 1% of normal.  Blocks happen every 16 hours.
By timing activation after a difficulty retarget, it makes it harder for
the other fork to adapt to the reduced hash rate.
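
The staged thresholds could be checked with something like this (an
illustrative sketch; the stage names are mine):

#include <cstdint>
#include <vector>

enum Stage {
    NO_RESTRICTION,
    NEW_TX_ONLY_IN_V4,  // 750/1000: v4 blocks must use v3+ transactions
    NEW_TX_EVERYWHERE,  // 950/1000: all blocks must use v3+ transactions
    REJECT_OLD_BLOCKS,  // 990/1000: v3 and lower blocks are orphaned
    HARD_FORK_ARMED     // 1000/1000: hard fork at the next retarget
};

Stage GetStage(const std::vector<int32_t>& last1000Versions) {
    int v4 = 0;
    for (int32_t v : last1000Versions)
        if (v >= 4) v4++;
    if (v4 >= 1000) return HARD_FORK_ARMED;
    if (v4 >= 990)  return REJECT_OLD_BLOCKS;
    if (v4 >= 950)  return NEW_TX_EVERYWHERE;
    if (v4 >= 750)  return NEW_TX_ONLY_IN_V4;
    return NO_RESTRICTION;
}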



Re: [Bitcoin-development] [BIP] Normalized Transaction IDs

2015-05-13 Thread Tier Nolan
On Wed, May 13, 2015 at 6:14 PM, Pieter Wuille pieter.wui...@gmail.com
wrote:

 Normalized transaction ids are only effectively non-malleable when all
 inputs they refer to are also non-malleable (or you can have malleability
 in 2nd level dependencies), so I do not believe it makes sense to allow
 mixed usage of the txids at all.


The txid or txid-norm is signed, so can't be changed after signing.

The hard fork is to allow transactions to refer to their inputs by txid or
txid-norm.  You pick one before signing.

 They do not provide the actual benefit of guaranteed non-malleability
 before it becomes disallowed to use the old mechanism.

A signed transaction cannot have its txid changed.  It is true that users
of the system would have to use txid-norm.

The basic refund transaction is as follows.

 A creates TX1: Pay w BTC to B's public key if signed by A  B

 A creates TX2: Pay w BTC from TX1-norm to A's public key, locked 48
hours in the future, signed by A

 A sends TX2 to B

 B signs TX2 and returns to A

A broadcasts TX1.  It is mutated before entering the chain to become
TX1-mutated.

A can still submit TX2 to the blockchain, since TX1 and TX1-mutated have
the same txid-norm.


 That, together with the +- resource doubling needed for the UTXO set (as
 earlier mentioned) and the fact that an alternative which is only a
 softfork are available, makes this a bad idea IMHO.

 Unsure to what extent this has been presented on the mailinglist, but the
 softfork idea is this:
 * Transactions get 2 txids, one used to reference them (computed as
 before), and one used in an (extended) sighash.
 * The txins keep using the normal txid, so not structural changes to
 Bitcoin.
 * The ntxid is computed by replacing the scriptSigs in inputs by the empty
 string, and by replacing the txids in txins by their corresponding ntxids.
 * A new checksig operator is softforked in, which uses the ntxids in its
 sighashes rather than the full txid.
 * To support efficiently computing ntxids, every tx in the utxo set
 (currently around 6M) stores the ntxid, but only supports lookup by txid
 still.

 This does result in a system where a changed dependency indeed invalidates
 the spending transaction, but the fix is trivial and can be done without
 access to the private key.

The problem with this is that 2 level malleability is not protected against.

C spends B which spends A.

A is mutated before it hits the chain.  The only change in A is in the
scriptSig.

B can be converted to B-new without breaking the signature.  This is
because the only change to A was in the scriptSig, which is dropped when
computing the txid-norm.

B-new spends A-mutated.  B-new is different from B in a different place.
The txid it uses to refer to the previous output is changed.

The signed transaction C cannot be converted to a valid C-new.  The txid of
the input points to B.  It is updated to point at B-new.  B-new and B don't
have the same txid-norm, since the change is outside the scriptSig.  This
means that the signature for C is invalid.

The txid replacements should be done recursively.  All input txids should
be replaced by txid-norms when computing the txid-norm for the
transaction.  I think this repairs the problem with only allowing one level?

Computing txid-norm:

- replace all txids in inputs with txid-norms of those transactions
- replace all input scriptSigs with empty scripts
- transaction hash is txid-norm for that transaction
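
A minimal sketch of that recursion, with memoization so each transaction is
normalized once (the structures and the hash are simplified stand-ins for
Bitcoin's real serialization and double-SHA256):

#include <functional>
#include <map>
#include <string>
#include <vector>

struct Input { std::string prevTxid; std::string scriptSig; };
struct Tx    { std::string txid; std::vector<Input> vin; std::string rest; };

// Stand-in for the real transaction hash.
std::string StubHash(const std::string& data) {
    return std::to_string(std::hash<std::string>{}(data));
}

// Every parent txid is replaced by its txid-norm and every scriptSig is
// dropped before hashing.  'index' maps txid -> transaction for the whole
// chain; 'memo' caches results so the re-index is a single pass.
std::string TxidNorm(const Tx& tx, const std::map<std::string, Tx>& index,
                     std::map<std::string, std::string>& memo) {
    auto hit = memo.find(tx.txid);
    if (hit != memo.end()) return hit->second;

    std::string ser = tx.rest;  // everything except the inputs
    for (const Input& in : tx.vin) {
        auto parent = index.find(in.prevTxid);
        // Coinbase-style inputs have no parent; keep the id as-is.
        ser += (parent == index.end())
                   ? in.prevTxid
                   : TxidNorm(parent->second, index, memo);
        // The scriptSig is deliberately omitted (replaced by empty).
    }
    return memo[tx.txid] = StubHash(ser);
}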

The same situation as above is not fatal now.

C spends B which spends A.

A is mutated before it hits the chain.  The only change in A is in the
scriptSig.

B can be converted to B-new without breaking the signature.  This is
because the only change to A was in the scriptSig, which is dropped when
computing the txid-norm (as before).

B-new spends A-mutated.  B-new is different from B in the previous input
field.

The input for B-new points to A-mutated.  When computing the txid-norm,
that would be replaced with the txid-norm for A.

Similarly, the input for B points to A and that would have been replaced
with the txid-norm for A.

This means that B and B-new have the same txid-norm.

The signed transaction C can be converted to a valid C-new.  The txid of
the input points to B.  It is updated to point at B-new.  B-new and B now
have the same txid-norm and so C is valid.

I think this reasoning is valid, but probably needs writing out actual
serializations.

Re: [Bitcoin-development] Proposed additional options for pruned nodes

2015-05-12 Thread Tier Nolan
On Tue, May 12, 2015 at 6:16 PM, Peter Todd p...@petertodd.org wrote:


 Lots of people are tossing around ideas for partial archival nodes that
 would store a subset of blocks, such that collectively the whole
 blockchain would be available even if no one node had the entire chain.


A compact way to describe which blocks are stored helps to mitigate
fingerprint attacks.

It also means that a node could compactly indicate which blocks it stores
with service bits.

The node could pick two numbers

W = window = a power of 2
P = position = random value less than W

The node would store every block whose height is equal to P mod W.  The
block hash could be used too.

This has the nice feature that the node can throw away half of its data and
still represent what is stored.

W_new = W * 2
P_new = (random_bool()) ? P + W : P;   // P plus half of the new window

Half of the stored blocks would match P_new mod W_new and the other half
could be deleted.  This means that the store would use up between 50% and
100% of the allocated size.
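
A sketch of the predicate and the halving step (illustrative; assumes W is
always a power of 2):

#include <cassert>
#include <cstdint>
#include <cstdlib>

struct StoreParams {
    uint32_t W;  // window, a power of 2
    uint32_t P;  // position, 0 <= P < W
};

// A block is kept iff its height falls in our residue class.
bool StoresHeight(const StoreParams& s, uint32_t height) {
    return height % s.W == s.P;
}

// Halve the stored data: double the window and keep one of the two
// residue classes that partition the old one.
void HalveStorage(StoreParams& s) {
    uint32_t oldW = s.W;
    s.W = oldW * 2;
    if (rand() & 1)       // a real client would use a better RNG
        s.P += oldW;      // i.e. P + W_new/2
    assert(s.P < s.W);
    // Every block still matching the predicate is kept; the other half
    // of the old set can be deleted.
}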

Another benefit is that it increases the probability that at least someone
has every block.

If N nodes each store 1% of the blocks, then the odds of a given block
being stored by nobody is pow(0.99, N).  For 1000 nodes, that gives odds of
1 in 23,164 that a block will be missing.  That means that around 13 out of
300,000 blocks would be missing.  There would likely be more nodes than
that, and also storage nodes, so it is not a major risk.

If everyone is storing 1% of blocks, then they would set W to 128.  As long
as all of the 128 buckets are covered by some nodes, then all blocks are
stored.  With 1000 nodes, that gives odds of 0.6% that at least one bucket
will be missed.  That is better than around 13 blocks being missing.

Nodes could inform peers of their W and P parameters on connection.  The
version message could be amended or a getparams message of some kind
could be added.

W could be encoded with 4 bits and P could be encoded with 16 bits, for 20
bits in total.  W = 1 << bits[19:16] and P = bits[15:0].  That gives a
maximum W of 32768, which is likely too many bits for P.
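
A sketch of the packing (illustrative; bits[19:16] hold log2(W) and
bits[15:0] hold P):

#include <cstdint>

uint32_t EncodeParams(uint32_t log2W, uint32_t P) {
    return ((log2W & 0xF) << 16) | (P & 0xFFFF);
}

void DecodeParams(uint32_t bits, uint32_t& W, uint32_t& P) {
    W = 1u << ((bits >> 16) & 0xF);  // W = 1 << bits[19:16]
    P = bits & 0xFFFF;               // P = bits[15:0]
}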

Initial download would be harder, since new nodes would have to connect to
at least 100 different nodes.  They could download from random nodes, and
just download the ones they are missing from storage nodes.  Even storage
nodes could have a range of W values.


Re: [Bitcoin-development] Proposed additional options for pruned nodes

2015-05-12 Thread Tier Nolan
On Tue, May 12, 2015 at 8:03 PM, Gregory Maxwell gmaxw...@gmail.com wrote:


 (0) Block coverage should have locality; historical blocks are
 (almost) always needed in contiguous ranges.   Having random peers
 with totally random blocks would be horrific for performance; as you'd
 have to hunt down a working peer and make a connection for each block
 with high probability.

 (1) Block storage on nodes with a fraction of the history should not
 depend on believing random peers; because listening to peers can
 easily create attacks (e.g. someone could break the network; by
 convincing nodes to become unbalanced) and not useful-- it's not like
 the blockchain is substantially different for anyone; if you're to the
 point of needing to know coverage to fill then something is wrong.
 Gaps would be handled by archive nodes, so there is no reason to
 increase vulnerability by doing anything but behaving uniformly.

 (2) The decision to contact a node should need O(1) communications,
 not just because of the delay of chasing around just to find who has
 someone; but because that chasing process usually makes the process
 _highly_ sybil vulnerable.

 (3) The expression of what blocks a node has should be compact (e.g.
 not a dense list of blocks) so it can be rumored efficiently.

 (4) Figuring out what block (ranges) a peer has given should be
 computationally efficient.

 (5) The communication about what blocks a node has should be compact.

 (6) The coverage created by the network should be uniform, and should
 remain uniform as the blockchain grows; ideally it you shouldn't need
 to update your state to know what blocks a peer will store in the
 future, assuming that it doesn't change the amount of data its
 planning to use. (What Tier Nolan proposes sounds like it fails this
 point)

 (7) Growth of the blockchain shouldn't cause much (or any) need to
 refetch old blocks.


M = 1,000,000
N = number of starts

S(0) = hash(seed) mod M
...
S(n) = hash(S(n-1)) mod M

This generates a sequence of start points.  If the start point is less than
the block height, then it counts as a hit.

The node stores the 50MB of data starting at the block at height S(n).
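
A sketch of the start generation (illustrative; std::hash stands in for the
hash which, as noted below, doesn't need to be cryptographic):

#include <cstdint>
#include <functional>
#include <vector>

// Derive the N start heights from a small seed; M is a fixed constant.
// Starts at or above the current tip height are ignored until the chain
// grows past them.
std::vector<uint32_t> StartPoints(uint64_t seed, int N,
                                  uint32_t M = 1000000) {
    std::vector<uint32_t> starts;
    uint64_t s = seed;
    for (int n = 0; n < N; n++) {
        s = std::hash<uint64_t>{}(s);  // S(n) = hash(S(n-1))
        starts.push_back(static_cast<uint32_t>(s % M));
    }
    return starts;
}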

As the blockchain increases in size, new starts will be less than the block
height.  This means some other runs would be deleted.

A weakness is that it is random with regards to block heights.  Tiny blocks
have the same priority as larger blocks.

0) Blocks are local, in 50MB runs
1) Agreed, nodes should download headers-first (or some other compact way
of finding the highest POW chain)
2) M could be fixed, N and the seed are all that is required.  The seed
doesn't have to be that large.  If 1% of the blockchain is stored, then 16
bits should be sufficient so that every block is covered by seeds.
3) N is likely to be less than 2 bytes and the seed can be 2 bytes
4) A 1% cover of 50GB of blockchain would have 10 starts @ 50MB per run.
That is 10 hashes.  They don't even necessarily need to be cryptographic hashes
5) Isn't this the same as 3?
6) Every block has the same odds of being included.  There inherently needs
to be an update when a node deletes some info due to exceeding its cap.  N
can be dropped one run at a time.
7) When new starts drop below the tip height, N can be decremented and that
one run is deleted.

There would need to be a special rule to ensure the low height blocks are
covered.  Nodes should keep the first 50MB of blocks with some probability
(10%?)


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-09 Thread Tier Nolan
On Sat, May 9, 2015 at 12:58 PM, Gavin Andresen gavinandre...@gmail.com
wrote:

 RE: fixing sigop counting, and building in UTXO cost: great idea! One of
 the problems with this debate is it is easy for great ideas get lost in all
 the noise.


If the UTXO set cost is built in, UTXO database entries suddenly are worth
something, in addition to the bitcoin held in that entry.

A user's client might display how many they own.  When sending money to a
merchant, the user might demand the merchant indicate a slot to pay to.

The user could send an ANYONE_CAN_PAY partial transaction.  The transaction
would guarantee that the user has at least as many UTXOs as before.

Discussing the possibility of doing this creates an incentive to bloat the
UTXO set right now, since UTXOs would be valuable in the future.

The objective would be to make them valuable enough to encourage
conservation, but not so valuable that the UTXO contains more value than
the bitcoins in the output.

Gmaxwell's suggested tx_size = MAX(real_size >> 1, real_size +
4*utxo_created_size - 3*utxo_consumed_size) for a 250 byte transaction
with 1 input and 2 outputs has very little effect.

real_size + 4 * (2) - 3 * 1 = 255

That gives a 2% size penalty for adding an extra UTXO.  I doubt that is
enough to change behavior.
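
For concreteness, the quoted cost function as code (my transcription; the
right shift in the first argument is reconstructed from the quote above):

#include <algorithm>
#include <cstdint>

int64_t EffectiveTxSize(int64_t realSize, int64_t utxosCreated,
                        int64_t utxosConsumed) {
    return std::max(realSize >> 1,
                    realSize + 4 * utxosCreated - 3 * utxosConsumed);
}

// The example above: EffectiveTxSize(250, 2, 1) = max(125, 255) = 255.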

The UTXO set growth could be limited directly.  A block would be invalid if
it increases the number of UTXO entries above the charted path.

RE: a hard upper limit, with a dynamic limit under it:


If the block size is greater than 32MB, then it means an update to how
blocks are broadcast is needed, so that could be a reasonable hard upper
limit (or maybe 31MB, or just the 20MB already suggested).


Re: [Bitcoin-development] Assurance contracts to fund the network with OP_CHECKLOCKTIMEVERIFY

2015-05-08 Thread Tier Nolan
Just to clarify the process.

Pledgers create transactions using the following template and broadcast
them.  The p2p protocol could be modified to allow this, or it could be a
separate system.


Input: 0.01 BTC
  Signed with SIGHASH_ANYONE_CAN_PAY

Output: 50 BTC
  Paid to: <1 million> OP_CHECKLOCKTIMEVERIFY OP_TRUE

Output: 0.01 BTC
  Paid to: OP_TRUE

This transaction is invalid, since the inputs don't pay for the outputs.
The advantage of the SIGHASH_ANYONE_CAN_PAY flag is that other people
can add additional inputs without making the signature invalid.  Normally,
any change to the transaction would make a signature invalid.

Eventually, enough other users have added pledges and a valid transaction
can be broadcast.


Input: 0.01 BTC
  Signed with SIGHASH_ANYONE_CAN_PAY

Input: 1.2 BTC
  Signed with SIGHASH_ANYONE_CAN_PAY

Input: 5 BTC
  Signed with SIGHASH_ANYONE_CAN_PAY

etc.

Input: 1.3 BTC
  Signed with SIGHASH_ANYONE_CAN_PAY

Output: 50 BTC
  Paid to: <1 million> OP_CHECKLOCKTIMEVERIFY OP_TRUE

Output: 0.01 BTC
  Paid to: OP_TRUE

This transaction can be submitted to the main network.  Once it is included
into the blockchain, it is locked in.

In this example, it might be included in block 999,500.  The 0.01BTC output
(and any excess over 50BTC) can be collected by the block 999,500 miner.

The OP_CHECKLOCKTIMEVERIFY opcode means that the 50BTC output cannot be
spent until block 1 million.  Once block 1 million arrives, the output is
completely unprotected.  This means that the miner who mines block 1
million can simply take it, by including his own transaction that sends it
to an address he controls.  It would be irrational to include somebody
else's transaction which spent it.

If by block 999,900, the transaction hasn't been completed (due to not
enough pledgers), the pledgers can spend the coin(s) that they were going
to use for their pledge.  This invalidates those inputs and effectively
withdraws from the pledge.

On Fri, May 8, 2015 at 11:01 AM, Benjamin benjamin.l.cor...@gmail.com
wrote:

 2. A merchant wants to cause block number 1 million to effectively
 have a minting fee of 50BTC. - why should he do that? That's the
 entire tragedy of the commons problem, no?


No, the pledger is saying that he will only pay 0.01BTC if the miner gets a
reward of 50BTC.

Imagine a group of 1000 people who want to make a donation of 50BTC to
something.  They all say that they will donate 0.05BTC, but only if
everyone else donates.

It still isn't perfect.  Everyone has an incentive to wait until the last
minute to pledge.


Re: [Bitcoin-development] Block Size Increase Requirements

2015-05-08 Thread Tier Nolan
On Fri, May 8, 2015 at 5:37 PM, Peter Todd p...@petertodd.org wrote:

 The soft-limit is there miners themselves produce smaller blocks; the
 soft-limit does not prevent other miners from producing larger blocks.


I wonder if having a miner flag would be good for the network.

Clients for general users and merchants would have a less strict rule than
the rule for miners.  Miners who don't set their miner flag might get
orphaned off the chain.

For example, the limits could be setup as follows.

Clients: 20MB
Miners: 4MB

When in miner mode, the client would reject blocks over 4MB and wouldn't
build on them.  The reference client might even track both the miner and
the non-miner chain tips.

Miners would refuse to build on 5MB blocks, but merchants and general users
would accept them.

This allows the miners to soft fork the limit at some point in the future.
If 75% of miners decided to up the limit to 8MB, then all merchants and the
general users would accept the new blocks.  It could follow the standard
soft fork rules.

This is a more general version of the system where miners are allowed to
vote on the block size (subject to a higher limit).

A similar system is where clients track all header trees.  Your wallet
could warn you that there is an invalid tree that has >75% of the hashing
power and you might want to upgrade.


[Bitcoin-development] Assurance contracts to fund the network with OP_CHECKLOCKTIMEVERIFY

2015-05-07 Thread Tier Nolan
One of the suggestions to avoid the problem of fees going to zero is
assurance contracts.  This lets users (perhaps large merchants or
exchanges) pay to support the network.  If insufficient people pay for the
contract, then it fails.

Mike Hearn suggests one way of achieving it, but it doesn't actually create
an assurance contract.  Miners can exploit the system to convert the
pledges into donations.

https://bitcointalk.org/index.php?topic=157141.msg1821770#msg1821770

Consider a situation in the future where the minting fee has dropped to
almost zero.  A merchant wants to cause block number 1 million to
effectively have a minting fee of 50BTC.

He creates a transaction with one input (0.1BTC) and one output (50BTC) and
signs it using SIGHASH_ANYONE_CAN_PAY.  The output pays to OP_TRUE.  This
means that anyone can spend it.  The miner who includes the transaction
will send it to an address he controls (or pay to fee).  The transaction
has a locktime of 1 million, so that it cannot be included before that
point.

This transaction cannot be included in a block, since the inputs are lower
than the outputs.  The SIGHASH_ANYONE_CAN_PAY flag means that others can
pledge additional funds.  They add more inputs to add more money with the
same sighash.

There would need to be some kind of notice board system for these pledges,
but if enough pledge, then a valid transaction can be created.  It is in
miners' interests to maintain such a notice board.

The problem is that it counts as a pure donation.  Even if only 10BTC has
been pledged, a miner can just add 40BTC of his own money and finish the
transaction.  He nets the 10BTC of the pledges if he wins the block.  If he
loses, nobody sees his 40BTC transaction.  The only risk is if his block is
orphaned and somehow the miner who mines the winning block gets his 40BTC
transaction into his block.

The assurance contract was supposed to mean "If the effective minting fee
for block 1 million is 50 BTC, then I will pay 0.1BTC".  By adding his
40BTC to the transaction, the miner converts it to a pure donation.

The key point is that *other* miners don't get the 50BTC reward if they find
the block, so it doesn't push up the total hashing power committed to the
blockchain in the way that a 50BTC minting fee would.  This is the whole
point of the assurance contract.

OP_CHECKLOCKTIMEVERIFY could be used to solve the problem.

Instead of paying to OP_TRUE, the transaction should pay 50 BTC to
<1 million> OP_CHECKLOCKTIMEVERIFY OP_TRUE and 0.01BTC to OP_TRUE.

This means that the transaction could be included into a block well in
advance of the 1 million block point.  Once block 1 million arrives, any
miner would be able to spend the 50 BTC.  The 0.01BTC is the fee for the
block the transaction is included in.

If the contract hasn't been included in a block well in advance, pledgers
would be recommended to spend their pledged input.

It can be used to pledge to many blocks at once.  The transaction could pay
out to lots of 50BTC outputs, with the locktime increasing for each
output.
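
For example (an illustrative layout, following the template notation above):

Output: 50 BTC   Paid to: <1,000,000> OP_CHECKLOCKTIMEVERIFY OP_TRUE
Output: 50 BTC   Paid to: <1,000,001> OP_CHECKLOCKTIMEVERIFY OP_TRUE
Output: 50 BTC   Paid to: <1,000,002> OP_CHECKLOCKTIMEVERIFY OP_TRUE
...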

For high value transactions, it isn't just the POW of the next block that
matters but all the blocks that are built on top of it.

A pledger might want to say "I will pay 1BTC if the next 100 blocks all
have at least an effective minting fee of 50BTC".


Re: [Bitcoin-development] Mechanics of a hard fork

2015-05-07 Thread Tier Nolan
In terms of miners, a strong supermajority is arguably sufficient, even 75%
would be enough.

The near total consensus required is merchants and users.  If (almost) all
merchants and users updated and only 75% of the miners updated, then that
would give a successful hard-fork.

On the other hand, if 99.99% of the miners updated and only 75% of
merchants and 75% of users updated, then that would be a serious split of
the network.

The advantage of strong miner support is that it effectively kills the fork
that follows the old rules.  The 25% of merchants and users see a
blockchain stall.

Miners are likely to switch to the fork that is worth the most.  A mining
pool could even give 2 different sub-domains.  A hasher can pick which
rule-set to follow.  Most likely, they would converge on the fork which
paid the most, but the old ruleset would likely still have some hashing
power and would eventually re-target.

On Thu, May 7, 2015 at 9:00 PM, Roy Badami r...@gnomon.org.uk wrote:

 I'd love to have more discussion of exactly how a hard fork should be
 implemented.  I think it might actually be of some value to have rough
 consensus on that before we get too bogged down with exactly what the
 proposed hard fork should do.  After all, how can we debate whether a
 particular hard fork proposal has consensus if we haven't even decided
 what level of supermajority is needed to establish consensus?

 For instance, back in 2012 Gavin was proposing, effectively, that a
 hard fork should require a supermajority of 99% of miners in order to
 succeed:

 https://gist.github.com/gavinandresen/2355445

 More recently, Gavin has proposed that a supermajority of only 80% of
 miners should be needed in order to trigger the hard fork.


 http://www.gavintech.blogspot.co.uk/2015/01/twenty-megabytes-testing-results.html

 Just now, on this list (see attached message) Gavin seems to be
 alluding to some mechanism for a hard fork which involves consensus of
 full nodes, and then a soft fork preceding the hard fork, which I'd
 love to see a full explanation of.

 FWIW, I think 80% is far too low to establish consensus for a hard
 fork.  I think the supermajority of miners should be sufficiently
 large that the rump doesn't constitute a viable coin.  If you don't
 have that very strong level of consensus then you risk forking Bitcoin
 into two competing coins (and I believe we already have one exchange
 promising to trade both forks as long as the blockchains are alive).

 As a starting point, I think 35/36th of miners (approximately 97.2%)
 is the minimum I would be comfortable with.  It means that the rump
 coin will initially have an average confirmation time of 6 hours
 (until difficulty, very slowly, adjusts) which is probably far enough
 from viable that the majority of holdouts will quickly desert it too.

 Thoughts?

 roy





Re: [Bitcoin-development] Relative CHECKLOCKTIMEVERIFY (was CLTV proposal)

2015-05-06 Thread Tier Nolan
On Wed, May 6, 2015 at 8:37 AM, Jorge Timón jti...@jtimon.cc wrote:


 This gives you less flexibility and I don't think it's necessary.
 Please let's try to avoid this if it's possible.


It is just a switch that turns on and off the new mode.

In retrospect, it would be better to just up the transaction version.

In transactions from v2 onwards, the sequence field means relative block
height.  That means legacy transactions would be spendable.

This is a pure soft-fork.


Re: [Bitcoin-development] Block Size Increase

2015-05-06 Thread Tier Nolan
On Wed, May 6, 2015 at 11:12 PM, Matt Corallo bitcoin-l...@bluematt.me
wrote:

 Personally, I'm rather strongly against any commitment to a block size
 increase in the near future.


Miners can already soft-fork to reduce the maximum block size.  If 51% of
miners agree to a 250kB block size, then that is the maximum block size.

The question being discussed is what is the maximum block size merchants
and users will accept.  This puts a reasonable limit on the maximum size
miners can increase the block size to.

In effect, the block size is set by the minimum of the miners' and the
merchants'/users' sizes: min(miner, merchants/users).


 This allows the well-funded Bitcoin ecosystem to continue building
 systems which rely on transactions moving quickly into blocks while
 pretending these systems scale. Thus, instead of working on technologies
 which bring Bitcoin's trustlessness to systems which scale beyond a
 blockchain's necessarily slow and (compared to updating numbers in a
 database) expensive settlement, the ecosystem as a whole continues to
 focus on building centralized platforms and advocate for changes to
 Bitcoin which allow them to maintain the status quo[1].


Would you accept a rule that the maximum size is 20MB (doubling every 2
years), but that miners have an efficient method for choosing a lower size?

If miners could specify the maximum block size in their block headers, then
they could coordinate to adjust the block size.  If 75% vote to lower the
size, then it is lowered, and vice versa for raising.

Every 2016 blocks, the votes are counted.  If the 504th lowest of the 2016
blocks is higher than the previous size, then the size is set to that
size.  Similarly, if the 504th highest is lower than the previous size, it
becomes the new size.
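
A sketch of that counting rule (illustrative; assumes each of the 2016
headers carries one max-size vote):

#include <algorithm>
#include <cstdint>
#include <vector>

uint64_t AdjustSize(std::vector<uint64_t> votes,  // 2016 entries
                    uint64_t prevSize) {
    std::sort(votes.begin(), votes.end());
    uint64_t lower504 = votes[503];                 // 504th lowest vote
    uint64_t upper504 = votes[votes.size() - 504];  // 504th highest vote
    if (lower504 > prevSize) return lower504;  // 75% voted at or above it
    if (upper504 < prevSize) return upper504;  // 75% voted at or below it
    return prevSize;
}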

There could be 2 default trajectories.  The reference client might always
vote to double the size every 4 years.

To handle large blocks (>32MB) requires a change to the p2p protocol
message size limits, or a way to split blocks over multiple messages.

It would be nice to add new features to any hard-fork.

I favour adding an auxiliary header.  The Merkle root in the header could
be replaced with hash(merkle_root | hash(aux_header)).  This is a fairly
simple change, but helps with things like commitments.  One of the fields
in the auxiliary header could be an extra nonce field.  This would mean
fast regeneration of the merkle root for ASIC miners.  This is a pretty
simple change.


Re: [Bitcoin-development] Block Size Increase

2015-05-06 Thread Tier Nolan
On Thu, May 7, 2015 at 12:12 AM, Matt Corallo bitcoin-l...@bluematt.me
wrote:

 The point of the hard block size limit is exactly because giving miners
 free rule to do anything they like with their blocks would allow them to
 do any number of crazy attacks. The incentives for miners to pick block
 sizes are no where near compatible with what allows the network to
 continue to run in a decentralized manner.


Miners can always reduce the block size (if they coordinate).  Increasing
the maximum block size doesn't necessarily cause an increase.  A majority
of miners can soft-fork to set the limit lower than the hard limit.

Setting the hard-fork limit higher means that a soft fork can be used to
adjust the limit in the future.

The reference client would accept blocks above the soft limit for wallet
purposes, but not build on them.  Blocks above the hard limit would be
rejected completely.
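
A sketch of that three-way rule (illustrative names and limits):

#include <cstdint>

enum BlockPolicy { BUILD_ON, ACCEPT_ONLY, REJECT_BLOCK };

// Blocks over the miners' soft limit are accepted for wallet purposes but
// not mined on; blocks over the hard limit are invalid outright.
BlockPolicy ClassifyBlock(uint64_t blockSize,
                          uint64_t softLimit,   // e.g. 4MB
                          uint64_t hardLimit) { // e.g. 20MB
    if (blockSize > hardLimit) return REJECT_BLOCK;
    if (blockSize > softLimit) return ACCEPT_ONLY;  // don't build on it
    return BUILD_ON;
}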


Re: [Bitcoin-development] Relative CHECKLOCKTIMEVERIFY (was CLTV proposal)

2015-05-05 Thread Tier Nolan
I think that should be "greater than" in the comparison?  You want it to
fail if the height of the UTXO plus the sequence number is greater than the
spending block's height.

There should be an exception for final inputs.  Otherwise, they will count
as a relative locktime of 0xFFFFFFFF.  Is this check handled elsewhere?

if (!tx.vin[i].IsFinal() && nSpendHeight > coins->nHeight +
tx.vin[i].nSequence)
    return state.Invalid(false, REJECT_INVALID,
"bad-txns-non-final-input");

Is the intention to let the script check the sequence number?

number OP_RELATIVELOCKTIMEVERIFY

would check if number is less than or equal to the sequence number.

It does make sequence mean something completely different from before.
Invalidating previously valid transactions has the potential to reduce
confidence in the currency.

A workaround would be to have a way to enable it in the sigScript by
extending Peter Todd's suggestion in the other email chain.

1 OP_NOP2 means OP_CHECKLOCKTIMEVERIFY (absolute)
2 OP_NOP2 means OP_RELATIVECHECKLOCKTIMEVERIFY

3 OP_NOP2 means OP_SEQUENCE_AS_RELATIVE_HEIGHT

OP_SEQUENCE_AS_RELATIVE_HEIGHT would cause the script to fail unless it was
the first opcode in the script.  It acts as a flag to enable using the
sequence number as for relative block height.

This can be achieved using a simple pattern match.

bool CScript::IsSequenceAsRelativeHeight() const
{
    // Fast pattern match (modeled on the pay-to-script-hash test):
    return (this->size() >= 4 &&
            this->at(0) == OP_PUSHDATA1 &&
            this->at(1) == 1 &&
            this->at(2) == 0xFF &&
            this->at(3) == OP_NOP2);
}

if (!tx.vin[i].IsFinal() &&
    tx.vin[i].scriptSig.IsSequenceAsRelativeHeight() &&
    nSpendHeight > coins->nHeight + tx.vin[i].nSequence)
    return state.Invalid(false, REJECT_INVALID,
"bad-txns-non-final-input");

On Mon, May 4, 2015 at 12:24 PM, Jorge Timón jti...@jtimon.cc wrote:

 for (unsigned int i = 0; i < tx.vin.size(); i++) {
 // ...
 if (coins->nHeight + tx.vin[i].nSequence < nSpendHeight)
 return state.Invalid(false, REJECT_INVALID,
 "bad-txns-non-final-input");
 // ...
 }



Re: [Bitcoin-development] BIP draft - Auxiliary Header Format

2014-11-12 Thread Tier Nolan
I was going to look into creating reference code for this.

The first BIP could be reasonably easy, since it just needs to check for
the presence of the 2 special transactions.

That would mean that it doesn't actually create version 3 blocks at all.

Ideally, I would make it easy for miners to mine version 3 blocks.  I could
add a new field to the getblocktemplate that has the 2 transactions ready
to go.

What do pools actually use for generating blocks?  I assume it's custom
code, but that they use (near) standard software for the memory pool?


On Mon, Nov 10, 2014 at 11:39 PM, Tier Nolan tier.no...@gmail.com wrote:

 I have added the network BIP too.  It only has the aheaders message and
 the extra field for getheaders.


 https://github.com/TierNolan/bips/blob/aux_header/bip-aux-header-network.mediawiki

 The transaction definitions are still at:

 https://github.com/TierNolan/bips/blob/aux_header/bip-aux-header.mediawiki

 On Mon, Nov 10, 2014 at 9:21 PM, Tier Nolan tier.no...@gmail.com wrote:

 I updated the BIP to cover only the specification of the transactions
 that need to be added.  I will create a network BIP tomorrow.

 On Mon, Nov 10, 2014 at 11:42 AM, Tier Nolan tier.no...@gmail.com
 wrote:

 The aheaders message is required to make use of the data by SPV
 clients.  This could be in a separate BIP though.  I wanted to show that
 the merkle path to the aux-header transaction could be efficiently encoded,
 but a reference to the other BIP would be sufficient.

 For the other messages, the problem is that the hash of the aux header
 is part of the block, but the aux header itself is not.  That means that
 the aux header has to be sent for validation of the block.

 I will change it so that the entire aux-header is encoded in the block.
 I think encoding the hash in the final transaction and the full aux-header
 in the 2nd last one is the best way to do it.  This has the added advantage
 of reducing the changes to block data storage, since the aux-header doesn't
 have to be stored separately.


 On Mon, Nov 10, 2014 at 12:52 AM, Gregory Maxwell gmaxw...@gmail.com
 wrote:

 Some initial comments...

 Tying in the protocol changes is really confusing and the fact that
 they seem to be required out the gates would seemingly make this much
 harder to deploy.   Is there a need to do that? Why can't the p2p part
 be entirely separate from the committed data?

 On Mon, Nov 10, 2014 at 12:39 AM, Tier Nolan tier.no...@gmail.com
 wrote:
  I made some changes to the draft.  The merkleblock now has the
 auxiliary
  header information too.
 
  There is a tradeoff between overhead and delayed transactions.  Is
 12.5%
  transactions being delayed to the next block unacceptable?  Would
 adding
  padding transactions be an improvement?
 
  Creating the seed transactions is an implementation headache.
 
  Each node needs to have control over an UTXO to create the final
 transaction
  in the block that has the digest of the auxiliary header.  This means
 that
  it is not possible to simply start a node and have it mine.  It has to
  somehow be given the private key.  If two nodes were given the same
 key by
  accident, then one could end up blocking the other.
 
  On one end of the scale is adding a transaction with a few thousand
 outputs
  into the block chain.  The signatures for locktime restricted
 transactions
  that spend those outputs could be hard-coded into the software.  This
 is the
  easiest to implement, but would mean a large table of signatures.  The
  person who generates the signature list would have to be trusted not
 to
  spend the outputs early.
 
  The other end of the scale means that mining nodes need to include a
 wallets
  to manage their UTXO entry.  Miners can split a zero value output
 into lots
  of outputs, if they wish.
 
  A middle ground would be for nodes to be able to detect the special
  transactions and use them.  A server could send out timelocked
 transactions
  that pay to a particular address but the transaction would be
 timelocked.
  The private key for the output would be known.  However, miners who
 mine
  version 2 blocks wouldn't be able to spend them early.
 
 
  On Sat, Nov 8, 2014 at 11:45 PM, Tier Nolan tier.no...@gmail.com
 wrote:
 
  I created a draft BIP detailing a way to add auxiliary headers to
 Bitcoin
  in a bandwidth efficient way.  The overhead per auxiliary header is
 only
  around 104 bytes per header.  This is much smaller than would be
 required by
  embedding the hash of the header in the coinbase of the block.
 
  It is a soft fork and it uses the last transaction in the block to
 store
  the hash of the auxiliary header.
 
  It makes use of the fact that the last transaction in the block has
 a much
  less complex Merkle branch than the other transactions.
 
 
 https://github.com/TierNolan/bips/blob/aux_header/bip-aux-header.mediawiki

Re: [Bitcoin-development] BIP draft - Auxiliary Header Format

2014-11-10 Thread Tier Nolan
The aheaders message is required to make use of the data by SPV clients.
This could be in a separate BIP though.  I wanted to show that the merkle
path to the aux-header transaction could be efficiently encoded, but a
reference to the other BIP would be sufficient.

For the other messages, the problem is that the hash of the aux header is
part of the block, but the aux header itself is not.  That means that the
aux header has to be sent for validation of the block.

I will change it so that the entire aux-header is encoded in the block.  I
think encoding the hash in the final transaction and the full aux-header in
the 2nd last one is the best way to do it.  This has the added advantage of
reducing the changes to block data storage, since the aux-header doesn't
have to be stored separately.

On Mon, Nov 10, 2014 at 12:52 AM, Gregory Maxwell gmaxw...@gmail.com
wrote:

 Some initial comments...

 Tying in the protocol changes is really confusing and the fact that
 they seem to be required out the gates would seemingly make this much
 harder to deploy.   Is there a need to do that? Why can't the p2p part
 be entirely separate from the committed data?

 On Mon, Nov 10, 2014 at 12:39 AM, Tier Nolan tier.no...@gmail.com wrote:
  I made some changes to the draft.  The merkleblock now has the auxiliary
  header information too.
 
  There is a tradeoff between overhead and delayed transactions.  Is 12.5%
  transactions being delayed to the next block unacceptable?  Would adding
  padding transactions be an improvement?
 
  Creating the seed transactions is an implementation headache.
 
  Each node needs to have control over an UTXO to create the final
 transaction
  in the block that has the digest of the auxiliary header.  This means
 that
  it is not possible to simply start a node and have it mine.  It has to
  somehow be given the private key.  If two nodes were given the same key
 by
  accident, then one could end up blocking the other.
 
  On one end of the scale is adding a transaction with a few thousand
 outputs
  into the block chain.  The signatures for locktime restricted
 transactions
  that spend those outputs could be hard-coded into the software.  This is
 the
  easiest to implement, but would mean a large table of signatures.  The
  person who generates the signature list would have to be trusted not to
  spend the outputs early.
 
  The other end of the scale means that mining nodes need to include a
 wallets
  to manage their UTXO entry.  Miners can split a zero value output into
 lots
  of outputs, if they wish.
 
  A middle ground would be for nodes to be able to detect the special
  transactions and use them.  A server could send out timelocked
 transactions
  that pay to a particular address but the transaction would be timelocked.
  The private key for the output would be known.  However, miners who mine
  version 2 blocks wouldn't be able to spend them early.
 
 
  On Sat, Nov 8, 2014 at 11:45 PM, Tier Nolan tier.no...@gmail.com
 wrote:
 
  I created a draft BIP detailing a way to add auxiliary headers to
 Bitcoin
  in a bandwidth efficient way.  The overhead per auxiliary header is only
  around 104 bytes per header.  This is much smaller than would be
 required by
  embedding the hash of the header in the coinbase of the block.
 
  It is a soft fork and it uses the last transaction in the block to store
  the hash of the auxiliary header.
 
  It makes use of the fact that the last transaction in the block has a
 much
  less complex Merkle branch than the other transactions.
 
 
 https://github.com/TierNolan/bips/blob/aux_header/bip-aux-header.mediawiki
 
 
 
 
 



Re: [Bitcoin-development] BIP draft - Auxiliary Header Format

2014-11-10 Thread Tier Nolan
I updated the BIP to cover only the specification of the transactions that
need to be added.  I will create a network BIP tomorrow.

On Mon, Nov 10, 2014 at 11:42 AM, Tier Nolan tier.no...@gmail.com wrote:

 The aheaders message is required to make use of the data by SPV clients.
 This could be in a separate BIP though.  I wanted to show that the merkle
 path to the aux-header transaction could be efficiently encoded, but a
 reference to the other BIP would be sufficient.

 For the other messages, the problem is that the hash of the aux header is
 part of the block, but the aux header itself is not.  That means that the
 aux header has to be sent for validation of the block.

 I will change it so that the entire aux-header is encoded in the block.  I
 think encoding the hash in the final transaction and the full aux-header in
 the 2nd last one is the best way to do it.  This has the added advantage of
 reducing the changes to block data storage, since the aux-header doesn't
 have to be stored separately.


 On Mon, Nov 10, 2014 at 12:52 AM, Gregory Maxwell gmaxw...@gmail.com
 wrote:

 Some initial comments...

 Tying in the protocol changes is really confusing and the fact that
 they seem to be required out the gates would seemingly make this much
 harder to deploy.   Is there a need to do that? Why can't the p2p part
 be entirely separate from the committed data?

 On Mon, Nov 10, 2014 at 12:39 AM, Tier Nolan tier.no...@gmail.com
 wrote:
  I made some changes to the draft.  The merkleblock now has the auxiliary
  header information too.
 
  There is a tradeoff between overhead and delayed transactions.  Is 12.5%
  transactions being delayed to the next block unacceptable?  Would adding
  padding transactions be an improvement?
 
  Creating the seed transactions is an implementation headache.
 
  Each node needs to have control over an UTXO to create the final
 transaction
  in the block that has the digest of the auxiliary header.  This means
 that
  it is not possible to simply start a node and have it mine.  It has to
  somehow be given the private key.  If two nodes were given the same key
 by
  accident, then one could end up blocking the other.
 
  On one end of the scale is adding a transaction with a few thousand
 outputs
  into the block chain.  The signatures for locktime restricted
 transactions
  that spend those outputs could be hard-coded into the software.  This
 is the
  easiest to implement, but would mean a large table of signatures.  The
  person who generates the signature list would have to be trusted not to
  spend the outputs early.
 
  The other end of the scale means that mining nodes need to include a
 wallets
  to manage their UTXO entry.  Miners can split a zero value output into
 lots
  of outputs, if they wish.
 
  A middle ground would be for nodes to be able to detect the special
  transactions and use them.  A server could send out timelocked
 transactions
  that pay to a particular address but the transaction would be
 timelocked.
  The private key for the output would be known.  However, miners who mine
  version 2 blocks wouldn't be able to spend them early.
 
 
  On Sat, Nov 8, 2014 at 11:45 PM, Tier Nolan tier.no...@gmail.com
 wrote:
 
  I created a draft BIP detailing a way to add auxiliary headers to
 Bitcoin
  in a bandwidth efficient way.  The overhead per auxiliary header is
 only
  around 104 bytes per header.  This is much smaller than would be
 required by
  embedding the hash of the header in the coinbase of the block.
 
  It is a soft fork and it uses the last transaction in the block to
 store
  the hash of the auxiliary header.
 
  It makes use of the fact that the last transaction in the block has a
 much
  less complex Merkle branch than the other transactions.
 
 
 https://github.com/TierNolan/bips/blob/aux_header/bip-aux-header.mediawiki
 
 
 
 
 





Re: [Bitcoin-development] BIP draft - Auxiliary Header Format

2014-11-09 Thread Tier Nolan
I made some changes to the draft.  The merkleblock now has the auxiliary
header information too.

There is a tradeoff between overhead and delayed transactions.  Is 12.5% of
transactions being delayed to the next block unacceptable?  Would adding
padding transactions be an improvement?

Creating the seed transactions is an implementation headache.

Each node needs to have control over a UTXO to create the final
transaction in the block that has the digest of the auxiliary header.  This
means that it is not possible to simply start a node and have it mine.  It
has to somehow be given the private key.  If two nodes were given the same
key by accident, then one could end up blocking the other.

On one end of the scale is adding a transaction with a few thousand outputs
into the block chain.  The signatures for locktime restricted transactions
that spend those outputs could be hard-coded into the software.  This is
the easiest to implement, but would mean a large table of signatures.  The
person who generates the signature list would have to be trusted not to
spend the outputs early.

The other end of the scale means that mining nodes need to include a
wallet to manage their UTXO entries.  Miners can split a zero value output
into lots of outputs, if they wish.

A middle ground would be for nodes to be able to detect the special
transactions and use them.  A server could send out timelocked transactions
that pay to a particular address.  The private key for the output would be
publicly known.  However, miners who mine
version 2 blocks wouldn't be able to spend them early.


On Sat, Nov 8, 2014 at 11:45 PM, Tier Nolan tier.no...@gmail.com wrote:

 I created a draft BIP detailing a way to add auxiliary headers to Bitcoin
 in a bandwidth efficient way.  The overhead per auxiliary header is only
 around 104 bytes per header.  This is much smaller than would be required
 by embedding the hash of the header in the coinbase of the block.

 It is a soft fork and it uses the last transaction in the block to store
 the hash of the auxiliary header.

 It makes use of the fact that the last transaction in the block has a much
 less complex Merkle branch than the other transactions.

 https://github.com/TierNolan/bips/blob/aux_header/bip-aux-header.mediawiki


--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


[Bitcoin-development] BIP draft - Auxiliary Header Format

2014-11-08 Thread Tier Nolan
I created a draft BIP detailing a way to add auxiliary headers to Bitcoin
in a bandwidth efficient way.  The overhead per auxiliary header is only
around 104 bytes per header.  This is much smaller than would be required
by embedding the hash of the header in the coinbase of the block.

It is a soft fork and it uses the last transaction in the block to store
the hash of the auxiliary header.

It makes use of the fact that the last transaction in the block has a much
less complex Merkle branch than the other transactions.

https://github.com/TierNolan/bips/blob/aux_header/bip-aux-header.mediawiki
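
To make the "much less complex Merkle branch" point concrete, here is a toy
Python sketch (my own illustration, not code from the BIP).  Because Bitcoin
duplicates the last hash at odd-length levels, the final transaction's
sibling is often just a copy of the running hash, so those branch nodes can
be recomputed instead of transmitted:

    import hashlib

    def dsha(b):
        # Bitcoin's double SHA-256
        return hashlib.sha256(hashlib.sha256(b).digest()).digest()

    def merkle_parent_level(level):
        if len(level) % 2:
            level = level + [level[-1]]  # Bitcoin duplicates the last hash
        return [dsha(level[i] + level[i + 1]) for i in range(0, len(level), 2)]

    def last_tx_branch(leaves):
        # Branch for the final leaf.  None marks a sibling that is just a
        # copy of the running hash, so it never needs to be sent.
        branch, idx, level = [], len(leaves) - 1, leaves
        while len(level) > 1:
            dup = (idx % 2 == 0) and (idx == len(level) - 1)
            branch.append(None if dup else level[idx ^ 1])
            level = merkle_parent_level(level)
            idx //= 2
        return branch

    # With 5 transactions, only one real hash links the last tx to the root.
    leaves = [dsha(bytes([i])) for i in range(5)]
    print([h is None for h in last_tx_branch(leaves)])  # [True, True, False]
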
--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Bitcoind-in-background mode for SPV wallets

2014-05-04 Thread Tier Nolan
On Fri, Apr 11, 2014 at 5:54 PM, Gregory Maxwell gmaxw...@gmail.com wrote:

 For the non-error-coded case I believe nodes
 with random spans of blocks works out asymptotically to the same
 failure rates as random.


If each stored block is really 512 blocks in sequence, then each slot is more
likely to be hit.  It effectively reduces the number of independent blocks by
the minimum run length.

ECC seemed cooler though.


 (The conversation Peter Todd was referring to was one where I was
 pointing out that with suitable error coding you also get an
 anti-censorship effect where its very difficult to provide part of the
 data without potentially providing all of it)


Interesting too.


 I think in the network we have today and for the foreseeable future we
 can reasonably count on there being a reasonable number of nodes that
 store all the blocks... quite likely not enough to satisfy the
 historical block demand from the network alone, but easily enough to
 supply blocks that have otherwise gone missing.


That's true.  Scaling up the transactions per second increases the chance
of data loss.

With side/tree chains, the odds of data loss in the less important chains
increase (though they are by definition lower value chains).
--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


[Bitcoin-development] BIP Draft: Atomic Cross Chain Transfer Protocol

2014-04-30 Thread Tier Nolan
Due to popular demand, I have created a BIP for cross chain atomic
transfers.

Unlike the previous version, this version only requires hash locking.   The
previous version required a selector transaction based on if statements.

OP_HASH160 [password hash] OP_EQUALVERIFY [public key] OP_CHECKSIG

OP_HASH160 [password hash] OP_EQUALVERIFY OP_N [public key 1] ... [public key m]
OP_M OP_CHECKMULTISIG

https://github.com/TierNolan/bips/blob/bip4x/bip-atom.mediawiki
--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] BIP Draft: Atomic Cross Chain Transfer Protocol

2014-04-30 Thread Tier Nolan
On Wed, Apr 30, 2014 at 7:59 PM, Luke Dashjr l...@dashjr.org wrote:

 Instead of TX0, TX1, etc, can you put some kind of meaningful identifier
 for
 these transactions?


Sorry, the names come from the original thread, where I was
outlining the idea.  I updated the names.


 TX1 and TX2 *cannot* be signed until after TX0 is completely signed by both
 parties.


The bail-in transactions are only signed by one of the parties.  They are
kept secret until the refund/payout transactions are all properly signed.

There is a malleability risk though, hence the need for the 3rd party.

It works on the same refund principle as payment channels.

After TX0 is signed, but before TX2 is signed, either party could
 walk away or otherwise hold the funds hostage. The sequence of signing
 proposed in this BIP is *not possible to perform*.


TX0 is not broadcast until the refund transactions are complete.


 How did you implement and test this? :/


This is a draft at the moment.

There is an implementation of (almost) this system but not by me.  This
proposal reduces the number of non-standard transaction types required.

A full implementation is the next step.


 What is the purpose of the OP_EQUAL_VERIFY in TX4? I don't see a use...


That is a typo, I have updated it.


 IMO, there should be separate BIPs for the exchange itself, and the
 protocol
 to negotiate the exchange.


I can do that.


 I would recommend changing the latter from JSON-RPC
 to some extension of the Payment Protocol, if possible.


I wanted it to be as simple as possible, but I guess MIME is just a
different way of doing things.


 Perhaps it would be good to only support compressed keys, to discourage
 use of
 uncompressed ones..


I would have no objection.



 Luke

--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] BIP Draft: Atomic Cross Chain Transfer Protocol

2014-04-30 Thread Tier Nolan
I updated again.

The new version only requires non-standard transactions on one of the two
networks.

Next step is a simple TCP / RPC server that will implement the protocol to
trade between testnet and mainnet.  Timeouts of much less than 24 hours
should be possible now.


On Wed, Apr 30, 2014 at 9:48 PM, Tier Nolan tier.no...@gmail.com wrote:

 On Wed, Apr 30, 2014 at 7:59 PM, Luke Dashjr l...@dashjr.org wrote:

 Instead of TX0, TX1, etc, can you put some kind of meaningful identifier
 for
 these transactions?


 Sorry, the names come from the original thread, where I was
 outlining the idea.  I updated the names.


 TX1 and TX2 *cannot* be signed until after TX0 is completely signed by
 both
 parties.


 The bail-in transactions are only signed by one of the parties.  They are
 kept secret until the refund/payout transactions are all properly signed.

 There is a malleability risk though, hence the need for the 3rd party.

 It works on the same refund principle as payment channels.

 After TX0 is signed, but before TX2 is signed, either party could
 walk away or otherwise hold the funds hostage. The sequence of signing
 proposed in this BIP is *not possible to perform*.


 TX0 is not broadcast until the refund transactions are complete.


 How did you implement and test this? :/


 This is a draft at the moment.

 There is an implementation of (almost) this system but not by me.  This
 proposal reduces the number of non-standard transaction types required.

 A full implementation is the next step.


 What is the purpose of the OP_EQUAL_VERIFY in TX4? I don't see a use...


 That is a typo, I have updated it.


 IMO, there should be separate BIPs for the exchange itself, and the
 protocol
 to negotiate the exchange.


 I can do that.


 I would recommend changing the latter from JSON-RPC
 to some extension of the Payment Protocol, if possible.


 I wanted it to be as simple as possible, but I guess MIME is just a
 different way of doing things.


 Perhaps it would be good to only support compressed keys, to discourage
 use of
 uncompressed ones..


 I would have no objection.



 Luke



--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] BIP32 wallet structure in use? Remove it?

2014-04-26 Thread Tier Nolan
Maybe the solution is to have a defined way to import an unknown wallet?

This means that the gap space and a search ordering need to be defined.

Given a blockchain and a root seed, it should be possible to find all the
addresses for that root seed.

The hierarchy that the wallet actually uses could be anything.
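
Such an import would amount to a standardised gap-limit scan; a minimal
sketch (the gap value of 20 and the two helper callables are my assumptions,
not part of any spec):

    GAP_LIMIT = 20  # assumed; a standard would have to fix this value

    def recover_addresses(derive, used_in_chain):
        # derive(i) -> address for index i; used_in_chain(addr) -> bool.
        found, gap, i = [], 0, 0
        while gap < GAP_LIMIT:
            addr = derive(i)
            if used_in_chain(addr):
                found.append(addr)
                gap = 0            # reset the gap on every hit
            else:
                gap += 1
            i += 1
        return found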


On Sat, Apr 26, 2014 at 11:36 AM, Thomas Voegtlin thoma...@gmx.de wrote:

 I totally agree with gmaxwell here. The cost of interoperability is too
 high. It would force us to freeze all features, and to require a broad
 consensus every time we want to add something new.

 In addition, some partial level of compatibility would probably lead to
 users not able to recover all their funds when they enter their seed in
 another wallet. That is not acceptable, and should be avoided.




 Le 25/04/2014 17:46, Gregory Maxwell a écrit :
 
  I don't believe that wallet interoperability at this level is possible
  in general except as an explicit compatibility feature. I also don't
  believe that it is a huge loss that it is so.
 
  The structure of the derivation defines and constrains functionality.
  You cannot be structure compatible unless you have the same features
  and behavior with respect to key management.  To that extent that
  wallets have the same features, I agree its better if they are
  compatible— but unless they are dead software they likely won't keep
  the same features for long.
 
  Even if their key management were compatible there are many other
  things that go into making a wallet portable between systems; the
  handling of private keys is just one part:  a complete wallet will
  have other (again, functionality specific) metadata.
 
  I agree that it would be possible to support a
  compatibility mode where a wallet has just a subset of features which
  works when loaded into different systems, but I'm somewhat doubtful
  that it would be widely used. The decision to use that mode comes at
  the wrong time— when you start, not when you need the features you
  chose to disable or when you want to switch programs. But the obvious
  thing to do there is to just specify that a linear chain with no
  further branching is that mode: then that will be the same mode you
  use when someone gives you a master public key and asks you to use it
  for recurring payments— so at least the software will get used.
 
  Compatibility for something like a recovery tool is another matter,
  and BIP32 probably defines enough there that with a bit of extra data
  about how the real wallet worked that recovery can be successful.
 
  Calling it vendor lock-in sounds overblown to me.  If someone wants
  to change wallets they can transfer the funds— manual handling of
  private keys is seldom advisable, and as is they're going to lose
  their metadata in any case.  No one expects to switch banks and to
  keep their account records at the new bank. And while less than
  perfect, the price of heavily constraining functionality in order to
  get another result is just too high.
 
 
 



--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


[Bitcoin-development] BIP - Selector Script

2014-04-25 Thread Tier Nolan
This is a BIP to allow the spender to choose one of multiple standard
scripts to use for spending the output.

https://github.com/TierNolan/bips/blob/bip4x/bip-0045.mediawiki

This is required as part of the atomic cross chain transfer protocol.  It
is required so that outputs can be retrieved, if the process ends before
being committed.

https://bitcointalk.org/index.php?topic=193281.msg2224949#msg2224949

The script allows multiple standard scripts to be included in the
scriptPubKey.

When redeeming the script the spender indicates which of the standard
scripts to use.

Only one standard script is actually executed, so the only cost is the
extra storage required.

A more ambitious change would be a soft fork like P2SH, except the spender
is allowed to select from multiple hashes.  Effectively, it would be
Multi-P2SH.

This gets much of the benefits of MAST, but it requires a formal soft fork
to implement.

If there is agreement, I can code up the reference implementation as a PR.
The multi-P2SH might actually be easier.
--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] BIP - Hash Locked Transaction

2014-04-25 Thread Tier Nolan
On Fri, Apr 25, 2014 at 8:18 PM, Luke-Jr l...@dashjr.org wrote:

 This one looks entirely useless (it cannot be made secure)


The hash locking isn't to prevent someone else stealing your coin.  Once a
user broadcasts a transaction with x in it, then everyone has access to x.

It is to release the coin on the other chain.  If you spend the output, you
automatically give the other participant the password to take your coin on
the other chain (completing the trade).

The BIP allows the hash lock to protect any of the other standard transactions
(except P2SH, since that is a template match).

For example, it would allow a script of the form

OP_HASH160 [20-byte-password-hash] OP_EQUALVERIFY OP_DUP OP_HASH160
[pubKeyHash] OP_EQUALVERIFY OP_CHECKSIG


To spend it, you would need to provide the password and also sign the
transaction using the private key.
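
For concreteness, the spend might be laid out as follows (my sketch; the
public key is also needed for the standard pay-to-pubkey-hash part):

scriptSig:    [signature] [public key] [password]
scriptPubKey: OP_HASH160 [20-byte-password-hash] OP_EQUALVERIFY
              OP_DUP OP_HASH160 [pubKeyHash] OP_EQUALVERIFY OP_CHECKSIG

The password is consumed by the first OP_EQUALVERIFY, leaving an ordinary
pay-to-pubkey-hash spend.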



 and the assertion
 that it is necessary for atomic cross-chain transfers seems unfounded and
 probably wrong...


I meant that it is required for the particular protocol.



 Luke

--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] BIP - Selector Script

2014-04-25 Thread Tier Nolan
On Fri, Apr 25, 2014 at 8:17 PM, Luke-Jr l...@dashjr.org wrote:

 I believe you meant to link here instead?
 https://github.com/TierNolan/bips/blob/bip4x/bip-0046.mediawiki

 Yeah, sorry.


 This looks reasonable from a brief skim over, but does not define any use
 cases (it mentions necessary for atomic cross chain transfers, but does
 not
 explain how it is useful for that - perhaps that belongs in another BIP you
 haven't written yet, though).


One use case should be enough.  The atomic cross chain proposal has been
discussed for a while.  It feels like bitcoin works on an "ask permission
first" basis.

It always stalls at the fact that non-standard transactions are hard to get
confirmed on other coins.  It is hard to find pools on other coins which
have weaker isStandard() checks.  The timeouts have to be set so that they
are long enough to guarantee that transactions are accepted before they
expire.

A testnet to testnet transfer is the best that would be possible at the
moment.

I don't think the cross chain system needs a BIP (except to justify this
one).

If cross chain transfers become popular, then it would be useful to ensure
that clients are interoperable, but first things first.  If the
transactions aren't accepted in any chains, then everything stalls.

Secure transfers require that the malleability issue is fixed, but that is
a separate issue.  I am assuming that will be fixed at some point in the
future, since micro-payment channels also require that it is fixed.


 IMO, it should also require P2SH.


It could be restricted to only P2SH; I don't think there would be a loss in
doing that.

Personally, I would make it so that P2SH is mandatory after a certain
time.  It makes distributed verification of the block chain easier.
Everything needed to verify a script is present in the transaction (except
that the output actually exists).

A soft fork that expands P2SH functionality would be even better, but I
would rather not let the best be the enemy of the good.



 Luke


 On Friday, April 25, 2014 6:49:35 PM Tier Nolan wrote:
  This is a BIP to allow the spender to choose one of multiple standard
  scripts to use for spending the output.
 
  https://github.com/TierNolan/bips/blob/bip4x/bip-0045.mediawiki
 
  This is required as part of the atomic cross chain transfer protocol.  It
  is required so that outputs can be retrieved, if the process ends before
  being committed.
 
  https://bitcointalk.org/index.php?topic=193281.msg2224949#msg2224949
 
  The script allows multiple standard scripts to be included in the
  scriptPubKey.
 
  When redeeming the script the spender indicates which of the standard
  scripts to use.
 
  Only one standard script is actually executed, so the only cost is the
  extra storage required.
 
  A more ambitious change would be a soft fork like P2SH, except the
  spender is allowed to select from multiple hashes.  Effectively, it
  would be Multi-P2SH.
 
  This gets much of the benefits of MAST, but it requires a formal soft
  fork to implement.
 
  If there is agreement, I can code up the reference implementation as a
  PR.  The multi-P2SH might actually be easier.



--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] BIP - Selector Script

2014-04-25 Thread Tier Nolan
On Fri, Apr 25, 2014 at 8:58 PM, Peter Todd p...@petertodd.org wrote:

 Keep in mind that P2SH redeemScripts are limited to just 520 bytes;
 there's going to be many cases where more complex transactions just
 can't be encoded in P2SH at all.


True.  Having said that, this is just a change to isStandard(), rather than
a protocol change.

These transactions can already be mined into blocks.


 --
 'peter'[:-1]@petertodd.org
 6407c80d5d4506a4253b4b426e0c7702963f8bf91e7971aa




--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] BIP - Selector Script

2014-04-25 Thread Tier Nolan
On Fri, Apr 25, 2014 at 9:26 PM, Luke-Jr l...@dashjr.org wrote:

 They define standard for interoperability between
 software. So, if you want nodes to relay these transactions, you need to
 convince them, not merely write a BIP for the transaction format.


I agree with you in theory; each miner could decide their inclusion rules
for themselves.

In practice, if the reference client is updated, then most miners will
accept those transactions.  In addition, it would eventually propagate to
alt-coins (or at least the supported ones).

I could simply submit the changes as a pull request for the reference
client, but I was hoping that by doing it this way, it would increase the
odds of it being accepted.


 Defining a BIP for cross-chain trading would be one way to do that.


I don't think it quite requires the same coordination in the short term.  I
could write up the sequence as an info BIP.

 The malleability issue has been known for years.
 I wouldn't expect any special effort made to fix it...


It is possible to tweak the protocol so that it still works.  However, it
means that 3rd parties are required (that could go in the BIP too).


 There is some ongoing discussion of a softfork to basically redo the Script
 language entirely, but it will take quite a bit of time and development
 before
 we'll see it in the wild.


Implementing multi-P2SH gets a lot of the benefits of MAST, in terms of
efficiency.



 Luke

 P.S. Did the BIP editor assign these numbers? If not, best to keep them
 numberless until assigned, to avoid confusion when people Google the real
 BIP
 44 and 45...


Not yet, but that is just my personal repo.  I did email gmaxwell, but he
said that they can't be assigned until some discussion has happened.

I take your point that the name appears in the link though, so it could cause
issues with searching.




--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] BIP - Hash Locked Transaction

2014-04-25 Thread Tier Nolan
On Fri, Apr 25, 2014 at 10:14 PM, Peter Todd p...@petertodd.org wrote:

 Along those lines, rather than doing up yet another format specific type
 as Tier Nolan is doing with his BIP, why not write a BIP looking at how
 the IsStandard() rules could be removed?


Removal of isStandard() would be even better/more flexible.

A whitelist of low risk opcodes seems like a reasonable compromise.

My thoughts behind these two BIPs are that they are a smaller change that
adds functionality required for a particular use-case (and some others).

Changing the entire philosophy behind isStandard() is a much bigger change
than just adding one new type.
--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] New BIP32 structure

2014-04-23 Thread Tier Nolan
On Wed, Apr 23, 2014 at 7:46 PM, Pavol Rusnak st...@gk2.sk wrote:


  Setting the gap limit too high is just a small extra cost in that case.

 Not if you have 100 accounts on 10 different devices.


I meant for a merchant with a server that is handing out hundreds of
addresses.

The point is to have a single system that is compatible over a large number
of systems.
--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Coinbase reallocation to discourage Finney attacks

2014-04-23 Thread Tier Nolan
An interesting experiment would be a transaction proof of publication
chain.

Each transaction would be added to that chain when it is received.  It
could be merge mined with the main chain.

If the size were limited, then it wouldn't even require spam protection.

Blocks could be discouraged if they have transactions which violate the
ordering in that chain.  Miners could still decide which transactions they
include, but couldn't include transactions which are double spends.

The locktime/final field could be used for transactions which want to be
replaceable.

The chain could use some of the fast block proposals.  For example, it
could include orphans of a block when computing the block's POW.



On Wed, Apr 23, 2014 at 9:53 PM, Gregory Maxwell gmaxw...@gmail.com wrote:

 On Wed, Apr 23, 2014 at 1:44 PM, Adam Ritter arit...@gmail.com wrote:
  Isn't a faster blockchain for transactions (maybe as a sidechain) solving
  the problem? If there would be a safe way for 0-confirmation
 transactions,
  the Bitcoin blockchain wouldn't even be needed.

 Large scale consensus can't generally provide instantly irreversible
 transactions directly: Increasing the block speed can't help past the
 point where the time starts getting close to the network diameter...
 you simply can't tell what a consensus of a group of nodes is until
 several times the light cone that includes all of them.  And if you
 start getting close to the limit you dilute the power working on the
 consensus and potentially make life easier for a large attacker.

 Maybe other chains with different parameters could achieve a different
 tradeoff which was better suited to low value retail transactions
 (e.g. where you want a soft confirmation fast). A choice of tradeoffs
 could be very useful, and maybe you can practically get close enough
 (e.g. would knowing you lost a zero-conf double spend within 30
 seconds 90% of the time be good enough?)... but I'm not aware of any
 silver bullet there which gives you something identical to what a
 centralized service can give you without invoking at least a little
 bit of centralization.



--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Coinbase reallocation to discourage Finney attacks

2014-04-23 Thread Tier Nolan
On Wed, Apr 23, 2014 at 10:39 PM, Gregory Maxwell gmaxw...@gmail.comwrote:

 You can see me proposing this kind of thing in a number of places (e.g.
 http://download.wpsoftware.net/bitcoin/wizards/2014-04-15.txt p2pool
 only forces the subsidy today, but the same mechanism could instead
 force transactions..


Interesting.  You set the share-block size to 16kB and set the share POW to
1/64 of the main target.

Each share-block would be allowed to append up to 16kB on the previous
share-block.

This would keep the bandwidth the same, but on average blocks would be only
512kB.

 e.g. to get you fast confirmation, or
 previously on BCT for the last couple years) but there are still
 limits here:  If you don't follow the fast-confirmation share chain
 you cannot mine third party transactions because you'll be at risk of
 mining a double spend that gets you orphaned, or building on a prior
 block that other miners have decided is bad.  This means that if the
 latency or data rate requirements of the share chain are too large
 relative to ordinary mining it may create some centralization
 pressure.


This effect could be reduced by having colours for blocks and
transactions.

The block colour would cycle based on block height.

You could have 16 transaction colours based on the lowest 4 bits in the
txId.

A transaction is only valid if all inputs into the transaction are the
correct colour for that block.

This allows blocks to be created in advance.  If you are processing colour
7 at the moment, you can have a colour 8 block ready.

16 colours is probably too many.  It would only be necessary for things
like 1 second block rates.

The disadvantage is that wallets would have to make sure that they have
coins for each of the 16 colours.

If you spend the wrong colour, you add 16 block times of latency.
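
A toy encoding of the rule as I read it (all names and the bit choice are
mine, purely illustrative):

    NUM_COLOURS = 16

    def block_colour(height):
        # colours cycle with block height
        return height % NUM_COLOURS

    def tx_colour(txid: bytes):
        # lowest 4 bits of the txid
        return txid[-1] & 0x0F

    def inputs_valid(block_height, input_txids):
        # a tx fits this block only if every spent tx has the block's colour
        c = block_colour(block_height)
        return all(tx_colour(t) == c for t in input_txids)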



 That said, I think using a fast confirmation share-chain is much
 better than decreasing block times and could be a very useful tool if
 we believe that there are many applications which could be improved
 with e.g. a 30 second or 1 minute interblock time.  Mostly my thinking
 has been that these retail applications really want sub-second
 confirmation, which can't reasonably be provided in this manner so I
 didn't mention it in this thread.


In a shop setting, you could set it up so that the person scans a QR-code
to set up a channel with the shop.

They can then scan all their stuff and by the time they have done that, the
channel would be ready.

If there was a queue, it could be done when the person enters the queue.

In fact, there could be QR-codes at multiple locations.
--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Economics of information propagation

2014-04-21 Thread Tier Nolan
On Mon, Apr 21, 2014 at 5:06 AM, Peter Todd p...@petertodd.org wrote:

 Of course, in reality smaller miners can just mine on top of block headers
 and include no transactions and do no validation, but that is extremely
 harmful to the security of Bitcoin.


I don't think it reduces security much.  It is extremely unlikely that
someone would publish an invalid block, since they would waste their POW.

Presuming that new headers are correct is reasonable, as long as you check
the full block within a few minutes of receiving the header.

If anything, it increases security, since less hashing power is wasted
while the full block is broadcast.

Block propagation could take the form

- broadcast new header
- all miners switch to mining empty blocks
- broadcast new block
- miners update to a block with transactions

If the block doesn't arrive within a timeout, then the miner could switch
back to the old block.
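
A rough model of that miner policy (entirely my sketch, not anyone's
implementation):

    TIMEOUT = 60  # seconds to wait for the full block behind a bare header

    class MinerPolicy:
        def __init__(self, tip):
            self.tip = tip          # last fully validated block
            self.pending = None     # (header, arrival_time) while mining empty

        def on_header(self, header, now):
            self.pending = (header, now)   # switch to mining an empty block

        def on_block(self, block, valid):
            # full block arrived: resume normal mining, or fall back
            self.tip = block if valid else self.tip
            self.pending = None

        def parent_to_mine_on(self, now):
            if self.pending and now - self.pending[1] < TIMEOUT:
                return self.pending[0]     # empty block on the new header
            return self.tip                # timed out: back to the old block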

This would mean that a few percent of empty blocks end up in the
blockchain, but that doesn't do any harm.

It is only harmful, if it is used as a DOS attack on the network.

The empty blocks will only occur when 2 blocks are found in quick
succession, so it doesn't have much effect on average time until 1
confirm.  Empty blocks are just as good for providing 1 of the 6 confirms
needed too.
--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Tree-chains preliminary summary

2014-04-17 Thread Tier Nolan
How does this system handle problems with the lower chains after they have
been locked-in?

The rule is that if a block in the child chain is pointed to by its parent,
then it effectively has infinite POW?

The point of the system is that a node monitoring the parent chain only has
to watch the header chain for its 2 children.

A parent block header could point to an invalid block in one of the child
chains.  That parent block could end up built on top of before the problem
was discovered.

This would mean that a child chain problem could cause a roll-back of a
parent chain.  This violates the principle that parents are dominant over
child chains.

Alternatively, the child chain could discard the infinite POW blocks, since
they are illegal.

P1 - C1
P2 - ---
P3 - C3
P4 - C5

It turns out C4 (or C5) was an invalid block

P5 - C4'
P6 - ---
P7 - C8'

This is a valid sequence.  Once P7 points at C8', the alternative chain
displaces C5.

This displacement could require a compact fraud proof to show that C4 was
an illegal block and that C5 was built on it.

This shouldn't happen if the miner was actually watching the log(N) chains,
but can't be guaranteed against.

I wonder if the proof of stake "nothing at stake" principle applies
here.  Miners aren't putting anything at stake by merge mining the lower
chains.

At minimum, they should get tx-fees for the lower chains that they merge
mine.  The rule could require that the minting reward is divided over the
merge mined chains.
--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Bitcoind-in-background mode for SPV wallets

2014-04-10 Thread Tier Nolan
Error correction is an interesting suggestion.

If there were 10,000 nodes and each stored 0.1% of the blocks, at random,
then the odds of a block not being stored anywhere would be about 45 in a
million.

Blocks are stored on average 10 times, so there is already reasonable
redundancy.

With 1 million blocks, 45 would be lost in that case, even though most are
stored multiple times.

With error correction codes, the chance of blocks going missing is much
lower.

For example, if there was a 32-out-of-34 Reed-Solomon-like system, then 2
blocks out of 34 could be lost without any actual data loss for the network.

As a back of the envelope check, the odds of 2 missing blocks landing within
34 of one another is 68/1,000,000.  That means that the odds of 2 missing
blocks falling in the same correction section is 45 * 34 / 1,000,000 =
0.153%.  Even
in that case, the missing blocks could be reconstructed, as long as you
know that they are missing.

The error correction code has taken it from being a near certainty that
some blocks would be lost to less than 0.153%.
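
The arithmetic is easy to re-check; a quick Python confirmation of the
figures above (assuming 10,000 nodes each storing 0.1% of one million
blocks):

    blocks = 1_000_000
    p_missing = (1 - 0.001) ** 10_000      # ~ e**-10 per block
    print(p_missing * 1_000_000)           # ~ 45 blocks lost per million

    expected_missing = blocks * p_missing
    # chance a second missing block lands in the same 34-block window
    print(expected_missing * 34 / blocks)  # ~ 0.00153, i.e. 0.153%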

A simple error correction system would just take 32 blocks in sequence and
then compute 2 extra blocks.

The extra blocks would have to be the same length as the longest block in
the 32 being corrected.

The shorter blocks would be padded with zeroes so everything is the same
size.

For each byte position in the blocks you compute the polynomial that goes
through the points (x, data(x)), for x = 0 to 31.  This could be over a
finite field, or just mod 257.

You can then compute the value for x=32 and x = 33.  Those are the values
for the 2 extra blocks.

If mod 257 is used, then only the 2 extra blocks have to deal with symbols
from 0 to 256.

If you have 32 of the 34 blocks, you can compute the polynomial and thus
generate the 32 actual blocks.
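
A toy version of that recovery, shrunk to 4 data blocks plus 2 extra blocks
and a single byte position (my illustration; a real scheme would use 32+2
and run this once per byte position):

    P = 257  # prime, so every non-zero value has an inverse mod P

    def lagrange_eval(points, x):
        # value at x of the unique polynomial through `points`, mod P
        total = 0
        for i, (xi, yi) in enumerate(points):
            num, den = 1, 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = num * (x - xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, P - 2, P)) % P
        return total

    data = [10, 200, 33, 47]               # one byte column across 4 blocks
    pts = list(enumerate(data))
    parity = [lagrange_eval(pts, x) for x in (4, 5)]   # the 2 extra blocks

    # lose blocks 1 and 3, recover from 2 data + 2 parity values
    known = [(0, data[0]), (2, data[2]), (4, parity[0]), (5, parity[1])]
    print([lagrange_eval(known, x) for x in (1, 3)])   # -> [200, 47]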

This could be achieved by a soft fork by having a commitment every 32
blocks in the coinbase.

It makes the header chain much longer though.

Longer sections are more efficient, but need more calculations to recover
everything.  You could also do interleaving to handle the case where entire
sections are missing.


On Thu, Apr 10, 2014 at 12:54 PM, Peter Todd p...@petertodd.org wrote:




 On 10 April 2014 07:50:55 GMT-04:00, Gregory Maxwell gmaxw...@gmail.com
 wrote:
 (Just be glad I'm not suggesting coding the entire blockchain with an
 error correcting code so that it doesn't matter which subset you're
 holding)

 I forgot to ask last night: if you do that, can you add new blocks to the
 chain with the encoding incrementally?




--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Chain pruning

2014-04-10 Thread Tier Nolan
On Thu, Apr 10, 2014 at 7:32 PM, Pieter Wuille pieter.wui...@gmail.comwrote:

 If you trust hashrate for determining which UTXO set is valid, a 51%
 attack becomes worse in that you can be made to believe a version of
 history which is in fact invalid.


If there are invalidation proofs, then this isn't strictly true.

If you are connected to 10 nodes and only 1 is honest, it can send you the
proof that your main chain is invalid.

For bad scripts, it shows you the input transaction for the invalid input
along with the merkle path to prove it is in a previous block.

For double spends, it could show the transaction which spent the output.

Double spends are pretty much the same as trying to spend non-existent
outputs anyway.
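
Checking such a proof only needs the merkle root from the block header; a
minimal verifier (my sketch, with txid and sibling hashes as raw bytes):

    import hashlib

    def dsha(b):
        # Bitcoin's double SHA-256
        return hashlib.sha256(hashlib.sha256(b).digest()).digest()

    def verify_merkle_path(txid, index, path, root):
        # path: sibling hashes from the leaf level up to the root
        h = txid
        for sibling in path:
            h = dsha(sibling + h) if index & 1 else dsha(h + sibling)
            index >>= 1
        return h == root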

If the UTXO set commitment was actually a merkle tree, then all updates could
be included.

Blocks could have extra data with the proofs that the UTXO set is being
updated correctly.

To update the UTXO set, you need the paths for all spent inputs.

It puts a large load on miners to keep things working, since they have to
run a full node.

If they commit the data to the chain, then SPV nodes can do local checking.

One of them will find invalid blocks eventually (even if the other
miners don't).


 --
 Pieter



--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Tier Nolan
On Mon, Apr 7, 2014 at 8:50 PM, Tamas Blummer ta...@bitsofproof.com wrote:

 You have to load headers sequentially to be able to connect them and
 determine the longest chain.


This isn't strictly true.  If you are connected to some honest nodes, then
you could download portions of the chain and then connect the various
sub-chains together.

The protocol doesn't support it though.  There is no system to ask for
block headers for the main chain block with a given height.

Finding one high bandwidth peer to download the entire header chain
sequentially is pretty much forced.  The client can switch if there is a
timeout.

Other peers could be used to parallel download the block chain while the
main chain is downloading.  Even if the header download stalled, it
wouldn't be that big a deal.

 Blocks can be loaded in random order once you have their order given by
the headers.
 Computing the UTXO however will force you to at least temporarily store
the blocks unless you have plenty of RAM.

You only need to store the UTXO set, rather than the entire block chain.

It is possible to generate the UTXO set without doing any signature
verification.

A lightweight node could just verify the UTXO set and then do random
signature verifications.

This keeps disk space and CPU reasonably low.  If an illegal transaction is
added to a block, then proof could be provided for the bad transaction.

The only slightly difficult thing is confirming inflation.  That can be
checked on a block by block basis when downloading the entire block chain.
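
As a sketch of building the UTXO set with no signature checks (my toy
model: a transaction is (txid, spends, n_outputs), the first transaction of
each block is the coinbase; value and maturity checks are omitted):

    def apply_block(utxo, transactions):
        # utxo is a set of (txid, index) outpoints
        for i, (txid, spends, n_outputs) in enumerate(transactions):
            if i > 0:                         # coinbase spends nothing real
                for outpoint in spends:
                    if outpoint not in utxo:  # catches double/phantom spends
                        raise ValueError("bad spend: %r" % (outpoint,))
                    utxo.remove(outpoint)
            for n in range(n_outputs):
                utxo.add((txid, n))
        return utxo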

 Regards,
 Tamas Blummer
 http://bitsofproof.com http://bitsofproof.com

On 07.04.2014, at 21:30, Paul Lyon pml...@hotmail.ca wrote:

I hope I'm not thread-jacking here, apologies if so, but that's the
approach I've taken with the node I'm working on.

Headers can be downloaded and stored in any order, it'll make sense of what
the winning chain is. Blocks don't need to be downloaded in any particular
order and they don't need to be saved to disk, the UTXO is fully
self-contained. That way the concern of storing blocks for seeding (or not)
is wholly separated from syncing the UTXO. This allows me to do the initial
blockchain sync in ~6 hours when I use my SSD. I only need enough disk
space to store the UTXO, and then whatever amount of block data the user
would want to store for the health of the network.

This project is a bitcoin learning exercise for me, so I can only hope I
don't have any critical design flaws in there. :)

--
From: ta...@bitsofproof.com
Date: Mon, 7 Apr 2014 21:20:31 +0200
To: gmaxw...@gmail.com
CC: bitcoin-development@lists.sourceforge.net
Subject: Re: [Bitcoin-development] Why are we bleeding nodes?


Once headers are loaded first there is no reason for sequential loading.

Validation has to be sequential, but that step can be deferred until the
blocks before a point are loaded and continuous.

Tamas Blummer
http://bitsofproof.com

On 07.04.2014, at 21:03, Gregory Maxwell gmaxw...@gmail.com wrote:

On Mon, Apr 7, 2014 at 12:00 PM, Tamas Blummer ta...@bitsofproof.com
wrote:

therefore I guess it is more handy to return some bitmap of pruned/full
blocks than ranges.


A bitmap also means high overhead and-- if it's used to advertise
non-contiguous blocks-- poor locality, since blocks are fetched
sequentially.









--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Tier Nolan
On Mon, Apr 7, 2014 at 10:55 PM, Paul Lyon pml...@hotmail.ca wrote:

  I actually ask for headers from each peer I'm connected to and then dump
 them into the backend to be sorted out.. is this abusive to the network?


I think downloading from a subset of the peers and switching out any slow
ones is a reasonable compromise.

Once you have a chain, you can quickly check that all peers have the same
main chain.

Your backend system could have a method that gives you the hash of the last
10 headers on the longest chain it knows about.  You can use the block
locator hash system.

This can be used with the getheaders message and if the new peer is on a
different chain, then it will just send you the headers starting at the
genesis block.
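
The locator is dense near the tip and exponentially sparse further back; a
sketch of the standard construction:

    def block_locator(chain):
        # chain: list of block hashes for the best chain, genesis first
        hashes, step, i = [], 1, len(chain) - 1
        while i > 0:
            hashes.append(chain[i])
            if len(hashes) >= 10:   # last 10 blocks densely...
                step *= 2           # ...then doubling gaps back to genesis
            i -= step
        hashes.append(chain[0])
        return hashes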

If that happens, you need to download the entire chain from that peer and
see if it is better than your current best.


*From:* Tier Nolan tier.no...@gmail.com
*Sent:* Monday, April 07, 2014 6:48 PM
*To:* bitcoin-development@lists.sourceforge.net


On Mon, Apr 7, 2014 at 8:50 PM, Tamas Blummer ta...@bitsofproof.com wrote:

 You have to load headers sequentially to be able to connect them and
 determine the longest chain.


This isn't strictly true.  If you are connected to some honest nodes, then
you could download portions of the chain and then connect the various
sub-chains together.

The protocol doesn't support it though.  There is no system to ask for
block headers for the main chain block with a given height.

Finding one high bandwidth peer to download the entire header chain
sequentially is pretty much forced.  The client can switch if there is a
timeout.

Other peers could be used to parallel download the block chain while the
main chain is downloading.  Even if the header download stalled, it
wouldn't be that big a deal.

 Blocks can be loaded in random order once you have their order given by
the headers.
 Computing the UTXO however will force you to at least temporarily store
the blocks unless you have plenty of RAM.

You only need to store the UTXO set, rather than the entire block chain.

It is possible to generate the UTXO set without doing any signature
verification.

A lightweight node could just verify the UTXO set and then do random
signature verifications.

This keeps disk space and CPU reasonably low.  If an illegal transaction is
added to a block, then proof could be provided for the bad transaction.

The only slightly difficult thing is confirming inflation.  That can be
checked on a block by block basis when downloading the entire block chain.

 Regards,
 Tamas Blummer
 http://bitsofproof.com http://bitsofproof.com

On 07.04.2014, at 21:30, Paul Lyon pml...@hotmail.ca wrote:

I hope I'm not thread-jacking here, apologies if so, but that's the
approach I've taken with the node I'm working on.

Headers can be downloaded and stored in any order, it'll make sense of what
the winning chain is. Blocks don't need to be downloaded in any particular
order and they don't need to be saved to disk, the UTXO is fully
self-contained. That way the concern of storing blocks for seeding (or not)
is wholly separated from syncing the UTXO. This allows me to do the initial
blockchain sync in ~6 hours when I use my SSD. I only need enough disk
space to store the UTXO, and then whatever amount of block data the user
would want to store for the health of the network.

This project is a bitcoin learning exercise for me, so I can only hope I
don't have any critical design flaws in there. :)

--
From: ta...@bitsofproof.com
Date: Mon, 7 Apr 2014 21:20:31 +0200
To: gmaxw...@gmail.com
CC: bitcoin-development@lists.sourceforge.net
Subject: Re: [Bitcoin-development] Why are we bleeding nodes?


Once headers are loaded first there is no reason for sequential loading.

Validation has to be sequential, but that step can be deferred until the
blocks before a point are loaded and continuous.

Tamas Blummer
http://bitsofproof.com

On 07.04.2014, at 21:03, Gregory Maxwell gmaxw...@gmail.com wrote:

On Mon, Apr 7, 2014 at 12:00 PM, Tamas Blummer ta...@bitsofproof.com
wrote:

therefore I guess it is more handy to return some bitmap of pruned/full
blocks than ranges.


A bitmap also means high overhead and-- if it's used to advertise
non-contiguous blocks-- poor locality, since blocks are fetched
sequentially.








Re: [Bitcoin-development] Malleability and MtGox's announcement

2014-02-10 Thread Tier Nolan
On Mon, Feb 10, 2014 at 7:47 PM, Oliver Egginger bitc...@olivere.de wrote:

 As I understand this attack someone renames the transaction ID before
 being confirmed in the blockchain. Not easy but if he is fast enough it
 should be possible. With a bit of luck for the attacker the new
 transaction is added to the block chain and the original transaction is
 discarded as double-spend. Right?


No, the problem was that the transaction MtGox produced was poorly
formatted.

It wouldn't cause a block containing the transaction to be rejected, but
the default client wouldn't relay the transaction or add it into a block.

This means the transaction stalls.

If the attacker has a direct connection to MtGox, they can receive the
transaction directly.

The attacker would fix the formatting (which changes the transaction id,
but doesn't change the signature) and then forward it to the network, as
normal.

The old transaction never propagates correctly.

 Up to this point the attacker has gained nothing. But next the attacker
 stressed the Gox support and referred to the original transaction ID. Gox
 was then probably fooled in such cases and has refunded already paid
 Bitcoins to the attacker's (virtual) Gox-wallet.


They sent out the transaction a second time.

The right solution is that the new transaction should re-spend at least one
of the coins that the first transaction spent.  That way only one can
possibly be accepted.
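
That rule is a one-line predicate (my sketch):

    def is_safe_reissue(original_inputs, reissue_inputs):
        # inputs are collections of (txid, index) outpoints; sharing one
        # guarantees at most one of the two transactions can ever confirm
        return bool(set(original_inputs) & set(reissue_inputs))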
--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] An idea for alternative payment scheme

2014-01-03 Thread Tier Nolan
The random number that the buyer uses could be generated from a root key
too.

This would allow them to regenerate all random numbers that they used and
recreate their receipts.  The master root would have to be stored on your
computer though.

The payment protocol is supposed to do something like this already though.
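
A toy of the flow, with a hash standing in for real BIP32-style public
derivation (the HMAC construction here is purely illustrative, not a
proposed scheme):

    import hashlib, hmac, os

    def payer_side(master_pub: bytes):
        receipt = os.urandom(25)    # the random receipt number
        # stand-in for public derivation; the payer needs no secrets
        child = hmac.new(master_pub, receipt, hashlib.sha256).hexdigest()
        return receipt, child       # pay to the address for `child`

    def payee_side(master_pub: bytes, receipt: bytes):
        # with real BIP32 the payee would derive the matching private key
        return hmac.new(master_pub, receipt, hashlib.sha256).hexdigest()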


On Fri, Jan 3, 2014 at 6:00 PM, Nadav Ivgi na...@shesek.info wrote:

 I had an idea for a payment scheme that uses key derivation, but instead
 of the payee deriving the addresses, the payer would do it.

 It would work like that:

1. The payee publishes his master public key
2. The payer generates a random receipt number (say, 25 random bytes)
3. The payer derives an address from the master public key using the
receipt number and pays to it
4. The payer sends the receipt to the payee
5. The payee derives a private key with that receipt and adds it to
his wallet


 Advantages:

- It increases privacy by avoiding address reuse
- The process is asynchronous. The payee is completely passive in the
payment process and isn't required to provide new addresses before each
payment (so no payment server required)
- It's usable as a replacement for cases where re-used addresses are
the most viable solution (like putting an address in a forum signature or
as a development fund in a github readme)
- The receipt also acts as a proof of payment that the payer can
provide to the payee
- Also, if the master key is known to belong to someone, this allows
the payer to prove to a third party that the payment was made to that
someone. If the output was spent, it also proves that he was aware of the
payment and has the receipt.
- It's a really thin abstraction layer that doesn't require many changes

 Disadvantages:

- Losing the receipt numbers means losing access to your funds; they
are random and there's no way to restore them
- It requires sending the receipt to the payee somehow. Email could
work for that, but a well-defined channel that can also talk to the
Bitcoin client and add the receipt would be much better.

 What do you think?




Re: [Bitcoin-development] Peer Discovery and Overlay

2013-12-24 Thread Tier Nolan
On Tue, Dec 24, 2013 at 8:52 AM, Jeremy Spilman jer...@taplink.co wrote:

 Are there any past instances of applications hijacking or interfacing with
 the existing p2p messages, or abusing 'getaddr' functionality? Are there
 any guidelines on this, or should there be?


There was a BIP by Stefan Thomas for adding custom services to the
protocol.  Discovery would be helpful here too.  If this was added, it
wouldn't be intended for use in a hostile way though.

This one was the custom services BIP.  It defines a change to the version
message and also custom sub-commands.
https://github.com/bitcoin/bips/blob/master/bip-0036.mediawiki

This one discusses how network discovery should be handled.
https://en.bitcoin.it/wiki/User:Justmoon/BIP_Draft:_Custom_Service_Discovery


Re: [Bitcoin-development] [ANN] High-speed Bitcoin Relay Network

2013-11-06 Thread Tier Nolan
On Wed, Nov 6, 2013 at 5:50 AM, Matt Corallo bitcoin-l...@bluematt.me wrote:

 Relay node details:
  * The relay nodes do some data verification to prevent DoS, but in
 order to keep relay fast, they do not fully verify the data they are
 relaying, thus YOU SHOULD NEVER mine a block building on top of a
 relayed block without fully checking it with your own bitcoin validator
 (as you would any other block relayed from the P2P network).


Wouldn't this cause disconnects due to misbehavior?

A standard node connecting to a relay node would receive
blocks/transactions that are not valid in some way and then disconnect.

Have you looked through the official client to find what things are
considered signs that a peer is hostile?  I assume things like double
spending checks count as misbehavior and can't be quickly checked by a
relay node.

Maybe another bit could be assigned in the services field as relay.  This
would mean that the node doesn't do any checking.
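
Something like this (the bit position is made up for illustration; a real
service bit would need to be assigned through a BIP):

NODE_NETWORK    = 1 << 0   # existing service bit: full node
NODE_RELAY_ONLY = 1 << 30  # hypothetical bit: relays without validating

def is_unvalidated_relay(services: int) -> bool:
    return bool(services & NODE_RELAY_ONLY)

# A peer that saw this bit in the version message would skip misbehavior
# scoring for invalid data relayed by the node.
assert is_unvalidated_relay(NODE_NETWORK | NODE_RELAY_ONLY)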

Connections to relay nodes could be command line/config file only.  Peers
wouldn't connect to them.


Re: [Bitcoin-development] Distributing low POW headers

2013-07-28 Thread Tier Nolan
On Sun, Jul 28, 2013 at 7:42 PM, John Dillon
john.dillon...@googlemail.com wrote:

 On Wed, Jul 24, 2013 at 11:55 AM, Tier Nolan tier.no...@gmail.com wrote:
  Distributing headers with 1/64 of the standard POW means that a header
 would
  be broadcast approximately once every 9 seconds (assuming a 10 minute
 block
  time).  This was picked because sending 80 byte headers every 9 seconds
  shouldn't represent much load on the network.

 As Peter said, much should be quantified.


It has the same statistical properties as normal blocks, just 64 times faster.

Even if there is a new block 30 seconds after the previous one, that
doesn't cause a burst of 64 low POW block headers in the 30 second window.
They are all statistically independent hashing attempts.


 Sounds like you are changing economics and requiring miners to have even
 better network connections.  This is not a thing to do lightly and it is
 probably a bad idea.


No, it just breaks ties.  In most cases there would be only 1 contender
block, so all miners are equal.

If 10% of blocks were ties/orphans, then only 1% of blocks would be a 3-way
tie.  That probably overestimates the orphan rate.

This means the miner has to download 2 blocks 10% of the time and 3 blocks
1% of the time.

However, even then, half the network wouldn't have to download the 2nd
block of the tie, since they happened to get the winner first.  This means
5% extra bandwidth on average.

16 low POW headers at 9 seconds per header gives a miner more than 2
minutes to switch to the other contender.

A miner would only lose out if he doesn't notice that the block he is
mining on is not getting built on by anyone else.

He needs to download both tied blocks so that he can switch, but he has 2
minutes to actually switch.

 I understand Pieter Wuille is working on letting Bitcoin propagate and
 make use of pure block headers, a step towards SPV and partial UTXO mode.


That would need to happen before low POW ones are broadcast.  There is a
basic set of rules in the first post.

At the moment, the client only provides headers when asked, but never
broadcasts them.


 Orphan measurement would be very useful for a lot of reasons, how about you
 think about that first?


I think distributing the low POW headers on an advisory basis is a
reasonable first step.  However, just broadcasting the headers is a zeroth
step.

Miners would probably break ties towards the block that seems to be getting
the most hashing anyway.

I think for orphan rate, the best approach is a system to link to orphans.
This would add the POW of the orphan to the main chain's total.

Unfortunately adding fields to the header is hard.  It could be done as a
coinbase extra-nonce thing.  A better option would be if the merkle tree
could include non-transactions.

The merkle root could be replaced by hash(auxiliary header).  This has the
advantage of not impacting ASIC miners.

Broadcasting all headers would at least allow clients to count orphans,
even if they aren't integrated into the block chain.

 It wouldn't have the potential data rate issues either and should be a
 very simple change.


I don't think the data rate is really that high.  It would be 80 bytes
every 9 seconds, or 9 bytes per second.

Blocks are 500kB every 10 minutes, or 853 bytes per second.
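
A quick back-of-the-envelope check of those rates:

block_interval = 600.0              # seconds per full-POW block
headers_per_block = 64              # hash successes at 1/64 of full POW

print(block_interval / headers_per_block)       # 9.375 s between headers
print(80 * headers_per_block / block_interval)  # ~8.5 bytes/s of headers
print(500 * 1024 / block_interval)              # ~853 bytes/s for 500 kB blocks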


 Just set some threshold relative to the height of the best block where
 you will not further propagate an orphan block (header), and prior to that
 limit do so freely.  I believe the change would be 100% compatible with
 the P2P protocol as it is based on inventories.


Right, absolutely.  Headers of blocks that add to the block tree within
recent history should be forwarded.

The inv system would need to be tweaked, since it can only say block and
transaction.

A block header field would allow the node to say that it only has the
header.  Alternatively, it could reply with a headers message to the
getblocks message.
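
For example (the new type value is hypothetical; the entry layout matches
the existing 36-byte inventory vectors):

MSG_TX = 1             # existing inv types
MSG_BLOCK = 2
MSG_BLOCK_HEADER = 3   # hypothetical new type: "I only have the header"

def inv_entry(obj_type: int, obj_hash: bytes) -> bytes:
    # One inventory vector entry: uint32 little-endian type + 32-byte hash.
    return obj_type.to_bytes(4, "little") + obj_hash

assert len(inv_entry(MSG_BLOCK_HEADER, b"\x00" * 32)) == 36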


Re: [Bitcoin-development] Distributing low POW headers

2013-07-24 Thread Tier Nolan
On Wed, Jul 24, 2013 at 10:42 AM, Peter Todd p...@petertodd.org wrote:

 Please provide equations and data justifying the 'magic constants' in
 this proposal.


There are a range of workable values.  Ideally, there would first need to be
agreement on the general principle.

Distributing headers with 1/64 of the standard POW means that a header
would be broadcast approximately once every 9 seconds (assuming a 10 minute
block time).  This was picked because sending 80 byte headers every 9
seconds shouldn't represent much load on the network.

The second magic number is how much credit to give for mini-headers.
Setting it at 1/16 means that the headers will be worth around 4 times as
much as a block (since there would be around 63 low POW headers for each
full POW one).
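
The arithmetic, for concreteness:

mini_headers_per_block = 64 - 1   # 64 hash successes, one becomes the block
credit = 1 / 16.0
print(mini_headers_per_block * credit)  # ~3.9 block-equivalents of weight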

This creates an incentive for miners to take headers into account.  If all
the headers combined were worth less than a full block, then a fork that
was losing on headers would suddenly be winning whenever it found a block.
A fork can only become the main chain due to a new block if it is within 16
mini-confirms.

Miners don't have to mine against the absolute best fork, but they do need
to make sure they stay within 16 of the best one (so if they find a block,
that block would be considered part of the main chain).  Some hysteresis
might be helpful.  The rule could be to only switch if the current fork
is losing by at least 4 mini-confirms.

In most cases, this won't be a problem, since orphans don't happen that
often anyway.

Since it isn't a chain, this doesn't give the full benefits of a 9 second
block, but it should bring things to consensus faster.  6 full confirms
would be much more secure against random and hostile reversals.

It doesn't have the risks of 9 second blocks in causing network collapse,
since it isn't a chain, the headers are short, and no validation is
required (other than checking the hash).

Each mini-confirm adds to the strength of leaf blocks of the tree.  If
there is a tie, and 20% of the network is mining one block and 80% is
mining the other, the mining power of the network will be split until the
next block arrives.

With mini confirms, the entire network is aware of the 2 blocks (since the
headers would be forwarded) and the mini-confirms would show which one has
majority hashing power.

The least risk option would be to make them purely advisory.  The proposal
takes it further than that.

The proposal means that if the network is split 80/20, then miners should
stick with the 80% fork, even if the 20% fork wins the race for the next
block.

Winning a few rounds is easier than winning many rounds' worth of
mini-confirms.

The key is that as long as the honest miners stay on the main chain, they
will eventually overwhelm any rewrite attack with less than 50% of the
mining power.  This is a system to agree on what is the main chain in the
face of a re-write attack.



 Currently we do not relay blocks to peers if they conflict with blocks
 in the best known chain. What changes exactly are you proposing to that
 behavior?


The (sub) proposal is that headers would still be broadcast.  The blocks
would not be forwarded.

If a header extends the header tree, meets full POW and is near the end
of the chain, then it is broadcast.  This means that all nodes will have
the entire header tree, including orphans.

The full blocks would only be sent if they extend the main chain.

Second, if a header builds on a header that is in the header tree, then it
is broadcast, even if it doesn't meet full POW (only 1/64 required).  This
gives information on which fork is getting the most power.

It gives information about potential consensus loss forks, where a
significant number of miners are following an alternative chain.

In fact, this is probably worth doing as an initial step.

A warning could be displayed on the client if a fork is getting more than
15% of the hashing power.


[Bitcoin-development] Distributing low POW headers

2013-07-23 Thread Tier Nolan
I was thinking about a change to the rules for distinguishing between forks
and maybe a BIP.

*Summary*

- Low POW headers should be broadcast by the network

If a header has more than 1/64 of the POW of a block, it should be
broadcast.  This provides information on which fork is getting most of the
hashing power.

- Miners should use the header information to decide on longest chain

The fork selection rule for miners should be biased towards staying on the
fork that has the most hashing power.

This means that they might keep hashing on a fork that is 1-2 blocks
shorter.

If most miners follow the rule, then it is the best strategy for other
miners to also follow this rule.

- Advantages

This lowers the probability of natural and malicious reversals.

*Distributing low POW headers*

First, block header messages that have more than 1/64 of the standard POW
requirement would be forwarded.

This means the client needs to maintain a short term view of the entire
header tree.

if (header extends header tree) {
  if (header meets full POW) {
    add to header tree;
    forward to peers;
    check if any blocks in storage now extend the header tree;
  } else if (header meets POW / 64) {
    forward to peers;
  }
} else {
  if (header meets full POW) {
    add to orphan header storage;
  }
}

The storage could be limited and headers could be discarded after a while.
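
A minimal runnable sketch of the same dispatch (the tree, orphan store, and
POW checks below are stand-ins, not the real client's structures):

class HeaderTree:
    def __init__(self, genesis):
        self.known = {genesis}
    def extends(self, h):
        return h["prev"] in self.known   # parent already in the tree?
    def add(self, h):
        self.known.add(h["hash"])

def meets_pow(h, target):
    return h["work"] <= target           # stand-in for hash-below-target

def handle_header(h, tree, orphans, forwarded, target):
    if tree.extends(h):
        if meets_pow(h, target):
            tree.add(h)
            forwarded.append(h)          # full POW: store and relay
            # (real client: also check stored blocks against the tree)
        elif meets_pow(h, target * 64):
            forwarded.append(h)          # low POW: relay as advisory only
    elif meets_pow(h, target):
        orphans.append(h)                # full POW but unknown parent

tree, orphans, forwarded = HeaderTree("G"), [], []
handle_header({"hash": "A", "prev": "G", "work": 5}, tree, orphans, forwarded, 10)
handle_header({"hash": "B", "prev": "A", "work": 300}, tree, orphans, forwarded, 10)
assert [x["hash"] for x in forwarded] == ["A", "B"] and not orphans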

This has the extra advantage that it informs clients of forks that are
receiving hashing power.

This could be linked to a protocol version to prevent disconnects due to
invalid header messages.

*Determining the longest chain*

Each link would get extra credit for headers received.

Assume there are 2 forks starting at block A as the fork point.

A(63) - B(72) - C(37) - D(58)

and

A(63) - B'(6) - C'(9) - D'(4) - E(7) - F(6)

The numbers in brackets are the number of low POW headers received that
have those blocks as parent.

The new rule is that the POW for a block is equal to

POW * (1 + (headers / 16))

Only headers within some threshold of the end of the (shorter) chain
count.  However, in most cases, that doesn't matter since the fork point
will act as the start point.  As long as miners keep headers for 30-40
blocks, they will likely have all headers back to any reasonable fork point.

This means that the top fork is considered longer, since it has many more
headers, even though it has 2 fewer blocks.
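
Plugging the example numbers into that rule (headers on the shared fork
point A cancel out, so only blocks past the fork are counted):

def fork_weight(header_counts, pow_per_block=1.0):
    # Sum of POW * (1 + headers/16) over the blocks past the fork point.
    return sum(pow_per_block * (1 + h / 16.0) for h in header_counts)

top = fork_weight([72, 37, 58])          # B, C, D
bottom = fork_weight([6, 9, 4, 7, 6])    # B', C', D', E, F
assert top > bottom                      # ~13.4 vs 7.0: shorter fork wins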

If 75% of the miners follow this rule, then the top fork will eventually
catch up and win, so it is in the interests of the other 25% to follow the
rule too.

Even if there isn't complete agreement on headers received, the fork that
is getting the most hashing will naturally gain most of the headers, so
ties will be broken quickly.