Re: [bitcoin-dev] Small Nodes: A Better Alternative to Pruned Nodes

2017-04-17 Thread Danny Thorpe via bitcoin-dev
A 1TB HDD is now available for under $40 USD.  How is the 100GB storage
requirement preventing anyone from setting up full nodes?

On Apr 16, 2017 11:55 PM, "David Vorick via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> *Rationale:*
>
> A node that stores the full blockchain (I will use the term archival node)
> requires over 100GB of disk space, which I believe is one of the most
> significant barriers to more people running full nodes. And I believe the
> ecosystem would benefit substantially if more users were running full nodes.
>
> The best alternative today to storing the full blockchain is to run a
> pruned node, which keeps only the UTXO set and throws away already verified
> blocks. The operator of the pruned node is able to enjoy the full security
> benefits of a full node, but is essentially leeching the network, as they
> performed a large download likely without contributing anything back.
>
> This puts more pressure on the archival nodes, as the archival nodes need
> to pick up the slack and help new nodes bootstrap to the network. As the
> pressure on archival nodes grows, fewer people will be able to actually run
> archival nodes, and the situation will degrade. The situation would likely
> become problematic quickly if bitcoin-core were to ship with the defaults
> set to a pruned node.
>
> Even further, the people most likely to care about saving 100GB of disk
> space are also the people least likely to care about some extra bandwidth
> usage. For datacenter nodes, and for nodes serving lots of traffic, the
> bandwidth is usually the biggest cost of running the node. For home users
> however, as long as they stay under their bandwidth cap, the bandwidth is
> actually free. Ideally, new nodes would be able to bootstrap from nodes
> that do not have to pay for their bandwidth, instead of needing to rely on
> a decreasing percentage of heavy-duty archival nodes.
>
> I have (perhaps incorrectly) identified disk space consumption as the most
> significant factor in your average user choosing to run a pruned node or a
> lite client instead of a full node. The average user is not typically too
> worried about bandwidth, and is also not typically too worried about
> initial blockchain download time. But the 100GB hit to your disk space can
> be a huge psychological factor, especially if your hard drive only has
> 500GB available in the first place, and 250+ GB is already consumed by
> other files you have.
>
> I believe that improving the disk usage situation would greatly benefit
> decentralization, especially if it could be done without putting pressure
> on archival nodes.
>
> *Small Nodes Proposal:*
>
> I propose an alternative to the pruned node that does not put undue
> pressure on archival nodes, and would be acceptable and non-risky to ship
> as a default in bitcoin-core. For lack of a better name, I'll call this new
> type of node a 'small node'. The intention is that bitcoin-core would
> eventually ship 'small nodes' by default, such that the expected amount of
> disk consumption drops from today's 100+ GB to less than 30 GB.
>
> My alternative proposal has the following properties:
>
> + Full nodes only need to store ~20% of the blockchain
> + With very high probability, a new node will be able to recover the
> entire blockchain by connecting to 6 random small node peers.
> + An attacker that can eliminate a chosen 95% of the full nodes running
> today will be unable to prevent new nodes from downloading the full
> blockchain, even if the attacker is also able to eliminate all archival
> nodes. (assuming all nodes today were small nodes instead of archival nodes)
>
> Method:
>
> A small node will pick an index in [5, 256). This index is that node's
> permanent index. When storing a block, instead of storing the full block,
> the node will use Reed-Solomon coding to erasure code the block using a
> 5-of-256 scheme. The result will be 256 pieces that are 20% of the size of
> the block each. The node picks the piece that corresponds to its index, and
> stores that instead. (Indexes 0-4 are reserved for archival nodes -
> explained later)
>
> The node is now storing a fragment of every block. Alone, this fragment
> cannot be used to recover any piece of the blockchain. However, combined
> with any 4 other unique fragments (fragments of the same index will not be
> unique), the full block can be recovered.
>
> Nodes can optionally store more than 1 fragment each. At 5 fragments, the
> node becomes a full archival node, and the chosen indexes should be 0-4.
> This is advantageous for the archival node as the encoded data for the
> first 5 indexes will actually be identical to the block itself - there is
> no computational overhead for selecting the first indexes. There is also no
> need to choose random indexes, because the full block can be recovered no
> matter which indexes are chosen.
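For concreteness, here is a minimal sketch of the encode/store/recover cycle
described above, assuming the zfec erasure-coding library's Encoder/Decoder
interface; the chunking, padding, and function names are illustrative, not
part of the proposal:

import zfec

K, M = 5, 256   # any K distinct fragments out of M recover a block

def encode_block(block: bytes) -> list:
    # Split the block into K equal chunks (zero-padded), then erasure code
    # them into M fragments, each ~20% of the block size. A node with
    # permanent index i keeps fragments[i] and discards the rest.
    chunk_len = -(-len(block) // K)          # ceiling division
    padded = block.ljust(K * chunk_len, b'\x00')
    chunks = [padded[i * chunk_len:(i + 1) * chunk_len] for i in range(K)]
    return zfec.Encoder(K, M).encode(chunks, list(range(M)))

def recover_block(fragments: list, indexes: list, block_len: int) -> bytes:
    # Rebuild the block from any K fragments with distinct indexes.
    pairs = sorted(zip(indexes, fragments))[:K]
    chunks = zfec.Decoder(K, M).decode([f for _, f in pairs],
                                       [i for i, _ in pairs])
    return b''.join(chunks)[:block_len]

Because such a code is systematic, fragments 0-4 are simply the five chunks
of the block itself, which is why archival nodes can claim indexes 0-4 with
no encoding overhead. And with 6 peers drawing indexes independently from the
251 non-archival values, the chance that fewer than 5 distinct indexes turn
up (two or more collisions) is small, which is where the "6 random peers"
figure above comes from.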
>
> When connecting to new peers, the index of each peer needs to be known.
> Once peers 

Re: [bitcoin-dev] Clearing up some misconceptions about full nodes

2016-02-12 Thread Danny Thorpe via bitcoin-dev
"With a very powerful "Desktop" machine bitcoin-qt dominates CPU/GPU
resources."

That doesn't match my experience.

System responsiveness / user experience can suffer when running bitcoin-qt
on a spinning hard disk. Disk I/O load will cause the whole system to grind
and severely disrupt the user experience.

Move the Bitcoin data to an SSD, though, and it's an entirely different
story.

The initial blockchain synchronization / "catch up" is CPU and disk
intensive, but after initial sync I find bitcoin-qt uses only a trivial
amount of CPU to keep up with verifying new blocks and new transactions.

Running bitcoin-qt occasionally is a much more painful user experience than
running bitcoin-qt continuously.

I'm running Bitcoin Core v0.12.rc2 on an old dual core Pentium E2160 at
1.8GHz, 6GB RAM, 64 bit Windows 10, with the Bitcoin data on SSD. This
system is about 6 years old and was an economy model even when new. Not
what I would call a powerful system. I've only added RAM and the SSD.

On that machine I run two instances of Bitcoin-qt - one for mainnet, and
another for testnet, and an instance of bfgminer to manage a handful of USB
Block Eruptors for testnet mining. Both bitcoin-qt instances are typically
at their max of 25 connections (each). Total CPU load floats around 11%,
with only occasional spikes to 40% for a few seconds.  The mainnet
bitcoin-qt process uses about 700MB of RAM, testnet about 300MB.

This machine did fall into disk grinding paralysis during initial sync /
catchup with the v0.10 and v0.11 builds of bitcoin-qt, when the Bitcoin
data was on a spinning disk. Moving the Bitcoin data to an SSD drive had
the greatest impact on breaking the disk-bound whole-system paralysis.
Increasing the system RAM, upgrading to v0.12, and upgrading the OS to Win
10 all contributed smaller improvements.

It is possible to run a full node on a small desktop machine concurrent
with user apps. Just get the Bitcoin data off of spinning media and onto
SSD, make sure you have plenty of RAM, and leave bitcoin-qt running all the
time.

-Danny



On Wed, Feb 10, 2016 at 11:03 PM, Patrick Shirkey via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

>
> On Thu, February 11, 2016 8:15 am, Chris Belcher via bitcoin-dev wrote:
> > I've been asked to post this to this mailing list too. It's time to
> > clear up some misconceptions floating around about full nodes.
> >
> > === Myth: There are only about 5500 full nodes worldwide ===
> >
> > This number comes from this and similar sites: https://bitnodes.21.co/
> > and it is measured by trying to probe every node on its open ports.
> >
> > Problem is, not all nodes actually have open ports that can be probed.
> > Either because they are behind firewalls or because their users have
> > configured them to not listen for connections.
> >
> > Nobody knows how many full nodes there are. Since many people don't know
> > how to forward ports behind a firewall, and bandwidth can be costly, it's
> > quite likely that the number of nodes with closed ports is at least
> > another several thousand.
> >
> > Nodes with open ports are able to upload blocks to new full nodes. In
> > all other ways they are the same as nodes with closed ports. But because
> > open-port-nodes can be measured and closed-port-nodes cannot, some
> > members of the bitcoin community have been misled into believing that
> > open-port-nodes are all that matters.
> >
> > === Myth: This number of nodes matters and/or is too low. ===
> >
> > Nodes with open ports are useful to the bitcoin network because they
> > help bootstrap new nodes by uploading historical blocks; they are a
> > measure of bandwidth capacity. Right now there is no shortage of
> > bandwidth capacity, and if there were it could easily be added by renting
> > cloud servers.
> >
> > The problem is not bandwidth or connections, but trust, security and
> > privacy. Let me explain.
> >
> > Full nodes are able to check that all of bitcoin's rules are being
> > followed. Rules like following the inflation schedule, no double
> > spending, no spending of coins that don't belong to the holder of the
> > private key, and all the other rules required to make bitcoin work (e.g.
> > difficulty).
> >
> > Full nodes are what make bitcoin trustless. No longer do you have to
> > trust a financial institution like a bank or PayPal; you can simply run
> > software on your own computer. To put it simply, the only node that
> > matters is the one you use.
> >
> > === Myth: There is no incentive to run nodes, the network relies on
> > altruism ===
> >
> > It is very much in the individual bitcoin user's rational self-interest
> > to run a full node and use it as their wallet.
> >
> > Using a full node as your wallet is the only way to know for sure that
> > none of bitcoin's rules have been broken. Rules like no coins being spent
> > that don't belong to the owner, that no coins were spent twice, that no
> > inflation happens outside of the schedule and that all the 

Re: [bitcoin-dev] Forget dormant UTXOs without confiscating bitcoin

2015-12-13 Thread Danny Thorpe via bitcoin-dev
What is the current behavior / cost that this proposal is trying to avoid?
Are ancient utxos always required to be kept in memory in a fully
validating node, or can ancient utxos get pushed out of memory like a
normal LRU caching db?

Thanks,
-Danny
On Dec 12, 2015 1:55 PM, "jl2012--- via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> It is a common practice in commercial banks that a dormant account might
> be confiscated. Confiscating or deleting dormant UTXOs might be too
> controversial, but allowing the UTXO set to grow without any limit might
> not be a sustainable option. People lose their private keys. People do
> stupid things like sending bitcoin to 1BitcoinEater. We shouldn’t be
> obliged to store everything permanently. This is my proposal:
>
> Dormant UTXOs are those UTXOs with 42 confirmations. Every block X
> after 42 commits to a hash of all UTXOs generated in block
> X-42. The UTXOs are first serialized into the form:
> txid|index|value|scriptPubKey, then a sorted Merkle hash is calculated.
> After some confirmations, nodes may safely delete the UTXO records of block
> X permanently.
>
> If a user is trying to redeem a dormant UTXO, in addition to the signature,
> they have to provide the scriptPubKey, height (X), and UTXO value as part
> of the witness. They also need to provide the Merkle path to the dormant
> UTXO commitment.
>
> To confirm this tx, the miner will calculate a new Merkle hash for the
> block X, with the hash of the spent UTXO replaced by 1, and commit the hash
> to the current block. All full nodes will keep an index of latest dormant
> UTXO commitments so double spending is not possible. (a "meta-UTXO set")
>
> If all dormant UTXOs under a Merkle branch are spent, the hash of the branch
> will become 1. If all dormant UTXOs in a block are spent, the record for
> this block could be forgotten. Full nodes do not need to remember which
> particular UTXO is spent or not, since any person trying to redeem a
> dormant UTXO has to provide such information.
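For illustration, a minimal sketch of the commitment arithmetic described
above; the serialization field widths and the use of double-SHA256 are
assumptions for this sketch, not part of the proposal:

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

SPENT = (1).to_bytes(32, 'big')  # sentinel replacing a spent leaf or branch

def utxo_leaf(txid: bytes, index: int, value: int, script_pubkey: bytes) -> bytes:
    # txid|index|value|scriptPubKey
    ser = (txid + index.to_bytes(4, 'little')
           + value.to_bytes(8, 'little') + script_pubkey)
    return h(ser)

def parent(left: bytes, right: bytes) -> bytes:
    # A branch whose children are both spent collapses to the sentinel,
    # so fully spent subtrees (and eventually whole blocks) can be forgotten.
    return SPENT if left == right == SPENT else h(left + right)

def merkle_root(leaves: list) -> bytes:
    # Leaf order is frozen at commitment time: sort once before the first
    # commitment, then replace spent leaves in place.
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])   # duplicate the last node on odd levels
        level = [parent(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Block X's commitment, then the updated root once one dormant UTXO is spent:
leaves = sorted([utxo_leaf(b'\x11' * 32, 0, 5000, b'\x51'),
                 utxo_leaf(b'\x22' * 32, 1, 7000, b'\x52')])
root = merkle_root(leaves)
leaves[0] = SPENT
new_root = merkle_root(leaves)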
>
> It becomes the responsibility of dormant coin holders to scan the
> blockchain for the current status of the UTXO commitment for their coin.
> They may also need to pay an extra fee for the increased tx size.
>
> This is a softfork if there is no hash collision, but that is a fundamental
> assumption in Bitcoin anyway. The proposal also works without segregated
> witness, just by replacing "witness" with "scriptSig".
>
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [BIP] Normalized transaction IDs

2015-10-21 Thread Danny Thorpe via bitcoin-dev
A signer modifying the order of inputs or changing outputs when
"re-signing" a transaction (which already has dependent child transactions
spending its outputs) seems like quite a different hazard than a malicious
third party modifying a transaction in the mempool by twiddling opcodes in
the signature scripts.  The former seems more like a matter of keeping your
own house in order (an internal affair), while the latter is an external
threat beyond the transaction writer's control.

While I agree that having a canonical ordering for inputs and outputs might
be useful in some cases, there are also use cases where the relative
positions of inputs and outputs are significant, where reordering would
change the semantics of the transaction.  SIGHASH_SINGLE, for example,
makes an association between an input index and an output index. Open Asset
colored coins are identified by the order of inputs and outputs.

Let's keep canonical ordering separate from the normalized transaction ID
proposal. Baby steps. Normalized transaction IDs provide an immediate
benefit against the hazard of third party manipulation of transactions in
the mempool, even without canonical ordering.
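To make the distinction concrete, here is a rough sketch of the
normalization itself; the transaction model and serialization below are
simplified stand-ins for illustration, not Bitcoin Core's actual structures:

import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def normalized_txid(version, inputs, outputs, locktime) -> bytes:
    # inputs: (prev_txid, prev_index, script_sig); outputs: (value, script_pubkey).
    # Blank every scriptSig before hashing, so a third party twiddling
    # signature-script opcodes can no longer change the identifier.
    blanked = [(txid, idx, b'') for (txid, idx, _sig) in inputs]
    ser = repr((version, blanked, outputs, locktime)).encode()  # toy serialization
    return dsha256(ser)

Note that input and output order still feed into this ID, which is exactly
why canonical ordering can be treated as a separate, later proposal.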

-Danny





On Wed, Oct 21, 2015 at 1:46 AM, Luke Dashjr via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Wednesday, October 21, 2015 8:44:53 AM Christian Decker wrote:
> > Hm, that is true as long as the signer is the only signer of the
> > transaction, otherwise he'd be invalidating the signatures of the other
> > signers.
>
> Or he can just have the other signers re-sign with the modified version.
> Even if it only worked with a single signer, it's still a form of
> malleability
> that your BIP does not presently solve, but would be desirable to solve...
>
> Luke
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Proposed new policy for transactions that depend on other unconfirmed transactions

2015-10-05 Thread Danny Thorpe via bitcoin-dev
What does "package" mean here?

When you say 25 txs, does that mean maximum linked chain depth, or total
number of dependent transactions regardless of chain depth?

Thanks,
-Danny



On Mon, Oct 5, 2015 at 11:45 AM, Alex Morcos via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> I'd like to propose updates to the new policy limits on unconfirmed
> transaction chains.
>
> The existing limits in master and scheduled for release in 0.12 are:
> Ancestor packages = 100 txs and 900kb total size
> Descendant packages = 1000 txs and 2500kb total size
>
> Before 0.12 is released I would like to propose a significant reduction in
> these limits. In the course of analyzing algorithms for mempool limiting,
> it became clear that large packages of unconfirmed transactions were the
> primary vector for mempool clogging or relay fee boosting attacks. Feedback
> from the initial proposed limits was that they were too generous anyway.
>
> The proposed new limits are:
> Ancestor packages = 25 txs and 100kb total size
> Descendant packages = 25 txs and 100kb total size
>
> Based on historical transaction data, the most restrictive of these limits
> is the 25 transaction count on descendant packages. Over the period of
> April and May of this year (before stress tests), 5.8% of transactions
> would have violated this limit alone. Applying all the limits together
> would have affected 6.1% of transactions.
>
> Please keep in mind these are policy limits that affect only transactions
> which depend on other unconfirmed transactions. They are not a change to
> consensus rules and do not affect how many chained txs a valid block may
> contain. Furthermore, any transaction that was unable to be relayed due to
> these limits need only wait for some of its unconfirmed ancestors to be
> included in a block and then it could be successfully broadcast. This is
> unlikely to affect the total time from creation to inclusion in a block.
> Finally, these limits are command line arguments that can easily be changed
> on an individual node basis in Bitcoin Core.
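To make the package limits concrete, here is a minimal sketch of the
acceptance check, assuming a toy mempool modeled as {txid: (size_in_bytes,
set_of_parent_txids)}; a real implementation would track package state
incrementally rather than re-walking the graph on every arrival:

ANCESTOR_LIMIT = DESCENDANT_LIMIT = 25       # txs, including the tx itself
ANCESTOR_SIZE = DESCENDANT_SIZE = 100_000    # bytes

def ancestors(txid, mempool):
    # The tx plus every unconfirmed ancestor reachable through the mempool;
    # confirmed parents are no longer in the mempool and don't count.
    seen, stack = set(), [txid]
    while stack:
        t = stack.pop()
        if t in seen or t not in mempool:
            continue
        seen.add(t)
        stack.extend(mempool[t][1])
    return seen

def accept(txid, size, parents, mempool):
    trial = dict(mempool)
    trial[txid] = (size, set(parents))
    anc = ancestors(txid, trial)
    if len(anc) > ANCESTOR_LIMIT or sum(trial[t][0] for t in anc) > ANCESTOR_SIZE:
        return False
    # The new tx must not push any existing ancestor over its descendant limits.
    for a in anc - {txid}:
        desc = {t for t in trial if a in ancestors(t, trial)}
        if len(desc) > DESCENDANT_LIMIT or sum(trial[t][0] for t in desc) > DESCENDANT_SIZE:
            return False
    return True

A transaction rejected here is not invalid; as noted above, it can simply be
rebroadcast once enough of its unconfirmed ancestors are included in a block.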
>
> Please give your feedback if you know of legitimate use cases that would
> be hindered by these limits.
>
> Thanks,
> Alex
>
> On Mon, Sep 21, 2015 at 11:02 AM, Alex Morcos  wrote:
>
>> Thanks for everyone's review.  These policy changes have been merged
>> into master in 6654 , which
>> just implements these limits and no mempool limiting yet.  The default
>> ancestor package size limit is 900kb, not 1MB.
>>
>> Yes I think these limits are generous, but they were designed to be as
>> generous as was computationally feasible so they were unobjectionable
>> (since the existing policy was no limits).  This does not preclude future
>> changes to policy that would reduce these limits.
>>
>>
>>
>>
>>
>> On Fri, Aug 21, 2015 at 3:52 PM, Danny Thorpe 
>> wrote:
>>
>>> The limits Alex proposed are generous (bordering on obscene!), but
>>> dropping that down to allowing only two levels of chained unconfirmed
>>> transactions is too tight.
>>>
>>> Use case: Brokered asset transfers may require sets of transactions with
>>> a dependency tree depth of 3 to be published together. ( N seller txs, 1
>>> broker bridge tx, M buyer txs )
>>>
>>> If the originally proposed depth limit of 100 does not provide a
>>> sufficient cap on memory consumption or loop/recursion depth, a depth limit
>>> of 10 would provide plenty of headroom for this 3 level use case and
>>> similar patterns.
>>>
>>> -Danny
>>>
>>> On Fri, Aug 21, 2015 at 12:22 PM, Matt Corallo via bitcoin-dev <
>>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>>
 I don't see any problem with such limits. Though, hell, if you limited
 entire tx dependency trees (i.e. transactions and all required unconfirmed
 transactions for them) to something like 10 txn, maximum two levels
 deep, I also wouldn't have a problem.

 Matt

 On 08/14/15 19:33, Alex Morcos via bitcoin-dev wrote:
 > Hi everyone,
 >
 >
 > I'd like to propose a new set of requirements as a policy on when to
 > accept new transactions into the mempool and relay them.  This policy
 > would affect transactions which have as inputs other transactions which
 > are not yet confirmed in the blockchain.
 >
 > The motivation for this policy is 6470
 > https://github.com/bitcoin/bitcoin/pull/6470 which aims to limit the
 > size of a mempool.  As discussed in that pull
 > https://github.com/bitcoin/bitcoin/pull/6470#issuecomment-125324736,
 > once the mempool is full a new transaction must be able to pay not only
 > for the transaction it would evict, but any dependent transactions that
 > would be removed from the mempool as well.  In order to make sure this
 > is always feasible, I'm proposing 4 new policy limits.
 >
 > All limits are command line 

Re: [bitcoin-dev] Proposal to add the bitcoin symbol to Unicode

2015-09-08 Thread Danny Thorpe via bitcoin-dev
What of this prior effort, proposing B-with-horizontal-bar (Ƀ)?
http://bitcoinsymbol.org/

They argue that B-with-2-vertical-bars is easily confused with the Thai
Baht currency symbol, which is a B with a single vertical bar.

I'm not terribly fond of the B-with-horizontal-bar as a symbol, but it does
have the advantage that it is already in the Unicode character set and
already available on most Unicode-enabled devices.

-Danny

On Sat, Sep 5, 2015 at 7:11 AM, Ken Shirriff via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Use of the bitcoin symbol in text is inconvenient, because the bitcoin
> symbol isn't in the Unicode standard. To fix this, I've written a proposal
> to have the common B-with-vertical-bars bitcoin symbol added to Unicode.
> I've successfully proposed a new character for Unicode before, so I'm
> familiar with the process and think this has a good chance of succeeding.
> The proposal is at http://righto.com/bitcoin-unicode.pdf
>
> I received a suggestion to run this proposal by the bitcoin-dev group, so
> I hope this email is appropriate here. Endorsement by Bitcoin developers
> will help the Unicode Committee realize the importance of adding this
> symbol, so please let me know if you support this proposal.
>
> Thanks,
> Ken
>
>
>
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Proposed new policy for transactions that depend on other unconfirmed transactions

2015-08-21 Thread Danny Thorpe via bitcoin-dev
The limits Alex proposed are generous (bordering on obscene!), but dropping
that down to allowing only two levels of chained unconfirmed transactions
is too tight.

Use case: Brokered asset transfers may require sets of transactions with a
dependency tree depth of 3 to be published together. ( N seller txs, 1
broker bridge tx, M buyer txs )

If the originally proposed depth limit of 100 does not provide a sufficient
cap on memory consumption or loop/recursion depth, a depth limit of 10
would provide plenty of headroom for this 3 level use case and similar
patterns.

-Danny

On Fri, Aug 21, 2015 at 12:22 PM, Matt Corallo via bitcoin-dev 
bitcoin-dev@lists.linuxfoundation.org wrote:

 I don't see any problem with such limits. Though, hell, if you limited
 entire tx dependency trees (i.e. transactions and all required unconfirmed
 transactions for them) to something like 10 txn, maximum two levels
 deep, I also wouldn't have a problem.

 Matt

 On 08/14/15 19:33, Alex Morcos via bitcoin-dev wrote:
  Hi everyone,
 
 
  I'd like to propose a new set of requirements as a policy on when to
  accept new transactions into the mempool and relay them.  This policy
  would affect transactions which have as inputs other transactions which
  are not yet confirmed in the blockchain.
 
  The motivation for this policy is 6470
  https://github.com/bitcoin/bitcoin/pull/6470 which aims to limit the
  size of a mempool.  As discussed in that pull
  https://github.com/bitcoin/bitcoin/pull/6470#issuecomment-125324736,
  once the mempool is full a new transaction must be able to pay not only
  for the transaction it would evict, but any dependent transactions that
  would be removed from the mempool as well.  In order to make sure this
  is always feasible, I'm proposing 4 new policy limits.
 
  All limits are command line configurable.
 
  The first two limits are required to make sure no chain of transactions
  will be too large for the eviction code to handle:
 
  Max number of descendant txs : No transaction shall be accepted if it
  would cause another transaction in the mempool to have too many
  descendant transactions (all of which would have to be evicted if the
  ancestor transaction was evicted).  Default: 1000
 
  Max descendant size : No transaction shall be accepted if it would cause
  another transaction in the mempool to have the total size of all its
  descendant transactions be too great.  Default : maxmempool / 200 = 2.5MB
 
  The third limit is required to make sure calculating the state required
  for sorting and limiting the mempool and enforcing the first 2 limits is
  computationally feasible:
 
  Max number of ancestor txs:  No transaction shall be accepted if it has
  too many ancestor transactions which are not yet confirmed (ie, in the
  mempool). Default: 100
 
  The fourth limit is required to maintain the pre-existing policy goal
  that all transactions in the mempool should be mineable in the next
 block.
 
  Max ancestor size: No transaction shall be accepted if the total size of
  all its unconfirmed ancestor transactions is too large.  Default: 1MB
 
  (All limits include the transaction itself.)
 
  For reference, these limits would have affected less than 2% of
  transactions entering the mempool in April or May of this year.  During
  the period of 7/6 through 7/14, while the network was under stress test,
  as many as 25% of the transactions would have been affected.
 
  The code to implement the descendant package tracking and new policy
  limits can be found in 6557
  https://github.com/bitcoin/bitcoin/pull/6557 which is built off of
 6470.
 
  Thanks,
  Alex
 
 
 
  ___
  bitcoin-dev mailing list
  bitcoin-dev@lists.linuxfoundation.org
  https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
 
 ___
 bitcoin-dev mailing list
 bitcoin-dev@lists.linuxfoundation.org
 https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Bitcoin is an experiment. Why don't we have an experimental hardfork?

2015-08-18 Thread Danny Thorpe via bitcoin-dev
Ya, so?  All that means is that the experiment might reach the hard fork
tipping point faster than mainnet would. Verifying that the network can
handle such transitions, and how larger blocks affect the network, is the
point of testing.

And when I refer to testnet, I mean the public global testnet blockchain,
not in-house isolated networks like testnet-in-a-box.

On Tue, Aug 18, 2015 at 1:51 PM, Eric Lombrozo elombr...@gmail.com wrote:

 The problem is, if you know most of the people running the testnet personally
 (as is pretty much the case with many current testnets), then the deployment
 amounts to “hey guys, let’s install the new version”

 On Aug 18, 2015, at 1:48 PM, Danny Thorpe via bitcoin-dev 
 bitcoin-dev@lists.linuxfoundation.org wrote:

 Deploying experimental code onto the live bitcoin blockchain seems
 unnecessarily risky.  Why not deploy a blocksize limit experiment for long
 term trials using testnet instead?

 On Tue, Aug 18, 2015 at 2:54 AM, jl2012 via bitcoin-dev 
 bitcoin-dev@lists.linuxfoundation.org wrote:

 As I understand, there is already a consensus among the core devs that block
 size should/could be raised. The remaining questions are how, when, how
 much, and how fast. These are the questions for the coming Bitcoin
 Scalability Workshops, but immediate consensus on these issues is not
 guaranteed.

 Could we just stop the debate for a moment, and agree to a scheduled
 experimental hardfork?

 Objectives (by order of importance):

 1. The most important objective is to show the world that reaching
 consensus for a Bitcoin hardfork is possible. If we could have a successful
 one, we would have more in the future

 2. With a slight increase in block size, to collect data for future
 hardforks

 3. To slightly relieve the pressure of full blocks, with minimal
 adverse effects on network performance

 With the objectives 1 and 2 in mind, this is NOT intended to be a
 kick-the-can-down-the-road solution. The third objective is more like a
 side effect of this experiment.


 Proposal (parameters in ** are my recommendations but negotiable):

 1. Today, we all agree that some kind of block size hardfork will happen
 on t1=*1 June 2016*

 2. If no other consensus could be reached before t2=*1 Feb 2016*, we will
 adopt the backup plan

 3. The backup plan is: t3=*30 days* after m=*80%* of miner approval, but
 not before t1=*1 June 2016*, the block size is increased to s=*1.5MB*

 4. If the backup plan is adopted, we all agree that a better solution
 should be found before t4=*31 Dec 2017*.

 Rationale:

 t1 = 1 June 2016 is chosen to make sure everyone has enough time to
 prepare for a hardfork. Although we do not know what exactly will happen,
 we know something must happen around that moment.

 t2 = 1 Feb 2016 is chosen to allow 5 more months of negotiations (and 2
 months after the workshops). If it is successful, we don't need to activate
 the backup plan

 t3 = 30 days is chosen to make sure all full nodes have enough time to
 upgrade after the actual hardfork date is confirmed

 t4 = 31 Dec 2017 is chosen so that, with 1.5 years of data and further
 debate, we will hopefully find a better solution. It is important to
 acknowledge that the backup plan is not a final solution

 m = 80%: We don't want a very small portion of miners to have the power
 to veto a hardfork, while it is important to make sure the new fork is
 secured by enough mining power. 80% is just a compromise.

 s = 1.5MB. As the 1MB cap was set 5 years ago, there is no doubt that all
 types of technology have since improved by 50%. I don't mind making it a
 bit smaller, but in that case not much valuable data could be gathered and
 the second objective of this experiment may not be achieved.
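As a sketch, the backup-plan activation rule in item 3 reduces to a few
constants and one comparison; how miner approval is measured (e.g. block
version signalling) is left open here, and all names are illustrative:

from datetime import datetime, timedelta

T1 = datetime(2016, 6, 1)       # earliest possible hardfork date
T3 = timedelta(days=30)         # upgrade grace period after approval
M = 0.80                        # required fraction of miner approval
S = 1_500_000                   # new block size limit, in bytes

def hardfork_time(approval_fraction, approval_reached_at):
    # The size limit rises to S at the returned time; no approval, no fork.
    if approval_fraction < M:
        return None
    return max(T1, approval_reached_at + T3)

print(hardfork_time(0.83, datetime(2016, 1, 15)))   # floored to t1: 2016-06-01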

 

 If the community as a whole could agree with this experimental hardfork,
 we could announce the plan on bitcoin.org and start coding the patch
 immediately. At the same time, exploration for a better solution continues.
 If no further consensus could be reached, a new version of Bitcoin Core
 with the patch will be released on or before 1 Feb 2016 and everyone will
 be asked to upgrade immediately.
 ___
 bitcoin-dev mailing list
 bitcoin-dev@lists.linuxfoundation.org
 https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


 ___
 bitcoin-dev mailing list
 bitcoin-dev@lists.linuxfoundation.org
 https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev



___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Bitcoin is an experiment. Why don't we have an experimental hardfork?

2015-08-18 Thread Danny Thorpe via bitcoin-dev
Again, I'm not suggesting further testing in sterile environments.  I'm
suggesting testing on the public global testnet network, so that real-world
hazards such as network lag, bandwidth constraints, traffic bottlenecks,
etc can wreak what havoc they can on the proposed implementation.  Also, a
test deployment would give more people an opportunity to see how the
proposed implementation works and kick the tires, which might help to
reduce some degree of angst about the proposals.

Your point appears to be that the biggest challenge facing Bitcoin is not
technical, but political. Sadly, you may be right.

On Tue, Aug 18, 2015 at 2:17 PM, Eric Lombrozo elombr...@gmail.com wrote:

 People have already been testing big blocks on testnets.

 The biggest problem here isn’t whether we can test the code in a fairly
 sterile environment. The biggest problem is convincing enough people to
 switch without:

 1) Pissing off the other side enough to the point where regardless of who
 wins the other side refuses to cooperate
 2) Screwing up the incentive model, allowing people to sabotage the
 process somehow
 3) Setting a precedent enabling hostile entities to destroy the network
 from within in the future
 etc…

 These kinds of things seem very hard to test on a testnet.

 On Aug 18, 2015, at 2:06 PM, Danny Thorpe danny.tho...@gmail.com wrote:

 Ya, so?  All that means is that the experiment might reach the hard fork
 tipping point faster than mainnet would. Verifying that the network can
 handle such transitions, and how larger blocks affect the network, is the
 point of testing.

 And when I refer to testnet, I mean the public global testnet blockchain,
 not in-house isolated networks like testnet-in-a-box.

 On Tue, Aug 18, 2015 at 1:51 PM, Eric Lombrozo elombr...@gmail.com
 wrote:

 The problem is, if you know most of the people running the testnet personally
 (as is pretty much the case with many current testnets), then the deployment
 amounts to “hey guys, let’s install the new version”

 On Aug 18, 2015, at 1:48 PM, Danny Thorpe via bitcoin-dev 
 bitcoin-dev@lists.linuxfoundation.org wrote:

 Deploying experimental code onto the live bitcoin blockchain seems
 unnecessarily risky.  Why not deploy a blocksize limit experiment for long
 term trials using testnet instead?

 On Tue, Aug 18, 2015 at 2:54 AM, jl2012 via bitcoin-dev 
 bitcoin-dev@lists.linuxfoundation.org wrote:

 As I understand, there is already a consensus among the core devs that block
 size should/could be raised. The remaining questions are how, when, how
 much, and how fast. These are the questions for the coming Bitcoin
 Scalability Workshops, but immediate consensus on these issues is not
 guaranteed.

 Could we just stop the debate for a moment, and agree to a scheduled
 experimental hardfork?

 Objectives (by order of importance):

 1. The most important objective is to show the world that reaching
 consensus for a Bitcoin hardfork is possible. If we could have a successful
 one, we would have more in the future

 2. With a slight increase in block size, to collect data for future
 hardforks

 3. To slightly relieve the pressure of full blocks, with minimal
 adverse effects on network performance

 With the objectives 1 and 2 in mind, this is NOT intended to be a
 kick-the-can-down-the-road solution. The third objective is more like a
 side effect of this experiment.


 Proposal (parameters in ** are my recommendations but negotiable):

 1. Today, we all agree that some kind of block size hardfork will happen
 on t1=*1 June 2016*

 2. If no other consensus could be reached before t2=*1 Feb 2016*, we
 will adopt the backup plan

 3. The backup plan is: t3=*30 days* after m=*80%* of miner approval, but
 not before t1=*1 June 2016*, the block size is increased to s=*1.5MB*

 4. If the backup plan is adopted, we all agree that a better solution
 should be found before t4=*31 Dec 2017*.

 Rationale:

 t1 = 1 June 2016 is chosen to make sure everyone has enough time to
 prepare for a hardfork. Although we do not know what exactly will happen,
 we know something must happen around that moment.

 t2 = 1 Feb 2016 is chosen to allow 5 more months of negotiations (and 2
 months after the workshops). If it is successful, we don't need to activate
 the backup plan

 t3 = 30 days is chosen to make sure all full nodes have enough time to
 upgrade after the actual hardfork date is confirmed

 t4 = 31 Dec 2017 is chosen so that, with 1.5 years of data and further
 debate, we will hopefully find a better solution. It is important to
 acknowledge that the backup plan is not a final solution

 m = 80%: We don't want a very small portion of miners to have the power
 to veto a hardfork, while it is important to make sure the new fork is
 secured by enough mining power. 80% is just a compromise.

 s = 1.5MB. As the 1MB cap was set 5 years ago, there is no doubt that
 all types of technology have since improved by 50%. I don't mind making it
 a bit smaller but in that case

Re: [bitcoin-dev] Dynamically Controlled Bitcoin Block Size Max Cap

2015-08-18 Thread Danny Thorpe via bitcoin-dev
I like the simplicity of this approach.  Demand-driven response.

Is there really a need to reduce the max block size at all?  It is just a
maximum limit, not a required size for every block.  If a seasonal
transaction surge bumps the max block size limit up a notch, what harm is
there in leaving the max block size limit at the high water mark
indefinitely, even through periods of decreased transaction volume?

I'd argue to remove the reduction step, making the max block size
calculation a monotonic increasing function. Cut the complexity in half.

-Danny

On Tue, Aug 18, 2015 at 5:13 AM, Upal Chakraborty via bitcoin-dev 
bitcoin-dev@lists.linuxfoundation.org wrote:

 Regarding:
 http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-August/010295.html


 I am arguing with the following statement here...

 *I see problems to this approach. The biggest one I see is that a miner
 with 11% of hash power could sabotage block size increases by only ever
 mining empty blocks.*



 First, I would like to argue from an economics point of view. If someone
 wants to hold back the block size increase with 11% hash power by mining
 empty blocks, he has to sacrifice Tx fees, which is not economical. 11%
 hash power will most likely be a pool, and pool miners will find out soon
 that they are losing Tx fees because of the pool owner's intention. Hence,
 they'll switch pools and the pool owner will lose out. This is the same
 reason why a 51% attack will never happen, even if a pool gets more than
 51% hash power.


 Next, I would like to propose a slightly modified technical solution to
 this problem in algorithmic format...

 If more than 50% of the blocks found in the first 2000 blocks of the last
 difficulty period are larger than 90% of MaxBlockSize:
  Double MaxBlockSize
 Else if more than 90% of those blocks are smaller than 50% of MaxBlockSize:
  Halve MaxBlockSize
 Else:
  Keep the same MaxBlockSize

 This way, those who want to stop an increase need to have more than 50% of
 the hash power. Those who want to stop a decrease need to have more than 10%
 of the hash power, but must mine blocks larger than 50% of MaxBlockSize. In
 this approach, not only miners but also end users have their say, because
 end users fill up the mempool from which miners take Tx to fill up the
 blocks. Please note that taking the first 2000 of the last 2016 blocks is
 important to avoid data discrepancies among different nodes due to orphan
 blocks. It is assumed that a chain cannot be orphaned after having 16
 confirmations.
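A minimal sketch of the adjustment rule above; the function and variable
names are illustrative:

def next_max_block_size(block_sizes, max_size):
    # block_sizes: sizes of the first 2000 blocks of the last difficulty period
    big = sum(1 for s in block_sizes if s > 0.9 * max_size)
    small = sum(1 for s in block_sizes if s < 0.5 * max_size)
    if big > 0.5 * len(block_sizes):      # >50% of blocks above 90% of the cap
        return max_size * 2
    if small > 0.9 * len(block_sizes):    # >90% of blocks below half the cap
        return max_size // 2
    return max_size

Dropping the halving branch would give the monotonic variant suggested
earlier in this message.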

 Looking for comments.





 ___
 bitcoin-dev mailing list
 bitcoin-dev@lists.linuxfoundation.org
 https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] TestNet block starved?

2015-08-14 Thread Danny Thorpe via bitcoin-dev
Any idea what's going on with TestNet?  No blocks for nearly 2 hours now,
according to multiple sources (blockr.io, my own bitcoin node, etc.).

Prior to the last block (530516), there were a lot of blocks with zero
transactions and only an occasional block with a ton of txs.

Any info?

Thx,
-Danny
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev