Re: [Bitcoin-development] Long-term mining incentives

2015-05-12 Thread Gavin Andresen
Added back the list, I didn't mean to reply privately:

Fair enough, I'll try to find time in the next month or three to write up
four plausible future scenarios for how mining incentives might work:

1) Fee-supported with very large blocks containing lots of tiny-fee
transactions
2) Proof-of-idle supported (I wish Tadge Dryja would publish his
proof-of-idle idea)
3) Fees purely as a transaction-spam-prevention measure, chain security via
an alternative consensus algorithm (in this scenario there is very little
mining).
4) Fee-supported with small blocks containing high-fee transactions moving
coins to/from sidechains.

Would that be helpful, or do you have some reason for thinking that we
should pick just one and focus all of our efforts on making that one
scenario happen?

I always think it is better, when possible, not to bet on one horse.


On Tue, May 12, 2015 at 10:39 AM, Thomas Voegtlin thom...@electrum.org
wrote:

 Le 12/05/2015 15:44, Gavin Andresen a écrit :
  Ok, here's my scenario:
 
  https://blog.bitcoinfoundation.org/a-scalability-roadmap/
 
  It might be wrong. I welcome other people to present their road maps.
 

 [answering to you only because you answered to me and not to the list;
 feel free to repost this to the list though]

 Yes, that's exactly the kind of roadmap I am asking for. But your blog
 post does not say anything about long-term mining incentives; it only
 talks about scalability. My point is that we need the same kind of thing
 for miners' incentives.




-- 
--
Gavin Andresen
--
One dashboard for servers and applications across Physical-Virtual-Cloud 
Widest out-of-the-box monitoring support with 50+ applications
Performance metrics, stats and reports that give you Actionable Insights
Deep dive visibility with transaction tracing using APM Insight.
http://ad.doubleclick.net/ddm/clk/290420510;117567292;y
_______________________________________________
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Proposed additional options for pruned nodes

2015-05-12 Thread gabe appleton
Yes, but that just increases the incentive for partially-full nodes. It
would add to the assumed-small number of full nodes.

Or am I misunderstanding?

On Tue, May 12, 2015 at 12:05 PM, Jeff Garzik jgar...@bitpay.com wrote:

 A general assumption is that you will have a few archive nodes with the
 full blockchain, and a majority of nodes are pruned, able to serve only the
 tail of the chains.


 On Tue, May 12, 2015 at 8:26 AM, gabe appleton gapplet...@gmail.com
 wrote:

 Hi,

 There's been a lot of talk in the rest of the community about how the
 20MB step would increase storage needs, and that switching to pruned nodes
 (partially) would reduce network security. I think I may have a solution.

 There could be a hybrid option in nodes. Selecting this would do the
 following:
 1) Flip the --no-wallet toggle
 2) Select a section of the blockchain to store fully (percentage based,
 possibly on hash % sections?)
 3) Begin pruning all sections not included in (2)
 The idea is that you can implement it similarly to how a Koorde is done, in
 that the network will decide which sections it retrieves. So if the user
 prompts it to store 50% of the blockchain, it would look at its peers, and
 at their peers (if secure), and choose the least-occurring options from
 them.

 This would allow them to continue validating all transactions, and still
 store a full copy, just distributed among many nodes. It should overall
 have little impact on security (unless I'm mistaken), and it would
 significantly reduce storage needs on a node.

 It would also allow for a retroactive --max-size flag, where it will
 prune until it is at the specified size, and continue to prune over time,
 while keeping to the sections defined by the network.
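A minimal sketch of the least-occurring selection step described above, assuming peers advertise which sections they keep (function and variable names are illustrative, not from any implementation):

```python
from collections import Counter

def choose_sections(num_sections, keep_fraction, peer_section_sets):
    """Pick the sections least represented among peers, rarest first."""
    counts = Counter()
    for sections in peer_section_sets:
        counts.update(sections)
    # Sections no peer advertises have count 0, so they sort first.
    rarity = sorted(range(num_sections), key=lambda s: counts[s])
    keep = int(num_sections * keep_fraction)
    return set(rarity[:keep])

# A node keeping 50% of 8 sections, given three peers' advertisements:
peers = [{0, 1, 2, 3}, {0, 1, 4, 5}, {0, 2, 4, 6}]
print(sorted(choose_sections(8, 0.5, peers)))  # [3, 5, 6, 7]
```

Ties and the "if secure" verification of second-hop peers are left out; a real node would also need to re-evaluate as peers come and go.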

 What sort of side effects or network vulnerabilities would this
 introduce? I know some said it wouldn't be Sybil resistant, but how would
 this be less so than a fully pruned node?






 --
 Jeff Garzik
 Bitcoin core developer and open source evangelist
 BitPay, Inc.  https://bitpay.com/



Re: [Bitcoin-development] Proposed additional options for pruned nodes

2015-05-12 Thread Peter Todd
On Tue, May 12, 2015 at 09:05:44AM -0700, Jeff Garzik wrote:
 A general assumption is that you will have a few archive nodes with the
 full blockchain, and a majority of nodes are pruned, able to serve only the
 tail of the chains.

Hmm?

Lots of people are tossing around ideas for partial archival nodes that
would store a subset of blocks, such that collectively the whole
blockchain would be available even if no one node had the entire chain.

-- 
'peter'[:-1]@petertodd.org
156d2069eeebb3309455f526cfe50efbf8a85ec630df7f7c




Re: [Bitcoin-development] Long-term mining incentives

2015-05-12 Thread Dave Hudson
I think proof-of-idle had a potentially serious problem when I last looked at
it. The risk is that a largish miner can use everyone else's idle time to
construct a very long chain; it's also easy enough for them to make it appear
to be the work of a large number of distinct miners. Given that this would
allow them to arbitrarily re-mine any block rewards and potentially censor any
transactions, that just seems like a huge security hole.


Cheers,
Dave


 On 12 May 2015, at 17:10, Gavin Andresen gavinandre...@gmail.com wrote:
 
 Added back the list, I didn't mean to reply privately:
 
 Fair enough, I'll try to find time in the next month or three to write up 
 four plausible future scenarios for how mining incentives might work:
 
 1) Fee-supported with very large blocks containing lots of tiny-fee 
 transactions
 2) Proof-of-idle supported (I wish Tadge Dryja would publish his 
 proof-of-idle idea)
 3) Fees purely as a transaction-spam-prevention measure, chain security via
 an alternative consensus algorithm (in this scenario there is very little
 mining).
 4) Fee-supported with small blocks containing high-fee transactions moving
 coins to/from sidechains.
 
 Would that be helpful, or do you have some reason for thinking that we should 
 pick just one and focus all of our efforts on making that one scenario happen?
 
 I always think it is better, when possible, not to bet on one horse.
 
 
 On Tue, May 12, 2015 at 10:39 AM, Thomas Voegtlin thom...@electrum.org wrote:
 Le 12/05/2015 15:44, Gavin Andresen a écrit :
  Ok, here's my scenario:
 
  https://blog.bitcoinfoundation.org/a-scalability-roadmap/
 
  It might be wrong. I welcome other people to present their road maps.
 
 
 [answering to you only because you answered to me and not to the list;
 feel free to repost this to the list though]
 
 Yes, that's exactly the kind of roadmap I am asking for. But your blog
  post does not say anything about long-term mining incentives; it only
  talks about scalability. My point is that we need the same kind of thing
  for miners' incentives.
 
 
 
 -- 
 --
 Gavin Andresen
 --



Re: [Bitcoin-development] Proposed additional options for pruned nodes

2015-05-12 Thread Jeff Garzik
A general assumption is that you will have a few archive nodes with the
full blockchain, and a majority of nodes are pruned, able to serve only the
tail of the chains.


On Tue, May 12, 2015 at 8:26 AM, gabe appleton gapplet...@gmail.com wrote:

 Hi,

 There's been a lot of talk in the rest of the community about how the 20MB
 step would increase storage needs, and that switching to pruned nodes
 (partially) would reduce network security. I think I may have a solution.

 There could be a hybrid option in nodes. Selecting this would do the
 following:
 1) Flip the --no-wallet toggle
 2) Select a section of the blockchain to store fully (percentage based,
 possibly on hash % sections?)
 3) Begin pruning all sections not included in (2)
 The idea is that you can implement it similarly to how a Koorde is done, in
 that the network will decide which sections it retrieves. So if the user
 prompts it to store 50% of the blockchain, it would look at its peers, and
 at their peers (if secure), and choose the least-occurring options from
 them.

 This would allow them to continue validating all transactions, and still
 store a full copy, just distributed among many nodes. It should overall
 have little impact on security (unless I'm mistaken), and it would
 significantly reduce storage needs on a node.

 It would also allow for a retroactive --max-size flag, where it will prune
 until it is at the specified size, and continue to prune over time, while
 keeping to the sections defined by the network.

 What sort of side effects or network vulnerabilities would this introduce?
 I know some said it wouldn't be Sybil resistant, but how would this be less
 so than a fully pruned node?






-- 
Jeff Garzik
Bitcoin core developer and open source evangelist
BitPay, Inc.  https://bitpay.com/


Re: [Bitcoin-development] Proposed additional options for pruned nodes

2015-05-12 Thread Jeff Garzik
True.  Part of the issue rests on the block sync horizon/cliff.  There is a
value X which is the average number of blocks the 90th percentile of nodes
need in order to sync.  It is sufficient for the [semi-]pruned nodes to
keep X blocks, after which nodes must fall back to archive nodes for older
data.

There is simply far, far more demand for recent blocks, and the demand for
old blocks very rapidly falls off.

There was even a more radical suggestion years ago - refuse to sync if too
old (2 weeks?), and force the user to download ancient data via torrent.



On Tue, May 12, 2015 at 1:02 PM, Gregory Maxwell gmaxw...@gmail.com wrote:

 On Tue, May 12, 2015 at 7:38 PM, Jeff Garzik jgar...@bitpay.com wrote:
  One general problem is that security is weakened when an attacker can DoS a
  small part of the chain by DoS'ing a small number of nodes - yet the impact
  is a network-wide DoS because nobody can complete a sync.

 It might be more interesting to think of that attack as a bandwidth
 exhaustion DoS attack on the archive nodes... if you can't get a copy
 without them, that's where you'll go.

 So the question arises: does the option make some nodes that would
 have been archive not be? Probably some-- but would it do so much that
 it would offset the gain of additional copies of the data when those
 attacks are not going on? I suspect not.

 It's also useful to give people incremental ways to participate even
 when they can't swallow the whole pill, or to provide the
 resource that's cheap for them to provide.  In particular, if there are
 only two kinds of full nodes-- archive and pruned-- then the archive
 nodes take both a huge disk and bandwidth cost; whereas if there are
 fractional nodes then archives take low(er) bandwidth unless the
 fractionals get DoS attacked.




-- 
Jeff Garzik
Bitcoin core developer and open source evangelist
BitPay, Inc.  https://bitpay.com/


Re: [Bitcoin-development] Bitcoin transaction

2015-05-12 Thread Danny Thorpe
See the Open Assets protocol specification for technical details on how a
colored coin (of the Open Asset flavor) is represented in a bitcoin
transaction.

https://github.com/OpenAssets/open-assets-protocol

http://www.CoinPrism.com also has a discussion forum where some colored
coin devs hang out.

http://www.coinprism.info is a blockchain explorer that is colored-coin
aware.

On Tue, May 12, 2015 at 2:54 AM, Telephone Lemien lemienteleph...@gmail.com
 wrote:

 Thank you.
 I know this, but I want more details about the inputs/outputs, or about
 the script of an input/output, and how I would proceed in the code.
 Thanks to all for replying.

 2015-05-12 11:47 GMT+02:00 Patrick Mccorry (PGR) 
 patrick.mcco...@newcastle.ac.uk:

  There is no difference to the transaction as far as I'm aware – just the
 inputs / outputs have a special meaning (and should have a special order).
 So you can track 1 BTC throughout the blockchain and this 1 BTC represents
 my asset. Someone may give a more useful answer.



 *From:* Telephone Lemien [mailto:lemienteleph...@gmail.com]
 *Sent:* 12 May 2015 10:45
 *To:* Bitcoin Dev
 *Subject:* [Bitcoin-development] Bitcoin transaction



 Hello everybody,

 I want to know what the difference is, technically, between a Bitcoin
 transaction and a colored-coin transaction.

 Thanks








Re: [Bitcoin-development] Reducing the block rate instead of increasing the maximum block size

2015-05-12 Thread Sergio Lerner


On 11/05/2015 04:25 p.m., Leo Wandersleb wrote:
 I assume that 1 minute block target will not get any substantial support but
 just in case only few people speaking up might be taken as careful support
 of the idea, here's my two cents:

 In mining, stale shares depend on delay between pool/network and the miner.
 This varies substantially globally and, as Peter Todd/Luke-Jr mentioned,
 speed of light will always keep those at a disadvantage that are 100
 light-milliseconds away from the creation of the last block. If anything,
 this warrants increasing the block target, not reducing it. (The increase
 might wait until we have miners on Mars though ;) )

An additional delay of 200 milliseconds means losing approximately 0.3%
of the revenue.
Do you really think this is going to be the key factor preventing a
mining pool from being used?
There are a lot of other factors, such as DoS protections, security,
privacy, variance, trust, and the algorithm to distribute shares, that are
much more important than that.
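The 0.3% figure is roughly the extra delay divided by the block interval, a back-of-the-envelope orphan-rate estimate assuming 1-minute blocks:

```python
extra_delay_s = 0.2       # 200 ms of additional propagation delay
block_interval_s = 60.0   # 1-minute block target
# A block found within the last `extra_delay_s` of an interval risks being
# orphaned, so the expected revenue loss is approximately the ratio of the two.
orphan_fraction = extra_delay_s / block_interval_s
print(f"{orphan_fraction:.2%}")  # 0.33%
```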

And having a 1 minute block actually reduces the payout variance 10x, so
miners will be happy about that. And many pool miners may opt to do solo
mining, and create new full nodes.



 If SPV also becomes 10 times more traffic intensive, I can only urge you
 to travel to anything but central Europe or the USA.
The SPV traffic is minuscule. Bloom filters are an ugly solution that
increases bandwidth and does not provide a real privacy solution.
Small improvements in the wire protocol can reduce the traffic two-fold.



 I want bitcoin to be the currency for the other x billion and thus I
 oppose any change that moves the balance towards the economically upper
 billion.
With a 10-minute rate, Bitcoin is a good Internet money. If you have a
1-minute rate, then it can also be a retail payment method, a virtual-game
trading payment method, a gambling or XXX-video-renting payment method
(hey, it takes less than 10 minutes to see one of those :), and much more.

You can reach more billions by having near-instant payments.
Don't tell me about the morning coffee; I would like everyone to be buying
their coffee with Bitcoin, with millions of users, before we figure out how
to do that off-chain.

Best regards,
 Sergio.




Re: [Bitcoin-development] Proposed additional options for pruned nodes

2015-05-12 Thread gabe appleton
Yet this holds true in our current assumptions of the network as well: that
it will become a collection of pruned nodes with a few storage nodes.

A hybrid option makes this better, because it spreads the risk, rather than
concentrating it in full nodes.
On May 12, 2015 3:38 PM, Jeff Garzik jgar...@bitpay.com wrote:

 One general problem is that security is weakened when an attacker can DoS
 a small part of the chain by DoS'ing a small number of nodes - yet the
 impact is a network-wide DoS because nobody can complete a sync.


 On Tue, May 12, 2015 at 12:24 PM, gabe appleton gapplet...@gmail.com
 wrote:

  0, 1, 3, 4, 5, 6 can be solved by looking at chunks chronologically. I.e.,
  give the signed (by sender) hash of the first and last block in your range.
  This is less data-dense than the idea above, but it might work better.

 That said, this is likely a less secure way to do it. To improve upon
 that, a node could request a block of random height within that range and
 verify it, but that violates point 2. And the scheme in itself definitely
 violates point 7.
 On May 12, 2015 3:07 PM, Gregory Maxwell gmaxw...@gmail.com wrote:

 It's a little frustrating to see this just repeated without even
 paying attention to the desirable characteristics from the prior
 discussions.

 Summarizing from memory:

 (0) Block coverage should have locality; historical blocks are
 (almost) always needed in contiguous ranges.   Having random peers
 with totally random blocks would be horrific for performance; as you'd
 have to hunt down a working peer and make a connection for each block
 with high probability.

 (1) Block storage on nodes with a fraction of the history should not
 depend on believing random peers; because listening to peers can
  easily create attacks (e.g. someone could break the network by
  convincing nodes to become unbalanced) and it's not useful-- it's not like
 the blockchain is substantially different for anyone; if you're to the
 point of needing to know coverage to fill then something is wrong.
 Gaps would be handled by archive nodes, so there is no reason to
 increase vulnerability by doing anything but behaving uniformly.

 (2) The decision to contact a node should need O(1) communications,
 not just because of the delay of chasing around just to find who has
 someone; but because that chasing process usually makes the process
 _highly_ sybil vulnerable.

 (3) The expression of what blocks a node has should be compact (e.g.
 not a dense list of blocks) so it can be rumored efficiently.

 (4) Figuring out what block (ranges) a peer has given should be
 computationally efficient.

 (5) The communication about what blocks a node has should be compact.

  (6) The coverage created by the network should be uniform, and should
  remain uniform as the blockchain grows; ideally you shouldn't need
  to update your state to know what blocks a peer will store in the
  future, assuming that it doesn't change the amount of data it's
  planning to use. (What Tier Nolan proposes sounds like it fails this
  point)

 (7) Growth of the blockchain shouldn't cause much (or any) need to
 refetch old blocks.

 I've previously proposed schemes which come close but fail one of the
 above.

 (e.g. a scheme based on reservoir sampling that gives uniform
 selection of contiguous ranges, communicating only 64 bits of data to
 know what blocks a node claims to have, remaining totally uniform as
 the chain grows, without any need to refetch -- but needs O(height)
 work to figure out what blocks a peer has from the data it
 communicated.;   or another scheme based on consistent hashes that has
 log(height) computation; but sometimes may result in a node needing to
 go refetch an old block range it previously didn't store-- creating
 re-balancing traffic.)

  So far something that meets all those criteria (and/or whatever ones
  I'm not remembering) has not been discovered; but I don't really think
  much time has been spent on it. I think it's very likely possible.






Re: [Bitcoin-development] Proposed additional options for pruned nodes

2015-05-12 Thread Tier Nolan
On Tue, May 12, 2015 at 6:16 PM, Peter Todd p...@petertodd.org wrote:


 Lots of people are tossing around ideas for partial archival nodes that
 would store a subset of blocks, such that collectively the whole
 blockchain would be available even if no one node had the entire chain.


A compact way to describe which blocks are stored helps mitigate
fingerprinting attacks.

It also means that a node could compactly indicate which blocks it stores
with service bits.

The node could pick two numbers

W = window = a power of 2
P = position = random value less than W

The node would store all blocks whose height is congruent to P mod W.  The
block hash could be used too.

This has the nice feature that the node can throw away half of its data and
still represent what is stored.

W_new = W * 2
P_new = (random_bool()) ? P + W : P;

Half of the stored blocks would match P_new mod W_new and the other half
could be deleted.  This means that the store would use up between 50% and
100% of the allocated size.
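The stripe scheme can be sketched as follows (reading the halving step so that the new stripe is one of the two halves of the old one, i.e. P_new is either P or P plus the old W; names are illustrative):

```python
import random

def stores(height, W, P):
    """A node with window W and position P keeps blocks with height = P (mod W)."""
    return height % W == P

def shrink_storage(W, P):
    """Double the window, keeping a randomly chosen half of the old stripe."""
    P_new = P + W if random.getrandbits(1) else P
    return W * 2, P_new

W, P = 8, 3
kept = [h for h in range(32) if stores(h, W, P)]   # heights 3, 11, 19, 27
W2, P2 = shrink_storage(W, P)
still_kept = [h for h in kept if stores(h, W2, P2)]
# Exactly half the old blocks survive, and every survivor was already stored.
assert len(still_kept) == len(kept) // 2
assert all(stores(h, W, P) for h in still_kept)
```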

Another benefit is that it increases the probability that at least someone
has every block.

If N nodes each store 1% of the blocks, then the odds of a given block being
stored by none of them are pow(0.99, N).  For 1000 nodes, that gives odds of
1 in 23,164 that a block will be missing.  That means that around 13 out of
300,000 blocks would be missing.  There would likely be more nodes than that,
and also storage nodes, so it is not a major risk.
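A quick check of those figures (the probability that none of N independent nodes picked a given block):

```python
N = 1000
p_missing = 0.99 ** N                 # chance no node stores a given block
print(round(1 / p_missing))           # 23164, i.e. odds of 1 in 23,164
print(round(300_000 * p_missing))     # 13 blocks expected missing out of 300,000
```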

If everyone is storing 1% of blocks, then they would set W to 128.  As long
as every one of the 128 buckets is covered by some node, then all blocks are
stored.  With 1000 nodes, that gives odds of 0.6% that at least one bucket
will be missed.  That is better than around 13 blocks being missing.

Nodes could inform peers of their W and P parameters on connection.  The
version message could be amended or a getparams message of some kind
could be added.

W could be encoded with 4 bits and P could be encoded with 16 bits, for 20
bits in total.  W = 1 << bits[19:16] and P = bits[15:0].  That gives a maximum
W of 32768, which is likely too many bits for P.
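The 4+16-bit packing could look like this (a sketch of the layout proposed above, with the 4 high bits as an exponent for W; function names are illustrative):

```python
def encode_params(W, P):
    """Pack W = 2**k (k <= 15) and P < W into 20 bits: k in bits 19:16, P in 15:0."""
    k = W.bit_length() - 1
    assert W == 1 << k and k <= 15 and 0 <= P < W
    return (k << 16) | P

def decode_params(packed):
    k = (packed >> 16) & 0xF
    return 1 << k, packed & 0xFFFF

# Round-trip a node keeping 1 in 128 blocks at position 37:
W, P = decode_params(encode_params(128, 37))
assert (W, P) == (128, 37)
```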

Initial download would be harder, since new nodes would have to connect to
at least 100 different nodes.  They could download from random nodes, and
just download the ones they are missing from storage nodes.  Even storage
nodes could have a range of W values.


Re: [Bitcoin-development] Proposed additional options for pruned nodes

2015-05-12 Thread Gregory Maxwell
It's a little frustrating to see this just repeated without even
paying attention to the desirable characteristics from the prior
discussions.

Summarizing from memory:

(0) Block coverage should have locality; historical blocks are
(almost) always needed in contiguous ranges.   Having random peers
with totally random blocks would be horrific for performance; as you'd
have to hunt down a working peer and make a connection for each block
with high probability.

(1) Block storage on nodes with a fraction of the history should not
depend on believing random peers; because listening to peers can
easily create attacks (e.g. someone could break the network by
convincing nodes to become unbalanced) and it's not useful-- it's not like
the blockchain is substantially different for anyone; if you're to the
point of needing to know coverage to fill then something is wrong.
Gaps would be handled by archive nodes, so there is no reason to
increase vulnerability by doing anything but behaving uniformly.

(2) The decision to contact a node should need O(1) communications,
not just because of the delay of chasing around just to find who has
someone; but because that chasing process usually makes the process
_highly_ sybil vulnerable.

(3) The expression of what blocks a node has should be compact (e.g.
not a dense list of blocks) so it can be rumored efficiently.

(4) Figuring out what block (ranges) a peer has given should be
computationally efficient.

(5) The communication about what blocks a node has should be compact.

(6) The coverage created by the network should be uniform, and should
remain uniform as the blockchain grows; ideally you shouldn't need
to update your state to know what blocks a peer will store in the
future, assuming that it doesn't change the amount of data it's
planning to use. (What Tier Nolan proposes sounds like it fails this
point)

(7) Growth of the blockchain shouldn't cause much (or any) need to
refetch old blocks.

I've previously proposed schemes which come close but fail one of the above.

(e.g. a scheme based on reservoir sampling that gives uniform
selection of contiguous ranges, communicating only 64 bits of data to
know what blocks a node claims to have, remaining totally uniform as
the chain grows, without any need to refetch -- but needs O(height)
work to figure out what blocks a peer has from the data it
communicated.;   or another scheme based on consistent hashes that has
log(height) computation; but sometimes may result in a node needing to
go refetch an old block range it previously didn't store-- creating
re-balancing traffic.)

So far something that meets all those criteria (and/or whatever ones
I'm not remembering) has not been discovered; but I don't really think
much time has been spent on it. I think it's very likely possible.



Re: [Bitcoin-development] CLTV opcode allocation; long-term plans?

2015-05-12 Thread Jorge Timón
This saves us opcodes for later, but it's uglier and produces slightly
bigger scripts.
If we're convinced it's worth it, this seems like the right way to do it,
and certainly CLTV and RCLTV/OP_MATURITY are related.
But let's not forget that we can always use this same trick with the
last opcode to get 2^64 brand new opcodes.
So I'm not convinced at all on whether we want #5496 or #6124.
But it would be nice to decide and stop blocking this.

On Sat, May 9, 2015 at 11:12 AM, Peter Todd p...@petertodd.org wrote:
 On Tue, May 05, 2015 at 01:54:33AM +0100, Btc Drak wrote:
  That said, if people have strong feelings about this, I would be willing
  to make OP_CLTV work as follows:
 
  nLockTime 1 OP_CLTV
 
  Where the 1 selects absolute mode, and all others act as OP_NOP's. A
  future relative CLTV could then be a future soft-fork implemented as
  follows:
 
  relative nLockTime 2 OP_CLTV
 
  On the bad side it'd be two or three days of work to rewrite all the
  existing tests and example code and update the BIP, and (slightly) gets
  us away from the well-tested existing implementation. It also may
  complicate the codebase compared to sticking with just doing a Script
  v2.0, with the additional execution environment data required for v2.0
  scripts cleanly separated out. But all in all, the above isn't too big
  of a deal.


 Adding a parameter to OP_CLTV makes it much more flexible and is the most
 economic use of precious NOPs.
 The extra time required is ok and it would be good to make this change to
 the PR in time for the feature freeze.

 Done!

 https://github.com/bitcoin/bitcoin/pull/5496#issuecomment-100454263

 --
 'peter'[:-1]@petertodd.org
 12c438a597ad15df697888be579f4f818a30517cd60fbdc8
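The mode-selector dispatch Peter describes can be sketched as follows (illustrative Python only, with invented names; this is not the actual PR #5496 implementation, and real CLTV must also match lock-time types and reject malformed operands):

```python
def check_cltv(mode: int, arg: int, tx_nlocktime: int, input_age: int) -> bool:
    """Sketch of a parameterized OP_CLTV: the mode operand selects
    absolute (1) or a hypothetical future relative (2) check; any
    other mode behaves as OP_NOP so it can be soft-forked in later."""
    if mode == 1:                      # absolute: compare against nLockTime
        return tx_nlocktime >= arg
    if mode == 2:                      # relative: compare against input age
        return input_age >= arg
    return True                        # unknown mode: act as a no-op
```

The no-op fallback for unknown modes is what makes later meanings deployable as a soft fork: old nodes accept anything new nodes accept.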





Re: [Bitcoin-development] Proposed additional options for pruned nodes

2015-05-12 Thread Jeff Garzik
One general problem is that security is weakened when an attacker can DoS a
small part of the chain by DoS'ing a small number of nodes - yet the impact
is a network-wide DoS because nobody can complete a sync.


On Tue, May 12, 2015 at 12:24 PM, gabe appleton gapplet...@gmail.com
wrote:

 0, 1, 3, 4, 5, 6 can be solved by looking at chunks chronologically. I.e.,
 give the sender-signed hash of the first and last block in your range.
 This is less data-dense than the idea above, but it might work better.

 That said, this is likely a less secure way to do it. To improve upon
 that, a node could request a block of random height within that range and
 verify it, but that violates point 2. And the scheme in itself definitely
 violates point 7.
 On May 12, 2015 3:07 PM, Gregory Maxwell gmaxw...@gmail.com wrote:

 It's a little frustrating to see this just repeated without even
 paying attention to the desirable characteristics from the prior
 discussions.

 Summarizing from memory:

 (0) Block coverage should have locality; historical blocks are
 (almost) always needed in contiguous ranges.   Having random peers
 with totally random blocks would be horrific for performance; as you'd
 have to hunt down a working peer and make a connection for each block
 with high probability.

 (1) Block storage on nodes with a fraction of the history should not
 depend on believing random peers; because listening to peers can
 easily create attacks (e.g. someone could break the network by
 convincing nodes to become unbalanced), and it's not useful-- it's not like
 the blockchain is substantially different for anyone; if you're to the
 point of needing to know coverage to fill then something is wrong.
 Gaps would be handled by archive nodes, so there is no reason to
 increase vulnerability by doing anything but behaving uniformly.

 (2) The decision to contact a node should need O(1) communications,
 not just because of the delay of chasing around just to find who has
 something; but because that chasing process usually makes the process
 _highly_ sybil vulnerable.

 (3) The expression of what blocks a node has should be compact (e.g.
 not a dense list of blocks) so it can be rumored efficiently.

 (4) Figuring out what block (ranges) a peer has given should be
 computationally efficient.

 (5) The communication about what blocks a node has should be compact.

 (6) The coverage created by the network should be uniform, and should
 remain uniform as the blockchain grows; ideally you shouldn't need
 to update your state to know what blocks a peer will store in the
 future, assuming that it doesn't change the amount of data it's
 planning to use. (What Tier Nolan proposes sounds like it fails this
 point)

 (7) Growth of the blockchain shouldn't cause much (or any) need to
 refetch old blocks.

 I've previously proposed schemes which come close but fail one of the
 above.

 (e.g. a scheme based on reservoir sampling that gives uniform
 selection of contiguous ranges, communicating only 64 bits of data to
 know what blocks a node claims to have, remaining totally uniform as
 the chain grows, without any need to refetch -- but needs O(height)
 work to figure out what blocks a peer has from the data it
 communicated; or another scheme based on consistent hashes that has
 log(height) computation; but sometimes may result in a node needing to
 go refetch an old block range it previously didn't store-- creating
 re-balancing traffic.)

 So far something that meets all those criteria (and/or whatever ones
 I'm not remembering) has not been discovered; but I don't really think
 much time has been spent on it. I think it's very likely possible.









-- 
Jeff Garzik
Bitcoin core developer and open source evangelist
BitPay, Inc.  

Re: [Bitcoin-development] A suggestion for reducing the size of the UTXO database

2015-05-12 Thread Danny Thorpe
Having thousands of utxos floating around for a single address is clearly a
bad thing - it creates a lot of memory load on bitcoin nodes.

However, having only one utxo for an address is also a bad thing, for
concurrent operations.

Having several utxos available to spend is good for parallelism, so that
2 or more tasks which are spending from the same address don't have to line
up single file waiting for one of the tasks to publish a tx first so that
the next task can spend the (unconfirmed) change output of the first.
Requiring/Forcing/Having a single output carry the entire balance of an
address does not work at scale. (Yes, this presumes that the tasks are
coordinated so that they don't attempt to spend the same outputs. Internal
coordination is solvable.)

In multiple replies, you push for having all utxos of an address spent in
one transaction.  Why all?  If the objective is to reduce the size of the
utxo pool, it would be sufficient simply to recommend that wallets and
other spenders consume more utxos than they create, on average.

I'm ok with "consume more utxos than you generate" as a good-citizen / best-
practices recommendation, but a requirement that all prior outputs must be
spent in one transaction seems excessive and impractical.
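A hypothetical sketch of the "consume more UTXOs than you create" recommendation as a coin-selection rule (names invented; fees, dust limits, and address handling are deliberately omitted):

```python
def select_consolidating(utxos, amount, extra=2):
    """Greedy selection that intentionally spends a few extra small
    outputs beyond what the payment needs, so each transaction tends
    to consume more UTXOs than the (payment + change) it creates."""
    utxos = sorted(utxos)              # smallest-first favors dust cleanup
    selected, total = [], 0
    for value in utxos:
        selected.append(value)
        total += value
        # Stop once the payment is covered AND we've soaked up a few
        # extra outputs beyond the minimum.
        if total >= amount and len(selected) >= extra + 2:
            break
    if total < amount:
        raise ValueError("insufficient funds")
    change = total - amount            # fee handling omitted for brevity
    return selected, change
```

Such a wallet still creates at most two outputs per spend, so on average the UTXO set shrinks without forcing every output of an address into one transaction.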

-Danny

On Sat, May 9, 2015 at 10:09 AM, Jim Phillips j...@ergophobia.org wrote:

 Forgive me if this idea has been suggested before, but I made this
 suggestion on reddit and I got some feedback recommending I also bring it
 to this list -- so here goes.

 I wonder if there isn't perhaps a simpler way of dealing with UTXO growth.
 What if, rather than deal with the issue at the protocol level, we deal
 with it at the source of the problem -- the wallets. Right now, the typical
 wallet selects only the minimum number of unspent outputs when building a
 transaction. The goal is to keep the transaction size to a minimum so that
 the fee stays low. Consequently, lots of unspent outputs just don't get
 used, and are left lying around until some point in the future.

 What if we started designing wallets to consolidate unspent outputs? When
 selecting unspent outputs for a transaction, rather than choosing just the
 minimum number from a particular address, why not select them ALL? Take all
 of the UTXOs from a particular address or wallet, send however much needs
 to be spent to the payee, and send the rest back to the same address or a
 change address as a single output? Through this method, we should wind up
 shrinking the UTXO database over time rather than growing it with each
 transaction. Obviously, as Bitcoin gains wider adoption, the UTXO database
 will grow, simply because there are 7 billion people in the world, and
 eventually a good percentage of them will have one or more wallets with
 spendable bitcoin. But this idea could limit the growth at least.

 The vast majority of users are running one of a handful of different
 wallet apps: Core, Electrum; Armory; Mycelium; Breadwallet; Coinbase;
 Circle; Blockchain.info; and maybe a few others. The developers of all
 these wallets have a vested interest in the continued usefulness of
 Bitcoin, and so should not be opposed to changing their UTXO selection
 algorithms to one that reduces the UTXO database instead of growing it.

 From the miners' perspective, even though these types of transactions would
 be larger, the fee could stay low. Miners actually benefit from them in
 that it reduces the amount of storage they need to dedicate to holding the
 UTXO. So miners are incentivized to mine these types of transactions with a
 higher priority despite a low fee.

 Relays could also get in on the action and enforce this type of behavior
 by refusing to relay or deprioritizing the relay of transactions that don't
 use all of the available UTXOs from the addresses used as inputs. Relays
 are not only the ones who benefit the most from a reduction of the UTXO
 database, they're also in the best position to promote good behavior.

 --
 *James G. Phillips IV*
 https://plus.google.com/u/0/113107039501292625391/posts

 *Don't bunt. Aim out of the ball park. Aim for the company of immortals.
 -- David Ogilvy*

  *This message was created with 100% recycled electrons. Please think
 twice before printing.*





Re: [Bitcoin-development] CLTV opcode allocation; long-term plans?

2015-05-12 Thread Pieter Wuille
I have no strong opinion, but a slight preference for separate opcodes.

Reason: given the current progress, they'll likely be deployed
independently, and maybe the end result is not something that cleanly fits
the current CLTV argument structure.


Re: [Bitcoin-development] Proposed additional options for pruned nodes

2015-05-12 Thread gabe appleton
0, 1, 3, 4, 5, 6 can be solved by looking at chunks chronologically. I.e.,
give the sender-signed hash of the first and last block in your range.
This is less data-dense than the idea above, but it might work better.

That said, this is likely a less secure way to do it. To improve upon that,
a node could request a block of random height within that range and verify
it, but that violates point 2. And the scheme in itself definitely violates
point 7.
On May 12, 2015 3:07 PM, Gregory Maxwell gmaxw...@gmail.com wrote:

 It's a little frustrating to see this just repeated without even
 paying attention to the desirable characteristics from the prior
 discussions.

 Summarizing from memory:

 (0) Block coverage should have locality; historical blocks are
 (almost) always needed in contiguous ranges.   Having random peers
 with totally random blocks would be horrific for performance; as you'd
 have to hunt down a working peer and make a connection for each block
 with high probability.

 (1) Block storage on nodes with a fraction of the history should not
 depend on believing random peers; because listening to peers can
 easily create attacks (e.g. someone could break the network by
 convincing nodes to become unbalanced), and it's not useful-- it's not like
 the blockchain is substantially different for anyone; if you're to the
 point of needing to know coverage to fill then something is wrong.
 Gaps would be handled by archive nodes, so there is no reason to
 increase vulnerability by doing anything but behaving uniformly.

 (2) The decision to contact a node should need O(1) communications,
 not just because of the delay of chasing around just to find who has
 something; but because that chasing process usually makes the process
 _highly_ sybil vulnerable.

 (3) The expression of what blocks a node has should be compact (e.g.
 not a dense list of blocks) so it can be rumored efficiently.

 (4) Figuring out what block (ranges) a peer has given should be
 computationally efficient.

 (5) The communication about what blocks a node has should be compact.

 (6) The coverage created by the network should be uniform, and should
 remain uniform as the blockchain grows; ideally you shouldn't need
 to update your state to know what blocks a peer will store in the
 future, assuming that it doesn't change the amount of data it's
 planning to use. (What Tier Nolan proposes sounds like it fails this
 point)

 (7) Growth of the blockchain shouldn't cause much (or any) need to
 refetch old blocks.

 I've previously proposed schemes which come close but fail one of the
 above.

 (e.g. a scheme based on reservoir sampling that gives uniform
 selection of contiguous ranges, communicating only 64 bits of data to
 know what blocks a node claims to have, remaining totally uniform as
 the chain grows, without any need to refetch -- but needs O(height)
 work to figure out what blocks a peer has from the data it
 communicated; or another scheme based on consistent hashes that has
 log(height) computation; but sometimes may result in a node needing to
 go refetch an old block range it previously didn't store-- creating
 re-balancing traffic.)

 So far something that meets all those criteria (and/or whatever ones
 I'm not remembering) has not been discovered; but I don't really think
 much time has been spent on it. I think it's very likely possible.





Re: [Bitcoin-development] CLTV opcode allocation; long-term plans?

2015-05-12 Thread Btc Drak
Gavin and @NicolasDorier have a point: if there isn't actually scarcity of
NOPs, because OP_NOP10 could become type OP_EX (if we run out), it makes
sense to choose the original unparameterised CLTV version #6124, which also
has been better tested. It's cleaner, more readable, and results in a
slightly smaller script, which has also got to be a plus.

On Tue, May 12, 2015 at 8:16 PM, Jorge Timón jti...@jtimon.cc wrote:

 This saves us opcodes for later, but it's uglier and produces slightly
 bigger scripts.
 If we're convinced it's worth it, it seems like the right way to do it,
 and certainly cltv and rclv/op_maturity are related.
 But let's not forget that we can always use this same trick with the
 last opcode to get 2^64 brand new opcodes.
 So I'm not convinced at all on whether we want #5496 or #6124.
 But it would be nice to decide and stop blocking this.

 On Sat, May 9, 2015 at 11:12 AM, Peter Todd p...@petertodd.org wrote:
  On Tue, May 05, 2015 at 01:54:33AM +0100, Btc Drak wrote:
   That said, if people have strong feelings about this, I would be
 willing
   to make OP_CLTV work as follows:
  
   nLockTime 1 OP_CLTV
  
   Where the 1 selects absolute mode, and all others act as OP_NOP's. A
   future relative CLTV could then be a future soft-fork implemented as
   follows:
  
   relative nLockTime 2 OP_CLTV
  
   On the bad side it'd be two or three days of work to rewrite all the
   existing tests and example code and update the BIP, and (slightly)
 gets
   us away from the well-tested existing implementation. It also may
   complicate the codebase compared to sticking with just doing a Script
   v2.0, with the additional execution environment data required for v2.0
   scripts cleanly separated out. But all in all, the above isn't too big
   of a deal.
 
 
  Adding a parameter to OP_CLTV makes it much more flexible and is the
 most
  economic use of precious NOPs.
  The extra time required is ok and it would be good to make this change
 to
  the PR in time for the feature freeze.
 
  Done!
 
  https://github.com/bitcoin/bitcoin/pull/5496#issuecomment-100454263
 
  --
  'peter'[:-1]@petertodd.org
  12c438a597ad15df697888be579f4f818a30517cd60fbdc8
 
 
 



Re: [Bitcoin-development] CLTV opcode allocation; long-term plans?

2015-05-12 Thread Luke Dashjr
It should actually be straightforward to softfork RCLTV in as a negative CLTV.
All nLockTime are >= any negative number, so a negative number makes CLTV a 
no-op always. Therefore, it is clean to define negative numbers as relative 
later. It's also somewhat obvious to developers, since negative numbers often 
imply an offset (eg, negative list indices in Python).
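The semantics Luke suggests can be sketched like this (illustrative only, with invented names; a real opcode would also have to match lock-time types and handle the encoding issues Peter raises below in the thread):

```python
def check_locktime(stack_top: int, tx_nlocktime: int, input_depth: int) -> bool:
    """Sketch of negative-CLTV-as-RCLTV: a non-negative operand keeps
    the absolute CLTV semantics, while a negative operand -- today
    always satisfied, since every nLockTime exceeds it -- could later
    be soft-forked to mean a relative lock of abs(value) confirmations."""
    if stack_top >= 0:
        return tx_nlocktime >= stack_top      # absolute CLTV
    return input_depth >= -stack_top          # hypothetical future RCLTV
```

Because the negative case currently always succeeds, tightening it later only ever rejects transactions old nodes would accept, which is what makes it a soft fork.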

Luke



Re: [Bitcoin-development] CLTV opcode allocation; long-term plans?

2015-05-12 Thread Peter Todd
On Tue, May 12, 2015 at 08:38:27PM +, Luke Dashjr wrote:
 It should actually be straightforward to softfork RCLTV in as a negative CLTV.
 All nLockTime are >= any negative number, so a negative number makes CLTV a 
 no-op always. Therefore, it is clean to define negative numbers as relative 
 later. It's also somewhat obvious to developers, since negative numbers often 
 imply an offset (eg, negative list indices in Python).

Doing this makes handling the year 2038 problem a good deal more
complex.

The CLTV codebase specifically fails on negative arguments to avoid any
ambiguity or implementation differences here.

-- 
'peter'[:-1]@petertodd.org
0e7980aab9c096c46e7f34c43a661c5cb2ea71525ebb8af7




Re: [Bitcoin-development] Proposed additional options for pruned nodes

2015-05-12 Thread gabe appleton
I suppose this raises two questions:

1) why not have a partial archive store the most recent X% of the
blockchain by default?

2) why not include some sort of torrent in QT, to mitigate this risk? I
don't think this is necessarily a good idea, but I'd like to hear the
reasoning.
On May 12, 2015 4:11 PM, Jeff Garzik jgar...@bitpay.com wrote:

 True.  Part of the issue rests on the block sync horizon/cliff.  There is
 a value X which is the average number of blocks the 90th percentile of
 nodes need in order to sync.  It is sufficient for the [semi-]pruned nodes
 to keep X blocks, after which nodes must fall back to archive nodes for
 older data.

 There is simply far, far more demand for recent blocks, and the demand for
 old blocks very rapidly falls off.

 There was even a more radical suggestion years ago - refuse to sync if too
 old (2 weeks?), and force the user to download ancient data via torrent.



 On Tue, May 12, 2015 at 1:02 PM, Gregory Maxwell gmaxw...@gmail.com
 wrote:

 On Tue, May 12, 2015 at 7:38 PM, Jeff Garzik jgar...@bitpay.com wrote:
  One general problem is that security is weakened when an attacker can
 DoS a
  small part of the chain by DoS'ing a small number of nodes - yet the
 impact
  is a network-wide DoS because nobody can complete a sync.

 It might be more interesting to think of that attack as a bandwidth
 exhaustion DOS attack on the archive nodes... if you can't get a copy
 without them, thats where you'll go.

 So the question arises: does the option make some nodes that would
 have been archive not be? Probably some-- but would it do so much that
 it would offset the gain of additional copies of the data when those
 attacks are not going on? I suspect not.

 It's also useful to give people incremental ways to participate even
 when they can't swallow the whole pill, or to provide the
 resource that's cheap for them to provide.  In particular, if there are
 only two kinds of full nodes-- archive and pruned-- then the archive
 nodes take both a huge disk and bandwidth cost; whereas if there are
 fractional nodes then archives take low(er) bandwidth unless the
 fractionals get DOS attacked.




 --
 Jeff Garzik
 Bitcoin core developer and open source evangelist
 BitPay, Inc.  https://bitpay.com/



Re: [Bitcoin-development] Proposed additional options for pruned nodes

2015-05-12 Thread Gregory Maxwell
On Tue, May 12, 2015 at 8:10 PM, Jeff Garzik jgar...@bitpay.com wrote:
 True.  Part of the issue rests on the block sync horizon/cliff.  There is a
 value X which is the average number of blocks the 90th percentile of nodes
 need in order to sync.  It is sufficient for the [semi-]pruned nodes to keep
 X blocks, after which nodes must fall back to archive nodes for older data.


Prior discussion had things like "the definition of pruned means you
have and will serve at least the last 288 from your tip" (which is
what I put in the pruned service bip text); and another flag for "I
have at least the last 2016". (2016 should be reevaluated-- it was
just a round number near where sipa's old data showed the fetch
probability flatlined.)

That data was old, but what it showed was that the probability of a
block being fetched vs. depth looked like an exponential drop-off (I
think with a 50% point at 3-ish days), plus a constant low
probability-- which is probably what we should have expected.
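The "last 288 / last 2016 from tip" definitions above could be advertised as service flags along these lines (a hypothetical sketch with invented constants, not the actual service-bit assignments in the BIP text):

```python
PRUNED_288 = 1 << 0    # serves at least the last 288 blocks from its tip
PRUNED_2016 = 1 << 1   # serves at least the last 2016 blocks from its tip

def can_serve(peer_flags: int, peer_tip: int, want_height: int) -> bool:
    """Decide whether a peer advertising the hypothetical depth flags
    above can serve the block at want_height, per the 'N blocks from
    your tip' definitions discussed in the thread."""
    depth = peer_tip - want_height
    if depth < 0:
        return False                   # block is beyond the peer's tip
    if peer_flags & PRUNED_2016 and depth < 2016:
        return True
    if peer_flags & PRUNED_288 and depth < 288:
        return True
    return False
```

A syncing node can evaluate this locally from a peer's advertised flags and tip, with no extra round trips.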

 There was even a more radical suggestion years ago - refuse to sync if too
 old (2 weeks?), and force the user to download ancient data via torrent.

I'm not fond of this; it makes the system dependent on centralized
services (e.g. trackers and sources of torrents). A torrent also
cannot efficiently handle fractional copies, and cannot efficiently
grow over time. Bitcoin should be complete-- plus, many nodes already
have the data.



[Bitcoin-development] Fwd: Proposed additional options for pruned nodes

2015-05-12 Thread Adam Weiss
FYI on behalf of jgarzik...

-- Forwarded message --
From: Jeff Garzik jgar...@bitpay.com
Date: Tue, May 12, 2015 at 4:48 PM
Subject: Re: [Bitcoin-development] Proposed additional options for pruned
nodes
To: Adam Weiss a...@signal11.com


Maybe you could forward my response to the list as an FYI?


On Tue, May 12, 2015 at 12:43 PM, Jeff Garzik jgar...@bitpay.com wrote:

 You are the 12th person to report this.  It is SF, not bitpay, rewriting
 email headers and breaking authentication.


 On Tue, May 12, 2015 at 12:40 PM, Adam Weiss a...@signal11.com wrote:

 fyi, your email to bitcoin-dev is still generating google spam warnings...

 --adam





Re: [Bitcoin-development] CLTV opcode allocation; long-term plans?

2015-05-12 Thread Jorge Timón
I like the reuse with negative numbers more than the current proposal
because it doesn't imply bigger scripts-- if all the problems that may
arise can be solved, that is.
If we went that route, we would start with the initial CLTV too.
But I don't see many strong arguments in favor of using the current
trick later, when we're actually running out of opcodes, beyond the fact
that CLTV and RCLTV/op_maturity are semantically related. How
semantically related depends on the final form of RCLTV/op_maturity,
but I don't think anybody wants to block CLTV until RCLTV is ready.

So we could just deploy the initial CLTV (#6124) now and then decide
whether we want to reuse it with negatives for RCLTV or if we use an
independent op.
Can the people that don't like that plan give stronger arguments in
favor of the parametrized version?

I've missed IRC conversations, so I may be missing something...


On Tue, May 12, 2015 at 11:01 PM, Peter Todd p...@petertodd.org wrote:
 On Tue, May 12, 2015 at 08:38:27PM +, Luke Dashjr wrote:
 It should actually be straightforward to softfork RCLTV in as a negative 
 CLTV.
 All nLockTime are >= any negative number, so a negative number makes CLTV a
 no-op always. Therefore, it is clean to define negative numbers as relative
 later. It's also somewhat obvious to developers, since negative numbers often
 imply an offset (eg, negative list indices in Python).

 Doing this makes handling the year 2038 problem a good deal more
 complex.

 The CLTV codebase specifically fails on negative arguments to avoid any
 ambiguity or implementation differences here.

 --
 'peter'[:-1]@petertodd.org
 0e7980aab9c096c46e7f34c43a661c5cb2ea71525ebb8af7





[Bitcoin-development] Bitcoin transaction

2015-05-12 Thread Telephone Lemien
Hello everybody,
I want to know what the difference is, technically, between a Bitcoin
transaction and a colored-coins transaction.

Thanks


Re: [Bitcoin-development] Long-term mining incentives

2015-05-12 Thread Pedro Worcel
Disclaimer: I don't know anything about Bitcoin.

 2) Proof-of-idle supported (I wish Tadge Dryja would publish his
proof-of-idle idea)
 3) Fees purely as transaction-spam-prevention measure, chain security via
alternative consensus algorithm (in this scenario there is very little
mining).

I don't understand why you would casually mention moving away from Proof of
Work; I thought that was the big breakthrough that made Bitcoin possible at
all?

Thanks,
Pedro

2015-05-13 4:10 GMT+12:00 Gavin Andresen gavinandre...@gmail.com:

 Added back the list, I didn't mean to reply privately:

 Fair enough, I'll try to find time in the next month or three to write up
 four plausible future scenarios for how mining incentives might work:

 1) Fee-supported with very large blocks containing lots of tiny-fee
 transactions
 2) Proof-of-idle supported (I wish Tadge Dryja would publish his
 proof-of-idle idea)
 3) Fees purely as transaction-spam-prevention measure, chain security via
 alternative consensus algorithm (in this scenario there is very little
 mining).
 4) Fee supported with small blocks containing high-fee transactions moving
 coins to/from sidechains.

 Would that be helpful, or do you have some reason for thinking that we
 should pick just one and focus all of our efforts on making that one
 scenario happen?

 I always think it is better, when possible, not to bet on one horse.


 On Tue, May 12, 2015 at 10:39 AM, Thomas Voegtlin thom...@electrum.org
 wrote:

 Le 12/05/2015 15:44, Gavin Andresen a écrit :
  Ok, here's my scenario:
 
  https://blog.bitcoinfoundation.org/a-scalability-roadmap/
 
  It might be wrong. I welcome other people to present their road maps.
 

 [answering to you only because you answered to me and not to the list;
 feel free to repost this to the list though]

 Yes, that's exactly the kind of roadmap I am asking for. But your blog
 post does not say anything about long term mining incentives, it only
 talks about scalability. My point is that we need the same kind of thing
 for miners incentives.




 --
 --
 Gavin Andresen






Re: [Bitcoin-development] Proposed additional options for pruned nodes

2015-05-12 Thread Tier Nolan
On Tue, May 12, 2015 at 8:03 PM, Gregory Maxwell gmaxw...@gmail.com wrote:


 (0) Block coverage should have locality; historical blocks are
 (almost) always needed in contiguous ranges.   Having random peers
 with totally random blocks would be horrific for performance; as you'd
 have to hunt down a working peer and make a connection for each block
 with high probability.

 (1) Block storage on nodes with a fraction of the history should not
 depend on believing random peers; because listening to peers can
 easily create attacks (e.g. someone could break the network; by
 convincing nodes to become unbalanced) and not useful-- it's not like
 the blockchain is substantially different for anyone; if you're to the
 point of needing to know coverage to fill then something is wrong.
 Gaps would be handled by archive nodes, so there is no reason to
 increase vulnerability by doing anything but behaving uniformly.

 (2) The decision to contact a node should need O(1) communications,
 not just because of the delay of chasing around just to find who has
 someone; but because that chasing process usually makes the process
 _highly_ sybil vulnerable.

 (3) The expression of what blocks a node has should be compact (e.g.
 not a dense list of blocks) so it can be rumored efficiently.

 (4) Figuring out what block (ranges) a peer has given should be
 computationally efficient.

 (5) The communication about what blocks a node has should be compact.

 (6) The coverage created by the network should be uniform, and should
 remain uniform as the blockchain grows; ideally you shouldn't need
 to update your state to know what blocks a peer will store in the
 future, assuming that it doesn't change the amount of data it's
 planning to use. (What Tier Nolan proposes sounds like it fails this
 point)

 (7) Growth of the blockchain shouldn't cause much (or any) need to
 refetch old blocks.


M = 1,000,000
N = number of starts

S(0) = hash(seed) mod M
...
S(n) = hash(S(n-1)) mod M

This generates a sequence of start points.  If the start point is less than
the block height, then it counts as a hit.

The node stores the 50MB of data starting at the block at height S(n).

As the blockchain increases in size, new starts will be less than the block
height.  This means some other runs would be deleted.

A weakness is that it is random with regards to block heights.  Tiny blocks
have the same priority as larger blocks.
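As a sketch (assuming SHA-256 for the hash and chaining the raw digest
between steps, neither of which the proposal fixes), the start-point
sequence and the hit test look like:

```python
import hashlib

M = 1_000_000  # modulus for start points, as in the proposal

def start_points(seed: bytes, n: int) -> list[int]:
    """Generate start points S(0..n-1), where
    S(0) = hash(seed) mod M and S(k) = hash(S(k-1)) mod M."""
    points = []
    state = seed
    for _ in range(n):
        digest = hashlib.sha256(state).digest()
        points.append(int.from_bytes(digest, "big") % M)
        state = digest  # chain the raw digest into the next hash
    return points

def hits(points: list[int], height: int) -> list[int]:
    """Start points below the current block height count as hits;
    the node stores a 50MB run beginning at each hit."""
    return [s for s in points if s < height]
```

Anyone knowing (seed, N) can recompute the same start points locally,
which is what keeps the announcement compact.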

0) Blocks are local, in 50MB runs
1) Agreed, nodes should download headers-first (or some other compact way
of finding the highest POW chain)
2) M could be fixed, N and the seed are all that is required.  The seed
doesn't have to be that large.  If 1% of the blockchain is stored, then 16
bits should be sufficient so that every block is covered by seeds.
3) N is likely to be less than 2 bytes and the seed can be 2 bytes
4) A 1% cover of 50GB of blockchain would have 10 starts @ 50MB per run.
That is 10 hashes.  They don't even necessarily need to be cryptographic hashes.
5) Isn't this the same as 3?
6) Every block has the same odds of being included.  There inherently needs
to be an update when a node deletes some info due to exceeding its cap.  N
can be dropped one run at a time.
7) When new starts drop below the tip height, N can be decremented and that
one run is deleted.

There would need to be a special rule to ensure the low height blocks are
covered.  Nodes should keep the first 50MB of blocks with some probability
(10%?)
--
One dashboard for servers and applications across Physical-Virtual-Cloud 
Widest out-of-the-box monitoring support with 50+ applications
Performance metrics, stats and reports that give you Actionable Insights
Deep dive visibility with transaction tracing using APM Insight.
http://ad.doubleclick.net/ddm/clk/290420510;117567292;y___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Proposed additional options for pruned nodes

2015-05-12 Thread gabe appleton
This is exactly the sort of solution I was hoping for. It seems this is the
minimal modification to make it work, and, if someone was willing to work
with me, I would love to help implement this.

My only concern is that if the --max-size flag is not included, this
delivers significantly less benefit to the end user. Still a good chunk,
but possibly not enough.
On May 12, 2015 6:03 PM, Tier Nolan tier.no...@gmail.com wrote:



 On Tue, May 12, 2015 at 8:03 PM, Gregory Maxwell gmaxw...@gmail.com
 wrote:


 (0) Block coverage should have locality; historical blocks are
 (almost) always needed in contiguous ranges.   Having random peers
 with totally random blocks would be horrific for performance; as you'd
 have to hunt down a working peer and make a connection for each block
 with high probability.

 (1) Block storage on nodes with a fraction of the history should not
 depend on believing random peers; because listening to peers can
 easily create attacks (e.g. someone could break the network by
 convincing nodes to become unbalanced), and it's not useful -- it's not like
 the blockchain is substantially different for anyone; if you're to the
 point of needing to know coverage to fill then something is wrong.
 Gaps would be handled by archive nodes, so there is no reason to
 increase vulnerability by doing anything but behaving uniformly.

 (2) The decision to contact a node should need O(1) communications,
 not just because of the delay of chasing around just to find who has
 someone; but because that chasing process usually makes the process
 _highly_ sybil vulnerable.

 (3) The expression of what blocks a node has should be compact (e.g.
 not a dense list of blocks) so it can be rumored efficiently.

 (4) Figuring out what block (ranges) a peer has given should be
 computationally efficient.

 (5) The communication about what blocks a node has should be compact.

 (6) The coverage created by the network should be uniform, and should
 remain uniform as the blockchain grows; ideally you shouldn't need
 to update your state to know what blocks a peer will store in the
 future, assuming that it doesn't change the amount of data it's
 planning to use. (What Tier Nolan proposes sounds like it fails this
 point)

 (7) Growth of the blockchain shouldn't cause much (or any) need to
 refetch old blocks.


 M = 1,000,000
 N = number of starts

 S(0) = hash(seed) mod M
 ...
 S(n) = hash(S(n-1)) mod M

 This generates a sequence of start points.  If the start point is less
 than the block height, then it counts as a hit.

 The node stores the 50MB of data starting at the block at height S(n).

 As the blockchain increases in size, new starts will be less than the
 block height.  This means some other runs would be deleted.

 A weakness is that it is random with regards to block heights.  Tiny
 blocks have the same priority as larger blocks.

 0) Blocks are local, in 50MB runs
 1) Agreed, nodes should download headers-first (or some other compact way
 of finding the highest POW chain)
 2) M could be fixed, N and the seed are all that is required.  The seed
 doesn't have to be that large.  If 1% of the blockchain is stored, then 16
 bits should be sufficient so that every block is covered by seeds.
 3) N is likely to be less than 2 bytes and the seed can be 2 bytes
 4) A 1% cover of 50GB of blockchain would have 10 starts @ 50MB per run.
 That is 10 hashes.  They don't even necessarily need to be cryptographic hashes.
 5) Isn't this the same as 3?
 6) Every block has the same odds of being included.  There inherently
 needs to be an update when a node deletes some info due to exceeding its
 cap.  N can be dropped one run at a time.
 7) When new starts drop below the tip height, N can be decremented and
 that one run is deleted.

 There would need to be a special rule to ensure the low height blocks are
 covered.  Nodes should keep the first 50MB of blocks with some probability
 (10%?)





Re: [Bitcoin-development] Long-term mining incentives

2015-05-12 Thread Adam Back
I think it's fair to say that, for now, no one knows how to make a
consensus that works in a decentralised fashion without proof-of-work
and that doesn't weaken the bitcoin security model.

I am presuming Gavin is just saying, in the context of not pre-judging
the future, that maybe in the far future another innovation might be
found (or alternatively maybe it's not mathematically possible).

Towards that, it would be useful to try further to prove this one way
or the other (e.g. prove that proof-of-stake can't work, if that is
generically mathematically provable).

Adam

On 12 May 2015 at 14:24, Pedro Worcel pe...@worcel.com wrote:
 Disclaimer: I don't know anything about Bitcoin.

 2) Proof-of-idle supported (I wish Tadge Dryja would publish his
 proof-of-idle idea)
 3) Fees purely as transaction-spam-prevention measure, chain security via
 alternative consensus algorithm (in this scenario there is very little
 mining).

 I don't understand why you would casually mention moving away from Proof of
 Work; I thought that was the big breakthrough that made Bitcoin possible at
 all?

 Thanks,
 Pedro

 2015-05-13 4:10 GMT+12:00 Gavin Andresen gavinandre...@gmail.com:

 Added back the list, I didn't mean to reply privately:

 Fair enough, I'll try to find time in the next month or three to write up
 four plausible future scenarios for how mining incentives might work:

 1) Fee-supported with very large blocks containing lots of tiny-fee
 transactions
 2) Proof-of-idle supported (I wish Tadge Dryja would publish his
 proof-of-idle idea)
 3) Fees purely as transaction-spam-prevention measure, chain security via
 alternative consensus algorithm (in this scenario there is very little
 mining).
 4) Fee supported with small blocks containing high-fee transactions moving
 coins to/from sidechains.

 Would that be helpful, or do you have some reason for thinking that we
 should pick just one and focus all of our efforts on making that one
 scenario happen?

 I always think it is better, when possible, not to bet on one horse.


 On Tue, May 12, 2015 at 10:39 AM, Thomas Voegtlin thom...@electrum.org
 wrote:

 Le 12/05/2015 15:44, Gavin Andresen a écrit :
  Ok, here's my scenario:
 
  https://blog.bitcoinfoundation.org/a-scalability-roadmap/
 
  It might be wrong. I welcome other people to present their road maps.
 

 [answering to you only because you answered to me and not to the list;
 feel free to repost this to the list though]

 Yes, that's exactly the kind of roadmap I am asking for. But your blog
 post does not say anything about long term mining incentives, it only
 talks about scalability. My point is that we need the same kind of thing
 for miners incentives.




 --
 --
 Gavin Andresen









Re: [Bitcoin-development] Long-term mining incentives

2015-05-12 Thread Thomas Voegtlin
Thank you for your answer.

I agree that a lot of things will change, and I am not asking for a
prediction of technological developments; prediction is certainly
impossible. What I would like to have is some sort of reference scenario
for the future of Bitcoin. Something a bit like the Standard Model in
Physics. The reference scenario should not be a prediction of the
future, that's not the point. In fact, it will have to be updated
every time technological evolution or code changes render it obsolete.

However, the reference scenario should be a workable path through the
future, using today's technologies and today's knowledge, and including
all planned code changes. It should be, as much as possible, amenable to
quantitative analysis. It could be used to justify controversial
decisions such as a hard fork.

Your proposal of a block size increase would be much stronger if it came
with such a scenario. It would show that you know where you are going.



Le 11/05/2015 19:29, Gavin Andresen a écrit :
 I think long-term the chain will not be secured purely by proof-of-work. I
 think when the Bitcoin network was tiny running solely on people's home
 computers proof-of-work was the right way to secure the chain, and the only
 fair way to both secure the chain and distribute the coins.
 
 See https://gist.github.com/gavinandresen/630d4a6c24ac6144482a  for some
 half-baked thoughts along those lines. I don't think proof-of-work is the
 last word in distributed consensus (I also don't think any alternatives are
 anywhere near ready to deploy, but they might be in ten years).
 
 I also think it is premature to worry about what will happen in twenty or
 thirty years when the block subsidy is insignificant. A lot will happen in
 the next twenty years. I could spin a vision of what will secure the chain
 in twenty years, but I'd put a low probability on that vision actually
 turning out to be correct.
 
 That is why I keep saying Bitcoin is an experiment. But I also believe that
 the incentives are correct, and there are a lot of very motivated, smart,
 hard-working people who will make it work. When you're talking about trying
 to predict what will happen decades from now, I think that is the best you
 can (honestly) do.
 



Re: [Bitcoin-development] Proposed additional options for pruned nodes

2015-05-12 Thread Daniel Kraft
Hi all!

On 2015-05-12 21:03, Gregory Maxwell wrote:
 Summarizing from memory:

In the context of this discussion, let me also restate an idea I've
proposed in Bitcointalk for this.  It is probably not perfect and could
surely be adapted (I'm interested in that), but I think it meets
most/all of the criteria stated below.  It is similar to the idea with
start points, but gives O(log height) instead of O(height) for
determining which blocks a node has.

Let me for simplicity assume that the node wants to store 50% of all
blocks.  It is straight-forward to extend the scheme so that this is
configurable:

1) Create some kind of seed that can be compact and will be sent to
other peers to define which blocks the node has.  Use it to initialise a
PRNG of some sort.

2) Divide the range of all blocks into intervals with exponentially
growing size.  I.e., something like this:

1, 1, 2, 2, 4, 4, 8, 8, 16, 16, ...

With this, only O(log height) intervals are necessary to cover height
blocks.

3) Using the PRNG, *one* of the two intervals of each length is
selected.  The node stores these blocks and discards the others.
(Possibly keeping the last 200 or 2,016 or whatever blocks additionally.)
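A minimal sketch of steps 1-3 (assuming a SHA-256-seeded PRNG and
half-open height ranges; the post does not fix either choice):

```python
import hashlib
import random

def kept_intervals(seed: bytes, height: int) -> list[tuple[int, int]]:
    """For each interval size 1, 2, 4, ..., keep one of the two
    intervals of that size, chosen by a seed-driven PRNG.

    Returns half-open (start, end) ranges the node stores; blocks
    [0, height) are covered by O(log height) intervals."""
    rng = random.Random(hashlib.sha256(seed).digest())
    kept = []
    pos, size = 0, 1
    while pos < height:
        # the two candidate intervals of this size
        first = (pos, min(pos + size, height))
        second = (pos + size, min(pos + 2 * size, height))
        choice = first if rng.random() < 0.5 else second
        if choice[0] < choice[1]:  # skip empty (truncated) intervals
            kept.append(choice)
        pos += 2 * size
        size *= 2
    return kept

def has_block(seed: bytes, height: int, block: int) -> bool:
    """O(log height) membership test, recomputable from the seed alone."""
    return any(a <= block < b for a, b in kept_intervals(seed, height))
```

Because the PRNG draws happen in a fixed order starting from the seed,
the choices for the small intervals stay stable as the chain grows, so
peers never need a seed update.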

 (0) Block coverage should have locality; historical blocks are
 (almost) always needed in contiguous ranges.   Having random peers
 with totally random blocks would be horrific for performance; as you'd
 have to hunt down a working peer and make a connection for each block
 with high probability.

You get contiguous block ranges (with at most O(log height) breaks).
Also ranges of newer blocks are longer, which may be an advantage if
those blocks are needed more often.

 (1) Block storage on nodes with a fraction of the history should not
 depend on believing random peers; because listening to peers can
 easily create attacks (e.g. someone could break the network; by
 convincing nodes to become unbalanced) and not useful-- it's not like
 the blockchain is substantially different for anyone; if you're to the
 point of needing to know coverage to fill then something is wrong.
 Gaps would be handled by archive nodes, so there is no reason to
 increase vulnerability by doing anything but behaving uniformly.

With my proposal, each node determines randomly and on its own which
blocks to store.  No believing anyone.

 (2) The decision to contact a node should need O(1) communications,
 not just because of the delay of chasing around just to find who has
 someone; but because that chasing process usually makes the process
 _highly_ sybil vulnerable.

Not exactly sure what you mean by that, but I think that's fulfilled.
You can (locally) compute in O(log height) from a node's seed whether or
not it has the blocks you need.  This needs only communication about the
node's seed.

 (3) The expression of what blocks a node has should be compact (e.g.
 not a dense list of blocks) so it can be rumored efficiently.

See above.

 (4) Figuring out what block (ranges) a peer has given should be
 computationally efficient.

O(log height).  Not O(1), but that's probably not a big issue.

 (5) The communication about what blocks a node has should be compact.

See above.

 (6) The coverage created by the network should be uniform, and should
 remain uniform as the blockchain grows; ideally you shouldn't need
 to update your state to know what blocks a peer will store in the
 future, assuming that it doesn't change the amount of data it's
 planning to use. (What Tier Nolan proposes sounds like it fails this
 point)

Coverage will be uniform if the seed is created randomly and the PRNG
has good properties.  No need to update the seed if the other node's
fraction is unchanged.  (Not sure if you suggest for nodes to define a
fraction or rather an absolute size.)

 (7) Growth of the blockchain shouldn't cause much (or any) need to
 refetch old blocks.

No need to do that with the scheme.

What do you think about this idea?  Some random thoughts from myself:

*) I need to formulate it in a more general way so that the fraction can
be arbitrary and not just 50%.  This should be easy to do, and I can do
it if there's interest.

*) It is O(log height) and not O(1), but that should not be too
different for the heights that are relevant.

*) Maybe it would be better / easier to not use the PRNG at all; just
decide to *always* use the first or the second interval with a given
size.  Not sure about that.

*) With the proposed scheme, the node's actual fraction of stored blocks
will vary between 1/2 and 2/3 (if I got the mathematics right, it is
still early) as the blocks come in.  Not sure if that's a problem.  I
can do a precise analysis of this property for an extended scheme if you
are interested in it.

Yours,
Daniel

-- 
http://www.domob.eu/
OpenPGP: 1142 850E 6DFF 65BA 63D6  88A8 B249 2AC4 A733 0737
Namecoin: id/domob - https://nameid.org/?name=domob
--
Done:  Arc-Bar-Cav-Hea-Kni-Ran-Rog-Sam-Tou-Val-Wiz
To go: Mon-Pri




Re: [Bitcoin-development] Bitcoin transaction

2015-05-12 Thread Telephone Lemien
Thank you,
I know this, but I want more details on the inputs/outputs, or on the
scripts of the inputs/outputs, and how I would proceed in the code.
Thanks to all for replying.

2015-05-12 11:47 GMT+02:00 Patrick Mccorry (PGR) 
patrick.mcco...@newcastle.ac.uk:

  There is no difference to the transaction as far as I'm aware – just the
 inputs / outputs have a special meaning (and should have a special order).
 So you can track 1 BTC throughout the blockchain and this 1 BTC represents
 my asset. Someone may give a more useful answer.



 *From:* Telephone Lemien [mailto:lemienteleph...@gmail.com]
 *Sent:* 12 May 2015 10:45
 *To:* Bitcoin Dev
 *Subject:* [Bitcoin-development] Bitcoin transaction



 Hello evry body,

 I want to know what is the difference between a bitcoin transaction and
 colored coins transaction technically.

 Thanks



[Bitcoin-development] Proposed additional options for pruned nodes

2015-05-12 Thread gabe appleton
Hi,

There's been a lot of talk in the rest of the community about how the 20MB
step would increase storage needs, and that switching to pruned nodes
(partially) would reduce network security. I think I may have a solution.

There could be a hybrid option in nodes. Selecting this would do the
following:
1) Flip the --no-wallet toggle
2) Select a section of the blockchain to store fully (percentage based,
possibly on hash % sections?)
3) Begin pruning all sections not included in 2)
The idea is that you can implement it similar to how a Koorde is done, in
that the network will decide which sections it retrieves. So if the user
prompts it to store 50% of the blockchain, it would look at its peers, and
at their peers (if secure), and choose the least-occurring options from
them.
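One way to read the "least-occurring" selection above is a simple
replication count over the sections our peers report. A hypothetical
sketch (the section indexing, Counter-based ranking, and target-fraction
parameter are my assumptions, not part of the proposal):

```python
from collections import Counter

def choose_sections(peer_sections: list[set[int]],
                    num_sections: int,
                    target_fraction: float) -> set[int]:
    """Pick which blockchain sections this node should keep, preferring
    the sections least replicated among its peers (and their peers,
    if those are known and trusted).

    peer_sections: one set of stored section indices per peer.
    Returns the set of section indices this node will retain."""
    want = max(1, int(num_sections * target_fraction))
    counts = Counter({s: 0 for s in range(num_sections)})
    for sections in peer_sections:
        counts.update(sections)
    # least-occurring first; ties broken by index for determinism
    ranked = sorted(counts, key=lambda s: (counts[s], s))
    return set(ranked[:want])
```

With three peers storing {0,1}, {0,1,2} and {1} out of four sections at
a 50% target, the node would keep the under-replicated sections {2, 3}.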

This would allow them to continue validating all transactions, and still
store a full copy, just distributed among many nodes. It should overall
have little impact on security (unless I'm mistaken), and it would
significantly reduce storage needs on a node.

It would also allow for a retroactive --max-size flag, where it will prune
until it is at the specified size, and continue to prune over time, while
keeping to the sections defined by the network.

What sort of side effects or network vulnerabilities would this introduce?
I know some said it wouldn't be Sybil resistant, but how would this be less
so than a fully pruned node?