Re: [Bitcoin-development] Bitcoind-in-background mode for SPV wallets

2014-04-10 Thread Tamas Blummer
Hi Wladimir,

If the motivation of the SPV wallet is to radically extend functionality, as in 
my case, then the index is specific to the added features and the subset of the 
blockchain that is of interest for the wallet.
As you also point out, adding huge general-purpose indices to Core would
rather discourage people from running full nodes due to the excess resource
requirements.

I believe nothing would add more to Core's popularity as a trusted
background node for SPV than full validation at the lowest possible memory,
disk and CPU footprint.
Serving headers should be the default, but storing and serving full blocks
should be configurable to ranges, so people can tailor it to their available
bandwidth and disk space.

Tamas Blummer
Bits of Proof

On 09.04.2014, at 21:25, Wladimir laa...@gmail.com wrote:
 
 
 Adding an RPC call for an address -> utxo query wouldn't be a big deal. It
 has been requested before for other purposes as well; all the better if it
 helps with interaction with Electrum.
 
 Spent history would involve a much larger index, and it's not likely that
 it will end up in bitcoin.
 
 Wladimir
 



--
Put Bad Developers to Shame
Dominate Development with Jenkins Continuous Integration
Continuously Automate Build, Test & Deployment
Start a new project now. Try Jenkins in the cloud.
http://p.sf.net/sfu/13600_Cloudbees___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Bitcoind-in-background mode for SPV wallets

2014-04-10 Thread Mike Hearn
I tend to agree with slush here - counting the IPs in addr broadcasts often
gives a number like 100,000 vs just 10,000 for actually reachable nodes (or
less). It seems like optimising the NAT tunneling code would help. Starting
by adding more diagnostic stuff to the GUI. STUN support may also help.

The main constraint with home devices is not IMHO their actual power but
rather that a lot of people no longer keep computers switched on all the
time. If you don't do that, then SPV with bundled Core can't help your
security, because the SPV wallet would always be syncing from the P2P
network for performance reasons.
On 9 Apr 2014 22:13, slush sl...@centrum.cz wrote:

 I believe there are plenty of bitcoind instances running, but they don't
 have port forwarding configured properly. There's UPnP support in bitcoind,
 but it works only on simple setups.

 Maybe there's some not-yet-considered way to expose these *existing*
 instances to the Internet, to strengthen the network. Maybe just a
 self-test indicating the node is not reachable from outside (together with
 a short how-to, like in some torrent clients).
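A minimal version of the self-test suggested above could be a plain TCP connect against the node's port. This is a sketch under the assumption that the check runs from *outside* the NAT; from inside, a successful connect only proves the daemon is listening, not that port forwarding works.

```python
import socket

# Sketch of a reachability self-test. 8333 is Bitcoin's default P2P port;
# for a meaningful result this must run from a machine outside your NAT
# (or a peer must be asked to connect back).

def port_reachable(host, port=8333, timeout=3.0):
    """True if a TCP connection to host:port succeeds within `timeout`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage (hypothetical address):
# if not port_reachable("203.0.113.5"):
#     print("Node appears unreachable - check port forwarding (see how-to)")
```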

 These days IPv6 is slowly being deployed to server environments; maybe
 there's some simple way to bundle IPv6 tunnelling into bitcoind so any
 instance becomes IPv6-reachable automatically?

 Maybe there are other ideas for improving the current situation without
 the need to rework the architecture.

 Marek


 On Wed, Apr 9, 2014 at 9:33 PM, Gregory Maxwell gmaxw...@gmail.com wrote:

 On Wed, Apr 9, 2014 at 11:58 AM, Justus Ranvier justusranv...@gmail.com
 wrote:
  Anyone reading the archives of the list will see about triple the
  number of people independently confirming the resource usage problem
  than they will see denying it, so I'm not particularly worried.

 The list has open membership, there is no particular qualification or
 background required to post here. Optimal use of an information source
 requires critical reading and understanding the limitations of the
 medium. Counting comments is usually not a great way to assess
 technical considerations on an open public forum.  Doubly so because
 those comments were not actually talking about the same thing I am
 talking about.

 Existing implementations are inefficient in many known ways (and, no
 doubt, some unknown ones). This list is about developing protocol and
 implementations including improving their efficiency.  When talking
 about incentives the costs you need to consider are the costs of the
 best realistic option.  As far as I know there is no doubt from anyone
 technically experienced that under the current network rules full
 nodes can be operated with vastly less resources than current
 implementations use, it's just a question of the relatively modest
 implementation improvements.

 When you argue that Bitcoin doesn't have the right incentives (and
 thus something??) I retort that the actual resource _requirements_ of
 the protocol are very low. I gave specific example numbers to enable
 correction or clarification if I've said something wrong or
 controversial. Pointing out that existing implementations are not
 currently as efficient as the underlying requirements allow, and that
 some large number of users dislike the efficiency of existing
 implementations, doesn't tell me anything I disagree with or didn't
 already know. What's being discussed around here contributes to
 prioritizing improvements over the existing implementations.

 I hope this clarifies something.










Re: [Bitcoin-development] Bitcoind-in-background mode for SPV wallets

2014-04-10 Thread Wladimir
On Thu, Apr 10, 2014 at 8:38 AM, Mike Hearn m...@plan99.net wrote:

 I tend to agree with slush here - counting the IPs in addr broadcasts
 often gives a number like 100,000 vs just 10,000 for actually reachable
 nodes (or less). It seems like optimising the NAT tunneling code would
 help. Starting by adding more diagnostic stuff to the GUI. STUN support may
 also help.

 The main constraint with home devices is not IMHO their actual power but
 rather that a lot of people no longer keep computers switched on all the
 time. If you don't do that then spv with bundled Core can't help your
 security because the spv wallet would always be syncing from the p2p
 network for performance reasons.

I agree that there is a fundamental incompatibility in usage between
wallets and nodes. Wallets need to be online as little as possible; nodes
need to be online as much as possible.

However, a full node background process could also be running when the
wallet itself is not open - for example, by running as a system service.

Bitcoin Core's own wallet is also moving to SPV, so this means a general
solution is needed to get people to run a node when the wallet is not
running.

Maybe the node shouldn't be controlled from the wallet at all; it could be
a 'node control' user interface on its own (this is what -disablewallet
does currently). In this case, there is no need to package it with a
wallet. The only drawback is that initially people wouldn't know why or
when to install it, hence my suggestion to pack it with wallets...

Wladimir


Re: [Bitcoin-development] Bitcoind-in-background mode for SPV wallets

2014-04-10 Thread Mike Hearn
It's an optimisation problem. Home environments are much more hostile than
servers due to things like virus scanners, wildly varying memory pressure
as apps are started and shut down, highly asymmetrical upstream versus
downstream bandwidth, complicated NAT setups, people who only use laptops
(which I think is most people these days), and so on.

So I think the right way to go is to optimise the things that hurt server
node operators, like large memory and disk usage, and this will
automatically make it more pleasant to run on the desktop as well. If at
some point all the low-hanging fruit on the server side is gone, then
improving things on the desktop would be the next place to go. But we have
to be realistic. Desktop tower machines that are always on are dying and
will not be coming back. Not a single person I know uses them anymore; they
have been wiped out in favour of laptops. This is why, given the tiny size
of the Bitcoin Core development team, I do not think it makes sense to
spend precious coding hours chasing this goal.
On 10 Apr 2014 08:51, Wladimir laa...@gmail.com wrote:

 On Thu, Apr 10, 2014 at 8:38 AM, Mike Hearn m...@plan99.net wrote:

 I tend to agree with slush here - counting the IPs in addr broadcasts
 often gives a number like 100,000 vs just 10,000 for actually reachable
 nodes (or less). It seems like optimising the NAT tunneling code would
 help. Starting by adding more diagnostic stuff to the GUI. STUN support may
 also help.

 The main constraint with home devices is not IMHO their actual power but
 rather that a lot of people no longer keep computers switched on all the
 time. If you don't do that then spv with bundled Core can't help your
 security because the spv wallet would always be syncing from the p2p
 network for performance reasons.

 I agree that there is a fundamental incompatibility in usage between
 wallets and nodes. Wallets need to be online as little as possible, nodes
 need to online as much as possible.

 However, a full node background process could also be running when the
 wallet itself is not open - for example, by running as a system service.

 Bitcoin Core's own wallet is also moving to SPV, so this means a general
 solution is needed to get people to run a node when the wallet is not
 running.

 Maybe the node shouldn't be controlled from the wallet at all; it could be
 a 'node control' user interface on its own (this is what -disablewallet
 does currently). In this case, there is no need to package it with a
 wallet. The only drawback is that initially people wouldn't know why or
 when to install it, hence my suggestion to pack it with wallets...

 Wladimir




Re: [Bitcoin-development] Bitcoind-in-background mode for SPV wallets

2014-04-10 Thread Tamas Blummer
You ask why people would install this?

I find it odd that we, who hold the key to instant machine-to-machine
micropayments, do not use it to incentivise committing resources to the
network.
What about serving archive blocks to peers who pay for it?

Tamas Blummer
http://bitsofproof.com

On 10.04.2014, at 08:50, Wladimir laa...@gmail.com wrote:
 The only drawback would be that initially, people wouldn't know why or when 
 to install this, hence my suggestion to pack it with wallets...
 
 Wladimir
 





Re: [Bitcoin-development] Bitcoind-in-background mode for SPV wallets

2014-04-10 Thread Mike Hearn

 I find it is odd that we who hold the key to instant machine to machine
 micro payments do not use it to incentivise committing resources to the
 network.


It's not a new idea, obviously, but there are some practical consequences:

1) To pay a node for serving, you have to have bitcoins. To get bitcoins,
you need to sync with the network via a node. Catch 22.

2) If some nodes choose to charge and others choose to not charge, a smart
wallet will always use the free nodes. In the absence of any global load
balancing algorithms, this would lead to the free nodes getting overloaded
and collapsing whilst the for-pay nodes remain silent.

3) The only payment channel implementations today are bitcoinj's (Java) and
one written by Jeff in Javascript. There are no C++ implementations. And as
Matt and I can attest to, doing a real, solid, fully debugged
implementation that's integrated into a real app is a lot of work.

I still think the lowest hanging fruit is basic, boring optimisations
rather than architectural rethinks.


Re: [Bitcoin-development] Bitcoind-in-background mode for SPV wallets

2014-04-10 Thread Peter Todd

 But we have to be realistic. Desktop tower machines that are always on are
 dying and will not be coming back. Not a single person I know uses them
 anymore, they have been wiped out in favour of laptops. This is why, given
 the tiny size of the bitcoin core development team, I do not think it makes
 sense to spend precious coding hours chasing this goal.

Your social group is weird.

Nearly every coworker at my previous job had a tower computer at work and at 
home. Equally in my nontechnical social group lots of people, a significant 
minority if not majority, have Apple and PC desktops hooked up to large 
monitors at home for media production and games. Those who don't more often 
than not have laptops used as desktops, sitting in one place 95% of the time 
and left on.

People have found it most efficient to work at a static desk for centuries - 
that's not going away. Of course we're seeing desktop usage and sales falling, 
but that's only because mobile usage was previously forced into suboptimal 
options by technical realities. The trend will bottom out a long way from zero.

Besides, even if just 1% of bitcoin users had a machine they left on that could 
usefully contribute to the network it would still vastly outweigh the much 
smaller percentage who would run nodes on expensive hosted capacity out of the 
goodness of their hearts. If we educated users about the privacy advantages of 
full nodes and gave them software that automatically contributed back within 
defined limits we'd have tens of thousands more useful nodes in the exact same 
way that user-friendly filesharing software has led to millions of users 
contributing bandwidth to filesharing networks. Similarly take advantage of the 
fault tolerance inherent in what we're doing and ensure that our software can 
shrug off nodes with a few % of downtime - certainly possible.

Of course, this doesn't fit in the business plans of those who might want to 
run full nodes to data mine and deanonymize users for marketing, tax 
collection, and law enforcement - one of the few profitable things to do with a 
full node - but screw those people.




Re: [Bitcoin-development] Bitcoind-in-background mode for SPV wallets

2014-04-10 Thread Tamas Blummer
I know the idea is not new. I am just bringing it up to emphasize that if
we don't use it, how could we expect other networks to use it?
Machine-to-machine micropayments could become the killer application for
Bitcoin.

1) There is no catch 22, as there are plenty of ways of getting bitcoin
without bootstrapping a full node.

2) Let the market work it out rather than speculating about what would happen.

3) Serving archive blocks does not have to be part of Core; it could be a
distinct service, written in a language of your choice, using a new protocol.

As mentioned earlier, I am for a stripped-down Core that does nothing but
consensus, stores nothing beyond what that task needs, and offers an SPV
API to the wallets.

Tamas Blummer
http://bitsofproof.com

On 10.04.2014, at 11:17, Mike Hearn m...@plan99.net wrote:

 I find it is odd that we who hold the key to instant machine to machine micro 
 payments do not use it to incentivise committing resources to the network.
 
 It's not a new idea, obviously, but there are some practical consequences:
 
 1) To pay a node for serving, you have to have bitcoins. To get bitcoins, you 
 need to sync with the network via a node. Catch 22.
 
 2) If some nodes choose to charge and others choose to not charge, a smart 
 wallet will always use the free nodes. In the absence of any global load 
 balancing algorithms, this would lead to the free nodes getting overloaded 
 and collapsing whilst the for-pay nodes remain silent.
 
 3) The only payment channel implementations today are bitcoinj's (Java) and 
 one written by Jeff in Javascript. There are no C++ implementations. And as 
 Matt and I can attest to, doing a real, solid, fully debugged implementation 
 that's integrated into a real app is  a lot of work.
 
 I still think the lowest hanging fruit is basic, boring optimisations rather 
 than architectural rethinks.





Re: [Bitcoin-development] Bitcoind-in-background mode for SPV wallets

2014-04-10 Thread Peter Todd

On 10 April 2014 05:17:28 GMT-04:00, Mike Hearn m...@plan99.net wrote:

 I find it is odd that we who hold the key to instant machine to
machine
 micro payments do not use it to incentivise committing resources to
the
 network.


It's not a new idea, obviously, but there are some practical
consequences:

You're both missing a more important issue: a core security assumption of 
bitcoin is that information is so easy to spread that censorship of it becomes 
impractical. If we're at the point where nodes are charging for their data 
we've failed that assumption.

More concretely, if my business is charging for block chain data and I can
make a profit doing so via micropayments, I have perverse incentives to
drive away my competitors. If I give a peer a whole block they can sell
access to that information in turn. Why would I make it easy for them if I
don't have to?

Anyway, much of this discussion seems to stem from the misconception that 
contributing back to the network is a binary all or nothing thing - it's not. 
Over a year ago I myself was lamenting how I and the other bitcoin-wizards 
working on scalability had quickly solved every scaling problem *but* how to 
make it possible to scale up and keep mining decentralised.




Re: [Bitcoin-development] Bitcoind-in-background mode for SPV wallets

2014-04-10 Thread Mike Hearn

 1) There is no catch 22 as there are plenty of ways getting bitcoin
 without bootstrapping a full node.


I think I maybe wasn't clear. To spend coins you need transaction data.
Today, the dominant model is that people get that data by scanning the
block chain. If you can obtain the transaction data without doing that
then, either:

1) Someone is doing chain scanning for free. See my point about why pay if
you can get it for free.

2) You got your tx data direct from the person who sent you the funds,
perhaps via the payment protocol. This would resolve the catch 22 by
allowing you to spend bitcoins without actually having talked to the P2P
network first, but we're a long way from this world.

And that's it. I don't think there are any other ways to get the tx data
you need. Either someone gives it to you in the act of spending, or someone
else gives it away for free, undermining the charge-for-the-p2p-network
model.


Re: [Bitcoin-development] Bitcoind-in-background mode for SPV wallets

2014-04-10 Thread Tamas Blummer
Thanks, Peter and you convinced me. I got carried away with a thought.

It’d be great to find a spot to deploy payment channels, but I agree this is 
not it.

Tamas Blummer
http://bitsofproof.com

On 10.04.2014, at 12:40, Mike Hearn m...@plan99.net wrote:

 1) There is no catch 22 as there are plenty of ways getting bitcoin without 
 bootstrapping a full node.
 
 I think I maybe wasn't clear. To spend coins you need transaction data. 
 Today, the dominant model is that people get that data by scanning the block 
 chain. If you can obtain the transaction data without doing that then, either:
 
 1) Someone is doing chain scanning for free. See my point about why pay if 
 you can get it for free.
 
 2) You got your tx data direct from the person who sent you the funds, 
 perhaps via the payment protocol. This would resolve the catch 22 by allowing 
 you to spend bitcoins without actually having talked to the P2P network 
 first, but we're a long way from this world.
 
 And that's it. I don't think there are any other ways to get the tx data you 
 need. Either someone gives it to you in the act of spending, or someone else 
 gives it away for free, undermining the charge-for-the-p2p-network model.





Re: [Bitcoin-development] Bitcoind-in-background mode for SPV wallets

2014-04-10 Thread Wladimir
On Thu, Apr 10, 2014 at 8:04 AM, Tamas Blummer ta...@bitsofproof.com wrote:

 Serving headers should be default but storing and serving full blocks
 configurable to ranges, so people can tailor to their bandwidth and space
 available.


I do agree that it is important.

This does require changes to the P2P protocol, as currently there is no way
for a node to signal that they store only part of the block chain. Also,
clients will have to be modified to take this into account. Right now they
are under the assumption that every full node can send them every
(previous) block.
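One possible shape for such signalling is sketched below. The flag name `NODE_RECENT_ONLY` and the per-connection depth field are invented for illustration; at the time of this thread the protocol only defined `NODE_NETWORK`, which implies the whole chain is available.

```python
# Sketch: a hypothetical service bit advertising "recent blocks only",
# plus a per-connection window depth, so a client can decide whether a
# given peer is able to serve a particular block.

NODE_NETWORK = 1 << 0        # existing flag: full chain available
NODE_RECENT_ONLY = 1 << 10   # hypothetical flag: only a trailing window

def peer_can_serve(services, peer_height, blocks_kept, wanted_height):
    """Decide whether a peer can serve the block at `wanted_height`."""
    if wanted_height > peer_height:
        return False              # peer hasn't seen that block yet
    if services & NODE_NETWORK:
        return True               # full node: everything up to its tip
    if services & NODE_RECENT_ONLY:
        return wanted_height > peer_height - blocks_kept
    return False

assert peer_can_serve(NODE_NETWORK, 295000, 0, 1)
assert peer_can_serve(NODE_RECENT_ONLY, 295000, 4000, 294000)
assert not peer_can_serve(NODE_RECENT_ONLY, 295000, 4000, 1)
```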

What would this involve?

Do you know of any previous work towards this?

Wladimir


Re: [Bitcoin-development] Bitcoind-in-background mode for SPV wallets

2014-04-10 Thread Mike Hearn

 What would this involve?

 Do you know of any previous work towards this?


Chain pruning is a fairly complicated project, partly because it spans
codebases. For instance if you try and implement it *just* by changing
Bitcoin Core, you will break all the SPV clients based on bitcoinj (i.e.
all of them). Big changes to the P2P network like this require upgrading
both codebases simultaneously.

I think things like this may be why Gavin is now just chief scientist
instead of Core maintainer - in future, the changes people need will span
projects and require fairly significant planning.

From a technical perspective, it means extending addr broadcasts so nodes
broadcast how much of the chain they have, and teaching both Core and
bitcoinj how to search for nodes that have enough of the chain for them to
use. Currently bitcoinj still doesn't use addr broadcasts at all; there's
an incomplete patch available, but it was never finished or merged. So that
has to be fixed first. And that probably implies improving Bitcoin Core so
the results of getaddr are more usable, ideally as high quality as what the
DNS seeds provide, because if lots of bad addresses are returned this will
slow down initial connect time, which is an important performance metric.


[Bitcoin-development] Chain pruning

2014-04-10 Thread Mike Hearn
Chain pruning is probably a separate thread, changing subject.


 Reason is that the actual blocks available are likely to change frequently
 (if
 you keep the last week of blocks


I doubt anyone would specify blocks to keep in terms of time. More likely
it'd be in terms of megabytes, as that's the actual resource constraint on
nodes. Given a block size average it's easy to go from megabytes to
num_blocks, so I had imagined it'd be a new addr field that specifies how
many blocks from the chain head are stored. Then you'd connect to some
nodes and if they indicate their chain head - num_blocks_stored is higher
than your current chain height, you'd do a getaddr and go looking for nodes
that are storing far enough back.
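The selection rule just described can be written out directly. Note that `num_blocks_stored` is the hypothetical addr field proposed here, not an existing protocol field.

```python
# Sketch of the peer-selection rule: a node at height `our_height` can
# sync from a peer only if the peer's stored window reaches back to the
# first block we are still missing (our_height + 1).

def peer_usable_for_sync(peer_head, num_blocks_stored, our_height):
    """True if the peer stores blocks our_height+1 .. peer_head inclusive."""
    oldest_stored = peer_head - num_blocks_stored + 1
    return oldest_stored <= our_height + 1

# We are at height 290000. A peer at 295000 keeping its last 4000 blocks
# cannot fill our 5000-block gap, so we'd do a getaddr and keep looking:
assert not peer_usable_for_sync(295000, 4000, 290000)
assert peer_usable_for_sync(295000, 6000, 290000)
```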


Re: [Bitcoin-development] Bitcoind-in-background mode for SPV wallets

2014-04-10 Thread Peter Todd


On 10 April 2014 06:44:32 GMT-04:00, Tamas Blummer ta...@bitsofproof.com 
wrote:
 Thanks, Peter and you convinced me. I run away with a thought.

 It’d be great to find a spot to deploy payment channels, but I agree this
 is not it.

No problem!

I'm sure we'll see payment channels implemented sooner or later in the form
of hub-and-spoke payment networks. The idea there is that you have one or
more centralised hubs which in turn have payment channels set up to and from
payors and payees. So long as the person you want to pay is connected to the
same hub as you, or in more advanced versions connected via a Ripple-style
chain, you can push payment to the hub and get proof it did the same for the
recipient. Your loss is always limited to the incremental payment amount,
and payment is essentially instant.

Of course, it's got some disadvantages compared to standard bitcoin
transactions - it's less decentralised - but compared to other forms of
off-chain payment it is in most situations a strict improvement, and having
the capability available is always a strict improvement. Like fidelity-bonded
banks, the trust required in the hubs is low enough that with some minor
effort applied to anti-DoS you could probably get away with using even hubs
run by anonymous actors, making the centralisation less important (hubs are
essentially interchangeable). Unlike pure fidelity-bonded banks, the effort
required to build this is relatively minor!

You can even combine it with chaum tokens for anonymity. You'll want to hold 
the tokens for some amount of time to thwart timing analysis, leaving you 
somewhat vulnerable to theft, but in that case fidelity bonded banking 
principles can be applied. Other than that case the idea is probably made 
obsolete by micropayment hubs.

Regulatory issues will be interesting... If you wind up with a few central 
payment hubs, without chaum tokens, those hubs learn all details about every 
transaction made. Obviously if a big actor like BitPay implemented this there 
would be a lot of pressure on them to make those records available to law 
enforcement and tax authorities, not to mention marketing and other data 
mining. Equally I suspect that if an alternative more decentralised version was 
implemented there would be strong government pressure for those approved hubs 
to not interoperate with the decentralised hubs, and equally for merchants to 
not accept payment from the decentralised hubs.

But all the same, if widely implemented this reduces pressure to raise the 
block size enormously, keeping the underlying system decentralised. So the net 
effect is probably positive regardless.

Oh yeah, credit goes to Mike Hearn for the payment channels, and if I'm 
correct, for the hub concept as well.

Amir: You should think about adding the above to Dark Wallet. It'd be good if 
the protocols are implemented in an open and decentralised fashion first, prior 
to vendor lock-in.


--
Put Bad Developers to Shame
Dominate Development with Jenkins Continuous Integration
Continuously Automate Build, Test & Deployment 
Start a new project now. Try Jenkins in the cloud.
http://p.sf.net/sfu/13600_Cloudbees
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Bitcoind-in-background mode for SPV wallets

2014-04-10 Thread Peter Todd



On 10 April 2014 07:32:44 GMT-04:00, Pieter Wuille pieter.wui...@gmail.com 
wrote:
There were earlier discussions.

The two ideas were either using one or a few service bits to indicate
availability of blocks, or to extend addr messages with some flags to
indicate this information.

I wonder whether we can't have a hybrid: bits to indicate general
degree of availability of blocks (none, only recent, everything), but
indicate actual availability only upon actually connecting (through a
version extension, or - preferably - a separate message). Reason is
that the actual blocks available are likely to change frequently (if
you keep the last week of blocks, a 3-day old addr entry will have
quite outdated information), and not that important to actual peer
selection - only to drive the decision which blocks to ask after
connection.

Why not just put an expiration date on that information and delay deletion 
until the expiration is reached?
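Peter's expiration idea could be sketched as an addr-table entry that carries the availability claim together with an explicit expiry; `AddrEntry`, its field names, and the TTL are hypothetical, not part of the P2P protocol.

```python
import time

# Hypothetical addr-table entry: block-availability info plus an expiry,
# so stale availability claims are dropped rather than trusted long after
# the peer's retained window has moved on.
class AddrEntry:
    def __init__(self, ip, tip_height, keeps_last_n, ttl_seconds, now=None):
        self.ip = ip
        self.tip_height = tip_height      # peer's chain tip when advertised
        self.keeps_last_n = keeps_last_n  # peer claims to keep this many recent blocks
        self.expires_at = (now or time.time()) + ttl_seconds

    def is_expired(self, now=None):
        return (now or time.time()) >= self.expires_at

def prune_addr_table(entries, now=None):
    """Delay deletion until expiry is reached, then drop stale entries."""
    return [e for e in entries if not e.is_expired(now)]

fresh = AddrEntry("203.0.113.1", tip_height=295000, keeps_last_n=2016,
                  ttl_seconds=3600, now=1000.0)
stale = AddrEntry("203.0.113.2", tip_height=294000, keeps_last_n=144,
                  ttl_seconds=3600, now=1000.0)
table = prune_addr_table([fresh, stale], now=1000.0 + 7200)  # both past expiry
```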

Also, it's worth noting how the node bit solution you proposed can be done as a 
gradual upgrade path for SPV clients. From the perspective of nodes that don't 
know about it they just see the pruned nodes as SPV nodes without any chain 
data at all. The only issue would be if large numbers of users turned off their 
full nodes, but that's a possibility regardless. Done with partial UTXO set 
mode this may even result in an eventual increase in the number of full nodes.




Re: [Bitcoin-development] Bitcoind-in-background mode for SPV wallets

2014-04-10 Thread Mike Hearn

 Oh yeah, credit goes to Mike Hearn for the payment channels, and if I'm
 correct, for the hub concept as well.


Actually, the design is from Satoshi and Matt did most of the
implementation work last year during a Google internship. Though I ended up
doing a lot of work on it too. We actually got pretty far: there was
Android UI for it and a couple of apps we coded up. I wish we could have
pushed it over the finishing line and got real world usage. Hopefully we
can return to it someday soon.

I think the hub/spoke concept was invented by goldsmiths in 16th century
Italy, as they started handing pieces of paper across their benches, or
*bancos* in Italian   :-)


Re: [Bitcoin-development] Bitcoind-in-background mode for SPV wallets

2014-04-10 Thread Gregory Maxwell
On Thu, Apr 10, 2014 at 4:32 AM, Pieter Wuille pieter.wui...@gmail.com wrote:
 There were earlier discussions.

On this list.

 The two ideas were either using one or a few service bits to indicate
 availability of blocks, or to extend addr messages with some flags to
 indicate this information.

 I wonder whether we can't have a hybrid: bits to indicate general
 degree of availability of blocks (none, only recent, everything), but
 indicate actual availability only upon actually connecting (through a
 version extension, or - preferably - a separate message). Reason is
 that the actual blocks available are likely to change frequently (if
 you keep the last week of blocks, a 3-day old addr entry will have
 quite outdated information), and not that important to actual peer
 selection - only to drive the decision which blocks to ask after
 connection.

I think you actually do need the kept ranges to be circulated,
otherwise you might need to hunt for a very long time to find the
right nodes with the blocks you need.  Alternatively, you give up and
don't hunt and pick some node that has them all and we get poor load
distribution. I'd rather be in a case where the nodes that have the
full history are only hit as a last resort.

WRT the changing values, I think that is pretty uniquely related to
the most recent blocks, and so instead I think that should be handled
symbolically (e.g. the hybrid approach... a flag for "I keep the
most recent 2000 blocks" - I say 2000 because that's about where the
request offset histograms flattened out) or as a single offset range
"I keep the last N=200", and the flag or the offset would be in
addition to whatever additional range was signaled. The latter could
be infrequently changing.

Signaling _more_ and more current range data on connect seems fine to
me, I just don't think it replaces something that gets relayed.
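Gregory's two-part signal - a cheap, rarely-changing recent-window flag plus an optionally advertised extra range - could be modeled like this; the constant and field names are invented, and the example range is illustrative only.

```python
# Hypothetical availability signal: a "keeps the most recent N blocks"
# flag plus an optional static range advertised in addition. Neither is
# part of the actual P2P protocol.
RECENT_WINDOW = 2000  # roughly where the request-offset histograms flatten

def available_heights(tip, keeps_recent, static_range):
    """Return the set of block heights a peer claims to serve."""
    heights = set()
    if keeps_recent:
        heights.update(range(max(0, tip - RECENT_WINDOW + 1), tip + 1))
    if static_range is not None:
        lo, hi = static_range
        heights.update(range(lo, hi + 1))
    return heights

peer = available_heights(tip=295000, keeps_recent=True,
                         static_range=(100000, 150000))
```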

Based on the safety against reorgs and the block request access
patterns we observed I'm pretty sure we'd want any node serving blocks
at all to be at least the last N (for some number between 144 and 2000
or so). Based on the request patterns, if just the recent blocks use up
all the space you're willing to spend, then I think that's probably
still the optimal contribution...

(Just be glad I'm not suggesting coding the entire blockchain with an
error correcting code so that it doesn't matter which subset you're
holding)
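The error-correcting-code aside can be illustrated with the simplest possible erasure code, a single XOR parity chunk across k equal-sized chunks: any one missing chunk is recoverable from the survivors. A real proposal would use something like Reed-Solomon; the chunk contents here are invented.

```python
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode_with_parity(chunks):
    """k equal-length chunks -> k+1 chunks; any single one may be lost."""
    parity = reduce(xor_bytes, chunks)
    return list(chunks) + [parity]

def recover(stored, missing_index):
    """Rebuild the one missing chunk as the XOR of all survivors."""
    survivors = [c for i, c in enumerate(stored)
                 if i != missing_index and c is not None]
    return reduce(xor_bytes, survivors)

chunks = [b"blk0blk0", b"blk1blk1", b"blk2blk2"]
coded = encode_with_parity(chunks)
coded_lost = list(coded)
coded_lost[1] = None                     # pretend a node lost chunk 1
rebuilt = recover(coded_lost, missing_index=1)
```

With such a code it matters less which subset of chunks any given node holds.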



Re: [Bitcoin-development] Bitcoind-in-background mode for SPV wallets

2014-04-10 Thread Peter Todd



On 10 April 2014 07:45:16 GMT-04:00, Mike Hearn m...@plan99.net wrote:

 Oh yeah, credit goes to Mike Hearn for the payment channels, and if
I'm
 correct, for the hub concept as well.


Actually, the design is from Satoshi and Matt did most of the
implementation work last year during a Google internship.

Ah right, of course. Along those lines we should credit Jeremy Spilman (?) for 
figuring out how to get rid of the dependency on nSequence, making the protocol 
trust-free.

I do recall it having an issue with malleability, semi-fixed with the P2SH 
trick. It'd be good to clear that up for good in Pieter's proposed malleability 
patch.

Though I ended up doing a lot of work on it too. We actually got pretty far:
there was Android UI for it and a couple of apps we coded up. I wish we could
have pushed it over the finishing line and got real world usage. Hopefully we
can return to it someday soon.

I think the hub/spoke concept was invented by goldsmiths in 16th century
Italy, as they started handing pieces of paper across their benches, or
*bancos* in Italian   :-)

...and it only took another five hundred years for math to catch up and make it 
trust-free, modulo miner centralisation!




Re: [Bitcoin-development] Bitcoind-in-background mode for SPV wallets

2014-04-10 Thread Peter Todd



On 10 April 2014 07:50:55 GMT-04:00, Gregory Maxwell gmaxw...@gmail.com wrote:
(Just be glad I'm not suggesting coding the entire blockchain with an
error correcting code so that it doesn't matter which subset you're
holding)

I forgot to ask last night: if you do that, can you add new blocks to the chain 
with the encoding incrementally?




Re: [Bitcoin-development] Chain pruning

2014-04-10 Thread Wladimir
On Thu, Apr 10, 2014 at 1:37 PM, Mike Hearn m...@plan99.net wrote:

 Chain pruning is probably a separate thread, changing subject.


 Reason is that the actual blocks available are likely to change
 frequently (if
 you keep the last week of blocks


 I doubt anyone would specify blocks to keep in terms of time. More likely
 it'd be in terms of megabytes, as that's the actual resource constraint on
 nodes.


Well with bitcoin, (average) time, number of blocks and (maximum) size are
all related to each other so it doesn't matter how it is specified, it's
always possible to give estimates of all three.

As for implementation it indeed makes most sense to work with block ranges.


 Given a block size average it's easy to go from megabytes to num_blocks,
 so I had imagined it'd be a new addr field that specifies how many blocks
 from the chain head are stored. Then you'd connect to some nodes and if
 they indicate their chain head - num_blocks_stored is higher than your
 current chain height, you'd do a getaddr and go looking for nodes that are
 storing far enough back.


This assumes that nodes will always be storing the latest blocks. For
dynamic nodes that take part in the consensus this makes sense.
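The chain-head-minus-stored-blocks check Mike describes could look like this sketch; `num_blocks_stored` is the hypothetical addr field, and the function names are illustrative only.

```python
# Hypothetical peer-selection check: num_blocks_stored counts how many
# blocks back from its chain head a peer keeps.
def peer_has_block(peer_tip, num_blocks_stored, wanted_height):
    oldest_kept = peer_tip - num_blocks_stored + 1
    return oldest_kept <= wanted_height <= peer_tip

def need_more_peers(my_height, peers):
    """True if no connected peer stores far enough back to give us our
    next needed block, i.e. time to getaddr and hunt for deeper nodes."""
    return all(not peer_has_block(tip, n, my_height + 1) for tip, n in peers)
```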

Just wondering: Would there be a use for a [static] node that, say, always
serves only the first 10 blocks? Or, even, a static range like block
10 - 20?

Wladimir


Re: [Bitcoin-development] Chain pruning

2014-04-10 Thread Gregory Maxwell
On Thu, Apr 10, 2014 at 4:57 AM, Wladimir laa...@gmail.com wrote:
 Just wondering: Would there be a use for a [static] node that, say, always
 serves only the first 10 blocks? Or, even, a static range like block
 10 - 20?

The last time we discussed this sipa collected data based on how often
blocks were fetched as a function of their depth and there was a huge
increase for recent blocks that didn't really level out until 2000
blocks or so - presumably it's not uncommon for nodes to be offline for
a week or two at a time.

But sure I could see a fixed range as also being a useful contribution
though I'm struggling to figure out what set of constraints would
leave a node without following the consensus?   Obviously it has
bandwidth if you're expecting to contribute much in serving those
historic blocks... and verifying is reasonably cpu cheap with fast
ecdsa code.   Maybe it has a lot of read only storage?

I think it should be possible to express and use such a thing in the
protocol even if I'm currently unsure as to why you wouldn't do 10
- 20  _plus_ the most recent 144 that you were already keeping
around for reorgs.

In terms of peer selection, if the blocks you need aren't covered by
the nodes you're currently connected to I think you'd prefer to seek
out nodes which have the least rare-ness in the ranges they offer.
E.g. if you're looking for a block 50 from the tip, you should
probably not prefer to fetch it from someone with blocks 10-15
if it's one of only 100 nodes that has that range.
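That rarity preference can be written as a toy selection function; all inputs (peer names, ranges, copy counts) are invented for illustration.

```python
# Prefer, among peers that can serve the wanted block, the one whose
# offered range is most common across the network, keeping rare ranges
# in reserve for blocks only they can serve.
def pick_peer(wanted_height, peers, range_copies):
    """peers: {name: (lo, hi)}; range_copies: {(lo, hi): nodes offering it}."""
    candidates = [(range_copies[rng], name)
                  for name, rng in peers.items()
                  if rng[0] <= wanted_height <= rng[1]]
    if not candidates:
        return None
    # Highest copy count = most common range = safest to burden.
    return max(candidates)[1]

peers = {"recent": (294000, 295000), "archive": (0, 295000)}
range_copies = {(294000, 295000): 5000, (0, 295000): 100}
```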



Re: [Bitcoin-development] Feedback request: colored coins protocol

2014-04-10 Thread Flavien Charlon
Thanks for the valuable feedback. I see there is a strong concern with
requiring a large BTC capital for issuing coloring coins, so I am now in
the process of modifying the specification to address that. I will post an
update when this is finished.

By the way, padding doesn't solve the issue entirely (issuing 10 billion
shares still takes you 100 BTC, even with padding and 1 satoshi = 1 share),
so I am going for the solution where the asset quantity of every output is
explicitly encoded in the OP_RETURN output. That way, whether you are
issuing 1 share or 100 trillion, you never need to pay more than 540
satoshis.
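The explicit-quantity encoding could be sketched with variable-length integers, so issuing one share or 100 trillion costs the same few bytes. This is only an illustration of the idea, not the actual Open Assets wire format.

```python
# LEB128-style varints: 7 payload bits per byte, high bit = continuation.
def encode_varint(q):
    out = bytearray()
    while True:
        byte = q & 0x7F
        q >>= 7
        if q:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def decode_varints(data):
    vals, cur, shift = [], 0, 0
    for b in data:
        cur |= (b & 0x7F) << shift
        shift += 7
        if not b & 0x80:       # last byte of this value
            vals.append(cur)
            cur, shift = 0, 0
    return vals

def encode_quantities(quantities):
    """Per-output asset quantities as one OP_RETURN-sized payload."""
    return b"".join(encode_varint(q) for q in quantities)

payload = encode_quantities([1, 300, 100_000_000_000_000])  # 100 trillion
```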


On Mon, Apr 7, 2014 at 8:58 PM, Alex Mizrahi alex.mizr...@gmail.com wrote:

 This is beyond ridiculous...

 Color kernel which works with padding is still quite simple. I think we
 have extra 10-50 lines of code to handle padding in coloredcoinlib.
 Essentially we have a couple of lines like this :

 value_wop = tx.outputs[oi].value - padding

 (value_wop means value without padding).
 And then we have like 10 lines of code which selects padding for a
 transaction.

 That's not a lot of extra complexity. And it solves the problem once and
 for all.
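As a sketch of the padding approach being quoted here (the dust constant is an assumption for illustration, and the helpers are not coloredcoinlib's actual API):

```python
DUST_THRESHOLD = 546  # satoshis; assumed dust figure, for illustration

def choose_padding(color_values):
    """Pick a padding so even a 1-unit color value clears the dust rule."""
    return DUST_THRESHOLD  # simplest valid choice: pad every output by dust

def to_output_values(color_values, padding):
    return [v + padding for v in color_values]

def from_output_values(output_values, padding):
    # value_wop: "value without padding", as in the quoted kernel code
    return [v - padding for v in output_values]

padding = choose_padding([1, 50])
sent = to_output_values([1, 50], padding)
```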

 What you propose instead - a different colored coin representing 10
 shares, and another one representing 100 shares (like the different
 denominations of dollar bills) - is much more complex, and it won't work:

 Suppose you have a $100 coin, as a single coin.
 How do you send $54.23?
 That's simply impossible.

 So you'd rather push complexity to higher levels (and create inconvenience
 for end users, as you admitted yourself) than add 10-50 lines of code to
 color kernel?
 I just do not understand this.

 But I'm not going to argue. I already wrote everything which I could write
 on this topic.







Re: [Bitcoin-development] Chain pruning

2014-04-10 Thread Wladimir
On Thu, Apr 10, 2014 at 2:10 PM, Gregory Maxwell gmaxw...@gmail.com wrote:

 But sure I could see a fixed range as also being a useful contribution
 though I'm struggling to figure out what set of constraints would
 leave a node without following the consensus?   Obviously it has
 bandwidth if you're expecting to contribute much in serving those
 historic blocks... and verifying is reasonably cpu cheap with fast
 ecdsa code.   Maybe it has a lot of read only storage?


The use case is that you could burn the node implementation + block data +
a live operating system on a read-only medium. This could be set in stone
for a long time.

There would be no consensus code to keep up to date with protocol
developments, because it doesn't take active part in it.

I don't think it would be terribly useful right now, but it could be useful
when nodes that host all history become rare. It'd allow distributing
'pieces of history' in a self-contained form.


 I think it should be possible to express and use such a thing in the
 protocol even if I'm currently unsure as to why you wouldn't do 10
 - 20  _plus_ the most recent 144 that you were already keeping
 around for reorgs.


Yes, it would be nice to at least be able to express it, if it doesn't make
the protocol too finicky.

In terms of peer selection, if the blocks you need aren't covered by
 the nodes you're currently connected to I think you'd prefer to seek
 out nodes which have the least rare-ness in the ranges they offer.
 E.g. if you're looking for a block 50 from the tip, you should
 probably not prefer to fetch it from someone with blocks 10-15
 if it's one of only 100 nodes that has that range.


That makes sense.

In general, if you want a block 50 from the tip, it would be best to
request it from a node that only serves the last N (N~50) blocks, and not
a history node that could use the same bandwidth to serve earlier, rarer
blocks to others.

Wladimir


Re: [Bitcoin-development] Chain pruning

2014-04-10 Thread Brian Hoffman
This is probably just noise, but what if nodes could compress and store
earlier transaction sets (archive sets) and serve them up conditionally. So
if there were, let's say, 100 archive sets (of 10,000 blocks each) you might have
5 open at any time when you're an active archive node while the others sit
on your disk compressed and unavailable to the network. This would allow
nodes to have all full transactions but conserve disk space and network
activity since they wouldn't ever respond about every possible transaction.

This could be based on a rotational request period, based on request count
or done periodically. Once they're considered active they would be expected
to uncompress a set and make it available to the network. Clients would
have to piece together archive sets from different nodes, but if there
weren't enough archive nodes to cover the chain they could ratchet up the
amount of required open archive sets when your node was active.

I fully expect to have my idea trashed, but I'm dipping toes in the waters
of contribution.




On Thu, Apr 10, 2014 at 10:19 AM, Wladimir laa...@gmail.com wrote:


 On Thu, Apr 10, 2014 at 2:10 PM, Gregory Maxwell gmaxw...@gmail.comwrote:

 But sure I could see a fixed range as also being a useful contribution
 though I'm struggling to figure out what set of constraints would
 leave a node without following the consensus?   Obviously it has
 bandwidth if you're expecting to contribute much in serving those
 historic blocks... and verifying is reasonably cpu cheap with fast
 ecdsa code.   Maybe it has a lot of read only storage?


 The use case is that you could burn the node implementation + block data +
 a live operating system on a read-only medium. This could be set in stone
 for a long time.

 There would be no consensus code to keep up to date with protocol
 developments, because it doesn't take active part in it.

 I don't think it would be terribly useful right now, but it could be
 useful when nodes that host all history become rare. It'd allow
 distributing 'pieces of history' in a self-contained form.


 I think it should be possible to express and use such a thing in the
 protocol even if I'm currently unsure as to why you wouldn't do 10
 - 20  _plus_ the most recent 144 that you were already keeping
 around for reorgs.


 Yes, it would be nice to at least be able to express it, if it doesn't
 make the protocol too finicky.

 In terms of peer selection, if the blocks you need aren't covered by
 the nodes you're currently connected to I think you'd prefer to seek
  out nodes which have the least rare-ness in the ranges they offer.
  E.g. if you're looking for a block 50 from the tip, you should
  probably not prefer to fetch it from someone with blocks 10-15
  if it's one of only 100 nodes that has that range.


 That makes sense.

 In general, if you want a block 50 from the tip, it would be best to
 request it from a node that only serves the last N (N~50) blocks, and not
 a history node that could use the same bandwidth to serve earlier, rarer
 blocks to others.

 Wladimir







Re: [Bitcoin-development] Chain pruning

2014-04-10 Thread Mike Hearn
Suggestions always welcome!

The main problem with this is that the block chain is mostly random bytes
(hashes, keys) so it doesn't compress that well. It compresses a bit, but
not enough to change the fundamental physics.
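Mike's point is easy to measure with zlib; the byte strings below are stand-ins (random bytes for hash/key data, a repeated phrase for structured text), not actual chain data.

```python
import os
import zlib

# Pseudo-random bytes (like hashes and keys) barely compress, while
# repetitive text compresses heavily.
random_like = os.urandom(100_000)          # stands in for hash/key data
repetitive = b"spendable output " * 6_250  # ~106 KB of repeating text

random_ratio = len(zlib.compress(random_like, 9)) / len(random_like)
text_ratio = len(zlib.compress(repetitive, 9)) / len(repetitive)
```

The random-like data comes out essentially incompressible, which is why compression cannot change the fundamental storage requirement.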

However, that does not mean the entire chain has to be stored on expensive
rotating platters. I've suggested that in some star trek future where the
chain really is gigantic, it could be stored on tape and spooled off at
high speed. Literally a direct DMA from tape drive to NIC. But we're not
there yet :)


Re: [Bitcoin-development] Chain pruning

2014-04-10 Thread Brian Hoffman
Looks like only about ~30% disk space savings so I see your point. Is there
a critical reason why blocks couldn't be formed into superblocks that are
chained together and nodes could serve a specific superblock, which could
be pieced together from different nodes to get the full blockchain? This
would allow participants with limited resources to serve full portions of
the blockchain rather than limited pieces of the entire blockchain.


On Thu, Apr 10, 2014 at 12:28 PM, Mike Hearn m...@plan99.net wrote:

 Suggestions always welcome!

 The main problem with this is that the block chain is mostly random bytes
 (hashes, keys) so it doesn't compress that well. It compresses a bit, but
 not enough to change the fundamental physics.

 However, that does not mean the entire chain has to be stored on expensive
 rotating platters. I've suggested that in some star trek future where the
 chain really is gigantic, it could be stored on tape and spooled off at
 high speed. Literally a direct DMA from tape drive to NIC. But we're not
 there yet :)



Re: [Bitcoin-development] Chain pruning

2014-04-10 Thread Ricardo Filipe
anyway, any kind of compression that comes to the blockchain is
orthogonal to pruning.

I agree that you will probably want some kind of replication on more
recent nodes than on older ones. However, nodes with older blocks
don't need to be static, get the block distribution algorithm to
sort it out.

2014-04-10 17:28 GMT+01:00 Mike Hearn m...@plan99.net:
 Suggestions always welcome!

 The main problem with this is that the block chain is mostly random bytes
 (hashes, keys) so it doesn't compress that well. It compresses a bit, but
 not enough to change the fundamental physics.

 However, that does not mean the entire chain has to be stored on expensive
 rotating platters. I've suggested that in some star trek future where the
 chain really is gigantic, it could be stored on tape and spooled off at high
 speed. Literally a direct DMA from tape drive to NIC. But we're not there
 yet :)





Re: [Bitcoin-development] Chain pruning

2014-04-10 Thread Ricardo Filipe
that's what blockchain pruning is all about :)

2014-04-10 17:47 GMT+01:00 Brian Hoffman brianchoff...@gmail.com:
 Looks like only about ~30% disk space savings so I see your point. Is there
 a critical reason why blocks couldn't be formed into superblocks that are
 chained together and nodes could serve a specific superblock, which could be
 pieced together from different nodes to get the full blockchain? This would
 allow participants with limited resources to serve full portions of the
 blockchain rather than limited pieces of the entire blockchain.


 On Thu, Apr 10, 2014 at 12:28 PM, Mike Hearn m...@plan99.net wrote:

 Suggestions always welcome!

 The main problem with this is that the block chain is mostly random bytes
 (hashes, keys) so it doesn't compress that well. It compresses a bit, but
 not enough to change the fundamental physics.

 However, that does not mean the entire chain has to be stored on expensive
 rotating platters. I've suggested that in some star trek future where the
 chain really is gigantic, it could be stored on tape and spooled off at high
 speed. Literally a direct DMA from tape drive to NIC. But we're not there
 yet :)







Re: [Bitcoin-development] Chain pruning

2014-04-10 Thread Brian Hoffman
Okay...will let myself out now ;P


On Thu, Apr 10, 2014 at 12:54 PM, Ricardo Filipe
ricardojdfil...@gmail.comwrote:

 that's what blockchain pruning is all about :)

 2014-04-10 17:47 GMT+01:00 Brian Hoffman brianchoff...@gmail.com:
  Looks like only about ~30% disk space savings so I see your point. Is
 there
  a critical reason why blocks couldn't be formed into superblocks that
 are
  chained together and nodes could serve a specific superblock, which
 could be
  pieced together from different nodes to get the full blockchain? This
 would
  allow participants with limited resources to serve full portions of the
  blockchain rather than limited pieces of the entire blockchain.
 
 
  On Thu, Apr 10, 2014 at 12:28 PM, Mike Hearn m...@plan99.net wrote:
 
  Suggestions always welcome!
 
  The main problem with this is that the block chain is mostly random
 bytes
  (hashes, keys) so it doesn't compress that well. It compresses a bit,
 but
  not enough to change the fundamental physics.
 
  However, that does not mean the entire chain has to be stored on
 expensive
  rotating platters. I've suggested that in some star trek future where
 the
  chain really is gigantic, it could be stored on tape and spooled off at
 high
  speed. Literally a direct DMA from tape drive to NIC. But we're not
 there
  yet :)
 
 
 
 
 



Re: [Bitcoin-development] Chain pruning

2014-04-10 Thread Pieter Wuille
On Thu, Apr 10, 2014 at 6:47 PM, Brian Hoffman brianchoff...@gmail.com wrote:
 Looks like only about ~30% disk space savings so I see your point. Is there
 a critical reason why blocks couldn't be formed into superblocks that are
 chained together and nodes could serve a specific superblock, which could be
 pieced together from different nodes to get the full blockchain? This would
 allow participants with limited resources to serve full portions of the
 blockchain rather than limited pieces of the entire blockchain.

As this is a suggestion that I think I've seen come up once a month
for the past 3 years, let's try to answer it thoroughly.

The actual state of the blockchain is the UTXO set (stored in
chainstate/ by the reference client). It's the set of all unspent
transaction outputs at the currently active point in the block chain.
It is all you need for validating future blocks.

The problem is, you can't just give someone the UTXO set and expect
them to trust it, as there is no way to prove that it was the result
of processing the actual blocks.

As Bitcoin's full node uses a zero-trust model, where (apart from
one detail: the order of otherwise valid transactions) it never
assumes any data received from the outside is valid, it HAS to see the
previous blocks in order to establish the validity of the current UTXO
set. This is what initial block syncing does. Nothing but the actual
blocks can provide this data, and it is why the actual blocks need to
be available. It does not require everyone to have all blocks, though
- they just need to have seen them during processing.
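A toy model may make this concrete (illustrative structures only, not the reference client's): the chain state is just a map of unspent outputs, and each block either updates it consistently or is invalid.

```python
# Toy chain state: a dict from (txid, vout) to the unspent output value.
# Applying a block consumes the outputs its transactions spend and adds
# the outputs they create; spending anything not in the set is invalid.
# Real validation also checks scripts, amounts, proof of work, etc.

def apply_block(utxo, block):
    """Mutate `utxo` by applying `block`; raise ValueError if invalid."""
    for tx in block["txs"]:
        for outpoint in tx["inputs"]:   # (txid, vout) pairs; empty for coinbase
            if outpoint not in utxo:
                raise ValueError("spends missing output: %r" % (outpoint,))
            del utxo[outpoint]
        for vout, value in enumerate(tx["outputs"]):
            utxo[(tx["txid"], vout)] = value
    return utxo
```

Replaying every historical block through a function like this is what initial block sync amounts to; the resulting set is all that later validation needs.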

A related, but not identical evolution is merkle UTXO commitments.
This means that we shape the UTXO set as a merkle tree, compute its
root after every block, and require that the block commits to this
root hash (by putting it in the coinbase, for example). This means a
full node can copy the chain state from someone else, and check that
its hash matches what the block chain commits to. It's important to
note that this is a strict reduction in security: we're now trusting
that the longest chain (with most proof of work) commits to a valid
UTXO set (at some point in the past).
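A sketch of that idea (hypothetical serialization and tree layout; no such commitment exists in today's blocks): hash the set's entries in a canonical order up to a single root, and only accept a copied state whose root matches the commitment.

```python
# Sketch of a merkle commitment to the UTXO set. Entries are serialized
# in sorted (canonical) order so any two nodes with the same set compute
# the same root; the layout here is made up for illustration.
import hashlib

def h(b):
    return hashlib.sha256(b).digest()

def utxo_merkle_root(utxo):
    leaves = [h(repr(item).encode()) for item in sorted(utxo.items())]
    if not leaves:
        return h(b"")
    while len(leaves) > 1:
        if len(leaves) % 2:                 # duplicate last node on odd levels
            leaves.append(leaves[-1])
        leaves = [h(leaves[i] + leaves[i + 1]) for i in range(0, len(leaves), 2)]
    return leaves[0]

def check_copied_state(utxo, committed_root):
    # A node that copied `utxo` from a peer only trusts it if its root
    # matches what the most-work chain commits to.
    return utxo_merkle_root(utxo) == committed_root
```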

In essence, combining both ideas means you get superblocks (the UTXO
set is essentially the summary of the result of all past blocks), in a
way that is less-than-currently-but-perhaps-still-acceptably-validated.

-- 
Pieter



Re: [Bitcoin-development] Chain pruning

2014-04-10 Thread Brian Hoffman
Ok I think I've got a good understanding of where we're at now. I can
promise that the next person to waste your time in 30 days will not be me.
I'm pleasantly surprised to see a community that doesn't kickban newcomers
and takes the time to explain (re-explain) concepts.

Hoping to add *beneficial* thoughts in the future!


On Thu, Apr 10, 2014 at 12:59 PM, Pieter Wuille pieter.wui...@gmail.com wrote:

  On Thu, Apr 10, 2014 at 6:47 PM, Brian Hoffman brianchoff...@gmail.com
  wrote:
   Looks like only about ~30% disk space savings so I see your point. Is there
   a critical reason why blocks couldn't be formed into superblocks that are
   chained together and nodes could serve a specific superblock, which could be
   pieced together from different nodes to get the full blockchain? This would
   allow participants with limited resources to serve full portions of the
   blockchain rather than limited pieces of the entire blockchain.

 As this is a suggestion that I think I've seen come up once a month
 for the past 3 years, let's try to answer it thoroughly.

 The actual state of the blockchain is the UTXO set (stored in
 chainstate/ by the reference client). It's the set of all unspent
 transaction outputs at the currently active point in the block chain.
 It is all you need for validating future blocks.

 The problem is, you can't just give someone the UTXO set and expect
 them to trust it, as there is no way to prove that it was the result
 of processing the actual blocks.

 As Bitcoin's full node uses a zero trust model, where (apart from
 one detail: the order of otherwise valid transactions) it never
 assumes any data received from the outside is valid, it HAS to see the
 previous blocks in order to establish the validity of the current UTXO
 set. This is what initial block syncing does. Nothing but the actual
 blocks can provide this data, and it is why the actual blocks need to
 be available. It does not require everyone to have all blocks, though
 - they just need to have seen them during processing.

 A related, but not identical evolution is merkle UTXO commitments.
 This means that we shape the UTXO set as a merkle tree, compute its
 root after every block, and require that the block commits to this
 root hash (by putting it in the coinbase, for example). This means a
 full node can copy the chain state from someone else, and check that
 its hash matches what the block chain commits to. It's important to
 note that this is a strict reduction in security: we're now trusting
 that the longest chain (with most proof of work) commits to a valid
 UTXO set (at some point in the past).

 In essence, combining both ideas means you get superblocks (the UTXO
 set is essentially the summary of the result of all past blocks), in a
 way that is less-than-currently-but-perhaps-still-acceptably-validated.

 --
 Pieter



Re: [Bitcoin-development] Feedback request: colored coins protocol

2014-04-10 Thread Alex Mizrahi
 At this point, I don't think what you are doing is even colored coins
 anymore. You might want to look into Counterparty or Mastercoin.


Nope, it's still colored coins. The difference between the colored coin
model and the Mastercoin model is that colored coins are linked to
transaction outputs, while Mastercoin has a notion of address balances.

The implication of this is that in the colored coin model, explicit
dependencies allow us to rely on SPV (assuming that one can fetch the
dependency graph to link the txout in question to the genesis), while
this is not the case with Mastercoin.

While it's pretty far from the original colored coins model, what Flavien
has described is identical to it in the majority of aspects.

This is an interesting approach, but OP_RETURN size limitations can be a
significant problem for some kinds of applications.


Re: [Bitcoin-development] Bitcoind-in-background mode for SPV wallets

2014-04-10 Thread Tier Nolan
Error correction is an interesting suggestion.

If there were 10,000 nodes and each stored 0.1% of the blocks, at random,
then the odds of a given block not being stored anywhere would be about 45
in a million (0.999^10000 is about 4.5e-5).

Blocks are stored on average 10 times, so there is already reasonable
redundancy.

With 1 million blocks, 45 would be lost in that case, even though most are
stored multiple times.

With error correction codes, the chances of blocks going missing is much
lower.

For example, if there were a 32-out-of-34 Reed-Solomon-like system, then 2
blocks out of every 34 could be lost without any actual data loss for the
network.

As a back-of-the-envelope check, the odds of 2 particular missing blocks
landing within 34 of one another are 68 in 1,000,000.  That means that the
odds of 2 missing blocks falling in the same correction section are about
45 * 34 / 1,000,000 = 0.153%.  Even in that case, the missing blocks could
be reconstructed, as long as you know that they are missing.

The error correction code has taken it from being a near certainty that
some blocks would be lost to less than 0.153%.

A simple error correction system would just take 32 blocks in sequence and
then compute 2 extra blocks.

The extra blocks would have to be the same length as the longest block in
the 32 being corrected.

The shorter blocks would be padded with zeroes so everything is the same
size.

For each byte position in the blocks you compute the polynomial that goes
through the points (x, data(x)), for x = 0 to 31.  This could be done over
a finite field, or just mod 257.

You can then compute the value for x=32 and x = 33.  Those are the values
for the 2 extra blocks.

If mod 257 is used, then only the 2 extra blocks have to deal with symbols
from 0 to 256.

If you have 32 of the 34 blocks, you can compute the polynomial and thus
generate the 32 actual blocks.

This could be achieved by a soft fork by having a commitment every 32
blocks in the coinbase.

It makes the header chain much longer though.

Longer sections are more efficient, but need more calculations to recover
everything.  You could also do interleaving to handle the case where entire
sections are missing.
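The per-byte-position interpolation described above can be sketched as follows (illustrative code only, not a proposal-quality implementation; working mod the prime 257 means only the parity blocks need the extra symbol value 256, and the O(k^2)-per-symbol Lagrange evaluation would be replaced by a real RS library in practice):

```python
P = 257  # prime modulus

def _lagrange_eval(points, x):
    """Evaluate the unique polynomial through `points` at `x`, mod P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * ((x - xj) % P) % P
                den = den * ((xi - xj) % P) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return total

def make_parity(blocks, n_parity=2):
    """From k equal-length data blocks (ints 0..255, shorter blocks padded
    with zeroes), derive n_parity parity blocks (symbols 0..256)."""
    k, length = len(blocks), len(blocks[0])
    return [[_lagrange_eval([(x, blocks[x][pos]) for x in range(k)], px)
             for pos in range(length)]
            for px in range(k, k + n_parity)]

def recover(available, k, length):
    """`available` maps block index -> block contents. Any k of the
    k + n_parity blocks suffice to rebuild all k data blocks."""
    pts = sorted(available.items())[:k]
    return [[_lagrange_eval([(x, blk[pos]) for x, blk in pts], x_out)
             for pos in range(length)]
            for x_out in range(k)]
```

Any 32 of the 34 blocks determine the degree-31 polynomial at each byte position, so two arbitrary losses per group are recoverable.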


On Thu, Apr 10, 2014 at 12:54 PM, Peter Todd p...@petertodd.org wrote:




 On 10 April 2014 07:50:55 GMT-04:00, Gregory Maxwell gmaxw...@gmail.com
 wrote:
 (Just be glad I'm not suggesting coding the entire blockchain with an
 error correcting code so that it doesn't matter which subset you're
 holding)

 I forgot to ask last night: if you do that, can you add new blocks to the
 chain with the encoding incrementally?






Re: [Bitcoin-development] Chain pruning

2014-04-10 Thread Paul Rabahy
You say UTXO commitments are a strict reduction in security. If UTXO
commitments were rolled in as a soft fork, I do not see any new attacks
that could happen to a person trusting the committed UTXO + any remaining
blocks to catch up to the head.

I would imagine the soft fork to proceed similarly to the following.
1. Miners begin including UTXO commitments.
2. Miners begin rejecting blocks with invalid UTXO commitments.
3. Miners begin rejecting blocks with no UTXO commitments.

To start up, a fresh client would follow these steps.
1. Sync headers.
2. Pick a committed UTXO set that is deep enough not to get orphaned.
3. Sync blocks from the commitment point to the head.
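In pseudocode, with every helper a placeholder for real networking and validation logic, the startup sequence would look like:

```
# Steps 1-2 carry SPV-level trust (headers plus the commitment); step 3
# restores full validation from the chosen commitment point to the tip.
def bootstrap(peers, min_depth=1000):
    headers = sync_headers(peers)                  # 1. most-work header chain
    commit_height = len(headers) - 1 - min_depth   # 2. deep enough not to be orphaned
    utxo = fetch_utxo_set(peers, commit_height)
    assert utxo_root(utxo) == headers[commit_height].utxo_commitment
    for h in range(commit_height + 1, len(headers)):   # 3. full validation onward
        connect_block(utxo, fetch_block(peers, h))
    return utxo
```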

I would argue that a client following this methodology is strictly more
secure than SPV, and I don't see any attacks that make it less secure than
a full client. It is obviously still susceptible to a 51% attack, but so is
the traditional block chain. I also do not see any sybil attacks that are
strengthened by this change because it is not modifying the networking code.

I guess that if, after the soft fork happened, miners stopped including the
UTXO commitment, it would lower the overall network hash rate, but this
would be self-harming to the miners, so they have an incentive not to
do it.

Please let me know if I have missed something.


On Thu, Apr 10, 2014 at 12:59 PM, Pieter Wuille pieter.wui...@gmail.com wrote:


 As this is a suggestion that I think I've seen come up once a month
 for the past 3 years, let's try to answer it thoroughly.

 The actual state of the blockchain is the UTXO set (stored in
 chainstate/ by the reference client). It's the set of all unspent
 transaction outputs at the currently active point in the block chain.
 It is all you need for validating future blocks.

 The problem is, you can't just give someone the UTXO set and expect
 them to trust it, as there is no way to prove that it was the result
 of processing the actual blocks.

 As Bitcoin's full node uses a zero trust model, where (apart from
 one detail: the order of otherwise valid transactions) it never
 assumes any data received from the outside is valid, it HAS to see the
 previous blocks in order to establish the validity of the current UTXO
 set. This is what initial block syncing does. Nothing but the actual
 blocks can provide this data, and it is why the actual blocks need to
 be available. It does not require everyone to have all blocks, though
 - they just need to have seen them during processing.

 A related, but not identical evolution is merkle UTXO commitments.
 This means that we shape the UTXO set as a merkle tree, compute its
 root after every block, and require that the block commits to this
 root hash (by putting it in the coinbase, for example). This means a
 full node can copy the chain state from someone else, and check that
 its hash matches what the block chain commits to. It's important to
 note that this is a strict reduction in security: we're now trusting
 that the longest chain (with most proof of work) commits to a valid
 UTXO set (at some point in the past).

 In essence, combining both ideas means you get superblocks (the UTXO
 set is essentially the summary of the result of all past blocks), in a
 way that is less-than-currently-but-perhaps-still-acceptably-validated.

 --
 Pieter





Re: [Bitcoin-development] Chain pruning

2014-04-10 Thread Pieter Wuille
On Thu, Apr 10, 2014 at 8:19 PM, Paul Rabahy prab...@gmail.com wrote:
 Please let me know if I have missed something.

A 51% attack can make you believe you were paid, while you weren't.

Full node security right now validates everything - there is no way
you can ever be made to believe something invalid. The only attacks
against it are about which version of valid history eventually gets
chosen.

If you trust hashrate for determining which UTXO set is valid, a 51%
attack becomes worse in that you can be made to believe a version of
history which is in fact invalid.

-- 
Pieter



Re: [Bitcoin-development] Chain pruning

2014-04-10 Thread Mark Friedenbach
You took the quote out of context:

a full node can copy the chain state from someone else, and check that
its hash matches what the block chain commits to. It's important to
note that this is a strict reduction in security: we're now trusting
that the longest chain (with most proof of work) commits to a valid
UTXO set (at some point in the past).

The described synchronization mechanism would be to determine the
most-work block header (SPV level of security!), and then sync the UTXO
set committed to within that block. This provides strictly less security
than building the UTXO set yourself, because it is susceptible to a 51%
attack which violates protocol rules.

On 04/10/2014 11:19 AM, Paul Rabahy wrote:
 You say UTXO commitments is a strict reduction in security. If UTXO
 commitments were rolled in as a soft fork, I do not see any new attacks
 that could happen to a person trusting the committed UTXO + any
 remaining blocks to catch up to the head.
 
 I would imagine the soft fork to proceed similar to the following.
 1. Miners begin including UTXO commitments.
 2. Miners begin rejecting blocks with invalid UTXO commitments.
 3. Miners begin rejecting blocks with no UTXO commitments.
 
 To start up a fresh client it would follow the following.
 1. Sync headers.
 2. Pick a committed UTXO that is deep enough to not get orphaned.
 3. Sync blocks from commitment to head.
 
 I would argue that a client following this methodology is strictly more
 secure than SPV, and I don't see any attacks that make it less secure
 than a full client. It is obviously still susceptible to a 51% attack,
 but so is the traditional block chain. I also do not see any sybil
 attacks that are strengthened by this change because it is not modifying
 the networking code.
 
 I guess if the soft fork happened, then miners began to not include the
 UTXO commitment anymore, it would lower the overall network hash rate,
 but this would be self-harming to the miners so they have an incentive
 to not do it.
 
 Please let me know if I have missed something.



Re: [Bitcoin-development] Chain pruning

2014-04-10 Thread Tier Nolan
On Thu, Apr 10, 2014 at 7:32 PM, Pieter Wuille pieter.wui...@gmail.com wrote:

 If you trust hashrate for determining which UTXO set is valid, a 51%
 attack becomes worse in that you can be made to believe a version of
 history which is in fact invalid.


If there are invalidation proofs, then this isn't strictly true.

If you are connected to 10 nodes and only 1 is honest, it can send you the
proof that your main chain is invalid.

For bad scripts, it shows you the input transaction for the invalid input
along with the merkle path to prove it is in a previous block.
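Checking the merkle branch inside such a proof is straightforward; a sketch (Bitcoin-style double-SHA256, with serialization details omitted):

```python
# Verify that a transaction hash is committed to by a block's merkle
# root, given the sibling hashes along the path from leaf to root.
import hashlib

def dsha256(b):
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def verify_merkle_branch(txhash, branch, root):
    """branch: list of (sibling_hash, sibling_is_right), leaf to root."""
    node = txhash
    for sibling, sibling_is_right in branch:
        pair = node + sibling if sibling_is_right else sibling + node
        node = dsha256(pair)
    return node == root
```

Armed with this, a node only needs the header chain (for the roots) to check that the offending input transaction really appeared in an earlier block.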

For double spends, it could show the transaction which spent the output.

Double spends are pretty much the same as trying to spend non-existent
outputs anyway.

If the UTXO set commit was actually a merkle tree, then all updates could
be included.

Blocks could have extra data with the proofs that the UTXO set is being
updated correctly.

To update the UTXO set, you need the paths for all spent inputs.

It puts a large load on miners to keep things working, since they have to
run a full node.

If they commit the data to the chain, then SPV nodes can do local checking.

One of them will find invalid blocks eventually (even if one of the other
miners don't).


 --
 Pieter





Re: [Bitcoin-development] Chain pruning

2014-04-10 Thread Pieter Wuille
On Thu, Apr 10, 2014 at 10:12 PM, Tier Nolan tier.no...@gmail.com wrote:
 On Thu, Apr 10, 2014 at 7:32 PM, Pieter Wuille pieter.wui...@gmail.com
 wrote:

 If you trust hashrate for determining which UTXO set is valid, a 51%
 attack becomes worse in that you can be made to believe a version of
 history which is in fact invalid.


 If there are invalidation proofs, then this isn't strictly true.

I'm aware of fraud proofs, and they're a very cool idea. They allow
you to leverage some herd immunity in the system (assuming you'll be
told about invalid data you received without actually validating it).
However, they are certainly not the same thing as the zero-trust
security a fully validating node offers.

For example, consider a sybil attack that hides the actual best chain
and its fraud proofs from you, while feeding you a chain that commits
to an invalid UTXO set.

There are many ideas that make attacks harder, and they're probably
good ideas to deploy, but there is little that achieves the security
of a full node. (well, perhaps a zero-knowledge proof of having run
the validation code against the claimed chain tip to produce the known
UTXO set...).
-- 
Pieter



Re: [Bitcoin-development] Chain pruning

2014-04-10 Thread Jesus Cea
On 10/04/14 18:59, Pieter Wuille wrote:
 It's important to
 note that this is a strict reduction in security: we're now trusting
 that the longest chain (with most proof of work) commits to a valid
 UTXO set (at some point in the past).

AFAIK, current bitcoin code already sets blockchain checkpoints from
time to time. It is guaranteed that a longer chain starting before the
checkpoint is not going to be accepted suddenly. See
https://bitcointalk.org/index.php?topic=194078.0.

It could be perfectly valid to store only unspent outputs from before
the last checkpoint, if during the blockchain download the node did all
the checks.

It would be interesting, of course, to be able to verify the
unspent-output accounting having only that checkpoint data (the merkle
tree can do that, I guess). So you could detect data corruption or
manipulation on your local hard disk.

-- 
Jesús Cea Avión _/_/  _/_/_/_/_/_/
j...@jcea.es - http://www.jcea.es/ _/_/_/_/  _/_/_/_/  _/_/
Twitter: @jcea_/_/_/_/  _/_/_/_/_/
jabber / xmpp:j...@jabber.org  _/_/  _/_/_/_/  _/_/  _/_/
Things are not so easy  _/_/  _/_/_/_/  _/_/_/_/  _/_/
My name is Dump, Core Dump   _/_/_/_/_/_/  _/_/  _/_/
El amor es poner tu felicidad en la felicidad de otro - Leibniz





Re: [Bitcoin-development] Chain pruning

2014-04-10 Thread Mark Friedenbach
Checkpoints will go away, eventually.

On 04/10/2014 02:34 PM, Jesus Cea wrote:
 On 10/04/14 18:59, Pieter Wuille wrote:
 It's important to
 note that this is a strict reduction in security: we're now trusting
 that the longest chain (with most proof of work) commits to a valid
 UTXO set (at some point in the past).
 
 AFAIK, current bitcoin code already sets blockchain checkpoints from
 time to time. It is guaranteed that a longer chain starting before the
 checkpoint is not going to be accepted suddenly. See
 https://bitcointalk.org/index.php?topic=194078.0.
 
 It could be perfectly valid to store only unspent outputs from before
 the last checkpoint, if during the blockchain download the node did all
 the checks.
 
 It would be interesting, of course, to be able to verify the
 unspent-output accounting having only that checkpoint data (the merkle
 tree can do that, I guess). So you could detect data corruption or
 manipulation on your local hard disk.
 
 
 
 



Re: [Bitcoin-development] Chain pruning

2014-04-10 Thread Jesus Cea
On 11/04/14 00:15, Mark Friedenbach wrote:
 Checkpoints will go away, eventually.

Why? The points in the forum thread seem pretty sensible.

-- 
Jesús Cea Avión _/_/  _/_/_/_/_/_/
j...@jcea.es - http://www.jcea.es/ _/_/_/_/  _/_/_/_/  _/_/
Twitter: @jcea_/_/_/_/  _/_/_/_/_/
jabber / xmpp:j...@jabber.org  _/_/  _/_/_/_/  _/_/  _/_/
Things are not so easy  _/_/  _/_/_/_/  _/_/_/_/  _/_/
My name is Dump, Core Dump   _/_/_/_/_/_/  _/_/  _/_/
El amor es poner tu felicidad en la felicidad de otro - Leibniz





Re: [Bitcoin-development] Chain pruning

2014-04-10 Thread Gregory Maxwell
On Thu, Apr 10, 2014 at 3:24 PM, Jesus Cea j...@jcea.es wrote:
 On 11/04/14 00:15, Mark Friedenbach wrote:
 Checkpoints will go away, eventually.
 Why?. The points in the forum thread seem pretty sensible.

Because with headers-first synchronization, the major problems that
they solve (e.g. block-flooding DoS attacks, weak-chain isolation, and
shortcutting of checks) can be addressed in other, more efficient ways
that don't result in putting trust in third parties.

They also cause really severe confusion about the security model.

Instead you can embed in the software the knowledge that the longest
chain is at least this long, to prevent isolation attacks, which is a
lot simpler and less trusting.  You can also do randomized validation
of the deeply buried old history for performance, instead of constantly
depending on 'trusted parties' to update the software (or it gets
slower over time), and locally save your own validation fingerprints so
that if you need to reinitialize data you can remember what you've
checked so far by hash.
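The "at least this long" idea amounts to shipping a single number rather than trusted block hashes; a minimal sketch (the constant and names are made up for illustration):

```python
# Instead of checkpoint hashes, a release hard-codes a lower bound on
# the total work of the real chain. Any candidate chain with less
# cumulative work cannot be the network's main chain, which defeats
# isolation attacks that feed a victim a low-work fake chain.
MIN_KNOWN_CHAIN_WORK = 10**20   # illustrative value only

def plausibly_main_chain(total_work):
    return total_work >= MIN_KNOWN_CHAIN_WORK
```

Unlike a checkpoint, this rejects nothing valid: it only refuses to treat an obviously underpowered chain as the network's history.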



Re: [Bitcoin-development] Presenting a BIP for Shamir's Secret Sharing of Bitcoin private keys

2014-04-10 Thread Nikita Schmidt
 What do you think a big-integer division by a word-sized divisor *is*? 
 Obviously rolling your own is always an option. Are you just saying that 
 Base58 encoding and decoding is easier than Shamir's Secret Sharing because 
 the divisors are small?

Well, yes, to be fair, it is.  The small divisor and the lack of wide
modular arithmetic make base-58 encoding and decoding noticeably
smaller and easier than Shamir's Secret Sharing over GF(P256).
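To illustrate the asymmetry: the only big-number operation base-58 coding needs is short division by the constant 58, one limb at a time (a sketch, not any particular wallet's code):

```python
# Base-58 encoding via repeated short division of a big-endian base-256
# number by the word-sized constant 58 -- no general modular arithmetic
# over a 256-bit field, unlike Shamir's Secret Sharing.
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def divmod58(digits):
    """One short division of a big-endian base-256 number by 58."""
    quotient, rem = [], 0
    for d in digits:
        acc = rem * 256 + d
        quotient.append(acc // 58)
        rem = acc % 58
    while quotient and quotient[0] == 0:   # strip leading zero limbs
        quotient.pop(0)
    return quotient, rem

def base58_encode(data):
    n_leading_zeros = len(data) - len(data.lstrip(b"\x00"))
    digits = list(data[n_leading_zeros:])
    out = ""
    while digits:
        digits, rem = divmod58(digits)
        out = ALPHABET[rem] + out
    return "1" * n_leading_zeros + out     # leading zero bytes map to '1'
```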
