Re: [bitcoin-dev] Small Nodes: A Better Alternative to Pruned Nodes

2017-04-18 Thread Aymeric Vitte via bitcoin-dev
From the initial post: "The situation would likely become problematic
quickly if bitcoin-core were to ship with the defaults set to a pruned
node."

Sorry to be blunt: I read the (painful) thread below, and most of
what is in there is inept, wrong, obsolete... or biased, cf. the first
sentence above. If the idea is to invent a workaround for the fact that
pruning might/will become the default, or might/will be set by users
as the default, so that full nodes might/will disappear, then just say it
clearly instead of proposing this kind of non-solution as a solution to
secure the blockchain.

I can't believe this is serious: people are now supposed to prune but
would be forced to host a part of the blockchain. How do you expect this
to work? Why would people do this? To start pruning they
need a full node, and since they are already there, why not continue with a
full node... but indeed, why should they continue with a full node, and
therefore why should they accept to host a part of the blockchain if
they declined the first proposal?

This is absurd. You are not addressing the first priority given the
context, which is to quickly increase the number of full nodes and which
obviously includes an incentive for people to run them.

It also gives the feeling that bitcoin wants to reinvent everything,
not capitalizing on/knowing what already exists. Sorry again, but the
concepts of this proposal and others like archival nodes are just funny.


On 18/04/2017 at 15:07, Tier Nolan via bitcoin-dev wrote:
> This has been discussed before.
>
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-May/008101.html
>
> including a list of nice-to-have features by Maxwell
>
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-May/008110.html
>
> You meet most of these rules, though you do have to download blocks
> from multiple peers.
>
> The suggestions in that thread were for a way to compactly indicate
> which blocks a node has.  Each node would then store a subset of all
> the blocks.  You just download the blocks you want from the node that
> has them.
>
> Each node would be recommended to store the last few days' worth anyway.
>
>
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev

-- 
Zcash wallets made simple: https://github.com/Ayms/zcash-wallets
Bitcoin wallets made simple: https://github.com/Ayms/bitcoin-wallets
Get the torrent dynamic blocklist: http://peersm.com/getblocklist
Check the 10 M passwords list: http://peersm.com/findmyass
Anti-spies and private torrents, dynamic blocklist: http://torrent-live.org
Peersm : http://www.peersm.com
torrent-live: https://github.com/Ayms/torrent-live
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Transaction signalling

2017-04-18 Thread Tim Ruffing via bitcoin-dev
I don't have an opinion on whether signaling is a good idea in general.

However I don't think that using addresses is a good idea, because this
has privacy implications. For example, it makes it much easier to link
the addresses, e.g., inputs with change address. (The change address
votes for the same proposal as the input address.)

Tim

On Tue, 2017-04-18 at 18:07 +, Christian Decker via bitcoin-dev
wrote:
> I really like the idea of extending signalling capabilities to the
> end-users. It gives stakeholders a voice in the decisions we take in
> the network, and is a clear signal to all other involved parties. It
> reminds me of a student thesis I supervised some time ago [1], in
> which we explored various signalling ideas.
> 
> I think we have a number of fields that may be used for such
> signalling, e.g., OP_RETURN, locktime, and output scripts. I think
> OP_RETURN is probably not the field you'd want to use though since it
> adds data that needs to be transferred, stored for bootstrap, and
> outputs in the UTXO would need to be tagged with additional
> information. Locktime has the advantage of being mostly a freeform
> field for values in the past, but it clashes with other uses that may
> rely on it. Furthermore, it is the transaction creator that specifies
> the locktime, hence the signal trails one hop behind the current
> owner, i.e., the actual stakeholder.
> 
> I think probably the best field to signal would be the output
> script. It is specified by the recipient of the funds, i.e., the
> current owner, and is already stored in the UTXO, so a single pass
> can
> tally up the votes. We could for example use the last 4 bits of the
> pubkey/pubkeyhash to opt in (3 leading 0 bits) and the vote (0/1
> depending on the stakeholder's desired signal). We'd need to define
> similar semantics for other script types, but getting the standard
> scripts to be recognized should be simple.
> 
> In the spirit of full disclosure I'd like to also mention some of the
> downsides of voting this way. Unlike the OP_RETURN proposal, users
> that do not intend to signal will also be included in the tally. I'd
> expect the signals of these users to be random with a 50% chance of
> either outcome, so they should not influence the final result, but
> may
> muddy the water depending on what part of the population is
> signalling. The opt-in should make sure that the majority of votes
> are
> actually voluntary votes, and not just users that randomly select a
> pubkey/pubkeyhash, and can be adjusted as desired, though higher
> values require more grinding on behalf of the users.
> 
> The grinding may also exacerbate some problems we already have with
> the HD Wallet lookahead, since we now skip a number of addresses, so
> we should not require too many opt-in bits.
> 
> So there are some problems we'd need to tackle, but I'm really
> excited
> about this, as it could provide data to make informed decisions, and
> should put an end to the endless speculation about the will of the
> economic majority.
> 
> Cheers,
> Christian
> 
> [1] http://pub.tik.ee.ethz.ch/students/2015-HS/SA-2015-30.pdf
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Transaction signalling

2017-04-18 Thread Christian Decker via bitcoin-dev
I really like the idea of extending signalling capabilities to the
end-users. It gives stakeholders a voice in the decisions we take in
the network, and is a clear signal to all other involved parties. It
reminds me of a student thesis I supervised some time ago [1], in
which we explored various signalling ideas.

I think we have a number of fields that may be used for such
signalling, e.g., OP_RETURN, locktime, and output scripts. I think
OP_RETURN is probably not the field you'd want to use though since it
adds data that needs to be transferred, stored for bootstrap, and
outputs in the UTXO would need to be tagged with additional
information. Locktime has the advantage of being mostly a freeform
field for values in the past, but it clashes with other uses that may
rely on it. Furthermore, it is the transaction creator that specifies
the locktime, hence the signal trails one hop behind the current
owner, i.e., the actual stakeholder.

I think probably the best field to signal would be the output
script. It is specified by the recipient of the funds, i.e., the
current owner, and is already stored in the UTXO, so a single pass can
tally up the votes. We could for example use the last 4 bits of the
pubkey/pubkeyhash to opt in (3 leading 0 bits) and the vote (0/1
depending on the stakeholder's desired signal). We'd need to define
similar semantics for other script types, but getting the standard
scripts to be recognized should be simple.
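
For illustration, a rough Python sketch of that single tally pass (the bit
positions and the value-weighting are illustrative choices of mine, not a
spec):

    # Sketch: tally opt-in votes from the low nibble of a P2PKH pubkeyhash.
    # Assumes an iterable of (pubkeyhash_bytes, value_satoshi) pairs taken
    # from the UTXO set.
    def tally_votes(utxos):
        yes = no = 0
        for pubkeyhash, value in utxos:
            nibble = pubkeyhash[-1] & 0x0F   # last 4 bits of the hash
            if nibble >> 1 != 0:             # top 3 bits must be 0 to opt in
                continue
            if nibble & 1:                   # lowest bit carries the vote
                yes += value
            else:
                no += value
        return yes, no

    # Example: two opted-in outputs and one that did not opt in.
    utxos = [
        (bytes(19) + bytes([0x01]), 50_000),  # opted in, votes yes
        (bytes(19) + bytes([0x00]), 30_000),  # opted in, votes no
        (bytes(19) + bytes([0x9C]), 99_999),  # top bits set: not opted in
    ]
    print(tally_votes(utxos))  # (50000, 30000)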

In the spirit of full disclosure I'd like to also mention some of the
downsides of voting this way. Unlike the OP_RETURN proposal, users
that do not intend to signal will also be included in the tally. I'd
expect the signals of these users to be random with a 50% chance of
either outcome, so they should not influence the final result, but may
muddy the water depending on what part of the population is
signalling. The opt-in should make sure that the majority of votes are
actually voluntary votes, and not just users that randomly select a
pubkey/pubkeyhash, and can be adjusted as desired, though higher
values require more grinding on behalf of the users.

The grinding may also exacerbate some problems we already have with
the HD Wallet lookahead, since we now skip a number of addresses, so
we should not require too many opt-in bits.

So there are some problems we'd need to tackle, but I'm really excited
about this, as it could provide data to make informed decisions, and
should put an end to the endless speculation about the will of the
economic majority.

Cheers,
Christian

[1] http://pub.tik.ee.ethz.ch/students/2015-HS/SA-2015-30.pdf
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Transaction signalling

2017-04-18 Thread Erik Aronesty via bitcoin-dev
Just to be clear, the tagging would occur on the addresses, and the
weighting would be by value, so it's a measure of economic significance.
Major exchanges will regularly tag massive amounts of Bitcoins with their
capabilities.

Just adding a nice bit-field and a tagging standard, and then charting it
might be enough to "think about how to use it later".   The only problem
would be that this would interfere with "other uses of op_return" ...
colored coins, etc.
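
As a strawman, such a tagged output could be as simple as an OP_RETURN push
of a short magic plus one byte of capability bits (magic, bit meanings and
layout are invented here purely for illustration):

    # Sketch of a hypothetical OP_RETURN tagging standard: a 4-byte magic
    # ("VOTE", made up) followed by a one-byte capability bit-field.
    OP_RETURN = 0x6A

    def tag_script(bitfield: int) -> bytes:
        payload = b"VOTE" + bytes([bitfield & 0xFF])
        return bytes([OP_RETURN, len(payload)]) + payload

    # e.g. bit 0 = "supports proposal X", bit 1 = "supports proposal Y"
    print(tag_script(0b11).hex())  # 6a05564f544503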

Personally, I think that's OK, since the purpose is to tag economically
meaningful nodes to the Bitcoin ecosystem and colored coins, by definition,
only have value to "other ecosystems".

(Counterargument: suppose that in some future this is used as an
alternative to BIP9 for a user-coordinated code release, especially in
situations where miners have rejected activation of a widely regarded
proposal.  Suppose also that, in that future, colored coin ICOs that use
op-return are regularly used to float the shares of major corporations.  It
might be irresponsible to exclude them from coordinating protocol changes.)

On Tue, Apr 18, 2017 at 10:52 AM, Marcel Jamin  wrote:

> Probably a bad idea for various reasons, but tagging (fee paying)
> transactions with info about the capabilities of the node that created
> it might be interesting? Might be useful to gauge economic support for
> certain upgrades, especially if excluding long transaction chains,
> etc. At the very least it would be a far better indicator than simply
> counting reachable nodes.
>
> On 17 April 2017 at 17:50, Erik Aronesty via bitcoin-dev
>  wrote:
> > If users added a signal to OP_RETURN, might it be possible to tag all
> > validated input addresses with that signal?
> >
> > Then a node can activate a new feature after the percentage of tagged
> input
> > addresses reaches a certain level within a certain period of time?
> >
> > This could be used in addition to a flag day to trigger activation of a
> > feature with some reassurance of user uptake.
> >
> > ___
> > bitcoin-dev mailing list
> > bitcoin-dev@lists.linuxfoundation.org
> > https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
> >
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Transaction signalling

2017-04-18 Thread Marcel Jamin via bitcoin-dev
Probably a bad idea for various reasons, but tagging (fee paying)
transactions with info about the capabilities of the node that created
it might be interesting? Might be useful to gauge economic support for
certain upgrades, especially if excluding long transaction chains,
etc. At the very least it would be a far better indicator than simply
counting reachable nodes.

On 17 April 2017 at 17:50, Erik Aronesty via bitcoin-dev
 wrote:
> If users added a signal to OP_RETURN, might it be possible to tag all
> validated input addresses with that signal?
>
> Then a node can activate a new feature after the percentage of tagged input
> addresses reaches a certain level within a certain period of time?
>
> This could be used in addition to a flag day to trigger activation of a
> feature with some reassurance of user uptake.
>
>
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Small Nodes: A Better Alternative to Pruned Nodes

2017-04-18 Thread Tier Nolan via bitcoin-dev
This has been discussed before.

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-May/008101.html

including a list of nice-to-have features by Maxwell

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-May/008110.html

You meet most of these rules, though you do have to download blocks from
multiple peers.

The suggestions in that thread were for a way to compactly indicate which
blocks a node has.  Each node would then store a subset of all the
blocks.  You just download the blocks you want from the node that has them.

Each node would be recommended to store the last few days' worth anyway.
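
One way such a compact indication could work (the scheme and parameters are
my own illustration, not what the linked threads specify) is for a node to
advertise only a seed and a keep-ratio, from which every peer can recompute
exactly which heights it stores:

    import hashlib

    def stores_block(seed: bytes, height: int, keep_ratio: float) -> bool:
        digest = hashlib.sha256(seed + height.to_bytes(4, "little")).digest()
        # Map the first 8 bytes of the hash to [0, 1) and compare.
        return int.from_bytes(digest[:8], "little") / 2**64 < keep_ratio

    seed = b"\x07" * 8
    kept = [h for h in range(500_000) if stores_block(seed, h, 0.05)]
    print(len(kept))  # roughly 25,000 of 500,000 heights (~5%)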
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Draft BIP: Version bits extension with guaranteed lock-in

2017-04-18 Thread Kekcoin via bitcoin-dev
> After some thought I managed to simplify the original uaversionbits proposal 
> introducing a simple boolean flag to guarantee lock-in of a BIP9 deployment 
> by the timeout. This seems to be the simplest form combining optional flag 
> day activation with BIP9. This brings the best of both worlds allowing user 
> activated soft forks that can be activated early by the hash power.

After mulling over this proposal I think it is quite elegant; however, there is 
one big "regression" in functionality relative to BIP9, which it builds upon: 
the lack of a back-out procedure. That is to say, if a protocol change is 
deployed using this BIP9-with-lock-in-on-timeout method, it is no longer 
possible to abstain from activating it if it is shown to contain a critical flaw.

I suggest that a second version bit can be used as an abandonment vote; with 
sufficient hashpower (50% might be enough, since it is no longer about safe 
coordination of protocol change deployment) the proposed protocol change is 
abandoned. This changes the dynamic from BIP9's "opt-in" to an "opt-out" 
system, still governed by hashpower, but far less susceptible to minority-miner 
veto.
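
To make the suggested dynamic concrete, a sketch of the per-window tally
(all thresholds and names are invented for illustration, not a spec):

    # Evaluate one 2016-block window of version fields for a deploy bit and
    # a second "abandon" bit, with lock-in-on-timeout as proposed above.
    WINDOW = 2016
    ACTIVATE_THRESHOLD = 1916        # BIP9-style 95%
    ABANDON_THRESHOLD = WINDOW // 2  # simple hashpower majority

    def evaluate(versions, deploy_bit, abandon_bit, timeout_reached):
        deploy = sum(1 for v in versions if v & (1 << deploy_bit))
        abandon = sum(1 for v in versions if v & (1 << abandon_bit))
        if abandon > ABANDON_THRESHOLD:
            return "ABANDONED"
        if deploy >= ACTIVATE_THRESHOLD or timeout_reached:
            return "LOCKED_IN"
        return "STARTED"

___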
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Small Nodes: A Better Alternative to Pruned Nodes

2017-04-18 Thread Tom Zander via bitcoin-dev
On Monday, 17 April 2017 08:54:49 CEST David Vorick via bitcoin-dev wrote:
> The best alternative today to storing the full blockchain is to run a
> pruned node

The idea looks a little overly complex to me.

I suggested something similar which is a much simpler version:
https://zander.github.io/scaling/Pruning/

> # Random pruning mode
> 
> There is a large gap between the two current modes of "everything"
> (currently 75GB) and "only what we need" (2GB or so).
> 
> This mode would have two areas: it would keep a day's worth of blocks to
> make sure that any reorgs etc. would not cause a re-download, but it would
> additionally have an area that can be used to store historical data
> to be shared on the network. Maybe 20 or 50GB.
> 
> One main feature of Bitcoin is that we have massive replication. Each node
> currently holds all the same data that every other node holds. But this
> doesn't have to be the case with pruned nodes. A node itself has no need
> for historic data at all.
> 
> The suggestion is that a node stores a random set of blocks, dropping
> random blocks as the node runs out of disk space. Additionally, we would
> introduce a new way to download blocks from other nodes which allows the
> node to say it doesn't actually have the block requested.
> 
> The effect of this setup is that many different nodes together end up
> having the total amount of blocks, even though each node only has a
> fraction of the total amount.
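
For clarity, the random-drop policy described above amounts to something
like this (the window size is a placeholder of mine, not from the original
post):

    import random

    RECENT = 144  # roughly one day of blocks, kept unconditionally

    def prune_one(stored_heights, tip_height):
        # Drop one uniformly random block outside the recent window.
        candidates = [h for h in stored_heights if h < tip_height - RECENT]
        if candidates:
            stored_heights.remove(random.choice(candidates))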

-- 
Tom Zander
Blog: https://zander.github.io
Vlog: https://vimeo.com/channels/tomscryptochannel
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] Properties of an ideal PoW algorithm & implementation

2017-04-18 Thread Natanael via bitcoin-dev
To expand on this below:

On 18 Apr 2017 00:34, "Natanael"  wrote:

IMHO the best option, if we change PoW, is an algorithm that's moderately
processing heavy (we still need reasonably fast verification) and which
resists partial state reuse (not fast or fully "linear" in processing like
SHA256), just for the sake of invalidating asicboost-style attacks. It
should also have an existing reference implementation for hardware that's
provably close in performance to the theoretical ideal implementation of
the algorithm (in other words, one where we know there are no hidden
optimizations).

[...] The competition would mostly be about packing similar gate designs
closely and energy efficiency. (Now that I think about it, the proof MAY
have to consider energy use too, as a larger and slower but more efficient
chip still is competitive in mining...)


What matters for miners in terms of cost is primarily (correctly computed)
hashes per joule (watt-seconds). The most direct proxy for this in terms of
algorithm execution is the number of transistor (gate) activations per
computed hash (PoW unit).

To prove that an implementation is near optimal, you would show there's a
minimum number of necessary transistor activations per computed hash, and
that your implementation is within a reasonable range of that number.

We also need to show that for a practical implementation you can't reuse
much internal state (easiest way is "whitening" the block header,
pre-hashing or having a slow hash with an initial whitening step of its
own). This is to kill any ASICBOOST type optimization. Performance should
be constant, not linear relative to input size.
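
One way to read the whitening suggestion (purely illustrative; a real
design would need careful analysis):

    import hashlib

    def whitened_pow_hash(header80: bytes) -> bytes:
        # Pre-hash ("whiten") the header, then hash the digest together
        # with the header. Any change to the header changes the first
        # input chunk of the outer hash, so its internal state cannot be
        # reused across work units.
        whitened = hashlib.sha256(header80).digest()
        return hashlib.sha256(whitened + header80).digest()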

The PoW step should always be the most expensive part of creating a
complete block candidate! Otherwise it loses part of its meaning. It should
however still also be reasonably easy to verify.

Given that there have already been PoW ASIC optimizations for years that use
deliberately lossy hash computations just because those circuits can run
faster (X% of hashes are computed wrong, but you get Y% more computed
hashes in return, which exceeds the error rate), any proof of an
implementation being near optimal (for mining) must also consider
implementations of a design that deliberately allow errors
just to reduce the total count of transistor activations per N computed
hashes. Yes, that means the reference implementation is allowed to
be lossy.
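
A worked example of that trade-off (numbers invented for illustration):

    raw_speedup = 1.20  # lossy circuit computes 20% more hashes per joule
    error_rate = 0.05   # but 5% of those hashes are wrong

    effective = raw_speedup * (1 - error_rate)
    print(effective)    # 1.14 -> still 14% more *correct* hashes per joule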

So for a reasonably large N (number of computed hashes, to take batch
processing into consideration), the proof would show that there's a minimum
number of average gate activations per correctly computed hash, a smallest
ratio = X gate activations / (N * success rate), across all possible
implementations of the algorithm. And you'd show your implementation is
close to that ratio.
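
In symbols (my own formalization of the above, not from any existing
proof), the target would be the implementation-independent minimum

    R^{*} = \min_{I} \frac{G_{I}(N)}{N \cdot p_{I}}

where G_I(N) is the number of gate activations implementation I spends to
attempt N hashes and p_I is its fraction of correctly computed hashes; an
implementation is near optimal if its own ratio is within a small factor
of R^{*}.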

It would also have to consider a reasonable range of time-memory tradeoffs
including the potential of precomputation. Hopefully we could implement an
algorithm that effectively makes such precomputation meaningless by making
the potential gain insignificant for any reasonable ASIC chip size and
amount of precomputation resources.

A summary of important mining PoW algorithm properties:

* Constant verification speed, reasonably fast even on slow hardware

* As explained above, still slow / expensive enough to dominate the costs
of block candidate creation

* Difficulty must be easy to adjust (no problem for simple hash-style
algorithms like today)

* Cryptographic strength, something like preimage resistance (the algorithm
can't allow forcing a particular output, the chance must not be better than
random within any achievable computational bounds)

* As explained above, no hidden shortcuts. Everybody has equal knowledge.

* Predictable and close to constant PoW computation performance, and not
linear in performance relative to input size the way SHA256 is (lossy
implementations will always make it not-quite-constant)

* As explained above, no significant reusable state or other reusable work
(killing ASICBOOST)

* As explained above, no meaningful precomputation possible. No unfair
headstarts.

* Should rely only on transistors for implementation; it shouldn't depend
on memory or other components, due to unknowable future engineering results
and changes in cost

* Reasonably compact implementation, measured in memory use, CPU load and
similar metrics

* Reasonably small inputs and outputs (in line with regular hashes)

* All mining PoW should be "embarrassingly parallel" (highly
parallelizable) with minimal or no gain from batch computation;
performance scaling should be linear with increased chip size & cycle
speed.

What else is there? Did I miss anything important?
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Small Nodes: A Better Alternative to Pruned Nodes

2017-04-18 Thread Jonas Schnelli via bitcoin-dev
Hi Dave

> A node that stores the full blockchain (I will use the term archival node) 
> requires over 100GB of disk space, which I believe is one of the most 
> significant barriers to more people running full nodes. And I believe the 
> ecosystem would benefit substantially if more users were running full nodes.


Thanks for your proposal.

I agree that 100GB of data may be cumbersome for some systems, especially if 
you target end user systems (Laptops/Desktops). Though, in my opinion, for 
those systems, CPU consumption is the biggest UX blocker.
Bootstrapping a full node on a decent consumer system with default parameters 
takes days, and, during this period, you probably run at full CPU capacity and 
you will be disturbed by constant fan noise. Standard tasks may be impossible 
because your system will be slowed down to a point where even word processing 
may get difficult.
This is because Core (with its default settings) is made to sync as fast as 
possible.

Once you have verified the chain and you reach the chain tip, indeed, it will 
be much better (until you shutdown for a couple of days/hours and have to 
re-sync/catch-up).

1. I agree that we need to have a way for pruned nodes to partially serve 
historical blocks.
My personal measurements told me that around ~80% of historical block serving 
is between the tip and 1,000 blocks back.
Currently, Core nodes have only two modes of operation: „serve all historical 
blocks“ or „none“.
This makes little sense, especially if you prune to a target size of, let’s say, 
80GB (~80% of the chain).
Ideally, there would be a mode where your full node can signal a third mode „I 
keep the last 1000 blocks“ (or make this more dynamic).
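
Such a third mode could be signalled with a service-flag bit, along these
lines (bit position, name and depth are invented here, purely a sketch):

    NODE_NETWORK = 1 << 0          # existing: serves all historical blocks
    NODE_NETWORK_RECENT = 1 << 10  # hypothetical: serves recent blocks only

    def can_serve(services, requested_height, tip_height):
        if services & NODE_NETWORK:
            return True
        if services & NODE_NETWORK_RECENT:
            return requested_height > tip_height - 1000
        return False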

2. Bootstrapping new peers
I’m not sure if full nodes must be the single point of historical data storage. 
Full nodes provide a valuable service (verification, relay, filtering, etc.). 
I’m not sure if serving historical blocks is one of them. Historical blocks 
could be made available on CDNs or other file storage networks. You are going 
to verify them anyway... the serving part is pure data storage.
I’m also pretty sure that some users have stopped running full nodes because 
their upstream bandwidth consumption (from serving historical blocks) was 
getting intolerable.
Especially „consumer“ peers must have been hit by this (little experience in 
how to reduce traffic, upstream in general is bad for consumer connections, 
little resources in general).

Having a second option built into full nodes (or as an external bootstrap 
service/app) for downloading historical blocks during bootstrapping could 
probably be a relief for „small nodes“.
It could be a little daemon that downloads historical blocks from CDNs, etc., 
and feeds them into your full node over p2p/8333, kickstarting your 
bootstrapping without bothering valuable peers.
Or the alternative download could be built into the full node’s main logic.
And, if it wasn’t obvious, this must not bypass the verification!
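
A minimal sketch of that verification step in such a helper (the CDN URL is
invented; the point is that the block must hash to an id the node already
trusts from its validated header chain):

    import hashlib
    import urllib.request

    def fetch_block(expected_hash_hex: str) -> bytes:
        url = "https://blocks.example.com/" + expected_hash_hex
        raw = urllib.request.urlopen(url).read()
        header = raw[:80]
        digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
        if digest[::-1].hex() != expected_hash_hex:
            raise ValueError("CDN served a block that fails verification")
        return raw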

I’m also aware of the downsides of this. It can reduce decentralisation 
of the storage of historical bitcoin blockchain data and – eventually – 
increase the upstream bandwidth of peers willing to serve historical 
blocks (especially in a transition phase to a second „download“-option).
Maybe it’s a tradeoff between reducing decentralisation by killing low-resource 
nodes because serving historical blocks is getting too resource-intense _or_ 
reducing decentralisation by moving some percentage of the historical data 
storage away from the bitcoin p2p network.
The latter seems more promising to me.


To your proposal:
- Isn’t there a tiny fingerprinting element if peers have to pick a 
segmentation index?
- SPV bloom filter clients can’t use fragmented blocks to filter txns, right? 
How could they avoid connecting to those peers?




___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev