Re: [bitcoin-dev] Properties of an ideal PoW algorithm & implementation

2017-04-19 Thread Bram Cohen via bitcoin-dev
Repeatedly hashing so that lossy implementations simply fail sounds like a
great idea. Relying on a single crypto primitive that is as simple as
possible is also a great idea, and specifically using blake2b is
conservative: not only is it simple, but its block size is larger than the
amount of data being hashed, so asicboost-style attacks don't apply at all
and the logic for handling multiple blocks doesn't have to be built.

Memory-hard functions are a valiant effort and are holding up better than
expected, but the problem is that when they fail they fail
catastrophically, going immediately from running on completely commodity
hardware to running only on hardware from the one vendor who has pulled
off the feat of making it work. My guess is it's only a matter of time
until that happens.

So the best PoW function we know of today, assuming you're trying to keep
mining hardware as commodity as possible, is to repeatedly hash with
blake2b ten or maybe a hundred times.
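
A minimal sketch of that construction in Python (hashlib.blake2b is in the
standard library; the round count and 80-byte header below are assumptions
for illustration, not a concrete proposal):

    import hashlib

    def pow_hash(header: bytes, rounds: int = 100) -> bytes:
        # Chain blake2b so each round depends on the previous digest;
        # a lossy or shortcut implementation breaks the entire chain.
        digest = hashlib.blake2b(header).digest()
        for _ in range(rounds - 1):
            digest = hashlib.blake2b(digest).digest()
        return digest

    # An 80-byte header fits within blake2b's single 128-byte input block,
    # which is the property that rules out asicboost-style midstate reuse.
    print(pow_hash(b"\x00" * 80).hex())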

Mind you, I still think hard forking the PoW function is a very bad idea,
but if you were to do it, that would be the way to go.


Re: [bitcoin-dev] Small Nodes: A Better Alternative to Pruned Nodes

2017-04-19 Thread David Vorick via bitcoin-dev
On Tue, Apr 18, 2017 at 3:43 AM, Jonas Schnelli wrote:

>
> Hi Dave
>
> *1. I agree that we need to have a way for pruned nodes to partially serve
> historical blocks.*
> My personal measurements told me that ~80% of historical block
> serving is between the tip and -1'000 blocks.
> Currently, Core nodes have only two modes of operation: „serve all
> historical blocks“ or „none“.
> This makes little sense, especially if you prune to a target size of,
> let's say, 80GB (~80% of the chain).
> Ideally, there would be a third mode your full node can signal:
> „I keep the last 1000 blocks“ (or make this more dynamic).
>

That probably makes sense with small nodes too. The past 1000 blocks are
such a small footprint compared to the rest of the chain.
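
A quick back-of-the-envelope check, with block and chain sizes assumed at
roughly their 2017 values:

    recent_blocks = 1000
    avg_block_mb = 1.0   # assumed: near-full blocks close to the tip
    chain_gb = 110.0     # assumed: approximate full-chain size in 2017
    recent_gb = recent_blocks * avg_block_mb / 1024
    print(f"{recent_gb:.1f} GB of {chain_gb:.0f} GB = "
          f"{100 * recent_gb / chain_gb:.1f}% of the chain")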


>
> *2. Bootstrapping new peers*
> I’m not sure if full nodes must be the single point of historical data
> storage. Full nodes provide a valuable service (verification, relay,
> filtering, etc.); I’m not sure if serving historical blocks is one of them.
> Historical blocks could be made available on CDNs or other file storage
> networks. You are going to verify them anyway; the serving part is pure
> data storage.
> I’m also pretty sure that some users have stopped running full nodes
> because their upstream bandwidth consumption (from serving historical
> blocks) was getting intolerable.
> „Consumer“ peers especially must have been hit by this (little experience
> with reducing traffic, upstream is bad on consumer connections in general,
> few resources in general).
>

Perhaps it is not, but I would think it would be pretty straightforward to
configure a global bandwidth limit within Bitcoin. Many torrent clients,
and clients for protocols like Tor and i2p, include the ability to set
both speed limits and monthly bandwidth limits. Shipping Core with sane
default limits is probably sufficient to solve bandwidth issues for most
users. I don't know whether default limits would result in today's archive
nodes pulling less weight though - changing the defaults to impose limits
may slow the network down as a whole.
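
For what it's worth, Core already exposes a coarse knob along these lines:
the -maxuploadtarget option tries to keep upload traffic under a per-24h
budget. A bitcoin.conf sketch (the number is only an example, not a
recommended default):

    # Try to keep upload traffic under ~5000 MiB per 24 hours.
    maxuploadtarget=5000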

In my experience (living in a city where most people have uncapped
connections), disk usage is usually the bigger issue, but certainly
bandwidth is a known problem (especially for rural users) as well.


>
> Having a second option built into full nodes (or as an external bootstrap
> service/app) for downloading historical blocks during bootstrapping could
> probably be a relief for „small nodes“.
> It could be a little daemon that downloads historical blocks from CDNs,
> etc. and feeds them into your full node over p2p/8333, kickstarting your
> bootstrapping without bothering valuable peers.
> Or the alternative download could be built into the full node's main
> logic.
> And, if it wasn't obvious, this must not bypass verification!
>

I worry about any type of CDN being a central point of failure. CDNs cost
money, which means someone is footing the bill. Torrenting typically
relies on a DHT, which is much easier to attack than Bitcoin's peer
network. It's possible that a decentralized CDN could be used, but I don't
think any exist yet (though I am building one for unrelated reasons) that
are both sufficiently secure and incentive-compatible to be considered as
an alternative to using full nodes to bootstrap.

I just don't want to end up in a situation where 90% of users are getting
their blockchain from the same 3 websites or centralized services. And I
also don't want to rely on any p2p solution which would not stand up to a
serious adversary. Right now, I think the bitcoin p2p network is by a
significant margin the best we've got. The landscape for decentralized
data distribution is evolving rapidly, though; perhaps in a few years
there will be a superior alternative.


> *To your proposal:*
> - Isn’t there a tiny finger-printing element if peers have to pick a
> segmentation index?
> - SPV bloom filter clients can’t use fragmented blocks to filter txns,
> right? How could they avoid connecting to those peers?
>

Yes, there is a fingerprint created when nodes pick an index, and the
fingerprint gets a lot worse if a node picks multiple indexes. Though,
isn't it already required that nodes have some sort of IP address or
hidden service domain? I want to say that the fingerprint created by
picking an index is not a big deal, because it can be separated from
activity like transaction relaying and mining. Though I am not certain,
and perhaps it is a problem.
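
To make the fingerprint concern concrete, here is a hypothetical sketch in
Python; the segment count, index selection, and storage rule are invented
for illustration and are not part of the actual proposal:

    import random

    SEGMENTS = 256  # hypothetical number of archive segments

    def pick_index() -> int:
        # Each small node advertises one segment index to its peers...
        return random.randrange(SEGMENTS)

    def keeps_block(height: int, index: int) -> bool:
        # ...and stores only the historical blocks that map to it.
        # The advertised index is what makes the node re-identifiable.
        return height % SEGMENTS == index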

To be honest, I hadn't really considered SPV nodes at the time of writing.
Small nodes would still be seeing all of the new blocks, and per your
suggestion may also be storing the 1000 or so most recent blocks, but SPV
nodes would still need some way to find all of their historical
transactions. The problem is not fetching blocks; it's figuring out which
blocks are worth fetching. It may be sufficient to have 

Re: [bitcoin-dev] Properties of an ideal PoW algorithm & implementation

2017-04-19 Thread praxeology_guy via bitcoin-dev
Natanael,

=== Metal Layers ===

One factor in chip cost other than transistor count is the number of metal
layers required to route all the interconnects within the desired die-area
constraint. Needing fewer layers can reduce dependence on costly, patented
layering technology, and fewer layers are quicker and easier to
manufacture.

I'm not an expert in the field, and I can't vouch for the validity of the
entire paper, but this paper discusses various factors that impact chip
design cost:
http://www.cse.psu.edu/~juz138/files/3d-cost-tcad10.pdf

=== Early nonce mixing, Variable Length Input with Near Constant Work ===

To minimize asicboost-like optimizations, the entirety of the input should
be mixed with the nonce data as soon as possible. For example, with
Bitcoin as it is now, the 80-byte block header doesn't fully fit in one
64-byte SHA256 input block. This results in a 2nd SHA256 input block that
has only 4 bytes of nonce, with the rest constant, mixed much later than
the rest of the input... which allows for unexpected optimizations.
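
A small Python illustration of that split (the nonce value is arbitrary):

    import hashlib, struct

    header = bytearray(80)                  # version, prev hash, merkle root, time, bits, nonce
    struct.pack_into("<I", header, 76, 42)  # the 4-byte nonce sits at bytes 76-79

    first, second = header[:64], header[64:]
    # The first 64-byte SHA256 block is nonce-free, so its midstate can be
    # computed once and reused across the entire nonce range.
    print(len(first), len(second))  # 64 16
    print(hashlib.sha256(hashlib.sha256(bytes(header)).digest()).hexdigest())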

Solution: a hash algorithm whose computation time is closer to linear in
input size would be a 2-stage algorithm:
1. A 1st-stage Merkle tree hash to pre-lossy-mix-compress the
variable-length input stream down to the size of the 2nd-stage state
vector. Each bit of input should have about equal influence on each of the
output bits (minimize information loss, maximize mixed-ness).
2. Multi-round mixing of the 2nd stage, where this stage is significantly
more work than the 1st stage.
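
A rough sketch of that two-stage shape, using SHA256 for both stages (the
chunk size and round count are arbitrary assumptions):

    import hashlib

    def stage1_merkle(data: bytes, chunk: int = 64) -> bytes:
        # Stage 1: Merkle-style compression of the variable-length input
        # down to a single 32-byte state vector.
        leaves = [hashlib.sha256(data[i:i + chunk]).digest()
                  for i in range(0, max(len(data), 1), chunk)]
        while len(leaves) > 1:
            if len(leaves) % 2:
                leaves.append(leaves[-1])
            leaves = [hashlib.sha256(leaves[i] + leaves[i + 1]).digest()
                      for i in range(0, len(leaves), 2)]
        return leaves[0]

    def stage2_mix(state: bytes, rounds: int = 64) -> bytes:
        # Stage 2: multi-round mixing, deliberately the bulk of the work.
        for _ in range(rounds):
            state = hashlib.sha256(state).digest()
        return state

    digest = stage2_mix(stage1_merkle(b"\x00" * 80))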

This is somewhat done already in Bitcoin, with the PoW doing SHA256 twice
in serial. The first pass is pretty much the Merkle tree hash (a node with
two children), and the second pass is the multi-round mixing. If the
Bitcoin PoW did SHA256 three or four times or more, asicboost-like
optimizations would have less of an effect.

In actual hardware, assuming a particular input length in the design can
result in a significantly more optimized design than creating hardware
that can handle a variable-length input. So your design goal of "not
linear in performance relative to input size" seems to me a hard one to
attain... in practice, supporting very large input sizes with constant
work requires a trade-off between memory/parallelization and die space. I
think it would be better to make an assumption about the block header
size, such as that it is exactly 80 bytes, or at least something
reasonable like requiring the hardware to support a block header size <=
128 bytes.

Cheers,
Praxeology Guy


Re: [bitcoin-dev] I do not support the BIP 148 UASF

2017-04-19 Thread Erik Aronesty via bitcoin-dev
The "UASF movement" seems a bit premature to me - I doubt UASF will be
necessary if a WTXID commitment is tried first.   I think that should be
first-efforts focus.

On Sat, Apr 15, 2017 at 2:50 PM, Gregory Maxwell via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Sat, Apr 15, 2017 at 1:42 PM, Mark Friedenbach via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> triggering BIP141 activation, and therefore not enabling the new
>> consensus rules on already deployed full nodes. BIP148 is making an
explicit choice to favor dragging along those users who have upgraded to
>> BIP141 support over those miners who have failed to upgrade.
>>
>
> I do not follow the argument that a critical design feature of a
> particular "user activated soft fork" could be that users don't need to
> be involved.  If the goal is user activation I would think that the
> expectation would be that the overwhelming majority of users would be
> upgrading to do it; if that isn't the case, then it isn't really a user
> activated softfork-- it's something else.
>
>
>> On an aside, I'm somewhat disappointed that you have decided to make a
>> public statement against the UASF proposal. Not because we disagree -- that
>> is fine -- but because any UASF must be a grassroots effort and
>> endorsements (or denouncements) detract from that.
>>
>
> So it has to be supported by the public but I can't say why I don't
> support it? This seems extremely suspect to me.
>


Re: [bitcoin-dev] Small Nodes: A Better Alternative to Pruned Nodes

2017-04-19 Thread Angel Leon via bitcoin-dev
>Financially incentivising nodes is a really weird area because it would
allow someone to essentially automate the deployment of nodes, i.e. if a
node can pay for itself 100% (even at a lesser value, it just becomes
cheaper overall), you could write an application that uses an AWS API or a
Digital Ocean API to automatically deploy hundreds of nodes. Which sounds
great, but not if that person is malicious and wants to prevent the
community from adopting proposals.

What other projects have done to avoid such attacks (while economically
incentivizing running full nodes) is to distribute part of the block
reward back to such nodes only if the node has committed/frozen a
predetermined amount of coins that can't be spent. This also leaves less
liquidity for market speculation and creates an incentive for long-term
commitment.
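
A toy sketch of that reward rule in Python (the collateral amount and node
share are made-up numbers, not any project's actual parameters):

    COLLATERAL = 1000  # coins that must stay locked to qualify

    def node_reward(block_reward: float, locked: float,
                    node_share: float = 0.1) -> float:
        # Pay the node's slice of the block reward only while its
        # collateral remains frozen.
        return block_reward * node_share if locked >= COLLATERAL else 0.0

    print(node_reward(12.5, locked=1000))  # 1.25
    print(node_reward(12.5, locked=10))    # 0.0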

On Wed, Apr 19, 2017 at 5:14 AM udevNull via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> I'd like to add to this. There is definitely a barrier to entry with
> regards to setting up a full node. Unless you're living in a first-world
> country, the bandwidth requirements alone will outright prevent you from
> even setting up a full node (syncing since genesis).
>
> Maintaining one also becomes a sunk cost, as there is no financial
> incentive to run a node, only an ideological one. Most of the people who
> benefit and will benefit from Bitcoin are the unbanked, whom you will
> find in third-world countries that don't have ISPs providing the data
> packages to cater for the requirements of running a full node. I'm sure
> many would like to, but simply cannot afford it.
>
> A user may not want to run a node at home, but rather on a Digital Ocean
> or AWS server, which they cannot afford to do either, considering the
> bandwidth and storage costs associated with it. However, I don't think
> they should be excluded from participating in the network (supporting
> proposals, voicing their opinions, running their own wallets, writing
> their own applications on top of Bitcoin [which I think is extremely
> important]).
>
> So I would definitely be in favour of a small node of sorts. It will
> present us with some interesting technical challenges along the way, but
> it's definitely worthwhile looking into.
>
> Financially incentivising nodes is a really weird area because it would
> allow someone to essentially automate the deployment of nodes, i.e. if a
> node can pay for itself 100% (even at a lesser value, it just becomes
> cheaper overall), you could write an application that uses an AWS API or
> a Digital Ocean API to automatically deploy hundreds of nodes. Which
> sounds great, but not if that person is malicious and wants to prevent
> the community from adopting proposals.
> Just my 2 cents worth.
>
>
> Sent with ProtonMail Secure Email.
>


Re: [bitcoin-dev] Small Nodes: A Better Alternative to Pruned Nodes

2017-04-19 Thread udevNull via bitcoin-dev
I'd like to add to this. There is definitely a barrier to entry with
regards to setting up a full node. Unless you're living in a first-world
country, the bandwidth requirements alone will outright prevent you from
even setting up a full node (syncing since genesis).

Maintaining one also becomes a sunk cost, as there is no financial
incentive to run a node, only an ideological one. Most of the people who
benefit and will benefit from Bitcoin are the unbanked, whom you will find
in third-world countries that don't have ISPs providing the data packages
to cater for the requirements of running a full node. I'm sure many would
like to, but simply cannot afford it.

A user may not want to run a node at home, but rather on a Digital Ocean
or AWS server, which they cannot afford to do either, considering the
bandwidth and storage costs associated with it. However, I don't think
they should be excluded from participating in the network (supporting
proposals, voicing their opinions, running their own wallets, writing
their own applications on top of Bitcoin [which I think is extremely
important]).

So I would definitely be in favour of a small node of sorts. It will
present us with some interesting technical challenges along the way, but
it's definitely worthwhile looking into.

Financially incentivising nodes is a really weird area because it would
allow someone to essentially automate the deployment of nodes, i.e. if a
node can pay for itself 100% (even at a lesser value, it just becomes
cheaper overall), you could write an application that uses an AWS API or a
Digital Ocean API to automatically deploy hundreds of nodes. Which sounds
great, but not if that person is malicious and wants to prevent the
community from adopting proposals.

Just my 2 cents worth.

Sent with ProtonMail (https://protonmail.com) Secure Email.


Re: [bitcoin-dev] Properties of an ideal PoW algorithm & implementation

2017-04-19 Thread Tim Ruffing via bitcoin-dev
On Tue, 2017-04-18 at 12:34 +0200, Natanael via bitcoin-dev wrote:
> To prove that an implementation is near optimal, you would show
> there's a minimum number of necessary transistor activations per
> computed hash, and that your implementation is within a reasonable
> range of that number. 

I'm not an expert on lower bounds of algorithms, but I think proving such
properties is currently out of reach for mankind.

> 
> We also need to show that for a practical implementation you can't
> reuse much internal state (the easiest way is "whitening" the block
> header, pre-hashing, or having a slow hash with an initial whitening
> step of its own). This is to kill any ASICBOOST-type optimization.
> Performance should be constant, not linear relative to input size.

Yes, a reasonable approach in practice seems to be to use a slower hash
function (or just to iterate the hash function many times); see also this
thread: https://twitter.com/Ethan_Heilman/status/850015029189644288

PoW verification will still be fast enough. That's not the bottleneck
of block verification anyway.

Also, I don't agree that a PoW function should not rely on memory.
Memory-hard functions are the best we have currently.


Tim