Re: [bitcoin-dev] Malice Reactive Proof of Work Additions (MR POWA): Protecting Bitcoin from malicious miners

2017-04-17 Thread Natanael via bitcoin-dev
On 17 Apr 2017 16:14, "Erik Aronesty via bitcoin-dev" <bitcoin-dev@lists.linuxfoundation.org> wrote:


It's too bad we can't make the POW somehow dynamic so that any specialized
hardware is impossible, and only GPU / FPGA is possible.



Maybe a variant of Keccak where the size of the sponge is increased along
with additional zero bits required.  Seems like this would either have to
resist specialized hardware or imply SHA3 is compromised such that the size
of the sponge does not increase the number of possible output bits as
expected.


Technically SHA3 (Keccak) already has the SHAKE mode, an extendable-output
function (XOF). It's basically a hash with arbitrary output length (with
fixed state size; 256 bits is the common choice). A little bit like hooking
a hash straight into a stream cipher.
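
For illustration, a minimal SHAKE256 squeeze of an arbitrary-length digest
(this assumes OpenSSL 1.1.1+ for the EVP XOF interface; the input string is
just a stand-in for header bytes):

    #include <openssl/evp.h>
    #include <cstdio>
    #include <cstring>

    int main()
    {
        const char* msg = "example block header bytes";   // stand-in input
        unsigned char out[64];                            // ask for 512 bits of output

        EVP_MD_CTX* ctx = EVP_MD_CTX_new();
        EVP_DigestInit_ex(ctx, EVP_shake256(), nullptr);  // SHAKE256 XOF (256-bit security level)
        EVP_DigestUpdate(ctx, msg, std::strlen(msg));
        EVP_DigestFinalXOF(ctx, out, sizeof(out));        // squeeze an arbitrary-length digest
        EVP_MD_CTX_free(ctx);

        for (size_t i = 0; i < sizeof(out); ++i) std::printf("%02x", out[i]);
        std::printf("\n");
        return 0;
    }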

The other modes should *probably* not allow the same behavior, though. I
can't guarantee that however.

You may be interested in looking at parameterizable ciphers and if any of
them might be applicable to PoW.

IMHO the best option, if we change the PoW, is an algorithm that's moderately
processing heavy (we still need reasonably fast verification) and which
resists partial state reuse (not fast or fully "linear" in processing like
SHA256), just for the sake of invalidating ASICBoost-style attacks. It
should also have an existing reference implementation for hardware that's
provably close in performance to the theoretical ideal implementation of
the algorithm (in other words, one where we know there are no hidden
optimizations).

Anything relying on memory or other such expensive components is likely to
fall flat eventually as fast memory is made more compact, cheaper and moves
closer to the cores.

That should be approximately what it takes to level out the playing field
in ASIC manufacturing, because then we would know there are no fancy tricks
to deploy that would give one player an unfair advantage. The competition
would mostly be about packing similar gate designs closely and about energy
efficiency. (Now that I think about it, the proof MAY have to consider
energy use too, as a larger and slower but more efficient chip is still
competitive in mining...)

We should also put a larger nonce in the header if possible, to reduce the
incentive to mess with the entropy elsewhere in blocks.


[bitcoin-dev] Transaction signalling

2017-04-17 Thread Erik Aronesty via bitcoin-dev
If users added a signal to OP_RETURN, might it be possible to tag all
validated input addresses with that signal?

Then a node can activate a new feature after the percentage of tagged input
addresses reaches a certain level within a certain period of time?

This could be used in addition to a flag day to trigger activation of a
feature with some reassurance of user uptake.
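
A rough sketch of what such a signalling output might look like, assuming
Bitcoin Core's CScript and CTxOut types; the 4-byte tag is a made-up
placeholder, not any defined signalling format:

    #include <script/script.h>
    #include <primitives/transaction.h>
    #include <vector>

    // Sketch: a zero-value, provably unspendable output carrying a signal payload.
    CTxOut MakeSignalOutput()
    {
        const std::vector<unsigned char> signal = {'S', 'I', 'G', '1'};  // placeholder tag
        CScript script = CScript() << OP_RETURN << signal;
        return CTxOut(0, script);
    }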


Re: [bitcoin-dev] Malice Reactive Proof of Work Additions (MR POWA): Protecting Bitcoin from malicious miners

2017-04-17 Thread Erik Aronesty via bitcoin-dev
It's too bad we can't make the POW somehow dynamic so that any specialized
hardware is impossible, and only GPU / FPGA is possible.



Maybe a variant of Keccak where the size of the sponge is increased along
with additional zero bits required.  Seems like this would either have to
resist specialized hardware or imply SHA3 is compromised such that the size
of the sponge does not increase the number of possible output bits as
expected.


[bitcoin-dev] Suggestions to improve opcodes with O(N) complexity

2017-04-17 Thread Sergio Demian Lerner via bitcoin-dev
I came across O(N) behavior in two scripting opcodes, OP_IF and OP_ROLL. By
exploiting edge cases of each of these two sub-optimal algorithms, I managed
to simulate a Segwit block that takes up to 5.6 seconds to verify on an
Ubuntu VM running on a single Core i5 processor. The simulation is based on
a single thread executing EvalScript(), the Bitcoin script execution
function; the tests were not performed by processing actual blocks.

These results should not make anyone worry, because there are worse problems
in Bitcoin block verification, and because Bitcoin employs several worker
threads for verifying scripts in parallel. For example, a Segwit block can
request on the order of 80,000 signature verifications when all transactions
are P2WSH. It is said that Bitcoin Core (on a modern multi-core machine,
using its multi-threaded verification capabilities) can verify 8000 ECDSA
signatures per second, so a malicious miner can create a Segwit block that
requires approximately 10 seconds to verify. Since the examples presented in
this post consume less than 10 seconds, I don't consider my findings
vulnerabilities. However, if the block size is to be increased in the
future, these problems should be addressed prior to increasing it.

The scripts presented here as examples do not leave the value stack empty,
but the Bitcoin protocol does not require that: Bitcoin only requires the
top value to be true to accept the script.

OP_IF abuse

Every time a Bitcoin script executes the OP_IF opcode, a boolean value
indicating whether the condition was true, false, or skipped (also
represented as false) is pushed onto the vfExec stack. Every time an opcode
is executed, the number of false values in the vfExec stack is counted
using the following line:

bool fExec = !count(vfExec.begin(), vfExec.end(), false);
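// counts false entries across the whole conditional stack: O(depth) per opcode executed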

If the count is non-zero, all subsequent instructions except OP_ELSE and
OP_ENDIF are skipped. Clearly, the longer the conditional stack is, the
longer it takes to count the false elements.

The following scriptPubKey or scriptSig exploits this problem:

0
OP_IF { 100 times }

0 { 9798 times }

OP_ENDIF { 100 times }
1

The vfExec vector is filled with 100 elements, and then each element is
scanned 9799 times, totaling more than 979K items scanned. This took 2.5
seconds in my test VM (for a block filled with these scriptSigs).

To re-write this logic with an O(1) algorithm, one simply has to count the
number of true conditions in one variable (trueCount), and the number of
false or skipped conditions following the true conditions in another
(ignoreCount). Detecting whether code needs to be executed then only
requires testing whether ignoreCount is zero.

The handling of OP_IF / OP_NOTIF / OP_ELSE should be like the following
pseudo-code:

fExec = (ignoreCount == 0);
...
case OP_IF:
case OP_NOTIF:
{
    if (fExec)
    {
        // compute fValue from the top stack element (negated for OP_NOTIF)...
        if (fValue) trueCount++; else ignoreCount++;
    }
    else
        ignoreCount++;
}
break;

case OP_ELSE:
{
    if ((trueCount == 0) && (ignoreCount == 0))
        return set_error(serror, SCRIPT_ERR_UNBALANCED_CONDITIONAL);
    // toggle the innermost level only
    if (ignoreCount == 0) { trueCount--; ignoreCount++; }
    else if (ignoreCount == 1) { trueCount++; ignoreCount--; }
}
break;

case OP_ENDIF:
{
    if ((trueCount == 0) && (ignoreCount == 0))
        return set_error(serror, SCRIPT_ERR_UNBALANCED_CONDITIONAL);
    if (ignoreCount > 0) ignoreCount--; else trueCount--;
}
break;

You may have noticed the strange behavior of Bitcoin’s ELSE statement.
Bitcoin allows one to switch between true and false conditions several
times. For example, the following script is valid and leaves the value 2 on
the stack:

1 OP_IF OP_ELSE OP_ELSE 2 OP_ENDIF
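
Tracing that script with the trueCount/ignoreCount model above (OP_IF
consumes the leading 1 as its condition) makes the toggling explicit:

                           trueCount  ignoreCount  executing?
    start                      0           0          yes
    1, OP_IF (true branch)     1           0          yes
    OP_ELSE                    0           1          no    (was executing -> now skipping)
    OP_ELSE                    1           0          yes   (ignoreCount was 1 -> flip back)
    2 (pushed)                 1           0          yes
    OP_ENDIF                   0           0          yes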

OP_ROLL

The second problem lies in the OP_ROLL opcode. This opcode removes a value
at a given depth from the value stack and pushes it on top. Because the
Bitcoin Core stack stores each element as a std::vector of bytes by value
(not by reference), and because the stack itself is a std::vector (not a
linked list), removing an element near the bottom requires moving all the
elements above it one position in memory. The value stack can store a
maximum of 1000 elements. The following script fills the stack and then
rolls the bottom element to the top 200 times, so the number of moved
elements is about 200K. This took almost 5.6 seconds in my test VM (for a
block filled with these scriptSigs).

1  {999 times}
998 OP_ROLL { 200 times }
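
For reference, the interpreter handles OP_ROLL roughly as follows
(paraphrased from Bitcoin Core's EvalScript, so treat it as a sketch rather
than the exact code); it is the erase from the middle of the std::vector
that shifts up to ~1000 elements per roll:

    int n = CScriptNum(stacktop(-1), fRequireMinimal).getint();
    popstack(stack);                      // remove the index argument
    if (n < 0 || n >= (int)stack.size())
        return set_error(serror, SCRIPT_ERR_INVALID_STACK_OPERATION);
    valtype vch = stacktop(-n - 1);       // copy the element at depth n
    stack.erase(stack.end() - n - 1);     // O(N): shifts every element above it
    stack.push_back(vch);                 // and re-push it on top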

I tried other scripts, such as filling the stack with values of size 520
using DUP3 and then performing rolls, but all of them led to blocks that
took less time to verify when filled with those scripts.

One solution to this problem is to use a linked-list data structure instead
of a std::vector, to allow O(1) removal of items, but that still requires
O(N) element lookup. A balanced tree where each internal node is augmented
with the number of elements underneath it can provide both efficient
indexed access and efficient element removal. However, the overhead of such
a data structure may kill 
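
As a purely illustrative sketch of that augmented-tree idea (using GCC's
policy-based order-statistic tree keyed by an ever-increasing sequence
number; nothing like this exists in Bitcoin Core):

    #include <cstdint>
    #include <functional>
    #include <utility>
    #include <vector>
    #include <ext/pb_ds/assoc_container.hpp>
    #include <ext/pb_ds/tree_policy.hpp>

    typedef std::vector<unsigned char> valtype;

    // Order-statistic tree keyed by insertion sequence number:
    // O(log N) lookup by position and O(log N) removal.
    typedef __gnu_pbds::tree<
        uint64_t, valtype, std::less<uint64_t>,
        __gnu_pbds::rb_tree_tag,
        __gnu_pbds::tree_order_statistics_node_update> IndexedStack;

    // Move the element at depth n to the top in O(log N) instead of O(N).
    void Roll(IndexedStack& stack, uint64_t& nextSeq, size_t n)
    {
        auto it = stack.find_by_order(stack.size() - n - 1);   // element at depth n
        valtype value = it->second;
        stack.erase(it->first);                                // erase by key
        stack.insert(std::make_pair(nextSeq++, value));        // largest key == top of stack
    }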

Re: [bitcoin-dev] Malice Reactive Proof of Work Additions (MR POWA): Protecting Bitcoin from malicious miners

2017-04-17 Thread Erik Aronesty via bitcoin-dev
On Apr 16, 2017 6:28 PM,  wrote:



On 2017-04-16 17:04, Erik Aronesty via bitcoin-dev wrote:

> This is a great solution.
>
> 8 or more secure hashes, each of which can be implemented on GPU/CPU,
> but rotate through them - per block round robin.
>
> Hardware, infrastructure investment is protected.  ASIC is not.
>
>
The write time for configuring an FPGA with a fresh bitstream is measured in
tens of milliseconds.


I have no objections to the use of FPGA or any other commercially available
hardware.




> ASIC will never beat this - because it will be 8x more expensive to
> maintain the cold circuits.
>
>
Unused circuits don't consume power, which is the main cost in running a
miner.


They make GPUs or FPGAs (as you mentioned) far more affordable.  The problem
is centralized manufacturing, which, in turn, is a side effect of a covert
hardware mining optimization leading to a monopoly.

A rotating POW seems to make ASIC manufacture impractical compared to
generalized, commercially available hardware.
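
A toy sketch of the round-robin selection being described (the hash
functions here are caller-supplied stand-ins, not real candidate
algorithms):

    #include <array>
    #include <cstdint>
    #include <functional>
    #include <string>

    // The digest function is selected by block height, so no single fixed
    // circuit covers every block.
    typedef std::function<uint64_t(const std::string&)> HashFn;

    uint64_t PowHashForHeight(uint64_t height, const std::string& header,
                              const std::array<HashFn, 8>& algos)
    {
        return algos[height % algos.size()](header);   // rotate per block
    }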

It's too bad we can't make the POW somehow dynamic so that any specialized
hardware is impossible, and only GPU / FPGA is possible.


Re: [bitcoin-dev] Small Nodes: A Better Alternative to Pruned Nodes

2017-04-17 Thread Aymeric Vitte via bitcoin-dev
While I fully agree with the intent (increasing the number of full nodes so
that a big miner waking up in a bad mood can't threaten the world every day,
as it can now), I am not sure I see the benefit of this proposal, because:

- it's probably not a good idea to encourage home users to run full nodes;
there are many people running servers far below their capacity that could
easily run efficient full nodes

- if someone can't allocate 100 GB today to run a full node, then we
can't expect him to allocate more in the future

- the download time is a real concern

- this proposal is a kind of reinvention of torrents, while limiting the
number of connections to something not efficient at all; I don't see why
something that is proven to be super efficient (torrents) would need to be
reinvented. I am not saying it should be used exactly as the BitTorrent
network does, but the concepts can be reused

- I don't get the concept of "archival" nodes at all, since it's another
useless step toward centralization

I think the only way to increase the number of full nodes is to design an
incentive for people to run them.


On 17/04/2017 at 08:54, David Vorick via bitcoin-dev wrote:
> *Rationale:*
>
> A node that stores the full blockchain (I will use the term archival
> node) requires over 100GB of disk space, which I believe is one of the
> most significant barriers to more people running full nodes. And I
> believe the ecosystem would benefit substantially if more users were
> running full nodes.
>
> The best alternative today to storing the full blockchain is to run a
> pruned node, which keeps only the UTXO set and throws away already
> verified blocks. The operator of the pruned node is able to enjoy the
> full security benefits of a full node, but is essentially leeching the
> network, as they performed a large download likely without
> contributing anything back.
>
> This puts more pressure on the archival nodes, as the archival nodes
> need to pick up the slack and help new nodes bootstrap to the network.
> As the pressure on archival nodes grows, fewer people will be able to
> actually run archival nodes, and the situation will degrade. The
> situation would likely become problematic quickly if bitcoin-core were
> to ship with the defaults set to a pruned node.
>
> Even further, the people most likely to care about saving 100GB of
> disk space are also the people least likely to care about some extra
> bandwidth usage. For datacenter nodes, and for nodes doing lots of
> bandwidth, the bandwidth is usually the biggest cost of running the
> node. For home users however, as long as they stay under their
> bandwidth cap, the bandwidth is actually free. Ideally, new nodes
> would be able to bootstrap from nodes that do not have to pay for
> their bandwidth, instead of needing to rely on a decreasing percentage
> of heavy-duty archival nodes.
>
> I have (perhaps incorrectly) identified disk space consumption as the
> most significant factor in your average user choosing to run a pruned
> node or a lite client instead of a full node. The average user is not
> typically too worried about bandwidth, and is also not typically too
> worried about initial blockchain download time. But the 100GB hit to
> your disk space can be a huge psychological factor, especially if your
> hard drive only has 500GB available in the first place, and 250+ GB is
> already consumed by other files you have.
>
> I believe that improving the disk usage situation would greatly
> benefit decentralization, especially if it could be done without
> putting pressure on archival nodes.
>
> *Small Nodes Proposal:*
>
> I propose an alternative to the pruned node that does not put undue
> pressure on archival nodes, and would be acceptable and non-risky to
> ship as a default in bitcoin-core. For lack of a better name, I'll
> call this new type of node a 'small node'. The intention is that
> bitcoin-core would eventually ship 'small nodes' by default, such that
> the expected amount of disk consumption drops from today's 100+ GB to
> less than 30 GB.
>
> My alternative proposal has the following properties:
>
> + Full nodes only need to store ~20% of the blockchain
> + With very high probability, a new node will be able to recover the
> entire blockchain by connecting to 6 random small node peers.
> + An attacker that can eliminate a chosen 95% of the full nodes
> running today will be unable to prevent new nodes from downloading the
> full blockchain, even if the attacker is also able to eliminate all
> archival nodes. (assuming all nodes today were small nodes instead of
> archival nodes)
>
> Method:
>
> A small node will pick an index in [5, 256). This index is that node's
> permanent index. When storing a block, instead of storing the full
> block, the node will use Reed-Solomon coding to erasure code the block
> using a 5-of-256 scheme. The result will be 256 pieces that are 20% of
> the size of the block each. The node picks the piece that corresponds
> to its index, a

Re: [bitcoin-dev] Small Nodes: A Better Alternative to Pruned Nodes

2017-04-17 Thread David Vorick via bitcoin-dev
Most people do not want to go out and buy new hardware to run a Bitcoin
node. They want to use the hardware that they already own, and usually that
hardware is going to have a non-generous amount of disk space. A 500GB SSD
with no HDD is common in computers today.

But really, the best test is to go out and talk to people. Ask them if they
run a full node, and if they say no, ask them why not. In my experience,
the most common answer by a significant margin is that they don't want to
lose the disk space. That psychology is far more important than any example
of cheap hard drives. People don't want to go out and buy a hard drive so
that they can run Bitcoin. It's a non-starter.


Re: [bitcoin-dev] Small Nodes: A Better Alternative to Pruned Nodes

2017-04-17 Thread Danny Thorpe via bitcoin-dev
A 1TB HDD is now available for under $40 USD.  How is the 100GB storage
requirement preventing anyone from setting up a full node?

On Apr 16, 2017 11:55 PM, "David Vorick via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> *Rationale:*
>
> A node that stores the full blockchain (I will use the term archival node)
> requires over 100GB of disk space, which I believe is one of the most
> significant barriers to more people running full nodes. And I believe the
> ecosystem would benefit substantially if more users were running full nodes.
>
> The best alternative today to storing the full blockchain is to run a
> pruned node, which keeps only the UTXO set and throws away already verified
> blocks. The operator of the pruned node is able to enjoy the full security
> benefits of a full node, but is essentially leeching the network, as they
> performed a large download likely without contributing anything back.
>
> This puts more pressure on the archival nodes, as the archival nodes need
> to pick up the slack and help new nodes bootstrap to the network. As the
> pressure on archival nodes grows, fewer people will be able to actually run
> archival nodes, and the situation will degrade. The situation would likely
> become problematic quickly if bitcoin-core were to ship with the defaults
> set to a pruned node.
>
> Even further, the people most likely to care about saving 100GB of disk
> space are also the people least likely to care about some extra bandwidth
> usage. For datacenter nodes, and for nodes doing lots of bandwidth, the
> bandwidth is usually the biggest cost of running the node. For home users
> however, as long as they stay under their bandwidth cap, the bandwidth is
> actually free. Ideally, new nodes would be able to bootstrap from nodes
> that do not have to pay for their bandwidth, instead of needing to rely on
> a decreasing percentage of heavy-duty archival nodes.
>
> I have (perhaps incorrectly) identified disk space consumption as the most
> significant factor in your average user choosing to run a pruned node or a
> lite client instead of a full node. The average user is not typically too
> worried about bandwidth, and is also not typically too worried about
> initial blockchain download time. But the 100GB hit to your disk space can
> be a huge psychological factor, especially if your hard drive only has
> 500GB available in the first place, and 250+ GB is already consumed by
> other files you have.
>
> I believe that improving the disk usage situation would greatly benefit
> decentralization, especially if it could be done without putting pressure
> on archival nodes.
>
> *Small Nodes Proposal:*
>
> I propose an alternative to the pruned node that does not put undue
> pressure on archival nodes, and would be acceptable and non-risky to ship
> as a default in bitcoin-core. For lack of a better name, I'll call this new
> type of node a 'small node'. The intention is that bitcoin-core would
> eventually ship 'small nodes' by default, such that the expected amount of
> disk consumption drops from today's 100+ GB to less than 30 GB.
>
> My alternative proposal has the following properties:
>
> + Full nodes only need to store ~20% of the blockchain
> + With very high probability, a new node will be able to recover the
> entire blockchain by connecting to 6 random small node peers.
> + An attacker that can eliminate a chosen 95% of the full nodes running
> today will be unable to prevent new nodes from downloading the full
> blockchain, even if the attacker is also able to eliminate all archival
> nodes. (assuming all nodes today were small nodes instead of archival nodes)
>
> Method:
>
> A small node will pick an index in [5, 256). This index is that node's
> permanent index. When storing a block, instead of storing the full block,
> the node will use Reed-Solomon coding to erasure code the block using a
> 5-of-256 scheme. The result will be 256 pieces that are 20% of the size of
> the block each. The node picks the piece that corresponds to its index, and
> stores that instead. (Indexes 0-4 are reserved for archival nodes -
> explained later)
>
> The node is now storing a fragment of every block. Alone, this fragment
> cannot be used to recover any piece of the blockchain. However, when paired
> with any 5 unique fragments (fragments of the same index will not be
> unique), the full block can be recovered.
>
> Nodes can optionally store more than 1 fragment each. At 5 fragments, the
> node becomes a full archival node, and the chosen indexes should be 0-4.
> This is advantageous for the archival node as the encoded data for the
> first 5 indexes will actually be identical to the block itself - there is
> no computational overhead for selecting the first indexes. There is also no
> need to choose random indexes, because the full block can be recovered no
> matter which indexes are chosen.
>
> When connecting to new peers, the indexes of each peer need to be known.
> Once peers to