Re: [bitcoin-dev] brickchain

2022-11-08 Thread mm-studios via bitcoin-dev
--- Original Message ---
On Tuesday, November 8th, 2022 at 3:49 PM, Erik Aronesty  wrote:

>> I think it's pretty clear that the "competitive nature of PoW" is not 
>> referring to verification nodes
>
> cool, so we can agree there is no accepted centralization pressure for 
> validating nodes then

The centralization produced by PoW only affects miners; the rest of the nodes 
are freely distributed. In the producer-consumer view, consumers (blockchain 
builders) are satisfactorily distributed. The same can't be said of miners 
(block producers), who form a quite centralized subsystem, with only a handful 
of major pools producing blocks.

>> layers also add fees to users
>
> source? i feel like it's obvious that the tree-like efficiencies should 
> reduce fees, but i'd appreciate your research on that topic

systems (layers) where abuse is controlled by fees each add their own cost.

Re: [bitcoin-dev] brickchain

2022-11-08 Thread mm-studios via bitcoin-dev
--- Original Message ---
On Tuesday, November 8th, 2022 at 2:16 PM, Erik Aronesty  wrote:

>> A) to not increase the workload of full-nodes
>
> yes, this is critical
>
>> given the competitive nature of PoW itself
>
> validating nodes do not compete with PoW, i think maybe you are not sure of 
> the difference between a miner and a node
>
> nodes do validation of transactions, they do this for free, and many of them 
> provide essential services, like SPV validation for mobile

I think it's pretty clear that the "competitive nature of PoW" is not referring 
to verification nodes (the word satoshi preferred).

>> B) to not undermine L2 systems like LN.
>
> yes, as a general rule, layered financial systems are vastly superior, so 
> that risks incurred by edge layers are not propagated fully to the inner 
> layers. For example, L3 projects like TARO and RGB are building on lightning 
> with less risk

layers also add fees to users

Re: [bitcoin-dev] brickchain

2022-11-08 Thread Erik Aronesty via bitcoin-dev
> I think it's pretty clear that the "competitive nature of PoW" is not
> referring to verification nodes

cool, so we can agree there is no accepted centralization pressure for
validating nodes then

> layers also add fees to users

source?  i feel like it's obvious that the tree-like efficiencies should
reduce fees, but i'd appreciate your research on that topic


Re: [bitcoin-dev] brickchain

2022-11-08 Thread Erik Aronesty via bitcoin-dev
> A) to not increase the workload of full-nodes

yes, this is critical

>  given the competitive nature of PoW itself

validating nodes do not compete with PoW, i think maybe you are not sure of
the difference between a miner and a node

nodes do validation of transactions, they do this for free, and many of
them provide essential services, like SPV validation for mobile


> B) to not undermine L2 systems like LN.

yes, as a general rule, layered financial systems are vastly superior, so
that risks incurred by edge layers are not propagated fully to the inner
layers. For example, L3 projects like TARO and RGB are building on
lightning with less risk

Re: [bitcoin-dev] brickchain

2022-10-19 Thread mm-studios via bitcoin-dev
--- Original Message ---
On Wednesday, October 19th, 2022 at 2:40 PM, angus  wrote:

>> Let's allow a miner to include transactions until the block is filled; let's 
>> call this structure (coining a new term, 'Brick') B0. [brick = a block that 
>> doesn't meet the difficulty rule and is filled with tx to its full capacity]
>> Since PoW hashing is continuously active, Brick B0 would have a nonce 
>> corresponding to the minimum numeric hash value found by the time it got 
>> filled.
>
> So, if I'm understanding right, this amounts to "reduce difficulty required 
> for a block ('brick') to be valid if the mempool contains more than 1 block's 
> worth of transactions so we get transactions confirmed faster" using 'bricks' 
> as short-lived sidechains that get merged into blocks?

They wouldn't get confirmed faster.
Imagine a regular Big Block (BB) re-structured as a brickchain:
BB = B0 <- B1 <- ... <- Bn (Block = chain of bricks)

Only B0 contains the coinbase transaction.

Bricks Bi are streamed from miner to nodes as they are produced.
The node creates a separate fork on B0's arrival and, on arrival of the last 
brick Bn, treats the whole brickchain as nodes now treat one block: either 
accept it or reject it as a whole (as if the complete block had just arrived 
entirely; in reality it has arrived as a stream of bricks).
Before the brickchain is complete, the node does nothing special: it just 
validates each brick on arrival and waits for the next.
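
To make that handling concrete, here is a minimal sketch (Python; the field 
names and the target value are hypothetical, an illustration of the rule above 
rather than a worked-out implementation):

    from dataclasses import dataclass

    @dataclass
    class Brick:
        prev: bytes     # hash of the previous brick (or of the parent block)
        hash: bytes     # best (minimum) hash found before the brick filled up
        is_full: bool   # bricks must be filled to capacity to be relayed

    # Illustrative target only; the real one comes from the difficulty rules.
    TARGET = int("00000000ffff" + "0" * 52, 16)

    def expected_work(h: int) -> float:
        # Expected number of hash attempts needed to find a hash <= h.
        return 2**256 / (h + 1)

    class PendingFork:
        """Buffers streamed bricks; accepts or rejects the chain as a whole."""
        def __init__(self) -> None:
            self.bricks = []

        def on_brick(self, b: Brick) -> None:
            if not b.is_full:
                raise ValueError("partially filled brick: rejected as spam")
            if self.bricks and b.prev != self.bricks[-1].hash:
                raise ValueError("brick does not extend this fork")
            self.bricks.append(b)  # nothing else happens until completion

        def is_complete_block(self) -> bool:
            # The brickchain is accepted as one block once its accumulated
            # work matches what one difficulty-meeting hash would represent.
            acc = sum(expected_work(int.from_bytes(b.hash, "little"))
                      for b in self.bricks)
            return acc >= expected_work(TARGET)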

> This would have the same fundamental problem as just making the max blocksize 
> bigger - it increases the rate of growth of storage required for a full node, 
> because you're allowing blocks/bricks to be created faster, so there will be 
> more confirmed transactions to store in a given time window than under 
> current Bitcoin rules.

Yes, the data transmitted over the network is bigger, because we are 
intentionally increasing the throughput instead of delaying tx in the mempool.
This is a potential how-to in case there were an intention of speeding up L1.
The unavoidable price of speed in tx/s is bandwidth and volume of data to 
process.
The point is to do it without making bigger blocks.

> Bitcoin doesn't take the size of the mempool into account when adjusting the 
> difficulty because the time-between-blocks is 'more important' than avoiding 
> congestion where transactions take ages to get into a block. The fee 
> mechanism in part allows users to decide how urgently they want their tx to 
> get confirmed, and high fees when there is congestion also disincentivises 
> others from transacting at all, which helps arrest mempool growth.

Streaming bricks instead of delivering a big block can be considered a way of 
reducing congestion. This is valid at any scale.
E.g. a 1 MB block delivered at once every 10 minutes versus a stream of ten 
100 KiB bricks delivered one per minute.
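
A quick back-of-envelope of that example (sizes as assumed above; the average 
data rate is unchanged, only the burst size shrinks):

    block_bytes = 1_000_000          # one ~1 MB block per 600 s
    brick_bytes = 100_000            # ten ~100 KiB bricks, one per 60 s
    print(block_bytes / 600)         # ~1667 B/s average, in one large burst
    print(10 * brick_bytes / 600)    # same ~1667 B/s, in ten small bursts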

> I'd imagine we'd also see a 'highway widening' effect with this kind of 
> proposal - if you increase the tx volume Bitcoin can settle in a given time, 
> that will quickly be used up by more people transacting until we're back at a 
> congested state again.
>
>> Fully filled brick B0, with a hash that doesn't meet the difficulty rule, 
>> would be broadcast, and nodes would hold it in a separate fork as usual.

Congestion can always happen with enough workload.
A system able to determine its workload can regulate it (keeping tx in the 
mempool to temporarily alleviate congestion).

The congestion counter-measure remains in essence the same.

> How do we know if the hash the miner does find for a brick was their 'best 
> effort' and they're not just being lazy? There's an element of luck in the 
> best hash a miner can find, sometimes it takes a long time to meet the 
> difficulty requirement and sometimes it happens almost at instantly.

A lazy miner will produce a longer brickchain, because their bricks would 
carry greater (weaker) minimum hashes than a more powerful miner's. A more 
competitive miner will deliver the complete brickchain faster, and hence the 
lazy miner's half-finished brickchain will be discarded.
It is exactly like a lazy miner under the current system.

> How would we know how 'busy' the mempool was at the time a brick from months 
> or years ago was mined?

I don't understand the question, but I guess it is the same answer, replacing 
bricks with blocks.

> Nodes have to be able to run through the entire history of the blockchain and 
> check everything is valid. They have to do this using only the previous 
> blocks they've already validated - they won't have historical snapshots of 
> the mempool (they'll build and mutate a UTXO set, but that's different). 
> Transactions don't contain a 'created-at' time that you could compare to the 
> block's creation time (and if they did, you probably couldn't trust it).

Why does this question apply to the concept of bricks and not to the concept 
of blocks?

I see the resulting blockchain would be a chain of blocks and bricks:
Bi = block at height i
bi = brick at height i


Re: [bitcoin-dev] brickchain

2022-10-19 Thread mm-studios via bitcoin-dev
--- Original Message ---
On Wednesday, October 19th, 2022 at 10:34 PM, G. Andrew Stone  wrote:

> Consider that a miner can also produce transactions. So every miner would 
> produce spam tx to fill their bricks at the minimum allowed difficulty to 
> reap the brick coinbase reward.

Except that, as I explained in a previous email, bricks don't contain a 
reward. They are meaningless unless they form a complete brickchain whose 
accumulated difficulty is equivalent to the current block difficulty.

> You might quickly respond with a modification that changes or eliminates the 
> brick coinbase reward, but perhaps that exact reward and the major negative 
> consequence of miners creating spam tx needs careful thought.

Since one block is equivalent to one brickchain, there exists only one 
coinbase tx; and since the brickchain is treated atomically as a whole, it 
follows the same processing as a block.
The only observable difference on the wire (and the reason throughput 
increases) is that the information has been transmitted as a stream (a 
decomposed block, spaced out in time).
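
As a sketch of that rule (hypothetical structures, not a spec): the coinbase 
may appear in B0 and nowhere else, so a brickchain pays out exactly like one 
block:

    from dataclasses import dataclass, field

    @dataclass
    class Tx:
        is_coinbase: bool = False

    @dataclass
    class Brick:
        txs: list = field(default_factory=list)

    def coinbase_rule_ok(brickchain):
        # Exactly one coinbase in the whole chain, and it must sit in B0.
        return all(
            sum(tx.is_coinbase for tx in b.txs) == (1 if i == 0 else 0)
            for i, b in enumerate(brickchain)
        )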

> See "bobtail" for a weak block proposal that produces a more consistent 
> discovery time, and "tailstorm" for a proposal that uses the content of those 
> weak blocks as commitment to what transactions miners are working on (which 
> will allow more trustworthy (but still not foolproof) use of transactions 
> before confirmation)... neither of which have a snowball's chance in hell 
> (along with any other hard forking change) of being put into bitcoin :-).

thanks
Marcos

Re: [bitcoin-dev] brickchain

2022-10-19 Thread G. Andrew Stone via bitcoin-dev
Consider that a miner can also produce transactions.  So every miner would
produce spam tx to fill their bricks at the minimum allowed difficulty to
reap the brick coinbase reward.

You might quickly respond with a modification that changes or eliminates
the brick coinbase reward, but perhaps that exact reward and the major
negative consequence of miners creating spam tx needs careful thought.

See "bobtail" for a weak block proposal that produces a more consistent
discovery time, and "tailstorm" for a proposal that uses the content of
those weak blocks as commitment to what transactions miners are working on
(which will allow more trustworthy (but still not foolproof) use of
transactions before confirmation)... neither of which have a snowball's
chance in hell (along with any other hard forking change) of being put into
bitcoin :-).

Andrew

Re: [bitcoin-dev] brickchain

2022-10-19 Thread mm-studios via bitcoin-dev
Thanks all for your responses.
So is it a no-go because "reduced settlement speed is a desirable feature"?

I don't know which weighs more in this consideration:
A) to not increase the workload of full-nodes, keeping them "less difficult to 
operate" and hence reducing the chance of some of them giving up, which would 
lead to a negative centralization effect. (A bit cumbersome reasoning in my 
opinion, given the competitive nature of PoW itself, which introduces an 
accepted centralization by forcing some miners to give up. In that case the 
effect is accepted because mining remains decentralized enough.)
B) to not undermine L2 systems like LN.

In any case it is a major no-go reason if there is no intention to speed up 
L1.
Thanks
M

Re: [bitcoin-dev] brickchain

2022-10-19 Thread Erik Aronesty via bitcoin-dev
> currently, a miner produces blocks with a limited capacity of transactions
> that ultimately limits the global settlement throughput to a reduced number
> of tx/s.

reduced settlement speed is a desirable feature and isn't something we need
to fix

the focus should be on layer 2 protocols that allow the ability to hold &
transfer uncommitted transactions as pools / joins, so that layer 1's
decentralization and incentives can remain undisturbed

protocols like mweb, for example




Re: [bitcoin-dev] brickchain

2022-10-19 Thread Bryan Bishop via bitcoin-dev
Hi,

On Wed, Oct 19, 2022 at 6:34 AM mm-studios via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Fully filled brick B0, with a hash that doesn't meet the difficulty rule,
> would be broadcast, and nodes would hold it in a separate fork as usual.
>

Check out the previous "weak block" proposals:
https://diyhpl.us/~bryan/irc/bitcoin/weak-blocks-links.2016-05-09.txt

- Bryan
https://twitter.com/kanzure

Re: [bitcoin-dev] brickchain

2022-10-19 Thread angus via bitcoin-dev


> Let's allow a miner to include transactions until the block is filled; let's 
> call this structure (coining a new term, 'Brick') B0. [brick = a block that 
> doesn't meet the difficulty rule and is filled with tx to its full capacity]
> Since PoW hashing is continuously active, Brick B0 would have a nonce 
> corresponding to the minimum numeric hash value found by the time it got 
> filled.


So, if I'm understanding right, this amounts to "reduce difficulty required for 
a block ('brick') to be valid if the mempool contains more than 1 block's worth 
of transactions so we get transactions confirmed faster" using 'bricks' as 
short-lived sidechains that get merged into blocks?

This would have the same fundamental problem as just making the max blocksize 
bigger - it increases the rate of growth of storage required for a full node, 
because you're allowing blocks/bricks to be created faster, so there will be 
more confirmed transactions to store in a given time window than under current 
Bitcoin rules.

Bitcoin doesn't take the size of the mempool into account when adjusting the 
difficulty because the time-between-blocks is 'more important' than avoiding 
congestion where transactions take ages to get into a block. The fee mechanism 
in part allows users to decide how urgently they want their tx to get 
confirmed, and high fees when there is congestion also disincentivises others 
from transacting at all, which helps arrest mempool growth.

I'd imagine we'd also see a 'highway widening' effect with this kind of 
proposal - if you increase the tx volume Bitcoin can settle in a given time, 
that will quickly be used up by more people transacting until we're back at a 
congested state again.

> Fully filled brick B0, with a hash that doesn't meet the difficulty rule, 
> would be broadcast, and nodes would hold it in a separate fork as usual.


How do we know if the hash the miner does find for a brick was their 'best 
effort' and they're not just being lazy? There's an element of luck in the best 
hash a miner can find: sometimes it takes a long time to meet the difficulty 
requirement and sometimes it happens almost instantly.

How would we know how 'busy' the mempool was at the time a brick from months or 
years ago was mined?

Nodes have to be able to run through the entire history of the blockchain and 
check everything is valid. They have to do this using only the previous blocks 
they've already validated - they won't have historical snapshots of the mempool 
(they'll build and mutate a UTXO set, but that's different). Transactions don't 
contain a 'created-at' time that you could compare to the block's creation time 
(and if they did, you probably couldn't trust it).

With the current system, nodes can calculate what the difficulty should be for 
every block based on those previous blocks' times and difficulties - but how 
would you know an old brick was valid if its difficulty was low but at the time 
the mempool was busy, vs. getting a fraudulent brick that is actually invalid 
because there isn't enough work in it? You can't solve this by adding some 
mempoolsize field to bricks, as you'd have to blindly trust miners not to lie 
about them.
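
For contrast, today's expected difficulty is a pure function of header 
history, so nodes can replay it from data they already have - roughly (a 
simplified sketch of the 2016-block retarget, ignoring the off-by-one quirk 
in the real code):

    TARGET_TIMESPAN = 2016 * 600  # two weeks of 10-minute blocks, in seconds

    def next_target(old_target, first_block_time, last_block_time):
        # Scale the target by how long the 2016-block window actually took,
        # clamped to a factor of 4 - computed from headers a node already has.
        actual = last_block_time - first_block_time
        actual = max(TARGET_TIMESPAN // 4, min(actual, TARGET_TIMESPAN * 4))
        return old_target * actual // TARGET_TIMESPAN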

If we can't be (fairly) certain that a miner put a minimum amount of work into 
finding a hash, then you lose all the strengths of PoW.

If you weaken the difficulty requirement - which is there so that mining 
blocks is hard, making it very hard to intentionally fork the chain, re-mine 
previous blocks, overtake the other fork, and get the network to re-org onto 
your chain - then there's no proof of work undergirding consensus on the 
ledger's state.

Secondly, where does the block reward go? Do brick miners get a fraction of the 
reward proportionate to the fraction of the difficulty they got to? Later when 
bricks become part of a block, who gets the block reward for that complete 
block? Who gets the fees? No miner is going to bother mining a 
merge-bricks-into-block block if the reward isn't the same or better than just 
mining a regular block, but each miner of the bricks in it would also want a 
reward. But, we can't give them both a block reward as that'd increase 
Bitcoin's issuance rate, which might be the only thing people are more strongly 
opposed to than increasing the blocksize! xD

[bitcoin-dev] brickchain

2022-10-19 Thread mm-studios via bitcoin-dev
Hi Bitcoin devs,
I'd like to share an idea for a method to increase throughput in the bitcoin 
network.

Currently, a miner produces blocks with a limited capacity of transactions, 
which ultimately limits the global settlement throughput to a reduced number 
of tx/s.

Big-blockers proposed the removal of limits, but this came with undesirable 
effects that have been widely discussed, and it was rejected.

The main feature we wanted to preserve is 'small blocks', which provide 
'better network effects'; I won't focus on those here.

The problem with small blocks is that, once a block is filled with 
transactions, the rest are kept back in the mempool, waiting for their turn 
in future blocks.

The following changes in the protocol aim to let all transactions go into the 
current block, while keeping the block size small. This requires changes to 
the PoW algorithm.

Currently, the PoW algorithm consists of finding a valid hash for the block. 
Its validity is determined by comparing the numeric value of the block hash 
with a protocol-defined difficulty value.

Once a miner finds a nonce for the block that satisfies the condition, the new 
block becomes valid and can be propagated. All nodes would update their 
blockchains with it (assuming no conflict resolution (orphan blocks, ...) for 
clarity).
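
In code terms, that validity rule is just the following (a sketch; Bitcoin 
hashes the 80-byte header with double SHA-256 and compares it with the target 
as a 256-bit integer):

    import hashlib

    def header_hash_value(header: bytes) -> int:
        # Double SHA-256, read as a 256-bit little-endian integer.
        h = hashlib.sha256(hashlib.sha256(header).digest()).digest()
        return int.from_bytes(h, "little")

    def meets_difficulty(header: bytes, target: int) -> bool:
        # Valid iff the numeric hash value does not exceed the target.
        return header_hash_value(header) <= target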

This process is meant to happen every 10 minutes on average.

With this background information (which we all already know), I'll go on to 
describe the idea:

Let's allow a miner to include transactions until the block is filled; let's 
call this structure (coining a new term, 'Brick') B0. [brick = a block that 
doesn't meet the difficulty rule and is filled with tx to its full capacity]
Since PoW hashing is continuously active, Brick B0 would have a nonce 
corresponding to the minimum numeric hash value found by the time it got 
filled.
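
A sketch of that mining loop (hypothetical helpers; brick_is_full would be 
driven by incoming transactions filling the brick):

    import hashlib

    def mine_brick(header_base: bytes, brick_is_full):
        # Hash continuously while the brick fills with tx; remember the best
        # (minimum) hash value seen - that becomes the brick's work proof.
        best_value, best_nonce, nonce = 2**256, 0, 0
        while not brick_is_full():
            h = hashlib.sha256(hashlib.sha256(
                header_base + nonce.to_bytes(8, "little")).digest()).digest()
            value = int.from_bytes(h, "little")
            if value < best_value:
                best_value, best_nonce = value, nonce
            nonce += 1
        return best_nonce, best_value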

Fully filled brick B0, with a hash that doesn't meet the difficulty rule, 
would be broadcast, and nodes would hold it in a separate fork as usual.

At this point, instead of discarding transactions, our miner would start 
working on a new brick B1, linked with B0 as usual.

Nodes would allow incoming regular blocks and bricks with hashes that don't 
satisfy the difficulty rule, provided the brick is fully filled with 
transactions. Bricks not fully filled would be rejected as invalid to prevent 
spam (except if it constitutes the last brick of a brickchain, explained 
below).

Let's assume that 10 minutes have elapsed and our miner is in a state where N 
bricks have been produced, and the accumulated PoW can be calculated (every 
brick contains a 'minimum hash found'; when a series of 'minimum hashes' is 
computationally equivalent to the network difficulty, the full 'brickchain' 
is valid as a block).
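
One way the calculus could be defined (an assumption on my part, just to make 
the idea concrete): score each brick's best hash by its expected work, and 
require the bricks' sum to reach one block's worth:

    def expected_work(hash_value: int) -> float:
        # Expected number of attempts to find a hash <= hash_value.
        return 2**256 / (hash_value + 1)

    def brickchain_is_valid_block(minimum_hashes, target: int) -> bool:
        # Valid as a block once the combined expected work of the bricks
        # reaches the expected work of a single difficulty-meeting hash.
        return sum(map(expected_work, minimum_hashes)) >= expected_work(target)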

This calculation would need to be better defined, but I hope this idea can 
serve as the seed of a BIP - or otherwise be deemed absurd, which may well be 
possible, and I'd be delighted to discover why a scheme like this wouldn't 
work.

If it finally worked, it could completely flush mempools, keep transaction 
fees low and increase throughput without an increase in the block size, which 
would raise other concerns related to propagation.

Thank you.
I look forward to your responses.

--
Marcos Mayorga
https://twitter.com/KatlasC