Re: [bitcoin-dev] Chain width expansion

2019-10-04 Thread Tier Nolan via bitcoin-dev
Are you assuming no network protocol changes?

At root, the requirement is that peers can prove their total chain POW.

Since each block commits to its height in the coinbase, a peer can send a
short proof of height for a disconnected header and assert the POW for that
header.

Each peer could send the N strongest headers (lowest digest/most POW) for
their main chain and prove the height of each one.

The total chain work can be estimated as N times the POW of the weakest
header in the list.  This is an interesting property of how POW works: the
10th-best POW block will represent about 10% of the total POW.
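This estimator can be checked with a toy simulation. The sketch below is illustrative only: a 32-bit hash space stands in for SHA256's 2^256, and every block is a single unit-difficulty hash trial, so the true total work is just the number of blocks.

```python
import random

# Toy illustration (not consensus code) of the estimator described above:
# with uniformly random block hashes, N times the work implied by the
# N-th best (lowest) hash approximates the total number of hash trials.
random.seed(7)
SPACE = 2 ** 32          # stand-in for the 2**256 hash space
TOTAL_BLOCKS = 100_000   # true total "work" in unit-difficulty blocks

hashes = [random.randrange(1, SPACE) for _ in range(TOTAL_BLOCKS)]

N = 10
nth_best = sorted(hashes)[N - 1]      # the N-th strongest header
estimate = N * SPACE / nth_best       # N times the work that hash implies

# estimate lands within a small factor of TOTAL_BLOCKS
print(round(estimate))
```

The relative error of the estimate shrinks roughly as 1/sqrt(N), so a modest N already gives a usable bound.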

The N blocks would be spread along the chain, and the peer could ask for all
headers between any two of them and check the difference in claimed POW.  If
dishonesty is discovered, the peer can be banned and all info from that peer
wiped.

You can apply the rule hierarchically.  The honest peers would have a much
higher-POW chain.  You could ask a peer for the N strongest headers between
two headers that they gave for their best chain, and check that each height
lies between the two limits.

The peer would effectively be proving their total POW recursively.

This would require a new set of messages so you can request info about the
best chain.

It also has the nice feature that it allows you to see if multiple peers
are on the same chain, since they will have the same best blocks.

The most elegant approach would be something like using SNARKs to directly
prove that your chain tip has a particular POW.  The download would then go
tip to genesis, unlike now, when it goes in the other direction.



In regard to your proposal, I think the key is to limit things by peer,
rather than globally.

The limit to header width should be split between peers.  If you have N
outgoing peers, they get 1/N of your header download resources each.

You store the current best/most POW header chain and at least one
alternative chain per outgoing peer.

You could still prune old chains based on POW, but the best chain and the
current chain for each outgoing peer should not be pruned.

The security assumption is that a node is connected to at least one honest
node.

If you split resources between all peers, it prevents dishonest nodes from
flooding and wiping out the progress of the honest peer.

- Message Limiting -

I have the same objection here.  The message limiting should be per peer.

An honest peer that has just connected shouldn't suffer a penalty.

Your point that it is only a few minutes anyway may make this moot, though.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Chain width expansion

2019-10-04 Thread Braydon Fuller via bitcoin-dev
On 10/4/19 1:20 AM, David A. Harding wrote:

> On Thu, Oct 03, 2019 at 05:38:36PM -0700, Braydon Fuller via bitcoin-dev 
> wrote:
>> This paper describes a solution [to DoS attacks] that does not
>> require enabling or maintaining checkpoints and provides improved security.
>> [...] 
>> The paper is available at:
>> https://bcoin.io/papers/bitcoin-chain-expansion.pdf
> [..] But I worry that the mechanisms could also be used to keep a node that
> synced to a long-but-lower-PoW chain on that false chain (or other false
> chain) indefinitely even if it had connections to honest peers that
> tried to tell it about the most-PoW chain.

Here is an example: An attacker eclipses a target node during the
initial block download; all of the target's outgoing peers are the
attacker. The attacker has a low work chain that is sent to the target.
The total chainwork for the low work chain is 0x09104210421039 at a
height of 593,975. The target is now in the state of having fully validated
a low-work dishonest chain. The target node then connects to an honest
peer and learns about the honest chain. The chainwork of the honest
chain is 0x085b67d9e07a751e53679d68 at a height of 593,975. The first
69,500 headers of the honest chain would have a delay; however, the
remaining 524,475 would not be delayed. Given a maximum of 5 seconds,
this would be a total delay of only 157 seconds.




Re: [bitcoin-dev] [Lightning-dev] OP_CAT was Re: Continuing the discussion about noinput / anyprevout

2019-10-04 Thread Jeremy via bitcoin-dev
Interesting point.

The script is under your control, so you should be able to ensure that you
are always using a correctly constructed midstate, e.g., something like:

scriptPubKey: <-1> OP_SHA256STREAM DEPTH OP_SHA256STREAM <-2>
OP_SHA256STREAM
 OP_EQUALVERIFY

would hash all the elements on the stack and compare to a known hash.
How is that sort of thing weak to midstate attacks?
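A rough emulation of what that script is intended to do, under my reading of the proposed opcode semantics (this is a sketch, not a reference implementation): absorb every element on the stack into one running SHA256 and compare the final digest against a known hash.

```python
import hashlib

# Sketch of the script above: <-1> starts a fresh stream, DEPTH many
# OP_SHA256STREAM updates absorb the stack, <-2> finalizes, and
# OP_EQUALVERIFY compares against a known digest.
def hash_stack(stack, expected_digest: bytes) -> bool:
    h = hashlib.sha256()              # <-1>: start a new stream
    for item in stack:                # one update per stack element
        h.update(item)
    return h.digest() == expected_digest   # <-2> then OP_EQUALVERIFY

stack = [b"alpha", b"beta", b"gamma"]
expected = hashlib.sha256(b"alphabetagamma").digest()
assert hash_stack(stack, expected)
```

Because the stream always starts from SHA256's official IV here, no attacker-chosen midstate ever enters the computation.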


--
@JeremyRubin 



On Fri, Oct 4, 2019 at 4:16 AM Peter Todd  wrote:

> On Thu, Oct 03, 2019 at 10:02:14PM -0700, Jeremy via bitcoin-dev wrote:
> > Awhile back, Ethan and I discussed having, rather than OP_CAT, an
> > OP_SHA256STREAM that uses the streaming properties of a SHA256 hash
> > function to allow concatenation of an unlimited amount of data, provided
> > the only use is to hash it.
> >
> > You can then use it perhaps as follows:
> >
> > // start a new hash with item
> > OP_SHA256STREAM  (-1) -> [state]
> > // Add item to the hash in state
> > OP_SHA256STREAM n [item] [state] -> [state]
> > // Finalize
> > OP_SHA256STREAM (-2) [state] -> [Hash]
> >
> > <-1> OP_SHA256STREAM<3> OP_SHA256STREAM
> <-2>
> > OP_SHA256STREAM
>
> One issue with this is the simplest implementation where the state is just
> raw
> bytes would expose raw SHA256 midstates, allowing people to use them
> directly;
> preventing that would require adding types to the stack. Specifically I
> could
> write a script that rather than initializing the state correctly from the
> official IV, instead takes an untrusted state as input.
>
> SHA256 isn't designed to be used in situations where adversaries control
> the
> initialization vector. I personally don't know one way or the other if
> anyone
> has analyzed this in detail, but I'd be surprised if that's secure. I
> considered adding midstate support to OpenTimestamps but decided against
> it for
> exactly that reason.
>
> I don't have the link handy but there's even an example of an experienced
> cryptographer on this very list (bitcoin-dev) proposing a design that falls
> victim to this attack. It's a subtle issue and we probably don't want to
> encourage it.
>
> --
> https://petertodd.org 'peter'[:-1]@petertodd.org


Re: [bitcoin-dev] [Lightning-dev] OP_CAT was Re: Continuing the discussion about noinput / anyprevout

2019-10-04 Thread Jeremy via bitcoin-dev
Good point -- in our discussion, we called it OP_FFS -- Fold Functional
Stream, and it could be initialized with a different integer to select
different functions.  The stream-processing opcode would therefore be
generic but extensible.
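A hedged sketch of that dispatch idea follows; the function ids and helper names are invented here for illustration, not part of any proposal text.

```python
import hashlib

# Hypothetical "OP_FFS": one streaming fold opcode whose init integer
# selects the underlying function (ids 0 and 1 are assumptions).
FUNCTIONS = {0: hashlib.sha256, 1: hashlib.sha512}

def op_ffs_init(func_id: int):
    """Start a new stream using the selected function."""
    return FUNCTIONS[func_id]()

def op_ffs_update(state, item: bytes):
    """Fold another item into the running state."""
    state.update(item)
    return state

def op_ffs_final(state) -> bytes:
    """Finalize and return the digest."""
    return state.digest()

st = op_ffs_init(0)
op_ffs_update(st, b"foo")
op_ffs_update(st, b"bar")
assert op_ffs_final(st) == hashlib.sha256(b"foobar").digest()
```

The extensibility point is that a broken hash function can be retired by removing its id while the opcode itself stays unchanged.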
--
@JeremyRubin 



On Fri, Oct 4, 2019 at 12:00 AM ZmnSCPxj via Lightning-dev <
lightning-...@lists.linuxfoundation.org> wrote:

> Good morning Jeremy,
>
> > Awhile back, Ethan and I discussed having, rather than OP_CAT, an
> OP_SHA256STREAM that uses the streaming properties of a SHA256 hash
> function to allow concatenation of an unlimited amount of data, provided
> the only use is to hash it.
> >
> > You can then use it perhaps as follows:
> >
> > // start a new hash with item
> > OP_SHA256STREAM  (-1) -> [state]
> > // Add item to the hash in state
> > OP_SHA256STREAM n [item] [state] -> [state]
> > // Finalize
> > OP_SHA256STREAM (-2) [state] -> [Hash]
> >
> > <-1> OP_SHA256STREAM<3> OP_SHA256STREAM
> <-2> OP_SHA256STREAM
> >
> > Or it coul
> >
>
> This seems a good idea.
>
> Though it brings up the age-old tension between:
>
> * Generically-useable components, but due to generalization are less
> efficient.
> * Specific-use components, which are efficient, but which may end up not
> being useable in the future.
>
> In particular, `OP_SHA256STREAM` would no longer be useable if SHA256
> eventually is broken, while the `OP_CAT` will still be useable in the
> indefinite future.
> In the future a new hash function can simply be defined and the same
> technique with `OP_CAT` would still be useable.
>
>
> Regards,
> ZmnSCPxj
>
> > --
> > @JeremyRubin
> >
> > On Thu, Oct 3, 2019 at 8:04 PM Ethan Heilman  wrote:
> >
> > > I hope you are having a great afternoon ZmnSCPxj,
> > >
> > > You make an excellent point!
> > >
> > > I had thought about doing the following to tag nodes
> > >
> > > || means OP_CAT
> > >
> > > `node = SHA256(type||SHA256(data))`
> > > so a subnode would be
> > > `subnode1 = SHA256(1||SHA256(subnode2||subnode3))`
> > > and a leaf node would be
> > > `leafnode = SHA256(0||SHA256(leafdata))`
> > >
> > > Yet, I like your idea better. Increasing the size of the two inputs to
> > > OP_CAT to be 260 Bytes each where 520 Bytes is the maximum allowable
> > > size of object on the stack seems sensible and also doesn't special
> > > case the logic of OP_CAT.
> > >
> > > It would also increase performance. SHA256(tag||subnode2||subnode3)
> > > requires 2 compression function calls whereas
> > > SHA256(1||SHA256(subnode2||subnode3)) requires 2+1=3 compression
> > > function calls (due to padding).
> > >
> > > >Or we could implement tagged SHA256 as a new opcode...
> > >
> > > I agree that tagged SHA256 as an op code would certainly be
> > > useful, but OP_CAT provides far more utility and is a simpler change.
> > >
> > > Thanks,
> > > Ethan
> > >
> > > On Thu, Oct 3, 2019 at 7:42 PM ZmnSCPxj 
> wrote:
> > > >
> > > > Good morning Ethan,
> > > >
> > > >
> > > > > To avoid derailing the NO_INPUT conversation, I have changed the
> > > > > subject to OP_CAT.
> > > > >
> > > > > Responding to:
> > > > > """
> > > > >
> > > > > -   `SIGHASH` flags attached to signatures are a misdesign, sadly
> > > > > retained from the original BitCoin 0.1.0 Alpha for Windows
> design, on
> > > > > par with:
> > > > > [..]
> > > > >
> > > > > -   `OP_CAT` and `OP_MULT` and `OP_ADD` and friends
> > > > > [..]
> > > > > """
> > > > >
> > > > > OP_CAT is an extremely valuable op code. I understand why it
> was
> > > > > removed as the situation at the time with scripts was dire.
> However
> > > > > most of the protocols I've wanted to build on Bitcoin run into
> the
> > > > > limitation that stack values can not be concatenated. For
> instance
> > > > > TumbleBit would have far smaller transaction sizes if OP_CAT
> was
> > > > > supported in Bitcoin. If it happens to me as a researcher it is
> > > > > probably holding other people back as well. If I could wave a
> magic
> > > > > wand and turn on one of the disabled op codes it would be
> OP_CAT. Of
> > > > > course with the change that size of each concatenated value
> must be 64
> > > > > Bytes or less.
> > > >
> > > > Why 64 bytes in particular?
> > > >
> > > > It seems obvious to me that this 64 bytes is most suited for
> building Merkle trees, being the size of two SHA256 hashes.
> > > >
> > > > However we have had issues with the use of Merkle trees in Bitcoin
> blocks.
> > > > Specifically, it is difficult to determine if a hash on a Merkle
> node is the hash of a Merkle subnode, or a leaf transaction.
> > > > My understanding is that this is the reason for now requiring
> transactions to be at least 80 bytes.
> > > >
> > > > The obvious fix would be to prepend the type of the hashed object,
> i.e. add at least one byte to determine this type.
> > > > Taproot for example uses tagged hash 

Re: [bitcoin-dev] [Lightning-dev] OP_CAT was Re: Continuing the discussion about noinput / anyprevout

2019-10-04 Thread Peter Todd via bitcoin-dev
On Thu, Oct 03, 2019 at 10:02:14PM -0700, Jeremy via bitcoin-dev wrote:
> Awhile back, Ethan and I discussed having, rather than OP_CAT, an
> OP_SHA256STREAM that uses the streaming properties of a SHA256 hash
> function to allow concatenation of an unlimited amount of data, provided
> the only use is to hash it.
> 
> You can then use it perhaps as follows:
> 
> // start a new hash with item
> OP_SHA256STREAM  (-1) -> [state]
> // Add item to the hash in state
> OP_SHA256STREAM n [item] [state] -> [state]
> // Finalize
> OP_SHA256STREAM (-2) [state] -> [Hash]
> 
> <-1> OP_SHA256STREAM<3> OP_SHA256STREAM <-2>
> OP_SHA256STREAM

One issue with this is the simplest implementation where the state is just raw
bytes would expose raw SHA256 midstates, allowing people to use them directly;
preventing that would require adding types to the stack. Specifically I could
write a script that rather than initializing the state correctly from the
official IV, instead takes an untrusted state as input.

SHA256 isn't designed to be used in situations where adversaries control the
initialization vector. I personally don't know one way or the other if anyone
has analyzed this in detail, but I'd be surprised if that's secure. I
considered adding midstate support to OpenTimestamps but decided against it for
exactly that reason.

I don't have the link handy but there's even an example of an experienced
cryptographer on this very list (bitcoin-dev) proposing a design that falls
victim to this attack. It's a subtle issue and we probably don't want to
encourage it.

-- 
https://petertodd.org 'peter'[:-1]@petertodd.org




Re: [bitcoin-dev] ChainWallet - A way to prevent loss of funds by physical violence

2019-10-04 Thread Bryan Bishop via bitcoin-dev
Since the user can't prove that they are using this technique, or
petertodd's timelock encryption for that matter, an attacker has little
incentive to stop physically attacking until they have a spendable UTXO.

I believe you can get the same effect with on-chain timelocks, or
delete-the-bits plus a rangeproof and a zero-knowledge proof that the
rangeproof corresponds to some secret that can be used to derive the
expected public key. I think Jeremy Rubin had an idea for such a proof.

Also, adam3us has described a similar thought here:
https://bitcointalk.org/index.php?topic=311000.0

- Bryan

On Fri, Oct 4, 2019, 4:43 AM Saulo Fonseca via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi everyone
>
> If you are a hodler, I like to propose the creation of a key stretching as
> a new layer of protection over your current wallet.
>


[bitcoin-dev] ChainWallet - A way to prevent loss of funds by physical violence

2019-10-04 Thread Saulo Fonseca via bitcoin-dev
Hi everyone

If you are a hodler, I'd like to propose a key-stretching scheme as a new
layer of protection over your current wallet. I call it ChainWallet.
Whatever method you used to generate your private key, we can do the
following:

newPrivKey = sha256(sha256(sha256(…sha256(privKey)…)))
NewWallet = PubAddress(newPrivKey)
In this way, we create a chain of hashes over your private key and generate a
new wallet from it. If the chain is very long (billions or trillions of
hashes), it will take a long time to create. If you don't keep newPrivKey, the
only way to move coins in NewWallet is to generate the chain again.

The length of the chain can be easily memorized as an exponent such as 2^40
or 10^12.
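The chain construction above is straightforward to sketch. The demo length below is tiny so it runs instantly; the proposal contemplates lengths like 2^40 or 10^12.

```python
import hashlib

# Direct sketch of the ChainWallet construction:
# newPrivKey = sha256(sha256(...sha256(privKey)...))
def chain_wallet_key(priv_key: bytes, length: int) -> bytes:
    k = priv_key
    for _ in range(length):
        k = hashlib.sha256(k).digest()
    return k

# Tiny demo chain (2**16 hashes); real chains would take days or months.
key = chain_wallet_key(b"initial private key bytes", 2 ** 16)

# Rebuilding the chain with the same inputs is the only way to recover `key`.
assert key == chain_wallet_key(b"initial private key bytes", 2 ** 16)
```

Note that the loop is inherently sequential: each hash depends on the previous output, which is exactly the property the ASICs section below relies on.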

What is that good for? You will not be able to move your coins in an
unplanned way, such as under torture by a kidnapper. You can create a wallet
that takes days or even months to return the final address.

Comparison with a BrainWallet

If the first privKey is the hash of a password, your ChainWallet can be
compared to a BrainWallet with a chain added to it. BrainWallets have a bad
reputation because it is possible to mount a brute-force attack against them.
There are reports where the attacker was able to guess the password by
generating hundreds of thousands of hashes per second. But if you use a
ChainWallet that takes one second to generate, the speed of an attack is
reduced to one guess per second. This makes a brute-force attack practically
impossible.

Entropy

The ChainWallet adds only a few bits of entropy to your key. The idea here is 
not to increase the entropy, but to add “time” as part of the puzzle.

SHA-256

I am suggesting SHA-256 because it is the most popular hash algorithm in the
crypto community, but you could also use SHA-512 or a slower algorithm such as
bcrypt. Keep in mind that other hash algorithms can reduce the entropy.

The idea is to add time to the key generation. Whether you use many SHA-256
iterations or fewer SHA-512 iterations, as long as both need the same time to
be generated, there is no difference.

Other hashes have the advantage that hardware implementations of them are not
widespread.

ASICs

Someone could mention that ASICs get more and more powerful and could crack a
ChainWallet. But ASICs achieve their huge hash rates by calculating in
parallel. A ChainWallet requires the output of one hash to be the input of the
next calculation. This dramatically reduces the speed of any hardware
implementation of such algorithms.

Let’s pick an example: the Bitfury Clarke has 8,154 cores and runs at 120
Gh/s. This means that each core performs about 14.72 Mh/s. That speed is all
you can get from one of the best ASICs on the market, and it is only about 15
times faster than a typical computer. This speed can only increase slowly, as
technology needs time to make transistors run faster. So the best way to
generate a ChainWallet is by using such an ASIC core.

Misuse

Someone could argue that people would misuse it by picking easy-to-remember
passwords or small chain lengths. A wallet implementation could solve this by
forcing a minimum length for the chain and blocking commonly used words for
the password. It is a matter of design.

Theft

The major advantage of a ChainWallet is its ability to deter theft. If your
wallet takes a really long time to generate and someone tries to force you to
give up your private key, you would not be able to do it, even if you really
wanted to. You could also give away a wrong password or chain length, and the
attacker would not be able to verify it. The chances are very small that they
will wait weeks or months for the chain generation, or even that they are
able to do the chain calculation.

Final Thoughts

A ChainWallet could be used as an alternative to BIP39. Instead of keeping 24
words, you would have a password and two numbers, a base and an exponent, that
define the length of the chain. This is easier to memorize, so you do not need
to write it down.

This is only meant as an additional option alongside all the others available
in the crypto environment, such as multisig and smart contracts. As with those
other ideas, the ChainWallet is not applicable in every case.

When the day arrives on which you want to stop hodling and transfer your
coins to another location, you should re-generate your wallet in a planned way
with the same original private key and chain length. Then, after waiting for
the program to conclude, you will get the new private key back.

Web Links

The original idea can be found on this post:

https://www.reddit.com/user/sauloqf/comments/a3q8dt/chainwallet 


A proof of concept in C++ can be found on this link:

https://github.com/Saulo-Fonseca/ChainWallet 


The community has been testing the concept for a while. You 

Re: [bitcoin-dev] Chain width expansion

2019-10-04 Thread David A. Harding via bitcoin-dev
On Thu, Oct 03, 2019 at 05:38:36PM -0700, Braydon Fuller via bitcoin-dev wrote:
> This paper describes a solution [to DoS attacks] that does not
> require enabling or maintaining checkpoints and provides improved security.
> [...] 
> The paper is available at:
> https://bcoin.io/papers/bitcoin-chain-expansion.pdf

Hi Braydon,

Thank you for researching this important issue.  An alternative solution
proposed some time ago (I believe originally by Gregory Maxwell) was a
soft fork to raise the minimum difficulty.  You can find discussion of
it in various old IRC conversations[1,2] as well as in related changes
to Bitcoin Core such as PR #9053 adding a minimum chain work requirement[3] and the
assumed-valid change added in Bitcoin Core 0.14.0[4].

[1] 
http://www.erisian.com.au/meetbot/bitcoin-core-dev/2016/bitcoin-core-dev.2016-10-27-19.01.log.html#l-121
[2] 
http://www.erisian.com.au/meetbot/bitcoin-core-dev/2017/bitcoin-core-dev.2017-03-02-19.01.log.html#l-57
[3] 
https://github.com/bitcoin/bitcoin/pull/9053/commits/fd46136dfaf68a7046cf7b8693824d73ac6b1caf
[4] https://bitcoincore.org/en/2017/03/08/release-0.14.0/#assumed-valid-blocks

The solutions proposed in section 4.2 and 4.3 of your paper have the
advantage of not requiring any consensus changes.  However, I find it
hard to analyze the full consequences of the throttling solution in
4.3 and the pruning solution in 4.2.  If we assume a node is on the
most-PoW valid chain and that a huge fork is unlikely, it seems fine.
But I worry that the mechanisms could also be used to keep a node that
synced to a long-but-lower-PoW chain on that false chain (or other false
chain) indefinitely even if it had connections to honest peers that
tried to tell it about the most-PoW chain.

For example, with your maximum throttle of 5 seconds between
`getheaders` requests and the `headers` P2P message maximum of 2,000
headers per instance, it would take about half an hour to get a full
chain worth of headers.  If a peer was disconnected before sending
enough headers to establish they were on the most-PoW chain, your
pruning solution would delete whatever progress was made, forcing the
next peer to start from genesis and taking them at least half an hour
too.  On frequently-suspended laptops or poor connections, it's possible
a node could be operational for a long time before it kept the same
connection open for half an hour.  All that time, it would be on a
dishonest chain.
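As a sanity check on the half-hour figure, the arithmetic can be spelled out. The throttle parameters come from the paragraph above; the chain height is taken from the example elsewhere in this thread.

```python
import math

# Back-of-envelope for the header-sync time under the throttling scheme.
HEADERS_PER_MSG = 2000   # headers per `headers` P2P message
THROTTLE_SECONDS = 5     # maximum delay between `getheaders` requests
CHAIN_HEIGHT = 594_000   # approximate chain height in this thread's example

requests = math.ceil(CHAIN_HEIGHT / HEADERS_PER_MSG)
total_seconds = requests * THROTTLE_SECONDS

print(requests, total_seconds, total_seconds / 60)
```

This comes out to roughly 25 minutes, i.e. about half an hour per peer, which is what makes the disconnect-and-restart-from-genesis scenario costly.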

By comparison, I find it easy to analyze the effect of raising the
minimum difficulty.  It is a change to the consensus rules, so it's
something we should be careful about, but it's the kind of
basically-one-line change that I expect should be easy for a large
number of people to review directly.  Assuming the choice of a new
minimum (and what point in the chain to use it) is sane, I think it
would be easy to get acceptance, and I think it would further be easy
to increase it again every five years or so as overall hashrate increases.

-Dave


Re: [bitcoin-dev] [Lightning-dev] OP_CAT was Re: Continuing the discussion about noinput / anyprevout

2019-10-04 Thread ZmnSCPxj via bitcoin-dev
Good morning Jeremy,

> Awhile back, Ethan and I discussed having, rather than OP_CAT, an 
> OP_SHA256STREAM that uses the streaming properties of a SHA256 hash function 
> to allow concatenation of an unlimited amount of data, provided the only use 
> is to hash it.
>
> You can then use it perhaps as follows:
>
> // start a new hash with item
> OP_SHA256STREAM  (-1) -> [state]
> // Add item to the hash in state
> OP_SHA256STREAM n [item] [state] -> [state]
> // Finalize
> OP_SHA256STREAM (-2) [state] -> [Hash]
>
> <-1> OP_SHA256STREAM<3> OP_SHA256STREAM <-2> 
> OP_SHA256STREAM
>
> Or it coul
>

This seems a good idea.

Though it brings up the age-old tension between:

* Generically-useable components, but due to generalization are less efficient.
* Specific-use components, which are efficient, but which may end up not being 
useable in the future.

In particular, `OP_SHA256STREAM` would no longer be useable if SHA256 
eventually is broken, while the `OP_CAT` will still be useable in the 
indefinite future.
In the future a new hash function can simply be defined and the same technique 
with `OP_CAT` would still be useable.


Regards,
ZmnSCPxj

> --
> @JeremyRubin
>
> On Thu, Oct 3, 2019 at 8:04 PM Ethan Heilman  wrote:
>
> > I hope you are having a great afternoon ZmnSCPxj,
> >
> > You make an excellent point!
> >
> > I had thought about doing the following to tag nodes
> >
> > || means OP_CAT
> >
> > `node = SHA256(type||SHA256(data))`
> > so a subnode would be
> > `subnode1 = SHA256(1||SHA256(subnode2||subnode3))`
> > and a leaf node would be
> > `leafnode = SHA256(0||SHA256(leafdata))`
> >
> > Yet, I like your idea better. Increasing the size of the two inputs to
> > OP_CAT to be 260 Bytes each where 520 Bytes is the maximum allowable
> > size of object on the stack seems sensible and also doesn't special
> > case the logic of OP_CAT.
> >
> > It would also increase performance. SHA256(tag||subnode2||subnode3)
> > requires 2 compression function calls whereas
> > SHA256(1||SHA256(subnode2||subnode3)) requires 2+1=3 compression
> > function calls (due to padding).
> >
> > >Or we could implement tagged SHA256 as a new opcode...
> >
> > I agree that tagged SHA256 as an op code would certainly be
> > useful, but OP_CAT provides far more utility and is a simpler change.
> >
> > Thanks,
> > Ethan
> >
> > On Thu, Oct 3, 2019 at 7:42 PM ZmnSCPxj  wrote:
> > >
> > > Good morning Ethan,
> > >
> > >
> > > > To avoid derailing the NO_INPUT conversation, I have changed the
> > > > subject to OP_CAT.
> > > >
> > > > Responding to:
> > > > """
> > > >
> > > > -   `SIGHASH` flags attached to signatures are a misdesign, sadly
> > > >     retained from the original BitCoin 0.1.0 Alpha for Windows design, 
> > > >on
> > > >     par with:
> > > >     [..]
> > > >
> > > > -   `OP_CAT` and `OP_MULT` and `OP_ADD` and friends
> > > >     [..]
> > > >     """
> > > >
> > > >     OP_CAT is an extremely valuable op code. I understand why it was
> > > >     removed as the situation at the time with scripts was dire. However
> > > >     most of the protocols I've wanted to build on Bitcoin run into the
> > > >     limitation that stack values can not be concatenated. For instance
> > > >     TumbleBit would have far smaller transaction sizes if OP_CAT was
> > > >     supported in Bitcoin. If it happens to me as a researcher it is
> > > >     probably holding other people back as well. If I could wave a magic
> > > >     wand and turn on one of the disabled op codes it would be OP_CAT. Of
> > > >     course with the change that size of each concatenated value must be 
> > > >64
> > > >     Bytes or less.
> > >
> > > Why 64 bytes in particular?
> > >
> > > It seems obvious to me that this 64 bytes is most suited for building 
> > > Merkle trees, being the size of two SHA256 hashes.
> > >
> > > However we have had issues with the use of Merkle trees in Bitcoin blocks.
> > > Specifically, it is difficult to determine if a hash on a Merkle node is 
> > > the hash of a Merkle subnode, or a leaf transaction.
> > > My understanding is that this is the reason for now requiring 
> > > transactions to be at least 80 bytes.
> > >
> > > The obvious fix would be to prepend the type of the hashed object, i.e. 
> > > add at least one byte to determine this type.
> > > Taproot for example uses tagged hash functions, with a different tag for 
> > > leaves, and tagged hashes are just 
> > > prepend-this-32-byte-constant-twice-before-you-SHA256.
> > >
> > > This seems to indicate that to check merkle tree proofs, an `OP_CAT` with 
> > > only 64 bytes max output size would not be sufficient.
> > >
> > > Or we could implement tagged SHA256 as a new opcode...
> > >
> > > Regards,
> > > ZmnSCPxj
> > >
> > >
> > > >
> > > >     On Tue, Oct 1, 2019 at 10:04 PM ZmnSCPxj via bitcoin-dev
> > > >     bitcoin-dev@lists.linuxfoundation.org wrote:
> > > >
> > > >
> > > > > Good morning lists,
> > > > > Let me propose the below radical idea:
> > > > >
> > 
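For reference, the Taproot tagged-hash construction mentioned in the quoted discussion can be sketched as follows. The tag strings are the real BIP340/341 ones; the message bytes are purely illustrative.

```python
import hashlib

# BIP340-style tagged hash: the 32-byte SHA256 of the tag is prepended
# twice before hashing, which domain-separates leaf hashes from branch
# hashes (and keeps the prefix a whole 64-byte compression block).
def tagged_hash(tag: str, msg: bytes) -> bytes:
    tag_digest = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(tag_digest + tag_digest + msg).digest()

leaf = tagged_hash("TapLeaf", b"some leaf data")
branch = tagged_hash("TapBranch", b"some node data")

# The same bytes under different tags can never collide across domains.
assert leaf != tagged_hash("TapBranch", b"some leaf data")
```

Because the doubled tag digest fills exactly one 64-byte block, implementations can precompute its midstate once per tag, making the tagged hash no more expensive than a plain SHA256 of the message.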

Re: [bitcoin-dev] Smaller "Bitcoin address" accounts in the blockchain.

2019-10-04 Thread ZmnSCPxj via bitcoin-dev
Good morning David,

> Currently, bitcoin must be redeemed by providing input to a script which 
> results in the required output.  This causes the attached amount of bitcoin 
> to become available for use in the outputs of a transaction.  Is there any 
> work on creating a shorter "transaction" which, instead of creating a new 
> output, points to (creates a virtual copy of) an existing (unspent) output 
> with a larger amount attached to it?  This would invalidate the smaller, 
> earlier UTXO and replace it with the new one without requiring the earlier 
> one to be redeemed, and also without requiring the original script to be 
> duplicated.  It is a method for aggregating bitcoin to a UTXO which may 
> otherwise not be economically viable.
>
> The idea is that there already exists a script that must be satisfied to 
> spend X1, and if the owner of X1 would like to have the same requirements for 
> spending X2, this would be a transaction that does that using fewer data 
> bytes.  Since the script already exists, the transaction can simply point to 
> it instead of duplicating it.
>
> This would also enable the capacity of lightning channels to be increased on 
> the fly without closing the existing channel and re-opening a new one.  The 
> LN layer would have to cope with the possibility that the "short channel ID" 
> could change.
>
> Dave.

This moves us closer to an "account"-style rather than "UTXO"-style.
The advantage of UTXO-style is that it becomes easy to validate a transaction 
as valid when putting it into the mempool, and as long as the UTXO it consumes 
remains valid, revalidation of the transaction when it is seen in a block is 
unnecessary.

Admittedly, the issue with account-style is when the account is overdrawn --- 
with UTXOs every spend drains the entire "account" and the "account" 
subsequently is definitely no longer spendable, whereas with accounts, every 
fullnode has to consider what would happen if two or more transactions spend 
from the account.
In your case, it seems to just *add* to the amount of a UTXO.

In any case, this might not be easy to implement in current Bitcoin.
The UTXO-style is deeply ingrained to Bitcoin design, and cannot be easily 
hacked in a softfork.

See also 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-July/017135.html 
and its thread for the difficulties involved with "just copy some existing 
`scriptPubKey`" and why such a thing will be very unlikely to come in Bitcoin.


But I think this can be done, in spirit, by pay-to-endpoint / payjoin.

In P2EP/Payjoin, the payer contacts the payee and offers to coinjoin
simultaneously with the payment.
This does what you want:

* Refers to a previous UTXO owned by the payee, and deletes it (by normal
transaction spending rules).
* Creates a new UTXO, owned by the payee, which contains the total value of
the following:
  * The above old UTXO.
  * The value to be transferred from payer to payee.
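A toy model of the value flow just described, tracking amounts only (all names and numbers are illustrative; scripts and signatures are out of scope):

```python
# Payjoin value flow: both parties contribute an input; the payee's single
# new output aggregates their old UTXO plus the payment.
def payjoin(payer_utxo: int, payee_old_utxo: int, payment: int, fee: int):
    assert payer_utxo >= payment + fee, "payer input must cover payment + fee"
    payee_new = payee_old_utxo + payment
    payer_change = payer_utxo - payment - fee
    # Value conservation: inputs == outputs + fee.
    assert payer_utxo + payee_old_utxo == payee_new + payer_change + fee
    return payee_new, payer_change

# Example in satoshis: the payee's 50k UTXO is consumed and replaced by 70k.
assert payjoin(100_000, 50_000, 20_000, 500) == (70_000, 79_500)
```

An outside observer sees an ordinary two-input, two-output transaction, which is what gives payjoin its privacy advantage over script-reusing aggregation.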

The only issues are that:

* Payee has to be online and cooperate.
* Payee has to provide signatures for the old UTXO, adding more blockchain data.
* New UTXO has to publish a SCRIPT too.
  * In terms of *privacy*, of course you *have* to use a new SCRIPT with a new 
public key anyway.
Thus this is superior to your proposal where the pubkey is reused, as 
P2EP/Payjoin preserves privacy.


Regards,
ZmnSCPXj