Re: [Bitcoin-development] 75%/95% threshold for transaction versions

2015-04-26 Thread Joseph Poon
On Sun, Apr 26, 2015 at 03:01:10AM +0300, s7r wrote:
 It's true that malleability is not the end of the world, but it is
 annoying for contracts and micropayment channels, especially refunds
 spending the fund tx before it is even in the blockchain, relying
 solely on its txid.

Agreed, needing the transaction to be signed and broadcastable before the
refunds can be generated is similar to paying for a contract before the
terms have been decided.

  I think we can solve both by using NORMALIZEDTXID - wouldn't this be
  simpler and easier to implement? 

The current problem is that SIGHASH_NORMALIZED_TXID as presently
discussed implies stripping the sigScript, which is not sufficient for
the Lightning Network.

The currently discussed SIGHASH_NORMALIZED_TXID does not permit chained
transactions 2 levels deep, which is necessary for Lightning as well.
The path from Commitment -> HTLC -> Refund requires up to 3 levels of
chained transactions.

Suppose TxA -> TxB -> TxC -> TxD. All outputs are 2-of-2 multisig. TxA
has already entered into the blockchain, the rest have not yet been
broadcast. If TxB spends from TxA, it doesn't need new sighash flags, it
just does a plain SIGHASH_ALL. However, TxC needs
SIGHASH_NORMALIZED_TXID due to malleability risks.
SIGHASH_NORMALIZED_TXID works for TxC because the sigScript can change,
but because TxA's txid has already entered the blockchain, the parent's
input txids cannot change (with a high degree of certainty).

However, with TxD, the txid of TxB may be different, which will result
in an invalid transaction if SIGHASH_NORMALIZED_TXID only strips the
sigScript when obtaining the normalized txid of TxC. The reason is that
TxC's input txid referring to TxB has changed (TxC's input 0, which points to TxB)!

Therefore, a functional SIGHASH_NORMALIZED which permits chained
transactions requires the parent transaction's sigScript *AND* txid to
be stripped when determining the parent's normalized txid. Similar to
OP_CHECKSIG, part of the normalized txid includes each input's
scriptPubKey: e.g. TxC's normalized txid includes the TxB output script
it is spending, so when TxD signs TxC's normalized txid, it also commits
to TxB's output (this is a cheap way of increasing uniqueness, but not
an absolute necessity if it proves too difficult). All of this data
should be immediately available when validating the transaction and
appending it to the UTXO set.
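
To make this concrete, here is a minimal illustrative sketch of which fields
would feed such a normalized txid: the sigScripts and the parents' txids are
dropped, and the scriptPubKey being redeemed is mixed in instead. The types,
the serialization, and the std::hash stand-in for double-SHA256 are toy
assumptions, not Bitcoin's actual formats.

#include <cstdint>
#include <functional>
#include <string>
#include <vector>

struct ToyInput {
    std::string prevTxid;          // parent txid (malleable until confirmed)
    uint32_t prevN;                // index of the output being spent
    std::string sigScript;         // excluded from the normalized hash
    std::string spentScriptPubKey; // the parent output's script this input redeems
};

struct ToyOutput {
    int64_t value;
    std::string scriptPubKey;
};

struct ToyTx {
    std::vector<ToyInput> vin;
    std::vector<ToyOutput> vout;
};

// Normalized "txid": commits to the spent outputs only through their index and
// scriptPubKey, never through the parent's (malleable) txid or any sigScript.
size_t NormalizedTxid(const ToyTx& tx)
{
    std::string blob;
    for (const ToyInput& in : tx.vin) {
        // NOTE: in.prevTxid and in.sigScript are deliberately omitted.
        blob += std::to_string(in.prevN) + '|' + in.spentScriptPubKey + '|';
    }
    for (const ToyOutput& out : tx.vout) {
        blob += std::to_string(out.value) + '|' + out.scriptPubKey + '|';
    }
    return std::hash<std::string>{}(blob); // toy hash, stand-in for double-SHA256
}

Under a construction like this, TxD's signature over TxC's normalized txid
survives TxB being malleated, because TxB's txid never enters the hash.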

If the txid and sigScript are removed when building the normalized input
txid as part of the spend/signature, it should be possible for chained
transactions to work. However, this isn't absolute security against
replay attacks. If there are two spends with all inputs having the same
values *AND* the same scriptPubKeys per input, then it can be replayed.
The odds of this occurring seem like a sort of uncanny valley of risk:
low enough that it shouldn't ever happen, which may result in a lack of
documentation, so when it does happen it'll be a big surprise. So, even
if this safer method becomes a softfork, perhaps great care should be
taken before making it a default method of spending when the sighash
flag is not an absolute necessity (i.e. don't do it! I'm all in favor of
giving this a scary name so developers won't inadvertently think "hey,
normalization sounds like a good thing to do").

That said, it should cover an overwhelming majority of potential
replays; it's nearly impossible to create a duplicate replayable tx of
someone *else's* send, since the potentially replayable transaction
signs the sigScript of the redeemed output.

As a side note, SIGHASH_NORMALIZED does not permit spending from any
transaction, which would be desirable for the Lightning Network (HTLCs
may persist in new Commitment Transactions). However, this is merely a
nice-to-have and not an absolute necessity; there is no significant loss
of functionality, merely some slight slowdown from significantly more
signatures. For Lightning in particular, the effect would probably be
batching Commitment Transactions (e.g. 1 mass update per second per
channel), with the only major discernible penalty being an order of
magnitude more signature storage.

Additionally, I think it was Mark Friedenbach who brought up that
SIGHASH_NORMALIZED creates significant complexities with the need for an
additional hash with every UTXO (almost doubling the UTXO set size), and
with nodes which already have UTXO pruning enabled, it'll require
downloading the entire blockchain. I'm not sure whether this problem is
insurmountable, but if a normalized sighash becomes the best candidate
for a malleability soft-fork, then sooner may be better than later, as
more nodes start using the pruning patch.


 Why are we talking about P3SH when we can just upgrade
 P2SH to support additional OP codes? 

Assuming you mean the current P2SH scriptPubKey format, it's not
possible to do so while making it a soft fork. If you use OP_EQUAL,
current nodes will treat P3SH transactions as P2SH ones.

I'm in favor 

Re: [Bitcoin-development] Fwd: Reusable payment codes

2015-04-26 Thread Justus Ranvier
Payment codes establish the identity of the payer and allow for simpler
methods for identifying the payee, and automatically provide the payee with
the information they need to send a refund.

If merchants and customers were using payment codes, they would not need
the BIP70 equivalents.

I think the best way to explain payment codes is that they add the
missing "from address" to transactions, which users want but we've had
to tell them they can't have.

A payment code behaves much more like an email address than a traditional
Bitcoin address.

On Sun, Apr 26, 2015 at 2:58 PM, Mike Hearn m...@plan99.net wrote:

 Could you maybe write a short bit of text comparing this approach to
 extending BIP70 and combining it with a simple Subspace style
 store-and-forward network?



Re: [Bitcoin-development] 75%/95% threshold for transaction versions

2015-04-26 Thread Joseph Poon
On Sat, Apr 25, 2015 at 11:51:37PM -0700, Joseph Poon wrote:
 signs the sigScript of the redeemed output.

Err, typo, I meant:
... signs the *scriptPubKey* of the redeemed output.

-- 
Joseph Poon



Re: [Bitcoin-development] Relative CHECKLOCKTIMEVERIFY (was CLTV proposal)

2015-04-26 Thread Jorge Timón
On Sun, Apr 26, 2015 at 1:35 PM, Jorge Timón jti...@jtimon.cc wrote:
 There's another possibility that could keep the utxo out of Script 
 verification:

 class CTxIn
 {
 public:
     COutPoint prevout;
     CScript scriptSig;
     uint32_t nSequence;
 };

 could turn into:

 class CTxIn
 {
 public:
     COutPoint prevout;
     CScript scriptSig;
     uint32_t nHeight;
 };

 And a new softfork rule could enforce that all new CTxIn set nHeight
 to the correct height in which its corresponding prevout got into the
 chain.
 That would remove the need for the TxOutputGetter param in
 bitcoinconsensus_verify_script, but unfortunately it is not reorg safe
 (apart from other ugly implementation details).

Wait, wait, this can be made reorg-safe and more backwards compatible.
The new validation rule at the tx validation level (currently in
main::CheckInputs()) would be

for (unsigned int i = 0; i < tx.vin.size(); i++) {
    // ...
    if (tx.vin[i].nHeight + 100 > tx.nLockTime)
        return state.Invalid(false, REJECT_INVALID,
                             "bad-txns-vin-height-reorg-unsafe");
    if (coins->nHeight > tx.vin[i].nHeight)
        return state.Invalid(false, REJECT_INVALID,
                             "bad-txns-vin-height-false");
    // ...
}

Existing transactions that have used the deprecated CTxIn::nSequence
for something else will be fine if they've used low nSequence values.
The only concern would be breaking some colored coins kernels, but there
are many others implemented that don't rely on CTxIn::nSequence.

Transactions that want to use OP_MATURITY just have to set the
corresponding CTxIn::nHeight and CTransaction::nLockTime properly.
This way op_maturity wouldn't require anything from the utxo and the
final interface could be:

 int bitcoinconsensus_verify_script(const unsigned char* scriptPubKey,
unsigned int scriptPubKeyLen,
const unsigned char* txTo,
unsigned int txToLen,
unsigned int nIn, unsigned int nHeight,
unsigned int flags,
secp256k1_context_t* ctx,
bitcoinconsensus_error* err);
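
As an illustration of what "setting CTxIn::nHeight and CTransaction::nLockTime
properly" could mean under the CheckInputs() rule above, here is a rough
wallet-side sketch; the toy types and the FinalizeHeights helper are
hypothetical, not part of any proposal.

#include <algorithm>
#include <cstdint>
#include <vector>

struct ToyTxIn {
    uint32_t nHeight; // proposed field: height at which the referenced prevout was mined
};

struct ToyTx {
    std::vector<ToyTxIn> vin;
    uint32_t nLockTime;
};

// Fills in each input's nHeight and raises nLockTime so that every input
// satisfies nHeight + 100 <= nLockTime: by the time the transaction can be
// mined, every claimed prevout height is buried at least 100 blocks deep.
void FinalizeHeights(ToyTx& tx, const std::vector<uint32_t>& prevoutHeights)
{
    uint32_t minLockTime = 0;
    tx.vin.clear();
    for (uint32_t h : prevoutHeights) {
        tx.vin.push_back(ToyTxIn{h});
        minLockTime = std::max(minLockTime, h + 100u);
    }
    // Keep any higher locktime the wallet already wanted.
    tx.nLockTime = std::max(tx.nLockTime, minLockTime);
}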



Re: [Bitcoin-development] Relative CHECKLOCKTIMEVERIFY (was CLTV proposal)

2015-04-26 Thread Jorge Timón
On Tue, Apr 21, 2015 at 9:59 AM, Peter Todd p...@petertodd.org wrote:
 Thus we have a few possibilities:

 1) RCLTV against nLockTime

 Needs a minimum age > COINBASE_MATURITY to be safe.


 2) RCLTV against current block height/time

 Completely reorg safe.

Yes, can we call this one OP_MATURITY to distinguish it from RCLTV?

 3) GET_TXOUT_HEIGHT/TIME diff ADD CLTV

 To be reorg safe GET_TXOUT_HEIGHT/TIME must fail if minimum age <
 COINBASE_MATURITY. This can be implemented by comparing against
 nLockTime.

Mhmm, interesting.

 All three possibilities require us to make information about the
 prevout's height/time available to VerifyScript(). The only question is
 if we want VerifyScript() to also take the current block height/time - I
 see no reason why it can't. As for the mempool, keeping track of what
 transactions made use of these opcodes so they can be reevaluated if
 their prevouts are re-organised seems fine to me.

I'm totally fine with changing the interface to:

 int bitcoinconsensus_verify_script(const unsigned char *scriptPubKey, unsigned int scriptPubKeyLen,
                                    const unsigned char *txTo, unsigned int txToLen,
                                    unsigned nHeight, unsigned int nIn,
                                    unsigned int flags, bitcoinconsensus_error* err);

I prefer op_maturity over RCLTV and there are also gains for absolute
CLTV as you explain later.
When you validate the script inputs of a transaction you already have
a height: either the real final nHeight in ConnectBlock and in the
miner, or nSpendHeight in AcceptToMemoryPool.
The costs are negligible in my opinion, especially when we will
already have to change the interface to add libsecp256k1's context.
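
For what it's worth, once the validation height is handed to VerifyScript(),
the core relative-maturity test is just a subtraction and a comparison. A
minimal sketch; the name CheckRelativeMaturity and its signature are
placeholders, not a proposed spec.

#include <cstdint>

// Returns true if the prevout is buried at least nRequiredMaturity blocks at
// the height where the spending transaction is being validated (ConnectBlock's
// nHeight or AcceptToMemoryPool's nSpendHeight).
bool CheckRelativeMaturity(uint32_t nPrevoutHeight, uint32_t nValidationHeight,
                           uint32_t nRequiredMaturity)
{
    if (nValidationHeight < nPrevoutHeight)
        return false; // prevout claims to be from the future
    return nValidationHeight - nPrevoutHeight >= nRequiredMaturity;
}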

I'm infinitely more worried about the other assumption that the 3
solutions are already making.
Changing to

 int bitcoinconsensus_verify_script(const unsigned char *scriptPubKey, unsigned int scriptPubKeyLen,
                                    const unsigned char *txTo, unsigned int txToLen,
                                    const CCoinsViewCache& inputs,
                                    unsigned int nIn, unsigned int flags,
                                    bitcoinconsensus_error* err);

is simply not possible because CCoinsViewCache is a C++ class.
You could solve it in a similar way to how you could solve that
dependency for VerifyTransaction.
For example:

typedef const CTxOut (*TxOutputGetter)(const uint256& txid, uint32_t n);

  int bitcoinconsensus_verify_script(const unsigned char *scriptPubKey, unsigned int scriptPubKeyLen,
                                     const unsigned char *txTo, unsigned int txToLen,
                                     TxOutputGetter utxoGetter,
                                     unsigned int nIn, unsigned int flags,
                                     bitcoinconsensus_error* err);

Of course, this is assuming that CTxOut becomes a C struct instead of
a C++ class and little things like that.
In terms of code encapsulation, this is still 100 times uglier than
adding the nHeight, so if we're doing it, yes, please, let's do both.
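
As a rough sketch of how a caller could satisfy such a callback, assuming toy
stand-ins for uint256 and CTxOut (the real types would need C-compatible
definitions, as noted) and a plain map standing in for the node's UTXO view:

#include <cstdint>
#include <map>
#include <string>
#include <utility>

struct uint256 { std::string hex; };                         // toy stand-in
struct CTxOut { int64_t nValue; std::string scriptPubKey; }; // toy stand-in

typedef const CTxOut (*TxOutputGetter)(const uint256& txid, uint32_t n);

// The embedding node would back this with its real UTXO view.
static std::map<std::pair<std::string, uint32_t>, CTxOut> g_utxo;

const CTxOut GetUtxo(const uint256& txid, uint32_t n)
{
    return g_utxo[std::make_pair(txid.hex, n)];
}

int main()
{
    g_utxo[std::make_pair(std::string("aa"), uint32_t(0))] =
        CTxOut{5000000000LL, "OP_DUP OP_HASH160 ... OP_EQUALVERIFY OP_CHECKSIG"};
    TxOutputGetter getter = &GetUtxo;
    const CTxOut out = getter(uint256{"aa"}, 0); // the library could invoke the getter like this
    return out.nValue > 0 ? 0 : 1;
}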

There's another possibility that could keep the utxo out of Script verification:

class CTxIn
{
public:
    COutPoint prevout;
    CScript scriptSig;
    uint32_t nSequence;
};

could turn into:

class CTxIn
{
public:
    COutPoint prevout;
    CScript scriptSig;
    uint32_t nHeight;
};

And a new softfork rule could enforce that all new CTxIn set nHeight
to the correct height in which its corresponding prevout got into the
chain.
That would remove the need for the TxOutputGetter param in
bitcoinconsensus_verify_script, but unfortunately it is not reorg safe
(apart from other ugly implementation details).

So, in summary, I think the new interface has to be something along these lines:

  int bitcoinconsensus_verify_script(const unsigned char *scriptPubKey, unsigned int scriptPubKeyLen,
                                     const unsigned char *txTo, unsigned int txToLen,
                                     unsigned int nIn,
                                     TxOutputGetter utxoGetter, unsigned nHeight,
                                     secp256k1_context_t *ctx,
                                     unsigned int flags,
                                     bitcoinconsensus_error* err);

 Time-based locks
 

 Do we want to support them at all? May cause incentive issues with
 mining, see #bitcoin-wizards discussion, Jul 17th 2013:

 https://download.wpsoftware.net/bitcoin/wizards/2013/07/13-07-17.log

I'm totally fine not supporting time-based locks for the new operators.
Removing them from the regular nLockTime could be more complicated but
I wouldn't mind either.
Every time I think of a contract or protocol that involves time, I do
it in terms of block heights.
I would prefer to change all my clocks to work in blocks instead of
minutes over changing nHeights for timestamps in any of those
contracts.

 --
 'peter'[:-1]@petertodd.org
 015e09479548c5b63b99a62d31b019e6479f195bf0cbd935


Re: [Bitcoin-development] Fwd: Reusable payment codes

2015-04-26 Thread Mike Hearn
Could you maybe write a short bit of text comparing this approach to
extending BIP70 and combining it with a simple Subspace style
store-and-forward network?


Re: [Bitcoin-development] Proof of Payment

2015-04-26 Thread Tom Harding
On 4/22/2015 1:03 PM, Kalle Rosenbaum wrote:

 I've built a proof-of-concept for Proof of Payment. It's available at
 http://www.rosenbaum.se:8080. The site contains links to the source
 code for both the server and a Mycelium fork, as well as pre-built APKs.


 There are several scenarios in which it would be useful to prove that you have paid for something. For example:
 - A pre-paid hotel room where your PoP functions as a key to the door.
 - An online video rental service where you pay for a video and watch it on any device.
 - An ad-sign where you pay in advance for e.g. 2-weeks exclusivity. During this period you can upload new content to the sign whenever you like using PoP.
 - A lottery where all participants pay to the same address, and the winner of the T-shirt is selected among the transactions to that address. You exchange the T-shirt for a PoP for the winning transaction.


Kalle,

You propose a standard format for proving that wallet-controlled funds
COULD HAVE BEEN spent as they were in a real transaction.  Standardized
PoP would give wallets a new way to communicate with the outside world.

PoP could allow payment and delivery to be separated in time in a
standard way, without relying on a mechanism external to bitcoin's
cryptosystem, and enable standardized real-world scenarios where sender
!= beneficiary, and/or receiver != provider.

Payment:
sender -> receiver

Delivery:
beneficiary -> provider

Some more use cases might be:
Waiting in comfort:
 - Send a payment ahead of time, then wander over and collect the goods
after X confirmations.

Authorized pickup:
 - Hot wallet software used by related people could facilitate the use
of 1 of N multisig funds.  Any one of the N wallets could collect goods
and services purchased by any of the others.

Non-monetary gifts:
 - Sender exports spent keys to a beneficiary, enabling PoP to work as a
gift claim.

Contingent services:
 - Without Bob's permission, a 3rd party conditions action on a payment
made from Alice to Bob.  For example: if you donated at least .02 BTC to
Dorian, you (or, combining scenarios, any of your N authorized family
members) can come to my dinner party.

I tried out your demo wallet and service and it worked as advertised.

Could the same standard also be used to prove that a transaction COULD
BE created?  To generalize the concept beyond actual payments, you could
call it something like "proof of payment potential".

Why not make these proofs permanently INVALID transactions, to remove
any possibility of their being mined and spending everything to fees
when used in this way, and also in cases involving reorganizations?

I agree that PoP seems complementary to BIP70.


