Re: [Bitcoin-development] Block Size Increase

2015-05-13 Thread Oliver Egginger
08.05.2015 at 5:49 Jeff Garzik wrote:
 To repeat, the very first point in my email reply was: Agree that 7 tps
 is too low  

For interbank trading that would maybe be enough, but I don't know.

I'm not a developer, but as a (former) user and computer scientist I'm
also asking myself: what is the core of the problem? Personally, for
privacy reasons I do not want to leave a footprint in the blockchain for
each pizza. And why should this expense be good for trivial things of
everyday life?

If one encounters the block boundary, he or she will either make more
effort or give up. I'm thinking most people will give up because their
transactions are not really economical. It is much better for them to
use third parties (or another payment system).

And that's where we are at the heart of the problem: the Bitcoin
third-party economy. With few exceptions this is pure horror. Worse
than any used car dealer. And the community just waits for things to get
better. But that will never happen of its own accord. We are living in a
Wild West Town. So we need a Sheriff and many other things.

We need a small but well-functioning economy around the blockchain. To
create one, we have to accept a few unpleasant truths. I do not know if
the community is ready for it.

Nevertheless, I know that some companies do a good job. But they have to
prevail against their dishonest competitors.

People take advantage of the blockchain, because they no longer trust
anyone. But this will not scale in the long run.

- oliver








--
One dashboard for servers and applications across Physical-Virtual-Cloud 
Widest out-of-the-box monitoring support with 50+ applications
Performance metrics, stats and reports that give you Actionable Insights
Deep dive visibility with transaction tracing using APM Insight.
http://ad.doubleclick.net/ddm/clk/290420510;117567292;y
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Long-term mining incentives

2015-05-13 Thread Tier Nolan
On Wed, May 13, 2015 at 10:49 AM, Thomas Voegtlin thom...@electrum.org
wrote:


 The reason I am asking that is, there seems to be no consensus among
 core developers on how Bitcoin can work without miner subsidy. How it
 *will* work is another question.


The position seems to be that it will continue to work for the time being,
so there is still time for more research.

Proof of stake has problems with handling long term reversals.  The main
proposal is to slightly weaken the security requirements.

With POW, a new node only needs to know the genesis block (and network
rules) to fully determine which of two chains is the strongest.
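
The chain-selection rule described here (most cumulative work wins) can be sketched as follows. The compact "nBits" decoding matches Bitcoin's encoding, but representing a header chain as a bare list of nBits values is a deliberate simplification for illustration:

```python
# Sketch: rank two candidate chains by cumulative proof of work, which
# is all a new node needs besides the genesis block and network rules.

def bits_to_target(bits: int) -> int:
    """Decode the compact 'nBits' difficulty encoding into a target."""
    exponent = bits >> 24
    mantissa = bits & 0xFFFFFF
    return mantissa << (8 * (exponent - 3))

def chain_work(headers: list[int]) -> int:
    """Sum expected hashes per block: work = 2**256 // (target + 1)."""
    return sum(2**256 // (bits_to_target(bits) + 1) for bits in headers)

def strongest(chain_a: list[int], chain_b: list[int]) -> str:
    """Pick the chain with more total work, not more blocks."""
    return "A" if chain_work(chain_a) >= chain_work(chain_b) else "B"

# Three minimum-difficulty blocks lose to one block at 256x difficulty:
print(strongest([0x1d00ffff] * 3, [0x1c00ffff]))  # prints "B"
```

Note that the longer chain loses here: work, not block count, decides.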

Penalties for abusing POS inherently create a time horizon.  A suggested
POS security model would assume that a full node is a node that resyncs
with the network regularly (every N blocks).  N would depend on the
network rules of the coin.

The alternative is that 51% of the holders of coins at the genesis block
can rewrite the entire chain.  The genesis block might not be the first
block; a POS coin might still use POW for minting.

https://blog.ethereum.org/2014/11/25/proof-stake-learned-love-weak-subjectivity/


Re: [Bitcoin-development] Long-term mining incentives

2015-05-13 Thread Tier Nolan
On Wed, May 13, 2015 at 11:31 AM, Alex Mizrahi alex.mizr...@gmail.com
wrote:


 But this matters if a new node has access to the globally strongest chain.


A node only needs a path of honest nodes to the network.

If a node is connected to 99 dishonest nodes and 1 honest node, it can
still sync with the main network.


 In practice, Bitcoin already embraces weak subjectivity e.g. in form of
 checkpoints embedded into the source code. So it's hard to take PoW purists
 seriously.


That isn't why checkpoints exist.  They exist to prevent a disk-consumption
DOS attack.

They also allow verification to go faster.  Signature operations are
assumed to be correct without checking if they are in blocks before the
last checkpoint.

They do protect against multi-month forks though, even if that is not the
reason they exist.

If releases happen every 6 months, and the checkpoint is 3 months deep at
release, then for the average node, the checkpoint is 3 to 9 months old.

A 3 month reversal would be devastating, so the checkpoint isn't adding
much extra security.

With headers first downloading, the checkpoints could be removed.  They
could still be used for speeding up verification of historical blocks.
Blocks behind the last checkpoint wouldn't need their signatures checked.

Removing them could cause a hard-fork though, so maybe they could be
defined as legacy artifacts of the blockchain.  Future checkpoints could be
advisory.


Re: [Bitcoin-development] Proposed additional options for pruned nodes

2015-05-13 Thread Tier Nolan
On Wed, May 13, 2015 at 6:19 AM, Daniel Kraft d...@domob.eu wrote:

 2) Divide the range of all blocks into intervals with exponentially
 growing size.  I. e., something like this:

 1, 1, 2, 2, 4, 4, 8, 8, 16, 16, ...


Interesting.  This can be combined with the system I suggested.

A node broadcasts 3 pieces of information

Seed (16 bits): This is the seed
M_bits_lsb (1 bit):  Used to indicate M during a transition
N (7 bits):  This is the count of the last range held (or partially held)

M = 1 << M_bits

M should be set to the lowest power of 2 greater than double the block
chain height

That gives M = 1 million at the moment.  During changing M, some nodes will
be using the higher M and others will use the lower M.

The M_bits_lsb field allows those to be distinguished.

As the block height approaches 512k, nodes can begin to upgrade.  For a
period around block 512k, some nodes could use M = 1 million and others
could use M = 2 million.

Assuming M is around 3 times higher than the block height, the odds of
a start being less than the block height are around 35%.  If the run
sizes grow by 25% each step, that is approximately a doubling for each hit.

Size(n) = ((4 + (n & 0x3)) << (n >> 2)) * 2.5MB

This gives an exponential increase, but groups of 4 are linearly
interpolated.


*Size(0) = 10 MB*
Size(1) = 12.5MB
Size(2) = 15 MB
Size(3) = 17.5MB
Size(4) = 20MB

*Size(5) = 25MB*
Size(6) = 30MB
Size(7) = 35MB

*Size(8) = 40MB*

Start(n) = Hash(seed + n) mod M

A node should store as much of its last run as possible.  Assuming starts
0, 5, and 8 were hits and the node has a max size of 60MB: it can store
runs 0 and 5 and have 25MB left.  That isn't enough to store all of run 8,
but it should store 25MB of the blocks in run 8 anyway.

Size(127) = pow(2, 31) * 17.5MB = 35,840 TB (the maximum, since N is 7 bits)
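
The run-size and run-start rules above can be sketched directly. The field widths are taken from the proposal; the specific hash (SHA-256 over seed + n) is an assumption, since the email only says Hash(seed + n):

```python
# Sketch of the proposed run-size/run-start scheme for pruned nodes.
import hashlib

def run_size(n: int) -> float:
    """Size(n) = ((4 + (n & 0x3)) << (n >> 2)) * 2.5, in MB.
    Exponential growth, with groups of 4 linearly interpolated."""
    return (4 + (n & 0x3)) * (1 << (n >> 2)) * 2.5

def run_start(seed: int, n: int, m: int) -> int:
    """Start(n) = Hash(seed + n) mod M (hash choice is an assumption)."""
    h = hashlib.sha256((seed + n).to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") % m

print(run_size(0))    # 10.0 MB
print(run_size(5))    # 25.0 MB
print(run_size(8))    # 40.0 MB
print(run_size(127))  # 37580963840.0 MB, i.e. 2**31 * 17.5MB = 35,840 TB
```

The last line reproduces the maximum-run figure quoted in the text, since N is a 7-bit field (0..127).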

Decreasing N only causes previously accepted runs to be invalidated.

When a node approaches a transition point for N, it would select a block
height within 25,000 of the transition point.  Once it reaches that block,
it will begin downloading the new runs that it needs.  When updating, it
can set N to zero.  This spreads out the upgrade (over around a year), with
only a small number of nodes upgrading at any time.

New nodes should use the higher M, if near a transition point (say within
100,000).


Re: [Bitcoin-development] Long-term mining incentives

2015-05-13 Thread Thomas Voegtlin

Le 12/05/2015 18:10, Gavin Andresen a écrit :
 Added back the list, I didn't mean to reply privately:
 
 Fair enough, I'll try to find time in the next month or three to write up
 four plausible future scenarios for how mining incentives might work:
 
 1) Fee-supported with very large blocks containing lots of tiny-fee
 transactions
 2) Proof-of-idle supported (I wish Tadge Dryja would publish his
 proof-of-idle idea)
 3) Fees purely as transaction-spam-prevention measure, chain security via
 alternative consensus algorithm (in this scenario there is very little
 mining).
 4) Fee supported with small blocks containing high-fee transactions moving
 coins to/from sidechains.
 
 Would that be helpful, or do you have some reason for thinking that we
 should pick just one and focus all of our efforts on making that one
 scenario happen?
 
 I always think it is better, when possible, not to bet on one horse.
 

Sorry if I did not make myself clear. It is not about betting on one
single horse, or about making one particular scenario happen. It is not
about predicting whether something else will replace PoW in the future,
and I am in no way asking you to focus your efforts in one particular
direction at the expenses of others. Various directions will be explored
by various people, and that's great.

I am talking about what we know today. I would like an answer to the
following question: Do we have a reason to believe that Bitcoin can work
in the long run, without involving technologies that have not been
invented yet? Is there a single scenario that we know could work?

Exotic and unproven technologies are not an answer to that question. The
reference scenario should be as boring as possible, and as verifiable as
possible. I am not asking what you think is the most likely to happen,
but what is the most likely to work, given the knowledge we have today.

If I was asking: Can we send humans to the moon by 2100?, I guess your
answer would be: Yes we can, because it has been done in the past with
chemical rockets, and we know how to build them. You would probably not
use a space elevator in your answer.

The reason I am asking that is, there seems to be no consensus among
core developers on how Bitcoin can work without miner subsidy. How it
*will* work is another question.



Re: [Bitcoin-development] Long-term mining incentives

2015-05-13 Thread Alex Mizrahi
 With POW, a new node only needs to know the genesis block (and network
 rules) to fully determine which of two chains is the strongest.


But this matters if a new node has access to the globally strongest chain.
If attacker is able to block connections to legitimate nodes, a new node
will happily accept attacker's chain.

So PoW, by itself, doesn't give strong security guarantees. This problem is
so fundamental people avoid talking about it.

In practice, Bitcoin already embraces weak subjectivity e.g. in form of
checkpoints embedded into the source code. So it's hard to take PoW purists
seriously.


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-13 Thread Tier Nolan
On Sat, May 9, 2015 at 4:36 AM, Gregory Maxwell gmaxw...@gmail.com wrote:

 An example would
 be tx_size = MAX( real_size >> 1,  real_size + 4*utxo_created_size -
 3*utxo_consumed_size).


This could be implemented as a soft fork too.

* 1MB hard size limit
* 900kB soft limit

S = block size
U = UTXO_adjusted_size = S + 4 * outputs - 3 * inputs

A block is valid if S < 1MB and U < 1MB

A 250 byte transaction with 2 inputs and 2 outputs would have an adjusted
size of 252 bytes.

The memory pool could be sorted by fee per adjusted_size.
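
A minimal sketch of the adjusted-size rule, using only the counting rule U = S + 4*outputs - 3*inputs from the proposal (per-input and per-output byte counts are abstracted away):

```python
# Sketch of the UTXO-adjusted size soft-fork rule described above.

def adjusted_size(real_size: int, outputs: int, inputs: int) -> int:
    """U = S + 4*outputs - 3*inputs: penalize UTXO growth, reward spends."""
    return real_size + 4 * outputs - 3 * inputs

def block_valid(block_size: int, outputs: int, inputs: int,
                hard_limit: int = 1_000_000) -> bool:
    """Valid only if both the raw size and the adjusted size fit."""
    u = adjusted_size(block_size, outputs, inputs)
    return block_size < hard_limit and u < hard_limit

# The 250-byte, 2-input/2-output transaction example from the text:
print(adjusted_size(250, outputs=2, inputs=2))  # 252
```

A block that creates many more outputs than it consumes can thus exceed the limit even when its raw size fits, which is the intended incentive.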

 Coin selection could be adjusted so it tries to have at least 2 inputs
when creating transactions, unless the input is worth more than a threshold
(say 0.001 BTC).

This is a pretty weak incentive, especially if the block size is
increased.  Maybe it will cause a nudge.


Re: [Bitcoin-development] Block Size Increase

2015-05-13 Thread Angel Leon
 Personally, for privacy reasons I do not want to leave a footprint in the
blockchain for each pizza. And  why should this expense be good for trivial
things of everyday life?

Then what's the point?
Isn't this supposed to be an open transactional network? It doesn't matter
if you don't want that; what matters is what people want to do with it, and
there's nothing you can do to stop someone from opening a wallet and buying
a pizza with it. Except for the core of the problem you ask yourself about,
which is: the minute this goes mainstream and people get their wallets out,
the whole thing will collapse, regardless of what you want the blockchain
for.

Why talk about the billions of unbanked and all the romantic vision if you
can't let them use their money however they want in a decentralized
fashion? Otherwise, let's just go back to centralized banking, because the
minute you want to put things off chain, you need an organization that will
need to respond to government regulation, and that's the end of the
billions of unbanked being part of the network.


http://twitter.com/gubatron

On Wed, May 13, 2015 at 6:37 AM, Oliver Egginger bitc...@olivere.de wrote:

 08.05.2015 at 5:49 Jeff Garzik wrote:
  To repeat, the very first point in my email reply was: Agree that 7 tps
  is too low

 For interbank trading that would maybe be enough, but I don't know.

 I'm not a developer, but as a (former) user and computer scientist I'm
 also asking myself: what is the core of the problem? Personally, for
 privacy reasons I do not want to leave a footprint in the blockchain for
 each pizza. And why should this expense be good for trivial things of
 everyday life?

 If one encounters the block boundary, he or she will either make more
 effort or give up. I'm thinking most people will give up because their
 transactions are not really economical. It is much better for them to
 use third parties (or another payment system).

 And that's where we are at the heart of the problem: the Bitcoin
 third-party economy. With few exceptions this is pure horror. Worse
 than any used car dealer. And the community just waits for things to get
 better. But that will never happen of its own accord. We are living in a
 Wild West Town. So we need a Sheriff and many other things.

 We need a small but well-functioning economy around the blockchain. To
 create one, we have to accept a few unpleasant truths. I do not know if
 the community is ready for it.

 Nevertheless, I know that some companies do a good job. But they have to
 prevail against their dishonest competitors.

 People take advantage of the blockchain, because they no longer trust
 anyone. But this will not scale in the long run.

 - oliver












Re: [Bitcoin-development] Long-term mining incentives

2015-05-13 Thread Alex Mizrahi
Let's consider a concrete example:

1. User wants to accept Bitcoin payments, as his customers want this.
2. He downloads a recent version of Bitcoin Core, checks hashes and so on.
(Maybe even builds from source.)
3. Lets it sync for several hours or days.
4. After the wallet is synced, he gives his address to the customer.
5. Customer pays.
6. User waits 10 confirmations and ships the goods. (Suppose it's something
very expensive.)
7. Some time later, user wants to convert some of his bitcoins to dollars.
He sends his bitcoins to an exchange but they never arrive.

He tries to investigate, and after some time discovers that his router (or
his ISP's router) was hijacked. His Bitcoin node couldn't connect to any of
the legitimate nodes, and thus got a complete fake chain from the attacker.
Bitcoins he received were totally fake.

Bitcoin Core did a shitty job and confirmed some fake transactions.
User doesn't care that *if* his network was not impaired, Bitcoin Core
*would have* worked properly.
The main duty of Bitcoin Core is to check whether transactions are
confirmed, and if it can be fooled by a simple router hack, then it does
its job poorly.

If you don't see it being a problem, you shouldn't be allowed to develop
anything security-related.

If a node is connected to 99 dishonest nodes and 1 honest node, it can
 still sync with the main network.


Yes, it is good against Sybil attack, but not good against a network-level
attack.
Attack on user's routers is a very realistic, plausible attack.
Imagine if SSL could be hacked by hacking a router, would people still use
it?

Fucking no.


 A 3 month reversal would be devastating, so the checkpoint isn't adding
 much extra security.


Without checkpoints an attacker could prepare a fork for $10.
With checkpoints, it would cost him at least $1000, but more likely upwards
of $10.
That's quite a difference, no?

I do not care what you think about the reasons why checkpoints were
added, but it is a fact that they make the attack scenario I describe above
hard to impossible.

Without checkpoints, you could perform this attack using a laptop.
With checkpoints, you need access to significant amounts of mining ASICs.


Re: [Bitcoin-development] [BIP] Normalized Transaction IDs

2015-05-13 Thread Tier Nolan
I think this is a good way to handle things, but as you say, it is a hard
fork.

CHECKLOCKTIMEVERIFY covers many of the use cases, but it would be nice to
fix malleability once and for all.

This has the effect of doubling the size of the UTXO database.  At minimum,
there needs to be a legacy txid to normalized txid map in the database.

An addition to the BIP would eliminate the need for the 2nd index.  You
could require a SPV proof of the spending transaction to be included with
legacy transactions.  This would allow clients to verify that the
normalized txid matched the legacy id.

The OutPoint would be {LegacyId | SPV Proof to spending tx  | spending tx |
index}.  This allows a legacy transaction to be upgraded.  OutPoints which
use a normalized txid don't need the SPV proof.

The hard fork would be followed by a transitional period, in which both
txids could be used.  Afterwards, legacy transactions have to have the SPV
proof added.  This means that old transactions with locktimes years in the
future can be upgraded for spending, without nodes needing to maintain two
indexes.


Re: [Bitcoin-development] Long-term mining incentives

2015-05-13 Thread Tier Nolan
On Wed, May 13, 2015 at 1:26 PM, Alex Mizrahi alex.mizr...@gmail.com
wrote:

 He tries to investigate, and after some time discovers that his router (or
 his ISP's router) was hijacked. His Bitcoin node couldn't connect to any of
 the legitimate nodes, and thus got a complete fake chain from the attacker.
 Bitcoins he received were totally fake.

 Bitcoin Core did a shitty job and confirmed some fake transactions.


I don't really see how you can protect against total isolation of a node
(POS or POW).  You would need to find an alternative route for the
information.

Even encrypted connections are pointless without authentication of who you
are communicating with.

Again, it is part of the security model that you can connect to at least
one honest node.

Someone tweeted all the bitcoin headers at one point.  The problem is that
if everyone uses the same check, then that source can be compromised.

 Without checkpoints an attacker could prepare a fork for $10.
 With checkpoints, it would cost him at least $1000, but more likely
upwards of $10.
 That's quite a difference, no?

Headers first mean that you can't knock a synced node off the main chain
without winning the POW race.

Checkpoints can be replaced with a minimum amount of POW for initial sync.
This prevents spam of low-POW blocks.  Once a node is on a chain with at
least that much POW, it considers it the main chain.
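
As a toy illustration of this minimum-work rule (the threshold constant here is purely hypothetical; Bitcoin Core's real analogue would be a compiled-in value):

```python
# Sketch: during initial sync, ignore header chains below a work floor,
# so cheap low-POW spam chains never become sync candidates.

MIN_SYNC_WORK = 2**70  # hypothetical threshold, well below the real chain

def accept_for_sync(chain_cumulative_work: int) -> bool:
    """A header chain is eligible as the main chain only above the floor."""
    return chain_cumulative_work >= MIN_SYNC_WORK

print(accept_for_sync(2**40))  # False: spam chain mined on a laptop
print(accept_for_sync(2**75))  # True: serious chain, compete on work
```

Unlike a checkpoint, this never pins a specific block hash, so it cannot force a node onto a particular branch; it only filters out trivially cheap chains.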


Re: [Bitcoin-development] [BIP] Normalized Transaction IDs

2015-05-13 Thread Gavin Andresen
I think this needs more details before it gets a BIP number; for example,
which opcodes does this affect, and how, exactly, does it affect them? Is
the merkle root in the block header computed using normalized transaction
ids or legacy ids?

I think there might actually be two or three or four BIPs here:

 + Overall what is trying to be accomplished
 + Changes to the OP_*SIG* opcodes
 + Changes to the bloom-filtering SPV support
 + ...eventually, hard fork rollout plan

I also think that it is a good idea to have actually implemented a proposal
before getting a BIP number. At least, I find that actually writing the
code often turns up issues I hadn't considered when thinking about the
problem at a high level. And I STRONGLY believe BIPs should be descriptive
(here is how this thing works) not prescriptive (here's how I think we
should all do it).

Finally: I like the idea of moving to a normalized txid. But it might make
sense to bundle that change with a bigger change to OP_CHECKSIG; see Greg
Maxwell's excellent talk about his current thoughts on that topic:
  https://www.youtube.com/watch?v=Gs9lJTRZCDc


On Wed, May 13, 2015 at 9:12 AM, Tier Nolan tier.no...@gmail.com wrote:

 I think this is a good way to handle things, but as you say, it is a hard
 fork.

 CHECKLOCKTIMEVERIFY covers many of the use cases, but it would be nice to
 fix malleability once and for all.

 This has the effect of doubling the size of the UTXO database.  At
 minimum, there needs to be a legacy txid to normalized txid map in the
 database.

 An addition to the BIP would eliminate the need for the 2nd index.  You
 could require a SPV proof of the spending transaction to be included with
 legacy transactions.  This would allow clients to verify that the
 normalized txid matched the legacy id.

 The OutPoint would be {LegacyId | SPV Proof to spending tx  | spending tx
 | index}.  This allows a legacy transaction to be upgraded.  OutPoints
 which use a normalized txid don't need the SPV proof.

 The hard fork would be followed by a transitional period, in which both
 txids could be used.  Afterwards, legacy transactions have to have the SPV
 proof added.  This means that old transactions with locktimes years in the
 future can be upgraded for spending, without nodes needing to maintain two
 indexes.






-- 
--
Gavin Andresen


[Bitcoin-development] [BIP] Normalized Transaction IDs

2015-05-13 Thread Christian Decker
Hi All,

I'd like to propose a BIP to normalize transaction IDs in order to address
transaction malleability and facilitate higher level protocols.

The normalized transaction ID is an alias used in parallel to the current
(legacy) transaction IDs to address outputs in transactions. It is
calculated by removing (zeroing) the scriptSig before computing the hash,
which ensures that only data whose integrity is also guaranteed by the
signatures influences the hash. Thus, if anything causes the normalized ID
to change, it automatically invalidates the signature. When validating
transactions, a client supporting this BIP would use both the normalized
tx ID and the legacy tx ID.
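
The normalization rule (zero each input's scriptSig before hashing) can be illustrated with a toy transaction model. The encoding below is a stand-in, not Bitcoin's real serialization; only the zeroing rule is taken from the proposal:

```python
# Sketch: normalized txids are stable under signature malleation because
# the scriptSig is stripped before hashing.
import hashlib
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class TxIn:
    prev_txid: str
    prev_index: int
    script_sig: bytes

@dataclass(frozen=True)
class Tx:
    inputs: tuple   # of TxIn
    outputs: tuple  # of (value, scriptPubKey) pairs

def serialize(tx: Tx) -> bytes:
    """Toy serialization; real Bitcoin encoding differs."""
    parts = []
    for i in tx.inputs:
        parts.append(f"{i.prev_txid}:{i.prev_index}:".encode() + i.script_sig)
    for value, spk in tx.outputs:
        parts.append(f"{value}:{spk}".encode())
    return b"|".join(parts)

def normalized_txid(tx: Tx) -> str:
    """Hash the transaction with every scriptSig zeroed out."""
    stripped = replace(
        tx, inputs=tuple(replace(i, script_sig=b"") for i in tx.inputs))
    return hashlib.sha256(hashlib.sha256(serialize(stripped)).digest()).hexdigest()

# Two encodings of the same signature change the legacy serialization
# but not the normalized id:
tx1 = Tx((TxIn("aa" * 32, 0, b"sig-variant-1"),), ((50, "pkA"),))
tx2 = Tx((TxIn("aa" * 32, 0, b"sig-variant-2"),), ((50, "pkA"),))
print(normalized_txid(tx1) == normalized_txid(tx2))  # True
```

Since only signature-covered data feeds the normalized hash, any change to that hash necessarily invalidates the signatures, which is the property the proposal relies on.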

The detailed writeup can be found here:
https://github.com/cdecker/bips/blob/normalized-txid/bip-00nn.mediawiki.

@gmaxwell: I'd like to request a BIP number, unless there is something
really wrong with the proposal.

In addition to being a simple alternative that solves transaction
malleability it also hugely simplifies higher level protocols. We can now
use template transactions upon which sequences of transactions can be built
before signing them.

I hesitated quite a while to propose it since it does require a hardfork
(old clients would not find the prevTx identified by the normalized
transaction ID and deem the spending transaction invalid), but it seems
that hardforks are no longer the dreaded boogeyman nobody talks about.
I left out the details of how the hardfork is to be done, as it does not
really matter and we may have a good mechanism to apply a bunch of
hardforks concurrently in the future.

I'm sure it'll take time to implement and upgrade, but I think it would be
a nice addition to the functionality and would solve a long standing
problem :-)

Please let me know what you think, the proposal is definitely not set in
stone at this point and I'm sure we can improve it further.

Regards,
Christian


Re: [Bitcoin-development] Long-term mining incentives

2015-05-13 Thread Gavin
Checkpoints will be replaced by compiled-in 'at THIS timestamp the main chain 
had THIS much proof of work.'

That is enough information to prevent attacks and still allow optimizations 
like skipping signature checking for ancient transactions.

I don't think anybody is proposing replacing checkpoints with nothing.

--
Gavin Andresen


 On May 13, 2015, at 8:26 AM, Alex Mizrahi alex.mizr...@gmail.com wrote:
 
 Let's consider a concrete example:
 
 1. User wants to accept Bitcoin payments, as his customers want this.
 2. He downloads a recent version of Bitcoin Core, checks hashes and so on. 
 (Maybe even builds from source.)
 3. Lets it sync for several hours or days.
 4. After the wallet is synced, he gives his address to the customer.
 5. Customer pays. 
 6. User waits 10 confirmations and ships the goods. (Suppose it's something 
 very expensive.)
 7. Some time later, user wants to convert some of his bitcoins to dollars. He 
 sends his bitcoins to an exchange but they never arrive.
 
 He tries to investigate, and after some time discovers that his router (or 
 his ISP's router) was hijacked. His Bitcoin node couldn't connect to any of 
 the legitimate nodes, and thus got a complete fake chain from the attacker.
 Bitcoins he received were totally fake.
 
 Bitcoin Core did a shitty job and confirmed some fake transactions.
 User doesn't care that if his network was not impaired, Bitcoin Core would 
 have worked properly.
 The main duty of Bitcoin Core is to check whether transactions are confirmed, 
 and if it can be fooled by a simple router hack, then it does its job poorly.
 
 If you don't see it being a problem, you shouldn't be allowed to develop 
 anything security-related.
 
 If a node is connected to 99 dishonest nodes and 1 honest node, it can still 
 sync with the main network.
 
 Yes, it is good against Sybil attack, but not good against a network-level 
 attack.
 An attack on users' routers is a very realistic, plausible attack.
 If SSL could be defeated by hacking a router, would people still use it?
 
 Fucking no.
   
 A 3 month reversal would be devastating, so the checkpoint isn't adding much 
 extra security.
 
 WIthout checkpoints an attacker could prepare a fork for $10.
 With checkpoints, it would cost him at least $1000, but more likely upwards 
 of $10.
 That's quite a difference, no?
 
 I do not care what you think about the reasons why checkpoints were added; 
 it is a fact that they make the attack scenario I describe above hard to 
 impossible.
 
 Without checkpoints, you could perform this attack using a laptop.
 With checkpoints, you need access to significant amounts of mining ASICs.
 


Re: [Bitcoin-development] Long-term mining incentives

2015-05-13 Thread Alex Mizrahi
 I don't really see how you can protect against total isolation of a node
 (POS or POW). You would need to find an alternative route for the
 information.


Alternative route for the information is the whole point of weak
subjectivity, no?

PoS depends on weak subjectivity to prevent long term reversals, but
using it also prevents total isolation attacks.

The argument that PoW is better than PoS because PoS depends on weak
subjectivity while PoW does not is wrong.
Any practical implementation of PoW will also have to rely on weak
subjectivity to be secure against isolation attack.
And if we have to rely on weak subjectivity anyway, then why not PoS?


 Again, it is part of the security model that you can connect to at least
 one honest node.


This is the security model of PoW-based consensus. If you study
PoW-consensus, then yes, this is the model you have to use.

But people use Bitcoin Core as a piece of software. They do not care what
security model you use, they expect it to work.
If there are realistic scenarios in which it fails, then this must be
documented. Users should be made aware of the problem, should be able to
take preventative measures (e.g. manually check the latest block against
sources they trust), etc.


 The problem is that if everyone uses the same check, then that source can
 be compromised.


Yes, this problem cannot be solved in a 100% decentralized and automatic
way.
Which doesn't mean it's not worth solving, does it?

1. There are non-decentralized, trust-based solutions: refuse to work if
none of well-known nodes are accessible.
Well-known nodes are already used for bootstrapping, and this is another
point which can be attacked.
So if it's impossible to make it 100% decentralized and secure, why not
make it 99% decentralized and secure?

2. It is common practice to check the sha256sum of a package after
downloading it, and this is usually done manually.
Why can't checking block hashes against some trusted source become common
practice as well?
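To illustrate, a minimal Python sketch of that manual cross-check; all hashes and source names below are made up for illustration:

```python
def check_block_hash(local_hash, trusted_hashes):
    """Compare a node's block hash against hashes obtained out-of-band.

    Returns the list of sources that disagree (empty means all agree)."""
    local = local_hash.lower()
    return [src for src, h in trusted_hashes.items() if h.lower() != local]

# Hypothetical values: one "explorer" reports a divergent hash.
local = "00000000000000000a1b2c3d"              # hash from our own node
sources = {
    "explorer-a": "00000000000000000a1b2c3d",   # agrees
    "explorer-b": "00000000000000000badf00d",   # disagrees -> investigate
}
print(check_block_hash(local, sources))  # -> ['explorer-b']
```

In practice the local hash would come from the node itself and the trusted hashes from independent channels (a friend, a block explorer, a newspaper); what matters is that the channels are independent of the node's own network path.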


Also it's worth noting that these security measures are additive.
Isolating a node AND hijacking one of the well-known nodes AND hijacking a
block explorer site the user checks hashes against is exponentially harder
than defeating a single measure.
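Assuming the layers fail independently, "exponentially harder" is just multiplication of per-layer compromise probabilities — a toy sketch:

```python
def combined_compromise_probability(layer_probs):
    """Probability that every independent defense layer fails at once."""
    p = 1.0
    for q in layer_probs:
        p *= q
    return p

# One weak layer vs. three weak layers (10% compromise chance each):
print(combined_compromise_probability([0.1]))             # 0.1
print(combined_compromise_probability([0.1, 0.1, 0.1]))   # ~0.001
```

The independence assumption does all the work here; correlated failures (e.g. one ISP sitting on every path) would collapse the benefit.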


Re: [Bitcoin-development] [BIP] Normalized Transaction IDs

2015-05-13 Thread Christian Decker
On Wed, May 13, 2015 at 8:40 PM Pieter Wuille pieter.wui...@gmail.com
wrote:

 On Wed, May 13, 2015 at 11:04 AM, Christian Decker 
 decker.christ...@gmail.com wrote:

 If the inputs to my transaction have been long confirmed I can be
 reasonably safe in assuming that the transaction hash does not change
 anymore. It's true that I have to be careful not to build on top of
 transactions that use legacy references to transactions that are
 unconfirmed or have few confirmations, however that does not invalidate the
 utility of the normalized transaction IDs.


 Sufficient confirmations help of course, but make systems like this less
 useful for more complex interactions where you have multiple unconfirmed
 transactions waiting on each other. I think being able to rely on this
 problem being solved unconditionally is what makes the proposal attractive.
 For the simple cases, see BIP62.


If we are building a long running contract using a complex chain of
transactions, or multiple transactions that depend on each other, there is
no point in ever using any malleable legacy transaction IDs and I would
simply stop cooperating if you tried. I don't think your argument applies.
If we build our contract using only normalized transaction IDs there is no
way of suffering any losses due to malleability.

The reason I mentioned the confirmation is that all protocols I can think
of start by collaboratively creating a transaction that locks in funds into
a multisig output, that is committed to the blockchain. Starting from this
initial setup transaction would be using normalized transaction IDs,
therefore not be susceptible to malleability.



 I remember reading about the SIGHASH proposal somewhere. It feels really
 hackish to me: It is a substantial change to the way signatures are
 verified, I cannot really see how this is a softfork if clients that did
 not update are unable to verify transactions using that SIGHASH Flag and it
 is adding more data (the normalized hash) to the script, which has to be
 stored as part of the transaction. It may be true that a node observing
 changes in the input transactions of a transaction using this flag could
 fix the problem, however it requires the node's intervention.


 I think you misunderstand the idea. This is related, but orthogonal to the
 ideas about extending the sighash flags that have been discussed here before.

 All it's doing is adding a new CHECKSIG operator to script, which, in its
 internally used signature hash, 1) removes the scriptSigs from transactions
 before hashing 2) replaces the txids in txins by their ntxid. It does not
 add any data to transactions, and it is a softfork, because it only impacts
 scripts which actually use the new CHECKSIG operator. Wallets that don't
 support signing with this new operator would not give out addresses that
 use it.


In that case I don't think I heard this proposal before, and I might be
missing out :-)
So if transaction B spends an output from A, then the input from B contains
the CHECKSIG operator telling the validating client to do what exactly? It
appears that it wants us to go and fetch A, normalize it, put the
normalized hash in the txIn of B and then continue the validation? Wouldn't
that also need a mapping from the normalized transaction ID to the legacy
transaction ID that was confirmed?

A client that did not update still would have no clue on how to handle
these transactions, since it simply does not understand the CHECKSIG
operator. If such a transaction ends up in a block I cannot even catch up
with the network since the transaction does not validate for me.

Could you provide an example of how this works?



 Compare that to the simple and clean solution in the proposal, which does
 not add extra data to be stored, keeps the OP_*SIG* semantics as they are
 and where once you sign a transaction it does not have to be monitored or
 changed in order to be valid.


 OP_*SIG* semantics don't change here either, we're just adding a superior
 opcode (which in most ways behaves the same as the existing operators). I
 agree with the advantage of not needing to monitor transactions afterwards
 for malleated inputs, but I think you underestimate the deployment costs.
 If you want to upgrade the world (eventually, after the old index is
 dropped, which is IMHO the only point where this proposal becomes superior
 to the alternatives) to this, you're changing *every single piece of
 Bitcoin software on the planet*. This is not just changing some validation
 rules that are opt-in to use, you're fundamentally changing how
 transactions refer to each other.


As I mentioned before, this is a really long term strategy, hoping to get
the cleanest and easiest solution, so that we do not further complicate the
inner workings of Bitcoin. I don't think that it is completely out of
question to eventually upgrade to use normalized transactions, after all
the average lifespan of hardware is a few years tops.



 Also, what do blocks 

Re: [Bitcoin-development] [BIP] Normalized Transaction IDs

2015-05-13 Thread Pieter Wuille
On Wed, May 13, 2015 at 1:27 PM, Tier Nolan tier.no...@gmail.com wrote:

 After more thought, I think I came up with a clearer description of the
 recursive version.

 The simple definition is that the hash for the new signature opcode should
 simply assume that the normalized txid system was used since the
 beginning.  All txids in the entire blockchain should be replaced with the
 correct values.

 This requires a full re-index of the blockchain.  You can't work out what
 the TXID-N of a transaction is without knowing the TXID-N of its parents,
 in order to do the replacement.

 The non-recursive version can only handle refunds one level deep.


This was what I was suggesting all along, sorry if I wasn't clear.

-- 
Pieter


Re: [Bitcoin-development] [BIP] Normalized Transaction IDs

2015-05-13 Thread Tier Nolan
After more thought, I think I came up with a clearer description of the
recursive version.

The simple definition is that the hash for the new signature opcode should
simply assume that the normalized txid system was used since the
beginning.  All txids in the entire blockchain should be replaced with the
correct values.

This requires a full re-index of the blockchain.  You can't work out what
the TXID-N of a transaction is without knowing the TXID-N of its parents,
in order to do the replacement.

The non-recursive version can only handle refunds one level deep.

A:
from: IN
sigA: based on hash(...)

B:
from A
sig: based on hash(from: TXID-N(A) | )  // sig removed

C:
from B
sig: based on hash(from: TXID-N(B) | )  // sig removed

If A is mutated before being added into the chain, then B can be modified
to a valid transaction (B-new).

A-mutated:
from: IN
sig_mutated: based on hash(...) with some mutation

B has to be modified to B-new to make it valid.

B-new:
from A-mutated
sig: based on hash(from: TXID-N(A-mutated), )

Since TXID-N(A-mutated) is equal to TXID-N(A), the signature from B is
still valid.

However, C-new cannot be created.

C-new:
from B-new
sig: based on hash(from: TXID-N(B-new), )

TXID-N(B-new) is not the same as TXID-N(B).  Since the from field is not
removed by the TXID-N operation, differences in that field mean that the
TXIDs are different.

This means that the signature for C is not valid for C-new.

The recursive version repairs this problem.

Rather than simply deleting the scriptSig from the transaction, all txids
must also be replaced with their TXID-N versions.

Again, A is mutated before being added into the chain and B-new is produced.

A-mutated:
from: IN
sig_mutated: based on hash(...) with some mutation
TXID-N: TXID-N(A)

B has to be modified to B-new to make it valid.

B-new:
from A-mutated
sig: based on hash(from: TXID-N(A-mutated), )
TXID-N: TXID-N(B)

Since TXID-N(A-mutated) is equal to TXID-N(A), the signature from B is
still valid.

Likewise the TXID-N(B-new) is equal to TXID-N(B).

The from field is replaced by the TXID-N from A-mutated which is equal to
TXID-N(A) and the sig is the same.

C-new:
from B-new
sig: based on hash(from: TXID-N(B-new), )

The signature is still valid, since TXID-N(B-new) is the same as TXID-N(B).

This means that multi-level refunds are possible.
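The recursive construction above can be sketched in Python with a toy transaction format (dicts instead of real serialization; the field layout and string ids are illustrative assumptions, not the proposal's actual encoding):

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def normalized_txid(tx, ntxid_cache):
    """TXID-N sketch: drop scriptSigs and replace each input's legacy txid
    with the parent's TXID-N (looked up in ntxid_cache) before hashing.
    `tx` is a toy dict; real transactions use binary serialization."""
    stripped_inputs = [
        (ntxid_cache[parent_txid], vout)           # recursive replacement
        for parent_txid, vout, _scriptsig in tx["inputs"]
    ]
    return sha256d(repr((stripped_inputs, tx["outputs"])).encode()).hex()

# Base case: treat the coinbase reference as already normalized.
cache = {"coinbase": "coinbase"}

# A and A-mutated differ only in the signature, so TXID-N is unchanged.
a     = {"inputs": [("coinbase", 0, "sigA")],  "outputs": ["to Alice"]}
a_mut = {"inputs": [("coinbase", 0, "sigA'")], "outputs": ["to Alice"]}
assert normalized_txid(a, cache) == normalized_txid(a_mut, cache)

# Both legacy ids of A map to the same TXID-N, so B (and hence C) survive
# the mutation: the multi-level refund case described above.
cache["txid(A)"] = cache["txid(A-mutated)"] = normalized_txid(a, cache)
b     = {"inputs": [("txid(A)", 0, "sigB")],         "outputs": ["to Bob"]}
b_new = {"inputs": [("txid(A-mutated)", 0, "sigB")], "outputs": ["to Bob"]}
assert normalized_txid(b, cache) == normalized_txid(b_new, cache)
```

The cache stands in for the full re-index mentioned above: every legacy txid must already have its TXID-N computed before a child can be normalized.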


Re: [Bitcoin-development] [BIP] Normalized Transaction IDs

2015-05-13 Thread Pieter Wuille
On Wed, May 13, 2015 at 11:04 AM, Christian Decker 
decker.christ...@gmail.com wrote:

 If the inputs to my transaction have been long confirmed I can be
 reasonably safe in assuming that the transaction hash does not change
 anymore. It's true that I have to be careful not to build on top of
 transactions that use legacy references to transactions that are
 unconfirmed or have few confirmations, however that does not invalidate the
 utility of the normalized transaction IDs.


Sufficient confirmations help of course, but make systems like this less
useful for more complex interactions where you have multiple unconfirmed
transactions waiting on each other. I think being able to rely on this
problem being solved unconditionally is what makes the proposal attractive.
For the simple cases, see BIP62.

I remember reading about the SIGHASH proposal somewhere. It feels really
 hackish to me: It is a substantial change to the way signatures are
 verified, I cannot really see how this is a softfork if clients that did
 not update are unable to verify transactions using that SIGHASH Flag and it
 is adding more data (the normalized hash) to the script, which has to be
 stored as part of the transaction. It may be true that a node observing
 changes in the input transactions of a transaction using this flag could
 fix the problem, however it requires the node's intervention.


I think you misunderstand the idea. This is related, but orthogonal to the
ideas about extending the sighash flags that have been discussed here before.

All it's doing is adding a new CHECKSIG operator to script, which, in its
internally used signature hash, 1) removes the scriptSigs from transactions
before hashing 2) replaces the txids in txins by their ntxid. It does not
add any data to transactions, and it is a softfork, because it only impacts
scripts which actually use the new CHECKSIG operator. Wallets that don't
support signing with this new operator would not give out addresses that
use it.


 Compare that to the simple and clean solution in the proposal, which does
 not add extra data to be stored, keeps the OP_*SIG* semantics as they are
 and where once you sign a transaction it does not have to be monitored or
 changed in order to be valid.


OP_*SIG* semantics don't change here either, we're just adding a superior
opcode (which in most ways behaves the same as the existing operators). I
agree with the advantage of not needing to monitor transactions afterwards
for malleated inputs, but I think you underestimate the deployment costs.
If you want to upgrade the world (eventually, after the old index is
dropped, which is IMHO the only point where this proposal becomes superior
to the alternatives) to this, you're changing *every single piece of
Bitcoin software on the planet*. This is not just changing some validation
rules that are opt-in to use, you're fundamentally changing how
transactions refer to each other.

Also, what do blocks commit to? Do you keep using the old transaction ids
for this? Because if you don't, any relayer on the network can invalidate a
block (and have the receiver mark it as invalid) by changing the txids. You
need to somehow commit to the scriptSig data in blocks still so the POW of
a block is invalidated by changing a scriptSig.

There certainly are merits using the SIGHASH approach in the short term (it
 does not require a hard fork), however I think the normalized transaction
 ID is a cleaner and simpler long-term solution, even though it requires a
 hard-fork.


It requires a hard fork, but more importantly, it requires the whole world
to change their software (not just validation code) to effectively use it.
That, plus large up-front deployment costs (doubling the cache size for
every full node for the same propagation speed is not a small thing) which
may not end up being effective.

-- 
Pieter


Re: [Bitcoin-development] Long-term mining incentives

2015-05-13 Thread Pedro Worcel
Thank you for your response, that does make sense. It's going to be
interesting to follow what is going to happen!

2015-05-14 3:41 GMT+12:00 Gavin Andresen gavinandre...@gmail.com:

 On Tue, May 12, 2015 at 7:48 PM, Adam Back a...@cypherspace.org wrote:

 I think its fair to say no one knows how to make a consensus that
 works in a decentralised fashion that doesnt weaken the bitcoin
 security model without proof-of-work for now.


 Yes.


 I am presuming Gavin is just saying in the context of not pre-judging
 the future that maybe in the far future another innovation might be
 found (or alternatively maybe its not mathematically possible).


 Yes... or an alternative might be found that weakens the Bitcoin security
 model by a small enough amount that it either doesn't matter or the
 weakening is vastly overwhelmed by some other benefit.

 I'm influenced by the way the Internet works; packets addressed to
 74.125.226.67 reliably get to Google through a very decentralized system
 that I'll freely admit I don't understand. Yes, a determined attacker can
 re-route packets, but layers of security on top means re-routing packets
 isn't enough to pull off profitable attacks.

 I think Bitcoin's proof-of-work might evolve in a similar way. Yes, you
 might be able to 51% attack the POW, but layers of security on top of POW
 will mean that won't be enough to pull off profitable attacks.


 --
 --
 Gavin Andresen




Re: [Bitcoin-development] [BIP] Normalized Transaction IDs

2015-05-13 Thread Pieter Wuille
On Wed, May 13, 2015 at 12:14 PM, Christian Decker 
decker.christ...@gmail.com wrote:


 On Wed, May 13, 2015 at 8:40 PM Pieter Wuille pieter.wui...@gmail.com
 wrote:

 On Wed, May 13, 2015 at 11:04 AM, Christian Decker 
 decker.christ...@gmail.com wrote:

 If the inputs to my transaction have been long confirmed I can be
 reasonably safe in assuming that the transaction hash does not change
 anymore. It's true that I have to be careful not to build on top of
 transactions that use legacy references to transactions that are
 unconfirmed or have few confirmations, however that does not invalidate the
 utility of the normalized transaction IDs.


 Sufficient confirmations help of course, but make systems like this less
 useful for more complex interactions where you have multiple unconfirmed
 transactions waiting on each other. I think being able to rely on this
 problem being solved unconditionally is what makes the proposal attractive.
 For the simple cases, see BIP62.


 If we are building a long running contract using a complex chain of
 transactions, or multiple transactions that depend on each other, there is
 no point in ever using any malleable legacy transaction IDs and I would
 simply stop cooperating if you tried. I don't think your argument applies.
 If we build our contract using only normalized transaction IDs there is no
 way of suffering any losses due to malleability.


That's correct as long as you stay within your contract, but you likely
want compatibility with other software, without waiting an age before and
after your contract settles on the chain. It's a weaker argument, though, I
agree.

I remember reading about the SIGHASH proposal somewhere. It feels really
 hackish to me: It is a substantial change to the way signatures are
 verified, I cannot really see how this is a softfork if clients that did
 not update are unable to verify transactions using that SIGHASH Flag and it
 is adding more data (the normalized hash) to the script, which has to be
 stored as part of the transaction. It may be true that a node observing
 changes in the input transactions of a transaction using this flag could
 fix the problem, however it requires the node's intervention.


 I think you misunderstand the idea. This is related, but orthogonal to
 the ideas about extending the sighash flags that have been discussed here
 before.

 All it's doing is adding a new CHECKSIG operator to script, which, in its
 internally used signature hash, 1) removes the scriptSigs from transactions
 before hashing 2) replaces the txids in txins by their ntxid. It does not
 add any data to transactions, and it is a softfork, because it only impacts
 scripts which actually use the new CHECKSIG operator. Wallets that don't
 support signing with this new operator would not give out addresses that
 use it.


 In that case I don't think I heard this proposal before, and I might be
 missing out :-)
 So if transaction B spends an output from A, then the input from B
 contains the CHECKSIG operator telling the validating client to do what
 exactly? It appears that it wants us to go and fetch A, normalize it, put
 the normalized hash in the txIn of B and then continue the validation?
 Wouldn't that also need a mapping from the normalized transaction ID to the
 legacy transaction ID that was confirmed?


There would just be an OP_CHECKAWESOMESIG, which can do anything. It can be
identical to how OP_CHECKSIG works now, but with a changed signature hash
algorithm. Optionally (and likely in practice, I think), it
can do various other proposed improvements, like using Schnorr signatures,
having a smaller signature encoding, supporting batch validation, have
extended sighash flags, ...

It wouldn't fetch A and normalize it; that's impossible as you would need
to go fetch all of A's dependencies too and recurse until you hit the
coinbases that produced them. Instead, your UTXO set contains the
normalized txid for every normal txid (which adds around 26% to the UTXO
set size now), but lookups in it remain only by txid.

You don't need a ntxid-txid mapping, as transactions and blocks keep
referring to transactions by txid. Only the OP_CHECKAWESOMESIG operator
would do the conversion, and at most once.
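A toy sketch of that arrangement — UTXO entries keyed by legacy txid but carrying the normalized id for the new operator's sighash (all names and values here are invented for illustration):

```python
# UTXO set still keyed by legacy txid; each entry carries the ntxid,
# so blocks and transactions keep referring to transactions by txid.
utxo_set = {
    "txid_A": {"ntxid": "ntxid_A", "outputs": {0: 50_000_000}},
}

def sighash_txid_substitute(spent_txid):
    """What the new CHECKSIG operator would hash for an input spending
    spent_txid: the normalized id looked up from the UTXO set, not the
    legacy id that appears in the transaction itself."""
    return utxo_set[spent_txid]["ntxid"]

print(sighash_txid_substitute("txid_A"))  # -> ntxid_A
```

The lookup stays by txid; only the signature-hash computation substitutes the ntxid, which is why no ntxid-to-txid mapping is needed.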

A client that did not update still would have no clue on how to handle
 these transactions, since it simply does not understand the CHECKSIG
 operator. If such a transaction ends up in a block I cannot even catch up
 with the network since the transaction does not validate for me.


As for every softfork, it works by redefining an OP_NOP operator, so old
nodes simply consider these checksigs unconditionally valid. That does mean
you don't want to use them before the consensus rule is forked in
(=enforced by a majority of the hashrate), and that you suffer from the
temporary security reduction that an old full node is unknowingly reduced
to SPV security for these opcodes. However, as full node wallet, this
problem does not affect 

Re: [Bitcoin-development] [BIP] Normalized Transaction IDs

2015-05-13 Thread Tier Nolan
On Wed, May 13, 2015 at 9:31 PM, Pieter Wuille pieter.wui...@gmail.com
wrote:


 This was what I was suggesting all along, sorry if I wasn't clear.


That's great.  So, basically the multi-level refund problem is solved by
this?


Re: [Bitcoin-development] Long-term mining incentives

2015-05-13 Thread Jorge Timón
On Mon, May 11, 2015 at 7:29 PM, Gavin Andresen gavinandre...@gmail.com wrote:
 I think long-term the chain will not be secured purely by proof-of-work. I
 think when the Bitcoin network was tiny running solely on people's home
 computers proof-of-work was the right way to secure the chain, and the only
 fair way to both secure the chain and distribute the coins.

 See https://gist.github.com/gavinandresen/630d4a6c24ac6144482a  for some
 half-baked thoughts along those lines. I don't think proof-of-work is the
 last word in distributed consensus (I also don't think any alternatives are
 anywhere near ready to deploy, but they might be in ten years).

Or never, nobody knows at this point.

 I also think it is premature to worry about what will happen in twenty or
 thirty years when the block subsidy is insignificant. A lot will happen in
 the next twenty years. I could spin a vision of what will secure the chain
 in twenty years, but I'd put a low probability on that vision actually
 turning out to be correct.

I think it is very healthy to worry about that, since we know it's
something that will happen.
The system should work without subsidies.

 That is why I keep saying Bitcoin is an experiment. But I also believe that
 the incentives are correct, and there are a lot of very motivated, smart,
 hard-working people who will make it work. When you're talking about trying
 to predict what will happen decades from now, I think that is the best you
 can (honestly) do.

Lightning payment channels may be a new idea, but payment channels are
not, and nobody is using them.
They are the best solution to scalability we have right now,
increasing the block size is simply not a solution, it's just kicking
the can down the road (while reducing the incentives to deploy real
solutions like payment channels).

Not worrying about 10 years in the future but asking people to trust
estimates and speculations about how everything will burn in 2 years
if we don't act right now seems pretty arbitrary to me.
One could just as well argue that there's smart hard-working people
that will solve those problems before they hit us.

It is true that the more distant the future you're trying to predict
is, the more difficult it is to predict, but any threshold that
separates relevant worries from those too far in the future to worry
about will always be arbitrary.
Fortunately we don't need to all share the same time horizon for what
is worrying and what is not.
What we need is a clear criterion for what is acceptable for a
hardfork and a general plan to deploy them:

-Do all the hardfork changes need to be uncontroversial? How do we
define uncontroversial?
-Should we maintain and test implementations of hardfork wishes that
seem too small to justify a hardfork on their own (e.g. the time travel
fix, allowing input values to be signed...) so they can be deployed at
the same time as other, more necessary hardforks?

I agree that hardforks shouldn't be impossible and in that sense I'm
glad that you started the hardfork debate, but I believe we should be
focusing on that debate rather than the block size one.
Once we have a clear criterion, hopefully the block size debate will
become less noisy and more productive.



Re: [Bitcoin-development] Long-term mining incentives

2015-05-13 Thread Gavin Andresen
On Tue, May 12, 2015 at 7:48 PM, Adam Back a...@cypherspace.org wrote:

 I think its fair to say no one knows how to make a consensus that
 works in a decentralised fashion that doesnt weaken the bitcoin
 security model without proof-of-work for now.


Yes.


 I am presuming Gavin is just saying in the context of not pre-judging
 the future that maybe in the far future another innovation might be
 found (or alternatively maybe its not mathematically possible).


Yes... or an alternative might be found that weakens the Bitcoin security
model by a small enough amount that it either doesn't matter or the
weakening is vastly overwhelmed by some other benefit.

I'm influenced by the way the Internet works; packets addressed to
74.125.226.67 reliably get to Google through a very decentralized system
that I'll freely admit I don't understand. Yes, a determined attacker can
re-route packets, but layers of security on top means re-routing packets
isn't enough to pull off profitable attacks.

I think Bitcoin's proof-of-work might evolve in a similar way. Yes, you
might be able to 51% attack the POW, but layers of security on top of POW
will mean that won't be enough to pull off profitable attacks.


-- 
--
Gavin Andresen


Re: [Bitcoin-development] [BIP] Normalized Transaction IDs

2015-05-13 Thread Tier Nolan
On Wed, May 13, 2015 at 4:24 PM, Christian Decker 
decker.christ...@gmail.com wrote

 It does and I should have mentioned it in the draft, according to my
 calculations a mapping legacy ID -> normalized ID is about 256 MB in size,
 or at least it was at height 330'000, things might have changed a bit and
 I'll recompute that. I omitted the deprecation of legacy IDs on purpose
 since we don't know whether we will migrate completely or keep both
 options viable.


There are around 20 million UTXOs.  At 2*32 bytes per entry, that is more
than 1GB.  There are more UTXOs than transactions, but 256MB seems a little
low.
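Tier Nolan's estimate is easy to sanity-check with back-of-envelope arithmetic. A sketch, using the figures from the thread (20 million UTXOs, two 32-byte hashes per entry); counting per UTXO rather than per transaction makes this an upper bound, since many UTXOs share a transaction:

```python
# Rough size of a legacy-txid -> normalized-txid map, using the thread's
# figures: ~20 million UTXOs, two 32-byte hashes per entry.
ENTRIES = 20_000_000
BYTES_PER_ENTRY = 2 * 32  # legacy txid + normalized txid

total_bytes = ENTRIES * BYTES_PER_ENTRY
total_gib = total_bytes / 2**30
print(f"{total_bytes:,} bytes = {total_gib:.2f} GiB")  # more than 1 GB, as stated
```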

I think both IDs can be used in the merkle tree, since we lookup an ID in
 both indices we can use both to address them and we will find them either
 way.


The id that is used to sign should be used in the merkle tree.  The hard
fork should simply be to allow transactions that use the normalized
transaction hash.


 As for the opcodes I'll have to check, but I currently don't see how they
 could be affected.


Agreed, the transaction is simply changed and all the standard rules apply.


 We can certainly split the proposal should it get too large, for now it
 seems manageable, since opcodes are not affected.


Right, it is just a database update.  The undo info also needs to be changed
so that both txids are included.


 Bloom-filtering is resolved by adding the normalized transaction IDs and
 checking for both IDs in the filter.


Yeah, if a transaction spends with a legacy txid, it should still match if
the normalized txid is included in the filter.
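The dual-ID matching rule can be illustrated with a toy sketch: a filter that tracks both IDs of a watched transaction matches a spend regardless of which ID the spender referenced. A plain set stands in for a real BIP 37 bloom filter, and the names are illustrative:

```python
# Toy illustration of dual-ID bloom matching: the wallet inserts both the
# legacy and normalized txid of each watched transaction, so a spend is
# matched whichever ID it uses to reference the output.

def make_filter(watched):
    """watched: iterable of (legacy_id, normalized_id) pairs."""
    f = set()  # stand-in for a BIP 37 bloom filter
    for legacy_id, normalized_id in watched:
        f.add(legacy_id)
        f.add(normalized_id)
    return f

def matches(filter_set, referenced_id):
    return referenced_id in filter_set

flt = make_filter([("legacy-aa", "norm-aa")])
assert matches(flt, "legacy-aa")   # spend referencing the legacy id
assert matches(flt, "norm-aa")     # spend referencing the normalized id
```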

 Since you mention bundling the change with other changes that require a
hard-fork it might be a good idea to build a separate proposal for a
generic hard-fork rollout mechanism.

That would be useful.  On the other hand, we don't want to make them too
easy.

I think this is a good choice for a hard fork test, since it is
uncontroversial.  With a time machine, it would have been done this way at
the start.

What about the following:

The reference client is updated so that it uses version 2 transactions by
default (but it can be changed by user).  A pop-up could appear for the GUI.

There is no other change.

All transactions in blocks 375000 to 385000 are considered votes and
weighted by bitcoin days destroyed (max 60 days).

If at least 75% of the transactions by weight are version 2, then the community
is considered to support the hard fork.

There would need to be a way to protect against miners censoring
transactions/votes.

Users could submit their transactions directly to a p2p tallying system.
The coin would be aged based on the age in block 375000 unless included in
the blockchain.  These votes don't need to be ordered and multiple votes
for the same coin would only count once.

In fact, votes could just be based on holding in block X.

This is an opinion poll rather than a referendum though.
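A sketch of the tally rule described above; the field names and the exact bitcoin-days-destroyed formula are illustrative assumptions, not a specification:

```python
# Sketch of the proposed vote: each transaction is weighted by bitcoin days
# destroyed (coin amount times age in days, capped at 60), and the fork is
# considered supported if version-2 transactions carry at least 75% of the
# total weight.

MAX_AGE_DAYS = 60
THRESHOLD = 0.75

def weight(tx):
    return sum(amount * min(age_days, MAX_AGE_DAYS)
               for amount, age_days in tx["inputs"])

def fork_supported(txs):
    total = sum(weight(tx) for tx in txs)
    v2 = sum(weight(tx) for tx in txs if tx["version"] >= 2)
    return total > 0 and v2 / total >= THRESHOLD

txs = [
    {"version": 2, "inputs": [(10.0, 90)]},   # weight 10 * 60 = 600 (capped)
    {"version": 1, "inputs": [(5.0, 30)]},    # weight 5 * 30 = 150
]
assert fork_supported(txs)  # 600 / 750 = 80%, above the 75% threshold
```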

Assuming support of the community, the hard fork can then proceed in a
similar way to the way a soft fork does.

Devs update the reference client to produce version 4 blocks and version 3
transactions.  Miners could watch version 3 transactions to gauge user
interest and use that to help decide if they should update.

If 750 of the last 1000 blocks are version 4 or higher, reject blocks with
transactions of less than version 3 in version 4 blocks

This means that legacy clients will be slow to confirm their
transactions, since their transactions cannot go into version 4 blocks.
This is encouragement to upgrade.

If 950 of the last 1000 blocks are version 4 or higher, reject blocks with
transactions of less than version 3 in all blocks

This means that legacy nodes can no longer send transactions but can
still receive.  Transactions received from other legacy nodes would remain
unconfirmed.

If 990 of the last 1000 blocks are version 4 or higher, reject version 3 or
lower blocks

This is the point of no return.  Rejecting version 3 blocks means that
the next rule is guaranteed to activate within the next 2016 blocks.
Legacy nodes remain on the main chain, but cannot send.  Miners mining with
legacy clients are (soft) forked off the chain.

If 1000 of the last 1000 blocks are version 4 or higher and the difficulty
retarget has just happened, activate hard fork rule

This hard forks legacy nodes off the chain.  99% of miners support this
change and users have been encouraged to update.  The block rate for the
non-forked chain is at most 1% of normal. Blocks happen every 16 hours.
By timing activation after a difficulty retarget, it makes it harder for
the other fork to adapt to the reduced hash rate.
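The staged thresholds can be expressed compactly. This sketch uses paraphrased rule descriptions (not consensus-rule text) and also checks the roughly-16-hour block interval implied by a 1% hash rate:

```python
# Staged activation thresholds from the proposal above: each rule engages
# once enough of the last 1000 blocks are version 4 or higher.
STAGES = [
    (750,  "reject v<3 transactions in v4 blocks"),
    (950,  "reject v<3 transactions in all blocks"),
    (990,  "reject v<=3 blocks"),
    (1000, "activate hard fork rule after next retarget"),
]

def active_stages(last_1000_block_versions):
    v4_count = sum(1 for v in last_1000_block_versions if v >= 4)
    return [desc for threshold, desc in STAGES if v4_count >= threshold]

# With 960 of the last 1000 blocks at v4, the first two stages are active.
assert len(active_stages([4] * 960 + [3] * 40)) == 2

# If 99% of hash power leaves, the legacy fork's block interval grows from
# 10 minutes to roughly 16.7 hours until it can retarget.
legacy_interval_hours = (600 / 0.01) / 3600
assert round(legacy_interval_hours, 1) == 16.7
```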



Re: [Bitcoin-development] [BIP] Normalized Transaction IDs

2015-05-13 Thread Christian Decker
Glad you like it, I was afraid that I missed something obvious :-)

The points the two of you raised are valid and I will address them as soon
as possible. I certainly will implement this proposal so that it becomes
more concrete, but my C++ is a bit rusty and it'll take some time, so I
wanted to gauge interest first.

 This has the effect of doubling the size of the UTXO database.  At
minimum, there needs to be a legacy txid to normalized txid map in the
database.

 An addition to the BIP would eliminate the need for the 2nd index.  You
could require a SPV proof of the spending transaction to be included with
legacy transactions.  This would allow clients to verify that the
normalized txid matched the legacy id.

The OutPoint would be {LegacyId | SPV Proof to spending tx  | spending tx
| index}.  This allows a legacy transaction to be upgraded.  OutPoints
which use a normalized txid don't need the SPV proof.
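A sketch of the extended OutPoint from the quoted proposal; the field names and types are illustrative, not a wire format:

```python
# Extended OutPoint sketch: a spend referencing a legacy txid carries an SPV
# (merkle) proof plus the spent transaction so a verifier can recompute the
# normalized id; a spend referencing a normalized txid needs neither.

from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class OutPoint:
    txid: bytes                          # legacy or normalized transaction id
    index: int                           # output index being spent
    spv_proof: Optional[List[bytes]] = None  # merkle branch, legacy spends only
    spent_tx: Optional[bytes] = None         # full spent tx, legacy spends only

    def is_legacy(self) -> bool:
        return self.spv_proof is not None

legacy = OutPoint(b"\xaa" * 32, 0, spv_proof=[b"\xbb" * 32], spent_tx=b"...")
normalized = OutPoint(b"\xcc" * 32, 1)
assert legacy.is_legacy() and not normalized.is_legacy()
```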

It does and I should have mentioned it in the draft, according to my
calculations a mapping legacy ID -> normalized ID is about 256 MB in size,
or at least it was at height 330'000, things might have changed a bit and
I'll recompute that. I omitted the deprecation of legacy IDs on purpose
since we don't know whether we will migrate completely or keep both
options viable.

 I think this needs more details before it gets a BIP number; for example,
which opcodes does this affect, and how, exactly, does it affect them? Is
the merkle root in the block header computed using normalized transaction
ids or legacy ids?

I think both IDs can be used in the merkle tree, since we lookup an ID in
both indices we can use both to address them and we will find them either
way.

As for the opcodes I'll have to check, but I currently don't see how they
could be affected. The OP_*SIG* codes calculate their own (more
complicated) stripped transaction before hashing and checking the
signature. The input of the stripped transaction simply contains whatever
hash was used to reference the output, so we do not replace IDs during the
operation. The stripped format used by OP_*SIG* operations does not have to
adhere to the hashes used to reference a transaction in the input.

 I think there might actually be two or three or four BIPs here:

  + Overall what is trying to be accomplished
  + Changes to the OP_*SIG* opcodes
  + Changes to the bloom-filtering SPV support
  + ...eventually, hard fork rollout plan

 I also think that it is a good idea to have actually implemented a
proposal before getting a BIP number. At least, I find that actually
writing the code often turns up issues I hadn't considered when thinking
about the problem at a high level. And I STRONGLY believe BIPs should be
descriptive ("here is how this thing works") not prescriptive ("here's how
I think we should all do it").

We can certainly split the proposal should it get too large, for now it
seems manageable, since opcodes are not affected. Bloom-filtering is
resolved by adding the normalized transaction IDs and checking for both IDs
in the filter. Since you mention bundling the change with other changes
that require a hard-fork it might be a good idea to build a separate
proposal for a generic hard-fork rollout mechanism.

If there are no obvious roadblocks and the change seems generally a good
thing I will implement it in Bitcoin Core :-)

Regards,
Chris

On Wed, May 13, 2015 at 3:44 PM Gavin Andresen gavinandre...@gmail.com
wrote:

 I think this needs more details before it gets a BIP number; for example,
 which opcodes does this affect, and how, exactly, does it affect them? Is
 the merkle root in the block header computed using normalized transaction
 ids or legacy ids?

 I think there might actually be two or three or four BIPs here:

  + Overall what is trying to be accomplished
  + Changes to the OP_*SIG* opcodes
  + Changes to the bloom-filtering SPV support
  + ...eventually, hard fork rollout plan

 I also think that it is a good idea to have actually implemented a
 proposal before getting a BIP number. At least, I find that actually
 writing the code often turns up issues I hadn't considered when thinking
 about the problem at a high level. And I STRONGLY believe BIPs should be
 descriptive ("here is how this thing works") not prescriptive ("here's how
 I think we should all do it").

 Finally: I like the idea of moving to a normalized txid. But it might make
 sense to bundle that change with a bigger change to OP_CHECKSIG; see Greg
 Maxwell's excellent talk about his current thoughts on that topic:
   https://www.youtube.com/watch?v=Gs9lJTRZCDc


 On Wed, May 13, 2015 at 9:12 AM, Tier Nolan tier.no...@gmail.com wrote:

 I think this is a good way to handle things, but as you say, it is a hard
 fork.

 CHECKLOCKTIMEVERIFY covers many of the use cases, but it would be nice to
 fix malleability once and for all.

 This has the effect of doubling the size of the UTXO database.  At
 minimum, there needs to be a legacy txid to normalized txid 

Re: [Bitcoin-development] Long-term mining incentives

2015-05-13 Thread Pieter Wuille
On Wed, May 13, 2015 at 5:48 PM, Aaron Voisine vois...@gmail.com wrote:

 We have $3billion plus of value in this system to defend. The safe,
 conservative course is to increase the block size. Miners already have an
 incentive to find ways to encourage higher fees  and we can help them with
 standard recommended propagation rules and hybrid priority/fee transaction
 selection for blocks that increases confirmation delays for low fee
 transactions.


You may find that the most economical solution, but I can't understand how
you can call it conservative.

Suggesting a hard fork is betting the survival of the entire ecosystem on
the bet that everyone will agree with and upgrade to new suggested software
before a flag date.

-- 
Pieter


Re: [Bitcoin-development] Long-term mining incentives

2015-05-13 Thread Aaron Voisine
 by people and businesses deciding to not use on-chain settlement.

I completely agree. Increasing fees will cause people to voluntarily economize
on blockspace by finding alternatives, i.e. not bitcoin. A fee however is a
known, upfront cost... unpredictable transaction failure in most cases will
be a far higher, unacceptable cost to the user than the actual fee. The
higher the costs of using the system, the lower the adoption as a
store-of-value. The lower the adoption as store-of-value, the lower the
price, and the lower the value of bitcoin to the world.

 That only measures miner adoption, which is the least relevant.

I concede the point. Perhaps a flag date based on previous observation of
network upgrade rates with a conservative additional margin in addition to
supermajority of mining power.


Aaron Voisine
co-founder and CEO
breadwallet.com

On Wed, May 13, 2015 at 6:19 PM, Pieter Wuille pieter.wui...@gmail.com
wrote:

 On Wed, May 13, 2015 at 6:13 PM, Aaron Voisine vois...@gmail.com wrote:

 Conservative is a relative term. Dropping transactions in a way that is
 unpredictable to the sender sounds incredibly drastic to me. I'm suggesting
 increasing the blocksize, drastic as it is, is the more conservative choice.


 Transactions are already being dropped, in a more indirect way: by people
 and businesses deciding to not use on-chain settlement. That is very sad,
 but it's completely inevitable that there is space for some use cases and
 not for others (at whatever block size). It's only a "things don't fit
 anymore" problem when you see on-chain transactions as the only means for doing
 payments, and that is already not the case. Increasing the block size
 allows for more utility on-chain, but it does not fundamentally add more
 use cases - only more growth space for people already invested in being
 able to do things on-chain while externalizing the costs to others.


 I would recommend that the fork take effect when some specific large
 supermajority of the previous 1000 blocks indicate they have upgraded, as a
 safer alternative to a simple flag date, but I'm sure I wouldn't have to
 point out that option to people here.


 That only measures miner adoption, which is the least relevant. The
 question is whether people using full nodes will upgrade. If they do, then
 miners are forced to upgrade too, or become irrelevant. If they don't, the
 upgrade is risky with or without miner adoption.

 --
 Pieter




Re: [Bitcoin-development] Long-term mining incentives

2015-05-13 Thread Jorge Timón
On Wed, May 13, 2015 at 12:31 PM, Alex Mizrahi alex.mizr...@gmail.com wrote:
 But this matters if a new node has access to the globally strongest chain.
 If attacker is able to block connections to legitimate nodes, a new node
 will happily accept attacker's chain.

If you get isolated from the network you may not get the longest valid
chain. I don't think any other consensus mechanism deals with this
better than Bitcoin.

 So PoW, by itself, doesn't give strong security guarantees. This problem is
 so fundamental people avoid talking about it.

 In practice, Bitcoin already embraces weak subjectivity e.g. in form of
 checkpoints embedded into the source code. So it's hard to take PoW purists
 seriously.

Checkpoints are NOT part of the consensus rules, they're just an
optimization that can be removed.
Try keeping the genesis block as your only checkpoint and rebuild: it
will work. You can also define your own checkpoints, there's no need
for everyone to use the same ones.
In a future with committed utxo the optimization could be bigger, but
still, we shouldn't rely on checkpoints for consensus, they're just an
optimization and you should only trust checkpoints that are buried in
the chain. Trusting a committed utxo checkpoint from 2 years ago
doesn't seem very risky. If the code is not already done (not really
sure if it was done as part of auto-prune), we should be prepared for
reorgs that invalidate checkpoints.
So, no, Bitcoin does NOT rely on that weak subjectivity thing.



Re: [Bitcoin-development] [BIP] Normalized Transaction IDs

2015-05-13 Thread Pieter Wuille
On Wed, May 13, 2015 at 1:32 PM, Tier Nolan tier.no...@gmail.com wrote:


 On Wed, May 13, 2015 at 9:31 PM, Pieter Wuille pieter.wui...@gmail.com
 wrote:


 This was what I was suggesting all along, sorry if I wasn't clear.

 That's great.  So, basically the multi-level refund problem is solved by
 this?


Yes. So to be clear, I think there are 2 desirable end-goal proposals
(ignoring difficulty of changing things for a minute):

* Transactions and blocks keep referring to other transactions by full
txid, but signature hashes are computed off normalized txids (which are
recursively defined to use normalized txids all the way back to coinbases).
Is this what you are suggesting now as well?

* Blocks commit to full transaction data, but transactions and signature
hashes use normalized txids.

The benefit of the latter solution is that it doesn't need fixing up
transactions whose inputs have been malleated, but comes at the cost of
doing a very invasive hard fork.
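Neither message spells out the normalized-id computation here, but the idea in the draft under discussion is to hash the transaction with the malleable scriptSigs omitted, so third-party signature tweaks don't change the id. A minimal sketch under that assumption; the serialization below is illustrative, not Bitcoin's actual wire format:

```python
# Sketch of a normalized txid: double-SHA256 over a serialization that
# blanks the scriptSigs, so malleating a signature leaves the id unchanged.
import hashlib

def normalized_txid(tx):
    """tx: dict with 'inputs' [(prev_txid, index, script_sig), ...]
    and 'outputs' [(value, script_pubkey), ...] (toy format)."""
    h = hashlib.sha256()
    for prev_txid, index, _script_sig in tx["inputs"]:
        h.update(prev_txid + index.to_bytes(4, "little"))  # scriptSig omitted
    for value, script_pubkey in tx["outputs"]:
        h.update(value.to_bytes(8, "little") + script_pubkey)
    return hashlib.sha256(h.digest()).digest()

tx = {"inputs": [(b"\x01" * 32, 0, b"sig-A")],
      "outputs": [(5000, b"\x76\xa9")]}
malleated = {"inputs": [(b"\x01" * 32, 0, b"sig-B")],  # different signature
             "outputs": tx["outputs"]}
assert normalized_txid(tx) == normalized_txid(malleated)
```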

-- 
Pieter


Re: [Bitcoin-development] Long-term mining incentives

2015-05-13 Thread Aaron Voisine
 increasing the block size is simply not a solution, it's just kicking
 the can down the road (while reducing the incentives to deploy real
 solutions like payment channels).

Placing hard limits on blocksize is not the right solution. There are still
plenty of options to be explored to increase fees, resulting in users
voluntarily economizing on block space. It's premature to resort to
destroying the reliability of propagated transactions getting into blocks.

Child-pays-for-parent is useful, but requires the recipient to spend inputs
upon receipt, consuming even more block space. Replace-by-fee may also
help, but users won't know the fee they are getting charged until after the
fact, and it will make worse all the problems that tx malleability causes
today.

We have $3billion plus of value in this system to defend. The safe,
conservative course is to increase the block size. Miners already have an
incentive to find ways to encourage higher fees  and we can help them with
standard recommended propagation rules and hybrid priority/fee transaction
selection for blocks that increases confirmation delays for low fee
transactions.

Aaron Voisine
co-founder and CEO
breadwallet.com

On Wed, May 13, 2015 at 5:11 PM, Jorge Timón jti...@jtimon.cc wrote:

 On Mon, May 11, 2015 at 7:29 PM, Gavin Andresen gavinandre...@gmail.com
 wrote:
  I think long-term the chain will not be secured purely by proof-of-work.
 I
  think when the Bitcoin network was tiny running solely on people's home
  computers proof-of-work was the right way to secure the chain, and the
 only
  fair way to both secure the chain and distribute the coins.
 
  See https://gist.github.com/gavinandresen/630d4a6c24ac6144482a  for some
  half-baked thoughts along those lines. I don't think proof-of-work is the
  last word in distributed consensus (I also don't think any alternatives
 are
  anywhere near ready to deploy, but they might be in ten years).

 Or never, nobody knows at this point.

  I also think it is premature to worry about what will happen in twenty or
  thirty years when the block subsidy is insignificant. A lot will happen
 in
  the next twenty years. I could spin a vision of what will secure the
 chain
  in twenty years, but I'd put a low probability on that vision actually
  turning out to be correct.

 I think it is very healthy to worry about that since we know it's
 something that will happen.
 The system should work without subsidies.

  That is why I keep saying Bitcoin is an experiment. But I also believe
 that
  the incentives are correct, and there are a lot of very motivated, smart,
  hard-working people who will make it work. When you're talking about
 trying
  to predict what will happen decades from now, I think that is the best
 you
  can (honestly) do.

 Lightning payment channels may be a new idea, but payment channels are
 not, and nobody is using them.
 They are the best solution to scalability we have right now,
 increasing the block size is simply not a solution, it's just kicking
 the can down the road (while reducing the incentives to deploy real
 solutions like payment channels).

 Not worrying about 10 years in the future but asking people to trust
 estimates and speculations about how everything will burn in 2 years
 if we don't act right now seems pretty arbitrary to me.
 One could just as well argue that there's smart hard-working people
 that will solve those problems before they hit us.

 It is true that the more distant the future you're trying to predict
 is, the more difficult it is to predict it, but any threshold that
 separates relevant worries from "too far in the future to worry
 about it" will always be arbitrary.
 Fortunately we don't need to all share the same time horizon for what
 is worrying and what is not.
 What we need is a clear criterion for what is acceptable for a
 hardfork and a general plan to deploy them:

 -Do all the hardfork changes need to be uncontroversial? How do we
 define uncontroversial?
 -Should we maintain and test implementations of hardfork wishes that
 seem too small to justify a hardfork on their own (i.e. the time travel fix,
 allowing signing of input values...) to also deploy them at the same
 time that other more necessary hardforks?

 I agree that hardforks shouldn't be impossible and in that sense I'm
 glad that you started the hardfork debate, but I believe we should be
 focusing on that debate rather than the block size one.
 Once we have a clear criterion, hopefully the block size debate should
 become less noisy and more productive.



Re: [Bitcoin-development] Long-term mining incentives

2015-05-13 Thread Aaron Voisine
Conservative is a relative term. Dropping transactions in a way that is
unpredictable to the sender sounds incredibly drastic to me. I'm suggesting
increasing the blocksize, drastic as it is, is the more conservative
choice. I would recommend that the fork take effect when some specific
large supermajority of the previous 1000 blocks indicate they have
upgraded, as a safer alternative to a simple flag date, but I'm sure I
wouldn't have to point out that option to people here.


Aaron Voisine
co-founder and CEO
breadwallet.com

On Wed, May 13, 2015 at 5:58 PM, Pieter Wuille pieter.wui...@gmail.com
wrote:

 On Wed, May 13, 2015 at 5:48 PM, Aaron Voisine vois...@gmail.com wrote:

 We have $3billion plus of value in this system to defend. The safe,
 conservative course is to increase the block size. Miners already have an
 incentive to find ways to encourage higher fees  and we can help them with
 standard recommended propagation rules and hybrid priority/fee transaction
 selection for blocks that increases confirmation delays for low fee
 transactions.


 You may find that the most economical solution, but I can't understand how
 you can call it conservative.

 Suggesting a hard fork is betting the survival of the entire ecosystem on
 the bet that everyone will agree with and upgrade to new suggested software
 before a flag date.

 --
 Pieter




Re: [Bitcoin-development] Long-term mining incentives

2015-05-13 Thread Pieter Wuille
On Wed, May 13, 2015 at 6:13 PM, Aaron Voisine vois...@gmail.com wrote:

 Conservative is a relative term. Dropping transactions in a way that is
 unpredictable to the sender sounds incredibly drastic to me. I'm suggesting
 increasing the blocksize, drastic as it is, is the more conservative choice.


Transactions are already being dropped, in a more indirect way: by people
and businesses deciding to not use on-chain settlement. That is very sad,
but it's completely inevitable that there is space for some use cases and
not for others (at whatever block size). It's only a "things don't fit
anymore" problem when you see on-chain transactions as the only means for doing
payments, and that is already not the case. Increasing the block size
allows for more utility on-chain, but it does not fundamentally add more
use cases - only more growth space for people already invested in being
able to do things on-chain while externalizing the costs to others.


 I would recommend that the fork take effect when some specific large
 supermajority of the previous 1000 blocks indicate they have upgraded, as a
 safer alternative to a simple flag date, but I'm sure I wouldn't have to
 point out that option to people here.


That only measures miner adoption, which is the least relevant. The
question is whether people using full nodes will upgrade. If they do, then
miners are forced to upgrade too, or become irrelevant. If they don't, the
upgrade is risky with or without miner adoption.

-- 
Pieter


Re: [Bitcoin-development] Long-term mining incentives

2015-05-13 Thread Aaron Voisine
 I concede the point. Perhaps a flag date based on previous observation of
network upgrade rates with a conservative additional margin in addition to
supermajority of mining power.

It occurs to me that this would allow for a relatively small percentage of
miners to stop the upgrade if the flag date turns out to be poorly chosen
and a large number of non-mining nodes haven't upgraded yet. Would be a
nice safety fallback.


Aaron Voisine
co-founder and CEO
breadwallet.com

On Wed, May 13, 2015 at 6:31 PM, Aaron Voisine vois...@gmail.com wrote:

  by people and businesses deciding to not use on-chain settlement.

 I completely agree. Increasing fees will cause people to voluntarily economize
 on blockspace by finding alternatives, i.e. not bitcoin. A fee however is a
 known, upfront cost... unpredictable transaction failure in most cases will
 be a far higher, unacceptable cost to the user than the actual fee. The
 higher the costs of using the system, the lower the adoption as a
 store-of-value. The lower the adoption as store-of-value, the lower the
 price, and the lower the value of bitcoin to the world.

  That only measures miner adoption, which is the least relevant.

 I concede the point. Perhaps a flag date based on previous observation of
 network upgrade rates with a conservative additional margin in addition to
 supermajority of mining power.


 Aaron Voisine
 co-founder and CEO
 breadwallet.com

 On Wed, May 13, 2015 at 6:19 PM, Pieter Wuille pieter.wui...@gmail.com
 wrote:

 On Wed, May 13, 2015 at 6:13 PM, Aaron Voisine vois...@gmail.com wrote:

 Conservative is a relative term. Dropping transactions in a way that is
 unpredictable to the sender sounds incredibly drastic to me. I'm suggesting
 increasing the blocksize, drastic as it is, is the more conservative choice.


 Transactions are already being dropped, in a more indirect way: by people
 and businesses deciding to not use on-chain settlement. That is very sad,
 but it's completely inevitable that there is space for some use cases and
 not for others (at whatever block size). It's only a "things don't fit
 anymore" problem when you see on-chain transactions as the only means for doing
 payments, and that is already not the case. Increasing the block size
 allows for more utility on-chain, but it does not fundamentally add more
 use cases - only more growth space for people already invested in being
 able to do things on-chain while externalizing the costs to others.


 I would recommend that the fork take effect when some specific large
 supermajority of the previous 1000 blocks indicate they have upgraded, as a
 safer alternative to a simple flag date, but I'm sure I wouldn't have to
 point out that option to people here.


 That only measures miner adoption, which is the least relevant. The
 question is whether people using full nodes will upgrade. If they do, then
 miners are forced to upgrade too, or become irrelevant. If they don't, the
 upgrade is risky with or without miner adoption.

 --
 Pieter





Re: [Bitcoin-development] Long-term mining incentives

2015-05-13 Thread Melvin Carvalho
On 11 May 2015 at 18:28, Thomas Voegtlin thom...@electrum.org wrote:

 The discussion on block size increase has brought some attention to the
 other elephant in the room: Long-term mining incentives.

 Bitcoin derives its current market value from the assumption that a
 stable, steady-state regime will be reached in the future, where miners
 have an incentive to keep mining to protect the network. Such a steady
 state regime does not exist today, because miners get most of their
 reward from the block subsidy, which will progressively be removed.

 Thus, today's 3 billion USD question is the following: Will a steady
 state regime be reached in the future? Can such a regime exist? What are
 the necessary conditions for its existence?

 Satoshi's paper suggests that this may be achieved through miner fees.
 Quite a few people seem to take this for granted, and are working to
 make it happen (developing cpfp and replace-by-fee). This explains part
 of the opposition to raising the block size limit; some people would
 like to see some fee pressure building up first, in order to get closer
 to a regime where miners are incentivised by transaction fees instead of
 block subsidy. Indeed, the emergence of a working fee market would be
 extremely reassuring for the long-term viability of bitcoin. So, the
 thinking goes, by raising the block size limit, we would be postponing a
 crucial reality check. We would be buying time, at the expenses of
 Bitcoin's decentralization.

 OTOH, proponents of a block size increase have a very good point: if the
 block size is not raised soon, Bitcoin is going to enter a new, unknown
 and potentially harmful regime. In the current regime, almost all
 transaction get confirmed quickly, and fee pressure does not exist. Mike
 Hearn suggested that, when blocks reach full capacity and users start to
 experience confirmation delays and confirmation uncertainty, users will
 simply go away and stop using Bitcoin. To me, that outcome sounds very
 plausible indeed. Thus, proponents of the block size increase are
 conservative; they are trying to preserve the current regime, which is
 known to work, instead of letting the network enter uncharted territory.

 My problem is that this seems to lacks a vision. If the maximal block
 size is increased only to buy time, or because some people think that 7
 tps is not enough to compete with VISA, then I guess it would be
 healthier to try and develop off-chain infrastructure first, such as the
 Lightning network.

 OTOH, I also fail to see evidence that a limited block capacity will
 lead to a functional fee market, able to sustain a steady state. A
 functional market requires well-informed participants who make rational
 choices and accept the outcomes of their choices. That is not the case
 today, and to believe that it will magically happen because blocks start
 to reach full capacity sounds a lot like like wishful thinking.

 So here is my question, to both proponents and opponents of a block size
 increase: What steady-state regime do you envision for Bitcoin, and what
 is is your plan to get there? More specifically, how will the
 steady-state regime look like? Will users experience fee pressure and
 delays, or will it look more like a scaled up version of what we enjoy
 today? Should fee pressure be increased jointly with subsidy decrease,
 or as soon as possible, or never? What incentives will exist for miners
 once the subsidy is gone? Will miners have an incentive to permanently
 fork off the last block and capture its fees? Do you expect Bitcoin to
 work because miners are altruistic/selfish/honest/caring?

 A clear vision would be welcome.


I am guided here by Satoshi's paper:

Commerce on the Internet has come to rely almost exclusively on financial
institutions serving as trusted third parties to process electronic
payments. While the system works well enough for *most transactions*

This suggests to me that most transactions will occur off-block, with the block
chain used for settlement.  Indeed, Satoshi was working on a trust-based market
before he left.

If commerce works well enough off-block with zero trust settlement
supporting it, people might even forget that the block chain exists, like
with gold settlement.  But it can be used for transactions.  To this end I
welcome higher fees, so that the block chain becomes the reserve currency
of the internet and is used sparingly.

But as Gavin pointed out, bitcoin is still an experiment and we are all
still learning.  We are also learning from alt coin mechanisms.  I am
unsure there is huge urgency here, and would lean towards caution as
bitcoin infrastructure rapidly grows.





Re: [Bitcoin-development] [BIP] Normalized Transaction IDs

2015-05-13 Thread Pieter Wuille
Normalized transaction ids are only effectively non-malleable when all
inputs they refer to are also non-malleable (or you can have malleability
in 2nd level dependencies), so I do not believe it makes sense to allow
mixed usage of the txids at all. They do not provide the actual benefit of
guaranteed non-malleability before it becomes disallowed to use the old
mechanism. That, together with the approximate resource doubling needed for the
UTXO set (as mentioned earlier) and the fact that an alternative which is only a
softfork is available, makes this a bad idea IMHO.

Unsure to what extent this has been presented on the mailing list, but the
softfork idea is this:
* Transactions get 2 txids, one used to reference them (computed as
before), and one used in an (extended) sighash.
* The txins keep using the normal txid, so no structural changes to
Bitcoin.
* The ntxid is computed by replacing the scriptSigs in inputs by the empty
string, and by replacing the txids in txins by their corresponding ntxids.
* A new checksig operator is softforked in, which uses the ntxids in its
sighashes rather than the full txid.
* To support efficiently computing ntxids, every tx in the utxo set
(currently around 6M) stores the ntxid, but only supports lookup by txid
still.

This does result in a system where a changed dependency indeed invalidates
the spending transaction, but the fix is trivial and can be done without
access to the private key.
On May 13, 2015 5:50 AM, Christian Decker decker.christ...@gmail.com
wrote:

 Hi All,

 I'd like to propose a BIP to normalize transaction IDs in order to address
 transaction malleability and facilitate higher level protocols.

 The normalized transaction ID is an alias used in parallel to the current
 (legacy) transaction IDs to address outputs in transactions. It is
 calculated by removing (zeroing) the scriptSig before computing the hash,
 which ensures that only data whose integrity is also guaranteed by the
 signatures influences the hash. Thus if anything causes the normalized ID
 to change, it automatically invalidates the signature. When validating
 transactions, a client supporting this BIP would use both the normalized tx
 ID and the legacy tx ID.

 The detailed writeup can be found here:
 https://github.com/cdecker/bips/blob/normalized-txid/bip-00nn.mediawiki.

 @gmaxwell: I'd like to request a BIP number, unless there is something
 really wrong with the proposal.

 In addition to being a simple alternative that solves transaction
 malleability it also hugely simplifies higher level protocols. We can now
 use template transactions upon which sequences of transactions can be built
 before signing them.

 I hesitated quite a while to propose it since it does require a hardfork
 (old clients would not find the prevTx identified by the normalized
 transaction ID and deem the spending transaction invalid), but it seems
 that hardforks are no longer the dreaded boogeyman nobody talks about.
 I left out the details of how the hardfork is to be done, as it does not
 really matter and we may have a good mechanism to apply a bunch of
 hardforks concurrently in the future.

 I'm sure it'll take time to implement and upgrade, but I think it would be
 a nice addition to the functionality and would solve a long standing
 problem :-)

 Please let me know what you think, the proposal is definitely not set in
 stone at this point and I'm sure we can improve it further.

 Regards,
 Christian






Re: [Bitcoin-development] [BIP] Normalized Transaction IDs

2015-05-13 Thread Luke Dashjr
I think this hardfork is dead-on-arrival given the ideas for OP_CHECKSIG 
softforking. Instead of referring to previous transactions by a normalised 
hash, it makes better sense to simply change the outpoints in the signed data 
and allow nodes to hotfix dependent transactions when/if they are malleated. 
Furthermore, the approach of using a hash of scriptPubKey in the input rather 
than an outpoint also solves dependencies in the face of intentional 
malleability (respending with a higher fee, or CoinJoin, for a few examples).

These aren't barriers to making the proposal or being assigned a BIP number if 
you want to go forward with that, but you may wish to reconsider spending time 
on it.

Luke


On Wednesday, May 13, 2015 12:48:04 PM Christian Decker wrote:
 Hi All,
 
 I'd like to propose a BIP to normalize transaction IDs in order to address
 transaction malleability and facilitate higher level protocols.
 
 The normalized transaction ID is an alias used in parallel to the current
 (legacy) transaction IDs to address outputs in transactions. It is
 calculated by removing (zeroing) the scriptSig before computing the hash,
 which ensures that only data whose integrity is also guaranteed by the
 signatures influences the hash. Thus if anything causes the normalized ID
 to change, it automatically invalidates the signature. When validating
 transactions, a client supporting this BIP would use both the normalized tx
 ID and the legacy tx ID.
 
 The detailed writeup can be found here:
 https://github.com/cdecker/bips/blob/normalized-txid/bip-00nn.mediawiki.
 
 @gmaxwell: I'd like to request a BIP number, unless there is something
 really wrong with the proposal.
 
 In addition to being a simple alternative that solves transaction
 malleability it also hugely simplifies higher level protocols. We can now
 use template transactions upon which sequences of transactions can be built
 before signing them.
 
 I hesitated quite a while to propose it since it does require a hardfork
 (old clients would not find the prevTx identified by the normalized
 transaction ID and deem the spending transaction invalid), but it seems
 that hardforks are no longer the dreaded boogeyman nobody talks about.
 I left out the details of how the hardfork is to be done, as it does not
 really matter and we may have a good mechanism to apply a bunch of
 hardforks concurrently in the future.
 
 I'm sure it'll take time to implement and upgrade, but I think it would be
 a nice addition to the functionality and would solve a long standing
 problem :-)
 
 Please let me know what you think, the proposal is definitely not set in
 stone at this point and I'm sure we can improve it further.
 
 Regards,
 Christian



Re: [Bitcoin-development] [BIP] Normalized Transaction IDs

2015-05-13 Thread Christian Decker
If the inputs to my transaction have been long confirmed I can be
reasonably safe in assuming that the transaction hash does not change
anymore. It's true that I have to be careful not to build on top of
transactions that use legacy references to transactions that are
unconfirmed or have few confirmations, however that does not invalidate the
utility of the normalized transaction IDs.

The resource doubling is not optimal, I agree, but compare that to dragging
around malleability and subsequent hacks to sort-of fix it forever.
Additionally if we were to decide to abandon legacy transaction IDs we
could eventually drop the legacy index after a sufficient transition period.

I remember reading about the SIGHASH proposal somewhere. It feels really
hackish to me: it is a substantial change to the way signatures are
verified; I cannot really see how this is a softfork if clients that did
not update are unable to verify transactions using that SIGHASH flag; and
it adds more data (the normalized hash) to the script, which has to be
stored as part of the transaction. It may be true that a node observing
changes in the input transactions of a transaction using this flag could
fix the problem, but it requires the node's intervention.

Compare that to the simple and clean solution in the proposal, which does
not add extra data to be stored, keeps the OP_*SIG* semantics as they are
and where once you sign a transaction it does not have to be monitored or
changed in order to be valid.

There certainly are merits using the SIGHASH approach in the short term (it
does not require a hard fork), however I think the normalized transaction
ID is a cleaner and simpler long-term solution, even though it requires a
hard-fork.

Regards,
Christian

On Wed, May 13, 2015 at 7:14 PM Pieter Wuille pieter.wui...@gmail.com
wrote:

 Normalized transaction ids are only effectively non-malleable when all
 inputs they refer to are also non-malleable (or you can have malleability
 in 2nd level dependencies), so I do not believe it makes sense to allow
 mixed usage of the txids at all. They do not provide the actual benefit of
 guaranteed non-malleability before it becomes disallowed to use the old
 mechanism. That, together with the approximate resource doubling needed for
 the UTXO set (as mentioned earlier) and the fact that an alternative which is
 only a softfork is available, makes this a bad idea IMHO.

 Unsure to what extent this has been presented on the mailing list, but the
 softfork idea is this:
 * Transactions get 2 txids, one used to reference them (computed as
 before), and one used in an (extended) sighash.
 * The txins keep using the normal txid, so no structural changes to
 Bitcoin.
 * The ntxid is computed by replacing the scriptSigs in inputs by the empty
 string, and by replacing the txids in txins by their corresponding ntxids.
 * A new checksig operator is softforked in, which uses the ntxids in its
 sighashes rather than the full txid.
 * To support efficiently computing ntxids, every tx in the utxo set
 (currently around 6M) stores the ntxid, but only supports lookup by txid
 still.

 This does result in a system where a changed dependency indeed invalidates
 the spending transaction, but the fix is trivial and can be done without
 access to the private key.
 On May 13, 2015 5:50 AM, Christian Decker decker.christ...@gmail.com
 wrote:

 Hi All,

 I'd like to propose a BIP to normalize transaction IDs in order to
 address transaction malleability and facilitate higher level protocols.

 The normalized transaction ID is an alias used in parallel to the current
 (legacy) transaction IDs to address outputs in transactions. It is
 calculated by removing (zeroing) the scriptSig before computing the hash,
 which ensures that only data whose integrity is also guaranteed by the
 signatures influences the hash. Thus if anything causes the normalized ID
 to change, it automatically invalidates the signature. When validating
 transactions, a client supporting this BIP would use both the normalized tx
 ID and the legacy tx ID.

 The detailed writeup can be found here:
 https://github.com/cdecker/bips/blob/normalized-txid/bip-00nn.mediawiki.

 @gmaxwell: I'd like to request a BIP number, unless there is something
 really wrong with the proposal.

 In addition to being a simple alternative that solves transaction
 malleability it also hugely simplifies higher level protocols. We can now
 use template transactions upon which sequences of transactions can be built
 before signing them.

 I hesitated quite a while to propose it since it does require a hardfork
 (old clients would not find the prevTx identified by the normalized
 transaction ID and deem the spending transaction invalid), but it seems
 that hardforks are no longer the dreaded boogeyman nobody talks about.
 I left out the details of how the hardfork is to be done, as it does not
 really matter and we may have a good mechanism to apply a bunch of
 

Re: [Bitcoin-development] [BIP] Normalized Transaction IDs

2015-05-13 Thread Tier Nolan
On Wed, May 13, 2015 at 6:14 PM, Pieter Wuille pieter.wui...@gmail.com
wrote:

 Normalized transaction ids are only effectively non-malleable when all
 inputs they refer to are also non-malleable (or you can have malleability
 in 2nd level dependencies), so I do not believe it makes sense to allow
 mixed usage of the txids at all.


The txid or txid-norm is signed, so can't be changed after signing.

The hard fork is to allow transactions to refer to their inputs by txid or
txid-norm.  You pick one before signing.

 They do not provide the actual benefit of guaranteed non-malleability
 before it becomes disallowed to use the old mechanism.

A signed transaction cannot have its txid changed.  It is true that users
of the system would have to use txid-norm.

The basic refund transaction is as follows.

 A creates TX1: Pay w BTC to B's public key if signed by A & B

 A creates TX2: Pay w BTC from TX1-norm to A's public key, locked 48
hours in the future, signed by A

 A sends TX2 to B

 B signs TX2 and returns to A

A broadcasts TX1.  It is mutated before entering the chain to become
TX1-mutated.

A can still submit TX2 to the blockchain, since TX1 and TX1-mutated have
the same txid-norm.
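In toy Python, here is why TX2 survives the mutation. This uses a deliberately
simplified transaction model, (script_sig, body), where "body" stands in for
everything covered by signatures and only the scriptSig is third-party
malleable; the names and serialization are illustrative assumptions:

```python
import hashlib


def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()


def txid(tx):
    script_sig, body = tx
    return dsha256(script_sig + body)      # legacy id covers the scriptSig


def txid_norm(tx):
    _script_sig, body = tx
    return dsha256(body)                   # scriptSig stripped before hashing


# A creates TX1, then TX2 is built to spend TX1 by its txid-norm.
tx1 = (b"sig-by-A", b"pay w BTC to 2-of-2(A, B)")
tx2_prevout_ref = txid_norm(tx1)           # the reference TX2 signs over

# TX1 is mutated in flight: the scriptSig changes, the signed body does not.
tx1_mutated = (b"sig-by-A-padded", b"pay w BTC to 2-of-2(A, B)")

assert txid(tx1) != txid(tx1_mutated)              # legacy txid broke
assert txid_norm(tx1_mutated) == tx2_prevout_ref   # TX2 still spends TX1
```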


 That, together with the approximate resource doubling needed for the UTXO set
 (as mentioned earlier) and the fact that an alternative which is only a
 softfork is available, makes this a bad idea IMHO.

 Unsure to what extent this has been presented on the mailing list, but the
 softfork idea is this:
 * Transactions get 2 txids, one used to reference them (computed as
 before), and one used in an (extended) sighash.
 * The txins keep using the normal txid, so no structural changes to
 Bitcoin.
 * The ntxid is computed by replacing the scriptSigs in inputs by the empty
 string, and by replacing the txids in txins by their corresponding ntxids.
 * A new checksig operator is softforked in, which uses the ntxids in its
 sighashes rather than the full txid.
 * To support efficiently computing ntxids, every tx in the utxo set
 (currently around 6M) stores the ntxid, but only supports lookup by txid
 still.

 This does result in a system where a changed dependency indeed invalidates
 the spending transaction, but the fix is trivial and can be done without
 access to the private key.

The problem with this is that two-level malleability is not protected against.

C spends B which spends A.

A is mutated before it hits the chain.  The only change in A is in the
scriptSig.

B can be converted to B-new without breaking the signature.  This is
because the only change to A was in the scriptSig, which is dropped when
computing the txid-norm.

B-new spends A-mutated.  B-new is different from B in a different place.
The txid it uses to refer to the previous output is changed.

The signed transaction C cannot be converted to a valid C-new.  The txid of
the input points to B.  It is updated to point at B-new.  B-new and B don't
have the same txid-norm, since the change is outside the scriptSig.  This
means that the signature for C is invalid.

The txid replacements should be done recursively.  All input txids should
be replaced by txid-norms when computing the txid-norm for the
transaction.  I think this repairs the problem with only allowing one level?

Computing txid-norm:

- replace all txids in inputs with txid-norms of those transactions
- replace all input scriptSigs with empty scripts
- transaction hash is txid-norm for that transaction
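A toy sketch of this recursive computation follows; the serialization and
field layout are assumptions for illustration, not Bitcoin's actual encoding:

```python
import hashlib


def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()


# Toy tx: {"inputs": [(prev_txid, index, script_sig)], "outputs": [bytes]}
# txs maps legacy txid -> tx, so dependencies can be walked recursively.

def legacy_txid(tx):
    data = b""
    for prev_txid, idx, script_sig in tx["inputs"]:
        data += prev_txid + idx.to_bytes(4, "little") + script_sig
    for out in tx["outputs"]:
        data += out
    return dsha256(data)


def txid_norm(tx, txs):
    data = b""
    for prev_txid, idx, _script_sig in tx["inputs"]:   # scriptSigs dropped
        if prev_txid in txs:                           # recurse into parents
            data += txid_norm(txs[prev_txid], txs)
        else:                                          # unknown parent: keep raw
            data += prev_txid
        data += idx.to_bytes(4, "little")
    for out in tx["outputs"]:
        data += out
    return dsha256(data)
```

With this, B and B-new from the scenario above hash to the same txid-norm,
because the mutated parent A is replaced by its (unchanged) txid-norm during
the recursion.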

The same situation as above is not fatal now.

C spends B which spends A.

A is mutated before it hits the chain.  The only change in A is in the
scriptSig.

B can be converted to B-new without breaking the signature.  This is
because the only change to A was in the scriptSig, which is dropped when
computing the txid-norm (as before).

B-new spends A-mutated.  B-new is different from B in the previous
inputs.

The input for B-new points to A-mutated.  When computing the txid-norm,
that would be replaced with the txid-norm for A.

Similarly, the input for B points to A and that would have been replaced
with the txid-norm for A.

This means that B and B-new have the same txid-norm.

The signed transaction C can be converted to a valid C-new.  The txid of
the input points to B.  It is updated to point at B-new.  B-new and B now
have the same txid-norm and so C is valid.

I think this reasoning is valid, but probably needs writing out actual
serializations.

Re: [Bitcoin-development] Long-term mining incentives

2015-05-13 Thread Damian Gomez
I hope to keep continuing these conversations. Pardon my absence, but I
don't always feel like I have much to contribute, especially if it's not
technical.

For my part, I have been a proponent of an alternative consensus that
begins shifting away from the current coinbase reward system, in order to
reduce mining on the whole and thus limit those who do mine to doing so on
a level of integrity.


I took a look at the Ethereum blog on weak subjectivity; it does seem to be
a good transition to use a gravity schema implemented in a log-structured
merge tree in order to find discrepancies in forks.


This same data structure could still be used in a consensus model. In terms
of how nodes communicate on the network, their speed and latency are at
least halfway solved based on their interactions (kernel software changes)
with how nodes write and read memory { smp_wmb() || smp_rmb() }. This would
allow for a connection on the


Let me provide a use case: say that we wanted to begin a new model for
integrity; then the current value for integrity would utilize an OTS from
the previous hash in order to establish the previous owner address of the
block it was previously part of.  THE MAIN ISSUE here is being able to
verify which value of integrity is useful for establishing a genesis block.
A paper by Lee & Ewe (2001) called *The Byzantine General's Problem* gives
insight as to how an O(n^c) model is suitable to send a message with value
throughout the system; each node is then sent a read-invalidate request in
order to change their cache logs for old system memory in a new fixed
address. Upon consensus of this value, the rest of the brainer {1st
recipients} nodes would be able to send a forward propagation of the learnt
value and, after acceptance, the value would then be backpropagated to the
genesis block upon every round in order to set a deterministic standard for
the dynamic increase of integrity of the system.


In PoW systems, the nonce generated would be the accumulation of the
integrity within a system and of the computational exertion, in terms of
the overall rate of integrity increase in the system, as the new coinbase.
This value is then assigned and signed to the hash and the Merkle root as
two layers encoded to its base, and then re-encrypted using ECDSA from the
256- to 512-bit transformation, so that the new address given has a
validity that cannot be easily fingerprinted and the malleability of the
transaction becomes much more difficult due to the overall 2^28
verification stamp provided to the new hash.  The parameters T T r P:

(Trust value) - found in the new coinbase or the scriptSig
(Hidden) - found in the hash and the Merkle root hash
(Trust overall) R = within the target range for new nonces and address
locations
Paradigm (integrity) = held within the genesis block as a backpropagated
solution



Using this signature, the nodes would then be able to communicate and
transition the memory reserves for previous transactions on the block,
based on the Byzantine consensus. What no one has yet mentioned, and which
I have forgotten too, is how these data centers of pools would be supported
without fees. I will throw that one out to all of you.  The current
consensus system leaves room for orphaned transactions; if there were
multiple signature requests, the queue would be lined up based on integrity
values in order to have the most effective changes occur first.

I have some more thoughts and will continue working on the technical
vernacular, and on how a noob developer and decent computer science student
could make such an implementation a reality.  Thanks in advance for
listening to this.



Thank you to Greg Maxwell for allowing us to listen to his talk online; I
was hearing it while writing this.  And to Krzysztof Okupski and Paul
McKenney (Memory Barriers: A Hardware View for Software Hackers) for their
help in nudging my brain, and to the relentless people behind the scenes
who make all our minds possible.







On Wed, May 13, 2015 at 4:26 AM, 
bitcoin-development-requ...@lists.sourceforge.net wrote:


 Today's Topics:

1. Re: Long-term mining incentives (Thomas Voegtlin)
2. Re: Long-term mining incentives (Tier Nolan)
3. Re: Long-term mining incentives (Alex Mizrahi)
4. Re: Proposed alternatives to the 20MB stepfunction (Tier
 Nolan)
5. Re: Block Size Increase (Oliver