Re: [bitcoin-dev] BIP CPRKV: Check private key verify

2016-04-18 Thread jl2012--- via bitcoin-dev
I just realized that if we have OP_CAT, OP_CHECKPRIVATEKEYVERIFY (aka 
OP_CHECKPRIVPUBPAIR) is not needed (and this alternative is probably better for privacy)

 

Bob has the prikey-x for pubkey-x. Alice and Bob agree on a random secret 
nonce, k. They calculate r in the same way as when signing a transaction.

 

The script is:

 

SIZE  ADD <0x30> SWAP CAT <0x02|r-length|r> CAT SWAP CAT  CHECKSIGVERIFY  CHECKSIG

 

To redeem, Bob has to provide:

 

 <0x02|s-length|s|sighashtype>

 

With k, s, and the sighash, Alice (and only Alice) can recover prikey-x with the 
well-known k-reuse exploit

( https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm )
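For concreteness, a minimal Python sketch of the recovery step, assuming secp256k1 and standard ECDSA notation (z is the sighash); the helper names are mine, not part of the original post:

# Sketch only: recover prikey-x from (r, s), the sighash z, and the known nonce k,
# using s = k^-1 * (z + r*x) mod n  =>  x = (s*k - z) * r^-1 mod n.
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 order

def recover_privkey(k: int, r: int, s: int, z: int) -> int:
    return ((s * k - z) * pow(r, -1, N)) % N

# The broadcast signature may carry n - s instead of s (low-s normalization),
# so Alice checks both candidates against pubkey-x.
def recover_candidates(k: int, r: int, s: int, z: int):
    return recover_privkey(k, r, s, z), recover_privkey(k, r, N - s, z)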

 

The script will be much cleaner if we remove the DER encoding in the next 
generation of CHECKSIG

 

The benefit is that prikey-x remains a secret shared between Alice and Bob. If they don't 
mind exposing prikey-x, they could use r = the x coordinate of pubkey-x, which 
means k = prikey-x (https://bitcointalk.org/index.php?topic=291092.0). This 
would reduce the witness size a little, as a DUP may be used.

 

From: bitcoin-dev-boun...@lists.linuxfoundation.org 
[mailto:bitcoin-dev-boun...@lists.linuxfoundation.org] On Behalf Of Tier Nolan 
via bitcoin-dev
Sent: Monday, 29 February, 2016 19:53
Cc: Bitcoin Dev 
Subject: Re: [bitcoin-dev] BIP CPRKV: Check private key verify

 

On Mon, Feb 29, 2016 at 10:58 AM, Mats Jerratsch wrote:

This is actually very useful for LN too, see relevant discussion here

http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-November/011827.html

 

Is there much demand for trying to code up a patch to the reference client?  I 
did a basic one, but it would need tests etc. added.

I think that segregated witness is going to be using up any potential soft-fork 
slot for the time being anyway.



[bitcoin-dev] BIP draft: Merkelized Abstract Syntax Tree

2016-04-01 Thread jl2012--- via bitcoin-dev
BIP draft: https://github.com/jl2012/bips/blob/mast/bip-mast.mediawiki
Reference implementation:
https://github.com/jl2012/bitcoin/commit/f335cab76eb95d4f7754a718df201216a4975d8c

This BIP defines a new witness program type that uses a Merkle tree to
encode mutually exclusive branches in a script. This enables complicated
redemption conditions that are currently not possible, improves privacy by
hiding unexecuted scripts, and allows inclusion of non-consensus enforced
data with very low or no additional cost.

The reference implementation is a small and simple patch on top of BIP141
(segwit); however, I have no intention to push this before segwit is
enforced. Instead, I hope MAST will come with many new opcodes,
particularly Schnorr signatures.



Re: [bitcoin-dev] BIP CPRKV: Check private key verify

2016-02-11 Thread jl2012--- via bitcoin-dev
Seems it could be done without any new opcode:

 

Bob is trading b Bitcoins for a altcoins.

 

1. Bob Pays D Bitcoins to

 

IF

 CLTV DROP  CHECKSIG

ELSE

HASH160  EQUALVERIFY  CHECKSIG

ENDIF

 

2. Alice pays a altcoins to

 

IF

HASH160  EQUALVERIFY  CHECKSIG

ELSE

HASH160  EQUALVERIFY  CHECKSIG

ENDIF

 

3. Bob pays b Bitcoins to

 

IF

 CLTV DROP  CHECKSIG

ELSE

HASH160  EQUALVERIFY  CHECKSIG

ENDIF

 

4. Alice claims output from step 3 and reveals secret A

 

5. Bob claims output from step 2

 

6. Bob claims output from step 1 and reveals secret B

 

From: bitcoin-dev-boun...@lists.linuxfoundation.org 
[mailto:bitcoin-dev-boun...@lists.linuxfoundation.org] On Behalf Of Tier Nolan 
via bitcoin-dev
Sent: Friday, 12 February, 2016 04:05
To: Bitcoin Dev 
Subject: [bitcoin-dev] BIP CPRKV: Check private key verify

 

There was some discussion on the bitcointalk forums about using CLTV for cross 
chain transfers.

Many altcoins don't support CLTV, so transfers to those coins cannot be made 
secure.  

I created a protocol. It uses cut and choose to allow commitments to 
publish private keys, but it is clunky and not entirely secure.

I created a BIP draft for an opcode which would allow outputs to be locked 
unless a private key was published that matches a given public key.


https://github.com/TierNolan/bips/blob/cpkv/bip-cprkv.mediawiki


  


 



Re: [bitcoin-dev] A roadmap to a better header format and bigger block size

2016-02-09 Thread jl2012--- via bitcoin-dev
I am actually suggesting 1 hardfork, not 2. However, different rules are
activated at different times to enhance safety and reduce disruption. The
advantage is that people are required to upgrade only once, not twice. Any clients
designed for stage 2 should also be ready for stage 3.


-Original Message-
From: Matt Corallo [mailto:lf-li...@mattcorallo.com] 
Sent: Wednesday, 10 February, 2016 06:15
To: jl2...@xbt.hk; bitcoin-dev@lists.linuxfoundation.org
Subject: Re: [bitcoin-dev] A roadmap to a better header format and bigger
block size

As for your stages idea, I generally like the idea (and mentioned it may be
a good idea in my proposal), but am worried about scheduling two hard-forks
at once. Let's do our first hard-fork first with the things we think we
will need anytime in the visible future that we have reasonable designs for
now, and talk about a second one after we've seen what did/didn't blow up
with the first one.

Anyway, this generally seems reasonable - it looks like most of this matches
up with what I said more specifically in my mail yesterday, with the
addition of timewarp fixes, which we should probably add, and Luke's header
changes, which I need to spend some more time thinking about.

Matt




Re: [bitcoin-dev] On Hardforks in the Context of SegWit

2016-02-08 Thread jl2012--- via bitcoin-dev
Thanks for this proposal. Just some quick response:

1. The segwit hardfork (BIP HF) could be deployed with BIP141 (segwit
softfork). BIP141 doesn't need a grace period. BIP HF will have around 1 year
of grace period.

2. Threshold is 95%. Using 4 version bits: a) BIP141; b) BIP HF; c) BIP141
if BIP HF has already got 95%; d) BIP HF if BIP141 has already got 95%.
Voting a and c (or b and d) at the same time is invalid. BIP141 is
activated if a>95% or (a+c>95% and b+d>95%). BIP HF is activated if b>95% or
(a+c>95% and b+d>95%). (See the sketch after point 4.)

3. Fix time warp attack: this may break some SPV implementations

4. Limiting non-segwit inputs may make some existing signed txs invalid. My
proposal is: a) count the number of non-segwit sigops in a tx, including
those in unexecuted branches (sigop); b) measure the tx size without the scriptSig
(size); c) a new rule is SUM(sigop*size) < some_value. This allows
calculation without actually running the script.
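To make point 2 concrete, here is a minimal sketch of the joint activation check, assuming a, b, c, d are measured as fractions of signalling blocks over some window (the window and exact counting are my assumptions, not part of the proposal):

# Sketch of the joint activation rule in point 2 (illustrative only).
def activation(a: float, b: float, c: float, d: float):
    # a: BIP141 alone, b: BIP HF alone,
    # c: BIP141-only-if-HF-reaches-95%, d: BIP-HF-only-if-141-reaches-95%
    both = (a + c > 0.95) and (b + d > 0.95)
    return (a > 0.95) or both, (b > 0.95) or both  # (BIP141 active, BIP HF active)

# Example: 50% of blocks signal a+d, 46% signal b+c, 4% signal nothing.
print(activation(a=0.50, b=0.46, c=0.46, d=0.50))  # (True, True)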


-Original Message-
From: bitcoin-dev-boun...@lists.linuxfoundation.org
[mailto:bitcoin-dev-boun...@lists.linuxfoundation.org] On Behalf Of Matt
Corallo via bitcoin-dev
Sent: Tuesday, 9 February, 2016 03:27
To: Bitcoin Dev 
Subject: [bitcoin-dev] On Hardforks in the Context of SegWit

Hi all,

I believe we, today, have a unique opportunity to begin to close the book on
the short-term scaling debate.

First a little background. The scaling debate that has been gripping the
Bitcoin community for the past half year has taken an interesting turn in
2016. Until recently, there have been two distinct camps - one proposing a
significant change to the consensus-enforced block size limit to allow for
more on-blockchain transactions and the other opposing such a change,
suggesting instead that scaling be obtained by adding more flexible systems
on top of the blockchain. At this point, however, the entire Bitcoin
community seems to have unified around a single vision - roughly 2MB of
transactions per block, whether via Segregated Witness or via a hard fork,
is something that can be both technically supported and which adds more
headroom before second-layer technologies must be in place. Additionally, it
seems that the vast majority of the community agrees that segregated witness
should be implemented in the near future and that hard forks will be a
necessity at some point, and I don't believe it should be controversial
that, as we have never done a hard fork before, gaining experience by
working towards a hard fork now is a good idea.

With the apparent agreement in the community, it is incredibly disheartening
that there is still so much strife, creating a toxic environment in which
developers are not able to work, companies are worried about their future
ability to easily move Bitcoins, and investors are losing confidence. The
way I see it, this broad unification of visions across all parts of the
community places the burden of selecting the most technically-sound way to
achieve that vision squarely on the development community.

Sadly, the strife is furthered by the huge risks involved in a hard fork in
the presence of strife, creating a toxic cycle which prevents a safe hard
fork. While there has been talk of doing an "emergency hardfork" as an
option, and while I do believe this is possible, it is not something that
will be easy, especially for something as controversial as rising fees.
Given that we have never done a hard fork before, being very careful and
deliberate in doing so is critical, and the technical community working
together to plan for all of the things that might go wrong is key to not
destroying significant value.

As such, I'd like to ask everyone involved to take this opportunity to
"reset", forgive past aggressions, and return the technical debates to
technical forums (ie here, IRC, etc).

As what a hard fork should look like in the context of segwit has never
(!) been discussed in any serious sense, I'd like to kick off such a
discussion with a (somewhat) specific proposal.

First some design notes:
* I think a key design feature should be taking this opportunity to add
small increases in decentralization pressure, where possible.
* Due to the several non-linear validation time issues in transaction
validation which are fixed by SegWit's signature-hashing changes, I strongly
believe any hard fork proposal which changes the block size should rely on
SegWit's existence.
* As with any hard fork proposal, it's easy to end up pulling in hundreds of
small fixes for any number of protocol annoyances. In order to avoid doing
this, we should try hard to stick with a few simple changes.

Here is a proposed outline (to activate only after SegWit and with the
currently-proposed version of SegWit):

1) The segregated witness discount is changed from 75% to 50%. The block
size limit (ie transactions + witness/2) is set to 1.5MB. This gives a
maximum block size of 3MB and a "network-upgraded" block size of roughly
2.1MB. This still 
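A quick sanity check of the 3MB and roughly 2.1MB figures above (my arithmetic, assuming a typical upgraded block carries about 60% witness data, in line with the prunable ratio quoted elsewhere in this digest):

# Block limit in point 1: base + witness/2 <= 1.5 MB (witness at a 50% discount).
LIMIT = 1.5  # MB

def max_raw_size(witness_share):
    # total * ((1 - witness_share) + witness_share / 2) <= LIMIT
    return LIMIT / (1 - witness_share / 2)

print(max_raw_size(1.0))  # 3.0 MB: theoretical ceiling if a block were all witness
print(max_raw_size(0.6))  # ~2.14 MB: "network-upgraded" size at ~60% witness share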

Re: [bitcoin-dev] BIP proposal: Increase block size limit to 2 megabytes

2016-02-07 Thread jl2012--- via bitcoin-dev
You are making a very naïve assumption that miners are just looking for
profit for the next second. Instead, they would try to optimize their short-term
and long-term ROI. It is also well known that some miners would mine at
a loss, even if not for ideological reasons, if they believe that their action
is beneficial to the network and will provide long-term ROI. It happened
after the last halving in 2012. Without any immediate price appreciation,
the hashing rate decreased by less than 10%:

 

http://bitcoin.sipa.be/speed-ever.png

 

 

From: bitcoin-dev-boun...@lists.linuxfoundation.org
[mailto:bitcoin-dev-boun...@lists.linuxfoundation.org] On Behalf Of Jonathan
Toomim via bitcoin-dev
Sent: Monday, 8 February, 2016 01:11
To: Anthony Towns 
Cc: bitcoin-dev@lists.linuxfoundation.org
Subject: Re: [bitcoin-dev] BIP proposal: Increase block size limit to 2
megabytes

 

 

On Feb 7, 2016, at 7:19 AM, Anthony Towns via bitcoin-dev wrote:





The stated reasoning for 75% versus 95% is "because it gives "veto power"
to a single big solo miner or mining pool". But if a 20% miner wants to
"veto" the upgrade, with a 75% threshold, they could instead simply use
their hashpower to vote for an upgrade, but then not mine anything on
the new chain. At that point there'd be as little as 55% mining the new
2MB chain with 45% of hashpower remaining on the old chain. That'd be 18
minute blocks versus 22 minute blocks, which doesn't seem like much of
a difference in practice, and at that point hashpower could plausibly
end up switching almost entirely back to the original consensus rules
prior to the grace period ending.

 

Keep in mind that within a single difficulty adjustment period, the
difficulty of mining a block on either chain will be identical. Even if the
value of a 1MB branch coin is $100 and the hashrate on the 1 MB branch is
100 PH/s, and the value of a 2 MB branch coin is $101 and the hashrate on
the 2 MB branch is 1000 PH/s, the rational thing for a miner to do (for the
first adjustment period) is to mine on the 2 MB branch, because the miner
would earn 1% more on that branch.
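A small sketch of the comparison being made here (my illustration; the 25 BTC reward and the $100/$101 prices are taken from the example above):

# Within the first adjustment period both branches keep the pre-fork difficulty,
# so expected revenue per hash depends only on the coin price, not on how much
# hashrate each branch has.
PRE_FORK_DIFFICULTY = 1.0  # arbitrary units, identical on both branches
BLOCK_REWARD = 25.0        # BTC

def revenue_per_hash(price_usd):
    return price_usd * BLOCK_REWARD / PRE_FORK_DIFFICULTY

small = revenue_per_hash(100.0)  # 1 MB branch (100 PH/s in the example)
large = revenue_per_hash(101.0)  # 2 MB branch (1000 PH/s in the example)
print(large / small - 1)         # ~0.01: the 2 MB branch pays ~1% more per hash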

 

So you're assuming that 25% of the hashrate chooses to remain on the
minority version during the grace period, and that 20% chooses to switch
back to the minority side. The fork happens. One branch has 1 MB blocks
every 22 minutes, and the other branch has 2 MB blocks every 18 minutes. The
first branch cannot handle the pre-fork transaction volume, as it only has
45% of the capacity that it had pre-fork. The second one can, as it has 111%
of the pre-fork capacity. This makes the 1 MB branch much less usable than
the 2 MB branch, which in turn causes the market value of newly minted coins
on that branch to fall, which in turn causes miners to switch to the more
profitable 2MB branch. This exacerbates the usability difference, which
exacerbates the price difference, etc. Having two competing chains with
equal hashrate using the same PoW function and nearly equal features is not
a stable state. Positive feedback loops exist to make the vast majority of
the users and the hashrate join one side.

 

Basically, any miners who stick to the minority branch are going to lose a
lot of money.

 

 



Re: [bitcoin-dev] Hardfork bit BIP

2016-02-07 Thread jl2012--- via bitcoin-dev
From: Gavin Andresen [mailto:gavinandre...@gmail.com] 
Sent: Friday, 5 February, 2016 06:16
To: Gregory Maxwell 
Cc: jl2012 ; Bitcoin Dev 
Subject: Re: [bitcoin-dev] Hardfork bit BIP

>It is always possible I'm being dense, but I still don't understand how this 
>proposal makes a chain-forking situation better for anybody.

>If there are SPV clients that don't pay attention to versions in block 
>headers, then setting the block version negative doesn't directly help them, 
>they will ignore it in any case.

It is unfortunate that SPV clients are not following that. However, they SHOULD 
follow it. It becomes a self-fulfilling prophecy if we decide not to do it just 
because SPV clients are not following it.

>If the worry is full nodes that are not upgraded, then a block with a negative 
>version number will, indeed, fork them off the the chain, in exactly the same 
>way a block with new hard-forking consensus rules would. And with the same 
>consequences (if there is any hashpower not paying attention, then a worthless 
>minority chain might continue on with the old rules).

It will distinguish between a planned hardfork and an accidental hardfork, and 
full nodes may react differently. In particular, a planned unknown hardfork is a 
strong indication that the original chain has become the economic minority, and 
non-upgraded full nodes should stop accepting incoming txs immediately.

>If the worry is not-upgraded SPV clients connecting to the old, not-upgraded 
>full nodes, I don't see how this proposed BIP helps.

Same for not-upgraded full nodes following not-upgraded full nodes. Anyway, the 
header with enough PoW should still be propagated.

>I think a much better idea than this proposed BIP would be a BIP that 
>recommends that SPV clients to pay attention to block version numbers in the 
>headers that they download, and warn if there is a soft OR hard fork that they 
>don't know about.

A normal version number only suggests softforks, which are usually not a concern 
for SPV clients. An unknown hardfork is a completely different story, as the 
values of the forks are completely unknown.

>It is also a very good idea for SPV clients to pay attention to timestamps in 
>the block headers that they receive, and to warn if blocks were generated 
>either much slower or faster than statistically likely. Doing that (as Bitcoin 
>Core already does) will mitigate Sybil attacks in general.

Yes, they should.





Re: [bitcoin-dev] Pre-BIP Growth Soft-hardfork

2016-02-07 Thread jl2012--- via bitcoin-dev
This looks very interesting. The first time implementing it might be more
painful but that will make subsequent hardforks a lot easier.

Do you think it's good to include the median timestamp of the past 11 blocks
after the block height in the coinbase? That would make it easier to use as an
activation threshold for consensus rule changes.

Will the witness commitment also be treated as a merge-mined commitment?

It is also good to emphasize that it is the responsibility of miners, not
devs, to ensure that the hardfork is accepted by the supermajority of the
economy.


-Original Message-
From: bitcoin-dev-boun...@lists.linuxfoundation.org
[mailto:bitcoin-dev-boun...@lists.linuxfoundation.org] On Behalf Of Luke
Dashjr via bitcoin-dev
Sent: Sunday, 7 February, 2016 17:53
To: Bitcoin Dev 
Subject: [bitcoin-dev] Pre-BIP Growth Soft-hardfork

Here's a draft BIP I wrote almost a year ago. I'm going to look into
revising and completing it soon, and would welcome any suggestions for doing
so.

This hardfork BIP aims to accomplish a few important things:
- Finally deploying proper merge-mining as Satoshi suggested before he left.
- Expanding the nonce space miners can scan in-chip, avoiding expensive
  calculations on the host controller as blocks get larger.
- Provide a way to safely deploy hardforks without risking leaving old nodes
  vulnerable to attack.

https://github.com/luke-jr/bips/blob/bip-mmhf/bip-mmhf.mediawiki

Luke


Re: [bitcoin-dev] Segregated Witness BIPs

2015-12-27 Thread jl2012 via bitcoin-dev
The SW payment address format BIP is completely rewritten to introduce 2 
types of new addresses:


https://github.com/bitcoin/bips/pull/267

jl2012 via bitcoin-dev wrote on 2015-12-24 09:22:

The SW payment address format BIP draft is ready and is pending BIP
number assignment:
https://github.com/bitcoin/bips/pull/267

This is the 3rd BIP for segwit. The 2nd one for Peer Services is being
prepared by Eric Lombrozo

Eric Lombrozo via bitcoin-dev wrote on 2015-12-23 10:22:

I've been working with jl2012 on some SEGWIT BIPs based on earlier
discussions and Pieter Wuille's implementation. We're considering
submitting three separate BIPs:

CONSENSUS BIP: witness structures and how they're committed to blocks,
cost metrics and limits, the scripting system (witness programs), and
the soft fork mechanism.

PEER SERVICES BIP: relay message structures, witnesstx serialization,
and other issues pertaining to the p2p protocol such as IBD,
synchronization, tx and block propagation, etc...

APPLICATIONS BIP: scriptPubKey encoding formats and other wallet
interoperability concerns.

The Consensus BIP is submitted as a draft and is pending BIP number
assignment: https://github.com/bitcoin/bips/pull/265 [1]
The other two BIPS will be drafted soon.

---
Eric

Links:
--
[1] https://github.com/bitcoin/bips/pull/265



Re: [bitcoin-dev] Segregated Witness BIPs

2015-12-24 Thread jl2012 via bitcoin-dev
The SW payment address format BIP draft is ready and is pending BIP 
number assignment:

https://github.com/bitcoin/bips/pull/267

This is the 3rd BIP for segwit. The 2nd one for Peer Services is being 
prepared by Eric Lombrozo


Eric Lombrozo via bitcoin-dev wrote on 2015-12-23 10:22:

I've been working with jl2012 on some SEGWIT BIPs based on earlier
discussions and Pieter Wuille's implementation. We're considering
submitting three separate BIPs:

CONSENSUS BIP: witness structures and how they're committed to blocks,
cost metrics and limits, the scripting system (witness programs), and
the soft fork mechanism.

PEER SERVICES BIP: relay message structures, witnesstx serialization,
and other issues pertaining to the p2p protocol such as IBD,
synchronization, tx and block propagation, etc...

APPLICATIONS BIP: scriptPubKey encoding formats and other wallet
interoperability concerns.

The Consensus BIP is submitted as a draft and is pending BIP number
assignment: https://github.com/bitcoin/bips/pull/265 [1]
The other two BIPS will be drafted soon.

---
Eric

Links:
--
[1] https://github.com/bitcoin/bips/pull/265



[bitcoin-dev] A new payment address format for segregated witness or not?

2015-12-20 Thread jl2012 via bitcoin-dev
On the -dev IRC I asked the same question and people didn't seem to like it. 
I would like to elaborate further on this topic and consult merchants, 
exchanges, wallet devs, and users for their preferences.


Background:

People will be able to use segregated witness in 2 forms. They either 
put the witness program directly as the scriptPubKey, or hide the 
witness program in a P2SH address. These are referred to as "native SW" and 
"SW in P2SH" respectively.


Examples could be found in the draft BIP: 
https://github.com/jl2012/bips/blob/segwit/bip-segwit.mediawiki


As a tx malleability fix, native SW and SW in P2SH are equally good.

The SW in P2SH is better in terms of:
1. It allows payment from any Bitcoin reference client since version 0.6.0.
2. Slightly better privacy by obscuration, since people won't know whether it is a traditional P2SH or a SW tx before it is spent. I don't consider this important, since the type of tx will be revealed eventually, and it is irrelevant once native SW is more popular.


The SW in P2SH is worse in terms of:
1. It requires an additional push in the scriptSig, which is not prunable in transmission, and is counted as part of the core block size
2. It requires one more HASH160 operation than native SW
3. It provides 160-bit security, while native SW provides 256-bit security
4. Since it is less efficient, the tx fee is likely to be higher than for native SW (but still lower than for a non-SW tx)

---

The question: should we have a new payment address format for native SW?

The native SW address in my mind is basically the same as the existing P2PKH and P2SH addresses:

BASE58(address_version|witness_program|checksum), where checksum is the first 4 bytes of dSHA256(address_version|witness_program)


Why not a better checksum algorithm? Reusing the existing algorithm makes 
the implementation much easier and safer.
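A minimal sketch of the proposed encoding (my illustration; the actual address_version value is not specified here, so it is left as a parameter, and the alphabet is the standard Bitcoin Base58 alphabet):

import hashlib

B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58encode(data: bytes) -> str:
    n = int.from_bytes(data, "big")
    out = ""
    while n > 0:
        n, r = divmod(n, 58)
        out = B58[r] + out
    # leading zero bytes are encoded as '1', as in existing addresses
    return "1" * (len(data) - len(data.lstrip(b"\x00"))) + out

def native_sw_address(witness_program: bytes, address_version: int) -> str:
    payload = bytes([address_version]) + witness_program
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    return b58encode(payload + checksum)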


Pros for native SW address:
1. Many people and services are still using BASE58 addresses
2. Promote the use of native SW, which allows lower fees, instead of the less efficient SW in P2SH
3. Not all wallets and services support the payment protocol (BIP70)
4. Easy for wallets to implement
5. Even if a wallet wants to implement only SW in P2SH, it needs a new wallet format anyway, so there is not much extra cost to introduce a new address format.
6. Since SW is very flexible, this is very likely to be the last address format we will need to define.


Cons for native SW address:
1. Addresses are bad and should not be used anymore (some arguments could be found in BIP13)
2. The payment protocol is better
3. With SW in P2SH, it is not necessary to have a new address format
4. Depending on the length of the witness program, the address could be up to double the length of an existing address
5. Old wallets won't be able to pay to a new address (but no money can be lost this way)


--

So I'd like to suggest 2 proposals:

Proposal 1:
Define a native SW address format, while people can still use the payment protocol or SW in P2SH if they want

Proposal 2:
No new address format is defined. If people want to pay the lowest fee possible, they must use the payment protocol. Otherwise, they may use SW in P2SH


Since this topic is more relevant to user experience, in addition to 
core devs, I would also like to consult merchants, exchanges, wallet 
devs, and users for their preferences.



Re: [bitcoin-dev] On the security of softforks

2015-12-20 Thread jl2012 via bitcoin-dev

Rusty Russell via bitcoin-dev wrote on 2015-12-19 23:14:

Jonathan Toomim via bitcoin-dev writes:
On Dec 18, 2015, at 10:30 AM, Pieter Wuille via bitcoin-dev wrote:

1) The risk of an old full node wallet accepting a transaction that is invalid to the new rules.

The receiver wallet chooses what address/script to accept coins on.
They'll upgrade to the new softfork rules before creating an address
that depends on the softfork's features.

So, not a problem.



Mallory wants to defraud Bob with a 1 BTC payment for some beer. Bob
runs the old rules. Bob creates a p2pkh address for Mallory to
use. Mallory takes 1 BTC, and creates an invalid SegWit transaction
that Bob cannot properly validate and that pays into one of Mallory's
wallets. Mallory then immediately spends the unconfirmed transaction
into Bob's address. Bob sees what appears to be a valid transaction
chain which is not actually valid.


Pretty sure Bob's wallet will be looking for "OP_DUP OP_HASH160
 OP_EQUALVERIFY OP_CHECKSIG" scriptSig.  The SegWit-usable
outputs will (have to) look different, won't they?

Cheers,
Rusty.


I think he means Mallory is paying with an invalid segwit input, not 
output (there is no "invalid output" anyway). However, this is not an 
issue if Bob waits for a few confirmations.



[bitcoin-dev] Segregated witness softfork with moderate adoption has very small block size effect

2015-12-19 Thread jl2012 via bitcoin-dev
I have done some calculations on the effect of a SW softfork on the 
actual total block size.


Definitions:

Core block size (CBS): The block size as seen by a non-upgrading full 
node

Witness size (WS): The total size of witness in a block
Total block size (TBS): CBS + WS
Witness discount (WD): A discount factor for witness for calculation of 
VBS (1 = no discount)

Virtual block size (VBS): CBS + (WS * WD)
Witness adoption (WA): Proportion of new format transactions among all 
transactions

Prunable ratio (PR): Proportion of signature data size in a transaction

With some transformation it could be shown that:

 TBS = CBS / (1 - WA * PR) = VBS / (1 - WA * PR * (1 - WD))

sipa suggested a WD of 25%.

The PR heavily depends on the transaction script type and input-output 
ratio. For example, the PR of 1-in 2-out P2PKH and 1-in 1-out 2-of-2 
multisig P2SH are about 47% and 72% respectively. According to sipa's 
presentation, the current average PR on the blockchain is about 60%.
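The rough byte accounting behind the 47% and 72% figures, assuming 72-byte signatures (including the sighash byte) and compressed public keys (my approximation; exact sizes vary slightly):

def tx_size(script_sig_sizes, output_script_sizes):
    ins = sum(36 + 1 + s + 4 for s in script_sig_sizes)   # outpoint + length + scriptSig + sequence
    outs = sum(8 + 1 + s for s in output_script_sizes)    # value + length + scriptPubKey
    return 4 + 1 + ins + 1 + outs + 4                     # version + counts + locktime

def prunable_ratio(script_sig_sizes, output_script_sizes):
    return sum(script_sig_sizes) / tx_size(script_sig_sizes, output_script_sizes)

p2pkh_scriptsig = 1 + 72 + 1 + 33        # sig push + pubkey push = 107 bytes
p2sh_2of2_scriptsig = 1 + 73 + 73 + 72   # OP_0, two sig pushes, redeemscript push = 219 bytes

print(prunable_ratio([p2pkh_scriptsig], [25, 25]))   # 1-in 2-out P2PKH: ~0.47
print(prunable_ratio([p2sh_2of2_scriptsig], [23]))   # 1-in 1-out 2-of-2 P2SH: ~0.72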


Assuming WD=25% and PR=60%, the MAX TBS with different MAX VBS and WA is 
listed at:


http://i.imgur.com/4bgTMRO.png

The highlight indicates whether the CBS or VBS is the limiting factor.

With moderate SW adoption at 40-60%, the total block size is 1.32-1.56MB 
when MAX VBS is 1.25MB, and 1.22-1.37MB when MAX VBS is 1.00MB.
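The calculation behind these ranges, taking the maximum TBS as the smaller of the two expressions above and assuming the legacy 1MB core block limit is the other binding constraint (my reading of the linked table):

MAX_CBS = 1.0        # MB, legacy limit seen by non-upgraded nodes
WD, PR = 0.25, 0.60

def max_tbs(max_vbs, wa):
    from_cbs = MAX_CBS / (1 - wa * PR)
    from_vbs = max_vbs / (1 - wa * PR * (1 - WD))
    return min(from_cbs, from_vbs)

for max_vbs in (1.25, 1.00):
    print(max_vbs, [round(max_tbs(max_vbs, wa), 2) for wa in (0.4, 0.5, 0.6)])
# 1.25 -> [1.32, 1.43, 1.56]   (the 1.32-1.56MB range)
# 1.00 -> [1.22, 1.29, 1.37]   (the 1.22-1.37MB range)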


P2SH was introduced 3.5 years ago and only about 10% of bitcoins are 
stored this way (I can't find the proportion of existing P2SH addresses). A 
1-year adoption rate of 40% for segwit is clearly over-optimistic unless 
the tx fee becomes really high.


(btw the PR of 60% may also be over-optimistic, as using SW nested in 
P2SH will decrease the PR, and therefore TBS becomes even lower)


I am not convinced that SW softfork should be the *only* short term 
scalability solution







Re: [bitcoin-dev] We need to fix the block withholding attack

2015-12-19 Thread jl2012 via bitcoin-dev
After the meeting I found a softfork solution. It is very inefficient and 
I am leaving it here just for the record.


1. In the first output of the second transaction of a block, mining pool 
will commit a random nonce with an OP_RETURN.


2. Mine as normal. When a block is found, the hash is concatenated with 
the committed random nonce and hashed.


3. The resulting hash must be smaller than 2 ^ (256 - 1/64) or the block 
is invalid. That means about 1% of blocks are discarded.


4. For each difficulty retarget, the secondary target is decreased by a factor of 2^(1/64).

5. After 546096 blocks or 10 years, the secondary target becomes 2^252. 
Therefore only 1 in 16 hashes returned by hashers is really valid. 
This should make the detection of block withholding attacks much easier.
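A sketch of the secondary check in steps 2-4 (my illustration; the hash function used for the secondary check, the serialization of the concatenation, and how retargets since activation are counted are all assumptions here, not specified in the post):

import hashlib

def secondary_target(retargets_since_activation):
    # Starts at 2^(256 - 1/64) and shrinks by a factor of 2^(1/64) per retarget.
    exponent = 256 - (retargets_since_activation + 1) / 64
    return int(2 ** exponent)   # float precision is fine for a sketch

def passes_secondary(block_hash: bytes, committed_nonce: bytes, retargets: int) -> bool:
    h = hashlib.sha256(block_hash + committed_nonce).digest()
    return int.from_bytes(h, "big") < secondary_target(retargets)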


All miners have to sacrifice 1% of their reward for 10 years. Confirmations will 
also be 1% slower than they should be.


If a node (full or SPV) is not updated, it becomes more vulnerable as an 
attacker could mine a chain much faster without following the new rules. 
But this is still a softfork, by definition.


---

ok, back to topic. Do you mean this? 
http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2012-June/001506.html




Peter Todd via bitcoin-dev wrote on 2015-12-19 13:42:

At the recent Scaling Bitcoin conference in Hong Kong we had a Chatham
House rules workshop session attended by representatives of a
supermajority of the Bitcoin hashing power.

One of the issues raised by the pools present was block withholding
attacks, which they said are a real issue for them. In particular, pools
are receiving legitimate threats by bad actors threatening to use block
withholding attacks against them. Pools offering their services to the
general public without anti-privacy Know-Your-Customer have little
defense against such attacks, which in turn is a threat to the
decentralization of hashing power: without pools only fairly large
hashing power installations are profitable, as variance is a very real
business expense. P2Pool is often brought up as a replacement for pools,
but it itself is still relatively vulnerable to block withholding, and
in any case has many other vulnerabilities and technical issues that
have prevented widespread adoption of P2Pool.

Fixing block withholding is relatively simple, but (so far) requires an
SPV-visible hardfork (Luke-Jr's two-stage target mechanism). We should
do this hard-fork in conjunction with any blocksize increase, which will
have the desirable side effect of clearly showing consent by the entire
ecosystem, SPV clients included.


Note that Ittay Eyal and Emin Gun Sirer have argued(1) that block
withholding attacks are a good thing, as in their model they can be used
by small pools against larger pools, disincentivising large pools.
However this argument is academic and not applicable to the real world,
as a much simpler defense against block withholding attacks is to use
anti-privacy KYC and the legal system combined with the variety of
withholding detection mechanisms only practical for large pools.
Equally, large hashing power installations - a dangerous thing for
decentralization - have no block withholding attack vulnerabilities.

1) http://hackingdistributed.com/2014/12/03/the-miners-dilemma/



Re: [bitcoin-dev] We need to fix the block withholding attack

2015-12-19 Thread jl2012 via bitcoin-dev

Chris Priest via bitcoin-dev wrote on 2015-12-19 22:34:

Block withholding attacks are only possible if you have a majority of
hashpower. If you only have 20% hashpower, you can't do this attack.
Currently, this attack is only a theoretical attack, as the ones with
all the hashpower today are not engaging in this behavior. Even if
someone who had a lot of hashpower decided to pull off this attack,
they wouldn't be able to disrupt much. Once that time comes, then I
think this problem should be solved, until then it should be a low
priority. There are more important things to work on in the meantime.



This is not true. For a pool with 5% of the total hash rate, an attacker only
needs 0.5% of the hash rate to sabotage 10% of their income. That's already
enough to kill the pool.
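The arithmetic behind that claim (my illustration): the pool's block income scales with its honest hashrate, while payouts are diluted across the attacker's shares as well.

def income_loss(pool_share, attacker_share):
    # pool_share, attacker_share: fractions of total network hashrate.
    # Blocks found ~ honest hashrate only; payouts split over honest + attacker shares.
    return 1 - pool_share / (pool_share + attacker_share)

print(income_loss(0.05, 0.005))  # ~0.09, i.e. roughly a 10% income cut for the pool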



Re: [bitcoin-dev] We need to fix the block withholding attack

2015-12-19 Thread jl2012 via bitcoin-dev

Chris Priest wrote on 2015-12-19 22:47:

On 12/19/15, jl2012  wrote:

Chris Priest via bitcoin-dev wrote on 2015-12-19 22:34:

Block withholding attacks are only possible if you have a majority of
hashpower. If you only have 20% hashpower, you can't do this attack.
Currently, this attack is only a theoretical attack, as the ones with
all the hashpower today are not engaging in this behavior. Even if
someone who had a lot of hashpower decided to pull off this attack,
they wouldn't be able to disrupt much. Once that time comes, then I
think this problem should be solved, until then it should be a low
priority. There are more important things to work on in the meantime.



This is not true. For a pool with 5% total hash rate, an attacker only
needs 0.5% of hash rate to sabotage 10% of their income. It's already
enough to kill the pool




This begs the question: If this is such a devastating attack, then why
hasn't this attack brought down every pool in existence? As far as I'm
aware, there are many pools in operation despite this possibility.


It did happen: 
https://www.reddit.com/r/Bitcoin/comments/28242v/eligius_falls_victim_to_blocksolution_withholding/


The worst thing is that the proof of such an attack is probabilistic, not
deterministic.

A smarter attacker may even pretend to be many small miners, making it
even more difficult or impossible to prove who is attacking.


Then shouldn't this be something the pool deals with, not the bitcoin 
protocol?


The only solution is to ask for KYC registration, unless someone can propose
a cryptographic solution that does not require a consensus fork.



Re: [bitcoin-dev] Segregated Witness in the context of Scaling Bitcoin

2015-12-17 Thread jl2012 via bitcoin-dev

This is not correct.

As only about 1/3 of nodes support BIP65 now, would you consider CLTV txs 
to be less secure than others? I don't think so, since one invalid CLTV tx 
will make the whole block invalid. Having more nodes fully validating 
non-CLTV txs won't make them any safer. The same logic also applies to a 
SW softfork.


You may argue that a softfork would make the network as a whole less 
secure, as old nodes have to trust new nodes. However, the security of 
all content in the same block must be the same, by definition.


Anyway, I support a SW softfork at the beginning, and eventually (~2 
years) moving to a hardfork with a higher block size limit and a better 
commitment structure.


Jeff Garzik via bitcoin-dev wrote on 2015-12-17 13:27:



Illustration:  If SW is deployed via soft fork, the count of nodes
that validate witness data is significantly lower than the count of
nodes that validate non-witness data.  Soft forks are not trustless
operation, they depend on miner trust, slowly eroding the trustless
validation of older nodes over time.

Higher security in one data area versus another produces another
economic value distinction between the two goods in the basket, and
creates a "pay more for higher security in core block, pay less for
lower security in witness" dynamic.

This economic distinction is not present if SW is deployed via hard
fork.




Re: [bitcoin-dev] Segregated Witness in the context of Scaling Bitcoin

2015-12-17 Thread jl2012 via bitcoin-dev
I know my reply is a long one but please read it before you hit send. I 
have 2 proposals: fast BIP102 + slow SWSF, and fast SWSF only. I guess no 
one here is arguing for not doing segwit; it is at the top of my 
wish list. My main argument (maybe also Jeff's) is that segwit is too 
complicated and may not be a viable short-term solution (for the 
reasons I listed that I don't want to repeat).


I also don't agree with you that BIP102 is *strictly* inferior to 
segwit. We have never had a complex softfork like segwit, but we did have a 
successful simple hardfork (BIP50), and BIP102 is very simple. (Details 
in my last post; I'm not going to repeat them.)


Mark Friedenbach wrote on 2015-12-17 04:33:

There are many reasons to support segwit beyond it being a soft-fork.
For example:

* the limitation of non-witness data to no more than 1MB makes the
quadratic scaling costs in large transaction validation no worse than
they currently are;
* redeem scripts in witness use a more accurate cost accounting than
non-witness data (further improvements to this beyond what Pieter has
implemented are possible); and
* segwit provides features (e.g. opt-in malleability protection) which
are required by higher-level scaling solutions.

With that in mind I really don't understand the viewpoint that it
would be better to engage a strictly inferior proposal such as a
simple adjustment of the block size to 2MB.




Re: [bitcoin-dev] Block size: It's economics & user preparation & moral hazard

2015-12-16 Thread jl2012 via bitcoin-dev
I would also like to summarize my observation and thoughts after the 
Hong Kong workshop.


1. I'm so glad that I had this opportunity to meet so many smart 
developers who are dedicated to making Bitcoin better. Regular conferences 
like this are very important for a young project, and particularly 
important for Bitcoin, with consensus as its core value. I hope such a 
conference could be held at least once every 2 years in Hong Kong, 
which is visa-friendly for most people in both the East and the West.


2. I think some consensus has emerged at/after the conference. There is 
no doubt that segregated witness will be implemented. For block size, I 
believe 2MB as the first step is accepted by the super majority of 
miners, and is generally acceptable / tolerable for devs.


3. Chinese miners are requesting consensus among devs nicely, instead of 
using their majority hashing power to threaten the community. However, 
if I were allowed to speak for them, I think 2MB is what they really 
want, and they believe it is in the best interest of themselves and the 
whole community.


4. In the miners' round table on the second day, one of the devs 
mentioned that he didn't want to be seen as the decision maker of 
Bitcoin. On the other hand, Chinese miners repeatedly mentioned that 
they want several concrete proposals from devs from which they could choose. 
I see no contradiction between these 2 viewpoints.


Below are some of my personal views:

5. Are we going to have a "Fee Event" / "Economic Change Event" in 2-6 
months, as Jeff mentioned? Frankly speaking, I don't know. As fees 
start to increase, spammers will be squeezed out first --- which could be 
a good thing. However, I have no idea how many txs on the blockchain are 
spam. We also need to consider the effect of the halving in July, which may 
lead to a speculative bubble and huge legitimate tx volume.


6. I believe we should avoid a radical "Economic Change Event" at least 
in the next halving cycle, as Bitcoin was designed to bootstrap 
adoption with a high mining reward in the beginning. For this reason, I 
support an early and conservative increase, such as BIP102 or 2-4-8. 2MB 
is accepted by most people and it's better than nothing for BIP101 
proponents. By "early" I mean effective by May, at least 2 months 
before the halving.


7. Segregated witness must be done. However, it can't replace a 
short-term block size hardfork for the following reasons:
(a) A SW softfork does not allow higher volume if users are not upgrading. 
In order to bootstrap the new tx type, we may need the help of 
altruistic miners to provide a fee discount for SW txs.
(b) In terms of block space saving, a SW softfork is most efficient for 
multisig txs, which are still very uncommon.
(c) My most optimistic guess is SW will be ready in 6 months, which will 
be very close to the halving and a potential tx volume burst. And it may not 
be done in 2016, as it involves not only consensus code, but also 
changes in the p2p protocol and wallet design.


8. Duplex payment channels / the Lightning Network may be viable solutions. 
However, they won't be fully functional until SW is done, so they are 
irrelevant to this discussion.


9. No matter what is going to be done / not done, I believe we should 
now have a clear road map and schedule for the community: a short-term 
hardfork or not? The timeline of SW? It is bad to leave everything 
uncertain so that people can't prepare well for any potential radical 
changes.


10. Finally, I hope this discussion remains educated and evidence-based, 
and does not go in circles.



Re: [bitcoin-dev] Segregated Witness in the context of Scaling Bitcoin

2015-12-16 Thread jl2012 via bitcoin-dev

There are at least 2 proposals on the table:

1. SWSF (segwit soft fork) with a 1MB virtual block limit, approximately 
equal to a 2MB actual limit


2. BIP102: 2MB actual limit

Since the actual limits for both proposals are approximately the same, 
it is not a determining factor in this discussion


The biggest advantage of SWSF is its softfork nature. However, its 
complexity is not comparable to that of any previous softfork we have had. 
It is reasonable to doubt whether it could be ready in 6 months.


For BIP102, although it is a hardfork, it is a very simple one and could 
be deployed with ISM in less than a month. It is even simpler than 
BIP34, 66, and 65.


So we have a very complicated softfork vs. a very simple hardfork. The 
only thing that makes BIP102 not easy is the fact that it's a hardfork.


The major criticism of a hardfork is that it requires everyone to upgrade. Is 
that really a big problem?


First of all, a hardfork is not totally unknown territory. BIP50 was a 
hardfork. The accident happened on 13 March 2013. Bitcoind 0.8.1 was 
released on 18 March, which gave only 2 months of grace period for 
everyone to upgrade. The actual hardfork happened on 16 August. 
Everything was completed in 5 months without any panic or chaos. This 
experience strongly suggests that 5 months is already safe for a simple 
hardfork. (In terms of simplicity, I believe BIP102 is even simpler than 
BIP50.)


Another experience is from BIP66. 0.10.0 was released on 16 Feb 
2015, exactly 10 months ago. I analyzed the data on 
https://bitnodes.21.co and found that 4600 out of 5090 nodes (90.4%) 
indicate BIP66 support. Considering this is a softfork, I consider this 
very good adoption already.


With the evidence from BIP50 and BIP66, I believe a 5-month 
pre-announcement is good enough for BIP102. As the vast majority of 
miners have declared their support for a 2MB solution, the legacy 1MB 
fork will certainly be abandoned and no one will get robbed.



My primary proposal:

Now - 15 Jan 2016: formally consult the major miners and merchants on whether 
they support a one-off rise to 2MB. I consider approximately 80% of 
mining power and 80% of trading volume to be good enough.


16 - 31 Jan 2016: release 0.11.3 with BIP102 with ISM vote requiring 80% 
of hashing power


1 Jun 2016: the first day a 2MB block may be allowed

Before 31 Dec 2016: release SWSF



My secondary proposal:

Now: Work on SWSF in a turbo mode and have a deadline of 1 Jun 2016

1 Jun 2016: release SWSF

What if the deadline is not met? Maybe push an urgent BIP102 if 
things become really bad.



In any case, I hope a clear decision and road map could be made now. 
This topic has been discussed to death. We are just bringing further 
uncertainty if we keep discussing.



Matt Corallo via bitcoin-dev wrote on 2015-12-16 15:50:

A large part of your argument is that SW will take longer to deploy
than a hard fork, but I completely disagree. Though I do not agree
with some people claiming we can deploy SW significantly faster than a
hard fork, once the code is ready (probably a six month affair) we can
get it deployed very quickly. It's true the ecosystem may take some
time to upgrade, but I see that as a feature, not a bug - we can build
up some fee pressure with an immediate release valve available for
people to use if they want to pay fewer fees.

 On the other hand, a hard fork, while simpler for the ecosystem to
upgrade to, is a 1-2 year affair (after the code is shipped, so at
least 1.5-2.5 from today if we all put off heads down and work). One
thing that has concerned me greatly through this whole debate is how
quickly people seem to think we can roll out a hard fork. Go look at
the distribution of node versions on the network today and work
backwards to get nearly every node upgraded... Even with a year
between fork-version-release and fork-activation, we'd still kill a
bunch of nodes and instead of reducing their security model, lead them
to be outright robbed.





Re: [bitcoin-dev] Segregated Witness features wish list

2015-12-13 Thread jl2012--- via bitcoin-dev


I'm trying to list the minimal consensus rule changes needed for a segwit 
softfork. The list does not cover changes in non-consensus-critical 
behavior, such as relay of witness data.


1. OP_NOP4 is renamed as OP_SEGWIT
2. A script with OP_SEGWIT must fail if the scriptSig is not completely 
empty
3. If OP_SEGWIT is used in the scriptPubKey, it must be the only and the 
last OP code in the scriptPubKey, or the script must fail
4. The OP_SEGWIT must be preceded by exactly one data push (the 
"serialized script") with at least one byte, or the script must fail
5. The most significant byte of serialized script is the version byte, 
an unsigned number

6. If the version byte is 0x00, the script must fail
7. If the version byte is 0x02 to 0xff, the rest of the serialized 
script is ignored and the output is spendable with any form of witness 
(even if the witness contains something invalid in the current script 
system, e.g. OP_RETURN)

8. If the version byte is 0x01 (see the sketch after rule 11),
8a. the rest of the serialized script is deserialized and interpreted as the scriptPubKey.
8b. the witness is interpreted as the scriptSig.
8c. the script runs with the existing rules (including P2SH)
9. If the script fails when OP_SEGWIT is interpreted as a NOP, the script must fail. However, this is guaranteed by rules 2, 3, 4, and 6, so no additional check is needed.

10. The calculation of Merkle root in the block header remains unchanged
11. The witness commitment is placed somewhere in the block, either in 
coinbase or an OP_RETURN output in a specific tx
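A minimal sketch of how rules 5-8 could be checked (my illustration; eval_script stands in for the existing script interpreter and is just a placeholder):

def verify_segwit_output(serialized_script: bytes, witness, eval_script) -> bool:
    if len(serialized_script) < 1:
        return False                 # rule 4: the push must have at least one byte
    version, body = serialized_script[0], serialized_script[1:]
    if version == 0x00:
        return False                 # rule 6
    if version >= 0x02:
        return True                  # rule 7: witness content is not constrained
    # rule 8 (version 0x01): body is treated as the scriptPubKey and the
    # witness as the scriptSig, evaluated under the existing rules (incl. P2SH).
    return eval_script(script_sig=witness, script_pubkey=body)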


Format of the witness commitment:
The witness commitment could be as simple as a hash tree of all witnesses 
in a block. However, this is bad for the further development of a sum tree for 
compact SPV fraud proofs, as we would have to maintain one more tree in the 
future. Even if we are not going to implement any sum checking in the first 
version of segwit, it's better to make it easier for future softforks. 
(credit: gmaxwell)
12. The block should indicate how many sum criteria there are by 
committing the number following the witness commitment
13. The witness commitment is a hash-sum tree with the number of sum 
criteria committed in rule 12
14. Each sum criterion is a fixed 8-byte signed number (negative values 
are allowed for uses like counting delta-UTXO; 8 bytes are needed for a fee 
commitment; multiple smaller criteria may share an 8-byte slot, as long 
as they do not allow negative values)
15. Nodes will ignore the sum criteria that they do not understand, as 
long as the sum is correctly calculated
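One plausible node combiner for such a hash-sum tree (rules 13-14); the hash function and serialization below are my assumptions, since the post does not specify them:

import hashlib

def combine(left, right):
    # left/right are (hash, sums) pairs; each sums list holds one value per
    # sum criterion (fee, delta-UTXO count, ...), serialized as 8-byte signed ints.
    lhash, lsums = left
    rhash, rsums = right
    sums = [a + b for a, b in zip(lsums, rsums)]
    payload = lhash + rhash + b"".join(s.to_bytes(8, "little", signed=True) for s in sums)
    return hashlib.sha256(hashlib.sha256(payload).digest()).digest(), sums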


Size limit:
16. We need to determine the size limit of the witness
17. We also need to have an upper limit on the number of sum criteria, 
or a malicious miner may find a block with many unknown sum criteria and 
feed an unlimited amount of garbage to other nodes.


All other functions I mentioned in my wish list could be softforked 
into this system later.


To mitigate the risk described in rule 17, we may ask miners to vote for 
an increase (but not a decrease) in the number of sum criteria. Initially 
there should be 0 sum criteria. If we would like to introduce a new 
criterion, miners will support the proposal by setting the number of sum 
criteria to 1. However, until 95% of miners support the increase, all 
values of the extra sum criterion must be 0. Therefore, even though a malicious 
miner may declare any number of sum criteria, the criteria unknown to 
the network must be 0, and no extra data is needed to construct the 
block. This voting mechanism could be a softfork over rules 12 and 13, 
and is not necessary in the initial segwit deployment.



Gregory Maxwell wrote on 2015-12-10 04:51:

On Thu, Dec 10, 2015 at 6:47 AM, jl2012--- via bitcoin-dev
<bitcoin-dev@lists.linuxfoundation.org> wrote:
4. Sum of fee, sigopcount, size etc as part of the witness hash tree: 
for


I should have also commented on this: the block can indicate how many
sum criteria there are; and then additional ones could be soft-forked
in. Haven't tried implementing it yet, but there you go. :)




Re: [bitcoin-dev] Forget dormant UTXOs without confiscating bitcoin

2015-12-13 Thread jl2012--- via bitcoin-dev


On Mon, Dec 14, 2015 at 12:14 AM, Danny Thorpe wrote:
What is the current behavior / cost that this proposal is trying to 
avoid? Are ancient utxos required to be kept in memory always in a 
fully validating node, or can ancient utxos get pushed out of memory 
like a normal LRU caching db?


I don't see why it must be kept in memory. But storage is still a 
problem. With the 8-year limit and a fixed max block size, it indirectly 
sets an upper limit on the size of the UTXO set.



Chris Priest via bitcoin-dev wrote:

This isn't going to kill bitcoin, but it won't make it any better.


Do you believe that thousands of volunteer full nodes are obliged to 
store a UTXO record just because someone paid US$0.01 to an anonymous 
miner 100 years ago? That sounds insanely cheap, doesn't it? My proposal (or 
a similar proposal by Peter Todd) is meant to solve this problem. Many 
commercial banks have a dormancy threshold of less than 8 years, so I believe 
it is a balanced choice.


Back to the topic, I would like to further elaborate my proposal.

We have 3 types of full nodes:

Archive nodes: full nodes that store the whole blockchain
Full UTXO nodes: full nodes that fully store the latest UTXO state, but 
not the raw blockchain
Lite UTXO nodes: full nodes that store only UTXOs created in the past 
420,000 blocks


Currently, if one holds nothing but a private key, he must consult 
either an archive node or a full UTXO node for the latest UTXO state to 
spend his coin. We currently do not have any lite UTXO nodes, and such a 
node would not work properly beyond block 420,000.


With the softfork I described in my original post, if the UTXO was 
created within the last 420,000 blocks, the key holder may consult any 
type of full node, including a lite UTXO node, to create the 
transaction.


If the UTXO has been confirmed by more than 420,000 blocks, a lite UTXO 
node obviously can't provide the necessary information to spend the 
coin. However, not even a full UTXO node may do so. A full UTXO node 
could tell the position of the UTXO in the blockchain, but it can't provide 
all the information required by my specification. Only an archive node 
may do so.


What extra information is needed?

(1) If your UTXO was generated in block Y, you first need to know the 
TXO state (spent / unspent) of all outputs of block Y as of block (Y + 
420,000). Only UTXOs at that time are relevant.

(2) You also need to know if there was any spending of any block Y UTXOs 
after block (Y + 420,000).


It is not possible to construct the membership proof I require without 
this information. It is designed this way so that lite UTXO nodes 
won't need to store any dormant UTXO records: not even the hashes of 
individual dormant UTXO records. If the blockchain grows insanely 
big, it may take days or weeks to retrieve the records. However, I don't 
think this is relevant, as one has already left his coins dormant for >8 
years. Actually, you don't even need the full blockchain. For (1), all 
you need is the 420,000 blocks from Y to Y+420,000 minus any witness data, 
as you don't need to do any validation. For (2), you just need the 
coinbases from Y+420,001 to the present, where any spending would have been 
committed, and retrieve a full block only if a spending is found.


So the Bitcoin Bank (miners) is not going to shred your record and 
confiscate your money. Instead, the Bank throws your record into the 
garage (raw blockchain). You can search for your record by yourself, or 
employ someone (an archive node) to search for it for you. In any case it 
incurs costs. But as thousands of bankers have kept your record on their 
limited desk space for 8 years for free (though one of them might have 
received a fraction of a penny from you), you shouldn't complain on any 
moral, technical, or legal grounds. And no matter what users say, I 
believe something like this will happen when miners and full nodes can't 
handle the UTXO set.


I'd like to see more efficient proposals that achieve the same goals.

p.s. there were some typos in my original. The second sentence of the 
second paragraph should read: "For every block X+420,000, it will 
commit to a hash for all UTXOs generated in block X."


___
bitcoin-dev mailing list

Re: [bitcoin-dev] Segregated Witness features wish list

2015-12-13 Thread jl2012--- via bitcoin-dev


Pieter Wuille wrote on 2015-12-13 13:07:


The use of a NOP opcode to indicate a witness script was something I
considered at first too, but it's not really needed. You wouldn't be
able to use that opcode in any place a normal opcode could occur, as
it needs to be able to inspect the full scriptSig (rather than just
its resulting stack) anyway. So both in practice and conceptually it
is only really working as a template that gets assigned a special
meaning (like P2SH did). We don't need an opcode for that, and instead
we could say that any scriptPubKey (or redeemscript) that consists of
a single push is a witness program.

5. The most significant byte of serialized script is the version byte, an 
unsigned number
6. If the version byte is 0x00, the script must fail


What is that good for?


Just to make sure a script like OP_0 OP_SEGWIT will fail.

Anyway, your design may be better so forget it

7. If the version byte is 0x02 to 0xff, the rest of the serialized script is 
ignored and the output is spendable with any form of witness (even if the 
witness contains something invalid in the current script system, e.g. 
OP_RETURN)


Do you mean the scriptPubKey itself, or the script that follows after
the version byte?
* The scriptPubKey itself: that's in contradiction with your rule 4,
as segwit scripts are by definition only a push (+ opcode), so they
can't be an OP_RETURN.
* The script after the version byte: agree - though it doesn't
actually need to be a script at all even (see further).


I am not referring to the serialized script, but the witness. Basically,
it doesn't care what the content looks like.



It is useful however to allow segwit inside P2SH


Agree


So let me summarize by giving an equivalent to your list above,
reflecting how my current prototype works:
A) A scriptPubKey or P2SH redeemscript that consists of a single push
of 2 to 41 bytes gets a new special meaning, and the byte vector
pushed by it is called the witness program.


Why 41 bytes? Do you expect all witness programs to be P2SH-like?


The program
must not fail and result in a single TRUE on the stack, and nothing
else (to prevent stuffing the witness with pointless data during relay
of transactions).


Could we just implement this as a standardness rule? It is always possible
to stuff the scriptSig with pointless data so I don't think it's a new
attack vector. What if we want to include the height and tx index of
the input for compact fraud proofs? Such a fraud proof should not be an
opt-in function and should not depend on the version byte

For the same reason, we should also allow traditional txs to have data
in the witness field, for any potential softfork upgrade

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] Forget dormant UTXOs without confiscating bitcoin

2015-12-12 Thread jl2012--- via bitcoin-dev
It is a common practice in commercial banks that a dormant account might 
be confiscated. Confiscating or deleting dormant UTXOs might be too 
controversial, but allowing the UTXOs set growing without any limit 
might not be a sustainable option. People lose their private keys. 
People do stupid things like sending bitcoin to 1BitcoinEater. We 
shouldn’t be obliged to store everything permanently. This is my 
proposal:


Dormant UTXOs are those UTXOs with 420,000 confirmations. In every block 
X after 420,000, it will commit to a hash for all UTXOs generated in 
block X-420,000. The UTXOs are first serialized into the form: 
txid|index|value|scriptPubKey, then a sorted Merkle hash is calculated. 
After some confirmations, nodes may safely delete the UTXO records of 
block X permanently.


If a user is trying to redeem a dormant UTXO, in addition to the signature, 
they have to provide the scriptPubKey, height (X), and UTXO value as 
part of the witness. They also need to provide the Merkle path to the 
dormant UTXO commitment.


To confirm this tx, the miner will calculate a new Merkle hash for the 
block X, with the hash of the spent UTXO replaced by 1, and commit the 
hash to the current block. All full nodes will keep an index of latest 
dormant UTXO commitments so double spending is not possible. (a 
"meta-UTXO set")


If all dormant UTXOs under a Merkle branch are spent, hash of the branch 
will become 1. If all dormant UTXOs in a block are spent, the record for 
this block could be forgotten. Full nodes do not need to remember which 
particular UTXO is spent or not, since any person trying to redeem a 
dormant UTXO has to provide such information.
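
As an illustration only, here is a Python sketch of how such a commitment 
could be computed and updated (the exact serialization, padding rule, and 
use of double-SHA256 are my assumptions, not part of this proposal):

import hashlib

def dsha256(b):
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

SPENT = (1).to_bytes(32, 'big')   # stand-in for the "1" replacing a spent hash

def leaf_hash(txid, index, value, script_pubkey):
    # UTXO serialized as txid|index|value|scriptPubKey (field encodings assumed)
    return dsha256(txid + index.to_bytes(4, 'little') +
                   value.to_bytes(8, 'little') + script_pubkey)

def commitment(leaves):
    # 'leaves' is the sorted list of leaf hashes, with SPENT substituted for
    # any UTXO that has already been redeemed
    layer = list(leaves)
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])            # pad odd layers (assumption)
        nxt = []
        for i in range(0, len(layer), 2):
            l, r = layer[i], layer[i + 1]
            # a branch whose children are all spent collapses to "1" itself
            nxt.append(SPENT if l == SPENT and r == SPENT else dsha256(l + r))
        layer = nxt
    return layer[0] if layer else SPENT

# building the original commitment for block X:
#   root = commitment(sorted(leaf_hash(*u) for u in utxos_of_block_X))
# spending one dormant UTXO: replace its leaf with SPENT and recompute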


It becomes the responsibility of dormant coin holders to scan the 
blockchain for the current status of the UTXO commitment for their coin. 
They may also need to pay extra fee for the increased tx size.


This is a softfork if there is no hash collision but this is a 
fundamental assumption in Bitcoin anyway. The proposal also works 
without segregated witness, just by replacing "witness" with "scriptSig"


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] Impacts of Segregated Witness softfork

2015-12-09 Thread jl2012--- via bitcoin-dev
Although the plan is to implement SW with softfork, I think many 
important (but non-consensus critical) components of the network would 
be broken and many things have to be redefined.


1. Definition of "Transaction ID". Currently, "Transaction ID" is simply 
a hash of a tx. With SW, we may need to deal with 2 or 3 IDs for each 
tx. Firstly we have the "backward-compatible txid" (bctxid), which has 
exactly the same meaning of the original txid. We also have a "witness 
ID" (wid), which is the hash of the witness. And finally we may need a 
"global txid" (gtxid), which is a hash of bctxid|wid. A gtxid is needed 
mainly for the relay of txs between full nodes. bctxid and wid are 
consensus critical while gtxid is for relay network only.


2. IBLT / Bitcoin relay network: As the "backward-compatible txid" 
defines only part of a tx, any relay protocols between full nodes have 
to use the "global txid" to identify a tx. Malleability attack targeting 
relay network is still possible as the witness is malleable.


3. getblocktemplate has to be upgraded to deal with witness data and 
witness IDs. (Stratum seems to be not affected? I'm not sure)


4. Protocols relying on the coinbase tx (e.g. P2Pool, merged mining): 
depending on the location of the witness commitment, these protocols may be 
broken.


Feel free to correct me and add more to the list.
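
For point 1, here is a minimal sketch of how the three IDs could relate 
(plain concatenation and double-SHA256 are assumptions here; the real 
encoding would have to be fixed by the spec):

import hashlib

def dsha256(b):
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def global_txid(bctxid, wid):
    # gtxid = hash of bctxid|wid
    return dsha256(bctxid + wid)

# bctxid stays the pre-segwit txid (hash of the tx without witness data),
# wid is the hash of the witness; a tx with no witness could simply reuse
# bctxid, or use a wid of 32 zero bytes (also an assumption).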




___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] Segregated Witness features wish list

2015-12-09 Thread jl2012--- via bitcoin-dev
It seems the current consensus is to implement Segregated Witness. SW 
opens many new possibilities but we need a balance between new features 
and deployment time frame. I'm listing them by priority:


1-2 are about scalability and have highest priority

1. Witness size limit: with SW we should allow a bigger overall block 
size. It seems 2MB is considered to be safe by many people. However, 
the exact size and growth of the block size should be determined based on 
testing and reasonable projection.


2. Deployment time frame: I prefer as soon as possible, even if none of 
the following new features are implemented. This is not only a technical 
issue but also a response to the community which has been waiting for a 
scaling solution for years


3-6 promote safety and reduce level of trust (higher priority)

3. SIGHASH_WITHINPUTVALUE [1]: there are many SIGHASH proposals but this 
one has the highest priority as it makes offline signing much easier.


4. Sum of fee, sigopcount, size etc as part of the witness hash tree: 
for compact proof of violations in these parameters. I prefer to have 
this feature in SWv1. Otherwise, that would become an ugly softfork in 
SWv2 as we need to maintain one more hash tree


5. Height and position of an input as part of witness will allow compact 
proof of non-existing UTXO. We need this eventually. If it is not done 
in SWv1, we could softfork it nicely in SWv2. I prefer this earlier as 
this is the last puzzle for compact fraud proof.


6. BIP62 and OP_IF malleability fix [2] as standardness rules: 
involuntary malleability may still be a problem in the relay network and 
may make the relay less efficient (need more research)


7-15 are new features and long-term goals (lower priority)

7. Enable OP_CAT etc:
OP_CAT will allow tree signatures described by [3]. Even without Schnorr 
signature, m-of-n multisig will become more efficient if m < n.


OP_SUBSTR/OP_LEFT/OP_RIGHT will allow people to shorten a payment 
address, while sacrificing security.


I'm not sure how those disabled bitwise logic codes could be useful

Multiplication and division may still be considered risky and not 
very useful?


8. Schnorr signature: for very efficient multisig [3] but could be 
introduced later.


9. Per-input lock-time and relative lock-time: define lock-time and 
relative lock-time in witness, and signed by user. BIP68 is not a very 
ideal solution due to limited lock time length and resolution


10. OP_PUSHLOCKTIME and OP_PUSHRELATIVELOCKTIME: push the lock-time and 
relative lock-time to stack. Will allow more flexibility than OP_CLTV 
and OP_CSV


11. OP_RETURNTRUE which allows a softfork of any new OP codes [4]. It is 
not really necessary with the version byte design but with OP_RETURNTRUE 
we don't need to bump the version byte too frequently.


12. OP_EVAL (BIP12), which enables Merkleized Abstract Syntax Trees 
(MAST) with OP_CAT [5]. This will also obsolete BIP16. Further 
restrictions should be made to make it safe [6]:

a) We may allow at most one OP_EVAL in the scriptPubKey
b) Not allow any OP_EVAL in the serialized script, nor anywhere else in 
the witness (therefore not Turing-complete)
c) In order to maintain the ability to statically analyze scripts, the 
serialized script must be the last push of the witness (or script 
fails), and OP_EVAL must be the last OP code in the scriptPubKey


13. Combo OP codes for more compact scripts, for example:

OP_MERKLEHASH160, if executed, is equivalent to OP_SWAP OP_IF OP_SWAP 
OP_ENDIF OP_CAT OP_HASH160 [3]. Allowing more compact tree-signature and 
MAST scripts.


OP_DUPTOALTSTACK, OP_DUPFROMALTSTACK: copy to / from alt stack without 
removing the item


14. UTXO commitment: good but not in near future

15. Starting as a softfork, moving to a hardfork? SW Softfork is a quick 
but dirty solution. I believe a hardfork is unavoidable in the future, 
as the 1MB limit has to be increased someday. If we could plan ahead, 
we could have a much cleaner SW hardfork in the future, with code 
pre-announced for 2 years.



[1] https://bitcointalk.org/index.php?topic=181734.0
[2] 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-November/011679.html

[3] https://blockstream.com/2015/08/24/treesignatures/
[4] https://bitcointalk.org/index.php?topic=1106586.0
[5] 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-September/010977.html

[6] https://bitcointalk.org/index.php?topic=58579.msg690093#msg690093
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Dealing with OP_IF and OP_NOTIF malleability

2015-11-06 Thread jl2012 via bitcoin-dev
I assume this proposal is implemented at the same time as BIP62. As long 
as OP_IF/OP_NOTIF interprets the argument as a number, zero-padded 
numbers and negative zero are already prohibited by BIP62


Tier Nolan via bitcoin-dev wrote on 2015-11-06 04:37:

I meant not to use the OP_PUSH opcodes to do the push.

Does OP_0 give a zero length byte array?

Would this script return true?

OP_0

OP_PUSHDATA1 (length = 1, data = 0)

OP_EQUAL

The easiest definition is that OP_0 and OP_1 must be used to push the
data and not any other push opcodes.

On Fri, Nov 6, 2015 at 9:32 AM, Oleg Andreev 
wrote:


One and zero should be defined as arrays of length one.

Otherwise, it is still possible to mutate the transaction by
changing the length of the array.


They should also be minimally encoded but that is covered by

previous rules.

These two lines contradict each other. Minimally-encoded "zero" is
an array of length zero, not one. I'd suggest defining this
explicitly here as "IF/NOTIF argument must be either zero-length
array or a single byte 0x01".



___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Compatibility requirements for hard or soft forks

2015-11-01 Thread jl2012 via bitcoin-dev
My answer is simply "No", you don't have to maintain backward 
compatibility for non-standard tx.


The same question applies to P2SH. Before the deployment of BIP16, one 
could have created a time-locked tx with one of the outputs in the 
form of HASH160 <hash> EQUAL. The <hash>, however, is not a hash of a 
valid serialized script, so the output is now permanently frozen.


It also applies to all the OP codes disabled by Satoshi: one could have 
created a time-locked tx with those now disabled OP codes.


Same for BIP65 with the use of OP_NOP2. Following your logic, we can't 
make any softfork related to the script system.


I think it is very important to make it clear that non-standard txs and 
non-standard scripts may become invalid in the future


Gavin Andresen via bitcoin-dev wrote on 2015-10-28 10:06:

I'm hoping this fits under the moderation rule of "short-term changes
to the Bitcoin protcol" (I'm not exactly clear on what is meant by
"short-term"; it would be lovely if the moderators would start a
thread on bitcoin-discuss to clarify that):

Should it be a requirement that ANY one-megabyte transaction that is
valid
under the existing rules also be valid under new rules?

Pro:  There could be expensive-to-validate transactions created and
given a
lockTime in the future stored somewhere safe. Their owners may have no
other way of spending the funds (they might have thrown away the
private
keys), and changing validation rules to be more strict so that those
transactions are invalid would be an unacceptable confiscation of
funds.

Con: It is extremely unlikely there are any such large, timelocked
transactions, because the Core code has had a clear policy for years
that
100,000-byte transactions are standard and are relayed and
mined, and
larger transactions are not. The requirement should be relaxed so that
only
valid 100,000-byte transaction under old consensus rules must be valid
under new consensus rules (larger transactions may or may not be
valid).

I had to wrestle with that question when I implemented BIP101/Bitcoin
XT
when deciding on a limit for signature hashing (and decided the right
answer was to support any "non-attack"1MB transaction; see
https://bitcoincore.org/~gavin/ValidationSanity.pdf [1] for more
details).

--

--
Gavin Andresen


Links:
--
[1] https://bitcoincore.org/~gavin/ValidationSanity.pdf

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP 113: Median time-past is a HARDfork, not a softfork!

2015-11-01 Thread jl2012 via bitcoin-dev
Currently, a tx may be included in a block only if its locktime (x) is 
smaller than the timestamp of the block (y)


BIP113 says that a tx may be included in a block only if x is smaller 
than the median-time-past (z)


It is already a consensus rule that y > z. Therefore, if x < z, x < y

The new rule is absolutely stricter than the old rule, so it is a 
softfork. Anything wrong with my interpretation?
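
A trivial Python sketch of the argument (made-up numbers, just to show 
the implication):

def can_confirm_old(locktime_x, block_timestamp_y):
    return locktime_x < block_timestamp_y

def can_confirm_new(locktime_x, median_time_past_z):
    return locktime_x < median_time_past_z

# consensus already requires y > z, so whenever the new rule passes
# (x < z) the old rule passes too (x < z < y): strictly tighter, a softfork
y, z = 1446400000, 1446396000          # arbitrary values with y > z
for x in range(1446390000, 1446410000, 1000):
    assert (not can_confirm_new(x, z)) or can_confirm_old(x, y)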


Luke Dashjr via bitcoin-dev wrote on 2015-11-01 14:06:
BIP 113 makes things valid which currently are not (any transaction 
with a
locktime between the median time past, and the block nTime). Therefore 
it is a

hardfork. Yet the current BIP describes and deploys it as a softfork.

Furthermore, Bitcoin Core one week ago merged #6566 adding BIP 113 
logic to
the mempool and block creation. This will probably produce invalid 
blocks

(which CNB's safety TestBlockValidity call should catch), and should be
reverted until an appropriate solution is determined.

Rusty suggested something like adding N hours to the median time past 
for
comparison, and to be a proper hardfork, this must be max()'d with the 
block
nTime. On the other hand, if we will have a hardfork in the next year 
or so,

it may be best to just hold off and deploy as part of that.

Further thoughts/input?

Luke
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Why not checkpointing the transactions?

2015-10-08 Thread jl2012 via bitcoin-dev

You are mixing multiple issues.

1. It is not possible to "checkpoint" in a totally decentralized and 
trustless way. You need the whole blockchain to confirm its validity, as 
a single invalid tx in the history will invalidate ALL blocks after it, 
even if the invalid tx is irrelevant to you.


2. Downloading the whole blockchain does not mean you need to store the 
whole blockchain. Spent transactions outputs can be safely removed from 
your harddrive. Please read section 7 of Satoshi's paper: 
https://bitcoin.org/bitcoin.pdf . This function is already implemented 
in Bitcoin Core 0.11


3. If you don't even want to download the whole blockchain, you can 
download and validate the portions that you are interested in. Satoshi 
called it Simplified Payment Verification (SPV), section 8 of his 
paper. It is secure as long as >50% of miners are honest. Android 
Bitcoin Wallet is an SPV wallet based on bitcoinj.


Finally, I think this kind of question would be better asked on the 
bitcointalk forum. The mailing list should be more specific to 
development, not merely some vague idea.




telemaco via bitcoin-dev wrote on 2015-10-08 23:18:

Hello,

I have been working on database engineering for many years and there
are some things i don't understand very well about how bitcoin
architecture works. I have not written here because i would not like
to disturb development with yet another of those far to implement
ideas that does not contribute to actual code as sometimes is said
here.

On any case today I have been listening the last beyond bitcoin video
about the new bitshares 2.0 and how they are changing the transaction
structure to do it more similar to what relational database management
systems have been doing for 30 years.

Keep a checkpointed state and just carry the new transactions. On
rdbms, anyone if they want to perform historical research or
something, they can just get the transaction log backups and replay
every single transaction since the beginning of history.
Why is the bitcoin network replaying every single transaction since the
beginning and not start from a closer state. Why is that information
even stored on every core node? Couldn't we just have a checkpointed
state and the new transactions and leave to "historical" nodes or
collectors the backup of all the transactions since the beginning of
history?

Replication rdbms have been working with this model for some time,
just being able to replicate at table, column, index, row or even db
level between many datacenters/continents and already serving the
financial world, banks and exchanges. Their tps is very fast because
they only transfer the smallest number of transactions that nodes
decide to be suscribed to, maybe japan exchange just needs
transactional info from japanese stocks on nasdaq or something
similar. But even if they suscribe to everything, the transactional
info is to some extent just a very small amount of information.

Couldn't we have just a very small transactional system with the
fewest number of working transactions and advancing checkpointed
states? We should be able to have nodes of the size of watches with
that structure, instead of holding everything for ever for all
eternity and hope on moore's law to keep us allowing infinite growth.
What if 5 internet submarine cables get cut on a earth movement or war
or there is a shortage of materials for chip manufacturing and the
network moore's law cannot keep up. Shouldn't performance optimization
and capacity planning go in both ways?. Having a really small working
"transaction log" allows companies to rely some transactional info to
little pdas on warehouses, or just relay a small amount of information
to a satellite, not every single transaction of the company forever.

After all if we could have a very small transactional workload and
leave behind the overload of all the previous transactions, we could
have bitcoin nodes on watches and have an incredibly decentralized
system that nobody can disrupt as the decentralization would be
massive. We could even create a very small odbc, jdbc connector on the
bitcoin client and just let any traditional rdbms system handle the
heavy load and just let bitcoin core rely everyone and his mother to a
level that noone could ever disrupt a very small amount of
transactional data.

Just some thoughts. Please don't be very harsh, i am still researching
bitcoin code and my intentions are the best as i cannot be more
passionate about the project.

Thanks,


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] CHECKSEQUENCEVERIFY - We need more usecases to motivate the change

2015-10-03 Thread jl2012 via bitcoin-dev
BIP68 allows per-input locktime, though I don't know how this could be 
useful.


BIP68 and BIP112 are mostly ready. If we try to reimplement 
relative-locktime without using nSequence, we may need to wait for 
another year for deployment.


A compromise is to make BIP68 optional, indicated by a bit in tx 
nVersion, as I suggested earlier (1). This will allow deploying 
relative-locktime without further delay while not permanently limiting 
future upgrades.


(1) 
http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-August/010043.html


Peter Todd via bitcoin-dev wrote on 2015-10-03 10:30:

BIP68 and BIP112 collectively define the CHECKSEQUENCEVERIFY semantics,
which can be summarized conceptually as a relative CHECKLOCKTIMEVERIFY.
However, CSV does define behavior for the previously undefined 
nSequence

field, which is the only "free-form" field we currently have in the
transaction serialization format that can be used for future upgrades -
we should justify this new behavior carefully as it limits our options
in the future. Adding new fields to the serialization format is very
difficult, due to the very broad system-wide impact of the hard-fork
required to do so.

So we need to make the case for two main things:

1) We have applications that need a relative (instead of absolute CLTV)

2) Additionally to RCLTV, we need to implement this via nSequence

To show we need RCLTV BIP112 provides the example "Escrow with 
Timeout",

which is a need that was brought up by GreenAddress, among others; I
don't think we have an issue there, though getting more examples would
be a good thing. (the CLTV BIP describes seven use cases, and one
additional future use-case)

However I don't think we've done a good job showing why we need to
implement this feature via nSequence. BIP68 describes the new nSequence
semantics, and gives the rational for them as being a
"Consensus-enforced tx replacement" mechanism, with a bidirectional
payment channel as an example of this in action. However, the
bidirectional payment channel concept itself can be easily implemented
with CLTV alone. There is a small drawback in that the initial
transaction could be delayed, reducing the overall time the channel
exists, but the protocol already assumes that transactions can be
reliably confirmed within a day - significantly less than the proposed
30 days duration of the channel. That example alone I don't think
justifies a fairly complex soft-fork that limits future upgrades; we
need more justification.

So, what else can the community come up with? nSequence itself exists
because of a failed feature that turned out to not work as intended;
it'd be a shame to make that kind of mistake again, so let's get our
semantics and use-cases in the BIPs and documented before we deploy.

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Crossing the line? [Was: Re: Let's deploy BIP65 CHECKLOCKTIMEVERIFY!]

2015-10-02 Thread jl2012 via bitcoin-dev
According to the Oxford Dictionary, "coin" as a verb means "invent (a 
new word or phrase)". Undoubtedly you created the first functional SPV 
client but please retract the claim "I coined the term SPV" or that's 
plagiarism.


And I'd like to highlight the following excerpt from the whitepaper: 
"the simplified method can be fooled by an attacker's fabricated 
transactions for as long as the attacker can continue to overpower the 
network. One strategy to protect against this would be to accept alerts 
from network nodes when they detect an invalid block, prompting the 
user's software to download the full block and alerted transactions to 
confirm the inconsistency."


Header only clients without any fraud detecting mechanism are functional 
but incomplete SPV implementations, according to Satoshi's original 
definition. This might be good enough for the first generation SPV 
wallet, but eventually SPV clients should be ready to detect any rule 
violation in the blockchain, including things like block size (as 
Satoshi mentioned "invalid block", not just "invalid transaction").


Mike Hearn via bitcoin-dev wrote on 2015-10-02 08:23:

FWIW the "coining" I am referring to is here:

https://bitcointalk.org/index.php?topic=7972.msg116285#msg116285 [4]

OK, with that, here goes. Firstly some terminology. I'm going to call
these things SPV clients for "simplified payment verification".
Headers-only is kind of a mouthful and "lightweight client" is too
vague, as there are several other designs that could be described as
lightweight like RPC frontend and Stefans WebCoin API approach

At that time nobody used the term "SPV wallet" to refer to what apps
like BreadWallet or libraries like bitcoinj do. Satoshi used the term
"client only mode", Jeff was calling them "headers only client" etc.
So I said, I'm going to call them SPV wallets after the section of the
whitepaper that most precisely describes their operation.


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Let's deploy BIP65 CHECKLOCKTIMEVERIFY!

2015-09-29 Thread jl2012 via bitcoin-dev

Jonathan Toomim (Toomim Bros) via bitcoin-dev wrote on 2015-09-29 09:30:

SPV clients will appear to behave normally, and
will continue to show new transactions and get confirmations in a
timely fashion. However, they will be systematically susceptible to
attack from double-spends that attempt to spend funds in a way that
the upgraded nodes will reject. These transactions will appear to get
1 confirmation, then regress to zero conf, every single time. These
attacks can be performed for as long as someone mines with the old
version.


1. Who told you to accept 1-confirmation tx? Satoshi recommended 6 
confirmations in the whitepaper. Take your own risk if you do not follow 
his advice.


2. This is true only if your SPV client naively follows the longest 
chain without even looking at the block version. This might be good 
enough for the 1st generation SPV client, but future generations should 
at least have basic fraud detecting mechanism.





If an attacker thinks he could get more than 25 BTC of
double-spends per block, he might even choose to mine with the
obsolete version in order to get predictable orphans and to trick SPV
clients and fully verifying wallets on the old version.


This point is totally irrelevant. Whether there is a softfork or not, 
SPV users are always vulnerable to such double-spending attacks if they 
blindly follow the longest chain AND accept 1-confirmation txs. The fiat 
currency system might be safer for them.

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Let's deploy BIP65 CHECKLOCKTIMEVERIFY!

2015-09-28 Thread jl2012 via bitcoin-dev

Mike Hearn via bitcoin-dev wrote on 2015-09-28 11:38:


My point about IsStandard is that miners can and do bypass it,
without expecting that to carry financial consequences or lower the
security of other users. By making it so a block which includes
non-standard transactions can end up being seen as invalid, you are
increasing the risk of accidents that carry financial consequences.


Bypassing IsStandard should be considered an "expert mode". The 
message should be "don't bypass it unless you understand what you are 
doing".


By the way, miners are PAID to protect the network. It is their greatest 
responsibility to follow the development and keep their software up to 
date.





How do ordinary Bitcoin users benefit from this rollout strategy? Put
simply, what is the point of this whole complex soft fork endeavour?


Let me try to answer this question. A softfork is beneficial to non-mining 
full nodes as they will follow the majority chain. In the case of a 
hardfork (e.g. BIP101), non-upgrading full nodes will insist on following 
the minority chain. (unless you believe that all non-miners should use an 
SPV client)


To put it from a different angle: in a softfork, the new fork is a persistent 
95% attack against the old fork, which will force all non-cooperating 
miners to join (or leave). In a hardfork, however, there is no mechanism 
to stop the old fork and we may have 2 chains co-existing for a long time.


Although it is not mentioned in the whitepaper, the ability to softfork 
is a feature of Bitcoin. Otherwise, we won't have these OP_NOPs and the 
original OP_RETURN.

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Let's deploy BIP65 CHECKLOCKTIMEVERIFY!

2015-09-27 Thread jl2012 via bitcoin-dev
+1 for deploying BIP65 immediately without further waiting. Agree with 
all Peter's points.


If BIP65 has to follow the 0.12 schedule, it will take almost 9 months 
from now to complete the softfork. I don't see any good reason to wait 
for that long. We have too much talk, too little action.


Some mining pools hinted that they may adopt BitcoinXT at the end of 
2015. If we could start deploying BIP65 earlier, they will have a 
patched version by the time they switch. Gavin has agreed to support 
BIP65 in XT.


By the way, is there any chance to backport it to 0.9? In the deployment 
of BIP66 some miners requested a backport to 0.9 and that's why we have 
0.9.5.

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Weekly development meetings on IRC: schedule

2015-09-23 Thread jl2012 via bitcoin-dev
There could not be a worse time than this for those in China (3-4am), 
Japan/Korea (4-5am), and Australia (3-6am depending on which part of the 
country). Maybe we have no dev in this part of the planet? Is there any 
chance to review the timing on a weekly or monthly basis (also with a 
doodle vote?)


Will there be any agenda published before the meetings? If I'm really 
interested in the topics, I'll have some reasons to get up in the middle 
of the night.


Wladimir J. van der Laan via bitcoin-dev wrote on 2015-09-22 10:36:

Hello,

There was overwhelming response that weekly IRC meetings are a good 
thing.


Thanks to the doodle site we were able to select a time slot that
everyone (that voted) is available:

Thursday 19:00-20:00 UTC, every week, starting September 24 (next 
Thursday)


I created a shared Google Calendar here:
https://www.google.com/calendar/embed?src=MTFwcXZkZ3BkOTlubGliZjliYTg2MXZ1OHNAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ

The timezone of this calendar is Reykjavik (Iceland) which is UTC+0.
However, you can use the button on the lower right to add the calendar
to your own calendar, which will then show the meeting in your own
timezone.

See you then,

Wladimir

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] Fill-or-kill transaction

2015-09-17 Thread jl2012 via bitcoin-dev
Fill-or-kill tx is not a new idea and was discussed at the Scaling 
Bitcoin workshop. In Satoshi's implementation of nLockTime, a huge range 
of timestamps (from 1970 to 2009) is wasted. By exploiting this unused 
range and with a compromise in the time resolution, a fill-or-kill system 
could be built with a softfork.


---
Two new parameters, nLockTime2 and nKillTime are defined:

nLockTime2 (Range: 0-1,853,010)
0: Tx could be confirmed at or after block 420,000
1: Tx could be confirmed at or after block 420,004
.
.
719,999: Tx could be confirmed at or after block 3,299,996 (about 55 
years from now)
720,000: Tx could be confirmed if the median time-past >= 1,474,562,048 
(2016-09-22)
720,001: Tx could be confirmed if the median time-past >= 1,474,564,096 
(2016-09-22)

.
.
1,853,010 (max): Tx could be confirmed if the median time-past >= 
3,794,966,528 (2090-04-04)


nKillTime (Range: 0-2047)
if nLockTime2 < 720,000, the tx could be confirmed at or before block 
420,000 + (nLockTime2 + nKillTime) * 4
if nLockTime2 >= 720,000, the tx could be confirmed if the median 
time-past <= (nLockTime2 + 1 + nKillTime) * 2048


Finally, nLockTime = 500,000,000 + nKillTime + nLockTime2 * 2048

Setting a bit flag in tx nVersion will activate the new rules.

The resolution is 4 blocks or 2048s (34m)
The maximum confirmation window is 8188 blocks (56.9 days) or 
4,192,256s (48.5 days)


For example:
With nLockTime2 = 20 and nKillTime = 100, a tx could be confirmed only 
between block 420,080 and 420,480
With nLockTime2 = 730,000 and nKillTime = 1000, a tx could be confirmed 
only between median time-past of 1,495,042,048 and 1,497,090,048
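
A small Python sketch of the encoding and the resulting confirmation 
window, following the worked examples above (this is only my reading of 
the proposal, not reference code):

def encode_nlocktime(nlocktime2, nkilltime):
    assert 0 <= nlocktime2 <= 1853010 and 0 <= nkilltime <= 2047
    return 500000000 + nkilltime + nlocktime2 * 2048

def confirmation_window(nlocktime2, nkilltime):
    # returns ('height', first, last) or ('mtp', first, last)
    if nlocktime2 < 720000:
        first = 420000 + 4 * nlocktime2
        last = 420000 + 4 * (nlocktime2 + nkilltime)
        return ('height', first, last)
    first = (nlocktime2 + 1) * 2048
    last = (nlocktime2 + 1 + nkilltime) * 2048
    return ('mtp', first, last)

# reproduces the examples above:
assert confirmation_window(20, 100) == ('height', 420080, 420480)
assert confirmation_window(730000, 1000) == ('mtp', 1495042048, 1497090048)
assert encode_nlocktime(720000, 0) == 1974560000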



Why is this a softfork?

Remember this formula: nLockTime = 500,000,000 + nKillTime + nLockTime2 
* 2048


For height based nLockTime2 (<= 719,999)

For nLockTime2 = 0 and nKillTime = 0, nLockTime = 500,000,000, which 
means the tx could be confirmed after 1970-01-01 with the original lock 
time rule. As the new rule does not allow confirmation until block 
420,000, it's clearly a softfork.


It is not difficult to see that the growth of nLockTime will never catch 
up with nLockTime2.


At nLockTime2 = 719,999 and nKillTime = 2047, nLockTime = 1,974,559,999, 
which means 2016-09-22. However, the new rule will not allow 
confirmation until block 3,299,996 which is decades to go




For time based nLockTime2 (> 720,000)

For nLockTime2 = 720,000 and nKillTime = 0, nLockTime = 1,974,560,000, 
which means the tx could be confirmed after median time-past 
1,474,560,000 (assuming BIP113). However, the new rule will not allow 
confirmation until 1,474,562,048, therefore a soft fork.


For nLockTime2 = 720,000 and nKillTime = 2047, nLockTime = 
1,974,562,047, which could be confirmed at 1,474,562,047. Again, the new 
rule will not allow confirmation until 1,474,562,048. The 1 second 
difference makes it a soft fork.


Actually, for every nLockTime2 value >= 720,000, the lock time with the 
new rule must be 1-2048 seconds later than the original rule.


For nLockTime2 = 1,853,010 and nKillTime = 2047, nLockTime = 
4,294,966,527, which is the highest possible value with the 32-bit 
nLockTime



User's perspective:

A user wants his tx either filled or killed in about 3 hours. He will 
set a time-based nLockTime2 according to the current median time-past, 
and set nKillTime = 5


A user wants his tx to get confirmed in block 630,000, the first block 
with a reward below 10BTC. He is willing to pay a high fee but doesn't want 
it to get into any other block. He will set nLockTime2 = 52,500 and nKillTime 
= 0



OP_CLTV

Time-based OP_CLTV could be upgraded to support time-based nLockTime2. 
However, height-based OP_CLTV is not compatible with nLockTime2. To 
spend a height-based OP_CLTV output, user must use the original 
nLockTime.


We may need a new OP_CLTV2 which could verify both nLockTime and 
nLockTime2



55 years after?

The height-based nLockTime2 will overflow in 55 years. It is very likely 
a hard fork will happen to implement a better fill-or-kill system. If 
not, we could reboot everything with another tx nVersion for another 55 
years.



___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] MAST with OP_EVAL and OP_CAT

2015-09-10 Thread jl2012 via bitcoin-dev
Inspired by Pieter's Tree Signatures, I believe Merkleized Abstract 
Syntax Trees (MAST) could be implemented with only OP_CAT and OP_EVAL 
(BIP12).


The idea is very simple. Using a similar example in Pieter's paper,

scriptSig =   Z1 0 1 1 X6 1 K9 0 


scriptPubKey = DUP HASH160 <hash of serialized script> EQUALVERIFY EVAL
serialized script = 8 PICK SHA256 (SWAP IF SWAP ENDIF CAT SHA256)*4 
<merkle root> EQUALVERIFY EVAL


This will run the 10-th sub-script, when there are 11 sub-scripts in the 
MAST
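
To illustrate the Merkle part of the serialized script, here is a Python 
sketch of the check it performs (the direction-bit convention and the 
single SHA256 per node are assumptions read off the script fragment, not 
a finished spec):

import hashlib

def sha256(b):
    return hashlib.sha256(b).digest()

def mast_root(sub_script, path):
    # 'path' is a list of (sibling_hash, bit) pairs, one per tree level;
    # bit = 1 means the running hash goes on the right before concatenation,
    # mirroring the SWAP IF SWAP ENDIF CAT SHA256 sequence
    h = sha256(sub_script)
    for sibling, bit in path:
        pair = sibling + h if bit else h + sibling
        h = sha256(pair)
    return h

# spending: the redeemer supplies the sub-script, its arguments and the
# 4-level path; the script checks mast_root(sub_script, path) against the
# embedded root and then EVALs the sub-script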


I think this is the easiest way to enable MAST since the reference 
implementation for BIP12 is already there. We could enable OP_CAT only 
inside OP_EVAL so this will be a pure softfork.


Ref:
Tree Signatures: https://blockstream.com/2015/08/24/treesignatures/
BIP12: https://github.com/bitcoin/bips/blob/master/bip-0012.mediawiki
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP 100 specification

2015-09-03 Thread jl2012 via bitcoin-dev
1. I think there is no need to have resolution at byte level, while 
resolution at MB level is not enough. kB would be a better choice.


2. In my specification a v4 block without a vote is invalid, so there is 
no need to consider absent or invalid votes


3. We should allow miners to explicitly vote for the status quo, so they 
don't need to change the coinbase vote every time the size is changed. 
They may indicate it by /BV/ in the coinbase, and we should look for the 
first "/BVd*/" instead of "/BVd+/"


4. Alternatively, miners may vote in different styles: /BV1234567/, 
/BV1500K/, /BV3M/. The first one means 1.234567MB, the second one is 
1.5MB, the last one is 3MB. The pattern is "/BV(\d+[KM]?)?/"
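
A quick Python sketch of the vote parsing suggested in points 3 and 4 (a 
sketch of this suggestion only, not code from any BIP100 implementation):

import re

VOTE_RE = re.compile(r'/BV(\d+[KM]?)?/')

def parse_vote(coinbase_text, current_limit):
    # /BV/ keeps the status quo, /BV1500K/ is 1,500,000 bytes,
    # /BV3M/ is 3,000,000 bytes, /BV1234567/ is 1,234,567 bytes;
    # only the first match counts
    m = VOTE_RE.search(coinbase_text)
    if m is None:
        return None                    # no vote at all: invalid under point 2
    if m.group(1) is None:
        return current_limit           # bare /BV/: explicit status-quo vote
    v = m.group(1)
    if v.endswith('K'):
        return int(v[:-1]) * 1000
    if v.endswith('M'):
        return int(v[:-1]) * 1000000
    return int(v)

assert parse_vote('/BV/', 1000000) == 1000000
assert parse_vote('foo /BV1500K/ bar', 1000000) == 1500000
assert parse_vote('/BV3M/ /BV8M/', 1000000) == 3000000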


Tier Nolan via bitcoin-dev wrote on 2015-09-03 07:59:

On Thu, Sep 3, 2015 at 8:57 AM, jl2012 via bitcoin-dev
<bitcoin-dev@lists.linuxfoundation.org> wrote:


*

hardLimit floats within the range 1-32M, inclusive.


Does the 32MB limit actually still exist anywhere in the code?  In
effect, it is re-instating a legacy limitation.

The message size limit is to minimize the storage required per peer.
If a 32MB block size is required, then each network input buffer must
be at least 32MB. This makes it harder for a node to support a large
number of peers.

There is no reason why a single message is used for each block.  Using
the merkleblock message (or a different dedicated message), it would
be possible to send messages which only contain part of a block and
have a limited maximum size.

This would allow receiving parts of a block from multiple sources.

This is a separate issue but should be considered if moving past 32MB
block sizes (or maybe as a later protocol change).


* Changing hardLimit is accomplished by encoding a proposed value
within a block's coinbase scriptSig.

* Votes refer to a byte value, encoded within the pattern "/BV\d+/"
Example: /BV8000000/ votes for 8,000,000 byte hardLimit. If there is 
more than one match with the pattern, the first match is counted.


Is there a need for byte resolution?  Using MB resolution would use up
much fewer bytes in the coinbase.

Even with the +/- 20% rule, miners could vote for the nearest MB.
Once the block size exceeds 5MB, then there is enough resolution
anyway.


* Absent/invalid votes and votes below minimum cap (1M) are
counted as 1M votes. Votes above the maximum cap (32M) are counted
as 32M votes.


I think abstains should count for the status quo.  Votes which are out
of range should be clamped.

Having said that, if core supports the change, then most miners will
probably vote one way or another.


New hardLimit is the median of the followings:
min(current hardLimit * 1.2, 20-percentile)
max(current hardLimit / 1.2, 80-percentile)
current hardLimit


I think this is unclear, though mathematically exact.

Sort the votes for the last 12,000 blocks from lowest to highest.

Blocks which don't have a vote are considered a vote for the status
quo.

Votes are limited to +/- 20% of the current value.  Votes that are out
of range are considered to vote for the nearest in range value.

The raise value is defined as the vote for the 2400th highest block
(20th percentile).

The lower value  is defined as the vote for the 9600th highest block
(80th percentile).

If the raise value is higher than the status quo, then the new limit
is set to the raise value.

If the lower value is lower than the status quo, then the new limit is
set to the lower value.

Otherwise, the size limit is unchanged.

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] block size - pay with difficulty

2015-09-02 Thread jl2012 via bitcoin-dev

Jeff Garzik via bitcoin-dev wrote on 2015-09-03 00:05:

Schemes proposing to pay with difficulty / hashpower to change block
size should be avoided.  The miners incentive has always been fairly
straightforward - it is rational to deploy new hashpower as soon as
you can get it online.  Introducing the concepts of (a) requiring
out-of-band collusion to change block size and/or (b) requiring miners
to have idle hashpower on hand to change block size are both
unrealistic and potentially corrosive.  That potentially makes the
block size - and therefore fee market - too close, too sensitive to
the wild vagaries of the mining chip market.

Pay-to-future-miner has neutral, forward looking incentives worth
researching.


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Ref: 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-August/010723.html


I explained here why pay with difficulty is bad for everyone: miners and 
users, and described the use of OP_CLTV for pay-to-future-miner


However, a general problem of pay-to-increase-block-size schemes is that they 
indirectly set a minimal tx fee, which could be difficult and 
arbitrary to choose, and is against competition



___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Short review of previously-proposed exotic SIGHASH types

2015-08-31 Thread jl2012 via bitcoin-dev

Bryan Bishop via bitcoin-dev wrote on 2015-08-30 14:56:



SIGHASH_WITHOUT_PREV_SCRIPTPUBKEY
SIGHASH_WITHOUT_PREV_VALUE
SIGHASH_WITHOUT_INPUT_TXID
SIGHASH_WITHOUT_INPUT_INDEX
SIGHASH_WITHOUT_INPUT_SEQUENCE
SIGHASH_WITHOUT_OUTPUT_SCRIPTPUBKEY
SIGHASH_WITHOUT_OUTPUT_VALUE
SIGHASH_WITHOUT_INPUTS
SIGHASH_WITHOUT_OUTPUTS
SIGHASH_WITHOUT_INPUT_SELF
SIGHASH_WITHOUT_OUTPUT_SELF
SIGHASH_WITHOUT_TX_VERSION
SIGHASH_WITHOUT_TX_LOCKTIME
SIGHASH_SIGN_STACK_ELEMENT:
https://github.com/scmorse/bitcoin-misc/blob/master/sighash_proposal.md



Thanks for your summary. This one seems particularly interesting. 
However, it does not allow fine adjustment for each input and output 
separately, so I wonder if it really "fully enable any seen or unforseen 
use case of the CTransactionSignatureSerializer." as it claims.

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Consensus based block size retargeting algorithm (draft)

2015-08-31 Thread jl2012 via bitcoin-dev

Jorge Timón wrote on 2015-08-30 14:56:

On Sun, Aug 30, 2015 at 7:13 PM,   wrote:
This is based on the assumption that miners would always like to use 
up the

last byte of the available block size. However, this is just not true:

1. The 6 year blockchain history has shown that most miners have a 
soft cap

with their block size.

2. Chinese miners, controlling 60% of the network, rejected Gavin's 
initial

20MB proposal and asked for 8MB:
http://cointelegraph.com/news/114577/chinese-mining-pools-propose-alternative-8-mb-block-size
[...]


No, I'm not making such assumption. I'm focusing on what they CAN do,
while suspending judgement on their good will and not trying to
predict their future behavior from historic behaviour.
With 60% of the hashrate, you can easily get 100% by orphaning
everybody else's blocks. More importantly, being under the same
jurisdiction they can be forced to behave in certain way (for example,
censor transactions) by law.
I'm very worried about the current situation no matter how benevolent
current miners are. Thus weakening the only limit to mining
centralization that we have at the consensus rule level seems
extremely risky at this point.


The reason 60% of blocks were generated in China is the same as the 
reason 60% of your clothes were made in China. The electricity there 
is the cheapest on the planet. Many dams were built in the past 10 years 
and now they have huge amount of surplus electricity due to economic 
downturn.


Not sure if you are aware of this thread: 
https://bitcointalk.org/index.php?topic=1072474.0 . Could you imagine 
this in any developed country? As long as mining is largely dependent on 
energy, there is no hope to break the balance/imbalance.


Bandwidth is probably only a few percent of miners' cost. There is no 
evidence that the current level of centralization is a result of block 
size. Instead, clear evidence has shown that centralization is a result 
of pool mining*, invention of ASIC, and disparity of energy cost. (* 
People started pool mining in 2010 because they wanted lower variance, 
not because of the inability to run a full node)



For many reasons miners may want to have a smaller block size, which 
we
don't need to list them here. Although they can limit it by a softfork 
or
even 51% attack, it is a very violent process. Why don't we just allow 
them

to vote for a lower limit?

So I think the right way is to choose a mining-centralization-safe 
limit,
and let it free float within a range based on miner's vote. If we are 
lucky

enough to have some responsible miners, they will keep it as low as
possible, until the legitimate tx volume catches up. Even in the worst 
case,
the block size is still mining-centralization-safe. The upper limit 
may
increase linearly, if not exponentially, until we find a better 
long-term

solution. (sort of a combination of BIP100 and 101, with different
parameters)


My point is, a "soft cap" determined by miners clearly doesn't protect
us from mining centralization: the "hard cap" does.
Knowing that, and given that miners can currently set their own policy
block size maximum, what does this "voting on a lower limit" achieve?
What are the gains? Why are we "lucky" if they keep the lower one as
low as possible?


Even if we could quantify the level of centralization, it is a continuum 
and we must compromise between utility and centralization. Unless 
BIP101/103 is adopted, adjusting the hard cap always require a hardfork. 
For obvious technical and political reasons we can't have hardfork too 
frequently. Therefore, we need to leave some leeway: the hard cap may be 
a bit too high for today, but we are sure that technology will catch up 
in the near future.


Assuming we have plenty of "benevolent" miners, they will keep 
the block size low unless there is a real demand for larger block space. 
This is different from setting an individual soft limit, as that will 
lead to block size scarcity and therefore higher tx fees, which may be 
good for all miners. And as we say "miners can always decrease the block 
size with a softfork or 51% attack", BIP100 materializes this possibility 
in a much smoother way.


I say "lucky" because I wholeheartedly believe it is good to keep the 
block as small as we really need. We can't do this by an equation so I 
would prefer to leave the power to miners (and they always have this 
power, anyway).



For the matter of "urgency", I agree with you that there is no actual
urgency AT THIS MOMENT. However, if a hardfork may take 5 years to 
deploy

(as you suggested), we really have the urgency to make a decision now.


Thank you for admitting it is not urgent!
I suggested 5 years for the concrete hardfork in bip99 because it's
clearly non-urgent and I wanted to be very conservative. I'm happy to
reduce that to say, 1 year (specially given that the change is very
simple to implement).
For a simple block size change (like, say bip102) 1 year (maybe 6

Re: [bitcoin-dev] BIP: Using Median time-past as endpoint for locktime calculations

2015-08-28 Thread jl2012 via bitcoin-dev
I have an ugly solution to this problem, with minimal change to the 
current softfork logic, which still allows all the features described in 
sipa's Version bits BIP


1. xVersion = nVersion AND 0b10011000
2. miners supporting BIP65 will set xVersion = 8 or greater
3. If 750 of the last 1000 blocks have xVersion >= 8, reject invalid 
xVersion 8 (or greater) blocks
4. If 950 of the last 1000 blocks have xVersion >= 8, reject all blocks 
with xVersion < 8


So the logic is exactly the same as BIP66, with the AND masking in step 
1. After the BIP65 softfork is applied, xVersion may take only one of 
the following values: 8, 16, 24
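
Roughly, in Python (a sketch of the idea only, with the mask taken from 
step 1 above; the real deployment logic would be an IsSuperMajority-style 
check in the node):

MASK = 0b10011000                     # xVersion = nVersion AND MASK (step 1)

def xversion(nversion):
    return nversion & MASK

def softfork_rules(last_1000_nversions):
    upgraded = sum(1 for v in last_1000_nversions if xversion(v) >= 8)
    return {
        'reject_invalid_new_rule_blocks': upgraded >= 750,   # step 3
        'reject_xversion_below_8': upgraded >= 950,          # step 4
    }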


This is basically moving the high bits in sipa's proposal to the 
middle of the nVersion field. After the BIP65 softfork, this will still 
leave 29 available bits for parallel soft forks, just in different 
position.


This is ugly, but I believe this is the easiest solution

Ref: https://gist.github.com/sipa/bf69659f43e763540550

Peter Todd via bitcoin-dev wrote on 2015-08-27 19:19:

On Thu, Aug 27, 2015 at 11:08:32PM +0100, Btc Drak wrote:

This BIP was assigned number 113.

I have updated the text accordingly and added credits to Gregory 
Maxwell.


Please see the changes in the pull request:
https://github.com/bitcoin/bips/pull/182


On Thu, Aug 27, 2015 at 11:11:10PM +0100, Btc Drak via bitcoin-dev 
wrote:

I have changed BIPS 112 and 113 to reflect this amended deployment
strategy. I'm beginning to think the issues created by Bitcoin XT are
so serious it probably deserves converting OPs text into an
informational BIP.


I thought we had decided that the masking thing doesn't work as
intended?

To recap, XT nodes are producing blocks with nVersion=0b001...111

You're suggesting that we apply a mask of ~0b001...111 then trigger the
soft-fork on nVersion = 0b0...100 == 4, with miners producing blocks 
with

nVersion=0b0...1000

That will work, but it still uses up a version bit. The reason why is
blocks with nVersion=0b001...000 - the intended deployment of the
nVersion bits proposal - will be rejected by the nVersion = 4 rule,
hard-forking them off the network. In short, we have in fact burnt a
version bit unnecessarily.

If you're going to accept hard-forking some people off the network, why
not just go with my stateless nVersion bits w/ time-expiration proposal
instead? The only case where it leads to a hard-fork is if a soft-fork
has been rejected by the time the upgrade deadline is reached. It's 
easy

to set this multiple years into the future, so I think in practice it
won't be a major issue for non-controversial soft-forks.

Equally, spending the time to implement the original stateful nVersion
bits proposal is possible as well, though higher risk due to the extra
complexity of tracking soft-fork state.

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Questiosn about BIP100

2015-08-27 Thread jl2012 via bitcoin-dev
Mode could be ruled out immediately. Just consider this: 34% 8MB, 33% 
1.5MB, 33% 1.2MB


I personally believe the median is the most natural and logical choice. 
51% of miners can always force the other 49% to follow the simple majority 
choice through a 51% attack. Using the median will eliminate the incentive 
to 51% attack for this reason. The incentive to 51% attack will exist 
when you use any value other than the 50th percentile. The further it is from 
50, the bigger the incentive.


Having said that, I don't think it is an absolutely bad idea to use a 
value other than 50-percentile. The exact value is debatable.


However, if you use something other than median, you should make it 
symmetrical. For example, the block size will increase if the 
20-percentile is bigger than the current limit, and the block size will 
decrease if the 80-percentile is smaller than the current limit.
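
In other words (a rough Python sketch of this symmetric rule, reusing the 
20% growth/shrink cap from the BIP100 formula quoted earlier in the 
thread; the exact percentile indexing is an assumption):

def new_hard_limit(current, votes):
    # votes: one size vote (in bytes) per block of the voting window
    if not votes:
        return current
    s = sorted(votes)
    p20 = s[int(len(s) * 0.20)]        # 20th percentile from the bottom
    p80 = s[int(len(s) * 0.80)]        # 80th percentile from the bottom
    if p20 > current:                  # at least ~80% voted for something bigger
        return min(p20, int(current * 1.2))
    if p80 < current:                  # at least ~80% voted for something smaller
        return max(p80, int(current / 1.2))
    return current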





Jeff Garzik via bitcoin-dev wrote on 2015-08-27 16:49:

20th percentile, though there is some argument to take the 'mode' of
several tranches

On Thu, Aug 27, 2015 at 11:07 AM, Andrew C achow...@gmail.com wrote:


I have been reading the pdf and one thing I can't figure out is what
you mean by most common floor. Is that the smallest block size
that has a vote or the block size with the most votes or something
else?

On Mon, Aug 24, 2015 at 10:40 AM Jeff Garzik jgar...@gmail.com
wrote:

Great questions.

- Currently working on technical BIP draft and implementation,
hopefully for ScalingBitcoin.org. Only the PDF is publicly
available as of today.
- Yes, the initial deployment is in the same manner as size votes.

On Fri, Aug 21, 2015 at 7:38 PM, Andrew C via bitcoin-dev
bitcoin-dev@lists.linuxfoundation.org wrote:

Hi all,

Is there any client or code that currently implements BIP 100? And
how will it be deployed? WIll the initial fork be deployed in the
same manner that the max block size changes are deployed described
in the bip?

Thanks

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev [1]




Links:
--
[1] https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev




Re: [bitcoin-dev] BIPS proposal for implementing AML-KYC in bitcoin

2015-08-27 Thread jl2012 via bitcoin-dev
Very good, I can't wait to see it. Please code it up and submit a pull 
request to GitHub. Don't expect someone else to do it for you.


prabhat via bitcoin-dev wrote on 2015-08-27 08:06:


snip.




Folks, suggest something, scrap my idea, but let's build something to
save this ecosystem, otherwise it is impossible to realise this dream
of decentralized currency. Other coins and protocols are there who may
implement something, and egoists always meet the ashes.



___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [BIP-draft] CHECKSEQUENCEVERIFY - An opcode for relative locktime

2015-08-24 Thread jl2012 via bitcoin-dev
Your proposal also permanently burns a sequence bit. It depends on how 
we value an nSequence bit versus an nVersion bit. I think there is a 
trade-off here:


1. nSequence is signed by each TxIn individually, while all TxIns must 
share the same nVersion


2. If nVersion is used to indicate the meaning of nSequence (as I 
suggested):

Pros:
It saves an nSequence bit and allows more space for redefining the 
nSequence

Cons:
It burns an nVersion bit.
All TxIns in a tx must share the same meaning for their nSequence

3. If nSequence is used to indicate the meaning of itself (as you 
suggested):

Pros:
It saves an nVersion bit
Different TxIns may have different meanings with their nSequence
Cons:
It burns an nSequence bit, thus less space for extension

I don't think there is a perfect choice. However, I still prefer my 
proposal because:


1. nSequence is signed by each TxIn individually and could be more 
interesting than nVersion.
2. If nVersion is expected to be a monotonic number, 2 bytes = 65536 
versions are enough for 65 millennia if it ticks once per year. 4 bytes 
is overkill. Why don't we spend a bit if there is a good reason? Most 
softforks (e.g. OP_CLTV, OP_CSV, BIP66) are not optional. This kind of 
optional new function would not be common and should never use up the 
version bits. (or, could you suggest a better use of the tx version 
bits?)
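
To illustrate the trade-off, a minimal sketch of the two signalling options 
being compared; the bit positions are examples only, not a specification:

    # Illustrative comparison of the two ways to opt in to BIP68-style semantics.
    SEQ_FLAG_BIT = 1 << 31    # flag carried in each input's nSequence (as in the draft)
    VER_FLAG_BIT = 1 << 2     # flag carried in the tx nVersion (example position)

    def bip68_active_per_input(nsequence: int) -> bool:
        # Burns one nSequence bit, but each TxIn decides for itself.
        return bool(nsequence & SEQ_FLAG_BIT)

    def bip68_active_per_tx(nversion: int) -> bool:
        # Burns one nVersion bit, and every TxIn shares the same meaning.
        return bool(nversion & VER_FLAG_BIT)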



Mark Friedenbach wrote on 2015-08-23 22:54:

Sorry this was meant for the list:

There are only 32 bits in the version field. If you're going to spend
a bit for perpetuity to indicate whether or not a feature is active,
you'd better have a good reason to make that feature optional.

I haven't seen a compelling use case for having BIP 68 be optional in
that way. As you note, BIP 68 semantics is already optional by
toggling the most significant bit, and that doesn't permanently burn a
version bit.


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] Encouraging mining of the first few big blocks with OP_CSV and BIP68

2015-08-23 Thread jl2012 via bitcoin-dev
Someone is going to burn 150BTC to create a 30-day backlog in September. 
https://www.reddit.com/r/Bitcoin/comments/3hgke4/coinwallet_says_bitcoin_stress_test_in_september/ 
However, the money could be spent more wisely by encouraging mining of 
the first few big blocks


Assumptions:
1. OP_CSV and BIP68 are enabled
2. Max tx size remains 1MB

The donor will create a transaction, with an input of 150BTC, and 10 
outputs:

1. 0 BTC to OP_RETURN garbage
2. 42 BTC to OP_1 OP_CSV
3. 21 BTC to OP_2 OP_CSV
4. 10.5 BTC to OP_3 OP_CSV
5. 5.25 BTC to OP_4 OP_CSV
6. 2.625 BTC to OP_5 OP_CSV
7. 1.3125 BTC to OP_6 OP_CSV
8. 0.65625 BTC to OP_7 OP_CSV
9. 0.328125 BTC to OP_8 OP_CSV
10. 0.328125 BTC to OP_9 OP_CSV

The first output will fill up the size to 1MB.

This tx could not be confirmed by a pre-hardfork miner because the 
coinbase tx will consume some block space. The first big block miner 
will be able to collect 66BTC of fee. The block confirming the first big 
block will collect 42BTC of fee, etc. This will create a long enough 
chain to bootstrap the hardfork.


The amounts are chosen so that the difference between consecutive rewards 
is at most about 25BTC (one block subsidy), so miners would have less 
incentive to create a fork instead of confirming others' blocks. However, a 
miner cartel may launch a 51% attack to collect all the money. Such an 
incentive may be reduced by adjusting the distribution of the donation. 
(Actually, such a cartel may be formed anytime, just to collect more block 
reward)
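
For reference, a quick sketch of the resulting reward schedule under the 
assumptions above (amounts in BTC):

    # Sketch: extra fee claimable by each successive block under the scheme above.
    outputs = [42, 21, 10.5, 5.25, 2.625, 1.3125, 0.65625, 0.328125, 0.328125]
    donation_fee = 150 - sum(outputs)      # fee of the block containing the 1MB tx
    rewards = [donation_fee] + outputs     # block +k can also spend the OP_k CSV output
    for depth, fee in enumerate(rewards):
        print(f"block +{depth}: {fee} BTC extra")   # 66, 42, 21, 10.5, ...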

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [BIP-draft] CHECKSEQUENCEVERIFY - An opcode for relative locktime

2015-08-23 Thread jl2012 via bitcoin-dev

Gregory Maxwell via bitcoin-dev wrote on 2015-08-23 21:01:



Separately, to Mark and Btcdrank: Adding an extra wrinkle to the
discussion: has any thought been given to representing one block with more
than one increment?  This would leave additional space for future
signaling, or allow, for example, higher resolution numbers for a
sharechain commitment.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


I think this comment is more related to BIP68 instead of OP_CSV? Without 
further complicating the BIP68, I believe the best way to leave room for 
improvement is to spend a bit in tx nVersion to indicate the activation 
of BIP68. I have raised this issue before with 
http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-August/010043.html 
However, it seems Mark isn't in favor of my proposal


The idea is not to permanently change the meaning of nSequence. 
Actually, BIP68 is only enforced if the most significant bit of the 
sequence number field is set. So BIP68 is optional, anyway. All I 
suggest is to move the flag from nSequence to nVersion. However, this 
will leave much bigger room for using nSequence for other purpose in the 
future.


AFAIK, nSequence is the only user definable and signed element in TxIn. 
There could be more interesting use of this field and we should not 
change its meaning permanently. (e.g. if nSequence had 8 bytes instead 
of 4 bytes, it could be used to indicate the value of the input to fix 
this problem: https://bitcointalk.org/index.php?topic=181734.0 )


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] CLTV/CSV/etc. deployment considerations due to XT/Not-BitcoinXT miners

2015-08-20 Thread jl2012 via bitcoin-dev

Peter Todd via bitcoin-dev wrote on 2015-08-19 01:50:




2) nVersion mask, with IsSuperMajority()

In this option the nVersion bits set by XT/Not-Bitcoin-XT miners would
be masked away, prior to applying standard IsSuperMajority() logic:

block.nVersion & ~0x20000007

This means that CLTV/CSV/etc. miners running Bitcoin Core would create
blocks with nVersion=8, 0b1000. From the perspective of the
CLTV/CSV/etc. IsSuperMajority() test, XT/Not-Bitcoin-XT miners would be
advertising blocks that do not trigger the soft-fork.

For the purpose of soft-fork warnings, the highest known version can
remain nVersion=8, which is triggered by both XT/Not-Bitcoin-XT blocks
as well as a future nVersion bits implementation. Equally,
XT/Not-Bitcoin-XT soft-fork warnings will be triggered, by having an
unknown bit set.

When nVersion bits is implemented by the Bitcoin protocol, the plan of
setting the high bits to 0b001 still works. The three lowest bits will
be unusable for some time, but will be eventually recoverable as
XT/Not-Bitcoin-XT mining ceases.

Equally, further IsSuperMajority() softforks can be accomplished with
the same masking technique.

This option does complicate the XT-coin protocol implementation in the
future. But that's their problem, and anyway, the maintainers
(Hearn/Andresen) has strenuously argued(5) against the use of 
soft-forks

and/or appear to be in favor of a more centralized mandatory update
schedule.(6)



If you are going to mask bits, would you consider masking all bits 
except the 4th bit? That way, other fork proposals could use other bits for 
voting concurrently.


And as I understand it, the masking is applied only during the voting 
stage? After the softfork is fully enforced with 95% support, the 
requirement will simply be nVersion >= 8, without any masking?
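
For clarity, a small sketch of what masking everything except the voted bit 
could look like, so several proposals could be tallied on different bits 
concurrently (bit numbering and threshold are illustrative):

    # Sketch: counting support for independent proposals on separate nVersion bits.
    def signals(nversion: int, bit: int) -> bool:
        return bool(nversion & (1 << bit))

    def is_super_majority(last_1000_versions, bit, threshold=950):
        # e.g. bit=3 for the 4th bit (value 8) mentioned above
        return sum(signals(v, bit) for v in last_1000_versions) >= threshold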

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Bitcoin is an experiment. Why don't we have an experimental hardfork?

2015-08-19 Thread jl2012 via bitcoin-dev

odinn via bitcoin-dev wrote on 2015-08-19 07:25:


 The big problem is

BIP101 being deployed as a Schism hardfork.


This is certainly a problem.




No, BitcoinXT won't become a Schism hardfork, or may be just for a few 
days, at most.


There is one, and only one scenario that BitcoinXT will win: it is 
supported by major exchanges, merchants, and investors, and they request 
miners to support it. When BIP101 is activated, these exchanges will 
refuse to accept or exchange tokens from the old chain. Miners in the 
old chain can't sell their newly generated coins and can't pay the 
electricity bill. They will soon realize that they are mining fool's 
gold and will be forced to switch to the new chain or sell their ASIC. 
The old chain will be abandoned and has no hope to revive without a 
hardfork to decrease the difficulty. The dust will settle in days if not 
hours.


Will the adoption of BitcoinXT be led by miners? No, it won't. Actually, 
Chinese miners who control 60% of the network have already said that they 
would not adopt XT. So they must not be the leaders in this revolution. 
Again, miners need to make sure they could sell their bitcoin at a good 
price, and that's not possible without the support of exchanges and 
investors.


What about that Not-Bitcoin-XT? The creator of the spoof client may stay 
anonymous, but the miners cannot. 95% of the blocks come from known 
entities and they have to be responsible for their actions. And again, 
they have real money at stake. If bitcoin is destroyed, their ASICs 
serve at best as very inefficient heaters.


So Bitcoin-XT is basically in a win-all-or-lose-all position. It all 
relies on one condition: the support of major exchanges, merchants, and 
investors. Their consensus is what really matters. With their consensus, 
that could not be a Schism hardfork. Without their consensus, nothing 
will happen.


---

Or let me analyse in a different angle. BitcoinXT is in no way similar 
to your examples of Schism hardforks. All of your examples, ASIC-reset 
hardfork, Anti-Block-creator hardfork, and Anti-cabal hardfork, are 
hostile to the current biggest miners and will destroy their investment. 
These miners have no choice but stick to the original protocol so 2 
chains MUST coexist. However, BIP101 has no such effect at all and 
miners may freely switch between the forks. They will always choose the 
most valuable fork, so only one fork will survive.

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] Bitcoin is an experiment. Why don't we have an experimental hardfork?

2015-08-18 Thread jl2012 via bitcoin-dev
As I understand, there is already a consensus among core devs that block 
size should/could be raised. The remaining questions are how, when, how 
much, and how fast. These are the questions for the coming Bitcoin 
Scalability Workshops but immediate consensus on these issues is not 
guaranteed.


Could we just stop the debate for a moment, and agree to a scheduled 
experimental hardfork?


Objectives (by order of importance):

1. The most important objective is to show the world that reaching 
consensus for a Bitcoin hardfork is possible. If we could have a 
successful one, we would have more in the future


2. With a slight increase in block size, to collect data for future 
hardforks


3. To slightly relieve the pressure of full blocks, with minimal 
adverse effects on network performance


With objectives 1 and 2 in mind, this is NOT intended to be a 
kick-the-can-down-the-road solution. The third objective is more like a 
side effect of this experiment.



Proposal (parameters in ** are my recommendations but negotiable):

1. Today, we all agree that some kind of block size hardfork will happen 
on t1=*1 June 2016*


2. If no other consensus could be reached before t2=*1 Feb 2016*, we 
will adopt the backup plan


3. The backup plan is: t3=*30 days* after m=*80%* of miner approval, but 
not before t1=*1 June 2016*, the block size is increased to s=*1.5MB*


4. If the backup plan is adopted, we all agree that a better solution 
should be found before t4=*31 Dec 2017*.


Rationale:

t1 = 1 June 2016 is chosen to make sure everyone has enough time to 
prepare for a hardfork. Although we do not know what exactly will 
happen, we know something must happen around that moment.


t2 = 1 Feb 2016 is chosen to allow 5 more months of negotiations (and 2 
months after the workshops). If it is successful, we don't need to 
activate the backup plan


t3 = 30 days is chosen to make sure every full nodes have enough time to 
upgrade after the actual hardfork date is confirmed


t4 = 31 Dec 2017 is chosen, with 1.5 year of data and further debate, 
hopefully we would find a better solution. It is important to 
acknowledge that the backup plan is not a final solution


m = 80%: We don't want a very small portion of miners to have the power 
to veto a hardfork, while it is important to make sure the new fork is 
secured by enough mining power. 80% is just a compromise.


s = 1.5MB. As the 1MB cap was set 5 years ago, there is no doubt that 
all types of technology have since improved by 50%. I don't mind making 
it a bit smaller but in that case not much valuable data could be 
gathered and the second objective of this experiment may not be 
achieved.
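
For concreteness, a sketch of the backup plan's activation rule with the 
parameters above; how miner approval is measured (e.g. over a rolling 
1000-block window) is left open by the proposal and only assumed here:

    # Sketch of the backup plan (parameters from the proposal; details assumed).
    from datetime import datetime, timedelta

    T1 = datetime(2016, 6, 1)      # t1: earliest possible hardfork date
    GRACE = timedelta(days=30)     # t3: grace period after miner approval
    NEW_SIZE = 1_500_000           # s: 1.5MB
    OLD_SIZE = 1_000_000

    def hardfork_time(approval_reached_at: datetime) -> datetime:
        """approval_reached_at: when m=80% miner approval was first reached."""
        return max(T1, approval_reached_at + GRACE)

    def max_block_size(now: datetime, approval_reached_at: datetime) -> int:
        return NEW_SIZE if now >= hardfork_time(approval_reached_at) else OLD_SIZE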




If the community as a whole could agree with this experimental hardfork, 
we could announce the plan on bitcoin.org and start coding the patch 
immediately. At the same time, exploration for a better solution 
continues. If no further consensus could be reached, a new version of 
Bitcoin Core with the patch will be released on or before 1 Feb 2016 and 
everyone will be asked to upgrade immediately.

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Miners are struggling with blocks far smaller than 750KB blocks and resorting to SPV mining

2015-08-17 Thread jl2012 via bitcoin-dev
The traffic between the pool server and individual hashers is far busier 
than 50kB/30s. If their bandwidth is so limited, hashers would have 
switched to other pools already.


All this data may prove is that they have very bad mining code. For 
example, their hashers may not be required to update the transaction 
list regularly. I don't think they are struggling. They are just too 
lazy, or think it's too risky, to improve their code. After all, they 
are generating half a million USD per day and a few seconds of downtime 
would hurt.


By the way, the vast majority of the full blocks (0.99MB) on the blockchain 
are generated by Chinese pools.

Luv Khemani via bitcoin-dev wrote on 2015-08-17 04:42:

Hi all,

 I previously mentioned in a post that I believe that technically
nodes are capable of handling blocks an order of magnitude larger than
the current blocksize limit, the only missing thing was an incentive
to run them. I have been monitoring the blockchain for the past couple
of weeks and am seeing that even miners who have all the incentives
are for whatever reason struggling to download and validate much
smaller blocks.

The data actually paints a very grim picture of the current
bandwidth/validating capacity of the global mining network.

See the following empty blocks mined despite a non-trivial elapsed
time from the previous block just from the past couple of days alone
(Data from insight.bitpay.com):

Empty block / Time since previous block / Size of previous block (bytes) / Mined by
370165   29s   720784   Antpool
370160   31s   50129    BTCChinaPool
370076   49s   469988   F2Pool
370059   34s   110994   Antpool
370057   73s   131603   Antpool

We have preceding blocks as small as 50KB with 30s passing and the
miner continues to mine empty blocks via SPV mining.
The most glaring case is Block 370057 where despite 73s elapsing and
the preceding block being a mere 131KB, the miner is unable to
download/validate fast enough to include transactions in his block.
Unless of course the miner is mining empty blocks on purpose, which
does not make sense as all of these pools do mine blocks with
transactions when the elapsed time is greater.

This is a cause for great concern, because if miners are SPV mining
for a whole minute for 750KB blocks, at 8MB blocks, the network will
just fall apart as a significant portion of the hashing power SPV
mines throughout. All a single malicious miner has to do is mine an
invalid block on purpose, let these pools SPV mine on top of them
while it mines a valid block free of their competition. Yes, these
pools deserve to lose money in that event, but the impact of reorgs
and many block orphans for anyone not running a full node could be
disastrous, especially more so in the XT world where Mike wants
everyone to be running SPV nodes. I simply don't see the XT fork
having any chance of surviving if SPV nodes are unreliable.

And if these pools go out of business, it will lead to even more
mining centralization which is already too centralized today.

Can anyone representing these pools comment on why this is happening?
Are these pools on Matt's relay network?


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev




Re: [bitcoin-dev] Annoucing Not-BitcoinXT

2015-08-16 Thread jl2012 via bitcoin-dev
Thanks to mining centralization, such attempts won't be successful. 
Asking mining pools to mine spoofing blocks in their real name is even 
harder than asking them to run the real BitcoinXT


Node count is always manipulable, there is nothing new. People running 
this will only be interpreted as XT-supporters.


Julie via bitcoin-dev wrote on 2015-08-16 18:34:

Announcing Not-BitcoinXT

https://github.com/xtbit/notbitcoinxt#not-bitcoin-xt




___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev




Re: [bitcoin-dev] The use of tx version field in BIP62 and 68

2015-08-08 Thread jl2012 via bitcoin-dev
I think I have explained my motivation but let me try to make it 
clearer.


For example, BIP62 says scriptPubKey evaluation will be required to 
result in a single non-zero value. If we had BIP62 before BIP16, P2SH 
could not be done in its current form because BIP16 leaves more than one 
element on the stack for non-upgrading nodes. BIP17 also violates BIP62.


BIP68 is "only enforced if the most significant bit of the sequence 
number field is set", so it is optional anyway. All I do is move the 
flag from the sequence number to the version number.


The blocksize debate shows how a permanent softfork may cause trouble 
later. We need to be very careful when doing further softforks, making 
sure we will have enough flexibility for further development.


Mark Friedenbach wrote on 2015-08-08 14:56:

It is not a bug that you are unable to selectively choose these
features with higher version numbers. The version selection is in
there at all because there is a possibility that there exists already
signed transactions which would violate these new rules. We wouldn't
want these transactions to become unspendable. However moving forward
there is no reason to selectively pick and choose which of these new
consensus rules you want to apply your transaction.
On Aug 8, 2015 11:51 AM, jl2012 via bitcoin-dev
bitcoin-dev@lists.linuxfoundation.org wrote:


BIP68 rules and some of the BIP62 rules are applied only if the tx
version is >=2 and >=3 respectively. Therefore, it is not possible
to create a tx which follows BIP62 but not BIP68. If we introduce v4
tx later, BIP62 and BIP68 will all become mandatory.

Some rules, e.g. scriptPubKey evaluation will be required to
result in a single non-zero value in BIP62, will cause trouble when
we try to introduce a new script system with softfork.

I suggest to divide the tx version field into 2 parts: the higher 4
bits and lower 28 bits.

BIP62 is active for a tx if its highest bits are 0000, and the
second lowest bit is 1.

BIP68 is active for a tx if its highest bits are 0000, and the
third lowest bit is 1.

So it will be easier for us to re-purpose the nSequence, or to take
advantage of malleability in the future. If this is adopted, the
nSequence high bit requirement in BIP68 becomes unnecessary as we
could easily switch it off.

The low bits will allow 28 independent BIPs and should be ample for
many years. When they are exhausted, we can switch the high bits to
a different number (1-15) and redefine the meaning of low bits. By
that time, some of the 28 BIPs might have become obsoleted or could
be merged.

(I'm not sure if there are other draft BIPs with similar
interpretation of tx version but the comments above should also
apply to them)
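
A minimal sketch of the proposed split, taking 0000 as the initial value of
the high bits:

    # Sketch of the proposed tx nVersion split: high 4 bits select a "table",
    # low 28 bits are independent per-BIP flags.
    def high_bits(nversion: int) -> int:
        return (nversion >> 28) & 0xF

    def low_bit(nversion: int, n: int) -> bool:
        """n = 1 for the lowest bit, 2 for the second lowest, and so on."""
        return bool(nversion & (1 << (n - 1)))

    def bip62_active(nversion: int) -> bool:
        return high_bits(nversion) == 0 and low_bit(nversion, 2)

    def bip68_active(nversion: int) -> bool:
        return high_bits(nversion) == 0 and low_bit(nversion, 3)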
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev [1]



Links:
--
[1] https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Fees and the block-finding process

2015-08-07 Thread jl2012 via bitcoin-dev

Pieter Wuille via bitcoin-dev wrote on 2015-08-07 12:28:

On Fri, Aug 7, 2015 at 5:55 PM, Gavin Andresen
gavinandre...@gmail.com wrote:


On Fri, Aug 7, 2015 at 11:16 AM, Pieter Wuille
pieter.wui...@gmail.com wrote:


I guess my question (and perhaps that's what Jorge is after): do
you feel that blocks should be increased in response to (or for
fear of) such a scenario.


I think there are multiple reasons to raise the maximum block size,
and yes, fear of Bad Things Happening as we run up against the 1MB
limit is one of the reasons.

I take the opinion of smart engineers who actually do resource
planning and have seen what happens when networks run out of
capacity very seriously.


This is a fundamental disagreement then. I believe that the demand is
infinite if you don't set a fee minimum (and I don't think we should),
and it just takes time for the market to find a way to fill whatever
is available - the rest goes into off-chain systems anyway. You will
run out of capacity at any size, and acting out of fear of that
reality does not improve the system. Whatever size blocks are actually
produced, I believe the result will either be something people
consider too small to be competitive (you mean Bitcoin can only do 24
transactions per second? sounds almost the same as you mean Bitcoin
can only do 3 transactions per second?), or something that is very
centralized in practice, and likely both.


What if we reduce the block size to 0.125MB? That will allow 0.375tx/s. 
If 3->24 sounds almost the same, 3->0.375 also sounds almost the same. 
We will have 50,000 full nodes, instead of 5,000, since it is so 
affordable to run a full node.


If 0.125MB sounds too extreme, what about 0.5/0.7/0.9MB? Are we going to 
have more full nodes?


No, I'm not trolling. I really want someone to tell me why we 
should/shouldn't reduce the block size. Are we going to have more or 
less full nodes if we reduce the block size?

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Block size implementation using Game Theory

2015-08-06 Thread jl2012 via bitcoin-dev
It won't work as you intended. If a miner has 95% of the hashing power, he 
would have a 95% chance of finding the next block and collecting the penalty. 
In the long term, he only pays 5% of the penalty. It's clearly biased 
against small miners.
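
The bias can be seen with a one-line expected-value sketch (the penalty 
amount is illustrative):

    # Expected net penalty when the penalty is paid forward to the next block's miner.
    def expected_net_penalty(penalty: float, hashrate_share: float) -> float:
        # With probability `hashrate_share` the same miner finds the next block
        # and collects the penalty back, so only the remainder is really lost.
        return penalty * (1 - hashrate_share)

    print(expected_net_penalty(6.25, 0.95))  # 95% miner: 0.3125, i.e. 5% of the penalty
    print(expected_net_penalty(6.25, 0.01))  # 1% miner: 6.1875, nearly the full penalty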


Instead, you should require the miners to burn the penalty. Whether this 
is a good idea is another issue.


Wes Green via bitcoin-dev wrote on 2015-08-06 19:52:

Bitcoin is built on game theory. Somehow we seem to have forgotten
that and are trying to fix our block size issue with magic numbers,
projected percentage growth of bandwidth speeds, time limits, etc...
There are instances where these types of solutions make sense, but
this doesn't appear to be one of them. Lets return to game theory.

Proposal: Allow any miner to, up to, double the block size at any
given time - but penalize them. Using the normal block reward,
whatever percentage increase the miner makes over the previous limit
is taken from both the normal reward and fees. The left over is
rewarded to the next miner that finds a block.

If blocks stay smaller for an extended period of time, it goes back
down to the previous limit/ x amount decrease/% decrease  (up for
debate)

Why would this work?: Miners only have an incentive to raise the limit
when they feel there is organic growth in the network. Spam attacks,
block bloat etc would have to be dealt with as it is currently. There
is no incentive to raise the size for spam because it will subside and
the penalty will have been for nothing when the attack ends and block
size goes back down.

I believe it would have the nice side effect of forcing miners to hold
the whole block chain. I believe SPV does not allow you to see all the
transactions in a block and be able to calculate if you should be
adding more to your reward transaction if the last miner made the
blocks bigger. Because of this, the miners would also have an eye on
blockchain size and won't want it getting huge too fast (outside of
Moore's Law or Nielsen's Law). Adding to the gamification.

This system would encourage block size growth due to organic growth
and the penalty would encourage it to be slow as to still keep reward
high and preserve ROE.

What this would look like: The miners start seeing what looks like
natural network growth, and make the decision (or program an
algorithm, the beauty is it leaves the how up to the miners) to
increase the blocksize. They think that, in the long run, having
larger blocks will increase their revenue and its worth taking the hit
now for more fees later. They increase the size to 1.25 MB. As a
result, their reward would be 18.75 (75%). The miner fees were .5BTC.
The miner fees are also reduced to .375BTC. Everyone who receives that
block can easily calculate 1) if the previous miner gave themselves
the proper reward 2) what the next reward should be if they win it.
Miners now start building blocks with a 31.25 reward transaction and
miner fee + .125.


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev




[bitcoin-dev] Wrapping up the block size debate with voting

2015-08-04 Thread jl2012 via bitcoin-dev
As now we have some concrete proposals 
(https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-July/009808.html), 
I think we should wrap up the endless debate with voting by different 
stakeholder groups.


-
Candidate proposals

Candidate proposals must be complete BIPs with reference implementation 
which are ready to merge immediately. They must first go through the 
usual peer review process and get approved by the developers in a 
technical standpoint, without political or philosophical considerations. 
Any fine-tuning of a candidate proposal may not become an independent 
candidate, unless it introduces some “real” difference. “No change” is 
also one of the voting options.

-
Voter groups

There will be several voter groups and their votes will be counted 
independently. (The time frames mentioned below are just for example.)


Miners: miners of blocks with timestamps between 1 and 30 Sept 2015 are 
eligible to vote. One block one vote. Miners will cast their votes by 
signing with the bitcoin address in coinbase. If there are multiple 
coinbase outputs, the vote is discounted by output value / total 
coinbase output value.
Many well-known pools are reusing addresses and they may not need to 
digitally sign their votes. In case there is any dispute, the digitally 
signed vote will be counted.


Bitcoin holders: People with bitcoin in the UTXO at block 372500 (around 
early September) are eligible to vote. The total “balance” of each 
scriptPubKey is calculated and this is the weight of the vote. People 
will cast their votes by digital signature.

Special output types:
Multi-sig: vote must be signed according to the setting of the 
multi-sig.

P2SH: the serialized script must be provided
Publicly known private key: not eligible to vote
Non-standard script according to latest Bitcoin Core rules: not eligible 
to vote in general. May be judged case-by-case


Developers: People with a certain amount of contribution in the past year 
in Bitcoin Core or other open sources wallet / alternative 
implementations. One person one vote.


Exchanges: Centralized exchanges listed on Coindesk Bitcoin Index, 
Winkdex, or NYSE Bitcoin index, with 30-day volume >100,000BTC are 
invited. This includes Bitfinex, BTC China, BitStamp, BTC-E, itBit, 
OKCoin, Huobi, Coinbase. Exchanges operated for at least 1 year with 
>100,000BTC 30-day volume may also apply to be a voter in this category. 
One exchange one vote.


Merchants and service providers: This category includes all bitcoin 
accepting business that is not centralized fiat-currency exchange, e.g. 
virtual or physical stores, gambling sites, online wallet service, 
payment processors like Bitpay, decentralized exchange like 
Localbitcoin, ETF operators like Secondmarket Bitcoin Investment Trust. 
They must directly process bitcoin without relying on third party. They 
should process at least 100BTC in the last 30-days. One merchant one 
vote.


Full nodes operators: People operating full nodes for at least 168 hours 
(1 week) in July 2015 are eligible to vote, determined by the log of 
Bitnodes. Time is set in the past to avoid manipulation. One IP address 
one vote. Vote must be sent from the node’s IP address.



Voting system

Single transferable vote is applied. 
(https://en.wikipedia.org/wiki/Single_transferable_vote). Voters are 
required to rank their preference with “1”, “2”, “3”, etc, or use “N” to 
indicate rejection of a candidate.
Vote counting starts with every voter’s first choice. The candidate with 
fewest votes is eliminated and those votes are transferred according to 
their second choice. This process repeats until only one candidate is 
left, which is the most popular candidate. The result is presented as 
the approval rate: final votes for the most popular candidate / all 
valid votes


After the most popular candidate is determined, the whole counting 
process is repeated by eliminating this candidate, which will find the 
approval rate for the second most popular candidate. The process repeats 
until all proposals are ranked with the approval rate calculated.
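
For clarity, a minimal sketch of the elimination count just described, using 
unweighted ballots for simplicity (real votes would carry the weights defined 
for each voter group, and ties are broken arbitrarily here):

    # Sketch of the elimination count described above (single round, one winner).
    from collections import Counter

    def most_popular(ballots, candidates):
        """ballots: lists of candidates in order of preference; rejected ones omitted."""
        remaining = set(candidates)
        while len(remaining) > 1:
            tally = Counter()
            for b in ballots:
                for choice in b:                 # highest still-remaining preference
                    if choice in remaining:
                        tally[choice] += 1
                        break
            remaining.remove(min(remaining, key=lambda c: tally[c]))  # drop the weakest
        winner = remaining.pop()
        final_votes = sum(1 for b in ballots if winner in b)
        return winner, final_votes / len(ballots)  # approval rate = final votes / valid votes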



Interpretation of results:

It is possible that a candidate with lower ranking to have higher 
approval rate. However, ranking is more important than the approval 
rate, unless the difference in approval rate is really huge. >90% support 
would be excellent; >70% is good; >50% is marginal; <50% is failed.



Technical issues:

Voting by the miners, developers, exchanges, and merchants are probably 
the easiest. We need a trusted person to verify the voters’ identity by 
email, website, or digital signature. The trusted person will collect 
votes and publish the named votes so anyone could verify the results.


For full nodes, we need a trusted person to setup a website as an 
interface to vote. The votes with IP address will be published.


For bitcoin holders, 

Re: [bitcoin-dev] Wrapping up the block size debate with voting

2015-08-04 Thread jl2012 via bitcoin-dev

Bitcoin's consensus rules are a consensus system


What is your definition of consensus? Do you mean 100% agreement? 
Without a vote how do you know there is 100% (or whatever percentage) 
agreement?



Find a solution that everyone agrees on, or don't.


Who is "everyone"?

Pieter Wuille wrote on 2015-08-04 05:03:

I would like to withdraw my proposal from your self-appointed vote.

If you want to let a majority decide about economic policy of a
currency, I suggest fiat currencies. They have been using this
approach for quite a while, I hear.

Bitcoin's consensus rules are a consensus system, not a democracy.
Find a solution that everyone agrees on, or don't.
On Aug 4, 2015 9:51 AM, jl2012 via bitcoin-dev
bitcoin-dev@lists.linuxfoundation.org wrote:


As now we have some concrete proposals


(https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-July/009808.html

[1]), I think we should wrap up the endless debate with voting by
different stakeholder groups.

-
Candidate proposals

Candidate proposals must be complete BIPs with reference
implementation which are ready to merge immediately. They must first
go through the usual peer review process and get approved by the
developers in a technical standpoint, without political or
philosophical considerations. Any fine tune of a candidate proposal
may not become an independent candidate, unless it introduces some
“real” difference. “No change” is also one of the voting
options.
-
Voter groups

There will be several voter groups and their votes will be counted
independently. (The time frames mentioned below are just for
example.)

Miners: miners of blocks with timestamp between 1 to 30 Sept 2015
are eligible to vote. One block one vote. Miners will cast their
votes by signing with the bitcoin address in coinbase. If there are
multiple coinbase outputs, the vote is discounted by output value /
total coinbase output value.
Many well-known pools are reusing addresses and they may not need
to digitally sign their votes. In case there is any dispute, the
digitally signed vote will be counted.

Bitcoin holders: People with bitcoin in the UTXO at block 372500
(around early September) are eligible to vote. The total
“balance” of each scriptPubKey is calculated and this is the
weight of the vote. People will cast their votes by digital
signature.
Special output types:
Multi-sig: vote must be signed according to the setting of the
multi-sig.
P2SH: the serialized script must be provided
Publicly known private key: not eligible to vote
Non-standard script according to latest Bitcoin Core rules: not
eligible to vote in general. May be judged case-by-case

Developers: People with certain amount of contribution in the past
year in Bitcoin Core or other open sources wallet / alternative
implementations. One person one vote.

Exchanges: Centralized exchanges listed on Coindesk Bitcoin Index,
Winkdex, or NYSE Bitcoin index, with 30-day volume >100,000BTC are
invited. This includes Bitfinex, BTC China, BitStamp, BTC-E, itBit,
OKCoin, Huobi, Coinbase. Exchanges operated for at least 1 year with
>100,000BTC 30-day volume may also apply to be a voter in this
category. One exchange one vote.

Merchants and service providers: This category includes all bitcoin
accepting business that is not centralized fiat-currency exchange,
e.g. virtual or physical stores, gambling sites, online wallet
service, payment processors like Bitpay, decentralized exchange like
Localbitcoin, ETF operators like Secondmarket Bitcoin Investment
Trust. They must directly process bitcoin without relying on third
party. They should process at least 100BTC in the last 30-days. One
merchant one vote.

Full nodes operators: People operating full nodes for at least 168
hours (1 week) in July 2015 are eligible to vote, determined by the
log of Bitnodes. Time is set in the past to avoid manipulation. One
IP address one vote. Vote must be sent from the node’s IP address.


Voting system

Single transferable vote is applied.
(https://en.wikipedia.org/wiki/Single_transferable_vote [2]). Voters
are required to rank their preference with “1”, “2”,
“3”, etc, or use “N” to indicate rejection of a candidate.
Vote counting starts with every voter’s first choice. The
candidate with fewest votes is eliminated and those votes are
transferred according to their second choice. This process repeats
until only one candidate is left, which is the most popular
candidate. The result is presented as the approval rate: final votes
for the most popular candidate / all valid votes

After the most popular candidate is determined, the whole counting
process is repeated by eliminating this candidate, which will find
the approval rate for the second most popular candidate. The process
repeats until all proposals are ranked with the approval rate
calculated.


Interpretation of results:

It is possible that a candidate with lower ranking

Re: [bitcoin-dev] Wrapping up the block size debate with voting

2015-08-04 Thread jl2012 via bitcoin-dev
As I mentioned, the candidate proposals must go through the usual peer 
review process, which includes proper testing, I assume.


Scaling down is always possible with softforks, or miners will simply 
produce smaller blocks. BIP100 has a scaling down mechanism but it still 
requires miners to vote so it doesn't really make much difference


But anyway, this is off-topic, as candidate proposals may include 
mechanism for scaling down.


Venzen Khaosan wrote on 2015-08-04 05:23:


It is not scientific or sensible to go from proposal stage straight to
voting and then implementation stage.

The proposals you have diligently gathered, summarized and presented
in your document must go through testing, and scenario simulation with
published results, in order for objective evaluation to be made 
possible.


For that matter, even running up against a capacity limit has not
been simulated or tested. Additionally, (and looking the other way)
there is a lack of provision for scaling DOWN in the current proposals
- hard to envision, yes - but what goes up will eventually come down.
A global credit contraction is not unlikely, nor is natural disaster,
and these scenarios have implications for usage, scale, degree of
decentralization and security.

CS is science, there is no reason for this generation not to apply
rigorous Computer Science to Bitcoin.

Venzen


On 08/04/2015 02:50 PM, jl2012 via bitcoin-dev wrote:

As now we have some concrete proposals
(https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-July/009808.html),



I think we should wrap up the endless debate with voting by different

stakeholder groups.

- Candidate proposals

Candidate proposals must be complete BIPs with reference
implementation which are ready to merge immediately. They must
first go through the usual peer review process and get approved by
the developers in a technical standpoint, without political or
philosophical considerations. Any fine tune of a candidate proposal
may not become an independent candidate, unless it introduces some
“real” difference. “No change” is also one of the voting options.
- Voter groups

There will be several voter groups and their votes will be counted
independently. (The time frames mentioned below are just for
example.)

Miners: miners of blocks with timestamp between 1 to 30 Sept 2015
are eligible to vote. One block one vote. Miners will cast their
votes by signing with the bitcoin address in coinbase. If there are
multiple coinbase outputs, the vote is discounted by output value /
total coinbase output value. Many well-known pools are reusing
addresses and they may not need to digitally sign their votes. In
case there is any dispute, the digitally signed vote will be
counted.

Bitcoin holders: People with bitcoin in the UTXO at block 372500
(around early September) are eligible to vote. The total “balance”
of each scriptPubKey is calculated and this is the weight of the
vote. People will cast their votes by digital signature. Special
output types: Multi-sig: vote must be signed according to the
setting of the multi-sig. P2SH: the serialized script must be
provided Publicly known private key: not eligible to vote
Non-standard script according to latest Bitcoin Core rules: not
eligible to vote in general. May be judged case-by-case

Developers: People with certain amount of contribution in the past
year in Bitcoin Core or other open sources wallet / alternative
implementations. One person one vote.

Exchanges: Centralized exchanges listed on Coindesk Bitcoin Index,
Winkdex, or NYSE Bitcoin index, with 30 days volume 100,000BTC
are invited. This includes Bitfinex, BTC China, BitStamp, BTC-E,
itBit, OKCoin, Huobi, Coinbase. Exchanges operated for at least 1
year with 100,000BTC 30-day volume may also apply to be a voter in
this category. One exchange one vote.

Merchants and service providers: This category includes all
bitcoin accepting business that is not centralized fiat-currency
exchange, e.g. virtual or physical stores, gambling sites, online
wallet service, payment processors like Bitpay, decentralized
exchange like Localbitcoin, ETF operators like Secondmarket Bitcoin
Investment Trust. They must directly process bitcoin without
relying on third party. They should process at least 100BTC in the
last 30-days. One merchant one vote.

Full nodes operators: People operating full nodes for at least 168
hours (1 week) in July 2015 are eligible to vote, determined by the
log of Bitnodes. Time is set in the past to avoid manipulation. One
IP address one vote. Vote must be sent from the node’s IP address.

 Voting system

Single transferable vote is applied.
(https://en.wikipedia.org/wiki/Single_transferable_vote). Voters
are required to rank their preference with “1”, “2”, “3”, etc, or
use “N” to indicate rejection of a candidate. Vote counting starts
with every voter’s first

Re: [bitcoin-dev] BIP draft: Hardfork bit

2015-08-03 Thread jl2012 via bitcoin-dev
I have put it on GitHub: 
https://github.com/jl2012/bips/blob/master/hardforkbit.mediawiki


I removed the specification of coinbase message to make it simpler. 
Instead, it requires that a flag block must not be shared by multiple 
hardfork proposals.


I'm not sure whether it is a Standard, Informational, or Process BIP

I'm also thinking whether we should call it hardfork bit, hardfork 
flag, or with other name.


Michael Ruddy 於 2015-08-02 06:53 寫到:

I think your hardfork bit proposal is clever.
It addresses the particular valid concern of re-org facing users of a
fork that a small/near/fluctuating majority, or less, of mining power
supported.
While the economic majority argument may be enough on its own in
that case, it still has some aspect of being a hand wave.
This proposal adds support to those economic actors, which makes it
easier for them to switch if/when they choose. That is, it provides a
good fallback mechanism that allows them to make a decision and say,
we're doing this.
Do you have the latest version up on github, or someplace where it
would be easier to collaborate on the specific text?


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] A compromise between BIP101 and Pieter's proposal

2015-07-31 Thread jl2012 via bitcoin-dev
Yes, data-center operators are bound to follow laws, including NSLs  
and gag orders. How about your ISP? Is it bound to follow laws,  
including NSLs and gag orders?

https://edri.org/irish_isp_introduces_blocking/

Do you think everyone should run a full node behind TOR? No way, your  
repressive government could just block TOR:

http://www.technologyreview.com/view/427413/how-china-blocks-the-tor-anonymity-network/

Or they could raid your home and seize your Raspberry Pi if they  
couldn't read your encrypted internet traffic. You will have a hard  
time proving you are not using TOR for child porn or cocaine.

https://en.wikipedia.org/wiki/Encryption_ban_proposal_in_the_United_Kingdom

If you are living in a country like this, running Bitcoin in an  
offshore VPS could be much easier. Anyway, Bitcoin shouldn't be your  
first thing to worry about. Revolution is probably your only choice.


Data-centers would get hacked. How about your Raspberry Pi?

Corrupt data-center employee is probably the only valid concern.  
However, there is nothing (except cost) to stop you from establishing  
multiple full nodes all over the world. If your Raspberry Pi at home  
could no longer fully validate the chain, it could become a  
header-only node to make sure your VPS full nodes are following the  
correct chaintip. You may even buy hourly charged cloud hosting in  
different countries to run header-only nodes at negligible cost.


There is no single point of failure in a decentralized network. Having  
multiple nodes will also save you from Sybil attack and geopolitical  
risks. Again, if all data-centres and governments in the world are  
turning against Bitcoin, it is delusional to think we could fight  
against them without using any real weapon.


By the way, I'm quite confident that my current full node at home is  
capable of running with 8MB blocks.



Quoting Adam Back a...@cypherspace.org:


I think the "trust the data-center" logic obviously fails, and I was talking
about this scenario in the post you are replying to.  You are trusting the
data-center operator period.  If one could trust data-centers to run
verified code, to not get hacked, filter traffic, respond to court orders
without notifying you etc that would be great but that's unfortunately not
what happens.

Data-center operators are bound to follow laws, including NSLs and gag
orders.  They also get hacked, employ humans who can be corrupt,
blackmailed, and themselves centralisation points for policy attack.
Snowden related disclosures and keeping aware of security show this is very
real.

This isn't much about bitcoin even, its just security reality for hosting
anything intended to be secure via decentralisation, or just hosting in
general while at risk of political or policy attack.

Adam



___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] A compromise between BIP101 and Pieter's proposal

2015-07-31 Thread jl2012 via bitcoin-dev
There is a summary of the proposals in my previous mail at  
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-July/009808.html


I think there could be a compromise between Gavin's BIP101 and  
Pieter's proposal (called BIP103 here). Below I'm trying to play  
with the parameters, with reasons:


1. Initiation: BIP34 style voting, with support of 750 out of the last  
1000 blocks. The hardfork bit mechanism might be used:  
http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-July/009576.html


Rationale: This follows BIP101, to make sure the new chain is secure.  
Also, no miner would like to be the first one to mine a large block if  
they don't know how many others would accept it.


2. Starting date: 30 days after 75% miner support, but not before  
2016-01-12 00:00 UTC


Rationale: A 30-day grace period is given to make sure everyone has  
enough time to follow. This is a compromise between 14 day in BIP101  
and 1 year in BIP103. I tend to agree with BIP101. Even if 1 year is  
given, people will just do it on the 364th day if they opt to  
procrastinate.


2016-01-12 00:00 UTC is Monday evening in US and Tuesday morning in  
China. Most pool operators and devs should be back from new year  
holiday and not sleeping. (If the initiation is delayed, we may  
require that it must be UTC Tuesday midnight)


3. The block size at 2016-01-12 will be 1,414,213 bytes, and  
multiplied by 1.414213 every 2^23 seconds (about 97 days) until exactly  
8MB is reached on 2017-05-11.


Rationale: Instead of jumping to 8MB, I suggest to increase it  
gradually to 8MB in 16 months. 8MB should not be particularly painful  
to run even with current equipment (you may see my earlier post on  
bitctointalk: https://bitcointalk.org/index.php?topic=1054482.0). 8MB  
is also agreed by Chinese miners, who control 60% of the network.


4. After 8MB is reached, the block size will be increased by 6.714%  
every 97 days, which is equivalent to exactly octupling (8x) every 8.5  
years, or doubling every 2.9 years, or +27.67% per year. Growth stops at  
4096MB on 2042-11-17.


Rationale: This is a compromise between 17.7% p.a. of BIP103 and 41.4%  
p.a. of BIP101. This will take us almost 8 years from now just to go  
back to the original 32MB size (4 years for BIP101 and 22 years for  
BIP103)


SSD price is expected to drop by 50%/year in the coming years. In  
2020, we will only need to pay 2% of the current price for SSDs. A 98% price  
reduction is enough for 40 years of 27.67% growth.

Source: http://wikibon.org/wiki/v/Evolution_of_All-Flash_Array_Architectures

Global bandwidth is expected to grow by 37%/year until 2021 so 27.67%  
should be safe at least for the coming 10 years.
Source:  
https://www.telegeography.com/research-services/global-bandwidth-forecast-service/


The final cap is a compromise between 8192MB@2036 of BIP101 and  
2048MB@2063 of BIP103
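
To put the whole schedule in one place, a small sketch, with dates 
approximated as whole 97-day periods counted from 2016-01-12:

    # Sketch of the proposed schedule: ~sqrt(2) per 97-day period up to 8MB,
    # then +6.714% per period, hard-capped at 4096MB.
    def block_size_mb(periods: int) -> float:
        """Block size limit (MB) `periods` 97-day periods after 2016-01-12."""
        if periods < 5:
            return 1.414213 * 1.414213 ** periods     # phase 1: 1.414213MB -> 8MB
        return min(8.0 * 1.06714 ** (periods - 5),    # phase 2: +6.714% per period
                   4096.0)                            # final cap, reached around 2042-11

    for p in (0, 5, 5 + 32, 5 + 96):    # activation, 2017-05, ~8.5 years later, ~2042
        print(p, round(block_size_mb(p), 3))          # 1.414213, 8.0, ~64, 4096.0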



---

Generally speaking, I think we need to have a faster growth in the  
beginning, just to normalize the block size to a more reasonable one.  
After all, the 1MB cap was introduced when Bitcoin was practically  
worthless and with inefficient design. We need to decide a new  
optimal size based on current adoption and technology.


About fee market: I do agree we need a fee market, but the fee  
pressure must not be too high at this moment, when the block reward is  
still miners' main income source. We already have a fee market: miners  
will avoid building big blocks with low fee because that will increase  
the orphan risk for nothing.


About secondary layers: I respect everyone building secondary layers  
over the blockchain. However, while the SWIFT settlement network is  
processing 300tps, Bitcoin's current 7tps is just nothing more than an  
experiment. If the underlying settlement system does not have enough  
capacity, any secondary layer built on it will also fall apart. For  
example, people may DoS attack a Lightning network by provoking a  
huge number of settlement requests, some of which may not be confirmed on  
time. Ultimately, this will increase the risk of running a LN service  
and increase the tx fee inside LN. After all, the value of a secondary  
layer primarily comes from instant confirmation, not scarcity of the  
block space.


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] CORRECTIONS: A summary of block size hardfork proposals

2015-07-30 Thread jl2012 via bitcoin-dev

I am making some corrections to my previous summary

Currently, there are 4 block size BIP by Bitcoin developers:

BIP100 by Jeff:  
http://gtf.org/garzik/bitcoin/BIP100-blocksizechangeproposal.pdf
BIP101 by Gavin:  
https://github.com/bitcoin/bips/blob/master/bip-0101.mediawiki

BIP102 by Jeff: https://github.com/bitcoin/bips/pull/173/files
BIP??? by Pieter (called BIP103 below):  
https://gist.github.com/sipa/c65665fc360ca7a176a6


To facilitate further discussion, I'd like to summarize these  
proposals by a series of questions. Please correct me if I'm wrong.  
Things like the sigop limit are less controversial and are not shown.


Should we use a miner voting mechanism to initiate the hardfork?
BIP100: Yes, support with 10800 out of last 12000 blocks (90%)
BIP101: Yes, support with 750 out of last 1000 blocks (75%)
BIP102: No
BIP103: No

When should we initiate the hardfork?
BIP100: 2016-01-11#
BIP101: 2 weeks after 75% miner support, but not before 2016-01-11
BIP102: 2015-11-11
BIP103: 2017-01-01

# The network does not actually fork until having 90% miner support

What should be the block size at initiation?
BIP100: 1MB
BIP101: 8MB*
BIP102: 2MB
BIP103: 1MB

* It depends on the exact time of initiation, e.g. 8MB if initiated on  
2016-01-11, 16MB if initiated on 2018-01-10.


Should we allow further increase / decrease?
BIP100: By miner voting, 0.5x - 2x every 12000 blocks (~3 months)
BIP101: Double every 2 years, with linear interpolations in between  
(41.4% p.a.)

BIP102: No
BIP103: +4.4% every 97 days (double every 4.3 years, or 17.7% p.a.)

The earliest date for a >=2MB block?
BIP100: 2016-04-03^
BIP101: 2016-01-11
BIP102: 2015-11-11
BIP103: 2020-12-27

^ Assuming 10-minute blocks, and that votes cast before 2016-01-11 are not counted

What should be the final block size?
BIP100: 32MB is the max, but it is possible to reduce by miner voting
BIP101: 8192MB
BIP102: 2MB
BIP103: 2048MB

When should we have the final block size?
BIP100: Decided by miners
BIP101: 2036-01-06
BIP102: 2015-11-11
BIP103: 2063-07-09



___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP draft: Hardfork bit

2015-07-23 Thread jl2012 via bitcoin-dev


Quoting Tier Nolan via bitcoin-dev bitcoin-dev@lists.linuxfoundation.org:


On Thu, Jul 23, 2015 at 5:23 PM, jl2012 via bitcoin-dev 
bitcoin-dev@lists.linuxfoundation.org wrote:


2) Full nodes and SPV nodes following original consensus rules may not be
aware of the deployment of a hardfork. They may stick to an
economic-minority fork and unknowingly accept devalued legacy tokens.



This change means that they are kicked off the main chain immediately when
the fork activates.

The change is itself a hard fork.  Clients have to be updated to get the
benefits.


I refrain from calling it the main chain. I use original chain and  
new chain instead as I make no assumption about the distribution of  
mining power. This BIP still works when we have a 50/50 hardfork. The  
main point is to protect all users on both chains, and allow them to  
make an informed choice.




3) In the case which the original consensus rules are also valid under the

new consensus rules, users following the new chain may unexpectedly reorg
back to the original chain if it grows faster than the new one. People may
find their confirmed transactions becoming unconfirmed and lose money.



I don't understand the situation here.  Is the assumption of a group of
miners suddenly switching (for example, they realise that they didn't
intend to support the new rules)?



Again, as I make no assumption about the mining power distribution,  
the new chain may actually have less miner support. Without any  
protection (AFAIK, for example, BIP100, 101, 102), the weaker new  
chain will get 51%-attacked by the original chain constantly.





Flag block is constructed in a way that nodes with the original consensus
rules must reject. On the other hand, nodes with the new consensus rules
must reject a block if it is not a flag block while it is supposed to be.
To achieve these goals, the flag block must 1) have the hardfork bit
setting to 1, 2) include a short predetermined unique description of the
hardfork anywhere in its coinbase, and 3) follow any other rules required
by the hardfork. If these conditions are not fully satisfied, upgraded
nodes shall reject the block.



Ok, so set the bit and then include BIP-GIT-HASH of the canonical BIP on
github in the coinbase?


I guess the git hash is not known until the code is written? (correct  
me if I'm wrong) As the coinbase message is consensus-critical, it  
must be part of the source code and therefore you can't use any kind  
of hash of the code itself (a chicken-and-egg problem)



Since it is a hard fork, the version field could be completely
re-purposed.  Set the bit and add the BIP number as the lower bits in the
version field.  This lets SPV clients check if they know about the hard
fork.


This may not be compatible with the other version bits voting mechanisms.


The network protocol could be updated to add getdata support for asking
for a coinbase-only merkleblock.  This would allow SPV clients to obtain
the coinbase.


Yes



Automatic warning system: When a flag block is found on the network, full

nodes and SPV nodes should look into its coinbase. They should alert their
users and/or stop accepting incoming transactions if it is an unknown
hardfork. It should be noted that the warning system could become a DoS
vector if the attacker is willing to give up the block reward. Therefore,
the warning may be issued only if a few blocks are built on top of the flag
block in a reasonable time frame. This will in turn increase the risk in
case of a real planned hardfork so it is up to the wallet programmers to
decide the optimal strategy. Human warning system (e.g. the emergency alert
system in Bitcoin Core) could fill the gap.



If the rule was that hard forks only take effect 100 blocks after the flag
block, then this problem is eliminated.

Emergency hard forks may still have to take effect immediately though, so
it would have to be a custom not a rule.


The flag block itself is a hardfork already and old miners will not  
mine on top of the flag block. So your suggestion won't be helpful in  
this situation.


To make it really meaningful, we need to consume one more bit of the  
'version' field (notice bit). Supporting miners will turn on the  
notice bit, and include a message in coinbase (notice block). When a  
full node/SPV node find many notice blocks with the same coinbase  
message, they could bet that the subsequent flag block is a legit one.  
However, an attacker may still troll you by injecting an invalid flag  
block after many legit notice blocks. So I'm not sure if it is worth  
the added complexity.
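
To make the discussion concrete, a small sketch of the checks involved; the 
bit positions and message format are illustrative only, not part of the BIP 
text:

    # Sketch of flag-block / notice-block handling as discussed (illustrative only).
    HARDFORK_BIT = 1 << 1                     # example position of the hardfork bit
    NOTICE_BIT = 1 << 2                       # example position of the proposed notice bit
    EXPECTED_MSG = b"/HF:example-hardfork/"   # predetermined unique description (example)

    def valid_flag_block(nversion: int, coinbase: bytes) -> bool:
        # Flag block: hardfork bit set AND the predetermined description in the coinbase.
        return bool(nversion & HARDFORK_BIT) and EXPECTED_MSG in coinbase

    def fork_widely_signalled(recent_blocks, msg: bytes, threshold: int = 10) -> bool:
        """recent_blocks: list of (nversion, coinbase) pairs.
        Warning-system heuristic: many notice blocks carry the same message."""
        return sum(1 for v, cb in recent_blocks
                   if v & NOTICE_BIT and msg in cb) >= threshold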




___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev