Re: [bitcoin-dev] OP_DIFFICULTY to enable difficulty hedges (bets) without an oracle and 3rd party.

2019-05-24 Thread Natanael via bitcoin-dev
On Thu, May 23, 2019 at 9:58 PM Pieter Wuille via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> If the difficulty can be directly observed by the script language, you
> would need to re-evaluate all scripts in unconfirmed transactions
> whenever the difficulty changes. This complicates implementation of
> mempools, but it also makes reasoning about validity of (chains of)
> unconfirmed transactions harder, as an unconfirmed predecessor may
> have conditions that change over time.


To deal with potentially wildly varying difficulty, could the value exposed
instead be the sum of accumulated PoW - in other words, the sum of every
block's difficulty value across the entire chain? This value only rises,
unless a reorg happens after a difficulty drop (which is only likely to
result from users manually blacklisting an otherwise valid block several
blocks back in the chain).

This mimics the effect of the block number which only grows. So if you're
starting at time A with difficulty X, then you'd estimate what you think
the accumulated PoW ought to be at time B with expected difficulty Y (as
compared to the current value at time A), and put that value into the
script.
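
As a rough sketch of that estimation (Python; the function names and the
linear-difficulty assumption are mine, purely for illustration):

    # Chain work is the running sum of per-block difficulty. To set the
    # bet's threshold, estimate what that sum should reach by time B,
    # assuming difficulty moves roughly linearly from X to Y.
    def estimated_chainwork(current_chainwork, difficulty_x, difficulty_y,
                            blocks_until_b):
        avg_difficulty = (difficulty_x + difficulty_y) / 2
        return current_chainwork + avg_difficulty * blocks_until_b

The script would then compare the exposed accumulated-PoW value against
this precomputed threshold.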
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] new BIP: Self balancing between excessively low/high fees and block size

2019-04-07 Thread Natanael via bitcoin-dev
Related ideas previously submitted by me:

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-March/013885.html

Title: Block size adjustment idea - expedience fees + difficulty scaling
proportional to block size (+ fee pool)

On Sun, Apr 7, 2019, 17:45, simondev1 via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Dear bitcoin developers,
>
> New BIP: https://github.com/bitcoin/bips/pull/774
>
> ==Abstract==
> Logarithm of transaction fee limits block size.
>
> ==Motivation==
> Keep block space small.
> Waste less with spam transactions.
> Auto balance Fees: Increase very low fees, Decrease very high fees.
> Allow larger size when sender pays a lot.
> Allow wallets to calculate/display how much average free block space there
> is for each fee price.
> Allow senders to have more control about how the fee/priority of their
> transaction will behave, especially in the case of increased adoption in
> the future.
>
> ==Specification==
> Every transaction has to fit into the following block space:
> Input variable 'FeeInSatoshiPerByte': Must be positive or 0
> type: double
> unit: Satoshi per byte
> Output:
> type: uint
> unit: bytes
> Formula:
> floor( log10( 1.1 + FeeInSatoshiPerByte ) * 1024 * 1024 )
>
> ==Implementation==
> Sort transactions by FeeInSatoshiPerByte (lowest first)
> For each transaction, starting from the lowest FeeInSatoshiPerByte: Sum up
> the bytes of space used so far. Check that the summed-up bytes of space
> used so far is less than or equal to the formula result.
> If this is valid for each transaction then the blocksize is valid.
>
> ==Backward compatibility==
> Soft fork: If applied AND old hardcoded block size limit is kept.
> Hard fork: If applied AND old hardcoded block size limit is removed.
>
> Regards, simondev1
>
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
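
For concreteness, a minimal sketch of the proposed validity check (Python;
the transaction representation is assumed, illustration only):

    import math

    def max_space_for_fee(fee_sat_per_byte):
        # The formula from the specification above.
        return math.floor(math.log10(1.1 + fee_sat_per_byte) * 1024 * 1024)

    def block_size_valid(txs):
        # txs: list of (size_in_bytes, fee_in_satoshi_per_byte)
        used = 0
        for size, fee in sorted(txs, key=lambda t: t[1]):  # lowest fee first
            used += size
            if used > max_space_for_fee(fee):
                return False
        return True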
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP- & SLIP-0039 -- better multi-language support

2018-11-19 Thread Natanael via bitcoin-dev
On Mon, Nov 19, 2018, 21:21, Steven Hatzakis via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi Weiji, and Everyone,
>
> I think this is an important topic, so I'm sharing my two cents in case it
> helps: It makes sense for users to know that they can't simply translate a
> word from one language into another and expect the same underlying entropy
> to be mapped, as the wordlists are not the same (i.e. the words at the
> same index values differ across languages).
>
> However, while the words for each language cannot translate directly to
> their equivalent in another language, in terms of entropy (bits), the
> underlying entropy is, in fact, the same, when comparing mnemonics
> generated across languages (see English/Spanish comparison below) when
> sourced from the same initial entropy.
>
> Importantly, the entropy is a pre-image of the resulting mnemonic and
> doesn't change as the language changes, where the only changes are to the
> resulting words which depend on the language chosen, for a given entropy
> string. Ideally, the wallet/software should deal with these nuances. I
> don't think the protocol needs any revision (except for how the BIP39 seed
> is derived, perhaps). Even if someone made up their own wordlist, as long
> as the wallet/software has a copy of it to map those words to the
> underlying index values, it's *those underlying index values, and the
> entropy they map to, that really matter*.
>
> I fully support the idea for users to back up this pre-image (initial
> entropy) as it can also be used to check the validity of the mnemonic and
> check that it mapped correctly, see Ian Coleman's BIP39 tool which shows
> index values, a feature that I proposed last year and was since
> implemented. Below is an example of how two mnemonics generated with the
> same entropy will produce different BIP39 seeds.
>
> * Example initial entropy of 128 bits +4 bit checksum derived from hash of
> byte array: *
>
> 10001101000 01010100100 11011010000 11100001101 01010001101 00010010001
> 01100000010 10101110100 00100100011 11110000111 01100011010 11000101110
> (the final four bits, 1110, are the checksum)
>
> *In English*: minimum fee sure ticket faculty banana gate purse caught
> valley globe shift
>
> The same initial entropy above (all 132 bits) produces this mnemonic:
>
> *In Spanish*: mercado faja soledad tarea evadir aries gafas peine búho
> tumor gerente reja
>
> And the underlying index values below are the same for both the English
> and Spanish mnemonics above:
>
> Word Indexes: 1128, 676, 1744, 1805, 653, 145, 770, 1396, 291, 1927, 794,
> 1582
>
> *ISSUE AT HAND*:  While the initial entropy is the same, and word indexes
> the same for a given entropy, (i.e. same pre-image), the resulting BIP39
> seed is not the same when comparing the above English mnemonic with its
> Spanish counterpart:
>
>- *English BIP39 seed:*
>
> ce7618075099c89e986f18dc495daa3be190450ed07bef77d4334a54dbc1cd7e205797ffed2615ac0999a5d691f65bf316e2cdbfd2c9d7d90b03e77ff1e6a6f5
>- *Spanish BIP39 seed*:
>
> 9f164de0fb09af51b5831886e424d6d2479d49b5e5a1b28f5c09467ea36089b144cd94bb9b636b3c27ccff96a8958e5b7ce43cf1dea81423fc66fa7fef0aea2c
>
>
> *Option 1:* Without changing anything in terms of the entropy
> generation/mapping process in the BIP39 spec, the wallet/client-side
> software would ideally recognize the language and show the corresponding
> index value per wordlist, and reverse-calculate the entropy and then re-map
> it to the language selected.
>
> *Option 2*: Perhaps a revision is needed to how the BIP39 seed is
> generated in the first place, such as by hashing the entropy instead of the
> words. Any thoughts on how viable that could be where the initial entropy
> is fed into the PBKDF2 function and not the words?
>
> *Closing thoughts and tiny checksum nitpick: *
>
>   - The multiple BIP39 seeds per language bear some similarity to
> BIP44 multi-account behavior, so perhaps this can be an advantage,
> depending on how it is applied in UI/UXs (compared to having one BIP39
> seed regardless of language, for a given initial entropy).
>   - There is perhaps an opportunity to add greater detail to the BIP39
> spec in terms of standards/best-practices for computing checksum values,
> as some software may be hashing bits versus hashing bytes, or hashing the
> entropy as a hex string, etc., for a given entropy. These choices result
> in different checksum values, so a mnemonic that is "valid" in one wallet
> might not be "valid" in another wallet that formats the data differently
> before hashing to compute the checksum.
>

This probably wouldn't work as a drop-in replacement, but having the
identifier of the chosen wordlist be part of the mnemonic might work?
Perhaps the raw seed would then be [hash of chosen dictionary]+[sequence of
word indexes].

The user experience then involves always selecting a dictionary by name. I
also suggest maintaining an official list of named dictionaries.

The purpose of including the dictionary i
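
A minimal sketch of that proposed raw-seed construction (Python; all names
and the exact layout are my own, hypothetical - note that BIP39 as
specified instead feeds the mnemonic words themselves into PBKDF2, which
is why the seeds above differ between languages):

    import hashlib

    def raw_seed(dictionary, mnemonic_words):
        # Identify the wordlist by a hash of its full contents.
        dict_hash = hashlib.sha256("\n".join(dictionary).encode("utf-8")).digest()
        # The word indexes are the language-independent part.
        indexes = [dictionary.index(w) for w in mnemonic_words]
        index_bytes = b"".join(i.to_bytes(2, "big") for i in indexes)
        return dict_hash + index_bytes

The seed then depends only on the dictionary's identity and the index
values, not on the spelling of the words.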

Re: [bitcoin-dev] Should Graftroot be optional?

2018-05-24 Thread Natanael via bitcoin-dev
On Thu, May 24, 2018, 04:08, Gregory Maxwell via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

>
> My understanding of the question is this:
>
> Are there any useful applications which would be impeded if a signing
> party who could authorize an arbitrary transaction spending a coin had
> the option to instead sign a delegation to a new script?
>
> The reason this question is interesting to ask is because the obvious
> answer is "no":  since the signer(s) could have signed an arbitrary
> transaction instead, being able to delegate is strictly less powerful.
> Moreover, absent graftroot they could always "delegate" non-atomically
> by spending the coin with the output being the delegated script that
> they would have graftrooted instead.
>
> Sometimes obvious answers have non-obvious counterexamples, e.g.
> Andrew's points related to blind signing are worth keeping in mind.
>

As stated above by Wuille, this seems not to be a concern for typical P2SH
uses, but my argument here is simply that in many cases not all
stakeholders in a transaction will hold one of the private keys required
to sign. Such stakeholders would want a guarantee that the original script
is followed as promised.

I agree that such flags typically wouldn't have a meaningful effect for
funds from non-P2SH addresses, since the entire transaction / script could
be replaced by the very same keyholders.

I'm not concerned by the ability to move funds to an address with the new
rules that you'd otherwise graftroot in - only with being able to provide
a transparent guarantee that you ALSO follow the original script as
promised. What happens *after* you have followed the original script is
unrelated, IMHO.

>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Should Graftroot be optional?

2018-05-24 Thread Natanael via bitcoin-dev
On Thu, May 24, 2018, 01:45, Gregory Maxwell wrote:

> I am having a bit of difficulty understanding your example.
>
> If graftroot were possible it would mean that the funds were paid to a
> public key. The holder(s) of the corresponding private key could
> sign without constraint, and so the accountability you're expecting
> wouldn't exist there regardless of graftroot.
>
> I think maybe your example is only making the case that it should be
> possible to send funds constrained by a script without a public key
> ever existing at all.  If so, I agree-- but that wasn't the question
> here as I understood it.
>

I have to admit I'm not an expert in this field, so some of my concerns
might not be relevant. However, I think Wuille understood my points, and
his reply answered my concerns quite well. I'm only asking for the
optional ability to prove you're not using these constructions (because
some uses require committing to an immutable script), and that ability
already seems to exist. So for future implementations I only ask that this
ability is preserved.

I think such a proof doesn't need to be public (making such a proof in
private is probably often better), although optionally it might be. A
private contract wouldn't publish these details, while a public commitment
would.

>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Should Graftroot be optional?

2018-05-23 Thread Natanael via bitcoin-dev
On Tue, May 22, 2018, 20:18, Pieter Wuille via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hello all,
>
> Given the recent discussions about Taproot [1] and Graftroot [2], I
> was wondering if a practical deployment needs a way to explicitly
> enable or disable the Graftroot spending path. I have no strong
> reasons why this would be necessary, but I'd like to hear other
> people's thoughts.
>

I'm definitely in favor of the suggestion of requiring a flag to allow the
usage of these in a transaction, so that you get to choose in advance if
the script will be static or "editable".

Consider for example a P2SH address for some fund, where you create a
transaction in advance. Even if the parties involved in signing the
transaction would agree (collude), the original intent of this particular
P2SH address may be to hold the fund accountable by enforcing some given
rules by script. To be able to circumvent the rules could break the purpose
of the fund.

The name of the scheme escapes me, but this could include a variety of
proof-requiring committed transactions - say, a transaction that will pay
out if you can provide a proof satisfying some conditions, such as
embedding the solution to a given problem. This fund would only be
supposed to pay out if the published conditions are met (which may include
an expiry date).

To then use taproot / graftroot to withdraw the funds despite this
possibility not showing in the published script could be problematic.

I'm also in favor of being able to have scripts where the usage of
taproot / graftroot isn't visible in advance, but it must then be possible
to prove that a transaction ISN'T using it.

>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Transition to post-quantum

2018-02-15 Thread Natanael via bitcoin-dev
Small correction, see edited quote

On Feb 15, 2018, 23:44, "Natanael" wrote:

Allowing expiration retains insecurity, while *NOT* allowing expiration
makes it a trivial DoS target.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Transition to post-quantum

2018-02-15 Thread Natanael via bitcoin-dev
On Feb 15, 2018, 22:58, "Tim Ruffing via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org> wrote:


Also, the miners will indeed see one valid decommitment. This
decommitment may have been sent by the attacker, but it's the preimage
chal of the address, because otherwise it's not valid for the malicious
commitment. But if the decommitment is chal, then this decommitment is
also valid for the commitment of the honest user, which additionally is
the earliest. So the honest commitment wins. The attacker does not
succeed and everything is fine.

The reason why this works:
There is only one unique decommitment for the UTXO (assuming H_addr is
collision-resistant). The decommitment does not depend on the
commitment. The attacker cannot send a different decommitment, just
because there is none.


If your argument is that we publish the full transaction minus the public
keys and signatures, just committing to it, and then reveal it later
(which means an attacker can't modify the transaction in advance in a way
that produces a valid transaction):

Allowing expiration retains insecurity, while allowing expiration makes it
a trivial DoS target.

Anybody can flood the miners with invalid transaction commitments. No
miner can ever prune invalid commitments until a valid transaction that
conflicts with them is finalized. You can't even rate-limit it safely.

Like I said in the other thread, this is unreasonable. It's much more
practical with a simple hash commitment that you can "fold away" in a
Merkle tree hash and which you don't need to validate until the full
transaction is published.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Transition to post-quantum

2018-02-15 Thread Natanael via bitcoin-dev
On Feb 15, 2018, 17:00, "Tim Ruffing via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org> wrote:


Consensus rules
===
A decommitment d = chal spends a UTXO with address H_addr(chal), if
there exists a commitment c in the blockchain which references the UTXO
and which is the first commitment (among all referencing the UTXO) in
the blockchain such that
1. k = KDF(chal) correctly decrypts Dec(k, c)
and
2. tx = Dec(k, c) is a valid transaction to spend UTXO

The UTXO is spent as described by tx.
Commitments never expire.


I addressed this partially before, and this is unfortunately incomplete.

Situation A: Regardless of expiration of commitments, we allow multiple
commitments per UTXO (or they're not allowed, but commitments expire).

If I can block your transaction from confirming (censorship), then I can
make my own commitment + transaction. The miners will see two commitments
referencing the same UTXO - but only one transaction which matches a valid
challenge and spends it, which is mine. You gained nothing from the
commitment.

Situation B: We don't allow conflicting commitments, and they never expire.
I can now freeze everybody's funds trivially with invalid commitments,
because you can't validate a commitment without seeing a valid transaction
matching it - and exposing an uncommitted transaction breaks the security
promise of commitments.

Any additional data in the commitment beyond the hash of the transaction
is pointless, because the security properties are the same. You can't
freeze a UTXO after only seeing a commitment, and for any two conflicting
transactions you may observe, it does not matter at all whether one
references UTXOs or not, since you already know both transactions'
commitment ages anyway. The oldest would win no matter the additional
data.

Commitments work when the network can't easily be censored for long enough
to deploy the attack (at least for 2-3 blocks worth of time). They fail
when the attacker is capable of performing such an attack.

As I said previously, the only completely solid solution in all
circumstances is a quantum resistant Zero-knowledge proof algorithm, or
some equivalent method of proving knowledge of the key without revealing
any data that enables a quantum attack.
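
To make the quoted commit-reveal rule concrete, a toy model (Python; the
hash-based stream cipher and MAC are stand-ins of my own choosing for the
Enc/Dec/KDF primitives, not the actual proposal's choices):

    import hashlib, hmac, itertools

    def kdf(chal):
        return hashlib.sha256(b"kdf" + chal).digest()

    def _keystream(key):
        for ctr in itertools.count():
            yield from hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()

    def enc(key, tx):
        ct = bytes(m ^ k for m, k in zip(tx, _keystream(key)))
        return ct + hmac.new(key, ct, hashlib.sha256).digest()

    def dec(key, c):
        ct, tag = c[:-32], c[-32:]
        if not hmac.compare_digest(tag, hmac.new(key, ct, hashlib.sha256).digest()):
            return None  # does not "correctly decrypt"
        return bytes(x ^ k for x, k in zip(ct, _keystream(key)))

    def spend(utxo_addr, commitments, chal):
        # d = chal spends the UTXO if the *first* commitment referencing it
        # decrypts under k = KDF(chal); checking that the decrypted tx is a
        # valid spend of the UTXO is omitted here.
        assert hashlib.sha256(chal).digest() == utxo_addr  # H_addr(chal)
        k = kdf(chal)
        for c in commitments:  # in blockchain order
            tx = dec(k, c)
            if tx is not None:
                return tx
        return None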
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Possible change to the MIT license

2018-02-13 Thread Natanael via bitcoin-dev
On Feb 13, 2018, 15:07, "JOSE FEMENIAS CAÑUELO via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org> wrote:

***
NO PART OF THIS SOFTWARE CAN BE INCLUDED IN ANY OTHER PROJECT THAT USES THE
NAME BITCOIN AS PART OF ITS NAME AND/OR ITS MARKETING MATERIAL UNLESS THE
SOFTWARE PRODUCED BY THAT PROJECT IS FULLY COMPATIBLE WITH THE BITCOIN
(CORE) BLOCKCHAIN
***


That's better solved with trademarks. (whoever would be the trademark
holder - Satoshi?)

This would also prohibit any reimplementation that's not formally verified
to be perfectly compatible from using the name.

It also adds legal uncertainty.

Another major problem is that it neither affects anybody forking older
versions of Bitcoin, nor people using existing independent blockchain
implementations and renaming them Bitcoin-Whatsoever.

And what happens when an old version is technically incompatible with a
future version by the Core team due to not understanding various new
softforks? Which version wins the right to the name?

Also, being unable to even mention Bitcoin is overkill.

The software license also doesn't affect the blockchain data.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Taproot: Privacy preserving switchable scripting

2018-01-24 Thread Natanael via bitcoin-dev
On Jan 25, 2018, 00:22, "Tim Ruffing via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org> wrote:


I think you misread my second proposal. The first step is not only to
publish the hash but to publish a *pair* consisting of the hash and the
transaction.

If the attacker changes the transaction on the wire, the user does not
care and will try again.


I guess I read it otherwise because I didn't realize you intended a
commitment to the full transaction, just without the asymmetric key
material.

You could treat it the same way as in my suggestion, let it expire and
prune it if the key material isn't published in time.

However... A sufficiently powerful attacker can deploy as soon as he sees
your published signature and key, delay its propagation to the miners,
force expiration and then *still* repeat the attack with his own forgery.

Honestly, as long as we need to allow any form of expiry while relying on
publication of the vulnerable algorithm's result for verification, I think
the weakness will remain.

No expiration hurts in multiple ways, e.g. via DoS, or by locking in
potentially wrong (or outright malicious) transactions.

---

There's one way out, I believe, which is quantum-safe Zero-knowledge
proofs. Currently STARKs are one variant presumed quantum-safe. They would
be used to completely substitute the publication of the public key and
signatures, and this way we don't even need two-step commitments.

It does however likely require a hardfork to apply to old transactions. (I
can imagine an extension block type softfork method, in which case old
UTXOs get locked on the mainchain to create equivalent valued extension
block funds.)

Without practical ZKPs, and presuming no powerful QC attackers with the
ability to control the network (basically NSA-level attackers), I do think
the Fawkes signature scheme is sufficient. Quantum attacks are likely to
be very expensive anyway, for the foreseeable future.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Taproot: Privacy preserving switchable scripting

2018-01-24 Thread Natanael via bitcoin-dev
On Jan 24, 2018, 16:38, "Tim Ruffing via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org> wrote:

Okay, I think my proposal was wrong...

This looks better (feel free to break again):
1. Commit (H(classic_pk, tx), tx) to the blockchain, wait until confirmed
2. Reveal classic_pk in the blockchain

Then the tx in the first valid commitment wins. If the attacker
intercepts classic_pk, it won't help him. He cannot create the first
valid commitment, because it is created already. (The reason is that
the decommitment is canonical now; for all commitments, the
decommitment is just classic_pk.)


That's not the type of attack I'm imagining. Both versions of your scheme
are essentially equivalent in terms of this attack.

Intended steps:
1: You publish a hash commitment.
2: The hash ends up in the blockchain.
3: You publish the transaction itself, and it matches the hash commitment.
4: Because it matches, miners includes it. It's now in the blockchain.

Attack:
1: You publish a hash commitment.
2: The hash ends up the blockchain.
3: You publish the transaction itself, it matches the hash commitment.
4: The attacker messes with the network somehow to prevent your
transaction from reaching the miners.
5: The attacker cracks your keypair, and makes his own commitment hash for
his own theft transaction.
6: Once that commitment is in the blockchain, he publishes his own theft
transaction.
7: The attacker's theft transaction gets into the blockchain.
8 (optionally): The miners finally see your original transaction with the
older commitment, but now the theft transaction can't be undone. There's
nothing to do about it, nor a way to know if it's intentional or not.
Anybody not verifying commitments only sees a doublespend attempt.

---

More speculation, not really a serious proposal:

I can imagine one way to reduce the probability of success for the attack:
publish the encrypted transaction as the commitment, and then publish the
key - the effect of this is that the key is easier to propagate quickly
across the network than a full transaction, making it harder to succeed
with a network-based attack. This naive version by itself is however a
major DoS vector against the network.

You could, in some kind of fork, redefine how blocks are processed such
that you can prune all encrypted transactions that have not had the key
published within X blocks. The validation rules would work such that to
publish the key for an encrypted transaction in a new block, that
transaction must both be recent enough, be valid by itself, and also not
conflict with any other existing plaintext / decrypted transactions in the
blockchain.
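
A tiny sketch of that expiry rule (names and the parameter are mine,
illustration only):

    # An encrypted commitment becomes prunable once X blocks have passed
    # since its inclusion without its key being published.
    def prunable(inclusion_height, tip_height, key_published, x_blocks):
        return not key_published and tip_height - inclusion_height > x_blocks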

Blocks wouldn't necessarily even need to include the encrypted transactions
during propagation. This works because encrypted transactions have zero
effect until the key is published. In this case you'd effectively be
required to publish your encrypted transaction twice to ensure the raw data
isn't lost, once to get into a block and again together with the key to get
it settled.

Since miners will likely keep at least the most recent encrypted
transactions cached to speed up validation, this is faster to settle than
to publish the committed transaction as mentioned in the beginning. This
increases your chances to get your key into the blockchain to settle your
transaction before the attacker completes his attack, versus pushing a full
transaction that miners haven't seen before.

This version would still allow DoS against miners caching all encrypted
transactions. However, if efficient Zero-knowledge proofs become
practical, then you could use one to prove your encrypted transaction
valid, even against the UTXO set and in terms of not colliding with
existing commitments - in this case the DoS attack properties are nearly
identical to standard transactions.

If you want to change a committed transaction, you'd need to let the
commitment expire.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Taproot: Privacy preserving switchable scripting

2018-01-24 Thread Natanael via bitcoin-dev
On Jan 23, 2018, 23:45, "Gregory Maxwell via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org> wrote:

On Tue, Jan 23, 2018 at 10:22 PM, Anthony Towns  wrote:
> Hmm, at least people can choose not to reuse addresses currently --
> if everyone were using taproot and that didn't involve hashing the key,

Can you show me a model of quantum computation that is conjectured to
be able to solve the discrete log problem but which would take longer
than fractions of a second to do so? Quantum computation has to occur
within the coherence lifetime of the system.


Quantum computers work like randomized black boxes: you run them for many
cycles, with a certain probability of getting the right answer each time.

The trick to them is that they bias the probabilities of their qubits to
read out the correct answer *more often than at random*, for many classes
of problems. You (likely) won't get the correct answer immediately.

https://en.wikipedia.org/wiki/Quantum_computing

Quoting Wikipedia:

> An algorithm is composed of a fixed sequence of quantum logic gates and a
problem is encoded by setting the initial values of the qubits, similar to
how a classical computer works. The calculation usually ends with a
measurement, collapsing the system of qubits into one of the 2^n pure
states, where each qubit is zero or one, decomposing into a classical
state. The outcome can therefore be at most n classical bits of
information (or, if the algorithm did not end with a measurement, the
result is an unobserved quantum state). Quantum algorithms are often
probabilistic, in that they provide the correct solution only with a
certain known probability.

A non-programmed QC is essentially an RNG driven by quantum effects. You
just get random bits.

A programmed one will need to run the program over and over until you can
derive the correct answer from one of its outputs. How fast this goes
depends on the problem and the algorithm.

Most people here have heard of Grover's algorithm; it would crack a
symmetric 256-bit key in approximately 2^128 QC cycles - completely
impractical. Shor's algorithm is the dangerous one for ECC, since it
cracks current keys at "practical" speeds.

https://eprint.iacr.org/2017/598 - resource estimates, in terms of size of
the QC. Does not address implementation speed.

I can't seem to find specific details, but I remember having seen
estimates of around 2^40 cycles in practical implementations for 256-bit
ECC (this assumes the use of error-correction schemes, QC machines with
some small imperfections, and more). Unfortunately I can't find a source
for this estimate. I've seen lower estimates too, but they seem entirely
theoretical.

Read-out time for QCs is indeed insignificant, in terms of measuring the
state of the qubits after a complete cycle.

Programming time, time to prepare for readout, reset, reprogramming, etc.
will all take a little longer - in particular with more qubits involved,
since they all need to be in superposition and coherent at some point.
Also, you still have to parse all the different outputs (on a classical
computer) to find your answer among them. Very, very optimistic cycle
speeds are in the GHz range, and even then 2^40 cycles take roughly 18
minutes. Since we don't even have a proper general-purpose QC yet, nor one
with usable amounts of qubits, we don't even know if we can make them run
at a cycle per second, or per minute...
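
The arithmetic behind that estimate, spelled out (assuming, very
optimistically, one cycle per nanosecond):

    cycles = 2 ** 40      # rough estimate for cracking a 256-bit ECC key
    rate_hz = 10 ** 9     # optimistic 1 GHz cycle rate
    print(cycles / rate_hz / 60)  # ~18.3 minutes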

However if somebody *does* build a fast QC that's nearly ideal, then
Bitcoin's typical use of ECC would be in immediate danger. The most
optimistic QC plausible would indeed be able to crack keys in under a
minute. But my own wild guess is that for the next few decades none will be
faster than a runtime measured in weeks for cracking keys.

---

Sidenote, I'm strongly in favor of implementing early support for the
Fawkes scheme mentioned previously.

We could even patch it on top of classical transactions - you can only
break ECC with a known public key, so just commit the signed transaction
into the blockchain before publishing it. Afterwards you publish the
transaction itself, with a reference to the commitment. That transaction
can then be assumed legit simply because there was no room to crack the
key before the commitment was made, and the transaction matches the
commitment.

Never reuse keys, and you're safe against QCs.
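
A minimal sketch of that commit-then-reveal flow (Python; the hash choice
is mine, for illustration):

    import hashlib

    def make_commitment(signed_tx: bytes) -> bytes:
        # Publish only this hash first; the public key inside signed_tx
        # stays hidden, so a quantum attacker has nothing to crack yet.
        return hashlib.sha256(signed_tx).digest()

    def reveal_matches(signed_tx: bytes, commitment: bytes) -> bool:
        # Later, publish signed_tx with a reference to the confirmed
        # commitment; anyone can check the match.
        return hashlib.sha256(signed_tx).digest() == commitment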

Sidenote: There's a risk here of interception, insertion of a new
commitment, and getting the new transaction into the blockchain first.
However, I would suggest a mining policy here where two known conflicting
transactions with commitments are resolved such that the one with the
oldest commitment wins. How to address detection of conflicting
transactions with commitments older than confirmed transactions isn't
obvious. Some of these may be fully intentional by the original owner,
such as a regretted transaction.

Another sidenote: HD wallets with hash based hardened deri

Re: [bitcoin-dev] Proposal to reduce mining power bill

2018-01-18 Thread Natanael via bitcoin-dev
A large miner would only need to divide his hardware setup into clusters
that pretend to be different independent miners to create these "miner
tokens", as explained before, significantly raising his chances of being
able to mine on nearly every single round.

Once an individual token is about to expire, the miner just dedicates a
fraction of his mining power to renew it. At the same time he can even
create multiple new tokens, given enough hardware.

This does not reduce energy use. The only notable effect is to delay income
for new miners. This makes profitability calculations more annoying.

Long term, it only behaves like an artificially raised difficulty target.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP Proposal: Utilization of bits denomination

2017-12-14 Thread Natanael via bitcoin-dev
Reposting /u/BashCo's post on reddit here, for visibility:

---8<---

> Before anyone says 'bits' are too confusing because it's a computer
science term, here's a list of homonyms
[https://en.wikipedia.org/wiki/List_of_true_homonyms] that you use every
day. Homonyms are fine because our brains are able to interpret language
based on context, so it's a non-argument.


This ignores the fact that there exist multiple meanings of bits *within
the same context*, and that beginners likely can't tell them apart.

Feel free to try it yourself - talk about Bitcoin "bits" of a particular
value with somebody who doesn't understand Bitcoin. Then explain that the
cryptography uses 256-bit keys. I would be surprised if you could find
somebody who would not be confused by that.

Let's say a website says a song is 24 bits. Was that 24-bit audio
resolution or a 24-bit price? Somebody writes about 256-bit keys - is that
their size or their value?

You guys here can probably tell the difference. Can everybody...? Bits will
cause confusion, because plenty of people will not be able to tell these
apart. They will not know WHEN to apply one definition or the other.

https://www.reddit.com/r/bitcoin/comments/24m3nb/_/ch8gua7
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Fwd: [Lightning-dev] Lightning in the setting of blockchain hardforks

2017-08-17 Thread Natanael via bitcoin-dev
Couldn't scripts like this have a standardized "hardfork unroll" mechanism,
where if a hardfork is activated and signaled to its clients, then those
commitments that are only meant for their original chain can be reversed
and undone just on the hardfork? Then the users involved would just send an
unroll transaction which is only valid on the hardfork.

- Sent from my phone

On Aug 17, 2017, 14:52, "Conrad Burchert via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Some notes:
>
> Hardforks like Bitcoin ABC without a malleability fix are very unlikely to
> have payment channels, so the problem does not exist for those.
>
> The designers of a hardfork which does have a malleability fix will
> probably know about payment channels, so they can just build a replay
> protection that allows the execution of old commitments. That needs some
> kind of timestamping of commitments, which would have to be integrated in
> the channel design. The easiest way would be to just write the time of
> signing the commitment in the transaction and the replay protection accepts
> old commitments, but rejects ones which were signed after the hardfork.
> These timestamps can essentially be one bit (before or after a hardfork)
> and if the replay protection in the hardfork only accepts old commitments
> for something like a year, then it can be reused for more hardforks later
> on. Maybe someone comes up with an interesting way of doing this without
> using space.
>
> Nevertheless hardforking while having channels open will always be a mess
> as an open channel requires you to watch the blockchain. Anybody who is
> just not aware of the hardfork or is updating his client a few days too
> late, can get his money stolen by an old commitment transaction where he
> forgets to retaliate on the new chain. As others can likely figure out
> your client version, the risk of retaliation is not too big for an attacker.
>
>
>
> 2017-08-17 13:31 GMT+02:00 Bryan Bishop via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org>:
>
>>
>> -- Forwarded message --
>> From: Christian Decker 
>> Date: Thu, Aug 17, 2017 at 5:39 AM
>> Subject: Re: [Lightning-dev] Lightning in the setting of blockchain
>> hardforks
>> To: Martin Schwarz ,
>> lightning-...@lists.linuxfoundation.org
>>
>>
>> Hi Martin,
>>
>> this is the perfect venue to discuss this, welcome to the mailing list :-)
>> Like you I think that using the first forked block as the forkchain's
>> genesis block is the way to go, keeping the non-forked blockchain on the
>> original genesis hash, to avoid disruption. It may become more difficult in
>> the case one chain doesn't declare itself to be the forked chain.
>>
>> Even more interesting are channels that are open during the fork. In
>> these cases we open a single channel, and will have to settle two. If no
>> replay protection was implemented on the fork, then we can use the last
>> commitment to close the channel (updates should be avoided since they now
>> double any intended effect), if replay protection was implemented then
>> commitments become invalid on the fork, and people will lose money.
>>
>> Fun times ahead :-)
>>
>> Cheers,
>> Christian
>>
>> On Thu, Aug 17, 2017 at 10:53 AM Martin Schwarz 
>> wrote:
>>
>>> Dear all,
>>>
>>> currently the chain_id allows to distinguish blockchains by the hash of
>>> their genesis block.
>>>
>>> With hardforks branching off of the Bitcoin blockchain, how can
>>> Lightning work on (or across)
>>> distinct, permanent forks of a parent blockchain that share the same
>>> genesis block?
>>>
>>> I suppose changing the definition of chain_id to the hash of the first
>>> block of the new
>>> branch and requiring replay and wipe-out protection should be
>>> sufficient. But can we
>>> relax these requirements? Are slow block times an issue? Can we use
>>> Lightning to transact
>>> on "almost frozen" block chains suffering from a sudden loss of
>>> hashpower?
>>>
>>> Has there been any previous discussion or study of Lightning in the
>>> setting of hardforks?
>>> (Is this the right place to discuss this? If not, where would be the
>>> right place?)
>>>
>>> thanks,
>>> Martin
>>> ___
>>> Lightning-dev mailing list
>>> lightning-...@lists.linuxfoundation.org
>>> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>>>
>>
>> ___
>> Lightning-dev mailing list
>> lightning-...@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>>
>>
>>
>>
>> --
>> - Bryan
>> http://heybryan.org/
>> 1 512 203 0507
>>
>> ___
>> bitcoin-dev mailing list
>> bitcoin-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>
>>
>
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev

Re: [bitcoin-dev] Full node "tip" function

2017-05-08 Thread Natanael via bitcoin-dev
On May 8, 2017, 23:01, "Sergio Demian Lerner via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org> wrote:

I'll soon present a solution to encourage full nodes to store the
blockchain based on Proof-of-Unique-Blockchain-Storage (PoUBS)


Proving that you're holding your own copy of the blockchain, not shared
with other nodes? I don't think that's possible to do securely. It
founders on the fact that the whole blockchain is both public and static,
while any such proof of independence needs to rely on unique capabilities
per node.

All you can do with a challenge-response protocol is to prevent honest
nodes from being unwitting backends to dishonest transparent proxy nodes
(by binding the challenge to cryptographic node identities).

Even latency-bounding protocols can't stop you from putting multiple
*seemingly independent* nodes in front of the same backend with one single
copy of the blockchain.

I believe the best you can do is to force somebody to hold multiple copies
locally on multiple hardware units, so that they don't run out of memory
I/O when creating proofs for multiple remote nodes - by using memory-heavy
functions for the proof of storage, forcing quick random access. However,
somebody willing to put enough RAM in a server rack to hold the full
blockchain could still easily pretend to be multiple regular nodes with
independent copies.

Any kind of attempt at forcing the full copy of the blockchain to be in
memory close to the CPU will either rule out most nodes from passing or
will be cheatable.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Small Nodes: A Better Alternative to Pruned Nodes

2017-05-03 Thread Natanael via bitcoin-dev
On May 3, 2017, 16:05, "Erik Aronesty via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> But as you've observed, the failure probabilities are rather high,
> especially if an active attacker targets nodes carrying less commonly
> available blocks.

Wouldn't the solution be for nodes to use whatever mechanism an attacker
uses to determine less commonly available blocks and choose to store a
random percentage of them as well as their deterministic random set?

IE X blocks end of chain (spv bootstrap), Y% deterministic random set,  Z%
patch/fill set to deter attacks


Then he uses Sybil attacks to obscure what's actually rare and what's not.
Even proof of storage isn't enough; you need proof of INDEPENDENT storage,
which is essentially impossible, as well as a way of determining which
nodes are run by the same people (all the AWS nodes should essentially
count as one).
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] Properties of an ideal PoW algorithm & implementation

2017-04-18 Thread Natanael via bitcoin-dev
To expand on this below:

On Apr 18, 2017, 00:34, "Natanael" wrote:

IMHO the best option if we change PoW is an algorithm that's moderately
processing heavy (we still need reasonably fast verification) and which
resists partial state reuse (not fast or fully "linear" in processing like
SHA256) just for the sake of invalidating asicboost style attacks, and it
should also have an existing reference implementation for hardware that's
provably close in performance to the theoretical ideal implementation of
the algorithm (in other words, one where we know there's no hidden
optimizations).

[...] The competition would mostly be about packing similar gate designs
closely and energy efficiency. (Now that I think about it, the proof MAY
have to consider energy use too, as a larger and slower but more efficient
chip still is competitive in mining...)


What matters for miners in terms of cost is primarily (correctly computed)
hashes per joule (watt-seconds). The most direct proxy for this in terms of
algorithm execution is the number of transistor (gate) activations per
computed hash (PoW unit).

To prove that an implementation is near optimal, you would show there's a
minimum number of necessary transistor activations per computed hash, and
that your implementation is within a reasonable range of that number.

We also need to show that for a practical implementation you can't reuse
much internal state (easiest way is "whitening" the block header,
pre-hashing or having a slow hash with an initial whitening step of its
own). This is to kill any ASICBOOST type optimization. Performance should
be constant, not linear relative to input size.

The PoW step should always be the most expensive part of creating a
complete block candidate! Otherwise it loses part of its meaning. It should
however still also be reasonably easy to verify.

Given that there's already PoW ASIC optimizations since years back that use
deliberately lossy hash computations just because those circuits can run
faster (X% of hashes are computed wrong, but you get Y% more computed
hashes in return which exceeds the error rate), any proof of an
implementation being near optimal (for mining) must also consider the
possibility of implementations of a design that deliberately allows errors
just to reduce the total count of transistor activations per N amount of
computed hashes. Yes, that means the reference implementation is allowed to
be lossy.

So for a reasonably large N (number of computed hashes, to take batch
processing into consideration), the proof would show that there's a
minimum average number of gate activations per correctly computed hash - a
smallest ratio = (gate activations X) / (N * success rate) - across all
possible implementations of the algorithm. And you'd show your
implementation is close to that ratio.
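
As a worked example of that ratio and the lossy-design tradeoff described
above (all numbers made up for illustration):

    def activations_per_good_hash(gate_activations, n_hashes, success_rate):
        # Gate activations per *correctly* computed hash.
        return gate_activations / (n_hashes * success_rate)

    # Exact design: 1e9 activations yield 1e6 hashes, all correct.
    exact = activations_per_good_hash(1e9, 1e6, 1.00)    # 1000.0
    # Lossy design: same activations, 10% more hashes, 3% of them wrong.
    lossy = activations_per_good_hash(1e9, 1.1e6, 0.97)  # ~937.2
    print(lossy < exact)  # True: the lossy design wins per correct hash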

It would also have to consider a reasonable range of time-memory tradeoffs
including the potential of precomputation. Hopefully we could implement an
algorithm that effectively makes such precomputation meaningless by making
the potential gain insignificant for any reasonable ASIC chip size and
amount of precomputation resources.

A summary of important mining PoW algorithm properties;

* Constant verification speed, reasonably fast even on slow hardware

* As explained above, still slow / expensive enough to dominate the costs
of block candidate creation

* Difficulty must be easy to adjust (no problem for simple hash-style
algorithms like today)

* Cryptographic strength, something like preimage resistance (the algorithm
can't allow forcing a particular output, the chance must not be better than
random within any achievable computational bounds)

* As explained above, no hidden shortcuts. Everybody has equal knowledge.

* Predictable and close to constant PoW computation performance, and not
linear in performance relative to input size the way SHA256 is (lossy
implementations will always make it not-quite-constant)

* As explained above, no significant reusable state or other reusable work
(killing ASICBOOST)

* As explained above, no meaningful precomputation possible. No unfair
headstarts.

* Should only rely on just transistors for implementation, shouldn't rely
on memory or other components due to unknowable future engineering results
and changes in cost

* Reasonably compact implementation, measured in memory use, CPU load and
similar metrics

* Reasonably small inputs and outputs (in line with regular hashes)

* All mining PoW should be "embarrassingly parallel" (highly
parallellizable) with minimal or no gain from batch computation,
performance scaling should be linear with increased chip size & cycle
speed.

What else is there? Did I miss anything important?
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Malice Reactive Proof of Work Additions (MR POWA): Protecting Bitcoin from malicious miners

2017-04-17 Thread Natanael via bitcoin-dev
On Apr 17, 2017, 16:14, "Erik Aronesty via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org> wrote:


It's too bad we can't make the POW somehow dynamic so that any specialized
hardware is impossible, and only GPU / FPGA is possible.



Maybe a variant of Keccak where the size of the sponge is increased along
with additional zero bits required. Seems like this would either have to
resist specialized hardware or imply SHA-3 is compromised such that the
size of the sponge does not increase the number of possible output bits as
expected.


Technically SHA-3 (Keccak) already has the SHAKE modes, which are
extendable-output functions (XOFs). It's basically a hash with arbitrary
output length (with fixed state size; 256 bits is the common choice). A
little bit like hooking a hash straight into a stream cipher.
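
For example, with the SHAKE-256 implementation in Python's standard
hashlib (a quick illustration of the XOF behavior):

    import hashlib

    xof = hashlib.shake_256(b"block header bytes")
    print(xof.hexdigest(32))  # 32 bytes of output...
    print(xof.hexdigest(64))  # ...or 64; the first 32 bytes are identical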

The other modes should *probably* not allow the same behavior, though. I
can't guarantee that however.

You may be interested in looking at parameterizable ciphers and if any of
them might be applicable to PoW.

IMHO the best option if we change PoW is an algorithm that's moderately
processing heavy (we still need reasonably fast verification) and which
resists partial state reuse (not fast or fully "linear" in processing like
SHA256) just for the sake of invalidating asicboost style attacks, and it
should also have an existing reference implementation for hardware that's
provably close in performance to the theoretical ideal implementation of
the algorithm (in other words, one where we know there's no hidden
optimizations).

Anything relying on memory or other such expensive components is likely to
fall flat eventually as fast memory is made more compact, cheaper and moves
closer to the cores.

That should be approximately what it takes to level out the playing field
in ASIC manufacturing, because then we would know there's no fancy tricks
to deploy that would give one player unfair advantage. The competition
would mostly be about packing similar gate designs closely and energy
efficiency. (Now that I think about it, the proof MAY have to consider
energy use too, as a larger and slower but more efficient chip still is
competitive in mining...)

We should also put a larger nonce in the header if possible, to reduce the
incentive to mess with the entropy elsewhere in blocks.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] I do not support the BIP 148 UASF

2017-04-15 Thread Natanael via bitcoin-dev
On Apr 15, 2017, 13:51, "Chris Acheson via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org> wrote:


Not sure if you missed my previous reply to you, but I'm curious about
your thoughts on this particular point. I contend that for any UASF,
orphaning non-signalling blocks on the flag date is [maybe] safer [for
those in on the UASF fork] than just
considering the fork active on the flag date.


Note my additions.

Enforcement by orphaning non-compliant blocks makes it harder to reverse a
buggy softfork: you necessarily increase the effort needed to return
enough mining power to the safe chain, since you now have mostly
unmonitored mining hardware actively fighting you, whose operators you
might not be able to contact. You'd practically have to hardfork out of
the situation.

There's also the risk of the activation itself triggering consensus bugs
(multiple incompatible UASF forks) if there are multiple implementations
of it in the network (or one buggy one). We have already seen something
like this happen. It can occur on the miner side, the client side, or both
(miner-side only would lead to a ton of orphaned blocks; client-side means
a netsplit).

It is also not economically favorable for any individual miner to be the
one to mine empty blocks on top of any surviving softfork-incompatible
chain. As a miner you would only volunteer to do it if you believe the
softfork is necessary or itself will enable greater future profit.

Besides that, I also just don't believe that UASF itself as a method to
activate softforks is a good choice. The only two reliable signals we have
for this purpose in Bitcoin are block height (flag day) and standard miner
signaling, as every other metric can be falsified or gamed.

But there are more problems too - a big one is that we have no way right
now for a node to tell another "the transaction you just relayed to me is
invalid according to an active softfork" (or "will become invalid"). This
matters for several reasons.

The first one that came to my mind is that we have widespread usage of
zero-confirmation payments in the network.

This was already dangerous for other reasons, but this UASF could make it
guaranteed cost-free to exploit - because as many also know, we ALSO
already have a lot of nodes that do not enforce the non-default rejection
policies (otherwise we'd never see such transactions in blocks), including
many alternative Bitcoin clients.

The combination of these factors means that you can present a UASF-invalid
transaction to a non-updated client that is supposedly protected by the
deliberate orphaning effort, and have it accept this as a payment - only
to never see it get confirmed, or to eventually see it double-spent by a
UASF-valid transaction.

I would not at all be surprised if it turned out that many
zero-confirmation accepting services do not reject non-default
transactions, or if they aren't all UASF-segwit aware.

This is why a flag day or similar is more effective - it can't be ignored
unlike "just another one of those UASF proposals" that you might not have
evaluated or not expect to activate.

This is, by the way, also a reason that I believe all nodes and services
should publish all consensus-critical policies that they enforce. This
would make it far easier to alert somebody that they NEED TO prepare for
whatever proposal might conflict with their active policies.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Segwit2Mb - combined soft/hard fork - Request For Comments

2017-04-02 Thread Natanael via bitcoin-dev
My point, if you missed it, is that there's a mathematical equivalence
between using two limits (and calculating the ratio) vs using one limit and
a ratio. The output is fully identical. The only difference is the order of
operations. Saying there's no blocksize limit with this is pretty
meaningless, because you're just saying you're using an abstraction that
doesn't make the limit visible.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Segwit2Mb - combined soft/hard fork - Request For Comments

2017-04-01 Thread Natanael via bitcoin-dev
On Apr 1, 2017, 16:07, "Jorge Timón" wrote:

On Sat, Apr 1, 2017 at 3:15 PM, Natanael  wrote:
>
>
> On Apr 1, 2017, 14:33, "Jorge Timón via bitcoin-dev"
> wrote:
>
> Segwit replaces the 1 mb size limit with a weight limit of 4 mb.
>
>
> That would make it a hardfork, not a softfork, if done exactly as you say.
>
> Segwit only separates out signature data. The 1 MB limit remains, but
> would now only cover the contents of the transaction scripts. With segwit
> that means we have two (2) size limits, not one. This is important to
> remember. Even with segwit + MAST for large complex scripts, there's
> still going to be a very low limit to the total number of possible
> transactions per block. And not all transactions will get the same space
> savings.

No, because of the way the weight is calculated, it is impossible to
create a block that old nodes would perceive as bigger than 1 mb
without also violating the weight limit.
After segwit activation, nodes supporting segwit don't need to
validate the 1 mb size limit anymore as long as they validate the
weight limit.


https://github.com/bitcoin/bips/blob/master/bip-0141.mediawiki#Block_size

Huh, that's odd. It really does still count raw blockchain data blocksize.

It just uses a ratio between how many units each byte is worth for block
data vs signature data, plus a cap to define the maximum. So the current
max is 4 MB, with 1 MB of non-witness blockchain data being weighted to 4x
= 4 MB. That just means you replaced the two limits with one limit and a
ratio.

A hardfork increasing the size would likely have the ratio modified too.
With exactly the same effect as if it was two limits...
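
Concretely, the BIP141 weight rule being discussed (a sketch; the 4x
factor and the 4,000,000 cap are from the BIP):

    # BIP141: weight = 4 * non-witness bytes + witness bytes, capped at 4M.
    MAX_BLOCK_WEIGHT = 4_000_000

    def block_weight(non_witness_bytes, witness_bytes):
        return 4 * non_witness_bytes + witness_bytes

    # 1 MB of non-witness data alone already consumes the whole cap, which
    # is why old nodes can never see a block larger than 1 MB.
    assert block_weight(1_000_000, 0) == MAX_BLOCK_WEIGHT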

Either way, there's still going to be non-segwit nodes for ages.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-04-01 Thread Natanael via bitcoin-dev
On Apr 1, 2017, 16:35, "Eric Voskuil via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org> wrote:

On 03/31/2017 11:18 PM, Jared Lee Richardson wrote:
>> If a typical personal computer cannot run a node there is no
>> security.
>
> If you can't describe an attack that is made possible when typical
> personal computers can't run nodes, this kind of logic has no place
> in this discussion.

"Governments are good at cutting off the heads of a centrally
controlled networks..."


That's what's so great about Bitcoin. The blockchain is the same
everywhere.

So if you can connect to private peers in several jurisdictions, chances
are they won't all be lying to you in the exact same way. Which is what
they would need to do to fool you.

If you run your own and can't protect it, they'll just hack your node and
make it lie to you.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-04-01 Thread Natanael via bitcoin-dev
On Apr 1, 2017, 01:13, "Eric Voskuil via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org> wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On 03/31/2017 02:23 PM, Rodney Morris via bitcoin-dev wrote:
> If the obsession with every personal computer being able to run a
> fill node continues then bitcoin will be consigned to the dustbin
> of history,

The cause of the block size debate is the failure to understand the
Bitcoin security model. This failure is perfectly exemplified by the
above statement. If a typical personal computer cannot run a node
there is no security.


If you're capable of running and trusting your own node, chances are you
already have something better than a typical personal computer!

And those who don't have it themselves likely know where they can run or
access a node they can trust.

If you're expecting the average Joe to trust the likely not-updated node on
his old unpatched computer full of viruses, you're going to have a bad time.

The real solution is to find ways to reduce the required trust in a
practical manner.

Using lightweight clients with multiple servers has already been
mentioned, Zero-knowledge proofs (if they can be made practical and stay
secure...) are another obvious future tool, and hardware wallets help
against malware.

If you truly want everybody to run their own full nodes, the only plausible
solution is managed hardware in the style of Chromebooks, except that you
could pick your own distribution and software repository. Meaning you're
still trusting the exact same people whose nodes you would otherwise rely
on, except now you're mirroring their nodes on your own hardware instead.
Which at most improves auditability.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Segwit2Mb - combined soft/hard fork - Request For Comments

2017-04-01 Thread Natanael via bitcoin-dev
On 1 Apr 2017 14:33, "Jorge Timón via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org>:

Segwit replaces the 1 mb size limit with a weight limit of 4 mb.


That would make it a hardfork, not a softfork, if done exactly as you say.

Segwit only separates out signature data. The 1 MB limit remains, but would
now only cover the contents of the transaction scripts. With segwit that
means we have two (2) size limits, not one. This is important to remember.
Even with segwit + MAST for large complex scripts, there's still going to
be a very low limit to the total number of possible transactions per block.
And not all transactions will get the same space savings.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Block size adjustment idea - expedience fees + difficulty scaling proportional to block size (+ fee pool)

2017-03-30 Thread Natanael via bitcoin-dev
Sorry for sending a double, hit the wrong button...

On 31 Mar 2017 06:14, "Natanael" wrote:

>
>
> On 30 Mar 2017 11:34, "Natanael" wrote:
>
> Block size dependent difficulty scaling. Hardfork required.
>
> Larger blocks mean greater difficulty - but it doesn't scale linearly,
> rather a little less than linearly. That means miners can take a penalty
> in difficulty to claim a greater number of high-fee transactions in the
> same amount of time (effectively increasing the "block size bitrate"),
> increasing their profits. When such profitable fees aren't available, they
> have to reduce block size.
>
> In other words, the users literally pay miners to increase block size (or
> don't pay, which reduces it).
>
>
> This can be simplified if we do get a fee pool (less hardfork code, more
> softfork code), except that the effect will be partially reduced by the
> mining subsidy until it approximately reaches parity with average total
> fees.
>
> We don't need to alter the difficulty calculation.
> Instead we alter the percentage of the fees that the miner gets to claim
> vs what he has to donate to the pool, based on the size of the block he
> generated.
> Larger block = smaller percentage of fees. This is another way to pay for
> blocksize. The effect of this is that on average, miners that generate
> smaller blocks take a share of what otherwise would be part of the mining
> profits of those generating larger blocks.
>
> We would need to keep pieces of the section from above on expected
> blocksize calculation, because the closer you are to the expected
> blocksize, the more you keep. And thus we need to adjust it according to
> usage.
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Block size adjustment idea - expedience fees + difficulty scaling proportional to block size (+ fee pool)

2017-03-30 Thread Natanael via bitcoin-dev
On 30 Mar 2017 11:34, "Natanael" wrote:

Block size dependent difficulty scaling. Hardfork required.

Larger blocks mean greater difficulty - but it doesn't scale linearly,
rather a little less than linearly. That means miners can take a penalty in
difficulty to claim a greater number of high-fee transactions in the same
amount of time (effectively increasing the "block size bitrate"),
increasing their profits. When such profitable fees aren't available, they
have to reduce block size.

In other words, the users literally pay miners to increase block size (or
don't pay, which reduces it).


This can be simplified if we do get a fee pool (less hardfork code, more
softfork code), except that the effect will be partially reduced by the
mining subsidy until it approximately reaches parity with average total
fees.

We don't need to alter the difficulty calculation.
Instead we alter the percentage of the fees that the miner gets to claim vs
what he has to donate to the pool, based on the size of the block he
generated.
Larger block = smaller percentage of fees. This is another way to pay for
blocksize. The effect of this is that on average, miners that generate
smaller blocks take a share of what otherwise would be part of the mining
profits of those generating larger blocks.

We would need to keep pieces of the section from above on expected
blocksize calculation, because the closer you are to the expected
blocksize, the more you keep. And thus we need to adjust it according to
usage.
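
To illustrate with a Python sketch (the curve and the constants are made up
and only the direction matters; expected_size is whatever the
expected-blocksize calculation mentioned above yields):

    # Hypothetical fee split: the larger the block, the smaller the share
    # of its fees the miner keeps; the remainder goes to the fee pool.
    def miner_fee_share(block_size: float, expected_size: float) -> float:
        ratio = block_size / expected_size
        if ratio <= 1.0:
            return 1.0       # at or below the expected size: keep all fees
        return 1.0 / ratio   # above it: an ever smaller share

    total_fees = 0.75  # BTC in fees in this block (example figure)
    kept = total_fees * miner_fee_share(1_800_000, 1_000_000)
    donated_to_pool = total_fees - kept
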
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Block size adjustment idea - expedience fees + difficulty scaling proportional to block size (+ fee pool)

2017-03-30 Thread Natanael via bitcoin-dev
On 30 Mar 2017 12:04, "Luke Dashjr" wrote:


I don't see a purpose to this proposal. Miners always mine as if it's their
*only* chance to get the fee, because if they don't, another miner will. Ie,
after 1 block, the fee effectively drops to 0 already.


I believe that with correctly configured incentives, you can make it more
profitable to delay some transactions with lower fees but still include
them in the next block than to include them all at once. This would smooth
out the inclusion of transactions.

This may require changing the difficulty scaling from using a simple
logarithm to a function that first behaves like a logarithm up to some
multiple of the standard block size, after which difficulty starts
increasing faster and reaches a greater-than-linear ratio in expected
required hashes per mined bit. Perhaps tipping over at around a blocksize
3x the standard blocksize. Since the standard blocksize increases with
continuous load after retargeting, the blocksize at which this happens also
increases.
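
To make the shape concrete, a Python sketch (every constant here is a
placeholder, including the 3x tipping point; r is the block size divided by
the standard blocksize):

    import math

    def difficulty_multiplier(r: float) -> float:
        # r = block size / standard blocksize; placeholder constants.
        if r <= 1.0:
            return 1.0
        if r <= 3.0:
            return 1.0 + math.log(r)  # logarithmic region: cheap extra space
        # past the tipping point: required work per extra byte keeps growing
        # (continuous at r = 3, since both branches give 1 + log 3 there)
        return (1.0 + math.log(3.0)) * (r / 3.0) ** 2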

Also, together with the above, the fee pool does slightly counteract what
you say, as they'll get a share via the pool when there are fewer
transactions available the next time they create a block (like insurance).

For a user, what's the incentive to pay an individual miner the fee
directly?
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] Block size adjustment idea - expedience fees + difficulty scaling proportional to block size (+ fee pool)

2017-03-30 Thread Natanael via bitcoin-dev
I had the following ideas as I was thinking about how to allow larger
blocks without incentivizing the creation of excessively large blocks. I
don't know how much of this is novel; I'd appreciate it if anybody could
link to any relevant prior art. I'm making no claims on this, anything
novel in here is directly released into the public domain.

In short, I'm trying to rely on simple but direct incentives to improve the
behavior of Bitcoin.

Feedback requested. Some simulations requested, see below if you're willing
to help. Any questions are welcome.

---

Expedience fees. Softfork compatible.

You want to really make sure your transaction gets processed quickly?
Transactions could have a second fee type: a specially labeled
anyone-can-spend output with an op_return value defining a "best-before"
block number and some function describing the decline of the fee value for
every future block, such that before block N the miners can claim the full
expedience fee + the standard fee (if any), between block N+1 and N+X the
miner can claim a reduced expedience fee + standard fee, and afterwards
only the standard fee.
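
As a sketch of one possible decline function (a linear taper; N, X and the
shape would really come from the op_return payload, so this is purely an
illustration):

    # Hypothetical decay of the expedience fee. fee is in satoshis,
    # n = "best-before" block height, x = taper length in blocks.
    def claimable_expedience_fee(fee: int, n: int, x: int, height: int) -> int:
        if height <= n:
            return fee                          # full expedience fee
        if height <= n + x:
            return fee * (n + x - height) // x  # linear taper, one choice of many
        return 0                                # too late: only the standard fee

    # The unclaimed remainder, fee - claimable_expedience_fee(...), is what
    # gets returned to the designated address as described below.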

When a transaction is processed late, such that the full expedience fee
can't be claimed, the remainder of the expedience fee output is returned to
the specified address among the inputs/outputs (could be something like
in#3 for the address used by the 3rd UTXO input). This would have to be
done for all remaining expedience fees within the last transaction in the
block, inserted there by the miner.

These additional UTXOs do increase overhead somewhat, but hopefully not by
too much. If we're going to modify the transaction syntax eventually, then
we could take the chance to design for this to reduce overhead.

My current best idea for how to handle returned expedience fees in
multiuser transactions (coinjoin, etc) is to donate it to an agreed-upon
address. For recurring donation addresses (the fee pool included!), this
reduces the number of return UTXOs in the fee processing transaction.

The default client policy may be to split the entire fee across an
expedience fee and a fee pool donation, where the donation part becomes
larger the later the transaction gets processed. This is expected to slow
down the average inclusion speed of already delayed transactions, but they
remain profitable to include.

The dynamics here are simple: a miner is incentivized to process a
transaction with an expedience fee before a standard fee of the same
value-per-bit, in order not to reduce the total value of the available fees
of all standing transactions they can process. The longer they wait, the
less total fee value is available.

Sidenote: a steady stream of expedience fees reduces the profitability of
block withholding attacks (!), at some threshold it should make it entirely
unprofitable vs standard mining. This is due to the increased risk of
losing valuable expedience fees added after you finished your first block
(as the available value will be reduced in your block #2, vs what other
miners can claim while still mining on that previous block).
(Can somebody verify this with simulations?)

---

Fee pool. Softfork compatible.

We want to smooth out fee payments too for the future when the subsidy
drops, to prevent deliberate forking to steal fees. We can introduce a
designated P2SH anyone-can-spend fee pool address. The miner can never
claim the full fees from his block or claim the full amount in the pool,
only some percentage of both. The remainder goes back into the pool (this
might be done at the end of the same expedience fee processing transaction
described above). Anybody can deliberately pay to the pool.

The fee pool is intended to act as a "buffer" such that it remains
profitable to not try to steal fees but to just mine normally, even during
relatively extreme fee value variance (consider the end of a big
international shopping weekend).

The fee value claimed by the miners between blocks is allowed to vary, but
we want to avoid order-of-magnitude size variation (10x). We do however
want the effect of expedience fees to have an impact. Perhaps some
logarithmic function can smooth it out? Forcing larger fees to be
distributed over longer time periods?

---

Block size dependent difficulty scaling. Hardfork required.

Larger blocks mean greater difficulty - but it doesn't scale linearly,
rather a little less than linearly. That means miners can take a penalty in
difficulty to claim a greater number of high-fee transactions in the same
amount of time (effectively increasing the "block size bitrate"),
increasing their profits. When such profitable fees aren't available, they
have to reduce block size.

In other words, the users literally pay miners to increase block size (or
don't pay, which reduces it).
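
As a purely illustrative reading of "a little less than linearly" (the 0.9
exponent is a made-up placeholder, not a proposal):

    # Sublinear difficulty scaling: doubling the block size costs a bit
    # less than 2x the work, so strong fee demand can buy extra capacity.
    def scaled_difficulty(base_difficulty: float, size: float,
                          base_size: float) -> float:
        return base_difficulty * (size / base_size) ** 0.9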

(Sidenote: I am in favor of combining this with the idea of a 32 MB max
blocksize (or larger), with softforked scheduled lower size caps (perhaps
starting at 4 MB max) that grows according to a schedule. This reduces th

Re: [bitcoin-dev] Transaction signalling through output address hashing

2017-02-05 Thread Natanael via bitcoin-dev
On 5 Feb 2017 16:33, "John Hardy via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org>:

Currently in order to signal support for changes to Bitcoin, only miners
are able to do so on the blockchain through BIP9.

One criticism is that the rest of the community is not able to participate
in consensus, and other methods of assessing community support are fuzzy
and easily manipulated through Sybil.

I was trying to think if there was a way community support could be
signaled through transactions without requiring a hard fork, and without
increasing the size of transactions at all.

My solution is basically inspired by hashcash and vanity addresses


Censorship by miners isn't the only problem. Existing and normal
transactions will probabilistically collide with these schemes, and most
wallets have no straightforward way of supporting them.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Three hardfork-related BIPs

2017-01-28 Thread Natanael via bitcoin-dev
On 28 Jan 2017 05:04, "Luke Dashjr via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org>:

Satoshi envisioned a system where full nodes could publish proofs of invalid
blocks that would be automatically verified by SPV nodes and used to ensure
even they maintained the equivalent of full node security so long as they
were
not isolated. But as a matter of fact, this vision has proven impossible,
and
there is to date no viable theory on how it might be fixed. As a result, the
only way for nodes to have full-node-security is to actually be a true full
node, and therefore the plan of only having full nodes in datacenters is
simply not realistic without transforming Bitcoin into a centralised system.


Besides Zero-knowledge proofs, which are capable of proving so much more
than just validity, there are multiple types of fraud proofs that only rely
on the format of the blocks. Such as publishing the block header + the two
colliding transactions included in it (in the case of double spending), or,
if the syntax or logic is broken, then you just publish that single
transaction.

There aren't all that many cases where fraud proofs are unreasonably large
for a networked system like Bitcoin. If Zero-knowledge proofs can be
applied securely, then I can't think of any exceptions at all for when the
proofs would be unmanageable. (Remember that full Zero-knowledge proofs can
be chained together!)
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Anti-transaction replay in a hardfork

2017-01-24 Thread Natanael via bitcoin-dev
On 25 Jan 2017 08:22, "Johnson Lau" wrote:

Assuming Alice is paying Bob with an old style time-locked tx. Under your
proposal, after the hardfork, Bob is still able to confirm the time-locked
tx on both networks. To fulfil your new rules he just needs to send the
outputs to himself again (with different tx format). But as Bob gets all
the money on both forks, it is already a successful replay


Why would Alice be sitting on an old-style signed transaction with UTXOs,
none of which she controls (paying somebody else), with NO ability to
substitute the transaction for one where she DOES control an output,
leaving her unable to be the one spending the replay-protecting child
transaction?
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Anti-transaction replay in a hardfork

2017-01-24 Thread Natanael via bitcoin-dev
On 25 Jan 2017 08:06, "Johnson Lau" wrote:

What you describe is not a fix of replay attack. By confirming the same tx
in both network, the tx has been already replayed. Their child txs do not
matter.


Read it again.

The validation algorithm would be extended so that the transaction can't be
replayed, because replaying it in the other network REQUIRES a valid child
transaction in the same block, a child transaction that is unique to the
network. By doing this policy change simultaneously in both networks, old
pre-signed transactions *can not be replayed by anybody but the owner* of
the coins (as he must spend them immediately in the child transaction).

It means that as soon as the coins are spent, the UTXO sets immediately and
irrevocably diverge across the two networks. Which is the entire point,
isn't it?
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Anti-transaction replay in a hardfork

2017-01-24 Thread Natanael via bitcoin-dev
On 24 Jan 2017 15:33, "Johnson Lau via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org>:



B. For transactions created before this proposal is made, they are not
protected from anti-replay. The new fork has to accept these transactions,
as there is no guarantee that the existing fork would survive nor maintain
any value. People made time-locked transactions in anticipation that they
would be accepted later. In order to maximise the value of such
transactions, the only way is to make them accepted by any potential
hardforks.


This can be fixed.

Make old-format transactions valid *only when paired with a fork-only
follow-up transaction* which spends at least one (or all) of the outputs of
the old-format transaction.

(Yes, I know this introduces new statefulness into the block validation
logic. Unfortunately it is necessary for maximal fork safety. It can be
disabled at a later time if ever deemed no longer necessary.)

Meanwhile, the old network SHOULD soft-fork in an identical rule with a
follow-up transaction format incompatible with the fork.

This means that old transactions can not be replayed across forks/networks,
because they're not valid when stand-alone. It also means that all wallet
clients either need to be updated OR paired with software that intercepts
generated transactions, and automatically generates the correct follow-up
transaction for it (old network only).

The rules should be that old-format transactions can't reference new-format
transactions, even if only a softfork change differs between the formats.
This prevents an unnecessary number of transaction pairs being generated by
old wallets. Thus they can spend old outputs, but not spend new ones.
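
A minimal Python sketch of the pairing rule (transaction formats reduced to
a flag, signatures and everything else omitted; the Tx type here is just a
stand-in for illustration):

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class Tx:
        txid: str
        old_format: bool
        inputs: List[Tuple[str, int]]  # (txid, output index) spent by this tx

    # Every old-format tx must have a new-format child in the same block
    # spending at least one of its outputs, otherwise the block is invalid.
    def block_respects_pairing(txs: List[Tx]) -> bool:
        for tx in txs:
            if not tx.old_format:
                continue
            paired = any(
                not child.old_format
                and any(spent == tx.txid for spent, _ in child.inputs)
                for child in txs
            )
            if not paired:
                return False
        return True
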
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Could a sidechain protocol create an addressable "Internet of Blockchains" and facilitate scaling?

2016-10-11 Thread Natanael via bitcoin-dev
On 12 Oct 2016 01:33, "John Hardy via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org>:
> Sidechains seem an inevitable tool for scaling. They allow Bitcoins to be
transferred from the main blockchain into external blockchains, of which
there can be any number with radically different approaches.
>
> In current thinking I have encountered, sidechains are isolated from each
other. To move Bitcoin between them would involve a slow transfer back to
the mainchain, and then out again to a different sidechain.
>
> Could we instead create a protocol for addressable blockchains, all using
a shared proof of work, which effectively acts as an Internet of
Blockchains?

More of a treechain / clusterchain, then?

> Instead of transferring Bitcoin into individual sidechains, you move them
into the master sidechain, which I'll call Angel. The Angel blockchain sits
at the top of of a tree of blockchains, each of which can have radically
different properties, but are all able to transfer Bitcoin and data between
each other using a standardised protocol.
>
> Each blockchain has its own address, much like an IP address. The Angel
blockchain acts as a registrar, a public record of every blockchain and its
properties. Creating a blockchain is as simple as getting a
createBlockchain transaction included in an Angel block, with details of
parameters such as block creation time, block size limit, etc. A
decentralised DNS of sorts.
>
> Mining in Angel uses a standardised format, creating hashes which allow
all different blockchains to contribute to the same Angel proof of work.
Miners must hash the address of the blockchain they are mining, and if they
mine a hash of sufficient difficulty for that blockchain they are able to
create a block.
>
> Blockchains can have child blockchains, so a child of Angel might have
the address aa9, and a child of aa9 might have the address aa9:e4d. The
lower down the tree you go, the lower the security, but the lower the
transaction fees. If a miner on a lower level produces a hash of sufficient
difficulty, they can use it on any parents, up to and including the Angel
blockchain, and claim fees on each.
>
> Children always synchronise and follow all parents (and their
reorganisations), and parents are aware of their children. This allows you
to do some pretty cool things with security. If a child tries to withdraw
to a parent after spending on the child (a double spend attempt) this will
be visible instantly, and all child nodes will immediately be able to
broadcast proof of the double spend to parent chain nodes so they do not
mine on those blocks. This effectively means children can inherit a level
of security from their parents without the same PoW difficulty.
>
> There are so many conflicting visions for how to scale Bitcoin. Angel
allows the free market to decide which approaches are successful, and for
complementary blockchains with different use cases, such as privacy, high
transaction volume, and Turing completeness to more seamlessly exist
alongside each other, using Bitcoin as the standard medium of exchange.
>
> I wrote this as a TLDR summary for a (still evolving) idea I had on the
best approach to scale Bitcoin infinitely. I've written more of my thoughts
on the idea at
https://seebitcoin.com/2016/09/introducing-buzz-a-turing-complete-concept-for-scaling-bitcoin-to-infinity-and-beyond/
>
> Does anybody think this would be a better, more efficient way of
implementing sidechains? It allows infinite scaling, and standardisation
allows better pooling of resources.

I've had a similar idea since quite a while back, but I've never really
written it down in full. Here's one link:

http://www.metzdowd.com/pipermail/cryptography/2015-January/024338.html

Some thoughts on how to design it;

The basic idea is to compress the validation data maximally, and yet
achieve Turing completeness for an arbitrary number of interacting chains,
or "namespaces".

The whole thing is checkpointed and uses Zero-knowledge proofs to enable
secure pruning, making it essentially a rolling blockchain with complete
preservation of history. It grows approximately linearly with
non-deprecated state.

The latest checkpoint's header + the following headers and accompanying
Zero-knowledge proofs would together act as the root for the system.

Having that is all you would need to confirm that any particular piece of
data from the blockchain is correct, given that it comes with enough
metadata to trace it all the way to the root (Merkle tree hashes, ZKPs,
etc).

Every chain would be registered under a unique name (the root chain would
essentially just deal with registering chain names + their rules), and
would define its own external API towards other chains, and it would define
its own rules for how its data can be updated and when. Every single
interaction with a chain is done by an atomic program (transaction), and
all sets of validated changes must be conflict-free (especially across
chains). Everything would

Re: [bitcoin-dev] Bitcoin Guarantees Strong, not Eventual, Consistency.

2016-03-02 Thread Natanael via bitcoin-dev
To say that Bitcoin is strongly consistent is to say that the memory pool
and the last X blocks aren't part of Bitcoin. If you want to avoid making
that claim, you can at best argue that Bitcoin has both a strongly
consistent component AND an eventually consistent component.

The entire point of the definition of eventual consistency is that your
computer system is running continuously and DOES NOT have a final state,
and therefore you must be able to describe its behavior when the responses
it gives to queries across time are either perfectly consistent *or not*
perfectly consistent.

And Bitcoin by default *does not* ignore the contents of the last X blocks.
A Bitcoin node being queried about the current blockchain state WILL give
inconsistent answers when there are block reorganizations = no strong
consistency. Not to mention that your definition ignores the nonzero
probability of a block reorganization extending beyond your constant omega.

Bitcoin provides a probabilistic, accumulating consistency guarantee. Not a
perfect one.
On 2 Mar 2016 04:04, "Emin Gün Sirer" <
bitcoin-dev@lists.linuxfoundation.org>:

>
> There seems to be a perception out there that Bitcoin is eventually
> consistent. I wrote this post to describe why this perception is completely
> false.
>
> Bitcoin Guarantees Strong, not Eventual, Consistency
>
> http://hackingdistributed.com/2016/03/01/bitcoin-guarantees-strong-not-eventual-consistency/
>
> I hope we can lay this bad meme to rest. Bitcoin provides a strong
> guarantee.
> - egs
>
>
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] We need to fix the block withholding attack

2015-12-20 Thread Natanael via bitcoin-dev
Wouldn't block withholding be fixed by not letting miners in pools know
which block candidates are valid before the pool knows? (Note: I haven't
read any other proposals for how to fix it; this may already be known)

As an example, by having the pool use the unique per-miner nonces sent to
each miner for effective division of labor as a kind of seed / commitment
value, where one in X block candidates will be valid, where X is the
current ratio between partial PoW blocks sent as mining proofs and the full
difficulty?

The computational work of the pool remains low (checking this isn't harder
than the partial PoW validation already performed): the pool simply looks
at which commitment value from the pool the miner used, looks up the
correct committed value and hashes that together with the partial PoW. If
it hits the target, the block is valid.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] We need to fix the block withholding attack

2015-12-20 Thread Natanael via bitcoin-dev
On 20 Dec 2015 12:38, "Tier Nolan via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org>:
>
> On Sun, Dec 20, 2015 at 5:12 AM, Emin Gün Sirer <
bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>>  An attacker pool (A) can take a certain portion of its hashpower,
>> use it to mine on behalf of victim pool (B), furnish partial proofs of
work
>> to B, but discard any full blocks it discovers.
>
> I wonder if part of the problem here is that there is no pool identity
linked to mining pools.
>
> If the mining protocols were altered so that miners had to indicate their
identity, then a pool couldn't forward hashing power to their victim.

Our approaches can be combined.

Each pool (or solo miner) has a public key included in their blocks that
identifies them to their miners (solo miners can use their own unique
random keys every time). This public key may be registered with DNSSEC+DANE
and the pool could point to their domain in the block template as an
identifier.

For each block the pool generates a nonce, and for each of every miner's
workers it double-hashes that nonce with their own public key and that
miner's worker ID and the previous block hash (to ensure no accidental
overlapping work is done).

The double-hash is a commitment hash; the first hash is the committed value
to be used by the pool as described below. Publishing the nonce reveals to
the miners how the hashes were derived.

Each miner puts this commitment hash in their blocks, and also the public
key of the pool separately as mentioned above.

Here's where it differs from standard mining: both the candidate block PoW
hash and the pool's commitment value above determine block validity
together.

If the total difficulty is X and the ratio of full blocks to candidate
blocks shared with the pool is Y, then the candidate block PoW now has to
meet X/Y, while hashing the candidate block PoW + the pool's commitment
hash must meet Y, which together makes for X/Y*Y = X and thus the same
total difficulty.

So now miners don't know if their blocks are valid before the pool does, so
withholding isn't effective, and the public key identifiers simultaneously
stop a pool from telling honest but naive miners to attack other pools
using whatever other schemes one might come up with.

The main differences are that there's a public key identifier the miners
are told about in advance and expect to see in block templates, that the
pool now has to publish this commitment value together with the block that
also contains the commitment hash, and that this is verified together with
the PoW.
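
A minimal Python sketch of the two-part check, using the usual
simplification target = 2**256 / difficulty (X and Y as above; the exact
serialization and hash construction are hand-waved):

    import hashlib

    def as_int(b: bytes) -> int:
        return int.from_bytes(b, "big")

    def meets_miner_share(header: bytes, X: int, Y: int) -> bool:
        # The miner can verify this part himself: candidate PoW meets X/Y.
        return as_int(hashlib.sha256(header).digest()) <= 2**256 // (X // Y)

    def fully_valid(header: bytes, committed_value: bytes, X: int, Y: int) -> bool:
        if not meets_miner_share(header, X, Y):
            return False
        # Only the pool's committed value decides this part, so roughly
        # 1 in Y candidates turn out to be real blocks - and the miner
        # can't tell which ones before the pool does.
        pow_hash = hashlib.sha256(header).digest()
        coupled = hashlib.sha256(pow_hash + committed_value).digest()
        return as_int(coupled) <= 2**256 // Y
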
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Open Block Chain Licence, BIP[xxxx] Draft

2015-09-01 Thread Natanael via bitcoin-dev
On 2 Sep 2015 00:03, "Btc Drak via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org>:
>
> I think it gets worse. Who are the copyright owners (if this actually
> applies). You've got people publishing transaction messages, you've
> got miners reproducing them and publishing blocks. Who are all the
> parties involved? Then to take pedantry to the next level, does a
> miner have permission to republish messages? How do you know? What if
> the messages are reproducing others copyright/licensed material? It's
> not possible to license someone else's work. There are plenty rabbit
> holes to go down with this train of thought.

Worse yet - transaction malleability creates derivative works with multiple
copyright holders (the original one, plus the author of the modification).
Is that even legal to do? What to do if a miner unknowingly accepts an
illegally modified transaction in a block? And can he who modified it ALSO
sue anybody replicating the block for infringement?

Better just put everything in public domain, or the closest thing to it you
can get. Copyright in the blockchain is essentially the DVDCSS illegal
prime mess all over again, but in a P2P network.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Open Block Chain Licence, BIP[xxxx] Draft

2015-09-01 Thread Natanael via bitcoin-dev
Creative Commons Zero, if anything at all.

It essentially emulates public domain in jurisdictions that do not
officially have a public domain.

- Sent from my tablet
On 1 Sep 2015 15:30, "Ahmed Zsales via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org>:

> Hello,
>
> We believe the network requires a block chain licence to supplement the
> existing MIT Licence which we believe only covers the core reference client
> software.
>
> Replacing or amending the existing MIT Licence is beyond the scope of this
> draft BIP.
>
> Rationale and details of our draft BIP for discussion and evaluation are
> here:
>
>
> https://drive.google.com/file/d/0BwEbhrQ4ELzBMVFxajNZa2hzMTg/view?usp=sharing
>
> Regards,
>
> Ahmed
>
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Censorship

2015-08-31 Thread Natanael via bitcoin-dev
One last comment here on this topic;

Anybody who wants to discuss decentralized communication mechanisms in
general can come to www.reddit.com/r/p2pcomms (up until these decentralized
forums have become stable and common).

I've seen quite a few more of these projects lately; I want to make a list
of them and would definitely like to contribute to making them not just
usable, but good enough to gain popularity on their own.

(And in case you wonder about my approach to moderation: let every user
pick which moderators / filters / servers he trusts, and let them share
their subscription preferences in place of sharing links to centralized
forums.)

- Sent from my phone
On 31 Aug 2015 10:45, "NxtChg via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org>:

>
> >I am creating a de-centralized forum, and I mean truly decentralized as I
> nor anyone else will be able to control it.
>
> Zander is working on the same thing:
> https://www.reddit.com/r/AetheralResearch/
>
> But it's actually quite difficult to make it truly censorship-resistant:
> both in solving the theymos factor and spam/abuse/overloading as an attack.
>
>
> >There is no doubt that the centralization and censorship of the Bitcoin
> community is massively inhibiting the advance of Bitcoin
> >and also the growth of the Bitcoin economy. We are scaring away
> intellectuals, businessman, and newbies that are just getting started.
>
> We have /r/bitcoinxt and so far it has been great. But we also need a
> regular forum.
>
> Roger Ver controls bitcoin.com, as I understand?
> https://bitcoin.com/forum/ would be nice.
>
> And it must be a real community, not "say whatever you want because free
> speech". We've seen how that turned out to be.
>
> Something like battle.net or Steam forums: heavily moderated, not for
> opinions, but for spam/noise/insults.
>
> Again, this needs leadership. Anyone can install a forum software, what is
> needed is an "official seal of approval" and regular presence of top XT
> people there.
>
> And a will to setup proper moderation. Then people will move.
>
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Consensus based block size retargeting algorithm (draft)

2015-08-29 Thread Natanael via bitcoin-dev
My current idea:

* There's a scheduled hardcap that goes up over time.

* Miners vote on the blocksize limit within the hardcap, choosing the new
votecap. I have no particular idea for the scheduling of changes. The
2016-block period seems a bit long though, in case of sudden peak load.
(I'd suggest rolling vote over X blocks, enacted Y blocks later (with votes
counted from block A to block B = block A+X, the change is enacted at block
C = B+Y = A+X+Y). I'm fine with fixed-period schedules too if they span a
reasonable time, such as IMHO 2 days - we need rapid peak adjustment. No
suggestion on vote result calculation mechanism.)

* Casting votes are free.

* The mean (average) blocksize over the last time period X is calculated
for every block, or at the end of every fixed-length period (depending on
what scheduling is used for votes).

* Creating blocks larger than the mean but below the votecap raises the
difficulty for the miner (and slightly raises the mean for future
blocks).

* The degree of the difficulty raise depends on where between the mean and
the votecap the size of the given block sits (and it follows that lots of
votes for a large raise reduce the per-extra-kB penalty, allowing for
cheaper peak load adjustment if a large miner majority agrees). The degree
of increase may be either linear or logarithmic; I've got no suggestion
currently on any particular metric, but see the sketch after this list.
(Some might think this is an easy way for miners to collude to make large
blocks cheaper. If so, you could commit to only paying fees to miners that
don't vote for a block size above the size you accept, as a
counter-incentive.)

* Question: When the votecap is lowered, should the calculated mean be
forced down to follow (forcing a penalty for making blocks close to the
votecap straight after the change)? If so, how? Or should it be allowed to
fall naturally as new blocks with size below the votecap are created?
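
The sketch mentioned above, in Python (the linear interpolation and the 0.5
constant are placeholders; a logarithmic curve would slot in the same way):

    # Illustrative difficulty penalty for blocks sized between the rolling
    # mean and the voted cap. Assumes votecap > mean.
    def difficulty_factor(size: float, mean: float, votecap: float) -> float:
        if size <= mean:
            return 1.0
        position = (size - mean) / (votecap - mean)  # 0.0 at mean, 1.0 at cap
        max_penalty = 0.5                            # placeholder constant
        return 1.0 + max_penalty * position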

This is how miners would pay for actually creating larger blocks, and it
leaves us with three methods of keeping the size in check (hardcap, votecap
and softcap). The softcap mechanism is then our third check to use if
deemed necessary (orphaning valid blocks if considered problematically
large). This third option does not need coordination with miners; they just
need to be aware of which block size is accepted by the community.

I can't think of any sensible non-miner mechanism for deciding the max
block size outside of using a community-coordinated softcap; anything else
will not work reliably. It's too hard to measure objectively and judge
fairly.

The community would thus agree on a hardcap schedule in advance, and have
the option to threaten orphaning blocks via softfork later on if
circumstances would change and the votecap is too large.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] If you had a single chance to double the transactions/second Bitcoin allows...

2015-08-07 Thread Natanael via bitcoin-dev
On 7 Aug 2015 23:37, "Sergio Demian Lerner via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org>:
>
> Mark,
> It took you 3 minutes to respond to my e-mail. And I responded to you 4
minutes later. If you had responded to me in 10 minutes, I would be of out
the office and we wouldn't have this dialogue. So 5 minutes is a lot of
time.
>
> Obviously this is not a technical response to the technical issues you
argue. But "minutes" is a time scale we humans use to measure time very
often.

But what's more likely to matter is seconds. What you need then is some
variant of multisignature notaries (Greenaddress.it, lightning network),
where the combination of economic incentives and legal liability gives you
the assurance of doublespend protection from the time of publication of the
transaction to the first block confirmation.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Making Electrum more anonymous

2015-07-22 Thread Natanael via bitcoin-dev
- Sent from my tablet
On 22 Jul 2015 17:51, "Thomas Voegtlin via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org>:
>
> Hello,
>
> Although Electrum clients connect to several servers in order to fetch
> block headers, they typically request address balances and address
> histories from a single server. This means that the chosen server knows
> that a given set of addresses belong to the same wallet. That is true
> even if Electrum is used over TOR.
>
> There have been various proposals to improve on that, but none of them
> really convinced me so far. One recurrent proposal has been to create
> subsets of wallet addresses, and to send them to separate servers. In my
> opinion, this does not really improve anonymity, because it requires
> trusting more servers.
>
> Here is an idea, inspired by TOR, on which I would like to have some
> feedback: We create an anonymous routing layer between Electrum servers
> and clients.

Why not look at something like Dissent? http://dedis.cs.yale.edu/dissent/

This protocol reduces the impact of Sybil attacks.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev