Re: [Bitcoin-development] PSA: Please sign your git commits

2014-05-23 Thread Kyle Jerviss
Multisig is great for irreversible actions, but pointless most of the 
time, which is why no PGP developer or user ever thought to implement it.

If you lose a key and an attacker signs a bogus email or commit with it, 
we all roll back with no lasting harm done.

Wladimir wrote:
> On Thu, May 22, 2014 at 8:06 PM, Jeff Garzik wrote:
>> Related:  Current multi-sig wallet technology being rolled out now,
>> with 2FA and other fancy doodads, is now arguably more secure than my
>> PGP keyring.  My PGP keyring is, to draw an analogy, a non-multisig
>> wallet (set of keys), with all the associated theft/data
>> destruction/backup risks.
>>
>> The more improvements I see in bitcoin wallets, the more antiquated my
>> PGP keyring appears.  Zero concept of multisig.  The PGP keyring
>> compromise process is rarely exercised.  2FA is lacking.  At least
>> offline signing works well. Mostly.
> Would be incredible to have multisig for git commits as well. I don't
> think git supports multiple signers for one commit at this point -
> amending the signature replaces the last one - but it would allow for
> some interesting multi-factor designs in which the damage when a dev's
> computer is compromised would be reduced.
>
> Sounds like a lot of work to get a good workflow there, though.
>
> My mail about single-signing commits was already longer than I
> expected when I started writing it, even though the process is
> really simple.
>
> Though if anyone's interest is piqued by this, please pick it up.
>
> Wladimir
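
One way to approximate the multi-signer idea above, sketched as a
hypothetical workflow rather than an existing git feature: each
co-signer makes a detached GPG signature over the commit hash and
attaches it as a git note under a dedicated ref, so an N-of-M check can
be scripted on top. The "cosigs" ref name and the helper below are
invented for illustration.

import subprocess

# Hypothetical co-signing helper (not a git feature): sign the commit
# hash of HEAD with a detached GPG signature and append it as a git
# note under refs/notes/cosigs.
def cosign_head(key_id: str) -> None:
    commit = subprocess.check_output(
        ["git", "rev-parse", "HEAD"], text=True).strip()
    # gpg reads the hash from stdin and writes an armored detached
    # signature to stdout
    sig = subprocess.run(
        ["gpg", "--armor", "--detach-sign", "-u", key_id],
        input=commit.encode(), capture_output=True, check=True).stdout
    subprocess.run(
        ["git", "notes", "--ref", "cosigs", "append", "-m",
         sig.decode(), commit], check=True)

Verification would walk the notes ref, run gpg --verify against each
signature, and count distinct trusted keys.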


Re: [Bitcoin-development] we can all relax now

2013-11-06 Thread Kyle Jerviss
You are ignoring the gambler's ruin. We do not operate on an infinite 
timeline.  If you find a big pool willing to try this, please give me 
enough advance warning to get my popcorn ready.


Peter Todd wrote:

On Wed, Nov 06, 2013 at 01:06:47PM -0500, Christophe Biocca wrote:

I might try building this sometime soon. I think it may also serve an
educational purpose when trying to understand the whole network's behaviour.

What level of accuracy are we looking for though? Obviously we need to
fully emulate the steps of the network protocol, and we need to be able to
specify time taken for transmission/processing for each node. Do we care
about the actual contents of the messages (to be able to simulate double
spend attempts, invalid transactions and blocks, SPV node communication),
and their validation (actual signatures and proof of work)?

I imagine the latter is pretty useless, beyond specifying that the
signature/proof of work is valid/invalid.

If we could build up a set of experiments we'd like to run on it, it would
help clarify what's needed.

Off the top of my head:

- Peter Todd's miner strategy of sending blocks to only 51% of the
hashpower.

Speaking of, I hadn't gotten around to doing up the math behind that
strategy properly; turns out 51% was overly optimistic and the actual
threshold is 29.3%.

Suppose I find a block. I have Q hashing power, and the rest of the
network 1-Q. Should I tell the rest of the network, or withhold that
block and hope I find a second one?

Now in a purely inflation-subsidy environment, where I don't care about
the other miners' success, of course I should publish. However, if my
goals are to find *more* blocks than the other miners for whatever
reason, maybe because transaction fees matter or I'm trying to get
nLockTime'd announce/commit fee sacrifices, it gets more complicated.


There are three possible outcomes:

1) I find the next block, probability Q
2) They find the next block, probability 1-Q
2.1) I find the next block, probability Q, or (1-Q)*Q in total.
2.2) They find the next block, probability (1-Q)^2 in total.

Note how only in the last outcome do I lose. So how much hashing power
do I need before the other miners are just as likely to find the next
two blocks in a row as I am to find at least one of them? Easy enough:

Q + (1-Q)*Q = (1-Q)^2 -> (1-Q)^2 = 1/2 -> Q = 1 - 2^(-1/2)

Q ~= 29.3%

So basically, if I'm trying to beat other miners, once I have >29.3% of
the hashing power I have no incentive to publish the blocks I mine!
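
A quick numeric check of that threshold (plain Python, nothing
Bitcoin-specific):

# Withholding one block: I lose only if the rest of the network finds
# the next two blocks in a row, so the break-even solves (1-Q)^2 = 1/2.
Q = 1 - 2 ** -0.5
print(Q)  # 0.29289... ~ 29.3%
assert abs((Q + (1 - Q) * Q) - (1 - Q) ** 2) < 1e-12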

But hang on, does it matter if I'm the one who actually has that hashing
power? What if I just make sure that only >29.3% of the hashing power
has that block? If my goal is to make sure that someone does useless
work, and/or they are working on a lower height block than me, then no,
I don't care, which means my original "send blocks to >51% of the
hashing power" analysis was actually wrong, and the strategy is even
more crazy: "send blocks to >29.3% of the hashing power" (!)


Let's suppose I know that I'm two blocks ahead:

1) I find the next block: Q                (3:0)
2) They find the next block: (1-Q)         (2:1)
2.1) I find the next block: (1-Q)*Q        (3:1)
2.2) They find the next block: (1-Q)^2     (2:2)
2.2.1) I find the next block: (1-Q)^2 * Q  (3:2)
2.2.2) They find the next block: (1-Q)^3   (2:3)

At what hashing power should I release my blocks? So remember, I win
this round on outcomes 1, 2.1, 2.2.1 and they only win on 2.2.2:

Q + (1-Q)*Q + (1-Q)^2*Q = (1-Q)^3 -> (1-Q)^3 = 1/2 -> Q = 1 - 2^(-1/3)

Q ~= 20.6%

Interesting... so as I get further ahead, or more precisely as the
group of miners who have a given block gets further ahead, less hashing
power is needed before my incentive is to *not* publish the block I
just found. Conversely this means I should try to make my blocks
propagate to a smaller share of the hashing power, by whatever means
necessary.
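
The pattern generalizes: with a lead of k blocks, the withholder loses
only if the rest of the network finds k+1 blocks in a row, so the
break-even hashpower solves (1-Q)^(k+1) = 1/2. A quick check:

# Break-even hashpower for a withholder k blocks ahead:
# (1-Q)^(k+1) = 1/2  =>  Q = 1 - 2^(-1/(k+1))
for k in range(1, 6):
    q = 1 - 2 ** (-1 / (k + 1))
    print(f"lead {k}: withholding pays above Q = {q:.1%}")

This prints 29.3% for a lead of one and 20.6% for a lead of two,
matching the two cases worked out above.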

Now remember, none of the above strategy requires me to have a special
low-latency network or anything fancy. I don't even have to have a lot
of hashing power - the strategy still works if I'm, say, a 5% pool. It
just means I don't have the incentives people thought I did to propagate
my blocks widely.

The other nasty thing about this, is suppose I'm a miner and recently
got a block from another miner: should I forward that block, or not
bother? Well, it depends: if I have no idea how much of the hashing
power has that block, I should forward the block. But again, if my goal
is to be most likely to get the next block, I should only forward in
such a way that >30% of the hashing power has the block.

This means that if I have some information about what % already has that
block, I have less incentive to forward! For instance, suppose that
every major miner has been publishing their node addresses in their
blocks - I'll have a pretty good idea of who probably has that most
recent block, so I can easily make a well-optimized decision not to
forward. Similarly because the 30

Re: [Bitcoin-development] we can all relax now

2013-11-06 Thread Kyle Jerviss
Each block that you solve has a reward.  In practice, some blocks will 
be orphaned, so the expected reward is slightly less than the nominal 
reward.  Each second that you delay publishing a block, the expected 
reward drops somewhat.


On an infinite timeline, the total reward approaches the expected 
reward.  But reality is discrete, and zero tends to be a brick wall.  If 
you delay publishing a block, you will get either the nominal reward, or 
zero, not some fraction in between.  And if your personal random walk 
involves an excursion through negative land, you may not stick around 
long enough for it to come back.


Thus, a positive expected value is not sufficient for some strategy to 
be a good one.
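
A toy illustration of the point, with made-up numbers: a bet with
positive expected value per round can still ruin a player whose
bankroll cannot absorb the variance.

import random

def ruin_fraction(p_win, reward, cost, bankroll, rounds, trials=10_000):
    """Each round costs `cost` and pays `reward` with probability
    `p_win`.  Returns the fraction of trials that go broke before
    `rounds` end, even when p_win*reward - cost is positive."""
    ruined = 0
    for _ in range(trials):
        funds = bankroll
        for _ in range(rounds):
            funds += reward if random.random() < p_win else 0.0
            funds -= cost
            if funds < 0:
                ruined += 1
                break
    return ruined / trials

# EV per round is 0.03*25 - 0.5 = +0.25, yet roughly half the runs go
# broke before collecting a first reward -- all numbers illustrative.
print(ruin_fraction(p_win=0.03, reward=25.0, cost=0.5,
                    bankroll=10.0, rounds=5_000))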


Peter Todd wrote:

On Wed, Nov 06, 2013 at 10:15:40PM -0600, Kyle Jerviss wrote:

You are ignoring the gambler's ruin. We do not operate on an
infinite timeline.  If you find a big pool willing to try this,
please give me enough advance warning to get my popcorn ready.

Gambler's ruin has nothing to do with it.

At every point you want to evaluate the chance the other side will get
ahead, vs. cashing in by just publishing the blocks you have (or some
of them). I didn't mention it in the analysis, but obviously you want to
keep track of how much the blocks you haven't published are worth to
you, and consider publishing some or all of your lead to the rest of the
network if you stand to lose more than you gain.

Right now it's a mostly theoretical attack because the inflation subsidy
is enormous and fees don't matter, but once fees do start to matter
things get a lot more complex. An extreme example is announce/commit
sacrifices to mining fees: if I'm at block n+1, the rest of the network
is at block n, and there's a 100BTC sacrifice at block n+2, I could
easily be in a situation where I have zero incentive to publish my block
to keep everyone else behind me, and just hope I find block n+2. If I
do, great! I'll immediately publish to lock in my winnings and start
working on block n+3.
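
Putting toy numbers on that example (25 BTC subsidy R, a 100 BTC fee F
at n+2, and a grossly simplified winner-takes-all race model of my own;
ties and deeper reorgs ignored):

# Publishing n+1 locks in R and leaves a fair fight for the fee-laden
# block n+2; withholding gambles both rewards on winning the race.
R, F = 25.0, 100.0
for Q in (0.05, 0.10, 0.20, 0.30):
    ev_publish = R + Q * (R + F)
    ev_withhold = (1 - (1 - Q) ** 2) * (2 * R + F)
    print(f"Q={Q:.0%}: publish {ev_publish:6.2f}  withhold {ev_withhold:6.2f}")

Under these toy assumptions withholding starts to beat publishing
somewhere between 10% and 20% hashpower, illustrating how a large fee
at n+2 strengthens the incentive to withhold.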


Anyway, my covert suggestion that pools contact me was more to hopefully
strike fear into the people mining at a large pool and get them to
switch to a small one. :) If everyone mined solo or on p2pool none of
this stuff would matter much... but we can't force them to, yet.





Re: [Bitcoin-development] we can all relax now

2013-11-06 Thread Kyle Jerviss
What I want is configurable 1/10/100 millisecond ticks, and accurate 
flow of information.


It doesn't seem necessary to really emulate the whole protocol, nor to 
be overly concerned with the content of messages, nor to simulate every 
little housekeeping step or network message.


I'm not looking for a bitcoin-network-in-a-bottle, I just want to see 
flows.  In the current situation, how often does a miner win if they 
hold their block until they see another one?  How does that change with 
various numbers of remote sensors?
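
To make that concrete, a minimal sketch of the kind of tick-driven
experiment meant here; every parameter (tick size, hashpower split, the
crude race-resolution rule at the end) is invented for illustration:

import random

TICK_MS  = 1_000     # coarse 1 s tick to keep the demo fast; the
                     # 1-100 ms ticks above just scale the probabilities
BLOCK_MS = 600_000   # network-wide average time between blocks
Q        = 0.3       # withholder's share of hashpower
P_ME     = Q * TICK_MS / BLOCK_MS        # my per-tick block chance
P_NET    = (1 - Q) * TICK_MS / BLOCK_MS  # everyone else's

def trial() -> bool:
    """One race: I hold a secret block until a rival block appears.
    Finding a second secret block wins outright; otherwise I release
    and win only if I find the next block first (the rest of the
    network now mines on the rival)."""
    while True:
        if random.random() < P_ME:
            return True
        if random.random() < P_NET:
            return random.random() < Q

runs = 20_000
print(sum(trial() for _ in range(runs)) / runs)  # ~0.51 for Q = 0.3

A real simulator would add per-link latencies and partial propagation
instead of the coin-flip race at the end.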


Other applications in the future could very well involve transaction 
spread, double spends, network partitions, transaction replacement, etc.


If the simulation run in question involves blocks, I'd like realistic 
latencies for blocks.  If it is about transactions, the latencies should 
be realistic for transactions.


What is realistic for those?  That brings me to...

I'll kick in another 1 BTC for an instrumentation package for the 
reference client.  Same conditions as before.  A runtime option, 
disabled by default, that collects data for the simulator.  If this 
creates an uproar, I'll also accept a compile-time option. Support 
dumping to a file that can be uploaded to a parser as the bare minimum, 
and if you are feeling clever, add automatic uploads to a server 
specified in the conf file, or whatever.  All data should be anonymous, 
of course.  Local file should be in a format that humans can read (JSON, 
XML, CSV, etc) so that people can verify that the data is indeed anonymous.


I want stats on peers (number, turnover, latency, in/out, etc), stats on 
local operations (I/O stats, sigs per second when verifying a block, 
fraction of sig cache hits when validating, etc) and whatever else might 
be useful to a simulator.  Each parameter should collect min, max, mean, 
std. deviation, etc so that the simulator can provide realistic virtual 
nodes.
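
For the per-parameter summaries, an online accumulator (Welford's
algorithm) keeps memory constant no matter how many samples are
collected; the field names below are invented, not a proposed format:

import json, math

class RunningStat:
    """Online min/max/mean/std accumulator (Welford's algorithm)."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.mn, self.mx = float("inf"), float("-inf")

    def add(self, x):
        self.n += 1
        self.mn, self.mx = min(self.mn, x), max(self.mx, x)
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)  # running sum of squared devs

    def summary(self):
        std = math.sqrt(self.m2 / (self.n - 1)) if self.n > 1 else 0.0
        return {"n": self.n, "min": self.mn, "max": self.mx,
                "mean": self.mean, "std": std}

peer_latency = RunningStat()
for ms in (120, 95, 300, 180):  # fake samples
    peer_latency.add(ms)
print(json.dumps({"peer_latency_ms": peer_latency.summary()}, indent=2))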


Also, I don't want anyone to think that they need to satisfy me 
personally to collect on either of these two bounties.  I will pay mine 
for a product that is generally along the lines I have laid out, if a 
couple of the core devs (Gavin, Greg, Jeff, sipa, Luke, etc) agree that 
your work is useful.



Christophe Biocca wrote:


I might try building this sometime soon. I think it may also serve an 
educational purpose when trying to understand the whole network's 
behaviour.


What level of accuracy are we looking for though? Obviously we need to 
fully emulate the steps of the network protocol, and we need to be 
able to specify time taken for transmission/processing for each node. 
Do we care about the actual contents of the messages (to be able to 
simulate double spend attempts, invalid transactions and blocks, SPV 
node communication), and their validation (actual signatures and proof 
of work)?


I imagine the latter is pretty useless, beyond specifying that the 
signature/proof of work is valid/invalid.


If we could build up a set of experiments we'd like to run on it, it 
would help clarify what's needed.


Off the top of my head:

- Peter Todd's miner strategy of sending blocks to only 51% of the 
hashpower.
- Various network split conditions, and how aware of the split nodes 
would be (and the effect of client variability).
- Testing the feasibility of network race double spends, or Finney 
attacks.

- Various network partition scenarios.
- Tricking SPV nodes.

On Nov 6, 2013 6:37 AM, "Jeff Garzik" wrote:


I will contribute 1 BTC to this bounty, under same terms and
expiration.





