Re: [bitcoin-dev] Memory leaks?

2015-10-21 Thread Jonathan Toomim via bitcoin-dev
The method I was using was essentially

grep VmRSS /proc/$pid/status

Comparing these two methods, I get

Your method (PSS):
2408313
My method (RSS):
VmRSS:   2410396 kB

On Oct 21, 2015, at 12:29 AM, Tom Zander  wrote:

> On Tuesday 20 Oct 2015 20:01:16 Jonathan Toomim wrote:
> Please make sure you measure your memory usage correctly on Linux, it is
> notoriously easy to get misleading info from tools like top.
> 
> I use this one on Linux.
> 
> $ cat ~/bin/showmemusage
> #!/bin/sh
> if test -z "$1"; then
>echo "need a pid"
>exit
> fi
> 
> mem=`echo 0 $(cat /proc/$1/smaps | grep Pss | awk '{print $2}' | \
> sed 's#^#+#' ) | bc`
> echo "$mem KB"






Re: [bitcoin-dev] request BIP number for: "Support for Datastream Compression"

2015-11-11 Thread Jonathan Toomim via bitcoin-dev
Data compression adds latency and reduces predictability, so engineers have 
generally left compression to the application layer rather than the transport 
layer or below, letting the application designer decide which tradeoffs to make.

On Nov 11, 2015, at 10:49 AM, Marco Pontello via bitcoin-dev 
 wrote:

> A random thought: aren't most communications over a data link already 
> compressed at some point?
> When I used a modem, we had the V.42bis protocol. Now nearly all ADSL 
> connections using PPPoE surely are. And so on.
> I'm not sure another level of generic, data-agnostic compression will 
> really give us any real-life practical advantage over that.
> 
> Something that could take advantage of special knowledge of the specific 
> data, instead, would be an entirely different matter.
> 
> Just my 2c.





Re: [bitcoin-dev] Memory leaks?

2015-10-20 Thread Jonathan Toomim via bitcoin-dev
I did that twice on Sunday. I'll report the results soon. Short version is that 
it looks like valgrind is just finding 200 kB to 600 kB of pblocktemplate, which 
is declared as a static pointer. Not exactly the multi-GB leak I'm looking for, 
but possibly related.

I've also got two bitcoind processes running on the same machine that I started 
at the same time, running on different ports, all with the same settings, but 
one of which is serving getblocktemplate every 5-6 seconds and the other is 
not, while logging RSS on both every 6 seconds. RSS for the non-serving node is 
now 734 MB, and for the serving node 1997 MB. Graphs coming soon.


On Oct 20, 2015, at 3:12 AM, Mike Hearn <he...@vinumeris.com> wrote:

> OK, then running under Valgrind whilst sending gbt RPCs would be the next 
> step.
> 
> On Mon, Oct 19, 2015 at 9:17 PM, Multipool Admin <ad...@multipool.us> wrote:
> My nodes are continuously running getblocktemplate and getinfo, and I also 
> suspected the issue is in either gbt or the rpc server.
> 
> The instance only takes a few hours to get up to that memory usage.
> 
> On Oct 18, 2015 8:59 AM, "Jonathan Toomim via bitcoin-dev" 
> <bitcoin-dev@lists.linuxfoundation.org> wrote:
> On Oct 14, 2015, at 2:39 AM, Wladimir J. van der Laan <laa...@gmail.com> 
> wrote:
>> This is *most likely* the mempool, but is just not reported correctly.
> 
> I did some testing with PR #6410's better mempool reporting. The improved 
> reporting suggests that actual in-memory usage ("usage":) by CTxMemPool is 
> about 2.5x to 3x higher than the serialized transaction sizes ("bytes":). The 
> excess memory usage that I'm seeing is on the order of 100x higher than the 
> mempool "bytes": value. As such, I think it's unlikely that this is the 
> mempool, or at least not normal/correct mempool behavior.
> 
> Another user (ad...@multipool.us) reported 35 GB of RSS usage. I'm guessing 
> his bitcoind has been running longer than any of mine. His server definitely 
> has more RAM. I don't know which email list he is subscribed to (probably 
> XT), so I'm sharing it with both lists to make sure you're all aware of how 
> big an issue this can be.
> 
>> In the meantime you can mitigate the mempool growth by setting `-mintxfee`, 
>> see
>> https://github.com/bitcoin/bitcoin/blob/v0.11.0/doc/release-notes.md#transaction-flooding
> 
> I have mintxfee and minrelaytxfee set to about 0.3, which is high enough 
> to exclude essentially all of the 14700-14800 byte flood transactions. 
> My nodes' mempools only contain about one or two blocks' worth of 
> transactions. So I don't think this is correct either.
> 
> 
> 
> Some additional notes on this issue:
> 
> 1. I think it's related to CreateNewBlock() and getblocktemplate. I ran a 
> Core bitcoind process (commit d78a880) overnight with no mining connected to 
> it, and (IIRC -- my memory is fuzzy) when I woke up it was using around 400 
> MB of RSS and the mempool was at around "bytes":10MB, "usage": 25MB. I ran 
> ./bitcoin-cli getblocktemplate once, and IIRC the RSS shot up to around 800 
> MB. I then ran getblocktemplate every 5 seconds for about 30 minutes, and RSS 
> climbed to 1180 MB. An hour after that with more getblocktemplates, and now 
> RSS is at 1350 MB. [Edit: 1490 MB about 30 minutes later.] getmempoolinfo is 
> still showing "usage" around 25MB or less.
> 
> I'll do some more testing with this and see if I can make it repeatable, and 
> record the results more carefully. Expect a follow-up from me in a day or two.
> 
> 2. valgrind did not show anything super promising. It did report this:
> 
> ==6880== LEAK SUMMARY:
> ==6880==    definitely lost: 0 bytes in 0 blocks
> ==6880==    indirectly lost: 0 bytes in 0 blocks
> ==6880==      possibly lost: 288 bytes in 1 blocks
> ==6880==    still reachable: 10,552 bytes in 39 blocks
> ==6880==         suppressed: 0 bytes in 0 blocks
> (Bitcoin Core commit d78a880)
> 
> and this:
> ==6778== LEAK SUMMARY:
> ==6778==    definitely lost: 0 bytes in 0 blocks
> ==6778==    indirectly lost: 0 bytes in 0 blocks
> ==6778==      possibly lost: 320 bytes in 1 blocks
> ==6778==    still reachable: 10,080 bytes in 32 blocks
> ==6778==         suppressed: 0 bytes in 0 blocks
> (Bitcoin XT commit fe446d)
> 
> I haven't found anything in there yet that I think would produce the multi-GB 
> memory usage after running for a few days, but I could be missing it. Email 
> me if you want the full log.
> 
> I did not try running getblocktemplate while valgrind was running. I'll have 
> to try that. I also have not let valgrind run for more than an hour.
> 
> 
> 
> P.S.: Sorry for al

Re: [bitcoin-dev] Memory leaks?

2015-10-20 Thread Jonathan Toomim via bitcoin-dev
More notes:

1. I ran a side-by-side comparison with two bitcoind processes (Core, same 
recent git commit as before) on the same computer with the same settings 
running on different ports. With both processes, I logged RSS (via 
/proc/$pid/status) every 6 seconds. With one of those processes, I also ran 
bitcoin-cli getblocktemplate > /dev/null every 6 seconds. I let that run for 
about 30 hours. A graph and links to the CSVs of raw data are below. Results 
seem pretty clear: the getblocktemplate RPC is implicated in this issue.


http://toom.im/files/memlog8518.csv
http://toom.im/files/memlog-nogbt-8503.csv
http://toom.im/files/bitcoind_memory_usage_gbt.png


2. I ran valgrind twice, for about 6 hours each, on bitcoind while hitting it 
with getblocktemplate every 6 seconds. Full valgrind output can be found at these 
two URLs:

http://toom.im/files/valgrind-gbt-1.log
http://toom.im/files/valgrind-gbt-2.log

The summary:

==4064== LEAK SUMMARY:
==4064==    definitely lost: 0 bytes in 0 blocks
==4064==    indirectly lost: 0 bytes in 0 blocks
==4064==      possibly lost: 288 bytes in 1 blocks
==4064==    still reachable: 527,594 bytes in 4,367 blocks
==4064==         suppressed: 0 bytes in 0 blocks
The main components of that still reachable section seem to just be one output 
of CreateNewBlock that's cached in case another getblocktemplate request is 
received before any new transactions come in:

==4064== 98,304 bytes in 1 blocks are still reachable in loss record 39 of 40
==4064==    at 0x4C29180: operator new(unsigned long) (vg_replace_malloc.c:324)
==4064==    by 0x28EAA1: __gnu_cxx::new_allocator<CTransaction>::allocate(unsigned long, void const*) (new_allocator.h:104)
==4064==    by 0x27EE44: __gnu_cxx::__alloc_traits<std::allocator<CTransaction> >::allocate(std::allocator<CTransaction>&, unsigned long) (alloc_traits.h:182)
==4064==    by 0x26DFB0: std::_Vector_base<CTransaction, std::allocator<CTransaction> >::_M_allocate(unsigned long) (stl_vector.h:170)
==4064==    by 0x2D5BDE: std::vector<CTransaction, std::allocator<CTransaction> >::_M_insert_aux(__gnu_cxx::__normal_iterator<CTransaction*, std::vector<CTransaction, std::allocator<CTransaction> > >, CTransaction const&) (vector.tcc:353)
==4064==    by 0x2D3FF8: std::vector<CTransaction, std::allocator<CTransaction> >::push_back(CTransaction const&) (stl_vector.h:925)
==4064==    by 0x2D113E: CreateNewBlock(CScript const&) (miner.cpp:298)
==4064==    by 0x442D78: getblocktemplate(UniValue const&, bool) (rpcmining.cpp:513)
==4064==    by 0x390CEB: CRPCTable::execute(std::string const&, UniValue const&) const (rpcserver.cpp:526)
==4064==    by 0x41C5AB: HTTPReq_JSONRPC(HTTPRequest*, std::string const&) (httprpc.cpp:125)
==4064==    by 0x3559BD: boost::detail::function::void_function_invoker2<void (*)(HTTPRequest*, std::string const&), void, HTTPRequest*, std::string const&>::invoke(boost::detail::function::function_buffer&, HTTPRequest*, std::string const&) (function_template.hpp:112)
==4064==    by 0x422520: boost::function2<void, HTTPRequest*, std::string const&>::operator()(HTTPRequest*, std::string const&) const (function_template.hpp:767)
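
For context, here is a minimal sketch of the caching pattern I believe produces 
this record, paraphrased from the getblocktemplate handler in rpcmining.cpp; 
names and details are approximate, not the exact Core code:

static CBlockTemplate* pblocktemplate = nullptr;  // kept alive across RPC calls
static unsigned int nTransactionsUpdatedLast = 0;
static const CBlockIndex* pindexPrev = nullptr;

CBlockTemplate* GetCachedBlockTemplate(const CScript& scriptPubKey)
{
    // Rebuild only when the chain tip moves or the mempool changes;
    // otherwise hand back the cached template.
    if (!pblocktemplate ||
        pindexPrev != chainActive.Tip() ||
        nTransactionsUpdatedLast != mempool.GetTransactionsUpdated())
    {
        delete pblocktemplate;
        pblocktemplate = CreateNewBlock(scriptPubKey);
        pindexPrev = chainActive.Tip();
        nTransactionsUpdatedLast = mempool.GetTransactionsUpdated();
    }
    return pblocktemplate;  // never freed at shutdown, hence "still reachable"
}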

There are a few other similar loss records (mostly referring to pblock or 
pblocktemplate in CreateNewBlock(...)), but I see nothing that can explain the 
multi-GB memory consumption.

3. One user on the bitcointalk p2pool thread 
(https://bitcointalk.org/index.php?topic=18313.msg12733791#msg12733791) claimed 
that he had this memory usage issue on Linux, but not on Mac OS X, under a GBT 
workload in both situations. If this is true, that would suggest this might be 
a fragmentation issue due to poor memory allocation. The other likely 
hypothesis is bloated caches. Looking into those two possibilities will be my 
next steps.



On Oct 20, 2015, at 5:39 AM, Jonathan Toomim <j...@toom.im> wrote:

> I did that twice on Sunday. I'll report the results soon. Short version is 
> that it looks like valgrind is just finding 200 kB to 600 kB of pblocktemplate, 
> which is declared as a static pointer. Not exactly the multi-GB leak I'm 
> looking for, but possibly related.
> 
> I've also got two bitcoind processes running on the same machine that I 
> started at the same time, running on different ports, all with the same 
> settings, but one of which is serving getblocktemplate every 5-6 seconds and 
> the other is not, while logging RSS on both every 6 seconds. RSS for the 
> non-serving node is now 734 MB, and for the serving node 1997 MB. Graphs 
> coming soon.
> 
> 
> On Oct 20, 2015, at 3:12 AM, Mike Hearn <he...@vinumeris.com> wrote:
> 
>> OK, then running under Valgrind whilst sending gbt RPCs would be the next 
>> step.
>> 
>> On Mon, Oct 19, 2015 at 9:17 PM, Multipool Admin <ad...@multipool.us> wrote:
>> My nodes are continuously running getblocktemplate and getinfo, and I also 
>> suspected the issue is in either gbt or the rpc server.
>> 
>> The instance only takes a few hours to get up to that memory usage.

[bitcoin-dev] Can kick

2015-12-08 Thread Jonathan Toomim via bitcoin-dev
I am leaning towards supporting a can kick proposal. Features I think are 
desirable for this can kick:

0. Block size limit around 2 to 4 MB. Maybe 3 MB? Based on my testnet data, I 
think 3 MB should be pretty safe.
1. Hard fork with a consensus mechanism similar to BIP101
2. Approximately 1 or 2 months of delay before activation, to allow miners to 
upgrade their infrastructure.
3. Some form of validation cost metric. BIP101's validation cost metric would 
be the minimum strictness that I would support, but it would be nice if there 
were a good UTXO growth metric too. (I do not know enough about the different 
options to evaluate them right now.)

I will be working on a few improvements to block propagation (especially from 
China), like blocktorrent and stratum-based GFW penetration, and hope to have 
them working within a few months. Depending on how those efforts and others 
(e.g. IBLTs) go, we can look at increasing the block size further, and possibly 
enacting a long-term scaling roadmap like BIP101.




Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-08 Thread Jonathan Toomim via bitcoin-dev

On Dec 9, 2015, at 8:09 AM, Gregory Maxwell  wrote:

> On Tue, Dec 8, 2015 at 11:48 PM, Jonathan Toomim  wrote:
> 
> By contrast it does not reduce the safety factor for the UTXO set at
> all; which most hold as a much greater concern in general;

I don't agree that "most" hold UTXO as a much greater concern in general. I 
think that it's a concern that has been addressed less, which means it is a more 
unsolved concern. But it is not currently a bottleneck on block size. Miners can 
afford far more than 1 GB of RAM, and non-mining full nodes don't need to store 
the UTXO set in memory. I think that at the moment, block propagation time is 
the bottleneck, not UTXO size. It confuses me that SegWit is being pushed as a 
short-term fix to the capacity issue when it does not address the short-term 
bottleneck at all.

> and that
> isn't something you can say for a block size increase.

True.

I'd really like to see a grand unified cost metric that includes UTXO 
expansion. In the meantime, I think miners can use a bit more RAM.

> With respect to witness safety factor; it's only needed in the case of
> strategic or malicious behavior by miners-- both concerns which
> several people promoting large block size increases have not only
> disregarded but portrayed as unrealistic fear-mongering. Are you
> concerned about it?

Some. Much less than Peter Todd, for example, but when other people see 
something as a concern that I don't, I try to pay attention to it. I expect 
Peter wouldn't like the safety factor issue, and I'm surprised he didn't bring 
it up.

Even if I didn't care about adversarial conditions, it would still interest me 
to pay attention to the safety factor for political reasons, as it would make 
subsequent blocksize increases much more difficult. Conspiracy theorists might 
have a field day with that one...

> In any case-- the other improvements described in
> my post give me reason to believe that risks created by that
> possibility will be addressable.

I'll take a look and try to see which of the worst-case concerns can and cannot 
be addressed by those improvements.




Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-08 Thread Jonathan Toomim via bitcoin-dev
On Dec 8, 2015, at 6:02 AM, Gregory Maxwell via bitcoin-dev 
 wrote:

> The particular proposal amounts to a 4MB blocksize increase at worst.

I understood that SegWit would allow about 1.75 MB of data in the average case 
while also allowing up to 4 MB of data in the worst case. This means that the 
mining and block distribution network would need a larger safety factor to deal 
with worst-case situations, right? If you want to make sure that nothing goes 
wrong when everything is at its worst, you need to size your network pipes to 
handle 4 MB in a timely (DoS-resistant) fashion, but you'd normally only be 
able to use 1.75 MB of it. It seems to me that it would be safer to use a 3 MB 
limit, and that way you'd also be able to use 3 MB of actual transactions.
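
To make the accounting explicit (my paraphrase, treating the exact 75% witness 
discount as an assumption about the proposal, with $w$ the fraction of block 
bytes that are witness data):

\[ \text{base} + \tfrac{1}{4}\,\text{witness} \le 1~\text{MB} \quad\Rightarrow\quad T_{\max} = \frac{1~\text{MB}}{1 - \tfrac{3}{4}w} \]

A typical transaction mix around $w \approx 0.57$ gives $T \approx 1.75$ MB, 
while an adversarial, nearly all-witness block ($w \to 1$) approaches 4 MB.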

As an accounting trick to bypass the 1 MB limit, SegWit sounds like it might 
make things less well accounted for.





Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-08 Thread Jonathan Toomim via bitcoin-dev

On Dec 9, 2015, at 7:50 AM, Jorge Timón  wrote:

> I don't understand. SPV nodes won't think they are validating transactions 
> with the new version unless they adapt to the new format. They will be simply 
> unable to receive payments using the new format if it is a softfork (although 
> as said I agree with making it a hardfork on the simpler design and smaller 
> fraud proofs grounds alone).
> 
Okay, I might just not understand how a segwit payment would look to current 
software yet. I'll add learning about that to my to-do list...




Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-08 Thread Jonathan Toomim via bitcoin-dev
Agree. This data does not belong in the coinbase. That space is for miners to 
use, not devs.

I also think that a hard fork is better for SegWit, as it reduces the size of 
fraud proofs considerably, makes the whole design more elegant and less 
kludgey, and is safer for clients who do not upgrade in a timely fashion. I 
don't like the idea that SegWit would invalidate the security assumptions of 
non-upgraded clients (including SPV wallets). I think that for these clients, 
no data is better than invalid data. Better to force them to upgrade by cutting 
them off the network than to let them think they're validating transactions 
when they're not.


On Dec 8, 2015, at 11:55 PM, Justus Ranvier via bitcoin-dev 
 wrote:

> If such a change is going to be deployed via a soft fork instead of a
> hard fork, then the coinbase is the worst place to put the segwitness
> merkle root.
> 
> Instead, put it in the first output of the generation transaction as an
> OP_RETURN script.
> 
> This is a better pattern because coinbase space is limited while output
> space is not. The next time there's a good reason to tie another merkle
> tree to a block, that proposal can be designated for the second output
> of the generation transaction.





Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-08 Thread Jonathan Toomim via bitcoin-dev

On Dec 9, 2015, at 7:48 AM, Luke Dashjr  wrote:

> How about we pursue the SegWit softfork, and at the same time* work on a
> hardfork which will simplify the proofs and reduce the kludgeyness of merge-
> mining in general? Then, if the hardfork is ready before the softfork, they
> can both go together, but if not, we aren't stuck delaying the improvements of
> SegWit until the hardfork is completed.

So all our code that parses the blockchain would need to be able to find the 
segwit data in both places? That doesn't really sound like an improvement to 
me. Why not just do it as a hard fork? They're really not that hard to do.




Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-14 Thread Jonathan Toomim via bitcoin-dev
This means that a server supporting SW might only hear of the tx data and not 
get the signature data for some transactions, depending on how the relay rules 
worked (e.g. if the SW peers had higher minrelaytxfee settings than the legacy 
peers). This would complicate fast block relay code like IBLTs, since we now 
have to check to see that the recipient has both the tx data and the 
witness/sig data.

The same issue might happen with block relay if we do SW as a soft fork. A SW 
node might see a block inv from a legacy node first, and might start 
downloading the block from that node. This block would then be marked as 
in-flight, and the witness data might not get downloaded. This shouldn't be too 
hard to fix by creating an inv for the witness data as a separate object, so 
that a node could download the block from e.g. Peer 1 and the segwit data from 
Peer 2.
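
Concretely, the separate inv could be as small as one extra flag bit on the 
existing types (purely illustrative encoding; I have not checked how sipa's 
code handles this):

// Hypothetical inv-type extension, illustration only: advertise witness
// data separately so a node can fetch the block from one peer and the
// witness blob from another.
enum GetDataMsg {
    MSG_TX            = 1,
    MSG_BLOCK         = 2,
    MSG_WITNESS_BLOCK = 2 | (1 << 30),  // assumed witness flag bit
};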

Of course, the code would be simpler if we did this as a hard fork and we could 
rely on everyone on the segwit fork supporting the segwit data. Although maybe 
we want to write the interfaces in a way that supports some nodes not 
downloading the segwit data anyway, just because not every node will want that 
data.

I haven't had time to read sipa's code yet. I apologize for talking out of a 
position of ignorance. For anyone who has, do you feel like sharing how it 
deals with these network relay issues?

By the way, since this thread is really about SegWit and not about any other 
mechanism for increasing Bitcoin capacity, perhaps we should rename it 
accordingly?


On Dec 12, 2015, at 11:18 PM, Mark Friedenbach via bitcoin-dev 
 wrote:

> A segwit supporting server would be required to support relaying segwit 
> transactions, although a non-segwit server could at least inform a wallet of 
> segwit txns observed, even if it doesn't relay all information necessary to 
> validate.
> 
> Non segwit servers and wallets would continue operations as if nothing had 
> occurred.
> 
> If this means essentially that a soft fork deployment of SegWit will require 
> SPV wallet servers to change their logic (or risk not being able to send 
> payments) then it does seem to me that a hard fork to deploy this non 
> controversial change is not only cleaner (on the data structure side) but 
> safer in terms of the potential to affect the user experience.
> 
> 
> — Regards,





Re: [bitcoin-dev] Segregated Witness features wish list

2015-12-14 Thread Jonathan Toomim via bitcoin-dev
1. I think we should limit the sum of the block and witness data to 
nBlockMaxSize*7/4 per block, for a maximum of 1.75 MB total. I don't like the 
idea that SegWit would give us 1.75 MB of capacity in the typical case while 
requiring hardware capable of handling 4 MB in adversarial conditions (i.e. 
intentional multisig). I think a limit on the segwit size allays that concern; 
a sketch of the check follows after point 2 below.

2. I think that segwit is a substantial change to how Bitcoin works, and I very 
strongly believe that we should not rush this. It changes the block structure, 
it changes the transaction structure, it changes the network protocol, it 
changes SPV wallet software, it changes block explorers, and it has changes 
that affect most other parts of the Bitcoin ecosystem. After we decide to 
implement it, and have a final version of the code that will be merged, we 
should give developers of other Bitcoin software time to implement code that 
supports the new transaction/witness formats.

When you guys say "as soon as possible," what do you mean exactly?
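
(Returning to point 1: the consensus check I have in mind is no more than the 
following sketch. CheckTotalBlockSize is a hypothetical name for my proposal, 
not merged code.)

// Reject blocks whose base-plus-witness bytes exceed 7/4 of the base
// block size limit, i.e. 1.75 MB when nBlockMaxSize is 1,000,000 bytes.
bool CheckTotalBlockSize(size_t nBaseSize, size_t nWitnessSize,
                         size_t nBlockMaxSize = 1000000)
{
    return nBaseSize + nWitnessSize <= nBlockMaxSize * 7 / 4;
}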

On Dec 10, 2015, at 2:47 PM, jl2012--- via bitcoin-dev 
 wrote:

> It seems the current consensus is to implement Segregated Witness. SW opens 
> many new possibilities but we need a balance between new features and 
> deployment time frame. I'm listing by my priority:
> 
> 1-2 are about scalability and have highest priority
> 
> 1. Witness size limit: with SW we should allow a bigger overall block size. 
> It seems 2MB is considered to be safe for many people. However, the exact 
> size and growth of block size should be determined based on testing and 
> reasonable projection.
> 
> 2. Deployment time frame: I prefer as soon as possible, even if none of the 
> following new features are implemented. This is not only a technical issue 
> but also a response to the community which has been waiting for a scaling 
> solution for years
> 





Re: [bitcoin-dev] Segregated Witness features wish list

2015-12-14 Thread Jonathan Toomim via bitcoin-dev
Off-topic: If you want to decentralize hashing, the best solution is probably 
to redesign p2pool to use DAGs. p2pool would be great except for the fact that 
the 30 sec share times are (a) long enough to cause significant reward variance 
for miners, but (b) short enough to cause hashrate loss from frequent switching 
on hardware that wasn't designed for it (e.g. Antminers, KNC) and (c) uneven 
rewards to different miners due to share orphan rates. DAGs can fix all of 
those issues. I had a talk with some medium-sized Chinese miners on Thursday in 
which I told them about p2pool, and I got the impression that they would prefer 
it over their existing pools due to the 0% fees and trustless design if the 
performance issues were fixed. If anybody is interested in helping with this 
work, ping me or Bob McElrath backchannel to be included in our conversation.


On Dec 14, 2015, at 8:32 PM, Adam Back  wrote:

> The other thing which is not protocol related, is that companies can
> help themselves and help Bitcoin developers help them, by working to
> improve decentralisation with better configurations, more use of
> self-hosted and secured full nodes, and decentralisation of policy
> control over hashrate.  That might even include buying a nominal (to a
> reasonably funded startup) amount of mining equipment.  Or for power
> users to do more of that.  Some developers are doing mining.
> Blockstream and some employees have a little bit of hashrate.  If we
> could define some metrics and best practices and measure the
> improvements, that would maybe reduce miners concerns about
> centralisation risk and allow a bigger block faster, alongside the
> IBLT & weak block network protocol improvements.





Re: [bitcoin-dev] Increasing the blocksize as a (generalized) softfork.

2015-12-30 Thread Jonathan Toomim via bitcoin-dev

On Dec 30, 2015, at 3:49 PM, Jonathan Toomim  wrote:

> Since we've been relying on the trustworthiness of miners during soft forks 
> in the past (and it only failed us once!), why not

make it explicit?

(Sorry for the premature send.)




Re: [bitcoin-dev] Increasing the blocksize as a (generalized) softfork.

2015-12-30 Thread Jonathan Toomim via bitcoin-dev
On Dec 30, 2015, at 11:00 AM, Bob McElrath via bitcoin-dev 
 wrote:

> joe2015--- via bitcoin-dev [bitcoin-dev@lists.linuxfoundation.org] wrote:
>> That's the whole point.  After a conventional hardfork everyone
>> needs to upgrade, but there is no way to force users to upgrade.  A
>> user who is simply unaware of the fork, or disagrees with the fork,
>> uses the old client and the currency splits.
>> 
>> Under this proposal old clients effectively enter "zombie" mode,
>> forcing users to upgrade.
> 
> This is a very complex way to enter zombie mode.


Another way you could make non-upgraded nodes enter zombie mode is to 
explicitly 51% attack the minority fork.

All soft forks are controlled, coordinated, developer-sanctioned 51% attacks 
against nodes that do not upgrade. The generalized softfork technique is a 
method of performing a soft fork that completely eliminates any usefulness to 
non-upgraded nodes while merge-mining another block structure to provide 
functionality to the nodes who have upgraded and know where to look for the new 
data.

Soft forks are "safe" forks because you can trust the miners to censor blocks 
and transactions that do not conform to the new consensus rules. Since we've 
been relying on the trustworthiness of miners during soft forks in the past 
(and it only failed us once!), why not

The generalized softfork method has the advantage of being merge-mined, so 
miners don't have to lose any revenue while performing this 51% attack against 
non-upgraded nodes. But then you're stuck with all of your transactions in a 
merge-mined/commitment-based data structure, which is a bit awkward and ugly. 
But you could avoid all of that code ugliness by just convincing the miners to 
donate some hashrate (say, 5.1% if the IsSupermajority threshold is 95%, or you 
could make it dynamic to save some money) to ensuring that the minority fork 
never has any transactions in the chain. That way, you can replace the 
everlasting code ugliness with a little bit of temporary sociopolitical 
ugliness. Fortunately, angry people are easier to ignore than ugly code. /s

Maybe we could call this a softly enforced hard fork? It's basically a combined 
hard fork for the supermajority and a soft fork to make the minority chain 
useless.

I don't personally think that these 51% attacks are useful or necessary. This 
is one of the main reasons why I don't like soft forks. I find them 
distasteful, and think that leaving minorities free to practice their own 
religions and blockchain rules is a good thing. But I could see how this could 
address some of the objections that others have raised about the dangers of 
hardforks, so I'm putting it out there.

> Once a chain is seen to be 6 or more blocks ahead of my chain tip, we should
> enter "zombie mode" and refuse to mine or relay

I like this method. However, it does have the problem of being voluntary. If 
nodes don't upgrade to a version that has the latent zombie gene long before a 
fork, then it does nothing.






Re: [bitcoin-dev] We need to fix the block withholding attack

2015-12-25 Thread Jonathan Toomim via bitcoin-dev
On Dec 25, 2015, at 3:15 AM, Ittay via bitcoin-dev 
 wrote:

> Treating the pool block withholding attack as a weapon has bad connotations, 
> and I don't think anyone directly condones such an attack.

I directly condone the use of block withholding attacks whenever pools get 
large enough to perform selfish mining attacks. Selfish mining and large, 
centralized pools also have bad connotations.

It's an attack against pools, not just large pools. Solo miners are immune. As 
such, the presence or use of block withholding attacks makes Bitcoin more 
similar to Satoshi's original vision. One of the issues with mining 
centralization via pools is that miners have a direct financial incentive to 
stay relatively small, but pools do not. Investing in mining is a zero-sum 
game, where each miner gains revenue by making investments at the expense of 
existing miners. This also means that miners take revenue from themselves when 
they upgrade their hashrate. If a miner already has 1/5 of the network 
hashrate, then the marginal revenue for that miner of adding 1 TH/s is only 4/5 
of the marginal revenue for a miner with 0% of the network and who adds 1 TH/s. 
The bigger you get, the smaller your incentive to get bigger.
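
To spell out that arithmetic: if a miner already controls a share $s$ of the 
total network hashrate $H$, its revenue share after adding $\delta$ hashes per 
second is $(sH+\delta)/(H+\delta)$, so the marginal value of new hashrate is

\[ \frac{d}{d\delta}\,\frac{sH+\delta}{H+\delta}\bigg|_{\delta=0} = \frac{1-s}{H} \]

For $s = 1/5$ this is $(4/5)/H$: exactly 4/5 of the $1/H$ that a miner with 
$s = 0$ earns on its first marginal hash.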

This incentive applies to miners, but it does not apply to pools. Pools have an 
incentive to get as big as possible (except for social backlash and altruistic 
punishment issues). Pools are the problem. I think we should be looking for 
ways of making pooled mining less profitable than solo mining or p2pool-style 
mining. Block withholding attacks are one such tool, and maybe the only usable 
tool we'll get. If we have to choose between making bitcoin viable long-term 
and avoiding things with bad connotations, it might be better to let our hands 
get a little bit dirty.

I don't intend to perform any such attacks myself. I like to keep my hat a nice 
shiny white. However, if anyone else were to perform such an attack, I would 
condone it.

P.S.: Sorry, pool operators. I have nothing against you personally. I just 
think pools are dangerous, and I wish they didn't exist.




Re: [bitcoin-dev] We need to fix the block withholding attack

2015-12-26 Thread Jonathan Toomim via bitcoin-dev
Another option for how to deal with block withholding attacks: Give the miner 
who finds the block a bonus. This could even be part of the coinbase 
transaction.

Block withholding is effective because it costs the attacker 0% and costs the 
pool 100%. If the pool's coinbase tx was 95% to the pool, 5% (1.25 BTC) to the 
miner, that would make block withholding attacks much more expensive to the 
attacker without making a huge impact on reward variance for small miners. If 
your pool gets attacked by a block withholding attack, then you can respond by 
jacking up the bonus ratio. At some point, block withholding attacks become 
unfeasibly expensive to perform. This can work because the pool sacrifices a 
small amount of variance for its customers by increasing the bonus, but the 
block attacker sacrifices revenue. This should make the attacker give up before 
the pool does.
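
A toy expected-cost model of that tradeoff (my numbers, not a published 
analysis):

#include <cstdio>

// A pool pays fraction `bonus` of each block reward to the block's
// finder. A withholding attacker discards full solutions, forfeiting
// that bonus, while the pool forfeits the rest of the reward.
int main() {
    const double reward = 25.0;  // BTC subsidy per block at the time
    const double bonuses[] = {0.005, 0.05, 0.20};
    for (double bonus : bonuses) {
        double attackerLoss = bonus * reward;          // forfeited finder bonus
        double poolLoss     = (1.0 - bonus) * reward;  // lost block reward
        std::printf("bonus %4.1f%%: attacker loses %5.2f BTC/block, "
                    "pool loses %5.2f BTC/block\n",
                    bonus * 100, attackerLoss, poolLoss);
    }
    return 0;
}

Raising the bonus shifts the pain toward the attacker while only mildly 
increasing variance for honest small miners.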

This system already exists in p2pool, although there the reward bonus for the 
block's finder is only 0.5%.

This must have been proposed before, right? Anyone know of a good analysis of 
the game theory math?




Re: [bitcoin-dev] Block size: It's economics & user preparation & moral hazard

2015-12-26 Thread Jonathan Toomim via bitcoin-dev
On Dec 26, 2015, at 8:44 AM, Pieter Wuille via bitcoin-dev 
 wrote:
> Furthermore, 75% is pretty terrible as a switchover point, as it guarantees 
> that old nodes will still see a 25% forked off chain temporarily.
> 
Yes, 75% plus a grace period is better. I prefer a grace period of about 4000 
to 8000 blocks (1 to 2 months).

From my discussions with miners, I think we will be able to create a hardfork 
proposal that reaches about 90% support among miners, or possibly higher. I'll 
post a summary of those discussions in the next 24 hours.




[bitcoin-dev] Consensus census

2015-12-27 Thread Jonathan Toomim via bitcoin-dev
I traveled around in China for a couple weeks after Hong Kong to visit with 
miners and confer on the blocksize increase and block propagation issues. I 
performed an informal survey of a few of the blocksize increase proposals that 
I thought would be likely to have widespread support. The results of the 
version 1.0 census are below.

My brother is working on a website for a version 2.0 census. You can view the 
beta version of it and participate in it at https://bitcoin.consider.it. If you 
have any requests for changes to the format, please CC him at m...@toom.im.

https://docs.google.com/spreadsheets/d/1Cg9Qo9Vl5PdJYD4EiHnIGMV3G48pWmcWI3NFoKKfIzU/edit#gid=0

Or a snapshot for those behind the GFW without a VPN:
http://toom.im/files/consensus_census.pdf

The survey table follows:

Miner        Hashrate  BIP103    2MB now     2MB now,    2-4-8        3MB now     3MB now,      BIP101
                                 (BIP102)    4MB in 2yr  (Adam Back)              10MB in 3yr
F2Pool       22%       N/A       Acceptable  Acceptable  Preferred    Acceptable  Acceptable    Too fast
AntPool      23%       Too slow  Acceptable  Acceptable  Acceptable   N/A         N/A           Too fast
Bitfury      18%       N/A       Acceptable  Probably/   Maybe        N/A         Probably      Too fast
                                             maybe                                too fast
BTCC Pool    11%       N/A       Acceptable  Acceptable  Acceptable   Acceptable  Acceptable,   N/A
                                                                                  I think
KnCMiner     7%        N/A       Probably?   Probably?   "We like     Probably?   N/A           N/A
                                                         2-4-8"
BW.com       7%        N/A       N/A         N/A         N/A          N/A         N/A           N/A
Slush        4%        N/A       N/A         N/A         N/A          N/A         N/A           N/A
21 Inc.      3%        N/A       N/A         N/A         N/A          N/A         N/A           N/A
Eligius      1%        N/A       N/A         N/A         N/A          N/A         N/A           N/A
BitClub      1%        N/A       N/A         N/A         N/A          N/A         N/A           N/A
GHash.io     1%        N/A       N/A         N/A         N/A          N/A         N/A           N/A
Misc         2%        N/A       N/A         N/A         N/A          N/A         N/A           N/A

Certainly in favor               74%         56%         63%          33%         22%
Possibly in favor                81%         81%         81%          40%         33%           0%
Total votes counted              81%         81%         81%          40%         51%           63%
F2Pool: Blocksize increase could be phased in at block 400,000. No 
floating-point math. No timestamp-based forking (block height is okay). 
Conversation was with Wang Chun via IRC.
AntPool/Bitmain: We should get miners and devs together for few rounds of 
voting to decide which plan to implement. (My brother is working on a tool 
which may be useful for this. More info soon.) The blocksize increase should be 
merged into Bitcoin Core, and should not be implemented in an alternate client 
like BitcoinXT. A timeline of about 3 months for the fork was discussed, though 
I don't know if that was acceptable or preferable to Bitmain. Conversation was 
mostly with Micree Zhan and Kevin Pan at the Bitmain HQ. Jihan Wu was absent.
Bitfury: We should fix performance issues in bitcoind before 4 MB, and we MUST 
fix performance issues before 8 MB. A plan that includes 8 MB blocks in the 
future and assumes the performance fixes will be implemented might be 
acceptable to us, but we'll have to evaluate it more before coming to a 
conclusion. 2-4-8 "is like parachute basejumping - if you jump, and was unable 
to fix parachute during the 90sec drop - you will be 100% dead. plan A) 
[multiple hard forks] more safe." Conversation was with Alex Petrov at the 
conference and via email.
KnC: I only had short conversations with Sam Cole, but from what I can tell, 
they would be okay with just about anything reasonable.
BTCC: It would be much better to have the support of Core, but if Core doesn't 
include a blocksize increase soon in the master branch, we may be willing to 
start running a fork. Conversation was with Samson Mow and a few others at BTCC 
HQ.
The conversations I had with all of these entities were of an informal, 
non-binding nature. Positions are subject to change. BIP100 was not included in 
my talks because (a) coinbase voting already covers it pretty well, and (b) it 
is more complicated than the other proposals and currently does not seem likely 
to be implemented. I generally did not bring up SegWit during the conversations 
I had with miners, and neither did the miners, so it is also absent. (I thought 
that it was too early for miners to have an informed opinion of SegWit's 
relative merits.) I have not had any contact with BW.com or any of the smaller 
entities. Questions can be directed to j...@toom.im.





Re: [bitcoin-dev] An implementation of BIP102 as a softfork.

2015-12-30 Thread Jonathan Toomim via bitcoin-dev
As a first impression, I think this proposal is intellectually interesting, but 
crufty and hackish and should never actually be deployed. Writing code for 
Bitcoin in a future in which we have deployed a few generalized softforks this 
way sounds terrifying.

Instead of this:

CTransaction GetTransaction(CBlock block, unsigned int index) {
    return block->vtx[index];
}

We might have this:

CTransaction GetTransaction(CBlock block, unsigned int index) {
    if (!IsBIP102sBlock(block)) {
        return block->vtx[index];
    } else {
        if (!IsOtherGeneralizedSoftforkBlock(block)) {
            // hooray! only one generalized softfork level to deal with!
            return LookupBlock(GetGSHashFromCoinbase(block->vtx[0].vin[0].scriptSig))->vtx[index];
        } else {
            // I'm too lazy to write pseudocode this complicated just to argue a point
            throw NotImplementedError;
        }
    }
}

It might be possible to make that a bit simpler with recursion, or by doing 
subsequent generalized softforks in a way that doesn't have multi-levels-deep 
block-within-a-block-within-a-block stuff. Still: ugh.




On Dec 29, 2015, at 9:46 PM, joe2015--- via bitcoin-dev 
 wrote:

> Below is a proof-of-concept implementation of BIP102 as a softfork:
> 
> https://github.com/ZoomT/bitcoin/tree/2015_2mb_blocksize
> https://github.com/jgarzik/bitcoin/compare/2015_2mb_blocksize...ZoomT:2015_2mb_blocksize?diff=split=2015_2mb_blocksize
> 
> BIP102 is normally a hardfork.  The softfork version (unofficial
> codename BIP102s) uses the idea described here:
> http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012073.html
> 
> The basic idea is that post-fork blocks are constructed in such a way
> they can be mapped to valid blocks under the pre-fork rules.  BIP102s
> is a softfork in the sense that post-fork miners are still creating a
> valid chain under the old rules, albeit indirectly.
> 
> From the POV of non-upgraded clients, BIP102s circumvents the
> block-size limit by moving transaction validation data "outside" of
> the block.  This is a similar trick used by Segregated Witness and
> Extension Blocks (both softfork proposals).
> 
> From the POV of upgraded clients, the block layout is unchanged,
> except:
> - A larger 2MB block-size limit (=BIP102);
> - The header Merkle root has a new (backwards compatible)
>  interpretation;
> - The coinbase encodes the Merkle root of the remaining txs.
> Aside from this, blocks maintain their original format, i.e. a block
> header followed by a vector of transactions.  This keeps the
> implementation simple, and is distinct from SW and EB.
> 
> Since BIP102s is a softfork it means that:
> - A miner majority (e.g. 75%, 95%) forces miner consensus (100%).  This
>  is not true for a hardfork.
> - Fraud risk is significantly reduced (6-conf unlikely depending on
>  activation threshold).
> This should address some of the concerns with deploying a block-size
> increase using a hardfork.
> 
> Notes:
> 
> - The same basic idea could be adapted to any of the other proposals
>  (BIP101, 2-4-8, BIP202, etc.).
> - I used Jeff Garzik's BIP102 implementation which is incomplete (?).
>  The activation logic is left unchanged.
> - I am not a Bitcoin dev so hopefully no embarrassing mistakes in my
>  code :-(
> 
> --joe
> 





Re: [bitcoin-dev] An implementation of BIP102 as a softfork.

2015-12-30 Thread Jonathan Toomim via bitcoin-dev

On Dec 30, 2015, at 6:19 AM, Peter Todd  wrote:

> Your fear is misplaced: it's trivial to avoid recursion with a bit of
> planning...

That makes some sense. I downgrade my emotions from "a future in which we have 
deployed a few generalized softforks this way sounds terrifying" to "the idea 
of a future in which we have deployed at least one generalized softfork this 
way gives me the heebie jeebies."




Re: [bitcoin-dev] On the security of softforks

2015-12-17 Thread Jonathan Toomim via bitcoin-dev

On Dec 18, 2015, at 10:30 AM, Pieter Wuille via bitcoin-dev 
 wrote:

> 1) The risk of an old full node wallet accepting a transaction that is
> invalid to the new rules.
> 
> The receiver wallet chooses what address/script to accept coins on.
> They'll upgrade to the new softfork rules before creating an address
> that depends on the softfork's features.
> 
> So, not a problem.


Mallory wants to defraud Bob with a 1 BTC payment for some beer. Bob runs the 
old rules. Bob creates a p2pkh address for Mallory to use. Mallory takes 1 BTC, 
and creates an invalid SegWit transaction that Bob cannot properly validate and 
that pays into one of Mallory's wallets. Mallory then immediately spends the 
unconfirmed transaction into Bob's address. Bob sees what appears to be a valid 
transaction chain which is not actually valid.

Clueless Carol is one of the 4.9% of miners who forgot to upgrade her mining 
node. Carol sees that Mallory included an enormous fee in his transactions, so 
Carol makes sure to include both transactions in her block.

Mallory gets free beer.

Anything I'm missing?




Re: [bitcoin-dev] We can trivially fix quadratic CHECKSIG with a simple soft-fork modifying just SignatureHash()

2015-12-29 Thread Jonathan Toomim via bitcoin-dev
I suggest we use short-circuit evaluation. If someone complains, we figure it 
out as we go, maybe depending on the nature of the complaint. If nobody 
complains, we get it done faster.

We're humans. We have the ability to respond to novel conditions without 
relying on predetermined rules and algorithms. I suggest we use that ability 
sometimes.

On Dec 29, 2015, at 4:55 AM, jl2012 <jl2...@xbt.hk> wrote:

> What if someone complains? We can't even tell whether a complaint is legit or 
> just trolling. That's why I think we need some general consensus rules which 
> is not written in code, but as a social contract. Breaking those rules would 
> be considered as a hardfork and is allowed only in exceptional situation.
> 
> Jonathan Toomim via bitcoin-dev wrote on 2015-12-29 07:42:
>> That sounds like a rather unlikely scenario. Unless you have a
>> specific reason to suspect that might be the case, I think we don't
>> need to worry about it too much. If we announce the intention to
>> perform such a soft fork a couple of months before the soft fork
>> becomes active, and if nobody complains about it destroying their
>> secret stash, then I think that's fair enough and we could proceed.
>> On Dec 28, 2015, at 11:47 PM, jl2012 via bitcoin-dev
>> <bitcoin-dev@lists.linuxfoundation.org> wrote:
>>> Do we need to consider that someone may have a timelocked big tx, with 
>>> private key lost?
>> ___
>> bitcoin-dev mailing list
>> bitcoin-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
> 





Re: [bitcoin-dev] further test results for : "Datastream Compression of Blocks and Tx's"

2015-11-28 Thread Jonathan Toomim via bitcoin-dev
It appears you're using the term "compression ratio" to mean "size reduction". 
A compression ratio is the ratio (compressed / uncompressed). A 1 kB file 
compressed with a 10% compression ratio would be 0.1 kB. It seems you're using 
(1 - compressed/uncompressed), meaning that the compressed file would be 0.9 kB.

On Nov 28, 2015, at 6:48 AM, Peter Tschipper via bitcoin-dev 
 wrote:

> The following shows the compression ratio achieved for various sizes of data.
> Zlib is the clear winner for compressibility, with LZOx-999 coming close but
> at a cost.
> 
> range      Zlib-1 cmp%  Zlib-6 cmp%  LZOx-1 cmp%  LZOx-999 cmp%
> 0-250b     12.44        12.86        10.79        14.34
> 250-500b   19.33        12.97        10.34        11.11





Re: [bitcoin-dev] BIP proposal: Increase block size limit to 2 megabytes

2016-02-07 Thread Jonathan Toomim via bitcoin-dev
On Feb 7, 2016, at 9:24 AM, jl2...@xbt.hk wrote:

> You are making a very naïve assumption that miners are just looking for 
> profit for the next second. Instead, they would try to optimize their short 
> term and long term ROI. It is also well known that some miners would mine at 
> a loss, even not for ideological reasons, if they believe that their action 
> is beneficial to the network and will provide long term ROI. It happened 
> after the last halving in 2012. Without any immediate price appreciation, the 
> hash rate decreased by less than 10%.
> 


In 2012, revenue dropped by about 50% instantaneously. That does not mean that 
profitability became negative.

The difficulty at the time of the halving was about 3M. The exchange rate was 
about $12. A common miner at the time was the Radeon 6970, which performed 
about 350 Mh/s on 200 W for about 1.75 Mh/J. A computer with 4 6970s would use 
about 1 kW of power, once AC/DC losses and CPU overhead are taken into account. 
This 1 kW rig would have earned about $0.22/kWh before the halving, and 
$0.11/kWh after the halving. Since it's not hard to find electricity cheaper 
than $0.11/kWh, the hashrate didn't drop much.
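
A back-of-envelope check of those figures (all constants taken from the text 
above; it lands within rounding of the $0.22/$0.11 quoted):

#include <cstdio>

int main() {
    const double hashrate   = 4 * 350e6;  // 4x Radeon 6970, hashes/sec
    const double difficulty = 3e6;        // mid-2012
    const double price      = 12.0;       // USD per BTC
    const double subsidy    = 50.0;       // BTC per block, pre-halving
    // Expected blocks per second = hashrate / (difficulty * 2^32)
    double blocksPerSec = hashrate / (difficulty * 4294967296.0);
    double usdPerHour   = blocksPerSec * 3600 * subsidy * price;
    // The rig draws ~1 kW, so USD per hour is also USD per kWh consumed.
    std::printf("pre-halving:  ~$%.2f/kWh\n", usdPerHour);        // ~0.23
    std::printf("post-halving: ~$%.2f/kWh\n", usdPerHour / 2.0);  // ~0.12
    return 0;
}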

It's a common misconception that the mining hashrate increases until an 
equilibrium is reached, and nobody is making a profit any longer. However, this 
is not true. The hashrate stops increasing when the expected operating profit 
over a reasonable time frame is no longer greater than the hardware cost, not 
when the operating profit approaches zero. For example, an S7 right now costs a 
little over $1000. If I don't expect to earn more than $1000 in operating 
profit over the next year or two with an S7, then I won't buy one.

Right now, an S7 earns about $190/month and costs about $60/month to operate, 
for a profit of $120/month. After the halving, revenue would drop to $95/month 
(or less, depending on difficulty and exchange rate), leaving profit at about 
$35/month. The $120/month profit is good enough motivation to buy hardware now, 
and the $35/month would be good enough motivation to keep running hardware 
after the halving.

I know in advance when the halvings are coming. There's going to be one in 
about 5 months, for example. I'm going to stop buying miners before the halving 
even if they're very profitable for a month because I don't want to be stuck 
with hardware that won't reach 100% return on investment (ROI).






Re: [bitcoin-dev] BIP proposal: Increase block size limit to 2 megabytes

2016-02-07 Thread Jonathan Toomim via bitcoin-dev

On Feb 6, 2016, at 9:21 PM, Jannes Faber via bitcoin-dev 
 wrote:

> They *must* be able to send their customers both coins as separate 
> withdrawals.
> 
Supporting the obsolete chain is unnecessary. Such support has not been offered 
in any cryptocurrency hard fork before, as far as I know. I do not see why it 
should start now.
> If not, that amounts to theft of their customers' funds.
> 
If they announce their planned behavior before the fork, I do not see any 
ethical or legal issues.




Re: [bitcoin-dev] BIP proposal: Increase block size limit to 2 megabytes

2016-02-07 Thread Jonathan Toomim via bitcoin-dev

On Feb 7, 2016, at 7:19 AM, Anthony Towns via bitcoin-dev 
 wrote:

> The stated reasoning for 75% versus 95% is "because it gives "veto power"
> to a single big solo miner or mining pool". But if a 20% miner wants to
> "veto" the upgrade, with a 75% threshold, they could instead simply use
> their hashpower to vote for an upgrade, but then not mine anything on
> the new chain. At that point there'd be as little as 55% mining the new
> 2MB chain with 45% of hashpower remaining on the old chain. That'd be 18
> minute blocks versus 22 minute blocks, which doesn't seem like much of
> a difference in practice, and at that point hashpower could plausibly
> end up switching almost entirely back to the original consensus rules
> prior to the grace period ending.


Keep in mind that within a single difficulty adjustment period, the difficulty 
of mining a block on either chain will be identical. Even if the value of a 1MB 
branch coin is $100 and the hashrate on the 1 MB branch is 100 PH/s, and the 
value of a 2 MB branch coin is $101 and the hashrate on the 2 MB branch is 1000 
PH/s, the rational thing for a miner to do (for the first adjustment period) is 
to mine on the 2 MB branch, because the miner would earn 1% more on that branch.
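
The identity behind this (standard expected-value accounting, my notation): 
within a retarget period, a miner's expected revenue per hash is

\[ \mathbb{E}[\text{revenue per hash}] = \frac{P \cdot R}{D \cdot 2^{32}} \]

where $P$ is the coin price, $R$ the block reward, and $D$ the difficulty. With 
$D$ momentarily identical on both branches, only $P \cdot R$ matters, so the 
\$101 branch pays 1% more per hash regardless of the 100 vs. 1000 PH/s split.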

So you're assuming that 25% of the hashrate chooses to remain on the minority 
version during the grace period, and that 20% chooses to switch back to the 
minority side. The fork happens. One branch has 1 MB blocks every 22 minutes, 
and the other branch has 2 MB blocks every 18 minutes. The first branch cannot 
handle the pre-fork transaction volume, as it only has 45% of the capacity that 
it had pre-fork. The second one can, as it has 111% of the pre-fork capacity. 
This makes the 1 MB branch much less usable than the 2 MB branch, which in turn 
causes the market value of newly minted coins on that branch to fall, which in 
turn causes miners to switch to the more profitable 2MB branch. This 
exacerbates the usability difference, which exacerbates the price difference, 
etc. Having two competing chains with equal hashrate using the same PoW 
function and nearly equal features is not a stable state. Positive feedback 
loops exist to make the vast majority of the users and the hashrate join one 
side.

Basically, any miners who stick to the minority branch are going to lose a lot 
of money.




Re: [bitcoin-dev] INV overhead and batched INVs to reduce full node traffic

2016-02-25 Thread Jonathan Toomim via bitcoin-dev

> On Feb 25, 2016, at 9:56 PM, Gregory Maxwell  wrote:
> The batching was
> temporarily somewhat hobbled between 0.10 and 0.12 (especially when
> you had any abusive frequently pinging peers attached), but is now
> fully functional again and it now manages to batch many transactions
> per INV pretty effectively.

Thanks for the response. I've been mostly using and working on 0.11-series 
versions, which very rarely send out INV batches. In my examination, about 85% 
of the packets had a single hash in them. Nice to know this is one of the other 
improvements in 0.12.






[bitcoin-dev] INV overhead and batched INVs to reduce full node traffic

2016-02-25 Thread Jonathan Toomim via bitcoin-dev
The INV scheme used by Bitcoin is not very efficient at all. Once you take into 
account Bitcoin, TCP (including ACKs), IP, and ethernet overheads, each INV 
takes 193 bytes, according to wireshark. That's 127 bytes for the INV message 
and 66 bytes for the ACK. All of this is for 32 bytes of payload, for an 
"efficiency" of 16.5% (i.e. 83.5% overhead). For a 400 byte transaction with 20 
peers, this can result in 3860 bytes sent in INVs for only 400 bytes of actual 
data.

An improvement that I've been thinking about implementing (after Blocktorrent) 
is an option for batched INVs. Including the hashes for two txes per IP packet 
instead of one would increase the INV size to 229 bytes for 64 bytes of payload 
-- that is, you add 36 bytes to the packet for every 32 bytes of actual 
payload. This is a marginal efficiency of 88.8% for each hash after the first. 
This is *much* better.

Waiting a short period of time to accumulate several hashes together and send 
them as a batched INV could easily reduce the traffic of running bitcoin nodes 
by a factor of 2, and possibly even more than that. However, if too many nodes 
used it, such a technique would slow down the propagation of transactions 
across the bitcoin network slightly, which might make some people unhappy. The 
ill effects could likely be mitigated by choosing a different batch size for 
each peer based on that peer's preferences. Each node could choose one or two 
peers to which it sends INVs in batches of one or two, four more peers to 
which it sends batches of two to four, and the rest in batches of four to 
eight, for example.
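
A minimal sketch of that batching logic (hypothetical names; a real 
implementation would hook into the node's message-sending loop):

    import time

    class PeerInvQueue:
        def __init__(self, batch_size, max_delay):
            self.batch_size = batch_size  # flush after this many hashes...
            self.max_delay = max_delay    # ...or after this many seconds
            self.pending = []
            self.first_queued = None

        def announce(self, tx_hash):
            if not self.pending:
                self.first_queued = time.time()
            self.pending.append(tx_hash)

        def maybe_flush(self, send_inv):
            if self.pending and (len(self.pending) >= self.batch_size or
                    time.time() - self.first_queued >= self.max_delay):
                send_inv(self.pending)  # one INV message carrying all hashes
                self.pending = []

A node might construct one of these per peer, with batch_size=1 for a couple 
of fast-relay peers and batch_size=4..8 for the rest.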

(This is a continuation of a conversation started on 
https://bitcointalk.org/index.php?topic=1377345.)

Jonathan




Re: [bitcoin-dev] INV overhead and batched INVs to reduce full node traffic

2016-02-27 Thread Jonathan Toomim via bitcoin-dev
Well, here's another idea: we could shorten the tx hashes to about 4 to 6 bytes 
instead of 32.

Let's say we have a 1 GB mempool with 2M transactions in it. A 4-byte shorthash 
would have a 0.046% chance of resulting in a collision with another transaction 
in our mempool, assuming a random distribution of hash values.

Of course, an attacker might construct transactions specifically for 
collisions. To protect against that, we set up a different salt value for each 
connection, and for the INV message, we use a 4 to 6 byte salted hash instead 
of the full thing. In case a peer does have a collision with one salt value, 
there are still 7 other peers with different salt values. With 4-byte hashes, 
the probability that all 8 peers suffer a collision on the same transaction is 
about 2.2e-27. If we have 500,000 full nodes and 1M transactions per 10 
minutes, the chance is about 1.1e-15 that even one peer misses even one 
transaction.
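
The arithmetic behind those numbers, as a sketch (2M mempool transactions, 8 
peers with independently salted 4-byte hashes):

    mempool_txs = 2e6
    p_collide = mempool_txs / 2**32          # ~4.7e-4, i.e. ~0.046%
    p_all_peers_fail = p_collide**8          # ~2.2e-27
    nodes, txs_per_10min = 5e5, 1e6
    p_any_miss = p_all_peers_fail * nodes * txs_per_10min  # ~1.1e-15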

This strategy would come with about 12 bytes of additional memory overhead per 
peer per tx, or maybe a little more. In exchange for that 12 bytes per peer*tx, 
we would save up to 28 bytes per peer*tx of network bandwidth. In typical 
conditions (e.g. 100-ish MB mempool, 16 peers, 2 MB blocks, 500 B serialized tx 
size), that could result in 1.792 MB net traffic saved per block (7.7 GB/month) 
at the expense of 12 MB of RAM. Overall, this technique could reduce INV 
traffic by 5-8x in the asymptotic case, or maybe 2-3x in a realistic case.
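
The savings estimate works out as follows (same assumptions as above):

    txs_per_block = 2e6 / 500                  # 2 MB blocks, 500 B txs -> 4000
    saved_per_block = txs_per_block * 16 * 28  # 16 peers, 28 B/hash -> 1.792 MB
    blocks_per_month = 6 * 24 * 30             # 4320
    saved_per_month = saved_per_block * blocks_per_month  # ~7.7 GB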

I know short hashes like this have been proposed many times before for block 
propagation (e.g. by Gavin in his O(1) scaling gist, or in XTB). Has anyone 
else thought of using them like this in INV messages? Can anyone think of any 
major problems with the idea?




Re: [bitcoin-dev] BIP proposal: Inhibiting a covert attack on the Bitcoin POW function

2017-04-05 Thread Jonathan Toomim via bitcoin-dev
Just checking to see if I understand this optimization correctly. In order to 
find merkle roots in which the rightmost 32 bits are identical (i.e. partial 
hash collisions), we want to compute merkle root hashes as quickly as 
possible. The fastest way to do this is to take the top level of the merkle 
tree and collect a set of left branches and right branches that can be 
independently manipulated. While the left branch can easily be manipulated by 
changing the extranonce in the coinbase transaction, the right branch would 
need to be modified by changing one of the transactions in the right branch or 
by changing the number of transactions in the right branch. Correct so far?
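
To illustrate my understanding, here is a rough sketch of the combination step 
(not anyone's actual implementation; the subtree hashes are random 
placeholders just to make it runnable, and real grinding would use ~2^12 
candidates per side for ~2^24 candidate roots):

    import os
    from hashlib import sha256

    def sha256d(b):
        return sha256(sha256(b).digest()).digest()

    k = 2**8  # candidates per side (kept small here)
    lefts  = [os.urandom(32) for _ in range(k)]  # stand-in: extranonce grinding
    rights = [os.urandom(32) for _ in range(k)]  # stand-in: tx permutation

    seen = {}
    for left in lefts:
        for right in rights:
            root = sha256d(left + right)  # one hash per candidate root
            tail = root[-4:]              # the 32 bits being collided
            seen.setdefault(tail, []).append(root)

    collisions = [roots for roots in seen.values() if len(roots) >= 2]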

With the stratum mining protocol, the server (the pool) includes enough 
information for the coinbase transaction to be modified by stratum client (the 
miner), but it does not include any information about the right side of the 
merkle tree except for the top-level hash. Stratum also does not allow the 
client to supply any modifications to the merkle tree (including the right 
side) back to the stratum server. This means that any implementation of this 
final optimization would need to be using a protocol other than stratum, like 
getblocktemplate, correct?
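
For reference, under stratum the client rebuilds the merkle root from the 
coinbase txid plus a fixed merkle branch supplied in mining.notify, roughly 
like this (a sketch of the standard calculation):

    from functools import reduce
    from hashlib import sha256

    def sha256d(b):
        return sha256(sha256(b).digest()).digest()

    def stratum_merkle_root(coinbase_txid, merkle_branch):
        # Every element of merkle_branch is fixed by the pool, so the client
        # can only vary the leftmost leaf (the coinbase, via extranonce2);
        # the right side of the tree is entirely out of the client's hands.
        return reduce(lambda h, sibling: sha256d(h + sibling),
                      merkle_branch, coinbase_txid)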

I think it would be helpful for the discussion to know if this optimization 
were currently being used or not, and if so, how widely.

All of the consumer-grade hardware that I have seen defaults to stratum-only 
operation, and I have not seen or heard of any hardware available that can run 
more efficiently using getblocktemplate. As the current pool infrastructure 
uses stratum exclusively, this optimization would require significant retooling 
among pools, and probably a redesign of their core algorithms to help discover 
and share these partial collisions more frequently. It's possible that some 
large private farms have deployed a special system for solo mining that uses 
this optimization, of course, but it's also possible that there's a teapot in 
space somewhere between the orbits of Earth and Mars.

Do you know of any ways to perform this optimization via stratum? If not, do 
you have any evidence that this optimization is actually being used by private 
solo mining farms? Or is this discussion purely about preventing this 
optimization from being used in the future?

-jtoomim

> On Apr 5, 2017, at 2:37 PM, Gregory Maxwell via bitcoin-dev 
>  wrote:
> 
> An obvious way to generate different candidates is to grind the
> coinbase extra-nonce but for non-empty blocks each attempt will
> require 13 or so additional sha2 runs which is very inefficient.
> 
> This inefficiency can be avoided by computing a sqrt number of
> candidates of the left side of the hash tree (e.g. using extra
> nonce grinding) then an additional sqrt number of candidates of
> the right side of the tree using transaction permutation or
> substitution of a small number of transactions.  All combinations
> of the left and right side are then combined with only a single
> hashing operation virtually eliminating all tree related
> overhead.
> 
> With this final optimization finding a 4-way collision with a
> moderate amount of memory requires ~2^24 hashing operations
> instead of the >2^28 operations that would be required for
> extra-nonce grinding, which would substantially erode the
> benefit of the attack.





Re: [bitcoin-dev] BIP proposal: Inhibiting a covert attack on the Bitcoin POW function

2017-04-06 Thread Jonathan Toomim via bitcoin-dev
Ethically, this situation has some similarities to the DAO fork. We have an 
entity who closely examined the code, found an unintended characteristic of 
that code, and made use of that characteristic in order to gain tens of 
millions of dollars. Now that developers are aware of it, they want to modify 
the code in order to negate as much of the gains as possible.

There are differences, too, of course: the DAO attacker was explicitly 
malicious and stole Ether from others, whereas Bitmain is just optimizing their 
hardware better than anyone else and better than some of us think they should 
be allowed to.

In both cases, developers are proposing that the developers and a majority of 
users collude to reduce the wealth of a single entity by altering the 
blockchain rules.

In the case of the DAO fork, users were stealing back stolen funds, but that 
justification doesn't apply here. On the other hand, here we're talking about 
causing someone a loss by reducing the value of their hardware investments 
rather than forcibly taking back their coins, which is less direct and maybe 
more justifiable.

While I don't like patented mining algorithms, I also don't like the idea of 
playing Calvinball on the blockchain. Rule changes should not be employed as a 
means of disempowering and impoverishing particular entities without very good 
reason. Whether patenting a mining optimization qualifies as a good reason is 
questionable.

