Re: [Bitcoin-development] Fwd: Block Size Increase Requirements
WOW. Way to burn your biggest adopters, the people who put your transactions into the chain... what a douche.

From: Mike Hearn <m...@plan99.net>
Sent: 1/06/2015 8:15 PM
To: Alex Mizrahi <alex.mizr...@gmail.com>
Cc: Bitcoin Dev <bitcoin-development@lists.sourceforge.net>
Subject: Re: [Bitcoin-development] Fwd: Block Size Increase Requirements

Whilst it would be nice if miners in China could carry on forever regardless of their internet situation, nobody has any inherent right to mine if they can't do the job. If miners in China can't get the trivial amounts of bandwidth required through their firewall and end up being outcompeted, then OK, too bad; we'll have to carry on without them. But I'm not sure why it should be a big deal. They can always run a node on a server in Taiwan and connect the hardware to it via a VPN or so.

--
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development
Re: [Bitcoin-development] No Bitcoin For You
Nah, don't make blocks 20MB; then you are slowing down block propagation and blowing out confirmation times as a result. Just decrease the time it takes to make a 1MB block; then you still see the same propagation times as today and just increase the transaction throughput.

From: Jim Phillips <j...@ergophobia.org>
Sent: 26/05/2015 12:27 PM
To: Mike Hearn <m...@plan99.net>
Cc: Bitcoin Dev <bitcoin-development@lists.sourceforge.net>
Subject: Re: [Bitcoin-development] No Bitcoin For You

On Mon, May 25, 2015 at 1:36 PM, Mike Hearn <m...@plan99.net> wrote:

This meme about datacenter-sized nodes has to die. The Bitcoin wiki is down right now, but I showed years ago that you could keep up with VISA on a single well-specced server with today's technology. Only people living in a dreamworld think that Bitcoin might actually have to match that level of transaction demand with today's hardware. As noted previously, too many users is simply not a problem Bitcoin has and may never have!

... And will certainly NEVER have if we can't solve the capacity problem SOON. In a former life, I was a capacity planner for Bank of America's mid-range server group. We had one hard and fast rule: when you are typically exceeding 75% of capacity on a given metric, it's time to expand capacity. Period. You don't do silly things like adjusting the business model to disincentivize use. Unless there's some flaw in the system and it's leaking resources, if usage has increased to the point where you are at or near the limits of capacity, you expand capacity. It's as simple as that, and I've found that the same rule fits quite well in a number of systems. In Bitcoin, we're not leaking resources. There's no flaw. The system is performing as intended. Usage is increasing because it works so well, and there is huge potential for future growth as we identify more uses and attract more users.
There might be a few technical things we can do to reduce consumption, but the metric we're concerned with right now is how many transactions we can fit in a block. We've broken through the 75% marker and are regularly bumping up against the 100% limit. It is time to stop debating this and take action to expand capacity. The only questions that should remain are how much capacity we add, and how soon we can do it.

Given that most existing computer systems and networks can easily handle 20MB blocks every 10 minutes, and given that that will increase capacity 20-fold, I can't think of a single reason why we can't go to 20MB as soon as humanly possible. And in a few years, when the average block size is over 15MB, we bump it up again, to as high as we can go then without pushing typical computers or networks beyond their capacity. We can worry about ways to slow down growth without affecting the usefulness of Bitcoin as we get closer to the hard technical limits on our capacity.

And you know what else? If miners need higher fees to accommodate the costs of bigger blocks, they can configure their nodes to only mine transactions with higher fees. Let the miners decide how to charge enough to pay for their costs. We don't need to cripple the network just for them.

--
James G. Phillips IV
https://plus.google.com/u/0/113107039501292625391/posts

"Don't bunt. Aim out of the ball park. Aim for the company of immortals." -- David Ogilvy

This message was created with 100% recycled electrons. Please think twice before printing.
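For context, the knob Jim alludes to already existed: miners could tune block size and fee policy in bitcoin.conf. A sketch of the relevant options (names as of Bitcoin Core ~0.10, the era of this thread; values are illustrative examples, not recommendations):

```ini
# Miner policy options in bitcoin.conf (Bitcoin Core ~0.10 era; example values)
blockmaxsize=1000000      # largest block (bytes) this node will try to mine
blockminsize=0            # fill blocks with free transactions up to this size
blockprioritysize=50000   # bytes reserved for high-priority transactions
minrelaytxfee=0.00005     # fee rate (BTC/kB) below which transactions are not relayed
```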
Re: [Bitcoin-development] No Bitcoin For You
I wouldn't say it's the same trade-off, because you need the whole 20MB block before you can start to use it, whereas a 1MB block can be used sooner, and thus transactions are found in the block sooner. As for the higher rate of orphans, I think this would be complemented by a faster correction rate: if you're pumping out blocks at a rate of 1 per minute, and we get a fork where the next block comes in 10 minutes and is the decider, it took 10 minutes to determine which block is the orphan. But at a rate of 1 block per minute, it only takes about 1 minute to resolve the orphan (obviously this is very simplified), so I'm not so sure that orphan rate is a big issue here. Indeed you would need to draw upon more confirmations with easier block creation, but surely that is not an issue? And why would sync time be longer, as opposed to 20MB blocks?

From: gabe appleton <gapplet...@gmail.com>
Sent: 26/05/2015 12:41 PM
To: Thy Shizzle <thyshiz...@outlook.com>
Cc: Jim Phillips <j...@ergophobia.org>; Mike Hearn <m...@plan99.net>; Bitcoin Dev <bitcoin-development@lists.sourceforge.net>
Subject: Re: [Bitcoin-development] No Bitcoin For You

But don't you see the same trade-off in the end there? You're still propagating the same amount of data over the same amount of time, so unless I misunderstand, the costs of such a move should be approximately the same, just in different areas. The risks as I understand them are as follows:

20MB:
1. Longer per-block propagation (eventually)
2. Longer processing time (eventually)
3. Longer sync time

1 Minute:
1. Weaker individual confirmations (approx. equal per confirmation*time)
2. Higher orphan rate (immediately)
3. Longer sync time

That risk-set makes me want a middle-ground approach. Something where the immediate consequences aren't all that strong, and where we have some idea of what to do in the future. Is there any chance we can get decent network simulations at various configurations (5MB/4min, etc.)? Perhaps re-appropriate the testnet?
On Mon, May 25, 2015 at 10:30 PM, Thy Shizzle <thyshiz...@outlook.com> wrote:

Nah, don't make blocks 20MB; then you are slowing down block propagation and blowing out confirmation times as a result. Just decrease the time it takes to make a 1MB block; then you still see the same propagation times as today and just increase the transaction throughput.
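The simplified fork-resolution argument above can be checked with a toy Monte Carlo model (all parameters assumed, not from any real measurement): block arrivals form a Poisson process, and a two-way fork is resolved when the next block is found.

```python
import random

def mean_fork_resolution_s(block_interval_s: float, trials: int = 100_000,
                           seed: int = 1) -> float:
    """Toy model: a fork is resolved by the next block found; with Poisson
    block arrivals, that wait is exponential with mean = block interval."""
    rng = random.Random(seed)
    total = sum(rng.expovariate(1.0 / block_interval_s) for _ in range(trials))
    return total / trials

ten_min = mean_fork_resolution_s(600)   # ~600 s to resolve with 10-minute blocks
one_min = mean_fork_resolution_s(60)    # ~60 s with 1-minute blocks
print(ten_min / one_min)                # ratio is ~10, as the post argues
```

Of course this ignores that 1-minute blocks also fork roughly ten times more often; that higher orphan rate is exactly the other side of the trade-off gabe lists.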
Re: [Bitcoin-development] No Bitcoin For You
Indeed Jim, your internet connection makes a good case for why I don't like 20MB blocks (right now). It would take you well over a minute to download the block before you could even relay it on; that's a lot of slow-down in propagation! Yes, I do see how decreasing the time to create blocks is a bit of a band-aid fix, and to use the term I've seen mentioned here, "kicking the can down the road", I agree that this is doing that. However, as you say, bandwidth is our biggest enemy right now, and so hopefully by the time we exceed the capacity gained by the decrease in block time, we can then look to bump up the block size, because hopefully 20Mbps connections will be the baseline by then, etc.

From: Jim Phillips <j...@ergophobia.org>
Sent: 26/05/2015 12:53 PM
To: Thy Shizzle <thyshiz...@outlook.com>
Cc: Mike Hearn <m...@plan99.net>; Bitcoin Dev <bitcoin-development@lists.sourceforge.net>
Subject: Re: [Bitcoin-development] No Bitcoin For You

Frankly, I'm good with either way. I'm definitely in favor of faster confirmation times. The important thing is that we need to increase the number of transactions that get into blocks over a given time frame to a point that is in line with what current technology can handle. We can handle WAY more than we are doing right now. The Bitcoin network is not currently disk, CPU, or RAM bound. Not even close. The metric we're closest to being restricted by is network bandwidth.

I live in a developing country. 2Mbps is a typical broadband speed here (although 5Mbps and 10Mbps connections are affordable). That equates to about 15MB per minute, or roughly 150x the capacity I need to receive a full copy of the block chain if I only talk to one peer. If I relay to, say, 10 peers, I can still handle 15x larger block sizes on a slow 2Mbps connection. Also, even if we reduce the difficulty so that we're doing 1MB blocks every minute, that's still only 10MB every 10 minutes.
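A quick check of the arithmetic in the paragraph above (assumed figures: a 2 Mbps downlink and 1 MB of new block data per 10 minutes):

```python
LINK_MBPS = 2.0                                   # assumed consumer downlink
bytes_per_min = LINK_MBPS * 1_000_000 / 8 * 60    # 15,000,000 bytes per minute
block_bytes_per_min = 1_000_000 / 10              # one 1 MB block every 10 minutes

headroom = bytes_per_min / block_bytes_per_min
print(headroom)        # 150.0: receive-only headroom from a single peer
print(headroom / 10)   # 15.0: headroom left if also relaying to ~10 peers
```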
Eventually we're going to have to increase that, and we can only reduce the confirmation period so much. I think someone once said 30 seconds or so is about the shortest period you can practically achieve.

--
James G. Phillips IV
https://plus.google.com/u/0/113107039501292625391/posts
http://www.linkedin.com/in/ergophobe

"Don't bunt. Aim out of the ball park. Aim for the company of immortals." -- David Ogilvy

This message was created with 100% recycled electrons. Please think twice before printing.

On Mon, May 25, 2015 at 9:30 PM, Thy Shizzle <thyshiz...@outlook.com> wrote:

Nah, don't make blocks 20MB; then you are slowing down block propagation and blowing out confirmation times as a result. Just decrease the time it takes to make a 1MB block; then you still see the same propagation times as today and just increase the transaction throughput.
Re: [Bitcoin-development] Reducing the block rate instead of increasing the maximum block size
Yes! This! So many people seem hung up on growing the block size. If gaining a higher tps throughput is the main aim, I think this proposition to speed up block creation has merit. Yes, it will still lead to growth of the block chain, due to a 1MB block roughly every 1 minute instead of every ~10 minutes, but the change to the protocol is minor: you are only adding in a different difficulty rate starting from height "blah". No new features or anything are being added, so there seems to me much less of a security risk. Also, the impact of a hard fork should be minimal, because there is nothing but absolute incentive for miners to mine at the new, easier difficulty. I feel this deserves a great deal of consideration, as opposed to blowing out the block size through miners voting, etc.

From: Sergio Lerner <sergioler...@certimix.com>
Sent: 11/05/2015 5:05 PM
To: bitcoin-development@lists.sourceforge.net
Subject: [Bitcoin-development] Reducing the block rate instead of increasing the maximum block size

In this e-mail I'll do my best to argue that if you accept that increasing the transactions/second is a good direction to go, then increasing the maximum block size is not the best way to do it. I argue that the right direction to go is to decrease the block interval to 1 minute, while keeping the block size limit at 1 megabyte (or increasing it from a lower value such as 100 kilobytes and then having a step function). I'm backing up my claims with many hours of research simulating the Bitcoin network under different conditions [1]. I'll try to convince you by responding to each of the arguments I've heard against it.

Arguments against reducing the block interval:

1.
It will encourage centralization, because participants of mining pools will lose more money due to excessive initial block template latency, which leads to higher stale shares.

When a new block is solved, that information needs to propagate through the Bitcoin network up to the mining pool operator nodes; then a new block header candidate is created, and this header must be propagated to all the mining pool users, either by a push or a pull model. Generally the mining server pushes new work units to the individual miners. If done the other way around, the server would need to handle a high load of continuous work requests that would be difficult to distinguish from a DDoS attack. So if the server pushes new block header candidates to clients, then the problem boils down to increasing the bandwidth of the servers to achieve a tenfold increase in work distribution, or distributing the servers geographically to achieve lower latency. Propagating blocks does not require additional CPU resources, so mining pool administrators would need to moderately increase their investment in server infrastructure to achieve lower latency and higher bandwidth, but I guess the investment would be low.

2. It will increase the probability of a block-chain split.

The convergence of the network relies on the diminishing probability of two honest miners creating simultaneously competing block chains. For a chain competition to persist, competing blocks must be generated almost simultaneously (within a time window approximately bounded by the network's average block propagation delay). The probability of a block competition decreases exponentially with the number of blocks. In fact, the probability of a sustained competition over ten 1-minute blocks is one million times lower than the probability of a competition over one 10-minute block.
So even if the competition probability of six 1-minute blocks is higher than that of six 10-minute blocks, this does not imply that reducing the block interval increases this chance; on the contrary, it reduces it.

3. It will reduce the security of the network.

The security of the network is based on these facts: A) the miners are incentivized to extend the best chain; B) the probability of a reversal based on a long block competition decreases as more confirmation blocks are appended; C) renting or buying hardware to perform a 51% attack is costly.

A still holds. B holds for the same number of confirmation blocks, so 6 confirmation blocks in a 10-minute block chain are approximately equivalent to 6 confirmation blocks in a 1-minute block chain. Only C changes, as renting the hashing power for 6 minutes is ten times less expensive than renting it for 1 hour. However, there is no shop where one can rent 51% of the hashing power right now, nor will there probably ever be if Bitcoin succeeds. Lastly, you can still wait for a 1-hour confirmation (60 1-minute blocks) if you wish for high-value payments, so security decreases only if participants wish to decrease it.

4. Reducing the block propagation time in the average case is good, but what happens in the worst case?

Most methods proposed to reduce the block propagation delay do so only in the average case. Any kind of block compression
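Sergio's competition argument can be illustrated with a toy calculation (numbers assumed, not taken from his simulations): say two blocks "compete" when found within one average propagation delay of each other, so the per-block competition probability is roughly delay/interval, and a sustained k-block race has probability about p**k.

```python
delay_s = 6.0                # assumed average block propagation delay (seconds)

p_10min = delay_s / 600      # chance a single 10-minute block gets a competitor
p_1min = delay_s / 60        # chance a single 1-minute block gets a competitor

p_ten_1min = p_1min ** 10    # sustained competition over ten 1-minute blocks

print(p_1min > p_10min)      # True: short blocks compete more often per block...
print(p_ten_1min < p_10min)  # True: ...but a deep race is astronomically rarer
```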
Re: [Bitcoin-development] Solution for Block Size Increase
Nicolas, can you think of any problem with allowing blocks to be created faster instead of increasing the block size? So, say difficulty was reduced to allow block creation every 2 minutes instead of 10 minutes: can you think of any bad outcome from this? (I know this is different from what you are saying.) I'm thinking that if we allow 1MB blocks to be built faster, transactions are processed quicker, thus gaining a higher tps rate; I'd think no hard fork need occur, right? Are there any downsides that you can see? Obviously miners need to update, but if they don't, it just means they potentially take too long to make blocks and thus lose out on reward, so there is the incentive for them to update to the easier difficulty, while still allowing blocks done on the harder difficulty for backwards compatibility. Thoughts?

From: Nicolas DORIER <nicolas.dor...@gmail.com>
Sent: 8/05/2015 9:17 AM
To: bitcoin-development@lists.sourceforge.net
Subject: [Bitcoin-development] Solution for Block Size Increase

Executive summary: I explain the objectives we should aim for to reach agreement without drama or controversy, and to relieve the core devs of the central-banker role (as Jeff Garzik pointed out). Knowing the objectives, I propose a solution based on them that can be agreed on tomorrow, would permanently fix the block size problem without controversy, and would be immediately applicable.

The objectives: There is consensus on the fact that nobody wants the core developers to be seen as central bankers. There is also consensus that more decentralization is better than less (assuming there is no cost to it). This means you should reject all arguments based on economic, political, and ideological principles about what Bitcoin should become.
This includes: 1) whether Bitcoin should be a store of value or suitable for coffee transactions, 2) whether we need a fee market, block scarcity, and how much of it, 3) whether we need to periodically increase the block size via some voodoo formula which speculates on future bandwidth and cost of storage. Taking decisions for such reasons is what central bankers do, and you don't want to be bankers. It follows that decisions should be taken only on technical and decentralization considerations (more about decentralization below). Scarcity will evolve without you taking any decision about it, for the simple reason that storage and bandwidth are not free, nor is a transaction, thanks to increased propagation time. This baked-in scarcity will evolve automatically as storage, bandwidth, and encoding evolve, without anybody taking any decision or making any speculation about the future.

Sadly, deciding how much decentralization should be in the system by tweaking the block size limit is also an economic decision that should not have its place among the core devs. It follows: 4) core devs should not decide on the suitable amount of decentralization by tweaking the block size limit. Still, removing the limit altogether is a no-no: what would happen if a block of 100 GB were created? The network would immediately become centralized, not only for miners but also for Bitcoin service providers. Also, core devs might have technical considerations about Bitcoin Core which impose a temporary limit until a bug is resolved.

The solution: So here is a proposal that addresses all my points and, I think, would get a reasonable consensus. It can be published tomorrow without any controversy, would be agreed on in one year, and can be safely reiterated every year. Developers will also not have to play politics or central banker. (Well, it sounds too good to be true; I'm waiting to be proven wrong.) The solution is to use block voting.
For each block, a miner gives the size of the block he would like to have at the next deadline (for example, 30 May 2015). The rational choice for him is just enough to clear the memory pool; maybe a little less if he believes fee pressure is beneficial for him, maybe a little more if he believes he should leave some room for increased use. At the deadline, we take the median of the votes and implement it as the new block size limit. Reiterate for the next year.

Objectives reached:
- No central-banking decisions on the devs' shoulders,
- Votes can start tomorrow,
- Implementation has only to be ready in one year (no kicking the can),
- Will increase as demand grows,
- Will increase as network capacity and storage grow,
- Bitcoin becomes what miners want, not what core devs and politicians want,
- Implementation reasonably easy,
- Will get miner consensus; no impact on existing Bitcoin services.

Unknowns:
- Effect on Bitcoin Core stability (core devs might have a valid technical reason to impose a limit),
- Maybe a better statistical function is possible.

Additional input for the debate: Some people were
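The voting rule Nicolas describes is just a median over the per-period votes; a minimal sketch with made-up votes:

```python
import statistics

# Hypothetical block-size votes (in MB) collected from miners over a period.
votes_mb = [1, 1, 2, 2, 3, 4, 8, 20, 20]

# At the deadline, the median vote becomes the new block size limit, so a
# minority voting for extreme sizes cannot drag the limit far on its own.
new_limit_mb = statistics.median(votes_mb)
print(new_limit_mb)  # 3
```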
Re: [Bitcoin-development] Where do I start?
Zero conf :D

From: gabe appleton <gapplet...@gmail.com>
Sent: 16/04/2015 12:15 PM
To: bitcoin-development@lists.sourceforge.net
Subject: [Bitcoin-development] Where do I start?

Background: I'm a CS student quickly approaching his research project, and I'd like to do something meaningful with it. Essentially, I'd like to know what issues someone up for their bachelor's degree might actually be able to help on, and where I can start. Obviously I'm not going to be able to just dive into a 6-year-running project without some prior research, so I'm looking for a start. What are some current things that are lacking in Bitcoin Core? Or am I better off making something else for the ecosystem?
Re: [Bitcoin-development] network disruption as a service and proof of local storage
If IP discovery is your main motivation, why don't you introduce some onion routing into transactions? That would solve this problem easily. Of course there is an overhead which will slightly slow down the relay of transactions, but not significantly. Also, make it an option, not enforced, for those worried about IP association.

From: Robert McKay <rob...@mckay.com>
Sent: 28/03/2015 2:33 AM
To: Matt Whitlock <b...@mattwhitlock.name>
Cc: bitcoin-development@lists.sourceforge.net
Subject: Re: [Bitcoin-development] network disruption as a service and proof of local storage

The main motivation is to try and stop a single entity running lots of nodes in order to harvest transaction origin IPs. That's what's behind this. Probably the efforts are a waste of time... if someone has to keep a few hundred copies of the blockchain around in order to keep IP-specific precomputed data around for all the IPs they listen on, then they'll just buy a handful of 5TB HDs and call it a day. Still, some of the ideas proposed are quite interesting and might not have much downside.

Rob

On 2015-03-27 15:16, Matt Whitlock wrote:

I agree that someone could do this, but why is that a problem? Isn't the goal of this exercise to ensure more full nodes on the network? In order to be able to answer the challenges, an entity would need to be running a full node somewhere. Thus, they have contributed at least one additional full node to the network. I could certainly see a case for a company to host hundreds of lightweight (e.g., EC2) servers all backed by a single copy of the block chain. Why force every single machine to have its own copy? All you really need to require is that each agency/participant have its own copy.
On Friday, 27 March 2015, at 2:32 pm, Robert McKay wrote:

Basically the problem with that is that someone could set up a single full node that has the blockchain and can answer those challenges, and then a bunch of other non-full nodes that just proxy any such challenges to the single full node.

Rob

On 2015-03-26 23:04, Matt Whitlock wrote:

Maybe I'm overlooking something, but I've been watching this thread with increasing skepticism at the complexity of the offered solution. I don't understand why it needs to be so complex. I'd like to offer an alternative for your consideration...

Challenge: "Send me: SHA256(SHA256(concatenation of N pseudo-randomly selected bytes from the block chain))."

Choose N such that it would be infeasible for the responding node to fetch all of the needed blocks in a short amount of time. In other words, assume that a node can seek to a given byte in a block stored on local disk much faster than it can download the entire block from a remote peer. This is almost certainly a safe assumption. For example, choose N = 1024. Then the proving node needs to perform 1024 random reads from local disk. On spinning media, this is likely to take somewhere on the order of 15 seconds. Assuming blocks average 500 KiB each, 1024 blocks would comprise 500 MiB of data. Can 500 MiB be downloaded in 15 seconds? That's a data transfer rate of about 280 Mbps. Almost certainly not possible. And if it is, just increase N. The challenge also becomes more difficult as the average block size increases.

This challenge-response protocol relies on the lack of a partial getdata command in the Bitcoin protocol: a node cannot ask for only part of a block; it must ask for the entire block. Furthermore, nodes could ban other nodes for making too many random requests for blocks.

On Thursday, 26 March 2015, at 7:09 pm, Sergio Lerner wrote:

If I understand correctly, transforming raw blocks to keyed blocks takes 512x longer than transforming keyed blocks back to raw.
The key is public, like the IP, or some other value which perhaps changes less frequently.

Yes. I was thinking that the IP could be part of a first layer of encryption done to the blockchain data prior to the asymmetric operation. That way the asymmetric operation can be the same for all users (no different primes for different IPs, so the verifiers do not have to verify that a particular p is actually a pseudo-prime suitable for P.H.), and the public exponent can be just 3.

Two protocols can be performed to prove local possession:

1. (Prover and verifier pay a small cost.) The verifier sends a seed to derive some n random indexes, and the prover must respond with the hash of the decrypted blocks within a certain time bound. Suppose that decryption of n blocks takes 100 msec (+-100 msec of network jitter). Then an attacker must have a computer 50x faster to be able to consistently cheat. The last 50 blocks should not be part of the list, to allow nodes to catch up and encrypt the blocks in
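Matt Whitlock's simpler challenge from earlier in the thread is easy to prototype. A sketch (a seeded PRNG stands in for the challenge nonce, and a synthetic byte string stands in for locally stored chain data; both are assumptions for illustration):

```python
import hashlib
import random

def prove_possession(chain: bytes, seed: int, n: int = 1024) -> bytes:
    """Respond to the challenge: double-SHA256 over n pseudo-randomly
    selected bytes of the locally stored block chain data."""
    rng = random.Random(seed)
    picked = bytes(chain[rng.randrange(len(chain))] for _ in range(n))
    return hashlib.sha256(hashlib.sha256(picked).digest()).digest()

# The verifier, holding its own copy of the chain, recomputes and compares.
chain = bytes(range(256)) * 4096                    # stand-in for block data
answer = prove_possession(chain, seed=7)
assert answer == prove_possession(chain, seed=7)    # deterministic per challenge
print(len(answer))  # 32-byte digest
```

The timing argument does the real work: the verifier accepts the answer only if it arrives fast enough that the prover must have read the bytes from local disk rather than fetched whole blocks from peers.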
Re: [Bitcoin-development] Address Expiration to Prevent Reuse
Indeed, and with things like BIP32 it would be pointless to use one address, and I agree it is silly to reuse addresses, partly for the privacy aspect and partly for the revealing-the-pubkey-on-a-spend aspect. But just because it is silly doesn't mean it's necessarily required for devs to disallow it. I mean, if a business doesn't care who can see their bitcoin takings and they are willing to keep shifting the bitcoin and live with the exposed pubkey, let them, yeah? http://www.forexminute.com/bitcoin/australian-association-asks-voluntary-bitcoin-register-individuals-companies-51183 From: Gregory Maxwellmailto:gmaxw...@gmail.com Sent: 27/03/2015 2:13 PM To: Thy Shizzlemailto:thyshiz...@outlook.com Cc: s...@sky-ip.orgmailto:s...@sky-ip.org; Tom Hardingmailto:t...@thinlink.com; Bitcoin Developmentmailto:bitcoin-development@lists.sourceforge.net Subject: Re: [Bitcoin-development] Address Expiration to Prevent Reuse On Fri, Mar 27, 2015 at 1:51 AM, Thy Shizzle thyshiz...@outlook.com wrote: Yes I agree, also there are talks about a government body I know of warming to bitcoin by issuing addresses for use by a business and then all transactions can be tracked for that business entity. This is one proposal I saw put forward by a country-specific bitcoin group to their government, and so not allowing address reuse would neuter that :( I hope you're mistaken, because that would be a serious attack on the design of bitcoin, which obtains privacy and fungibility, both essential properties of any money-like good, almost exclusively through avoiding reuse. [What business would use a money where all their competition can see their sales and identify their customers, where their customers can track their margins and suppliers? What individuals would use a system where their in-laws could criticize their spending? Where their landlord knows they got a raise, or where thieves know their net worth?] 
Though no one here is currently suggesting blocking reuse as a network rule, the reasonable and expected response to what you're suggesting would be to do so. If some community wishes to choose not to use Bitcoin, great, but they don't get to simply choose to screw up its utility for all the other users. You should advise this country specific bitcoin group that they shouldn't speak for the users of a system which they clearly do not understand. -- ___ Bitcoin-development mailing list Bitcoin-development@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/bitcoin-development
Re: [Bitcoin-development] Criminal complaints against network disruption as a service startups
giving the user an anonymity option which can be exercised as part of any transaction. Thy Shizzle: I don't believe that at all. Analyzing publicly available information is not illegal. Chainalysis or whatever you call it would be likened to observing who comes and feeds the birds at the park every day. You can sit in the park and observe who feeds the birds, just as you can connect to the Bitcoin P2P network and observe the blocks being formed into the chain, the transactions, etc. There is no agreement in place specifying that upon connecting to the Bitcoin P2P swarm you agree to a set of terms, and even if there were, as every node provides its own entry into the P2P swarm, it would be up to the node providing the connection to uphold and enforce the terms of that agreement. If you allow people to connect to you without terms of agreement, you cannot cry foul when they record the data that passes through. To say Chainalysis needs to cease is silly; the whole point of the public blockchain is for chain analysis, whether it be for the verification of transactions, research or otherwise. -Original Message- From: odinn odinn.cyberguerri...@riseup.net Sent: 23/03/2015 1:48 PM To: bitcoin-development@lists.sourceforge.net bitcoin-development@lists.sourceforge.net Subject: Re: [Bitcoin-development] Criminal complaints against network disruption as a service startups If you (e.g. Chainalysis) or anyone else are doing surveillance on the network and gathering information for later use, and whether or not the ultimate purpose is to divulge it to other parties for compliance purposes, you can bet that ultimately the tables will be turned on you, and you will be the one having your ass handed to you, so to speak, before or after you are served, in legal parlance. 
Whether or not the outcome of that is meaningful and beneficial to any concerned parties, and what the upshot of it is in the end, depends on what you do and just how far you decide to take your ill-advised enterprise. Chainalysis and similar operations would be, IMHO, well advised to cease operations. This doesn't mean they will, but guess what: Shot over the bow, folks. Jan Møller: What we were trying to achieve was determining the flow of funds between countries by figuring out which country a transaction originates from. To do that with a certain accuracy you need many nodes. We chose a class C IP range as we knew that bitcoin core and others only connect to one node in any class C IP range. We were not aware that breadwallet didn't follow this practice. Breadwallet risked getting tar-pitted, but that was not our intention and we are sorry about that. Our nodes DID respond with valid blocks and merkle-blocks and allowed everyone connecting to track the blockchain. We did however not relay transactions. The 'service' bit in the version message is not meant for telling whether or how the node relays transactions; it tells whether you can ask for block headers only or full blocks. Many implementations enforce non-standard rules for handling transactions: some nodes ignore transactions with address reuse, some nodes happily forward double spends, and some nodes forward neither blocks nor transactions. We did blocks but not transactions. In hindsight we should have done two things: 1. relay transactions 2. advertise addresses from 'foreign' nodes Both would have fixed the problems that breadwallet experienced. My understanding is that breadwallet now has the same 'class C' rule as bitcoind, which would also fix it. Getting back on the topic of this thread and whether it is illegal, your guess is as good as mine. I don't think it is illegal to log incoming connections and make statistical analysis on it. 
That would more or less incriminate anyone who runs a web server and looks into the access log. At least one Bitcoin service has been collecting IP addresses for years and giving them to anyone visiting their web-site (you know who) and I believe that this practise is very wrong. We have no intention of giving IP addresses away to anyone, but we believe that you are free to make statistics on connection logs when nodes connect to you. On a side note: When you make many connections to the network you see lots of strange nodes and suspicious patterns. You can be certain that we were not the only ones connected to many nodes. My takeaway from this: If nodes that do not relay transactions are a problem then there is stuff to fix. /Jan On Fri, Mar 13, 2015 at 10:48 PM, Mike Hearn m...@plan99.net wrote: That would be rather new and tricky legal territory. But even putting the legal issues to one side, there are definitional issues. For instance if the Chainalysis nodes started following the protocol specs better and became just regular nodes that happen to keep logs, would that still be a violation? If so, what about blockchain.info? It'd be shooting ourselves in the foot to try and forbid block
Re: [Bitcoin-development] Electrum 2.0 has been tagged
@Neill, Indeed supplying entropy is necessary for testing etc, that's the main reason why I put this in my .NET implementation, the test vectors require us to supply entropy and build the mnemonic from the supplied wordlist and you are correct that changes to the word list will null and void the test vectors. Also it allows us to do fun things like swap between languages so one entropy set can have many seeds based on many languages etc, just novel little things like that. I'm not at all against a standard wordlist. The point I want to get across is that people seem to think that BIP39 is restricted by its word list but not at all. As long as you give a BIP39 implementation 12 words or more (in 3 word increments) it will always derive the same seed bytes, independent of any word list and this is the most important message to realise. @Thomas V if you must record a version, why don't you just put an integer at the end of your mnemonic or something? I can't understand why you have disregarded BIP39 when designing Electrum 2.0? 12 - 24 words plus a version integer tacked on the end, tell the user to omit the version integer if they want to import to multibit HD or whatever, job done! I really think you need to rethink the use of BIP39 with Electrum Thomas! If you want to maintain a version field and/or date independent of the BIP39 spec then do so because at least the seed can still be recreated from anyone else utilising BIP39!!! Thy Date: Thu, 12 Mar 2015 06:51:37 -0500 From: nei...@thecodefactory.org To: thashizn...@yahoo.com.au CC: Bitcoin-development@lists.sourceforge.net Subject: Re: [Bitcoin-development] Electrum 2.0 has been tagged Ok, I see your point here, and I was referring to rebuilding from entropy -- which as you noted is not a real world usage. It is a useful implementation test though and at the very least the existing test vectors would need to be regenerated with each word list change. 
I recently added BIP39 to libbitcoin and our implementation would fail with an arbitrarily new word list because we validate the user-provided word list before converting it to a seed (i.e. we check that the encoded entropy/checksum line up and warn the user if that's not the case, to distinguish a rubbish word list from a BIP39 mnemonic -- as referenced in the BIP). You're correct that we could use rubbish words, but at the moment it's not allowed there. By removing that validating 'restriction', I agree with you that word lists have no need to be fixed. But realistically we still don't allow completely arbitrary words, because I don't see the word lists changing too often, nor implementations storing word lists for every language. Thanks for clarifying, -Neill. On Thu, Mar 12, 2015 at 04:21:59AM +, Thy Shizzle wrote: I agree that it's true that a static wordlist is required once people have started using BIP39 for anything real and changing the word lists will invalidate any existing mnemonics ^ This is incorrect I think Neill, the reason is that the only thing that happens when you change the wordlist is that entropy points to different words. But remember, entropy is disposed. Yes in my code I allow for the keeping of entropy etc, it also lets me hot-swap between different language wordlists etc, but in a real-world implementation the entropy is forgotten and not stored. So changing the wordlist merely allows new mnemonic phrases to be generated, but it has nil impact on previously generated mnemonics UNLESS you are trying to rebuild from entropy, and you wouldn't do that. You would be rebuilding from the mnemonic in a real-world scenario. You really can have a word list of total rubbish in BIP39, as long as it is 2048 words long, that is all! 
If you input the mnemonic made out of rubbish words, so e.g. uyuy jkjasd sdsd sdsdd yuuyu sdsds iooioi sdasds uyuyuy sdsdsd tyyty rwetrtr, then no matter what BIP39 implementation you put it in, it will always generate the same seed bytes, thus allowing for complete and universal seed derivation without any reliance on a word list. The word list is merely to generate a mnemonic; after that it has no role in seed generation, so you can change it at any time and it will never affect future mnemonics. On Thu, Mar 12, 2015 at 02:16:38AM +, Thy Shizzle wrote: That's disappointing that Electrum 2.0 doesn't use BIP39. Agreed, but I don't know the full background on this. Changing the wordlist in the future has ZERO effect on the derived seed; whatever mnemonic you provide will always generate the same seed. BIP39 is not mapping the words back to numbers etc to derive the seed. That's true for generating new mnemonics (i.e. the same entropy can generate any combination of words), but not for converting a mnemonic to a seed (i.e. a specific wordlist/passphrase should always generate the same seed). I agree
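The one-way derivation being argued here can be shown concretely. This is the BIP39 seed computation as specified: NFKD-normalise the sentence, then run PBKDF2-HMAC-SHA512 with 2048 iterations and the salt "mnemonic" + passphrase. No word list appears anywhere, which is exactly the point.

```python
import hashlib
import unicodedata

def bip39_seed(mnemonic: str, passphrase: str = "") -> bytes:
    """BIP39 mnemonic -> 64-byte seed. No word list is consulted: the
    sentence itself is the PBKDF2 password, so any words (even rubbish
    ones) deterministically yield the same seed in every implementation."""
    norm = lambda s: unicodedata.normalize("NFKD", s)
    return hashlib.pbkdf2_hmac(
        "sha512",
        norm(mnemonic).encode("utf-8"),
        ("mnemonic" + norm(passphrase)).encode("utf-8"),
        2048,  # iteration count fixed by the BIP39 spec
    )
```

Feeding in the rubbish phrase from the message above produces a perfectly valid 64-byte seed, independent of any wordlist.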
[Bitcoin-development] Broken Threading
Yes apologies for the broken threading, it was the result of me auto forwarding between mail providers etc. To fix this issue I have created this new dedicated outlook account (thyshiz...@outlook.com) that I shall use for all my subscriptions here and I am unsubscribing the yahoo address. This should solve this issue going forward :)
Re: [Bitcoin-development] Testnet3
Strangely enough, it has started to work properly and I didn't even touch my code, just had it sitting there in the loop/ping circuit it was performing and capturing with Wireshark. That is quite odd! Hi, so I have my .NET node communicating on the P2P network just fine, so I figured as I'll now start looking at making and validating transactions etc. I should probably migrate to testnet. Now I see that we are up to the third-generation testnet, testnet3, and I am sending my messages now using packet magic 0x0b110907, and I'm using Wireshark and can confirm that my messages are going out with that packet magic. Now what is interesting is that when I try to connect to a test node obtained from DNS seed testnet-seed.bitcoin.petertodd.org, I send it a version message with the testnet3 packet magic, yet I get no verack or version in response. In fact, the only thing I get back is a ping and then the connection is severed by the remote node. What is going on? Also, it works fine with the mainnet packet magic value of 0xf9beb4d9, and I am debugging my code and ensuring it is looking for the testnet3 packet magic, but I am not getting a response from the node?
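For reference, the framing being debugged here can be sketched as below. The magic constants are the real mainnet/testnet3 values; the header layout is 4-byte magic, 12-byte zero-padded command, 4-byte payload length, and the first 4 bytes of the payload's double-SHA256 as checksum.

```python
import hashlib
import struct

MAINNET_MAGIC = bytes.fromhex("f9beb4d9")
TESTNET3_MAGIC = bytes.fromhex("0b110907")

def build_header(magic: bytes, command: str, payload: bytes) -> bytes:
    """Build a 24-byte Bitcoin P2P message header for the given network."""
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    return (magic
            + command.encode("ascii").ljust(12, b"\x00")
            + struct.pack("<I", len(payload))
            + checksum)

def network_of(packet: bytes) -> str:
    """Classify an incoming packet by its leading magic bytes."""
    if packet.startswith(MAINNET_MAGIC):
        return "mainnet"
    if packet.startswith(TESTNET3_MAGIC):
        return "testnet3"
    return "unknown"
```

A node that silently drops your version message is exactly what you see when the magic (or the checksum) doesn't match what the peer expects, so checking the first four bytes on the wire, as done above, is the first thing to verify.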
Re: [Bitcoin-development] Electrum 2.0 has been tagged
Right you are! I saw Thomas's email about Electrum 2.0 not supporting BIP39. It seems he had the idea that the wordlist was a strict requirement, yet it is not; it is unfortunate that Electrum did not go the route of BIP39. The wordlist is irrelevant and merely used to help build mnemonics. Also, as I've shown, you can work a version into it; I was going to actually propose it to the BIP39 authors but didn't think it was an issue. I think BIP39 is fantastic. I think Electrum 2.0 (and everyone) should use BIP39 On 2015-03-11 06:21 PM, Thy Shizzle wrote: I don't think it's fair to say that there has been a failure to standardise. From what I read earlier among the wallets, mostly it came down to whether a version was noted, and the date. Assuming no date is provided, it just means you are scanning the block chain from day 0 for transactions right? Hardly a big deal as you will still recover funds right? Unfortunately there's more incompatibility than just the date issue: * seed: some follow BIP39, and some roll their own * HD structure: some follow BIP44, some BIP32 derivation, and some roll their own So actually very few wallets are seed-compatible, even ignoring the date question. Version right now is irrelevant as there is only one version of BIP39 currently; probably this will change, as 2048 iterations of HMAC-SHA512 will likely need to be scaled up in the future. I thought about adding one extra word into the mnemonic to signify version, so if you have a 12-word mnemonic then you have 12 words + 1 version word. Version 1 has no extra word, version 2 uses the first word on the list, version 3 uses the second word on the wordlist, so on and so forth. Least that's what I was thinking of doing if I ever had to record a version; it won't affect anything because entropy increases in blocks of 3 words, so one extra word can simply be thrown on the end. That's a reasonable solution. 
So in summary I feel that date can be handled by assuming day 0, and version is not an issue yet but may become one and probably it is a good idea to think about standardising a version into BIP39, I have provided a seed idea for discussion. I don't think it is quite the doom and gloom I'm reading :) devrandom: I'd like to offer that the best practice for the shared wallet use case should be multi-device multi-sig. The mobile has a key, the desktop has a key and a third-party security oracle has a third key. The oracle would have different security thresholds for countersigning the mobile. This way you can have the same overall wallet on all devices, but different security profiles on different keys. That said, I do agree that mnemonic phrases should be portable, and find it unfortunate that the ecosystem is failing to standardize on phrase handling. -- devrandom / Miron
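The version-word idea from the previous message (one extra word tacked onto a 12-word phrase, where version 1 adds no word and version v >= 2 uses word number v-1 on the list) could look like this sketch. The function names are hypothetical, purely to illustrate the proposal.

```python
def append_version(mnemonic_words, version, wordlist):
    """Append one extra word to a BIP39 mnemonic to carry a wallet
    version, per the scheme floated in this thread: version 1 adds no
    word; version v >= 2 appends wordlist[v - 2]."""
    if version == 1:
        return list(mnemonic_words)
    return list(mnemonic_words) + [wordlist[version - 2]]

def strip_version(words, base_len, wordlist):
    """Recover (mnemonic, version). Since mnemonics grow in 3-word
    steps, a 13th word on a 12-word phrase must be the version marker."""
    if len(words) == base_len:
        return list(words), 1
    return list(words[:-1]), wordlist.index(words[-1]) + 2
```

Because seed derivation ignores the word list entirely, a wallet unaware of this convention could simply be told to drop the last word and would still recover the same seed.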
Re: [Bitcoin-development] Electrum 2.0 has been tagged
That's disappointing that Electrum 2.0 doesn't use BIP39. From my interpretation of BIP39, wordlists are NOT REQUIRED to be fixed between wallet providers. There are some recommendations regarding the wordlists to help with things such as predictive text, so mobile apps can easily predict the word being typed in after a few chars etc. This would seem to be the reasoning for the reference word lists. Now there is nothing stopping one from implementing a wordlist of say profanities or star wars terms or whatever and still accepting a mnemonic from another provider. Remember, if you have a mnemonic from a different wordlist, the words are simply normalised and then the mnemonic is hashed to derive the seed bytes. It is not really a restriction at all! BIP39 was designed such that the mnemonic generation is decoupled from seed derivation, just like what you are saying Electrum 2.0 can do! The wordlist is only needed for mnemonic generation NOT seed derivation, so Electrum DOES NOT need a copy of the BIP39 word lists to generate the seed from the phrase; there is really not much reason for Electrum not to accept BIP39 mnemonics at the moment! It requires bugger all code! Here is my seed generation code:

    // literally this is the bulk of the decoupled seed generation code, easy.
    byte[] salt = Utilities.MergeByteArrays(UTF8Encoding.UTF8.GetBytes(cSaltHeader), _passphraseBytes);
    return Rfc2898_pbkdf2_hmacsha512.PBKDF2(UTF8Encoding.UTF8.GetBytes(Utilities.NormaliseStringNfkd(MnemonicSentence)), salt);

Changing the wordlist in the future has ZERO effect on the derived seed; whatever mnemonic you provide will always generate the same seed. BIP39 is not mapping the words back to numbers etc to derive the seed. Version is something that can be dealt with after the fact, hopefully standardised (curious why didn't you work with BIP39 to insert a version instead of doing something different to BIP39?) So most of what you are suggesting as problems are not. 
As for the common words between languages, I have discussed this with the provider of the Chinese wordlists, as they shared some words between simplified and traditional, but I found it easy to look for a word in the mnemonic that is unique to that language/wordlist, and so straight away you can determine the language, remembering you get a minimum of 12 goes at doing that :) Also then I asked myself, do we really care about detecting the language? Probably not, because we don't need to use the wordlist ever again after creation; we literally accept the mnemonic, normalise it, then hash it into a seed. From what I'm reading, Electrum 2.0 really should have BIP39; it would take almost no effort to put it in and I think you should do that :) I don't have any interest in BIP39 other than it being a standard. I think TREZOR may have an interest in it? Thomas V: Thanks Mike, and sorry to answer a bit late; it has been a busy couple of weeks. You are correct, a BIP39 seed phrase will not work in Electrum, and vice versa. It is indeed unfortunate. However, I believe BIP39 should not be followed, because it reproduces two mistakes I made when I designed the older Electrum seed system. Let me explain. The first problem I have with BIP39 is that the seed phrase does not include a version number. Wallet development is still in an exploratory phase, and we should expect even more innovation in this domain. In this context, it is unwise to make decisions that prevent future innovation. However, when we give a seed phrase to users, we have a moral obligation to keep supporting this seed phrase in future versions. We cannot simply announce to Electrum users that their old seed phrase is not supported anymore, because we created a new version of the software that uses a different derivation. This could lead to financial losses for users who are unaware of these technicalities. Well, at least, that is how I feel about it. 
BIP39 and Electrum v2 have very different ways of handling future innovation. Electrum v2 seed phrases include an explicit version number that indicates how the wallet addresses should be derived. In contrast, BIP39 seed phrases do not include a version number at all. BIP39 is meant to be combined with BIP43, which stipulates that the wallet structure should depend on the BIP32 derivation path used for the wallet (although BIP43 is not followed by all BIP39-compatible wallets). Thus, innovation in BIP43 is allowed only within the framework of BIP32. In addition, having to explore the branches of the BIP32 tree in order to determine the type of wallet attached to a seed might be somewhat inefficient. The second problem I see with BIP39 is that it requires a fixed wordlist. Of course, this forbids innovation in the wordlist itself, but that's not the main problem. When you write a new standard, it is important to keep this standard minimal, given the goal you want to achieve. I believe BIP39 could (and should) have been written without
Re: [Bitcoin-development] Electrum 2.0 has been tagged
Yes I agree with this sentiment. As for the version, don't forget we can kinda brute force our way to determine a version, because let's say there are 10 versions: we can generate the seed for all 10 versions and then check to see which seed was in use (has transacted) and then use that seed. If no transactions are found, we could restore the wallet with the seed of the latest and greatest version. Not really any need to store the version; sure, it may save some time, but as Marek rightly says, this is for restoration of a wallet from cold storage, not an everyday thing, so the extra time to brute force the version etc is acceptable as a trade-off for not forcing the remembering of a version. BIP39 is beautiful. On Wed, Mar 11, 2015 at 6:14 PM, Mike Hearn m...@plan99.net wrote: - Electrum v2 with a version number but no date - myTREZOR with no version and no date and BIP44 key derivation. Some seeds I believe are now being generated with 24 words instead of 12. - MultiBit HD with no version and a date in a custom form that creates non-date-like codes you are expected to write down. I think BIP32 and BIP44 are both supported (sorta). - GreenAddress with no version, no date and BIP32 - Other bitcoinj based wallets, with no version and a date written down in normal human form, BIP32 only. To my knowledge, myTREZOR, Multibit HD and GreenAddress use BIP39, just with different schemes for key derivation (myTREZOR uses full BIP44, Multibit HD uses BIP44 with the first account only, and GreenAddress uses another scheme because it's a multisig-only wallet). I disagree with the need for some version magic flags or creation date stored in the mnemonic, for these reasons: a) If we fail in the way the mnemonic algo is defined, then some magic, extra version flag won't save our asses, because we'll have failed in defining its meaning too. Then it will be completely useless, as implementations cannot rely on it. 
I know Thomas was a sound proponent of this solution, but he was unable to give any reasonable rules about who would define the meaning of the version flag, and how. b) Creation date is just a short-term hack. Considering that mnemonic words are a kind of cold storage (long-term storage), it *really* does not make much difference in 2020 if your wallet was created in 02/2014 or 10/2016. If there's a performance issue with scanning of the blockchain, a creation date doesn't save our asses. We need to find another solution, and as a bonus, we don't need users to know some weird numbers on top of the mnemonic itself. From my interpretation of BIP39, wordlists are NOT REQUIRED to be fixed between wallet providers. There are some recommendations regarding the wordlists to help with things such as predictive text, so mobile apps can easily predict the word being typed in after a few chars etc. Exactly! After some community feedback, we changed the BIP39 algo to be one-way only, which means you can use *any* wordlist to create the mnemonic, and any other implementation can derive the BIP32 root node even without knowing that particular wordlist. Namely, this has been changed because of constructive criticism from ThomasV, and from the discussion on the mailing list I had a feeling that we had found a consensus. I was *very* surprised that Electrum 2.0 started to use yet another algo just because. Shortly said, I think BIP39 does a perfect job and there's no need to use anything else. Cheers, Marek
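The brute-force restoration idea floated earlier in this thread (try every candidate version, keep the seed whose addresses have transaction history, else fall back to the newest version) can be sketched as below. Here derive_seed and has_history are hypothetical stand-ins for the wallet's per-version derivation and a block-chain lookup.

```python
def detect_version(mnemonic, versions, derive_seed, has_history):
    """Restore a wallet without a stored version number: derive the seed
    under each candidate version and return the first version whose
    wallet has on-chain history; for a fresh wallet (no history under
    any version), fall back to the newest version."""
    for v in versions:
        if has_history(derive_seed(mnemonic, v)):
            return v
    return max(versions)
```

The cost is one extra derivation and chain scan per version, which is acceptable for a rare cold-storage restore, as argued above.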
Re: [Bitcoin-development] Electrum 2.0 has been tagged
I agree that it's true that a static wordlist is required once people have started using BIP39 for anything real and changing the word lists will invalidate any existing mnemonics

^ This is incorrect I think, Neill. The reason is that the only thing that happens when you change the wordlist is that the entropy points to different words. But remember, the entropy is disposed of. Yes, in my code I allow for the keeping of entropy etc.; it also lets me hot-swap between different language wordlists, but in a real-world implementation the entropy is forgotten and not stored. So changing the wordlist merely allows new mnemonic phrases to be generated, but it has nil impact on previously generated mnemonics UNLESS you are trying to rebuild from entropy, which you wouldn't do. You would be rebuilding from the mnemonic in a real-world scenario.

You really can have a word list of total rubbish in BIP39, as long as it is 2048 words long, that is all! If you input a mnemonic made out of rubbish words, e.g. "uyuy jkjasd sdsd sdsdd yuuyu sdsds iooioi sdasds uyuyuy sdsdsd tyyty rwetrtr", then no matter what BIP39 implementation you put it in, it will always generate the same seed bytes, thus allowing for complete and universal seed derivation without any reliance on a word list. The word list is merely there to generate a mnemonic; after that it has no role in seed generation, so you can change it at any time and it will never affect future mnemonics.

On Thu, Mar 12, 2015 at 02:16:38AM +, Thy Shizzle wrote: That's disappointing that Electrum 2.0 doesn't use BIP39.

Agreed, but I don't know the full background on this.

Changing the wordlist in the future has ZERO effect on the derived seed; whatever mnemonic you provide will always generate the same seed. BIP39 is not mapping the words back to numbers etc. to derive the seed.

That's true for generating new mnemonics (i.e. the same entropy can generate any combination of words), but not for converting a mnemonic to a seed (i.e. a specific mnemonic/passphrase should always generate the same seed). I agree that it's true that a static wordlist is required once people have started using BIP39 for anything real, and changing the word lists will invalidate any existing mnemonics (unless your 'new' wordlist simply substitutes one word for another and the index mapping is made public ... which means it's not really an arbitrary word list).

Version is something that can be dealt with after the fact, hopefully standardised (curious why you didn't work with BIP39 to insert a version instead of doing something different to BIP39?)

So most of what you are suggesting as problems are not.

I don't see how this can work given the BIP39 spec as it is today (there's simply no room for a version in the bits). I do think versioning would be nice, but as of now, I'm in the camp that thinks complete wallet interoperability is a bit of a myth -- so long as you can fundamentally move into/out of wallets at will. -Neill.

As for the common words between languages, I have discussed this with the provider of the Chinese wordlists, as they shared some words between simplified and traditional, but I found it easy to look for a word in the mnemonic that is unique to that language/wordlist, so straight away you can determine the language, remembering you get a minimum of 12 goes at doing that :) Also, then I asked myself, do we really care about detecting the language? Probably not, because we don't need to use the wordlist ever again after creation; we literally accept the mnemonic, normalise it, then hash it into a seed. From what I'm reading, Electrum 2.0 really should have BIP39; it would take almost no effort to put it in and I think you should do that :) I don't have any interest in BIP39 other than it being a standard. I think TREZOR may have an interest in it?

Thomas V: Thanks Mike, and sorry to answer a bit late; it has been a busy couple of weeks.
You are correct, a BIP39 seed phrase will not work in Electrum, and vice versa. It is indeed unfortunate. However, I believe BIP39 should not be followed, because it reproduces two mistakes I made when I designed the older Electrum seed system. Let me explain.

The first problem I have with BIP39 is that the seed phrase does not include a version number. Wallet development is still in an exploratory phase, and we should expect even more innovation in this domain. In this context, it is unwise to make decisions that prevent future innovation. However, when we give a seed phrase to users, we have a moral obligation to keep supporting this seed phrase in future versions. We cannot simply announce to Electrum users that their old seed phrase is not supported anymore because we created a new version of the software that uses a different derivation. This could lead to financial losses for users who are unaware of these technicalities. Well, at least, that is how I feel about it.

BIP39 and Electrum v2
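The language-detection approach mentioned earlier in the thread (find a word unique to one wordlist) reduces to a simple set check; with 12 or more words, lists that share a few words (e.g. simplified and traditional Chinese) rarely both survive the filter. A toy sketch, assuming the wordlists are already loaded as sets:

```python
def detect_language(mnemonic, wordlists):
    """Return the languages whose wordlist contains every word of the
    mnemonic. `wordlists` maps language name -> set of 2048 words
    (assumed loaded elsewhere). Usually yields a single match."""
    words = mnemonic.split()
    return [lang for lang, wl in wordlists.items()
            if all(w in wl for w in words)]
```

As noted above, this is only needed at mnemonic-creation or checksum-validation time; the seed derivation itself never touches a wordlist.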
[Bitcoin-development] Testnet3
Hi, so I have my .NET node communicating on the P2P network just fine, so I figured, as I'll now start looking at making and validating transactions etc., I should probably migrate to testnet. Now I see that we are up to the third-generation testnet, testnet3, and I am sending my messages using packet magic 0x0b110907; I'm using Wireshark and can confirm that my messages are going out with that packet magic. What is interesting is that when I try to connect to a test node obtained from the DNS seed testnet-seed.bitcoin.petertodd.org, I send it a version message with the testnet3 packet magic, yet I get no verack or version in response. In fact, the only thing I get back is a ping, and then the connection is severed by the remote node. What is going on? It works fine with the mainnet packet magic value of 0xf9beb4d9, and I am debugging my code and ensuring it is looking for the testnet3 packet magic, but I am not getting a response from the node?
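For reference, the message framing being debugged here is the standard Bitcoin P2P header: 4 magic bytes, a 12-byte NUL-padded command, a little-endian payload length, and a checksum equal to the first 4 bytes of double-SHA256 of the payload. A sketch in Python (the node in question is .NET, so the function name `message` here is purely illustrative):

```python
import hashlib
import struct

TESTNET3_MAGIC = bytes.fromhex("0b110907")  # magic as sent on the wire
MAINNET_MAGIC = bytes.fromhex("f9beb4d9")

def message(magic: bytes, command: str, payload: bytes) -> bytes:
    """Frame a Bitcoin P2P message:
    magic | command (12 bytes, NUL-padded) | payload length (LE uint32)
    | checksum (first 4 bytes of SHA256(SHA256(payload))) | payload."""
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    return (magic
            + command.encode("ascii").ljust(12, b"\x00")
            + struct.pack("<I", len(payload))
            + checksum
            + payload)
```

If the remote node sends only a ping and then disconnects, checking in Wireshark that all four header fields (not just the magic) match this layout is a reasonable first step.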
[Bitcoin-development] Useless Address attack?
Hi, so just a thought as my node relays addresses etc. If I wanted to really slow down communication over the P2P network, what's stopping me from popping up a heap of dummy nodes that do nothing more than exchange version and relay addresses, except I send addr messages with all 1000 addresses pointing to my useless nodes that never send invs or respond to getdata etc., so clients connect to my dumb nodes instead of legit ones? I'm thinking that if I fill up their address pool with enough addresses to dumb nodes and keep them really fresh time-wise, it could have a bit of an impact, especially if all 8 outbound connections are used up by my dumb nodes, right? I don't want to do this obviously, I'm just thinking about it as I'm building my node; what is there to stop this happening?
Re: [Bitcoin-development] Useless Address attack?
Interesting! Thanks Kevin, I now need to research this and include such protections in my node.

Also (I am fuzzy on the details for this), Bitcoind will detect when a node is misbehaving and (I believe) it will blacklist misbehaving nodes for a period of time so it doesn't continually keep trying to connect to tarpit nodes, for example.

On Wed, Mar 4, 2015 at 6:13 PM, Kevin Greene kgree...@gmail.com wrote: Bitcoind protects against this by storing the addresses it has learned about in buckets. The bucket an address is stored in is chosen based on the IP of the peer that advertised the addr message, and the address in the addr message itself. The idea is that the bucketing is done in a randomized way so that no attacker should be able to fill your database with his or her own nodes. From addrman.h:

/** Stochastic address manager
 *
 * Design goals:
 *  * Keep the address tables in-memory, and asynchronously dump the entire table to peers.dat.
 *  * Make sure no (localized) attacker can fill the entire table with his nodes/addresses.
 *
 * To that end:
 *  * Addresses are organized into buckets.
 *    * Addresses that have not yet been tried go into 256 "new" buckets.
 *      * Based on the address range (/16 for IPv4) of the source of the information, 32 buckets are selected at random.
 *      * The actual bucket is chosen from one of these, based on the range in which the address itself is located.
 *      * One single address can occur in up to 4 different buckets, to increase selection chances for addresses that
 *        are seen frequently. The chance for increasing this multiplicity decreases exponentially.
 *      * When adding a new address to a full bucket, a randomly chosen entry (with a bias favoring less recently seen
 *        ones) is removed from it first.
 *    * Addresses of nodes that are known to be accessible go into 64 "tried" buckets.
 *      * Each address range selects at random 4 of these buckets.
 *      * The actual bucket is chosen from one of these, based on the full address.
 *      * When adding a new good address to a full bucket, a randomly chosen entry (with a bias favoring less recently
 *        tried ones) is evicted from it, back to the "new" buckets.
 *  * Bucket selection is based on cryptographic hashing, using a randomly-generated 256-bit key, which should not
 *    be observable by adversaries.
 *  * Several indexes are kept for high performance. Defining DEBUG_ADDRMAN will introduce frequent (and expensive)
 *    consistency checks for the entire data structure.
 */
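The bucket-selection idea quoted from addrman.h can be sketched roughly as below. This is a simplified illustration of the scheme the comment describes, not Bitcoin Core's actual code: the hash construction and group encodings here are assumptions, but the key property is the same -- a keyed hash means a single source /16 can only ever reach a small, unpredictable subset of the "new" buckets.

```python
import hashlib

NEW_BUCKETS = 256          # "new" buckets for not-yet-tried addresses
BUCKETS_PER_SOURCE = 32    # subset a single source group can reach

def _h(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def new_bucket(key: bytes, addr_group: bytes, source_group: bytes) -> int:
    """Pick a 'new' bucket for an advertised address.
    `key` is the node's secret random 256-bit key; `addr_group` and
    `source_group` are the /16 group encodings of the advertised address
    and of the peer that advertised it (illustrative encodings).
    The address's own group picks one of the 32 slots the source group
    is allowed to use; the keyed hash hides the mapping from attackers."""
    slot = _h(key + addr_group + source_group) % BUCKETS_PER_SOURCE
    return _h(key + source_group + slot.to_bytes(4, "big")) % NEW_BUCKETS
```

Because the attacker neither knows `key` nor controls the victim's view of `source_group`, flooding addr messages from one network range can only churn 32 of the 256 buckets, which bounds the damage of the attack described in this thread.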