Re: [Bitcoin-development] Handling miner adoption gracefully for embedded consensus systems via double-spending/replace-by-fee

2014-03-25 Thread kjj
Troy Benjegerdes wrote:
> Mark Friedenbach wrote:
>> Bitcoin is not a centralized system, and neither is its development. I
>> don't even know how to respond to that. Bringing up altchains is a total
>> red herring.
>>
>> This is *bitcoin*-development. Please don't make it have to become a
>> moderated mailing list.
> When I can pick up a miner at Best Buy and pay it off in 9 months I'll
> agree with you that bitcoin *might* be decentralized. Maybe there's a
> chance this *will* happen eventually, but right now we have a couple of
> mining cartels that control most of the hashrate.
>
> There are plenty of interesting alt-hash-chains for which mass produced,
> general purpose (or gpgpu-purpose) hardware exists and is in high volume
> mass production.
Decentralized doesn't mean "everyone is doing it", it means "no one can 
stop you from doing it".  Observe bitcoin development.  A few people do 
the bulk of the work, a bunch more people (like me) do work ranging from 
minor to trivial, and millions do nothing.  And yet, it is still totally 
decentralized because no one can stop anyone from making whatever 
changes they want.

So it is also with mining.  The world overall may make it impractical, 
perhaps even foolish, for you to fire up your CPU and mine solo, but no 
one is stopping you, and more to the point, no one is capable of 
stopping you.  There is no center from which you must ask permission.

On moderation, I note that moderation can also be done in a 
decentralized fashion.  I offer this long overdue example:

# anything From this address goes straight to /dev/null (a killfile entry)
:0
* ^From.*ho...@hozed.org
/dev/null



Re: [Bitcoin-development] Handling miner adoption gracefully for embedded consensus systems via double-spending/replace-by-fee

2014-03-25 Thread Troy Benjegerdes
On Mon, Mar 24, 2014 at 01:57:14PM -0700, Mark Friedenbach wrote:
> On 03/24/2014 01:34 PM, Troy Benjegerdes wrote:
> > I'm here because I want to sell corn for bitcoin, and I believe it will be
> > more profitable for me to do that with a bitcoin-blockchain-based system
> > in which I have the capability to audit the code that executes the trade.
> 
> A discussion over such a system would be on-topic. Indeed I have made my
> own proposals for systems with that capability in the past:
> 
> http://sourceforge.net/p/bitcoin/mailman/message/31322676/
> 
> There's no reason to invoke alts, however. There are ways this can
> be done within the bitcoin ecosystem, using bitcoins:
> 
> http://sourceforge.net/p/bitcoin/mailman/message/32108143/
> 
> > I think that's fair, so long as we limit bitcoin-development discussion to
> > issues that are relevant to the owners of the hashrate and companies that
> > pay developer salaries.
> > 
> > What I'm asking for is some honesty that Bitcoin is a centralized system
> > and to stop arguing technical points on the altar of 
> > distributed/decentralized
> > whatever. It's pretty clear if you want decentralized you should go with 
> > altchains.
> 
> Bitcoin is not a centralized system, and neither is its development. I
> don't even know how to respond to that. Bringing up altchains is a total
> red herring.
> 
> This is *bitcoin*-development. Please don't make it have to become a
> moderated mailing list.

When I can pick up a miner at Best Buy and pay it off in 9 months I'll 
agree with you that bitcoin *might* be decentralized. Maybe there's a 
chance this *will* happen eventually, but right now we have a couple of
mining cartels that control most of the hashrate.

There are plenty of interesting alt-hash-chains for which mass produced,
general purpose (or gpgpu-purpose) hardware exists and is in high volume
mass production.





Re: [Bitcoin-development] Handling miner adoption gracefully for embedded consensus systems via double-spending/replace-by-fee

2014-03-24 Thread Luke-Jr
On Saturday, March 22, 2014 8:47:02 AM Peter Todd wrote:
> To make a long story short, it was soon suggested that Bitcoin Core be
> forked - the software, not the protocol - and miners encouraged to
> support it.

There's been at least one public miner-oriented fork of Bitcoin Core since 0.7 
or earlier. Miners still running vanilla Bitcoin Core are neglecting their 
duty to the community. That being said, the more forks, the better for 
decentralisation.

Luke



Re: [Bitcoin-development] Handling miner adoption gracefully for embedded consensus systems via double-spending/replace-by-fee

2014-03-24 Thread Mark Friedenbach
On 03/24/2014 01:34 PM, Troy Benjegerdes wrote:
> I'm here because I want to sell corn for bitcoin, and I believe it will be
> more profitable for me to do that with a bitcoin-blockchain-based system
> in which I have the capability to audit the code that executes the trade.

A discussion over such a system would be on-topic. Indeed I have made my
own proposals for systems with that capability in the past:

http://sourceforge.net/p/bitcoin/mailman/message/31322676/

There's no reason to invoke alts, however. There are ways this can
be done within the bitcoin ecosystem, using bitcoins:

http://sourceforge.net/p/bitcoin/mailman/message/32108143/

> I think that's fair, so long as we limit bitcoin-development discussion to
> issues that are relevant to the owners of the hashrate and companies that
> pay developer salaries.
> 
> What I'm asking for is some honesty that Bitcoin is a centralized system
> and to stop arguing technical points on the altar of distributed/decentralized
> whatever. It's pretty clear if you want decentralized you should go with 
> altchains.

Bitcoin is not a centralized system, and neither is its development. I
don't even know how to respond to that. Bringing up altchains is a total
red herring.

This is *bitcoin*-development. Please don't make it have to become a
moderated mailing list.



Re: [Bitcoin-development] Handling miner adoption gracefully for embedded consensus systems via double-spending/replace-by-fee

2014-03-24 Thread Troy Benjegerdes
I think that's fair, so long as we limit bitcoin-development discussion to
issues that are relevant to the owners of the hashrate and companies that
pay developer salaries.

What I'm asking for is some honesty that Bitcoin is a centralized system
and to stop arguing technical points on the altar of distributed/decentralized
whatever. It's pretty clear if you want decentralized you should go with 
altchains.

I'm here because I want to sell corn for bitcoin, and I believe it will be
more profitable for me to do that with a bitcoin-blockchain-based system
in which I have the capability to audit the code that executes the trade.


On Sun, Mar 23, 2014 at 04:53:48PM -0700, Mark Friedenbach wrote:
> This isn't distributed-systems-development, it is bitcoin-development.
> Discussion over chain parameters is a fine thing to have among people
> who are interested in that sort of thing. But not here.
> 
> On 03/23/2014 04:17 PM, Troy Benjegerdes wrote:
> > I find it very irresponsible for Bitcoiners to on one hand extol the virtues
> > of distributed systems and then in the same message dismiss any discussion
> > about alternate chains as 'off-topic'.
> > 
> > If bitcoin-core is for *distributed systems*, then all the different 
> > altcoins
> > with different hash algorithms should be viable topics for discussion.




Re: [Bitcoin-development] Handling miner adoption gracefully for embedded consensus systems via double-spending/replace-by-fee

2014-03-23 Thread Mark Friedenbach
This isn't distributed-systems-development, it is bitcoin-development.
Discussion over chain parameters is a fine thing to have among people
who are interested in that sort of thing. But not here.

On 03/23/2014 04:17 PM, Troy Benjegerdes wrote:
> I find it very irresponsible for Bitcoiners to on one hand extol the virtues
> of distributed systems and then in the same message dismiss any discussion
> about alternate chains as 'off-topic'.
> 
> If bitcoin-core is for *distributed systems*, then all the different altcoins
> with different hash algorithms should be viable topics for discussion.



Re: [Bitcoin-development] Handling miner adoption gracefully for embedded consensus systems via double-spending/replace-by-fee

2014-03-23 Thread Troy Benjegerdes
> > Right, but there's also a lot of the community who thinks
> > proof-of-publication applications are bad and should be discouraged. I
> > argued before that the way OP_RETURN was being deployed didn't actually
> > give any reason to use it vs. other data encoding methods.
> >
> > Unfortunately underlying all this is a real ignorance about how Bitcoin
> > actually works and what proof-of-publication actually is:
> 
> I understand that proof of publication is not the same thing as
> regular timestamping, but requiring permanent storage in the
> blockchain is not the only way you can implement proof of publication.
> Mark Friedenbach proposes this:
> 
> Store hashes, or a hash root, and soft-fork so that blocks are only
> accepted if (a) the data tree is provided, or (b) sufficient work is
> built on it and/or sufficient time has passed
> 
> This way full nodes can ignore the published data until it is sufficiently
> buried.
> 
> > I think we're just going to have to agree to disagree on our
> > interpretations of the economics with regard to attacking merge-mined
> > chains. Myself, I'm very, very wary of systems that have poor security
> > against economically irrational attackers regardless of how good the
> > security is, in theory, against economically rational ones.
> 
> The attacker was of course economically irrational in my previous
> example, for which you didn't have any complaint. So I think we can
> agree that a merged mined separated chain is more secure than a
> non-merged mined separated chain and that attacking a merged mined
> chain is not free.
> By not being clear on this you're indirectly promoting non-merged
> mined altchains as a better option than merged mined altchains, which
> I don't think is responsible on your part.
> 

I can't speak for Peter, but *I* am currently of the opinion that non-merged
mined altchains using memory-hard proof-of-work are a far better option than
sha-256 merged-mined altchains. This is not a popular position on this list,
and I would like to respectfully disagree, but still collaborate on all the
other things where bitcoin-core *is* the best-in-class code available.

A truly 'distributed' system must support multiple altchains, and multiple
proof-of-work hash algorithms, and probably support proof-of-stake as well.

If sha-256 is the only game in town, the only advantage over the Federal
Reserve is that I can at least audit the code that controls the money supply,
but it's not in any way distributed if the hash power is concentrated
among 5-10 major pools and 5-10 sha-256 ASIC vendors.

I find it very irresponsible for Bitcoiners to on one hand extol the virtues
of distributed systems and then in the same message dismiss any discussion
about alternate chains as 'off-topic'.

If bitcoin-core is for *distributed systems*, then all the different altcoins
with different hash algorithms should be viable topics for discussion.


Troy Benjegerdes 'da hozer'  ho...@hozed.org
7 elements  earth::water::air::fire::mind::spirit::soulgrid.coop

  Never pick a fight with someone who buys ink by the barrel,
 nor try buy a hacker who makes money by the megahash




Re: [Bitcoin-development] Handling miner adoption gracefully for embedded consensus systems via double-spending/replace-by-fee

2014-03-23 Thread Troy Benjegerdes
On Sat, Mar 22, 2014 at 03:08:25PM -0400, Peter Todd wrote:
> On Sat, Mar 22, 2014 at 10:08:36AM -0500, Troy Benjegerdes wrote:
> > On Sat, Mar 22, 2014 at 04:47:02AM -0400, Peter Todd wrote:
> > > There's been a lot of recent hoopla over proof-of-publication, with the
> > > OP_RETURN  length getting reduced to a rather useless 40 bytes at
> > > the last minute prior to the 0.9 release. Secondly I noticed an
> > > overlooked security flaw in that OP_CHECKMULTISIG sigops weren't taken
> > > into account, making it possible to broadcast unminable transactions and
> > > bloat mempools.(1) My suggestion was to just ditch bare OP_CHECKMULTISIG
> > > outputs given that the sigops limit and the way they use up a fixed 20
> > > sigops per op makes them hard to do fee calculations for. They also make
> > > it easy to bloat the UTXO set, potentially a bad thing. This would of
> > > course require things using them to change. Currently that's just
> > > Counterparty, so I gave them the heads up in my email.
> > 
> > I've spent some time looking at the Datacoin code, and I've come to the
> > conclusion that the next copycatcoin I release will have an explicit 'data'
> > field with something like 169 bytes (a baker's dozen squared), which will
> > add 1 byte to each transaction if unused, and provide a small but usable
> > data field for proof of publication. As a new coin, I can also do a
> > hardfork that increases the data size limit much more easily if there is a
> > compelling reason to make it bigger.
> > 
> > I think this will prove to be a much more reliable infrastructure for 
> > proof of publication than various hacks to overcome 40 byte limits with
> > Bitcoin.
> > 
> > I am disclosing this here so the bitcoin 1% has plenty of time to evaluate
> > the market risk they face from the 40 byte limit, and put some pressure to
> > implement some of the alternatives Todd proposes.
> 
> Lol! Granted, I guess I should "disclose" that I'm working on tree
> chains, which just improve the scalability of blockchains directly. I
> think tree-chains could be implemented as a soft-fork; if applied to
> Bitcoin the datacoin 1% might face market risk.  :P

Soft-fork tree chains with reasonable data/memo/annotation storage would be
extremely interesting. The important question, however, is how one builds
a *business* around such a thing, including getting paid as a developer.

What I find extremely relevant to the **bitcoin-dev** list are discussions
about how to motivate the people who own the hashrate and bulk of the coins
(aka, the bitcoin 1%) to PAY DEVELOPERS, and thus it is good marketing FOR
BITCOIN DEVELOPERS to remind the people who profit from our efforts that they
need to make it profitable for developers to work on bitcoin.

If it's more profitable for innovative developers to premine and release
$NEWCOIN-blockchain than it is to work on Bitcoin-blockchain, is that a valid
discussion for this list? Or do you just want to stick your heads in the sand
while VC's look to disrupt Bitcoin?



Re: [Bitcoin-development] Handling miner adoption gracefully for embedded consensus systems via double-spending/replace-by-fee

2014-03-22 Thread Jorge Timón
On 3/22/14, Peter Todd  wrote:
> Well remember that my thinking re: UTXO is that we need to move to a
> system like TXO commitments where storing the entirety of the UTXO set
> for all eternity is *not* required. Of course, that doesn't necessarily
> mean you can't have the advantages of UTXO commitments, but they need to
> be limited in some reasonable way so that long-term storage requirements
> do not grow without bound unreasonably. For example, having TXO
> commitments with a bounded size committed UTXO set seems reasonable; old
> UTXO's can be dropped from the bounded sized set, but can still be spent
> via the underlying TXO commitment mechanism.

Although it is theoretically possible to operate a trustless full node
without downloading the whole blockchain, it is not clear that such
schemes will work in practice or would be accepted, and we certainly
don't have them now.
So I don't think future potential theoretical scalability improvements
are solid arguments in favor of supporting proof of publication now.

> Like I said the real issue is making it easy to get those !IsStandard()
> transactions to the miners who are interested in them. The service bit
> flag I proposed + preferential peering - reserve, say, 50% of your
> peering slots for nodes advertising non-std tx relaying - is simple
> enough, but it is vulnerable to sybil attacks if done naively.

My point is that this seems relevant to competing mining policies in general.

> Right, but there's also a lot of the community who thinks
> proof-of-publication applications are bad and should be discouraged. I
> argued before that the way OP_RETURN was being deployed didn't actually
> give any reason to use it vs. other data encoding methods.
>
> Unfortunately underlying all this is a real ignorance about how Bitcoin
> actually works and what proof-of-publication actually is:

I understand that proof of publication is not the same thing as
regular timestamping, but requiring permanent storage in the
blockchain is not the only way you can implement proof of publication.
Mark Friedenbach proposes this:

Store hashes, or a hash root, and soft-fork so that blocks are only
accepted if (a) the data tree is provided, or (b) sufficient work is
built on it and/or sufficient time has passed

This way full nodes can ignore the published data until it is sufficiently buried.
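
To illustrate the commit-then-reveal shape of that idea, here is a minimal
sketch (Python, purely illustrative, not code from Mark's proposal): only the
32-byte root would need to go on-chain, while the documents themselves travel
out of band.

import hashlib

def sha256d(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(leaves):
    # Bitcoin-style tree: hash the leaves, then hash pairs upward,
    # duplicating the last entry when a level has an odd count.
    level = [sha256d(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [sha256d(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

published = [b"doc one", b"doc two", b"doc three"]
root = merkle_root(published)   # only this 32-byte root goes on-chain
print(root.hex())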

> I think we're just going to have to agree to disagree on our
> interpretations of the economics with regard to attacking merge-mined
> chains. Myself, I'm very, very wary of systems that have poor security
> against economically irrational attackers regardless of how good the
> security is, in theory, against economically rational ones.

The attacker was of course economically irrational in my previous
example, for which you didn't have any complaint. So I think we can
agree that a merged mined separated chain is more secure than a
non-merged mined separated chain and that attacking a merged mined
chain is not free.
By not being clear on this you're indirectly promoting non-merged
mined altchains as a better option than merged mined altchains, which
I don't think is responsible on your part.

> Again, what it comes down to in the end is that when I'm advising
> Mastercoin, Counterparty, Colored Coins, etc. on how they should design
> their systems I know that if they do proof-of-publication on the Bitcoin
> blockchain, it may cost a bit more money than possible alternatives per
> transaction, but the security is very well understood and robust. Fact
> is, these applications can certainly afford to pay the higher
> transaction fees - they're far from the least economically valuable use
> of Blockchain space. Meanwhile the alternatives have, at best, much more
> dubious security properties and at worst no security at all.
> (announce/commit sacrifices are a great example of this, and very easy to
> understand)

I agree that we disagree on additional non-validated data in the main
chain vs merged mined chains as the best way to implement additional
features.
But please, you don't need to spread and maintain existing myths about
merged mining to make your case. If you insist on doing it I will
start to think that the honesty of your arguments is not something
important to you, and you just prefer to try to get people on your
side by any means, which would be very disappointing.



Re: [Bitcoin-development] Handling miner adoption gracefully for embedded consensus systems via double-spending/replace-by-fee

2014-03-22 Thread Peter Todd
On Sat, Mar 22, 2014 at 02:53:41PM +0100, Jorge Timón wrote:
> On 3/22/14, Peter Todd  wrote:
> > There's been a lot of recent hoopla over proof-of-publication, with the
> > OP_RETURN  length getting reduced to a rather useless 40 bytes at
> > the last minute prior to the 0.9 release.
> 
> 
> I'm not against miners accepting transactions that have longer
> data in non-UTXO-polluting OP_RETURN outputs than whatever is specified as
> standard by the reference implementation; maybe it should be raised in the
> standard policy, but I think it was assumed that the most common case would
> be to include the root hash of some "merklized" structure.
> My only argument against non-validated proof of publication is that in
> the long run it will be very expensive since they will have to compete
> with transactions that actually use the utxo, a feature that is more
> valuable. But that's mostly speculation and doesn't imply the need for

Well remember that my thinking re: UTXO is that we need to move to a
system like TXO commitments where storing the entirety of the UTXO set
for all eternity is *not* required. Of course, that doesn't necessarily
mean you can't have the advantages of UTXO commitments, but they need to
be limited in some reasonable way so that long-term storage requirements
do not grow without bound unreasonably. For example, having TXO
commitments with a bounded size committed UTXO set seems reasonable; old
UTXO's can be dropped from the bounded sized set, but can still be spent
via the underlying TXO commitment mechanism.
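
As a toy illustration of that shape (purely conceptual; verify_unspent and the
proof object below are hypothetical stand-ins for a real authenticated TXO
tree), a node could keep only a bounded set of recent outputs and demand a
proof for anything older:

from collections import OrderedDict

class BoundedUtxoSet:
    # Toy model: recent outputs live in a size-bounded cache; spends of
    # evicted outputs must carry a proof against an abstract TXO commitment.
    def __init__(self, max_entries, txo_commitment):
        self.utxos = OrderedDict()         # outpoint -> txout, oldest first
        self.max_entries = max_entries
        self.txo = txo_commitment          # hypothetical proof verifier

    def add(self, outpoint, txout):
        self.utxos[outpoint] = txout
        while len(self.utxos) > self.max_entries:
            self.utxos.popitem(last=False) # drop oldest; still spendable below

    def spend(self, outpoint, proof=None):
        if outpoint in self.utxos:         # recent output: no proof needed
            return self.utxos.pop(outpoint)
        # old output: the spender supplies a membership/unspentness proof
        if proof is not None and self.txo.verify_unspent(outpoint, proof):
            return proof.txout
        raise ValueError("unknown output and no valid proof supplied")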

> any action against it. I would be strongly opposed to a
> limitation on OP_RETURN at the protocol level (other than the block
> size limit itself, that is) and wouldn't mind if they're removed from
> IsStandard. I didn't pay much attention to that and honestly I don't
> care enough.
>
> Maybe this encourages miners to adopt their own policies, which could
> be good for things like replace-by-fee, the rational policy for
> miners, which I strongly support (combined with game theory it can
> provide "instant" transactions as you pointed out in another thread).
> 
> Maybe the right approach is to keep improving modularity and implement
> different and configurable mining policies.

Like I said the real issue is making it easy to get those !IsStandard()
transactions to the miners who are interested in them. The service bit
flag I proposed + preferential peering - reserve, say, 50% of your
peering slots for nodes advertising non-std tx relaying - is simple
enough, but it is vulnerable to sybil attacks if done naively.
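
The slot-reservation half of that is easy to sketch (illustrative only;
NODE_NONSTD_RELAY is a made-up service-bit value, not an assigned one, and the
sybil problem mentioned above is untouched by this):

NODE_NONSTD_RELAY = 1 << 30               # placeholder bit, not an assigned value
MAX_OUTBOUND = 8
RESERVED_FOR_NONSTD = MAX_OUTBOUND // 2   # "say, 50% of your peering slots"

def choose_outbound(candidates):
    # candidates: list of (addr, services) pairs learned from addr messages
    nonstd = [c for c in candidates if c[1] & NODE_NONSTD_RELAY]
    other  = [c for c in candidates if not (c[1] & NODE_NONSTD_RELAY)]
    picks = nonstd[:RESERVED_FOR_NONSTD]        # fill the reserved slots first
    picks += other[:MAX_OUTBOUND - len(picks)]  # then ordinary peers
    if len(picks) < MAX_OUTBOUND:               # top up if ordinary peers are scarce
        extra = [c for c in nonstd if c not in picks]
        picks += extra[:MAX_OUTBOUND - len(picks)]
    return picks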

> > All these methods have some overhead compared to just using OP_RETURN
> > and thus cost more.
> 
> I thought the consensus was that op_return was the right way to put
> non-validated data in the chain, but limiting it in standard policies
> doesn't seem consistent with that.

Right, but there's also a lot of the community who thinks
proof-of-publication applications are bad and should be discouraged. I
argued before that the way OP_RETURN was being deployed didn't actually
give any reason to use it vs. other data encoding methods.

Unfortunately underlying all this is a real ignorance about how Bitcoin
actually works and what proof-of-publication actually is:

14-03-20.log:12:47 < gavinandresen> jgarzik: RE: mastercoin/OP_RETURN:
what's the current thinking on Best Way To Do It?  Seems if I was to do
it I'd just embed 20-byte RIPEMD160 hashes in OP_RETURN, and fetch the
real data from a DHT or website (or any-of-several websites).
Blockchain as reference ledger, not as data storage.
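
A minimal sketch of the pattern Gavin describes (illustrative only; it uses the
usual RIPEMD160-over-SHA256 construction, and whether hashlib exposes ripemd160
depends on the local OpenSSL build):

import hashlib

def op_return_script(payload: bytes) -> bytes:
    # 20-byte digest: RIPEMD160(SHA256(payload))
    digest = hashlib.new('ripemd160', hashlib.sha256(payload).digest()).digest()
    return bytes([0x6a, 0x14]) + digest   # OP_RETURN, push 20 bytes, the hash

document = b"the full data lives in a DHT or on some website"
print(op_return_script(document).hex())   # 22-byte script, well under 40 bytes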

> Peter Todd, I don't think you're being responsible or wise saying
> nonsense like "merged mined chains can be attacked for free", and I
> suggest that you prove your claims by attacking namecoin "for free".
> Please, enlighten us: how is that done?

I think we're just going to have to agree to disagree on our
interpretations of the economics with regard to attacking merge-mined
chains. Myself, I'm very, very wary of systems that have poor security
against economically irrational attackers regardless of how good the
security is, in theory, against economically rational ones.

Again, what it comes down to in the end is that when I'm advising
Mastercoin, Counterparty, Colored Coins, etc. on how they should design
their systems I know that if they do proof-of-publication on the Bitcoin
blockchain, it may cost a bit more money than possible alternatives per
transaction, but the security is very well understood and robust. Fact
is, these applications can certainly afford to pay the higher
transaction fees - they're far from the least economically valuable use
of Blockchain space. Meanwhile the alternatives have, at best, much more
dubious security properties and at worst no security at all.
(announce/commit sacrifices are a great example of this, and very easy to
understand)

-- 
'peter'[:-1]@petertodd.org
bbcc531d48bea8d67597e275b5abcff18e18f46266723e91



Re: [Bitcoin-development] Handling miner adoption gracefully for embedded consensus systems via double-spending/replace-by-fee

2014-03-22 Thread Peter Todd
On Sat, Mar 22, 2014 at 10:08:36AM -0500, Troy Benjegerdes wrote:
> On Sat, Mar 22, 2014 at 04:47:02AM -0400, Peter Todd wrote:
> > There's been a lot of recent hoopla over proof-of-publication, with the
> > OP_RETURN  length getting reduced to a rather useless 40 bytes at
> > the last minute prior to the 0.9 release. Secondly I noticed an
> > overlooked security flaw in that OP_CHECKMULTISIG sigops weren't taken
> > into account, making it possible to broadcast unminable transactions and
> > bloat mempools.(1) My suggestion was to just ditch bare OP_CHECKMULTISIG
> > outputs given that the sigops limit and the way they use up a fixed 20
> > sigops per op makes them hard to do fee calculations for. They also make
> > it easy to bloat the UTXO set, potentially a bad thing. This would of
> > course require things using them to change. Currently that's just
> > Counterparty, so I gave them the heads up in my email.
> 
> I've spent some time looking at the Datacoin code, and I've come to the
> conclusion that the next copycatcoin I release will have an explicit 'data'
> field with something like 169 bytes (a baker's dozen squared), which will
> add 1 byte to each transaction if unused, and provide a small but usable
> data field for proof of publication. As a new coin, I can also do a
> hardfork that increases the data size limit much more easily if there is a
> compelling reason to make it bigger.
> 
> I think this will prove to be a much more reliable infrastructure for 
> proof of publication than various hacks to overcome 40 byte limits with
> Bitcoin.
> 
> I am disclosing this here so the bitcoin 1% has plenty of time to evaluate
> the market risk they face from the 40 byte limit, and put some pressure to
> implement some of the alternatives Todd proposes.

Lol! Granted, I guess I should "disclose" that I'm working on tree
chains, which just improve the scalability of blockchains directly. I
think tree-chains could be implemented as a soft-fork; if applied to
Bitcoin the datacoin 1% might face market risk.  :P

-- 
'peter'[:-1]@petertodd.org
bbcc531d48bea8d67597e275b5abcff18e18f46266723e91




Re: [Bitcoin-development] Handling miner adoption gracefully for embedded consensus systems via double-spending/replace-by-fee

2014-03-22 Thread Mark Friedenbach
Please, by all means: ignore our well-reasoned arguments about
externalized storage and validation cost and alternative solutions.
Please re-discover how proof of publication doesn't require burdening
the network with silly extra data that must be transmitted, kept, and
validated from now until the heat death of the universe. Your failure
will make my meager bitcoin holdings all the more valuable! As despite
persistent assertions to the contrary, making quality software freely
available at zero cost does not pay well, even in finance. You will not
find core developers in the Bitcoin 1%.

Please feel free to flame me back, but off-list. This is for *bitcoin*
development.

On 03/22/2014 08:08 AM, Troy Benjegerdes wrote:
> On Sat, Mar 22, 2014 at 04:47:02AM -0400, Peter Todd wrote:
>> There's been a lot of recent hoopla over proof-of-publication, with the
>> OP_RETURN  length getting reduced to a rather useless 40 bytes at
>> the last minute prior to the 0.9 release. Secondly I noticed an
>> overlooked security flaw in that OP_CHECKMULTISIG sigops weren't taken
>> into account, making it possible to broadcast unminable transactions and
>> bloat mempools.(1) My suggestion was to just ditch bare OP_CHECKMULTISIG
>> outputs given that the sigops limit and the way they use up a fixed 20
>> sigops per op makes them hard to do fee calculations for. They also make
>> it easy to bloat the UTXO set, potentially a bad thing. This would of
>> course require things using them to change. Currently that's just
>> Counterparty, so I gave them the heads up in my email.
> 
> I've spent some time looking at the Datacoin code, and I've come to the
> conclusion that the next copycatcoin I release will have an explicit 'data'
> field with something like 169 bytes (a baker's dozen squared), which will
> add 1 byte to each transaction if unused, and provide a small but usable
> data field for proof of publication. As a new coin, I can also do a
> hardfork that increases the data size limit much more easily if there is a
> compelling reason to make it bigger.
> 
> I think this will prove to be a much more reliable infrastructure for 
> proof of publication than various hacks to overcome 40 byte limits with
> Bitcoin.
> 
> I am disclosing this here so the bitcoin 1% has plenty of time to evaluate
> the market risk they face from the 40 byte limit, and put some pressure to
> implement some of the alternatives Todd proposes.
> 



Re: [Bitcoin-development] Handling miner adoption gracefully for embedded consensus systems via double-spending/replace-by-fee

2014-03-22 Thread Troy Benjegerdes
On Sat, Mar 22, 2014 at 04:47:02AM -0400, Peter Todd wrote:
> There's been a lot of recent hoopla over proof-of-publication, with the
> OP_RETURN  length getting reduced to a rather useless 40 bytes at
> the last minute prior to the 0.9 release. Secondly I noticed an
> overlooked security flaw in that OP_CHECKMULTISIG sigops weren't taken
> into account, making it possible to broadcast unminable transactions and
> bloat mempools.(1) My suggestion was to just ditch bare OP_CHECKMULTISIG
> outputs given that the sigops limit and the way they use up a fixed 20
> sigops per op makes them hard to do fee calculations for. They also make
> it easy to bloat the UTXO set, potentially a bad thing. This would of
> course require things using them to change. Currently that's just
> Counterparty, so I gave them the heads up in my email.

I've spent some time looking at the Datacoin code, and I've come to the
conclusion that the next copycatcoin I release will have an explicit 'data'
field with something like 169 bytes (a baker's dozen squared), which will
add 1 byte to each transaction if unused, and provide a small but usable
data field for proof of publication. As a new coin, I can also do a
hardfork that increases the data size limit much more easily if there is a
compelling reason to make it bigger.
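
For illustration, the "adds 1 byte if unused" framing above is just a
length-prefixed optional field; a rough sketch of the serialization (not
Datacoin code, and the field layout is hypothetical) would be:

MAX_DATA = 169   # the proposed cap: a baker's dozen squared

def serialize_data_field(data: bytes = b"") -> bytes:
    # One length byte, then the payload: an unused field costs exactly 1 byte.
    if len(data) > MAX_DATA:
        raise ValueError("data field limited to %d bytes" % MAX_DATA)
    return bytes([len(data)]) + data

def parse_data_field(buf: bytes):
    # Returns (data field, remainder of the serialized transaction).
    n = buf[0]
    return buf[1:1 + n], buf[1 + n:]

assert serialize_data_field() == b"\x00"   # unused: one extra byte per tx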

I think this will prove to be a much more reliable infrastructure for 
proof of publication than various hacks to overcome 40 byte limits with
Bitcoin.

I am disclosing this here so the bitcoin 1% has plenty of time to evaluate
the market risk they face from the 40 byte limit, and put some pressure to
implement some of the alternatives Todd proposes.

-- 

Troy Benjegerdes 'da hozer'  ho...@hozed.org
7 elements  earth::water::air::fire::mind::spirit::soulgrid.coop

  Never pick a fight with someone who buys ink by the barrel,
 nor try buy a hacker who makes money by the megahash




Re: [Bitcoin-development] Handling miner adoption gracefully for embedded consensus systems via double-spending/replace-by-fee

2014-03-22 Thread Jorge Timón
On 3/22/14, Peter Todd  wrote:
> There's been a lot of recent hoopla over proof-of-publication, with the
> OP_RETURN  length getting reduced to a rather useless 40 bytes at
> the last minute prior to the 0.9 release.


I'm not against miners accepting transactions that have longer
data in non-UTXO-polluting OP_RETURN outputs than whatever is specified as
standard by the reference implementation; maybe it should be raised in the
standard policy, but I think it was assumed that the most common case would
be to include the root hash of some "merklized" structure.
My only argument against non-validated proof of publication is that in
the long run it will be very expensive since they will have to compete
with transactions that actually use the utxo, a feature that is more
valuable. But that's mostly speculation and doesn't imply the need for
any action against it. I would be strongly opposed to a
limitation on OP_RETURN at the protocol level (other than the block
size limit itself, that is) and wouldn't mind if they're removed from
IsStandard. I didn't pay much attention to that and honestly I don't
care enough.
Maybe this encourages miners to adopt their own policies, which could
be good for things like replace-by-fee, the rational policy for
miners, which I strongly support (combined with game theory it can
provide "instant" transactions as you pointed out in another thread).

Maybe the right approach is to keep improving modularity and implement
different and configurable mining policies.

> All these methods have some overhead compared to just using OP_RETURN
> and thus cost more.

I thought the consensus was that op_return was the right way to put
non-validated data in the chain, but limiting it in standard policies
doesn't seem consistent with that.

> Finally I'll be writing something more detailed soon about why
> proof-of-publication is essential and miners would be smart to support
> it. But the tl;dr: of it is if you need proof-of-publication for what
> your system does you're much more secure if you're embedded within
> Bitcoin rather than alongside of it. There's a lot of very bad advice
> getting thrown around lately for things like Mastercoin, Counterparty,
> and for that matter, Colored Coins, to use a separate PoW blockchain or
> a merge-mined one. The fact is if you go with pure PoW, you risk getting
> attacked while you're still growing, and if you go for merge-mined PoW,
> the attacker can do so for free. We've got a real-world example of the
> former with Twister, among many others, usually resulting in a switch to
> a centralized checkpointing scheme. For the latter we have Coiledcoin,
> an alt that made the mistake of using SHA256 merge-mining and got killed
> off early at birth with a zero-cost 51% attack. There is of course a
> censorship risk to going the embedded route, but at least we know that
> for the foreseeable future doing so will require explicit blacklists,
> something most people here are against.

The "proof of publication vs separate chain" discussion is orthogonal
to the "merged mining vs independent chain" one.
If I remember correctly, last time you admitted after my example that
merged mining was comparatively better than a separate chain, that it
was economically harder to attack. I guess ecological arguments won't
help here, but you're confusing the people developing independent chains
and thus pushing them toward a system design that is less secure (on top
of having a more wasteful setup).
Coiledcoin just proves that merged mining may not be the best way to
bootstrap a currency, but you can also start separate and then switch
to merged mining once you have sufficient independent support.
As far as I can tell twister doesn't have a realistic reward mechanism
for miners so the incentives are broken before considering merged
mining.
Proof of work is irreversible and it's a good thing to share it.
Thanks Satoshi for proposing this great idea of merged mining and
thanks vinced for a first implementation with a data structure that
can be improved.
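
For readers who haven't looked at that data structure, the merged-mining check
is roughly the following (heavily simplified sketch; the real AuxPoW format
also has a chain-merkle step and strict coinbase placement rules this omits,
and all helpers here are abstract placeholders):

def check_aux_pow(aux_block_hash, parent_coinbase, coinbase_branch,
                  parent_header, aux_target, verify_merkle_branch, pow_hash):
    # 1. The parent chain's coinbase must commit to the aux block's hash.
    if aux_block_hash not in parent_coinbase:
        return False
    # 2. That coinbase must really be inside the parent block.
    if not verify_merkle_branch(parent_coinbase, coinbase_branch,
                                parent_header.merkle_root):
        return False
    # 3. The parent header's proof of work must meet the *aux* chain's target,
    #    so one round of hashing secures both chains at once.
    return int.from_bytes(pow_hash(parent_header), "little") <= aux_target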

Peter Todd, I don't think you're being responsible or wise saying
nonsense like "merged mined chains can be attacked for free", and I
suggest that you prove your claims by attacking namecoin "for free".
Please, enlighten us: how is that done?
It should be easier with the scamcoin ixcoin, which has a much lower
subsidy to miners, so I don't feel bad about the suggestion if your
"free attack" somehow works (certainly using some magic I don't know
about).

-- 
Jorge Timón

http://freico.in/
