Re: [bitcoin-dev] Playing with full-rbf peers for fun and L2s security

2022-06-14 Thread Peter Todd via bitcoin-dev
On Wed, Jun 15, 2022 at 02:53:58AM +, Luke Dashjr wrote:
> Bitcoin Knots still uses this service bit, FWIW (though due to a bug in some 
> older versions, it wasn't signalled by default). There are probably at least 
> 100 nodes with full RBF already.

Right. However it looks like you do not add NODE_REPLACE_BY_FEE to the list
returned by GetDesirableServiceFlags, so those nodes won't preferentially peer
with each other.

Also, if NODE_REPLACE_BY_FEE is added to the desirable service flags, it
ideally needs to be supported by the DNS seeds too. Currently it is not.

-- 
https://petertodd.org 'peter'[:-1]@petertodd.org


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Playing with full-rbf peers for fun and L2s security

2022-06-14 Thread Luke Dashjr via bitcoin-dev
Bitcoin Knots still uses this service bit, FWIW (though due to a bug in some 
older versions, it wasn't signalled by default). There are probably at least 
100 nodes with full RBF already.

On Wednesday 15 June 2022 02:27:20 Peter Todd via bitcoin-dev wrote:
> On Mon, Jun 13, 2022 at 08:25:11PM -0400, Antoine Riard via bitcoin-dev wrote:
> > If you're a node operator curious to play with full-rbf, feel free to
> > connect to this node or spawn up a toy, public node yourself. There is a
> > ##uafrbf libera chat if you would like information on the settings or
> > looking for full-rbf friends (though that step could be automated in the
> > future by setting up a dedicated network bit and reserving a few outbound
> > slots for them).
>
> I previously maintained a Bitcoin Core fork that did just that, using
> nServices bit 26:
>
> https://github.com/petertodd/bitcoin/commit/1cc1a46a633535c42394380b656d681258a111ac
>
> IIRC I was using the code written to prefer segwit peers; I have no idea if
> a similar approach is still easy to implement as I haven't worked on the
> Bitcoin Core codebase for years.



Re: [bitcoin-dev] Playing with full-rbf peers for fun and L2s security

2022-06-14 Thread Peter Todd via bitcoin-dev
On Mon, Jun 13, 2022 at 08:25:11PM -0400, Antoine Riard via bitcoin-dev wrote:
> If you're a node operator curious to play with full-rbf, feel free to
> connect to this node or spawn up a toy, public node yourself. There is a
> ##uafrbf libera chat if you would like information on the settings or
> looking for full-rbf friends (though that step could be automated in the
> future by setting up a dedicated network bit and reserving a few outbound
> slots for them).

I previously maintained a Bitcoin Core fork that did just that, using nServices
bit 26:

https://github.com/petertodd/bitcoin/commit/1cc1a46a633535c42394380b656d681258a111ac

IIRC I was using the code written to prefer segwit peers; I have no idea if a
similar approach is still easy to implement as I haven't worked on the Bitcoin
Core codebase for years.

-- 
https://petertodd.org 'peter'[:-1]@petertodd.org




Re: [bitcoin-dev] Why OpenTimestamps does not "linearize" its transactions

2022-06-14 Thread Andrew Poelstra via bitcoin-dev
On Tue, Jun 14, 2022 at 01:15:08PM -0400, Undiscussed Horrific Abuse, One 
Victim of Many via bitcoin-dev wrote:
> I'm replying to Peter, skipping the other emails.
> 
> I perceive all these emails as disruptive trolling, ignoring the
> importance of real timestamping, while handwaving about things that
> are roughly false and harmful.
> 
> Since the start of cryptocurrency, Bitcoin has been used to write
> timestamps that stay intact despite malicious action to arbitrary
> systems and records, showing the earliest on-chain publication of
> data. It seems misleading that OTS does not do that, when it is such a
> prominent system.
>

Please be cautious with tone and when assuming bad faith. I don't believe
that Peter is trolling. Also, as politely as I can, when something like
OTS whose model is dead-simple, well-documented, and has been running for
years providing significant value to many people, comes under attack for
being underspecified or failing to do what it says ... this is a
surprising claim, to say the least.


After talking to a few people offline, all of whom are baffled at this
entire conversation, I think the issue might come down to the way that
people interpret "timestamping".

If you believe that "timestamping" means providing a verifiable ordering
to events, then of course OTS does not accomplish this, nor has it ever
claimed to. If you think that "timestamping" means proving that some
data existed at a particular time, then this is exactly what OTS does.

Personally -- and I suspect this is true of Peter as well -- I have always
read the word as having the latter meaning, and it never occurred to me
until now that others might have a different interpretation.


I apologize for contributing to a thread that is getting a bit out of hand,
but I hope this can help the different parties see where the confusion is.




-- 
Andrew Poelstra
Director of Research, Blockstream
Email: apoelstra at wpsoftware.net
Web:   https://www.wpsoftware.net/andrew

The sun is always shining in space
-Justin Lewis-Webster





Re: [bitcoin-dev] Why OpenTimestamps does not "linearize" its transactions

2022-06-14 Thread Undiscussed Horrific Abuse, One Victim of Many via bitcoin-dev
I'm replying to Peter, skipping the other emails.

I perceive all these emails as disruptive trolling, ignoring the
importance of real timestamping, while handwaving about things that
are roughly false and harmful.

Since the start of cryptocurrency, Bitcoin has been used to write
timestamps that stay intact despite malicious action to arbitrary
systems and records, showing the earliest on-chain publication of
data. It seems misleading that OTS does not do that, when it is such a
prominent system.

>> This does not provide the service you describe. It would be trivial to
>> include enough cryptographic information in the original OP_RETURN, so
>> as to obviate the need for publicizing the .ots file.
>
> That approach does not scale. Via merkle trees, the OpenTimestamps system
> routinely timestamps tens of thousands of messages with a single
> transaction:
>
> https://petertodd.org/2016/opentimestamps-announcement#scalability-through-aggregation

This makes total sense to reduce the expense and size of etching these
very short hashes.

But the OTS approach hashes in a _private nonce_ for every document,
preventing anybody from validating the earliness of an item in a
merkle tree without access to every proof.

Do you think OTS would be interested in publicizing nonces and
document hashes, if the user consents?

Non-developers need a tool where they can choose to pay funds to write
a strong timestamp that guarantees earliness of publication of a
document, and for free discern the earliness of timestamped data they
provide to the tool.

> Client-side validated .ots files are a necessary requirement to achieve
> this
> scalability.

Nothing in an engineering task is a strict requirement, aside from the
specification. The data could be publicised elsewhere, or funds
provided to store it on-chain.

> FWIW the most I've personally done is timestamped 750 million items from
> the
> Internet Archive with a single transaction.

That's impressive. It's always great when we write something that can
condense something huge into something tiny and keep working, and use
it reliably.

I would have put the files in a shared datalad repository, and put the
tip commit of the repository in an OP_RETURN along with a tag such as
'DL' or 'IA'.

Then a tool could look for all 'DL' or 'IA' transactions, and verify
that mine was the earliest. You would of course need access to the
etched repositories' git commits.

If the hash can't be verified by an anonymous observer, the archive is
only timestamped for people with the proof. How is the challenge of
protecting many proofs different from the challenge of protecting the
data they prove?

>> If I send my .ots file to another party, a 4th party can replace it
>> with their own, because there is no cryptographic pinning ensuring its
>> contents. This changes the timestamp to one later, no longer proving
>> the earliness of the data.
>
> They can also simply delete their copy of the data, making it impossible to
> prove anything about it.

If they can destroy your .ots proof, the information on the blockchain
no longer demonstrates anything.


Re: [bitcoin-dev] Why OpenTimestamps does not "linearize" its transactions

2022-06-14 Thread Peter Todd via bitcoin-dev
On Tue, Jun 14, 2022 at 08:45:43AM -0400, Undiscussed Horrific Abuse, One 
Victim of Many via bitcoin-dev wrote:
> > The basic service that a timestamp service provides is “this content (or at
> > least a digest of this content) existed at least as early as this
> > timestamp.” It says nothing about how long before the timestamp the content
> 
> OTS needlessly adds the requirement that the user publicize their .ots
> files to everybody who will make use of the timestamp.
>
> This does not provide the service you describe. It would be trivial to
> include enough cryptographic information in the original OP_RETURN, so
> as to obviate the need for publicizing the .ots file.

That approach does not scale. Via merkle trees, the OpenTimestamps system
routinely timestamps tens of thousands of messages with a single transaction:

https://petertodd.org/2016/opentimestamps-announcement#scalability-through-aggregation

Client-side validated .ots files are a necessary requirement to achieve this
scalability.
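The aggregation scheme can be sketched in a few lines. This is an illustrative toy, not the OTS implementation: the function names are mine, the tree is a plain balanced merkle tree, and each document digest is blinded with a private per-leaf nonce as discussed elsewhere in the thread. Only the 32-byte root goes on-chain; each client keeps its own sibling path (the role the .ots file plays):

```python
import hashlib
import os

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_aggregate(leaves):
    """Fold leaf digests into one root, duplicating the last element on odd
    levels, and record each leaf's sibling path up to the root."""
    paths = [[] for _ in leaves]
    index = {i: i for i in range(len(leaves))}  # leaf -> position in level
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        nxt = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        for leaf, pos in index.items():
            paths[leaf].append((pos % 2 == 1, level[pos ^ 1]))
            index[leaf] = pos // 2
        level = nxt
    return level[0], paths

def verify(leaf, path, root):
    """Recompute the root from one leaf and its sibling path."""
    d = leaf
    for leaf_is_right, sibling in path:
        d = sha256(sibling + d) if leaf_is_right else sha256(d + sibling)
    return d == root

# blind each document hash with a private per-document nonce, then aggregate
docs = [sha256(("doc-%d" % i).encode()) for i in range(10_000)]
leaves = [sha256(d + os.urandom(16)) for d in docs]
root, paths = merkle_aggregate(leaves)   # only `root` needs a transaction
assert verify(leaves[1234], paths[1234], root)
```

The point of the sketch: ten thousand documents cost one on-chain digest, but verifying any single document requires its client-side path, which is why the proofs cannot live on-chain.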

FWIW the most I've personally done is timestamped 750 million items from the
Internet Archive with a single transaction.

> If I send my .ots file to another party, a 4th party can replace it
> with their own, because there is no cryptographic pinning ensuring its
> contents. This changes the timestamp to one later, no longer proving
> the earliness of the data.

They can also simply delete their copy of the data, making it impossible to
prove anything about it.

-- 
https://petertodd.org 'peter'[:-1]@petertodd.org




Re: [bitcoin-dev] Why OpenTimestamps does not "linearize" its transactions

2022-06-14 Thread Peter Todd via bitcoin-dev
On Tue, Jun 14, 2022 at 07:53:29AM -0400, Undiscussed Horrific Abuse, One 
Victim of Many via bitcoin-dev wrote:
> I was privately asked for more opinions. I am sharing them publicly below:
> 
> It's always been clear that OTS proves longness of duration but not
> shortness. It doesn't demonstrate that an earlier work was not
> published, because it hashes each document hash with private material
> the author must separately publicize. Any unpublished private material
> could be an earlier equivalent to a public proof.
> 
> the reason i call this 'designed to be broken' is that it lets people
> rewrite history to their stories by republishing other people's
> documents under different contexts.

See "What Can and Can't Timestamps Prove?":

https://petertodd.org/2016/opentimestamps-announcement#what-can-and-cant-timestamps-prove

OpenTimestamps makes a trade-off: timestamps have significant limitations as to
what they're able to prove. But in exchange, they have exceptionally good
scalability, making them essentially free to use. Timestamps are also much
easier to add on to existing processes and systems such as Git repositories.
Schemes that prove uniqueness require much more engineering and redesign work
to actually accomplish anything.

-- 
https://petertodd.org 'peter'[:-1]@petertodd.org




Re: [bitcoin-dev] Why OpenTimestamps does not "linearize" its transactions

2022-06-14 Thread digital vagabond via bitcoin-dev
If someone wants more linearity and uniqueness guarantees from a timestamp,
that isn't what OTS was designed for. Here is a protocol that was:
https://www.commerceblock.com/mainstay/

On Tue, Jun 14, 2022, 3:56 PM Bryan Bishop via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Tue, Jun 14, 2022 at 8:48 AM Undiscussed Horrific Abuse, One Victim of
> Many via bitcoin-dev  wrote:
>
>> OTS needlessly adds the requirement that the user publicize their .ots
>> files to everybody who will make use of the timestamp.
>
>
> Publication is not a component of the OTS system.
>
> This does not provide the service you describe. It would be trivial to
>> include enough cryptographic information in the original OP_RETURN, so
>> as to obviate the need for publicizing the .ots file.
>>
>
> (Why would it be needless to require everyone to publish OTS files but not
> needless to require everyone to publish via OP_RETURN? In fact, now you
> have blockchain users that don't ever use your OP_RETURN data.)
>
>
>> If I send my .ots file to another party, a 4th party can replace it
>> with their own, because there is no cryptographic pinning ensuring its
>> contents. This changes the timestamp to one later, no longer proving
>> the earliness of the data.
>>
>
> You can't replace a timestamp in the OTS system; you can only make a new
> timestamp. To use the earlier timestamp, you would have to use the earlier
> timestamp. At any time it is allowed to make a new timestamp based on the
> current clock. The use case for OTS is proving document existence as of a
> certain time and that if you had doctored a file then said doctoring was no
> later than the earliest timestamp that can be provided.
>
> I was just talking about this the other day actually...
> https://news.ycombinator.com/item?id=31640752
>
> - Bryan
> https://twitter.com/kanzure


Re: [bitcoin-dev] Why OpenTimestamps does not "linearize" its transactions

2022-06-14 Thread Bryan Bishop via bitcoin-dev
On Tue, Jun 14, 2022 at 8:48 AM Undiscussed Horrific Abuse, One Victim of
Many via bitcoin-dev  wrote:

> OTS needlessly adds the requirement that the user publicize their .ots
> files to everybody who will make use of the timestamp.


Publication is not a component of the OTS system.

This does not provide the service you describe. It would be trivial to
> include enough cryptographic information in the original OP_RETURN, so
> as to obviate the need for publicizing the .ots file.
>

(Why would it be needless to require everyone to publish OTS files but not
needless to require everyone to publish via OP_RETURN? In fact, now you
have blockchain users that don't ever use your OP_RETURN data.)


> If I send my .ots file to another party, a 4th party can replace it
> with their own, because there is no cryptographic pinning ensuring its
> contents. This changes the timestamp to one later, no longer proving
> the earliness of the data.
>

You can't replace a timestamp in the OTS system; you can only make a new
timestamp. To use the earlier timestamp, you would have to use the earlier
timestamp. At any time it is allowed to make a new timestamp based on the
current clock. The use case for OTS is proving document existence as of a
certain time and that if you had doctored a file then said doctoring was no
later than the earliest timestamp that can be provided.

I was just talking about this the other day actually...
https://news.ycombinator.com/item?id=31640752

- Bryan
https://twitter.com/kanzure


Re: [bitcoin-dev] Why OpenTimestamps does not "linearize" its transactions

2022-06-14 Thread Undiscussed Horrific Abuse, One Victim of Many via bitcoin-dev
hi r1m, i'll talk with you as long as it's fun to do so.

>> the reason i call this 'designed to be broken' is that it lets people
>> rewrite history to their stories by republishing other people's
>> documents under different contexts.
>
> The basic service that a timestamp service provides is “this content (or at
> least a digest of this content) existed at least as early as this
> timestamp.” It says nothing about how long before the timestamp the content

OTS needlessly adds the requirement that the user publicize their .ots
files to everybody who will make use of the timestamp.

This does not provide the service you describe. It would be trivial to
include enough cryptographic information in the original OP_RETURN, so
as to obviate the need for publicizing the .ots file.

If I send my .ots file to another party, a 4th party can replace it
with their own, because there is no cryptographic pinning ensuring its
contents. This changes the timestamp to one later, no longer proving
the earliness of the data.

>> I would not be surprised if OTS also fails to add tx history
>> containing its hashes to associated wallets, letting them be lost in
>> chain forks.

> for me. Are there wallets that you’ve seen that incorporate OTS? I’d love to

I mean the cryptographic wallets that hold the funds spent in etching
the hash to the chain.


Re: [bitcoin-dev] Why OpenTimestamps does not "linearize" its transactions

2022-06-14 Thread rot13maxi via bitcoin-dev
Good morning Undiscussed Horrific Abuse, One Victim of Many,

> the reason i call this 'designed to be broken' is that it lets people
> rewrite history to their stories by republishing other people's
> documents under different contexts.

The basic service that a timestamp service provides is “this content (or at 
least a digest of this content) existed at least as early as this timestamp.” 
It says nothing about how long before the timestamp the content existed, and 
says nothing about how long after the timestamp the content continues to exist. 
It also says nothing about uniqueness or validity of the content. For example, 
a document that existed for a year before its timestamp and was deleted 
immediately afterwards, and a document that was created the instant before its 
timestamp and was retained “forever” afterwards would have timestamp that are 
equally valid (provided you retained the digest of the document to validate the 
timestamp in the former case). Assurances around uniqueness (for example, 
preventing double spends) are a proof-of-publication or set-consistency 
problem, and assurances around validity are a validation problem. These other 
semantics can be built into systems that also rely on timestamps, but you can 
have a useful time stamping system without them. This is what OTS provides. 
When you say it’s “designed to be broken,” do you mean that it claims to provide 
assurances that it doesn’t, or that the set of assurances it does provide is not 
a useful set?

> I would not be surprised if OTS also fails to add tx history
> containing its hashes to associated wallets, letting them be lost in
> chain forks.

I’ve always used OTS through the CLI, which just spits out and works with .ots 
files, which are serialized commitment operations. Storage of the .ots files for 
later checking has always been a “problem left to the application” for me. Are 
there wallets that you’ve seen that incorporate OTS? I’d love to see them!

Best,
rot13maxi

On Tue, Jun 14, 2022 at 7:53 AM, Undiscussed Horrific Abuse, One Victim of Many 
via bitcoin-dev  wrote:

> I was privately asked for more opinions. I am sharing them publicly below:
>
> It's always been clear that OTS proves longness of duration but not
> shortness. It doesn't demonstrate that an earlier work was not
> published, because it hashes each document hash with private material
> the author must separately publicize. Any unpublished private material
> could be an earlier equivalent to a public proof.
>
> the reason i call this 'designed to be broken' is that it lets people
> rewrite history to their stories by republishing other people's
> documents under different contexts.
>
> I would not be surprised if OTS also fails to add tx history
> containing its hashes to associated wallets, letting them be lost in
> chain forks.


Re: [bitcoin-dev] Why OpenTimestamps does not "linearize" its transactions

2022-06-14 Thread Undiscussed Horrific Abuse, One Victim of Many via bitcoin-dev
hey various,

it's been obvious since its inception that opentimestamps is designed
to be broken.

if you have energy to normalise a better system, or support one of the
other better systems that already exists, that's wonderful.

i suspect the opentimestamps ecosystem is very experienced at defending itself.


Re: [bitcoin-dev] Why OpenTimestamps does not "linearize" its transactions

2022-06-14 Thread Undiscussed Horrific Abuse, One Victim of Many via bitcoin-dev
I was privately asked for more opinions. I am sharing them publicly below:

It's always been clear that OTS proves longness of duration but not
shortness. It doesn't demonstrate that an earlier work was not
published, because it hashes each document hash with private material
the author must separately publicize. Any unpublished private material
could be an earlier equivalent to a public proof.

the reason i call this 'designed to be broken' is that it lets people
rewrite history to their stories by republishing other people's
documents under different contexts.

I would not be surprised if OTS also fails to add tx history
containing its hashes to associated wallets, letting them be lost in
chain forks.


[bitcoin-dev] Why OpenTimestamps does not "linearize" its transactions

2022-06-14 Thread Peter Todd via bitcoin-dev
On Mon, May 02, 2022 at 08:59:49AM -0700, Jeremy Rubin wrote:
> Ok, got it. Won't waste anyone's time on terminology pedantism.
> 
> 
> The model that I proposed above is simply what *any* correct timestamping
> service must do. If OTS does not follow that model, then I suspect whatever
> OTS is, is provably incorrect or, in this context, unreliable, even when
> servers and clients are honest.

Do you think RFC 3628 is "provably incorrect" too? It's just a standard for
Trusted Time-Stamping Authorities to issue timestamp proofs via digital
signatures, in the most straight forward manner of signing a message claiming
that some digest existed as of some time.

As the RFC says in the introduction:

   The TSA's role is to time-stamp a datum to establish evidence indicating
   that a datum existed before a particular time.  This can then be used, for
   example, to verify that a digital signature was applied to a message before
   the corresponding certificate was revoked thus allowing a revoked public key
   certificate to be used for verifying signatures created prior to the time of
   revocation.

Simple and straight forward.

The problem here starts with the fact that you're asking timestamp services
to do things that they're not claiming they do; a timestamp proof simply proves
that some message m existed prior to some time t. Nothing more.

Worse though, linearization is a busted approach.

> Unreliable might mean different things to
> different people, I'm happy to detail the types of unreliability issue that
> arise if you do not conform to the model I presented above (of which,
> linearizability is one way to address it, there are others that still
> implement epoch based recommitting that could be conceptually sound without
> requiring linearizability).
> 
> Do you have any formal proof of what guarantees OTS provides against which
> threat model? This is likely difficult to produce without a formal model of
> what OTS is, but perhaps you can give your best shot at producing one and
> we can carry the conversation on productively from there.

So as you know, an OpenTimestamps proof consists of a series of commitment
operations that act on an initial message m, leading to a message known to have
been created at some point in time. Almost always a Bitcoin block header. But
other schemes like trusted timestamps are possible too.

A commitment operation (namely hashes + concatenation) simply needs the
property that for a given input message m, the output H(m) can't be predicted
without knowledge of m. In the case of concatenation, this property is achieved
trivially by the fact that the output includes m verbatim. Similarly, SHA1 is
still a valid commitment operation.
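A minimal sketch of such a proof as a sequence of commitment operations, assuming only append, prepend, and SHA-256 (the operation names here are illustrative, not the OTS wire format):

```python
import hashlib

# Each op either splices known bytes around the running message or hashes it.
OPS = {
    "sha256": lambda m, arg: hashlib.sha256(m).digest(),
    "append": lambda m, arg: m + arg,
    "prepend": lambda m, arg: arg + m,
}

def execute(message: bytes, proof) -> bytes:
    """Apply each commitment op in order; the final digest should match one
    known to have existed at some time, e.g. in a Bitcoin block header."""
    for op, arg in proof:
        message = OPS[op](message, arg)
    return message

m = b"hello world"
proof = [("prepend", b"nonce:"), ("sha256", None),
         ("append", b"\x01"), ("sha256", None)]
attested = execute(m, proof)
```

Because no op's output can be predicted without the input, matching `attested` against a digest of known age commits the original message to that age.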

Behind the scenes the OTS infrastructure builds merkle trees of commitment
operations for scalability reasons. But none of those details are relevant to
the validity of OTS proofs - the OTS infrastructure could magically mine a
block per transaction with the digest in the coinbase, and from the client's
point of view, everything would work the same.


The important thing to recognize is that timestamp proof is simply a one-sided
bound on when a given message existed, proving a message existed _prior_ to
some point in time. For example:

$ ots verify hello-world.txt.ots
Assuming target filename is 'hello-world.txt'
Success! Bitcoin block 358391 attests existence as of 2015-05-28 EDT

Obviously, the message "Hello World!" existed prior to 2015 (Indeed, it's such
a short message it's brute-forcable. But for sake of example, we'll ignore
that).

Thus your claim re: linearization that:

> Having a chain of transactions would serve to linearize history of
> OTS commitments which would let you prove, given reorgs, that knowledge of
> commit A was before B a bit more robustly.

...misunderstands the problem. We care about proving statements about messages.
Not timestamp proofs. Building infrastructure to order timestamp proofs
themselves is pointless.


What you're alluding to is dual-sided bounds on when messages were created.
That's solved by random beacons: messages known to have been created *after* a
point in time, and unpredictable prior. A famous example of course being the
genesis block quote:

The Times 03/Jan/2009 Chancellor on brink of second bailout for banks

Bitcoin block hashes make for a perfectly good random beacon for use-cases with
day to hour level precision. For higher precision, absolute time, there are
many trusted alternatives like the NIST random beacon, Roughtime, etc.
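The two-sided bound described above can be expressed as a simple check; this is a sketch with made-up values, not any deployed protocol (the function name and timestamps are mine for illustration):

```python
import hashlib

def creation_bound(message: bytes, beacon: bytes,
                   beacon_time: int, stamp_time: int):
    """`beacon` (e.g. a block hash) was unpredictable before beacon_time, so a
    message embedding it was created after; an OTS-style timestamp then shows
    it existed before stamp_time. Together: created in [beacon_time, stamp_time]."""
    if beacon not in message:
        raise ValueError("message does not commit to the beacon")
    return (beacon_time, stamp_time)

# stand-in beacon; a real one would be an actual recent block hash
beacon = hashlib.sha256(b"block header bytes").digest()
msg = b"statement made after the beacon: " + beacon
lower, upper = creation_bound(msg, beacon, 1432766000, 1432852400)
```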


OpenTimestamps could offer a trustless _relative_ random beacon service by
making the per-second commitments a merkle mountain range, and publishing the
tip digests. In fact, that's how I came up with merkle mountain ranges in the
first place, and there's code from 2012 to do exactly that in depths of the git
repo. But that's such a niche use-case I decided against that approach for now;
I'll probably resurrect it in the future 
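A merkle mountain range can be sketched minimally as a stack of peaks merged like binary-counter carries; this is a toy for intuition (class and method names are mine, not from the OTS codebase), with peaks "bagged" right-to-left into a tip digest:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class MerkleMountainRange:
    """Append-only MMR: each append may merge equal-height peaks, so the
    peak set always mirrors the binary representation of the leaf count."""
    def __init__(self):
        self.peaks = []  # list of (height, digest)

    def append(self, leaf: bytes):
        node = (0, h(leaf))
        while self.peaks and self.peaks[-1][0] == node[0]:
            height, left = self.peaks.pop()
            node = (height + 1, h(left + node[1]))
        self.peaks.append(node)

    def tip(self) -> bytes:
        # bag the peaks right-to-left into a single publishable digest
        digest = self.peaks[-1][1]
        for _, peak in reversed(self.peaks[:-1]):
            digest = h(peak + digest)
        return digest
```

Publishing `tip()` per second would give the relative beacon service described: each tip commits to everything before it and is unpredictable in advance.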

Re: [bitcoin-dev] Package Relay Proposal

2022-06-14 Thread Gloria Zhao via bitcoin-dev
Hi Suhas,

Thanks for your attention and feedback!

> Transaction A is both low-fee and non-standard to some nodes on the
network...
> ...Whenever a transaction T that spends A is relayed, new nodes will send
INV(PKGINFO1, T) to all package-relay peers...
> ...because of transaction malleability, and to avoid being blinded to a
transaction unnecessarily, these nodes will likely still send
getdata(PKGINFO1, T) to every node that announces T...

Yes, we'd request pkginfo unless we already had the transaction in our
mempool. The pkginfo step is intended to prevent nodes from ever
downloading a transaction more than once; I was going for a benchmark of
"packages are announced once per p2p connection, transaction data
downloaded once per node".
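That receiver-side rule can be sketched as follows; this is a simplified illustration of the stated benchmark, not Bitcoin Core code (class and message names are mine):

```python
# Sketch: request package info once per (peer, package), and never re-download
# transactions already in our mempool.
class PackageRelayPeerState:
    def __init__(self, mempool):
        self.mempool = mempool            # set of wtxids we already have
        self.requested = set()            # (peer, child_wtxid) already asked

    def on_inv_pkginfo(self, peer, child_wtxid):
        if child_wtxid in self.mempool:
            return None                   # already have the child: nothing to do
        key = (peer, child_wtxid)
        if key in self.requested:
            return None                   # pkginfo announced once per link
        self.requested.add(key)
        return ("getdata", "PKGINFO1", child_wtxid)

    def on_pkginfo(self, peer, child_wtxid, parent_wtxids):
        # fetch only the transactions we don't already have: tx data is
        # downloaded at most once per node
        missing = [w for w in parent_wtxids + [child_wtxid]
                   if w not in self.mempool]
        return [("getdata", "WTX", w) for w in missing]
```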

In this scenario, both A and T's wtxids would be sent once per p2p
connection and transaction data downloaded once per node. If T has other
unconfirmed parents, the low-fee ones will only be announced once (in
pkginfo) per link. If it has high-fee parents, they will indeed be
announced more than once per link (once individually, then again in
pkginfo).

More precisely: if a package contains any transactions which are
non-standard to one peer and standard to another, the package transactions
(parents, not child) that pass the fee filter on their own will be
announced twice instead of once.

> I think a good design goal would be to not waste bandwidth in
non-adversarial situations. In this case, there would be bandwidth waste
from downloading duplicate data from all your peers, just because the
announcement doesn't commit to the set of parent wtxids that we'd get from
the peer (and so we are unable to determine that all our peers would be
telling us the same thing, just based on the announcement).

Each transaction is only downloaded once per node here, and each package
announced/pkginfo sent once per link. I definitely understand that this
doesn't pass a benchmark of "every transaction is announced at most once
per link," but it's still on the magnitude of 32-byte hashes. Adding a
commitment to parents in the announcements is an extra hash per link in all
cases - my question is whether it's worth it? We'd also need to write new
inv/getdata message types for package relay, though that's probably a
weaker argument.

> it won't always be the case that a v1 package relay node will be able to
validate that a set of package transactions is fully sorted topologically,
because there may be (non-parent) ancestors that are missing from the
package and the best a peer can validate is topology within the package --
this means that a peer can validly (under this BIP) relay transaction
packages out of the true topological sort (if all ancestors were included).

Good point. Since v1 packages don't necessarily include the full ancestor
set, we wouldn't be able to verify that two parents are in the right order
if they have an indirect dependency, e.g. parent 1 spends a tx
("grandparent") which spends parent 2. Note that the grandparent couldn't
possibly be in the mempool unless parent 2 is. We'd eventually get
everything submitted as long as we received the grandparent, and then know
whether the package was topologically sorted. But I think you're right that
this could be a "nice to have" instead of a protocol requirement.
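The within-package check a peer *can* do is easy to sketch; as a hedged illustration (not the proposed wire-format validation), it verifies only direct in-package dependencies, which is exactly why the grandparent case above slips through:

```python
def is_topologically_sorted(package):
    """package: list of (wtxid, [parent_wtxids]) in announced order.
    Every parent that appears in the package must precede any child spending
    it. Ancestors outside the package cannot be checked, so indirect
    dependencies between two in-package parents may go undetected."""
    in_pkg = {wtxid for wtxid, _ in package}
    seen = set()
    for wtxid, parents in package:
        for p in parents:
            if p in in_pkg and p not in seen:
                return False              # in-package parent announced late
        seen.add(wtxid)
    return True
```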

> Could you explain again what the benefit of including the blockhash is?
It seems like it is just so that a node could prioritize transaction relay
from peers with the same chain tip to maximize the likelihood of
transaction acceptance, but in the common case this seems like a pretty
negligible concern...

The blockhash is necessary in order to disambiguate between a malformed
package and difference in chain tip. If a parent is missing from a package,
it's possible it was mined in a recent block that we haven't seen yet.
Validating using a UTXO set, all we see is "missing inputs" when we try to
validate the child; we wouldn't know if our peer had sent us a malformed
package or if we were behind.

I'm hoping some of these clarifications are helpful to post publicly, but I
know I haven't fully addressed all the concerns you've brought up. Will
continue to think about this.

Best,
Gloria

On Wed, Jun 8, 2022 at 4:59 PM Suhas Daftuar via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi,
>
> Thanks again for your work on this!
>
> One question I have is about potential bandwidth waste in the case of
> nodes running with different policy rules.  Here's my understanding of a
> scenario I think could happen:
>
> 1) Transaction A is both low-fee and non-standard to some nodes on the
> network.
> 2) Whenever a transaction T that spends A is relayed, new nodes will send
> INV(PKGINFO1, T) to all package-relay peers.
> 3) Nodes on the network that have implemented package relay, but do not
> accept A, will send getdata(PKGINFO1, T) and learn all of T's unconfirmed
> parents (~32 bytes * number of parents(T)).
> 4) Such nodes will reject T.  But because of