Re: [bitcoin-dev] Why OpenTimestamps does not "linearize" its transactions

2022-06-19 Thread Peter Todd via bitcoin-dev
On Tue, Jun 14, 2022 at 09:16:55PM -0400, Undiscussed Horrific Abuse, One 
Victim of Many via bitcoin-dev wrote:
> I worry that this form of "rfc timestamping" misleads its users into
> believing the timestamps of their documents are preserved. These kinds
> of user interaction issues can be very dangerous.
> 
> I would recommend uploading .ots files to chains with cheap storage,
> such as arweave or bitcoin sv.

According to Coingeek, Bitcoin SV's transaction fees are currently
1 sat/byte. With BSV's price at $60, that works out to roughly $644/GB.

Meanwhile, Amazon Glacier Deep Archive costs $0.012/GB/year.

Assuming a 25-year data lifetime, Bitcoin SV is still about 2000x more expensive
than Amazon. And with the number of BSV nodes quickly dwindling, I'd be inclined
to trust Amazon more for long-term storage.
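
For anyone who wants to check the arithmetic, a rough sketch in Python (the fee
rate, BSV price, and Glacier pricing below are assumptions roughly matching the
figures above, and all of them drift over time):

    SATS_PER_BSV = 100_000_000
    fee_sats_per_byte = 1.0      # assumed BSV fee rate
    bsv_price_usd = 60.0         # assumed BSV price
    bytes_per_gb = 2 ** 30

    bsv_cost_per_gb = fee_sats_per_byte * bytes_per_gb / SATS_PER_BSV * bsv_price_usd

    glacier_per_gb_year = 0.012  # Amazon Glacier Deep Archive, approximate
    years = 25
    glacier_cost_per_gb = glacier_per_gb_year * years

    print(f"BSV:     ~${bsv_cost_per_gb:,.0f}/GB")       # ~$644/GB with these inputs
    print(f"Glacier: ~${glacier_cost_per_gb:.2f}/GB over {years} years")
    print(f"Ratio:   ~{bsv_cost_per_gb / glacier_cost_per_gb:,.0f}x")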

-- 
https://petertodd.org 'peter'[:-1]@petertodd.org




Re: [bitcoin-dev] Why OpenTimestamps does not "linearize" its transactions

2022-06-15 Thread Undiscussed Horrific Abuse, One Victim of Many via bitcoin-dev
> I do reiterate that it is blindingly easy to pin a public hash to the
> bitcoin blockchain that asserts the earliest publication of a document
> or collection of documents, and that this is desperately needed, to
> protect the accuracy of history when it is not safe.

The concern raised here relates to scaling, and here we disagree on
the proper direction of Bitcoin. To me it seems clear that Bitcoin was
designed to scale better than it has. It honestly looks like
developers are arbitrarily avoiding storing much data on chain, with
quickly shoehorned solutions like the lightning protocol. Bitcoin
simply got big too fast. I believe it was intended to handle large
data smoothly: not with single gigabyte blocks that every user must
store, but with simplistically designed and well-backed decentralised
propagation and storage of data. I see that not having happened, due
mostly to political issues, and that's unfortunate, but other chains have
made strides here.

I don't think satoshi was familiar with how people behave when they
have a lot of money.


Re: [bitcoin-dev] Why OpenTimestamps does not "linearize" its transactions

2022-06-15 Thread Undiscussed Horrific Abuse, One Victim of Many via bitcoin-dev
On 6/14/22, Andrew Poelstra  wrote:
> On Tue, Jun 14, 2022 at 01:15:08PM -0400, Undiscussed Horrific Abuse, One
> Victim of Many via bitcoin-dev wrote:
>> I'm replying to Peter, skipping the other emails.
>>
>> I perceive all these emails as disruptive trolling, ignoring the
>> importance of real timestamping, while handwaving about things that
>> are roughly false and harmful.
>>
>> Since the start of cryptocurrency, Bitcoin has been used to write
>> timestamps that stay intact despite malicious action to arbitrary
>> systems and records, showing the earliest on-chain publication of
>> data. It seems misleading that OTS does not do that, when it is such a
>> prominent system.
>>
>
> Please be cautious with tone and when assuming bad faith. I don't believe
> that Peter is trolling. Also, as politely as I can, when something like
> OTS whose model is dead-simple, well-documented, and has been running for
> years providing significant value to many people, comes under attack for
> being underspecified or failing to do what it says ... this is a
> surprising claim, to say the least.

Thank you for your reply, Andrew. I don't think Peter is trolling, but
I do suspect some body, such as a spy agency, of strengthening the
timestamping solutions that have nonces in their merkle trees, thereby
avoiding usability for storing public records in a way that could be
verified by anonymous and censored third parties.

> After talking to a few people offline, all of whom are baffled at this
> entire conversation, I think the issue might come down to the way that
> people interpret "timestamping".
>
> If you believe that "timestamping" means providing a verifiable ordering
> to events, then of course OTS does not accomplish this, nor has it ever
> claimed to. If you think that "timestamping" means proving that some
> data existed at a particular time, then this is exactly what OTS does.
>
> Personally -- and I suspect this is true of Peter as well -- I have always
> read the word as having the latter meaning, and it never occurred to me
> until now that others might have a different interpretation.

I looked into the history of timestamping a little, and I see that what you
are saying is the academic norm.

I don't see OTS as proving the data existed at a particular time,
because the proof is held in a document the user must protect. I
understand somewhat now that it is designed for users who can actually
protect that data sufficiently.

I do reiterate that it is blindingly easy to pin a public hash to the
bitcoin blockchain that asserts the earliest publication of a document
or collection of documents, and that this is desperately needed, to
protect the accuracy of history when it is not safe.

I worry that this form of "rfc timestamping" misleads its users into
believing the timestamps of their documents are preserved. These kinds
of user interaction issues can be very dangerous.

I would recommend uploading .ots files to chains with cheap storage,
such as arweave or bitcoin sv. This way people can prove which one was
first, when there is more than one. For that to work, we need a norm
of how and where to do it, so that users look in the same place, and
it is the people who make the public services and standards who set
that norm.

Thank you for your reply, and I apologise for my poor support.

It is obvious that Peter has put incredibly hard and long work into
providing OTS to the community in a clean and robust fashion, and that
is always very wonderful, and I have very thoroughly failed to
acknowledge that.


Re: [bitcoin-dev] Why OpenTimestamps does not "linearize" its transactions

2022-06-14 Thread Andrew Poelstra via bitcoin-dev
On Tue, Jun 14, 2022 at 01:15:08PM -0400, Undiscussed Horrific Abuse, One 
Victim of Many via bitcoin-dev wrote:
> I'm replying to Peter, skipping the other emails.
> 
> I perceive all these emails as disruptive trolling, ignoring the
> importance of real timestamping, while handwaving about things that
> are roughly false and harmful.
> 
> Since the start of cryptocurrency, Bitcoin has been used to write
> timestamps that stay intact despite malicious action to arbitrary
> systems and records, showing the earliest on-chain publication of
> data. It seems misleading that OTS does not do that, when it is such a
> prominent system.
>

Please be cautious with tone and when assuming bad faith. I don't believe
that Peter is trolling. Also, as politely as I can, when something like
OTS whose model is dead-simple, well-documented, and has been running for
years providing significant value to many people, comes under attack for
being underspecified or failing to do what it says ... this is a
surprising claim, to say the least.


After talking to a few people offline, all of whom are baffled at this
entire conversation, I think the issue might come down to the way that
people interpret "timestamping".

If you believe that "timestamping" means providing a verifiable ordering
to events, then of course OTS does not accomplish this, nor has it ever
claimed to. If you think that "timestamping" means proving that some
data existed at a particular time, then this is exactly what OTS does.

Personally -- and I suspect this is true of Peter as well -- I have always
read the word as having the latter meaning, and it never occurred to me
until now that others might have a different interpretation.


I apologize for contributing to a thread that is getting a bit out of hand,
but I hope this can help the different parties see where the confusion is.




-- 
Andrew Poelstra
Director of Research, Blockstream
Email: apoelstra at wpsoftware.net
Web:   https://www.wpsoftware.net/andrew

The sun is always shining in space
-Justin Lewis-Webster





Re: [bitcoin-dev] Why OpenTimestamps does not "linearize" its transactions

2022-06-14 Thread Undiscussed Horrific Abuse, One Victim of Many via bitcoin-dev
I'm replying to Peter, skipping the other emails.

I perceive all these emails as disruptive trolling, ignoring the
importance of real timestamping, while handwaving about things that
are roughly false and harmful.

Since the start of cryptocurrency, Bitcoin has been used to write
timestamps that stay intact despite malicious action to arbitrary
systems and records, showing the earliest on-chain publication of
data. It seems misleading that OTS does not do that, when it is such a
prominent system.

>> This does not provide the service you describe. It would be trivial to
>> include enough cryptographic information in the original OP_RETURN, so
>> as to obviate the need for publicizing the .ots file.
>
> That approach does not scale. Via merkle trees, the OpenTimestamps system
> routinely timestamps tens of thousands of messages with a single
> transaction:
>
> https://petertodd.org/2016/opentimestamps-announcement#scalability-through-aggregation

This makes total sense to reduce the expense and size of etching these
very short hashes.

But the OTS approach hashes in a _private nonce_ for every document,
preventing anybody from validating the earliness of an item in a
merkle tree without access to every proof.
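
To make the mechanism concrete, here is a rough sketch in Python (purely
illustrative, not the actual .ots format): each document digest is salted with
a private per-document nonce, the salted leaves are aggregated into a merkle
tree, only the root needs to go on chain, and verifying any single document
then requires its nonce and merkle path, i.e. its individual proof:

    import hashlib, os

    def sha256(b):
        return hashlib.sha256(b).digest()

    def build_tree(leaves):
        # All levels of the tree, leaves first, root last.
        levels = [leaves]
        while len(levels[-1]) > 1:
            cur = levels[-1]
            if len(cur) % 2:                     # duplicate the last node on odd levels
                cur = cur + [cur[-1]]
            levels.append([sha256(cur[i] + cur[i + 1]) for i in range(0, len(cur), 2)])
        return levels

    def merkle_path(levels, index):
        # Sibling hashes needed to recompute the root from one leaf.
        path = []
        for level in levels[:-1]:
            if len(level) % 2:
                level = level + [level[-1]]
            path.append((index % 2, level[index ^ 1]))
            index //= 2
        return path

    def verify(doc_digest, nonce, path, root):
        node = sha256(nonce + doc_digest)
        for node_is_right, sibling in path:
            node = sha256(sibling + node) if node_is_right else sha256(node + sibling)
        return node == root

    # Timestamp three documents with a single on-chain commitment (the root).
    digests = [sha256(b"document %d" % i) for i in range(3)]
    nonces = [os.urandom(16) for _ in digests]
    leaves = [sha256(n + d) for n, d in zip(nonces, digests)]
    levels = build_tree(leaves)
    root = levels[-1][0]                         # only this value goes on chain

    # A third party cannot check document 1 from the chain alone: they need
    # its private nonce and merkle path, i.e. its individual proof.
    assert verify(digests[1], nonces[1], merkle_path(levels, 1), root)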

Do you think OTS would be interested in publicizing nonces and
document hashes, if the user consents?

Non-developers need a tool where they can choose to pay funds to write
a strong timestamp that guarantees earliness of publication of a
document, and for free discern the earliness of timestamped data they
provide to the tool.

> Client-side validated .ots files are a necessary requirement to achieve
> this
> scalability.

Nothing in an engineering task is a strict requirement, aside from the
specification. The data could be publicised elsewhere, or funds
provided to store it on-chain.

> FWIW the most I've personally done is timestamped 750 million items from
> the
> Internet Archive with a single transaction.

That's impressive. It's always great when we write something that can
condense something huge into something tiny, keep working, and be used
reliably.

I would have put the files in a shared datalad repository, and put the
tip commit of the repository in an OP_RETURN along with a tag such as
'DL' or 'IA'.

Then a tool could look for all 'DL' or 'IA' transactions, and verify
that mine was the earliest. You would of course need access to the
etched repositories' git commits.
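
A rough sketch of what that payload could look like (Python; the two-byte 'IA'
tag and the convention of scanning for it are just the proposal here, not any
existing standard):

    # Hypothetical convention: OP_RETURN carrying a 2-byte tag plus a 20-byte
    # git commit id (SHA-1), so an indexer can scan for the tag and treat the
    # earliest confirmed occurrence of a commit id as its publication time.
    tag = b"IA"
    commit_hex = "0123456789abcdef0123456789abcdef01234567"  # placeholder commit id
    payload = tag + bytes.fromhex(commit_hex)

    assert len(payload) <= 75                        # fits in a single-byte push
    script = bytes([0x6a, len(payload)]) + payload   # OP_RETURN <22-byte push>
    print(script.hex())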

If the hash can't be verified by an anonymous observer, the archive is
only timestamped for people with the proof. How is the challenge of
protecting many proofs different from the challenge of protecting the
data they prove?

>> If I send my .ots file to another party, a 4th party can replace it
>> with their own, because there is no cryptographic pinning ensuring its
>> contents. This changes the timestamp to one later, no longer proving
>> the earliness of the data.
>
> They can also simply delete their copy of the data, making it impossible to
> prove anything about it.

If they can destroy your .ots proof, the information on the blockchain
no longer demonstrates anything.


Re: [bitcoin-dev] Why OpenTimestamps does not "linearize" its transactions

2022-06-14 Thread Peter Todd via bitcoin-dev
On Tue, Jun 14, 2022 at 08:45:43AM -0400, Undiscussed Horrific Abuse, One 
Victim of Many via bitcoin-dev wrote:
> > The basic service that a timestamp service provides is “this content (or at
> > least a digest of this content) existed at least as early as this
> > timestamp.” It says nothing about how long before the timestamp the content
> 
> OTS needlessly adds the requirement that the user publicize their .ots
> files to everybody who will make use of the timestamp.
>
> This does not provide the service you describe. It would be trivial to
> include enough cryptographic information in the original OP_RETURN, so
> as to obviate the need for publicizing the .ots file.

That approach does not scale. Via merkle trees, the OpenTimestamps system
routinely timestamps tens of thousands of messages with a single transaction:

https://petertodd.org/2016/opentimestamps-announcement#scalability-through-aggregation

Client-side validated .ots files are a necessary requirement to achieve this
scalability.

FWIW the most I've personally done is timestamped 750 million items from the
Internet Archive with a single transaction.
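
For a sense of the client-side cost at that scale, a rough back-of-the-envelope
(Python; assumes a plain binary tree and 32-byte hashes):

    import math

    items = 750_000_000
    depth = math.ceil(math.log2(items))   # about 30 levels
    path_bytes = depth * 32               # one 32-byte sibling hash per level
    print(depth, path_bytes)              # ~30 levels, under 1 KB of merkle path per item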

> If I send my .ots file to another party, a 4th party can replace it
> with their own, because there is no cryptographic pinning ensuring its
> contents. This changes the timestamp to one later, no longer proving
> the earliness of the data.

They can also simply delete their copy of the data, making it impossible to
prove anything about it.

-- 
https://petertodd.org 'peter'[:-1]@petertodd.org




Re: [bitcoin-dev] Why OpenTimestamps does not "linearize" its transactions

2022-06-14 Thread Peter Todd via bitcoin-dev
On Tue, Jun 14, 2022 at 07:53:29AM -0400, Undiscussed Horrific Abuse, One 
Victim of Many via bitcoin-dev wrote:
> I was privately asked for more opinions. I am sharing them publicly below:
> 
> It's always been clear that OTS proves longness of duration but not
> shortness. It doesn't demonstrate that an earlier work was not
> published, because it hashes each document hash with private material
> the author must separately publicize. Any unpublished private material
> could be an earlier equivalent to a public proof.
> 
> the reason i call this 'designed to be broken' is that it lets people
> rewrite history to their stories by republishing other people's
> documents under different contexts.

See "What Can and Can't Timestamps Prove?":

https://petertodd.org/2016/opentimestamps-announcement#what-can-and-cant-timestamps-prove

OpenTimestamps makes a trade-off: timestamps have significant limitations as to
what they're able to prove. But in exchange, they have exceptionally good
scalability, making them essentially free to use. Timestamps are also much
easier to add on to existing processes and systems such as Git repositories.
Schemes that prove uniqueness require much more engineering and redesign work
to actually accomplish anything.

-- 
https://petertodd.org 'peter'[:-1]@petertodd.org




Re: [bitcoin-dev] Why OpenTimestamps does not "linearize" its transactions

2022-06-14 Thread digital vagabond via bitcoin-dev
If someone wants more linearity and uniqueness guarantees from a timestamp,
that isn't what OTS was designed for. Here is a protocol that was:
https://www.commerceblock.com/mainstay/

On Tue, Jun 14, 2022, 3:56 PM Bryan Bishop via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Tue, Jun 14, 2022 at 8:48 AM Undiscussed Horrific Abuse, One Victim of
> Many via bitcoin-dev  wrote:
>
>> OTS needlessly adds the requirement that the user publicize their .ots
>> files to everybody who will make use of the timestamp.
>
>
> Publication is not a component of the OTS system.
>
> This does not provide the service you describe. It would be trivial to
>> include enough cryptographic information in the original OP_RETURN, so
>> as to obviate the need for publicizing the .ots file.
>>
>
> (Why would it be needless to require everyone to publish OTS files but not
> needless to require everyone to publish via OP_RETURN? In fact, now you
> have blockchain users that don't ever use your OP_RETURN data.)
>
>
>> If I send my .ots file to another party, a 4th party can replace it
>> with their own, because there is no cryptographic pinning ensuring its
>> contents. This changes the timestamp to one later, no longer proving
>> the earliness of the data.
>>
>
> You can't replace a timestamp in the OTS system; you can only make a new
> timestamp. To use the earlier timestamp, you would have to present the earlier
> timestamp. At any time it is allowed to make a new timestamp based on the
> current clock. The use case for OTS is proving document existence as of a
> certain time and that if you had doctored a file then said doctoring was no
> later than the earliest timestamp that can be provided.
>
> I was just talking about this the other day actually...
> https://news.ycombinator.com/item?id=31640752
>
> - Bryan
> https://twitter.com/kanzure


Re: [bitcoin-dev] Why OpenTimestamps does not "linearize" its transactions

2022-06-14 Thread Bryan Bishop via bitcoin-dev
On Tue, Jun 14, 2022 at 8:48 AM Undiscussed Horrific Abuse, One Victim of
Many via bitcoin-dev  wrote:

> OTS needlessly adds the requirement that the user publicize their .ots
> files to everybody who will make use of the timestamp.


Publication is not a component of the OTS system.

This does not provide the service you describe. It would be trivial to
> include enough cryptographic information in the original OP_RETURN, so
> as to obviate the need for publicizing the .ots file.
>

(Why would it be needless to require everyone to publish OTS files but not
needless to require everyone to publish via OP_RETURN? In fact, now you
have blockchain users that don't ever use your OP_RETURN data.)


> If I send my .ots file to another party, a 4th party can replace it
> with their own, because there is no cryptographic pinning ensuring its
> contents. This changes the timestamp to one later, no longer proving
> the earliness of the data.
>

You can't replace a timestamp in the OTS system; you can only make a new
timestamp. To use the earlier timestamp, you would have to present the earlier
timestamp. At any time it is allowed to make a new timestamp based on the
current clock. The use case for OTS is proving document existence as of a
certain time and that if you had doctored a file then said doctoring was no
later than the earliest timestamp that can be provided.

I was just talking about this the other day actually...
https://news.ycombinator.com/item?id=31640752

- Bryan
https://twitter.com/kanzure


Re: [bitcoin-dev] Why OpenTimestamps does not "linearize" its transactions

2022-06-14 Thread Undiscussed Horrific Abuse, One Victim of Many via bitcoin-dev
hi r1m, i'll talk with you as long as it's fun to do so.

>> the reason i call this 'designed to be broken' is that it lets people
>> rewrite history to their stories by republishing other people's
>> documents under different contexts.
>
> The basic service that a timestamp service provides is “this content (or at
> least a digest of this content) existed at least as early as this
> timestamp.” It says nothing about how long before the timestamp the content

OTS needlessly adds the requirement that the user publicize their .ots
files to everybody who will make use of the timestamp.

This does not provide the service you describe. It would be trivial to
include enough cryptographic information in the original OP_RETURN, so
as to obviate the need for publicizing the .ots file.

If I send my .ots file to another party, a 4th party can replace it
with their own, because there is no cryptographic pinning ensuring its
contents. This changes the timestamp to one later, no longer proving
the earliness of the data.

>> I would not be surprised if OTS also fails to add tx history
>> containing its hashes to associated wallets, letting them be lost in
>> chain forks.

> for me. Are there wallets that you’ve seen that incorporate OTS? I’d love to

I mean the cryptographic wallets that hold the funds spent in etching
the hash to the chain.


Re: [bitcoin-dev] Why OpenTimestamps does not "linearize" its transactions

2022-06-14 Thread rot13maxi via bitcoin-dev
Good morning Undiscussed Horrific Abuse, One Victim of Many,

> the reason i call this 'designed to be broken' is that it lets people
> rewrite history to their stories by republishing other people's
> documents under different contexts.

The basic service that a timestamp service provides is “this content (or at 
least a digest of this content) existed at least as early as this timestamp.” 
It says nothing about how long before the timestamp the content existed, and 
says nothing about how long after the timestamp the content continues to exist. 
It also says nothing about uniqueness or validity of the content. For example, 
a document that existed for a year before its timestamp and was deleted 
immediately afterwards, and a document that was created the instant before its 
timestamp and was retained "forever" afterwards would have timestamps that are
equally valid (provided you retained the digest of the document to validate the 
timestamp in the former case). Assurances around uniqueness (for example, 
preventing double spends) are a proof-of-publication or set-consistency 
problem, and assurances around validity are a validation problem. These other 
semantics can be built into systems that also rely on timestamps, but you can 
have a useful time stamping system without them. This is what OTS provides. 
When you say it’s “designed to be broken” do you mean that it claims to provide 
assurances that it doesn’t, or that the set of assurances that it provides are 
not a useful set.

> I would not be surprised if OTS also fails to add tx history
> containing its hashes to associated wallets, letting them be lost in
> chain forks.

I’ve always used OTS through the cli, which just spits out and works with .ots 
files, which are serialized commitment operations. Storage of the .ots files for
later checking has always been a “problem left to the application” for me. Are 
there wallets that you’ve seen that incorporate OTS? I’d love to see them!

Best,
rot13maxi

On Tue, Jun 14, 2022 at 7:53 AM, Undiscussed Horrific Abuse, One Victim of Many 
via bitcoin-dev  wrote:

> I was privately asked for more opinions. I am sharing them publicly below:
>
> It's always been clear that OTS proves longness of duration but not
> shortness. It doesn't demonstrate that an earlier work was not
> published, because it hashes each document hash with private material
> the author must separately publicize. Any unpublished private material
> could be an earlier equivalent to a public proof.
>
> the reason i call this 'designed to be broken' is that it lets people
> rewrite history to their stories by republishing other people's
> documents under different contexts.
>
> I would not be surprised if OTS also fails to add tx history
> containing its hashes to associated wallets, letting them be lost in
> chain forks.


Re: [bitcoin-dev] Why OpenTimestamps does not "linearize" its transactions

2022-06-14 Thread Undiscussed Horrific Abuse, One Victim of Many via bitcoin-dev
hey various,

it's been obvious since its inception that opentimestamps is designed
to be broken.

if you have energy to normalise a better system, or support one of the
other better systems that already exists, that's wonderful.

i suspect the opentimestamps ecosystem is very experienced at defending itself.


Re: [bitcoin-dev] Why OpenTimestamps does not "linearize" its transactions

2022-06-14 Thread Undiscussed Horrific Abuse, One Victim of Many via bitcoin-dev
I was privately asked for more opinions. I am sharing them publicly below:

It's always been clear that OTS proves longness of duration but not
shortness. It doesn't demonstrate that an earlier work was not
published, because it hashes each document hash with private material
the author must separately publicize. Any unpublished private material
could be an earlier equivalent to a public proof.

the reason i call this 'designed to be broken' is that it lets people
rewrite history to their stories by republishing other people's
documents under different contexts.

I would not be surprised if OTS also fails to add tx history
containing its hashes to associated wallets, letting them be lost in
chain forks.


[bitcoin-dev] Why OpenTimestamps does not "linearize" its transactions

2022-06-14 Thread Peter Todd via bitcoin-dev
On Mon, May 02, 2022 at 08:59:49AM -0700, Jeremy Rubin wrote:
> Ok, got it. Won't waste anyone's time on terminology pedantism.
> 
> 
> The model that I proposed above is simply what *any* correct timestamping
> service must do. If OTS does not follow that model, then I suspect whatever
> OTS is, is provably incorrect or, in this context, unreliable, even when
> servers and clients are honest.

Do you think RFC 3161 is "provably incorrect" too? It's just a standard for
Trusted Time-Stamping Authorities to issue timestamp proofs via digital
signatures, in the most straightforward manner of signing a message claiming
that some digest existed as of some time.

As the RFC says in the introduction:

    The TSA's role is to time-stamp a datum to establish evidence indicating
    that a datum existed before a particular time.  This can then be used, for
    example, to verify that a digital signature was applied to a message before
    the corresponding certificate was revoked thus allowing a revoked public key
    certificate to be used for verifying signatures created prior to the time of
    revocation.

Simple and straightforward.

The problem here starts with the fact that you're asking timestamp services
to do things that they're not claiming they do; a timestamp proof simply proves
that some message m existed prior to some time t. Nothing more.

Worse though, linearization is a busted approach.

> Unreliable might mean different things to
> different people, I'm happy to detail the types of unreliability issue that
> arise if you do not conform to the model I presented above (of which,
> linearizability is one way to address it, there are others that still
> implement epoch based recommitting that could be conceptually sound without
> requiring linearizability).
> 
> Do you have any formal proof of what guarantees OTS provides against which
> threat model? This is likely difficult to produce without a formal model of
> what OTS is, but perhaps you can give your best shot at producing one and
> we can carry the conversation on productively from there.

So as you know, an OpenTimestamps proof consists of a series of commitment
operations that act on an initial message m, leading to a message known to have
been created at some point in time. Almost always a Bitcoin block header. But
other schemes like trusted timestamps are possible too.

A commitment operation (namely hashes + concatenation) simply needs the
property that for a given input message m, the output H(m) can't be predicted
without knowledge of m. In the case of concatenation, this property is achieved
trivially by the fact that the output includes m verbatim. Similarly, SHA1 is
still a valid commitment operation.
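
To illustrate that model, a rough sketch in Python (a simplified stand-in, not
the actual .ots operation set or serialization): a proof is just a list of
append/prepend/hash steps the client replays, and it checks out if the replay
ends at a digest known to be committed on chain:

    import hashlib

    def sha256(b):
        return hashlib.sha256(b).digest()

    def replay(msg, ops):
        # Replay a chain of commitment operations on an initial message.
        for op in ops:
            if op[0] == "append":
                msg = msg + op[1]
            elif op[0] == "prepend":
                msg = op[1] + msg
            elif op[0] == "sha256":
                msg = sha256(msg)
            else:
                raise ValueError("unknown op")
        return msg

    # Toy proof: the server salted our digest, hashed it together with a sibling,
    # and the result is (we pretend) a digest committed in some Bitcoin block.
    doc_digest = sha256(b"hello world")
    ops = [
        ("append", b"\x01" * 16),       # per-document nonce
        ("sha256",),
        ("prepend", b"\x02" * 32),      # sibling hash in an aggregation tree
        ("sha256",),
    ]
    attested = replay(doc_digest, ops)  # compare this against the on-chain commitment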

Behind the scenes the OTS infrastructure builds merkle trees of commitment
operations for scalability reasons. But none of those details are relevant to
the validity of OTS proofs - the OTS infrastructure could magically mine a
block per transaction with the digest in the coinbase, and from the client's
point of view, everything would work the same.


The important thing to recognize is that a timestamp proof is simply a one-sided
bound on when a given message existed, proving a message existed _prior_ to
some point in time. For example:

$ ots verify hello-world.txt.ots
Assuming target filename is 'hello-world.txt'
Success! Bitcoin block 358391 attests existence as of 2015-05-28 EDT

Obviously, the message "Hello World!" existed prior to 2015 (indeed, it's such
a short message that it's brute-forceable, but for the sake of example we'll
ignore that).

Thus your claim re: linearization that:

> Having a chain of transactions would serve to linearize history of
> OTS commitments which would let you prove, given reorgs, that knowledge of
> commit A was before B a bit more robustly.

...misunderstands the problem. We care about proving statements about messages.
Not timestamp proofs. Building infrastructure to order timestamp proofs
themselves is pointless.


What you're alluding to is dual-sided bounds on when messages were created.
That's solved by random beacons: messages known to have been created *after* a
point in time, and unpredictable prior. A famous example of course being the
genesis block quote:

The Times 03/Jan/2009 Chancellor on brink of second bailout for banks

Bitcoin block hashes make for a perfectly good random beacon for use-cases with
day to hour level precision. For higher precision, absolute time, there are
many trusted alternatives like the NIST random beacon, Roughtime, etc.
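
Concretely, the usual trick for a dual-sided bound is to mix a recent beacon
value into the message before timestamping it. A rough sketch in Python (the
block hash here is a placeholder):

    import hashlib

    def sha256(b):
        return hashlib.sha256(b).digest()

    # Placeholder: hash of a recent Bitcoin block, unpredictable before the block existed.
    recent_block_hash = bytes.fromhex("00" * 32)
    message = b"my document"

    # The result could not have been computed before the block (lower bound), and
    # timestamping it then proves it existed before some later time (upper bound).
    bounded_commitment = sha256(recent_block_hash + sha256(message))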


OpenTimestamps could offer a trustless _relative_ random beacon service by
making the per-second commitments a merkle mountain range, and publishing the
tip digests. In fact, that's how I came up with merkle mountain ranges in the
first place, and there's code from 2012 to do exactly that in the depths of the
git repo. But that's such a niche use-case I decided against that approach for
now; I'll probably resurrect it in the future.
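
For reference, the idea is roughly this (Python; a minimal peaks-only merkle
mountain range, not the code from the OTS repo):

    import hashlib

    def sha256(b):
        return hashlib.sha256(b).digest()

    class MerkleMountainRange:
        """Append-only accumulator: per-second commitments go in, a tip digest comes out."""
        def __init__(self):
            self.peaks = []                       # (height, digest) pairs, like binary carries

        def append(self, leaf):
            height, node = 0, sha256(leaf)
            while self.peaks and self.peaks[-1][0] == height:
                _, left = self.peaks.pop()
                node = sha256(left + node)        # merge two perfect subtrees of equal height
                height += 1
            self.peaks.append((height, node))

        def tip(self):
            # Bag the peaks into a single digest that can be published as the beacon value.
            acc = self.peaks[-1][1]
            for _, peak in reversed(self.peaks[:-1]):
                acc = sha256(peak + acc)
            return acc

    mmr = MerkleMountainRange()
    for second in range(5):
        mmr.append(b"aggregate commitment for second %d" % second)
        print(second, mmr.tip().hex())            # publish one tip per second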