On Tue, Apr 11, 2017, at 11:41, Eric Voskuil wrote:
> It's not the headers/tx-hashes of the blocks that I'm referring to, it
> is the confirmation and spend information relative to all txs and all
> outputs for each branch. This reverse navigation (i.e. utxo
> information) is essential, must be p
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
On 04/11/2017 01:43 AM, Tomas wrote:
> Splitting transactions only happens *on storage* and is just a
> minor optimization compared to storing them in full.
Ok
> Sure, we can still call switching tips a "reorg". And it is indeed
> a trade off as or
On Tue, Apr 11, 2017, at 03:44, Eric Voskuil wrote:
> As I understand it you would split tx inputs and outputs and send them
> independently, and that you intend this to be a P2P network
> optimization - not a consensus rule change. So my comments are based
> on those inferences. If we are talki
On 04/08/2017 04:58 PM, Tomas wrote:
> You seem to ignore here the difference between base load and peak
> load. If Compact blocks/XThin with further optimizations can
> presync nearly 100% of the transactions, and nodes can do as much
> as possib
Thank you for your elaborate response Eric,
On Sun, Apr 9, 2017, at 00:37, Eric Voskuil wrote:
> My point was that "Using a storage engine without UTXO-index" has been
> done, and may be a useful reference, not that implementation details
> are the same.
I haven't dived into libbitcoin V2/V3 enou
On 04/06/2017 05:17 PM, Tomas wrote:
> Thanks, but I get the impression that the similarity is rather
> superficial.
My point was that "Using a storage engine without UTXO-index" has been
done, and may be a useful reference, not that implementation
On Sun, Apr 9, 2017, at 00:12, Gregory Maxwell wrote:
> In Bitcoin Core the software _explicitly_ and intentionally does not
> exploit mempool pre-validation because doing that very easily leads to
> hard to detect consensus faults and makes all mempool code consensus
> critical when it otherwise
On Sat, Apr 8, 2017 at 8:21 PM, Johnson Lau wrote:
> pre-synced means already in mempool and verified? Then it sounds like we just
> need some mempool optimisation? The tx order in a block is not important,
> unless they are dependent
In Bitcoin Core the software _explicitly_ and intentionally
I would advise anyone worried about 'hard drive access' to order a
512 GB NVMe (PCI-Express interface) flash drive (or a laptop), and
I expect the performance will make you wonder why you ever bothered
with cloud.
My (very brief) analysis of the performance of a full chain download
on a new laptop
> Please no conspiracy theory like stepping on someone’s toes. I believe
> it’s always nice to challenge the established model. However, as I’m
> trying to make some hardfork design, I intend to have a stricter UTXO
> growth limit. As you said "protocol addressing the UTXO growth, might not
> be w
> On 9 Apr 2017, at 03:56, Tomas wrote:
>
>
>> I don’t fully understand your storage engine. So the following deduction
>> is just based on common sense.
>>
>> a) It is possible to make unlimited number of 1-in-100-out txs
>>
>> b) The maximum number of 100-in-1-out txs is limited by the numb
> I don’t fully understand your storage engine. So the following deduction
> is just based on common sense.
>
> a) It is possible to make unlimited number of 1-in-100-out txs
>
> b) The maximum number of 100-in-1-out txs is limited by the number of
> previous 1-in-100-out txs
>
> c) Since bitcr
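[The bound in (a) and (b) above can be checked with simple arithmetic; the sketch below is purely illustrative, and the function name and parameters are invented rather than taken from any implementation.]

```rust
// Illustrative arithmetic for the bound above: n fan-out transactions
// with 1 input and 100 outputs create 100*n outputs in total, so the
// number of later 100-in-1-out consolidation transactions is capped at
// (100*n)/100 = n, however many fan-out transactions are made first.
fn max_consolidations(fanout_txs: u64, fanout: u64, fanin: u64) -> u64 {
    // outputs made available / inputs each consolidation consumes
    (fanout_txs * fanout) / fanin
}
```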
On Sat, Apr 8, 2017, at 20:27, Tom Harding via bitcoin-dev wrote:
>
>
> On Apr 7, 2017 12:42, "Gregory Maxwell" wrote:
>> On Fri, Apr 7, 2017 at 6:52 PM, Tom Harding via bitcoin-dev
>> wrote:
>> > A network in which many nodes maintain a transaction index also
>> > enables a
>> > cl
> On 8 Apr 2017, at 15:28, Tomas via bitcoin-dev
> wrote:
>>
>
> I think you are being a bit harsh here. I am also clearly explaining
> the difference only applies to peak load, and just making a suggestion.
> I simply want to stress the importance of protocol / implementation
> separation as
On Apr 7, 2017 12:42, "Gregory Maxwell" wrote:
On Fri, Apr 7, 2017 at 6:52 PM, Tom Harding via bitcoin-dev
wrote:
> A network in which many nodes maintain a transaction index also enables a
> class of light node applications that ask peers to prove existence and
> spentness of TXO's.
Only with
On Sat, Apr 8, 2017, at 02:44, Gregory Maxwell wrote:
> As you note that the output costs still bound the resource
> requirements.
Resource cost is not just a measure of storage requirement; data that
needs to be accessed during peak load induces more cost than data only
used during base load or o
On Fri, Apr 7, 2017 at 9:14 PM, Tomas wrote:
> The long term *minimal disk storage* requirement can obviously not be less
> than all the unspent outputs.
Then I think you may want to retract the claim that "As this solution,
reversing the costs of outputs and inputs, [...] updates to the
protoco
On 04/07/2017 02:44 PM, Tomas via bitcoin-dev wrote:
> Hi Eric,
>
> On Fri, Apr 7, 2017, at 21:55, Eric Voskuil via bitcoin-dev wrote:
>> Optimization for lower memory platforms then becomes a process
>> of reducing the need for paging. This is the
Hi Eric,
On Fri, Apr 7, 2017, at 21:55, Eric Voskuil via bitcoin-dev wrote:
> Optimization for lower memory platforms then becomes a process of
> reducing the need for paging. This is the purpose of a cache. The seam
> between disk and memory can be filled quite nicely by a small amount
> of cache
Answering both,
On Fri, Apr 7, 2017 at 11:18 AM, Gregory Maxwell via bitcoin-dev
wrote:
>>
>> I'm still lost on this-- AFAICT your proposals long term resource
>> requirements are directly proportional to the amount of
>> unspent output
>> data, which grows over time at some fraction of the
On 04/07/2017 11:39 AM, Bram Cohen via bitcoin-dev wrote:
> Expanding on this question a bit, it's optimized for parallel
> access, but hard drive access isn't parallel and memory accesses
> are very fast, so shouldn't the target of optimization be a
On Fri, Apr 7, 2017 at 6:52 PM, Tom Harding via bitcoin-dev
wrote:
> A network in which many nodes maintain a transaction index also enables a
> class of light node applications that ask peers to prove existence and
> spentness of TXO's.
Only with the additional commitment structure such as those
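[A minimal sketch of what a commitment over TXO entries could look like, using a toy Merkle-style fold; std's DefaultHasher stands in for a real cryptographic hash, and nothing here reflects an actual proposal. A peer committing to such a root could serve log-sized existence/spentness proofs.]

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy pairwise hash; a real design would use a cryptographic hash.
fn h(a: u64, b: u64) -> u64 {
    let mut s = DefaultHasher::new();
    (a, b).hash(&mut s);
    s.finish()
}

// Fold a list of (toy) TXO leaf hashes into a single Merkle-style root.
// An odd leaf at the end of a level is paired with itself.
fn merkle_root(mut leaves: Vec<u64>) -> u64 {
    if leaves.is_empty() {
        return 0;
    }
    while leaves.len() > 1 {
        leaves = leaves
            .chunks(2)
            .map(|c| h(c[0], *c.last().unwrap()))
            .collect();
    }
    leaves[0]
}
```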
On Apr 6, 2017 6:31 PM, "Tomas via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org> wrote:
Bitcrust just uses a *transaction-index*, where outputs can be looked up
regardless of being spent.
A network in which many nodes maintain a transaction index also enables a
class of light node appl
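[The transaction-index idea can be sketched as below; the types and method names are hypothetical, chosen only to illustrate the point that spent and unspent outputs resolve identically.]

```rust
use std::collections::HashMap;

// Simplified: an output is just its value here.
struct TxRecord {
    outputs: Vec<u64>,
}

// A transaction index mapping txid -> full transaction record.
struct TxIndex {
    by_id: HashMap<[u8; 32], TxRecord>,
}

impl TxIndex {
    fn new() -> Self {
        TxIndex { by_id: HashMap::new() }
    }

    fn add_tx(&mut self, txid: [u8; 32], tx: TxRecord) {
        self.by_id.insert(txid, tx);
    }

    /// Look up an output by (txid, vout). Spent-ness is not tracked in
    /// this index, so spent outputs resolve exactly like unspent ones.
    fn output(&self, txid: &[u8; 32], vout: usize) -> Option<u64> {
        self.by_id
            .get(txid)
            .and_then(|t| t.outputs.get(vout).copied())
    }
}
```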
Expanding on this question a bit, it's optimized for parallel access, but
hard drive access isn't parallel and memory accesses are very fast, so
shouldn't the target of optimization be about cramming as much as possible
in memory and minimizing disk accesses?
On Fri, Apr 7, 2017 at 11:18 AM, Grego
On Thu, Apr 6, 2017 at 10:12 PM, Tomas via bitcoin-dev
wrote:
>As this
> solution, reversing the costs of outputs and inputs, seems to have
> excellent performance characteristics (as shown in the test results),
> updates to the protocol addressing the UTXO growth, might not be worth
> considering
Thank you,
The benches are running on Google Compute Engine, currently on 8 vCPU /
32 GB, but I tend to switch hardware regularly.
Roughly, the results are better for Bitcrust on high-end hardware, and
the difference for total block validation is mostly diminished at 2
vCPU / 7.5 GB.
Note that t
Interesting work.
I was wondering if you could tell us the specs of the machine used for
the preliminary benchmarks here: https://bitcrust.org/results ?
I'd also be interested to see comparisons with 0.14, which has some
improvements for script validation with more cores.
On Fri, Apr 7, 2017
Thank you Marcos,
Though written in Rust, bitcrust-db is definitely usable as a pluggable
module, as its interface will roughly consist of some queries plus
add_tx and add_block taking blobs and flags. (Bitcrust internally uses a
deserialize-only model, keeping references to the blobs alongside the
parsed data.)
How
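[A rough illustration of such a deserialize-only model; the field layout and names are invented here and are not bitcrust-db's actual interface.]

```rust
// A transaction "view" that keeps the raw serialized blob and only
// records byte ranges into it, instead of copying into owned structs.
struct TxView<'a> {
    blob: &'a [u8],
    // (offset, length) of each output's bytes within the blob.
    output_ranges: Vec<(usize, usize)>,
}

impl<'a> TxView<'a> {
    /// Borrow an output's raw bytes without copying out of the blob.
    fn output(&self, i: usize) -> Option<&'a [u8]> {
        self.output_ranges
            .get(i)
            .map(|&(off, len)| &self.blob[off..off + len])
    }
}
```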
Hi Tomas,
I've read it and think it is an excellent work, I'd like to see it
integrated into bitcoin-core as a 'kernel module'.
I see there are a lot of proofs of concept out there; IMO every one
deserves a place in the bitcoin client as a selectable feature, to make the
software more flexible and
On Fri, Apr 7, 2017, at 03:09, Gregory Maxwell wrote:
>
> How do you deal with validity rules changing based on block height?
I expected that one :). Just like the 100-block coinbase maturity rule,
changes made by softforks need to be added as metadata to the
transaction-index, but this is not yet in place.
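[A sketch of how per-transaction metadata could drive a height-dependent rule like coinbase maturity; the names are illustrative and this is not Bitcrust's actual code.]

```rust
const COINBASE_MATURITY: u32 = 100;

// Metadata a transaction index could store alongside each transaction.
struct TxMeta {
    height: u32,
    is_coinbase: bool,
}

/// A coinbase output may only be spent once 100 blocks have passed;
/// other outputs are spendable in any later block.
fn spendable_at(meta: &TxMeta, spend_height: u32) -> bool {
    if meta.is_coinbase {
        spend_height >= meta.height + COINBASE_MATURITY
    } else {
        spend_height > meta.height
    }
}
```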
On Fri, Apr 7, 2017, at 02:32, Gregory Maxwell wrote:
> Perhaps a simple question would help:
>
> What is the minimal amount of space your system requires to take a new
> block received from the P2P network and verify that all its spends
> were valid spends of existing unspent coins to
On Fri, Apr 7, 2017 at 12:48 AM, Tomas wrote:
> Bitcrust separates script validation (base load, when transaction come
> in) from order validation (peak load, when blocks come in).
How do you deal with validity rules changing based on block height?
> For script validation it would obviously need
Hi Eric,
Thanks, but I get the impression that the similarity is rather
superficial.
To address your points:
> (1) higher than necessary storage space requirement due to storing the
> indexing data required to correlate the spends, and
Hmm. No. Spends are simply scanned in the spend-tree (fu
On 04/06/2017 03:12 PM, Tomas via bitcoin-dev wrote:
Hi Tomas,
> I have been working on a bitcoin implementation that uses a
> different approach to indexing for verifying the order of
> transactions. Instead of using an index of unspent outputs, d
I have been working on a bitcoin implementation that uses a different
approach to indexing for verifying the order of transactions. Instead of
using an index of unspent outputs, double spends are verified by using a
spend-tree where spends are scanned against spent outputs instead of
unspent output
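[The spend-tree approach described above could be sketched roughly as follows; this is an illustrative toy with the branch flattened to a single list, and all identifiers are invented.]

```rust
#[derive(Clone, Copy, PartialEq)]
struct OutPoint {
    txid: [u8; 32],
    vout: u32,
}

// Flattened branch of the spend-tree: spend records in block order,
// newest last. A real implementation would track competing branches.
struct SpendTree {
    records: Vec<OutPoint>,
}

impl SpendTree {
    fn new() -> Self {
        SpendTree { records: Vec::new() }
    }

    /// Scan the branch's *spent* outputs for a conflict; a match means
    /// a double spend. Returns false if `out` was already spent.
    fn try_spend(&mut self, out: OutPoint) -> bool {
        if self.records.iter().rev().any(|r| *r == out) {
            return false;
        }
        self.records.push(out);
        true
    }
}
```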