Re: [Mimblewimble] introduction

2018-03-10 Thread Ignotus Peverell
Thanks for the feedback Jameson. Obviously we're still in the pre-alpha stage, 
closing in on alpha, but I think we have a good shot at having a fairly 
reliable first release when we get there. I'm hoping node operators like you 
won't get woken up in the middle of the night because of us too often.

I expect most of our I/O is spent in our MMR storage implementation; we don't 
ask very much from the k/v store (rocksdb until someone hates it enough to step 
up :-)). And that storage, while not easy to wrap your head around at first (or 
second), is, from a storage standpoint, very simple [1]. Contrast with Parity 
or go-ethereum, which store the entire Patricia tree in rocksdb/leveldb.
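
To illustrate, here's a minimal sketch of the flat-file idea in plain std Rust (hypothetical names, not grin's actual MMR backend): each hash is a fixed-size record, so a node's position doubles as its index, appends never rewrite existing data, and no k/v lookup is involved at all.

```rust
use std::fs::{File, OpenOptions};
use std::io::{Read, Seek, SeekFrom, Write};

const HASH_SIZE: u64 = 32;

/// Append-only file of fixed-size hashes: the node at position `pos`
/// lives at byte offset `pos * HASH_SIZE`, so no key/value index is needed.
struct HashFile {
    file: File,
}

impl HashFile {
    fn open(path: &std::path::Path) -> std::io::Result<HashFile> {
        let file = OpenOptions::new()
            .read(true)
            .append(true)
            .create(true)
            .open(path)?;
        Ok(HashFile { file })
    }

    /// Append a hash, returning its position in the file.
    fn append(&mut self, hash: &[u8; 32]) -> std::io::Result<u64> {
        let pos = self.file.metadata()?.len() / HASH_SIZE;
        self.file.write_all(hash)?;
        Ok(pos)
    }

    /// Random-access read of the hash at `pos`.
    fn read_at(&mut self, pos: u64) -> std::io::Result<[u8; 32]> {
        let mut buf = [0u8; 32];
        self.file.seek(SeekFrom::Start(pos * HASH_SIZE))?;
        self.file.read_exact(&mut buf)?;
        Ok(buf)
    }
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("mmr_hashes.bin");
    let _ = std::fs::remove_file(&path);
    let mut hf = HashFile::open(&path)?;
    assert_eq!(hf.append(&[1u8; 32])?, 0);
    assert_eq!(hf.append(&[2u8; 32])?, 1);
    assert_eq!(hf.read_at(1)?, [2u8; 32]);
    Ok(())
}
```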

- Igno

[1] https://github.com/mimblewimble/grin/blob/master/store/src/types.rs


Re: [Mimblewimble] introduction

2018-03-10 Thread Luke Kenneth Casson Leighton
a couple other things: 1. lmdb is specifically designed for efficiency
(optimised for read because it's for use in openldap).  that *happens*
to accidentally result in fast write performance under certain
commonly-occurring [but *not* all] circumstances.  2. if you use
compression on the values (i'd recommend snappy as it's a very fast
partial-compression algorithm), then whilst you gain a reduced storage
size you _will_ lose the speed benefit of not having that memcpy copy
the value (as on every other key-value store): you would have to
decompress every single value before you could access it.

l.

-- 
Mailing list: https://launchpad.net/~mimblewimble
Post to : mimblewimble@lists.launchpad.net
Unsubscribe : https://launchpad.net/~mimblewimble
More help   : https://help.launchpad.net/ListHelp


Re: [Mimblewimble] introduction

2018-03-09 Thread Luke Kenneth Casson Leighton
On Fri, Mar 9, 2018 at 6:29 PM, Ignotus Peverell
 wrote:

> I'm not sure why but RocksDb seems really unpopular and lmdb
> very popular these days.

 could have something to do with rocksdb's legacy from leveldb: when
 put into real-world scenarios it's known to cause data corruption,
 and leveldb was abandoned by the google developers... maybe that has
 something to do with it? :)

 lmdb is popular in part because in an extremely
challenging-to-understand technical way it guarantees not to corrupt
the key-value store *without requiring a write-log* [as long as you
enable fsync mode... which in turn can hammer SSDs... which is the
focus of some improvements in lmdb right now].

 also the compression in leveldb / rocksdb... yyeaah how's that work
out on a low-cost android phone with a 1ghz ARM Core with only a
32-bit-wide DDR3 bus bandwidth and 32k 1st-level instruction and data
caches, as opposed to a hyper-threaded 3.5ghz 12-core with 1mb
1st-level cache per core, 12mb cache-coherent 2nd-level, and
256-bit-wide 2.4ghz DDR4 multi-DIMM funneled memory?


>  One often overlooked aspect of a database is the quality of the bindings
> in your PL, because poorly written bindings can make all the database
> guarantees go away.

 ok one very very important thing to know about lmdb is: as it's
memory-mapped (shm with copy-on-write semantics) it returns *direct*
pointers to the values.  this is extremely important to know because
most key-value stores return entire memory-copies of the values, even
if you're storing 100 megabyte files.

 there do exist "safe-i-fied" variations of lmdb go bindings... it's
up to you, just bear in mind if you do so you'll be losing one of the
main benefits of lmdb.
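
 for illustration, the borrow-vs-copy distinction can be sketched in plain
Rust with a toy store (this is an analogy for what memory-mapping allows,
*not* the actual lmdb bindings):

```rust
use std::collections::BTreeMap;

/// Toy key-value store illustrating borrowed (zero-copy) reads versus
/// copying reads. An analogy only -- not lmdb's real API.
struct Store {
    data: BTreeMap<Vec<u8>, Vec<u8>>,
}

impl Store {
    fn new() -> Store {
        Store { data: BTreeMap::new() }
    }

    fn put(&mut self, key: &[u8], value: &[u8]) {
        self.data.insert(key.to_vec(), value.to_vec());
    }

    /// Zero-copy read: a slice borrowed straight from the store's memory.
    /// The borrow checker plays the role lmdb's read transactions play in C:
    /// the pointer is only valid while the store is alive and unmodified.
    fn get(&self, key: &[u8]) -> Option<&[u8]> {
        self.data.get(key).map(|v| v.as_slice())
    }

    /// Copying read: the whole value is memcpy'd into a fresh Vec, which is
    /// what most key-value stores do on every lookup.
    fn get_copy(&self, key: &[u8]) -> Option<Vec<u8>> {
        self.data.get(key).cloned()
    }
}

fn main() {
    let mut store = Store::new();
    store.put(b"block:1", &vec![0u8; 1024]);

    let borrowed = store.get(b"block:1").unwrap(); // no allocation, no copy
    let copied = store.get_copy(b"block:1").unwrap(); // a 1 KiB memcpy

    assert_eq!(borrowed, copied.as_slice());
    assert_eq!(borrowed.len(), 1024);
}
```

 the "safe-i-fied" bindings mentioned above essentially force the
`get_copy` path everywhere, which is exactly the cost being discussed.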

 i remember someone, roger binns, a long looong time ago, telling me
the wonderful phrase, "if you make software idiot-proof only idiots
will use it" :)  basically lmdb encourages and invites people to be...
intelligent :)


> And I was a lot more worried about the cryptography
> and the size of range proofs back then.

 yehyeh.


> I know the opinions of the lmdb author and others regarding atomicity
> in storages and frankly, I think they're a little too storage-focused

 yyyeah it's kiinda reasonable to assume that the underlying storage
is reliable? :)  and that you're running suitable backups.  what
howard's pointing out is that many of these new key-value stores, even
if the underlying storage *is* reliable, simply corrupt the data
anyway, particularly when it comes to power-loss events.

 lmdb was *literally* the only key-value store that did not corrupt
data, in one comprehensive study.  the only reason it was corrupting
data in the preliminary report was because the PhD researcher did not
know about the fsync option in lmdb (he'd disabled it).  when he
switched it on and re-ran the tests, *zero* data corruption.


> They're append-only for the most part so dealing with failure is
> also very easy

 ok so one very useful feature of lmdb is, not only does it have
range-search capability, but if you can guarantee that the key being
inserted is larger than any other key that's been inserted up to that
point, you can call a special function "insert at end".  this i
believe only requires like 2 writes to disk, or something mad.

 if the key is the block number and that's guaranteed to be
incrementing, you're good to go.

 oh: it also has atomic write transactions, *without* locking out
readers.  because of the copy-on-write semantics.  the writer locks
the root node (beginning a transaction), starts preparing the new
version of the database (each write to a memory-block makes a COPY of
that memory block), and finally once done there's a bit of
arseing-about locking all readers out for a bit whilst the root node
is updated, and you're done.  i say "arseing about", but actually all
readers have their own "transaction" - i.e. they'll be running off of
their own root-node during that open transaction, so the
"arseing-about" to get readers sync'd up only occurs when the reader
closes the read transaction.  opening the *next* read transaction will
be when they get the *new* (latest) root block.

 my point of mentioning this is: to do a guaranteed (fast)
last-key-insert you do this:

 * open write transaction
 * seek to end of store
 * read last key
 * add one (or whatever)
 * write new value under new key with the "insert-at-end" function.
 * close write transaction.
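
 those steps sketched in Rust, with a std BTreeMap standing in for the
lmdb store (in real lmdb this would be a write transaction plus a cursor
put with the MDB_APPEND flag; the names here are illustrative only):

```rust
use std::collections::BTreeMap;

/// Sketch of the "insert at end" flow, with a BTreeMap standing in for
/// the lmdb store. In real lmdb this would be a write transaction and a
/// put with the MDB_APPEND flag, which is only valid because the new key
/// is guaranteed greater than every existing key.
fn insert_at_end(store: &mut BTreeMap<u64, Vec<u8>>, value: Vec<u8>) -> u64 {
    // "seek to end of store, read last key"
    let last_key = store.keys().next_back().copied();
    // "add one (or whatever)"
    let new_key = last_key.map_or(0, |k| k + 1);
    // "write new value under new key" -- lmdb takes the fast append path
    // here instead of descending the tree to find an insertion point.
    store.insert(new_key, value);
    new_key
}

fn main() {
    let mut chain: BTreeMap<u64, Vec<u8>> = BTreeMap::new();
    assert_eq!(insert_at_end(&mut chain, b"genesis".to_vec()), 0);
    assert_eq!(insert_at_end(&mut chain, b"block 1".to_vec()), 1);
    assert_eq!(insert_at_end(&mut chain, b"block 2".to_vec()), 2);
}
```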



> So anyway, I'm definitely not married to RocksDb, but I don't think it
> matters enormously either. My biggest beef with it at this point is that
> it's a pain to build and has probably 10x the number of features we need.
> But swapping it out is just 200 LOC [1]. So maybe it's worth doing it just 
> for this reason.

 yehyeh.  howard chu would almost certainly be interested to help, and
look things over.


> Now I'm going to link to this email on the 10 other places where I've been 
> asked about this :-)

 :)

l.


Re: [Mimblewimble] introduction

2018-03-09 Thread Jameson Lopp
Anecdotally, over the past year I've experienced performance issues running
both Ripple and Parity nodes. These issues were generally related to disk
I/O and more specifically pointed toward RocksDB being the culprit.

I solved the Ripple I/O issue by changing nodes to use Ripple's
implementation-specific "NuDB".

The Parity issue got a lot better when they upgraded the version of RocksDB
that they were using, though I heard that there is also an initiative to
write a Parity specific DB.


Re: [Mimblewimble] introduction

2018-03-09 Thread Ignotus Peverell
I'm not sure why but RocksDb seems really unpopular and lmdb very popular these 
days. Honestly, I didn't put that much thought into RocksDb originally. When I 
started on grin, I looked at the code of other Rust blockchain implementations. 
Parity was the more advanced one (on Ethereum) and they were using RocksDb, so 
I figured it would work out okay and the bindings would at least be decent. One 
often overlooked aspect of a database is the quality of the bindings in your 
PL, because poorly written bindings can make all the database guarantees go 
away. And I was a lot more worried about the cryptography and the size of range 
proofs back then.

I know the opinions of the lmdb author and others regarding atomicity in 
storages and frankly, I think they're a little too storage-focused (I've known 
some Oracle DBAs with similar positions). In my experience, from an application 
standpoint, putting too much trust in storage guarantees is a bad idea. 
Everything fails eventually, and when it does storage people are pretty quick 
to put the blame on disks (gotta do Raid 60), networks, language bindings, or 
you. Btw I'm guilty as well, I have implemented some simple storages in the 
past.

Truth is, it's actually rather easy to write a resilient blockchain node on 
not-so-resilient storage (note: I'm talking about a node here, not wallets). 
The data is immutable and can be replayed at will. You messed up on the last 
block? Fine, restart on the one before that and just make sure it's all 
idempotent. If you're dealing with balances it's a little more complicated, but 
a node isn't. And with careful design, you can make a lot of things 
idempotent. It's also practically impossible for grin to rely on atomic 
storage because we have a separate state (Merkle Mountain Ranges) that is 
specifically designed to be easy to store in a flat file, while very unwieldy 
and slow to store in a k/v db. It's append-only for the most part, so dealing 
with failure is also very easy (note: that does not preclude bugs, but those 
get fixed). And when you squint right, the whole blockchain storage is 
append-only. From a storage standpoint, it's hard to find a more fault-tolerant 
use case.
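
To illustrate the replay argument, a minimal sketch (toy types, hypothetical names, not grin's actual code): if applying a block is just set operations on the UTXO set, applying the same block twice leaves the state exactly where applying it once did, so replaying after a crash is harmless.

```rust
use std::collections::HashSet;

/// Hypothetical toy block: spends some outputs, creates others.
struct Block {
    spent: Vec<u64>,
    created: Vec<u64>,
}

/// Applying a block is a pair of set operations, so the apply is
/// idempotent: re-running it after a crash changes nothing.
fn apply(utxo: &mut HashSet<u64>, block: &Block) {
    for out in &block.spent {
        utxo.remove(out); // removing an already-removed output is a no-op
    }
    for out in &block.created {
        utxo.insert(*out); // inserting an existing output is a no-op
    }
}

fn main() {
    let mut utxo: HashSet<u64> = vec![1, 2, 3].into_iter().collect();
    let block = Block { spent: vec![2], created: vec![4, 5] };

    apply(&mut utxo, &block);
    let after_once = utxo.clone();

    // Simulate a crash before recording the block as applied: replay it.
    apply(&mut utxo, &block);
    assert_eq!(utxo, after_once); // same state: the replay was harmless
}
```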

So anyway, I'm definitely not married to RocksDb, but I don't think it matters 
enormously either. My biggest beef with it at this point is that it's a pain to 
build and has probably 10x the number of features we need. But swapping it out 
is just 200 LOC [1]. So maybe it's worth doing it just for this reason.

Now I'm going to link to this email on the 10 other places where I've been 
asked about this :-)

- Igno

[1] https://github.com/mimblewimble/grin/blob/master/store/src/lib.rs



Re: [Mimblewimble] introduction

2018-03-08 Thread Luke Kenneth Casson Leighton
On Thu, Mar 8, 2018 at 8:03 PM, Ignotus Peverell
 wrote:
>> > There is a denial-of-service option when a user downloads the chain,
>> > the peer can give gigabytes of data and list the wrong unspent outputs.
>> > The user will see that the result do not add up to 0, but cannot tell where
>> > the problem is.
>
>> which to be honest I do not quite understand. The user normally downloads
>> the chain by requesting blocks from peers, starting with just the headers
>> which can be checked for proof-of-work.
>
> The paper here refers to the MimbleWimble-style fast sync (IBD),

 hiya igno,

 lots of techie TLAs here that clearly tell me you're on the case and
know what you're doing.  it'll take me a while to catch up / get to
the point where i could usefully contribute, i must apologise.

 in the meantime (switching tracks), one way i can definitely
contribute to the underlying reliability is to ask why rocksdb has
been chosen?
   https://www.reddit.com/r/Monero/comments/4rdnrg/lmdb_vs_rocksdb/
   https://github.com/AltSysrq/lmdb-zero

 rocksdb is based on leveldb, which was designed to hammer both the
CPU and the storage, on the *assumption* by google engineers that
everyone will be using leveldb in google data centres, with google's
money, and with google's resources, i.e. CPU is cheap and there will
be nothing else going on.  they also didn't do their homework in many
other ways, resulting in an unstable pile of poo.  and rocksdb is
*based* on that.

 many people carrying out benchmark tests forget to switch off the
compression, or they forget to compress the key and/or the value being
stored when comparing against lmdb, or bdb, and so on.

 so.  why was rocksdb chosen?

l.



Re: [Mimblewimble] introduction

2018-03-08 Thread Luke Kenneth Casson Leighton
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68


On Thu, Mar 8, 2018 at 4:17 PM, John Tromp  wrote:
> hi Luke,
>
>> current crypto-currencies with the exception of monero are based
>> around the principle of hiding... so of course they are being hunted
>> and hounded.  monero and i believe grin-coin at least provide both
>> privacy *and traceability*, such that an individual may *prove* to
>> their local tax authorities that yes they accepted the transaction,
>> but that they can also prove that the transaction was completed
>> *outside of their jurisdiction*.
>
> Are you talking about transactions that the user reports to the authorities
> in the first place?

 no.  transactions that the user does *not* report (initially) because
they're private [and it's not tax-paying time-of-year].

> In that case the user would simply not report transactions
> that are "inside the jurisdiction", whatever that means.

 that would be a criminal offense, not to report earnings / income.
however it's perfectly fine to report earnings / income... *that they
cannot tax*.

l.



Re: [Mimblewimble] introduction

2018-03-08 Thread Ignotus Peverell
> > There is a denial-of-service option when a user downloads the chain,
> > the peer can give gigabytes of data and list the wrong unspent outputs.
> > The user will see that the result do not add up to 0, but cannot tell where
> > the problem is.

> which to be honest I do not quite understand. The user normally downloads
> the chain by requesting blocks from peers, starting with just the headers
> which can be checked for proof-of-work.

The paper here refers to the MimbleWimble-style fast sync (IBD), where you only 
get the block headers, the UTXO set with its range proofs, and the kernels. 
You can validate the whole thing without getting the full history, with an 
almost identical security model. In fast sync you don't get any actual blocks 
(practically you still want some recent history in case of a reorg, but let's 
gloss over that for now). This is actually one of the main attractions of MW, 
as you can make IBD very efficient this way.

First, the DoS angle mentioned in the paper isn't nearly at the top of the list 
of what gives me grey hair. Download bandwidth is usually higher and cheaper 
than upload, so the attacker would incur non-negligible cost. You can also 
force-switch peers if one is too slow. And it's worth noting that no matter 
what, you definitely know you've been lied to. It's also a DoS angle that's 
limited in time and place, unlike say a spam attack, which would impact 
everyone, potentially forever.

Second, the paper makes minimal assumptions about what the rest of the chain 
looks like. It turns out that in Grin we have Merkle-like trees (Merkle 
Mountain Ranges) that are reasonably compact and commit to the TXO set as well 
as the kernels. So we can definitely implement a more incremental IBD mode that 
gets the headers first (just like now), then the trees, then streams the UTXOs 
and kernels, making sure each of them is present in the trees and there are no 
extras.

There's still one rather subtle way an attacker could be obnoxious, even with 
the incremental IBD. I'll leave that as an exercise to the reader, but it'd be 
fairly straightforward to fix as well if it became a problem, by committing to 
a (very small) UTXO bitset.
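
To sketch the incremental idea (toy code with hypothetical names; a std hasher stands in for a real cryptographic hash and a flat set stands in for the MMR roots): once the headers commit to the output set, each streamed item can be checked as it arrives, so a lying peer is caught on the first bad item instead of only at a final "sums to zero" check.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

// Stand-in for a real cryptographic hash of an output.
fn output_hash(output: &str) -> u64 {
    let mut h = DefaultHasher::new();
    output.hash(&mut h);
    h.finish()
}

/// Toy incremental IBD check: `committed` stands in for what the MMR
/// roots in the headers commit to. Each streamed output must appear in
/// the commitment, and nothing may be withheld or duplicated.
fn validate_stream(committed: &HashSet<u64>, stream: &[&str]) -> Result<(), String> {
    let mut seen = HashSet::new();
    for output in stream {
        let h = output_hash(output);
        if !committed.contains(&h) {
            return Err(format!("output {:?} not committed to by headers", output));
        }
        seen.insert(h);
    }
    if seen.len() != committed.len() {
        return Err("peer withheld or duplicated committed outputs".to_string());
    }
    Ok(())
}

fn main() {
    let committed: HashSet<u64> = ["utxo-a", "utxo-b", "utxo-c"]
        .iter()
        .map(|o| output_hash(o))
        .collect();

    assert!(validate_stream(&committed, &["utxo-a", "utxo-b", "utxo-c"]).is_ok());
    // A peer listing a wrong unspent output is rejected on that item.
    assert!(validate_stream(&committed, &["utxo-a", "utxo-x", "utxo-c"]).is_err());
}
```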

- Igno



Re: [Mimblewimble] introduction

2018-03-08 Thread John Tromp
hi Luke,

> current crypto-currencies with the exception of monero are based
> around the principle of hiding... so of course they are being hunted
> and hounded.  monero and i believe grin-coin at least provide both
> privacy *and traceability*, such that an individual may *prove* to
> their local tax authorities that yes they accepted the transaction,
> but that they can also prove that the transaction was completed
> *outside of their jurisdiction*.

Are you talking about transactions that the user reports to the authorities
in the first place? In that case the user would simply not report transactions
that are "inside the jurisdiction", whatever that means.

> in reading the mimblewimble whitepaper i noticed that it said that
> spammers can carry out a denial-of-service attack by flooding the
> network with "wrong unspent outputs".  the proposed solution was to
> download the blockchain from a torrent or from multiple users.

The whitepaper at
https://download.wpsoftware.net/bitcoin/wizardry/mimblewimble.txt has
this paragraph:

3. There is a denial-of-service option when a user downloads the chain, the
   peer can give gigabytes of data and list the wrong unspent outputs. The
   user will see that the result do not add up to 0, but cannot tell where
   the problem is.

which to be honest I do not quite understand. The user normally downloads the
chain by requesting blocks from peers, starting with just the headers which
can be checked for proof-of-work. Having identified the chain of headers with
the most work (cumulative difficulty), the user then requests the full blocks
one or a few at a time. If any of them have bad data, then the user would
reject them, and ban the peer that provided it with the bad block. I don't see
how the user would receive "gigabytes" of bad data in this model, unless all
peers (s)he connects to are malicious.

regards,
-John
