Re: [Lightning-dev] Towards a gridlike Lightning Network

2018-04-20 Thread Benjamin Mord
Good afternoon ZmnSCPxj,

"I do not see a bloom filter?"

Well, if you look at it kinda sideways, you are using a bloom filter in
your March 23rd proposal. As originally defined, I think the "false
positives" in bloom filtering were an unfortunate cost of performance. In
BIP 37, the false positives become desirable, although they are still
'false' in that their only function is to serve as red herrings. But
(omitting i for clarity), your proposal takes BIP 37's spin on bloom
filters one step further, taking the 'false positives' as the very
definition of our desired set, since what you are "searching for" is just
your own public key, which ends up being the least interesting result
within that set.
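To make the analogy concrete, here is a minimal sketch (in Python, with made-up key values and a deliberately tiny filter) of a node inserting only its own public key into a Bloom filter, so that every other key matching the filter is a "false positive", and it is exactly those false positives that define the neighborhood:

```python
import hashlib

class BloomFilter:
    """A minimal Bloom filter: k hash positions over an m-bit array."""
    def __init__(self, m=16, k=2):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        # Derive k independent positions by salting SHA256 with the index.
        for i in range(self.k):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h, "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def matches(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))

# A node builds a filter containing only its own pubkey...
my_key = b"my-node-pubkey"          # placeholder, not a real key
f = BloomFilter(m=16, k=2)          # deliberately tiny, so collisions are common

f.add(my_key)

# ...then every other key that *also* matches is a "false positive",
# and those false positives are exactly the node's neighborhood.
others = [b"peer-%d" % i for i in range(500)]
neighborhood = [k for k in others if f.matches(k)]
```

The filter size m would control neighborhood granularity: shrinking m raises the false-positive rate and so grows the neighborhood.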

" Regarding 24 vs 23, the condition for 23 allows a 3 members of a 5-member
neighborhood to think they form a single 3-member neighborhood, while the
remaining 2 members think they are in a 5-member neighborhood that includes
the other 3 members who have formed a 3-member neighborhood."

Oh, I see. But that occurs because different nodes are considering
different numbers of high-order bits. If everyone used the same number of
high-order bits, then it would become an equivalence relation, with which
we can partition the network: a=b would imply b=a, and a=b together with
b=c would imply a=c.
https://en.wikipedia.org/wiki/Equivalence_relation#Partition
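A minimal sketch of this (with hypothetical 8-bit node IDs): if every node keys its neighborhood off the same number of high-order bits, "same cell" is an equivalence relation, so the network partitions cleanly.

```python
K = 3  # the number of high-order bits everyone agrees to use

def cell(node_id: int, id_bits: int = 8) -> int:
    """Map a node ID to its cell: the top K bits of the ID."""
    return node_id >> (id_bits - K)

nodes = [0b00010110, 0b00011001, 0b11100001, 0b11110000]

# Reflexivity, symmetry, and transitivity all hold because cell() is a
# plain function of the ID, and equality of its values is transitive.
same = lambda a, b: cell(a) == cell(b)
assert all(same(a, a) for a in nodes)
assert all(same(b, a) for a in nodes for b in nodes if same(a, b))

# Hence a clean partition of the network into cells:
cells = {}
for n in nodes:
    cells.setdefault(cell(n), []).append(n)
# cells == {0b000: [0b00010110, 0b00011001], 0b111: [0b11100001, 0b11110000]}
```

The ambiguity in the March 24 algorithm arises precisely because different nodes may pick different K, so `same` stops being transitive.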

On second look, I think the algorithm from March 24 is actually broken, but
I understand now what you are trying to achieve. You want to allow local
judgement over the best local cell size, and yet somehow end up with precise
uniform agreement on who is in which cells, because cycles require such
precision. But once you factor in that the network is dynamic, knowledge is
imperfect, and malicious behavior may occur, I think strict equivalence
relations and cycles become brittle. Perhaps we should generalize the
equivalence relation into a distance function, so that we can start thinking
of this as a metric space which we want to fill with some sort of structure.
Perhaps then we can design efficient yet robustly "fuzzy" structures.
Perhaps we want a fuzzy fractal of some sort. Hmm...

"Intermediate nodes already know two hops?  The incoming and outgoing hop?
Or do you need more information?"

Yes, nodes would need to know one hop more, since the idea would be to
attract competition to the high-usage yet high-fee links.

Thanks,
Ben


On Thu, Apr 19, 2018 at 11:24 PM, ZmnSCPxj wrote:

> Good morning Benjamin,
>
>
> I think there are two distinct concepts here. The first is the
> identification of a 'neighborhood', and the second is the establishment of
> an order within that neighborhood for purpose of cycle formation.
>
> Your use of bloom filters to define a neighborhood, is I think the most
> valuable contribution. Formation of neighborhoods with high connectivity,
> with sparse but redundant connections among these neighborhoods, does seem
> like an economically efficient approach to maintaining useful competition
> and redundancy. If there are any graph theorists or category theorists on
> the list, perhaps they could offer some formal validation or optimization.
> For this, I prefer your March 23 proposal over March 24, I'm curious what
> improvement is intended in March 24 vs 23?
>
>
> I do not see a bloom filter? But then I am not a mathematician so it is
> possible I fail to see how the Bloom filter arises from the algorithm I
> described.
>
> Regarding 24 vs 23, the condition for 23 allows 3 members of a 5-member
> neighborhood to think they form a single 3-member neighborhood, while the
> remaining 2 members think they are in a 5-member neighborhood that includes
> the other 3 members who have formed a 3-member neighborhood.
>
>
> The emergent definition and maintenance of a unique ordering for cycle
> establishment within a neighborhood is, I think, a much more ambitious
> undertaking. I'm not sure how we efficiently make that robust in a dynamic
> context, except perhaps with interactive coordination among the members
> operating off something other than just static global data. Otherwise
> different members would have different ideas about cycle order, depending
> on when they first joined. I also don't see how cycles recover when someone
> leaves.
>
> As people come and go, cycles will break. As the lightning network grows
> overall, neighborhoods identified by one setting of the bloom filter will
> become undesirably large. Perhaps a less ambitious but more robust
> heuristic would be one where probability of establishing a channel is
> proportional to the number of bits in common in the pubkey hash, normalized
> by the number of nodes currently observed?
>
>
> I believe that is what the algorithm already does? It dynamically sizes
> neighborhoods to be small, with a high probability that a neighborhood has
> 3 to 5 members.
>
> This heuristic would automatically adjust granularity 

Re: [Lightning-dev] Proposal for Advertising Lightning nodes via DNS records.

2018-04-20 Thread ZmnSCPxj via Lightning-dev
Good morning Tyler,

> Great points.  IsStandard() is something I hadn't considered yet, but I think 
> miners are incentivized to want Numerifides transactions, as a registration 
> will need a solid miner's fee, and "revoked" names will cause escalating fee 
> wars that the miners can just soak up.  I think a standard that uses mappings 
> in a sane way (and maybe pushdata2/4 won't be allowed if 255 bytes are 
> enough) would be allowable given the benefit it brings of truly 
> decentralized, human-readable trust.

Granted, but using the scriptPubKey will require changes to miner software, and 
would require a large number of mining pools to support it.  And large numbers 
of mining pools will not support it significantly unless you have already 
demonstrated its usefulness, so you may find bootstrapping less easy.

One thing that can be done would be to publish the command in the witness 
vector and use an intermediate transaction.  This at least lets you use the 
cheaper witness space.

1.  First pay to a P2WSH of: OP_HASH160 <hash of command> OP_EQUALVERIFY 
<pubkey> OP_CHECKSIG
2.  Spend the above P2WSH, which requires you to provide the witness data 
<signature> <command> <witness script>
3.  The spend should pay out a value to the P2WSH of: <delay> 
OP_CHECKSEQUENCEVERIFY OP_DROP <pubkey> OP_CHECKSIG

This puts the extra data into the witness area, which is cheaper, and also 
utilizes P2WSH so that you do not have to convince miners to use Numerifides.  
bitcoin-dev will still cry because it puts non-financial data onchain, but at 
least fewer tears will be shed since it is the witness area.
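For illustration only, the three steps above might be sketched as follows. The opcode byte values are real, but the command, pubkey, delay, and HASH160 stand-in are placeholders; this is a sketch, not a consensus-exact transaction builder:

```python
import hashlib

OP_HASH160, OP_EQUALVERIFY, OP_CHECKSIG = b"\xa9", b"\x88", b"\xac"
OP_CSV, OP_DROP = b"\xb2", b"\x75"

def push(data: bytes) -> bytes:
    assert len(data) < 76             # direct pushes only, for brevity
    return bytes([len(data)]) + data

def hash160(data: bytes) -> bytes:
    # Stand-in: real HASH160 is RIPEMD160(SHA256(x)); truncated double-SHA256
    # keeps this sketch free of the optional ripemd160 backend.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()[:20]

command = b"example.com=203.0.113.1"  # the Numerifides mapping (placeholder)
pubkey = b"\x02" + b"\x11" * 32       # placeholder 33-byte pubkey

# Step 1: a witness script whose spend must reveal the command preimage.
reveal_script = (OP_HASH160 + push(hash160(command)) + OP_EQUALVERIFY
                 + push(pubkey) + OP_CHECKSIG)

# On-chain, the output commits only to SHA256(witness script), so miners
# see an ordinary P2WSH output -- nothing recognizable to censor.
p2wsh_commitment = hashlib.sha256(reveal_script).digest()

# Step 3: the payout is locked under a CSV relative-timelock delay.
delay = push((144).to_bytes(2, "little"))   # ~1 day in blocks, illustrative
timelock_script = delay + OP_CSV + OP_DROP + push(pubkey) + OP_CHECKSIG
```

The key point is that `command` appears only in the witness (step 2), never in a scriptPubKey, so it pays witness-discounted rates and needs no miner cooperation.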

> I also wonder what the economic incentive might be for every node to store 
> and gossip the Numerifides mappings - sure they want everyone to find them, 
> but who cares about other people? It could be a situation like the current 
> Bitcoin mempool where it's saved on a best-effort basis and is 
> semi-transient, but that makes troubleshooting lookups problematic.

You have an economic incentive to *store* all the Numerifides mappings -- if 
you do not, somebody could fool you with a revoked mapping, or you might not be 
able to locate a mapping you need to use.

Incentive to then *share* mappings could be that peers would try a "tit for 
tat" strategy: they will give you one (or a few) mappings "for free", but if 
you do not give any back, they will stop sharing with you.  So you are 
incentivized to contact multiple peers and try to trade information from one 
with information from another.  But that requires a durable identity from you, 
which may be undesirable.
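As a sketch, the "tit for tat" sharing policy described above might look like this (the Peer class, allowance of two freebies, and mapping values are all hypothetical):

```python
class Peer:
    FREE_MAPPINGS = 2   # how many mappings to give away before requiring trade

    def __init__(self, mappings):
        self.mappings = dict(mappings)   # name -> value
        self.given = {}                  # per-peer count of mappings served
        self.received = {}               # per-peer count of mappings received

    def request(self, who: str):
        """Serve a mapping: freely up to the allowance, then only to traders."""
        if self.given.get(who, 0) < self.FREE_MAPPINGS or self.received.get(who, 0) > 0:
            if self.mappings:
                name, value = next(iter(self.mappings.items()))
                self.given[who] = self.given.get(who, 0) + 1
                return name, value
        return None   # stop sharing with a leech

    def offer(self, who: str, name, value):
        """Accept a mapping traded back by a peer."""
        self.mappings[name] = value
        self.received[who] = self.received.get(who, 0) + 1

alice = Peer({"a.example": "192.0.2.1", "b.example": "192.0.2.2"})
assert alice.request("bob") is not None      # first freebie
assert alice.request("bob") is not None      # second freebie
assert alice.request("bob") is None          # cut off: bob gave nothing back
alice.offer("bob", "d.example", "192.0.2.4")
assert alice.request("bob") is not None      # sharing resumes after a trade
```

As noted, the catch is that "bob" must be a durable identity for the accounting to work, which is itself a privacy cost.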

One could also wonder what economic incentive might be to *seed* torrents as 
opposed to leech them only, other than a "high-level" consideration that if 
nobody seeds, nobody can leech.

> Also, I know this is only tangentially related to Lightning so if this is a 
> discussion best left off the mailing list, just let me know.

bitcoin-dev will probably have more ideas and might be able to point you at 
some prior art for similar systems.

Regards,
ZmnSCPxj
_______________________________________________
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Proposal for Advertising Lightning nodes via DNS records.

2018-04-20 Thread Tyler H
Great points.  IsStandard() is something I hadn't considered yet, but I
think miners are incentivized to want Numerifides transactions, as a
registration will need a solid miner's fee, and "revoked" names will cause
escalating fee wars that the miners can just soak up.  I think a standard
that uses mappings in a sane way (and maybe pushdata2/4 won't be allowed if
255 bytes are enough) would be allowable given the benefit it brings of
truly decentralized, human-readable trust.

I also wonder what the economic incentive might be for every node to store
and gossip the Numerifides mappings - sure they want everyone to find them,
but who cares about other people? It could be a situation like the current
Bitcoin mempool where it's saved on a best-effort basis and is
semi-transient, but that makes troubleshooting lookups problematic.

Also, I know this is only tangentially related to Lightning so if this is a
discussion best left off the mailing list, just let me know.

Thanks,
Tyler

On Fri, Apr 20, 2018 at 1:46 AM ZmnSCPxj wrote:

> Good morning Tyler,
>
> I like the efficiency your method brings and I'm also not that enthused
> about bloating the blockchain with "non-financial data", however I do think
> there's value in having the data live in the base chain, both for
> accessibility and for censorship resistance of the data, without relying on
> additional "networks".
>
>
> Gossiped data is almost impossible to censor (ask Streisand how well that
> works to censor her Malibu home).  However, mere gossip is often
> unverifiable.
>
> What we do here is twofold:
>
> 1.  We use the blockchain layer for verification.  Commands like
> "google.com=127.0.0.1" are backed by actual Bitcoin satoshis being locked,
> sacrificing opportunity cost, which makes them costly and verifiably
> costly, unlike gossip, which is unverifiable.
> 2.  We use the gossip overlay for censorship resistance.  Once a command
> has been confirmed on the Bitcoin blockchain, we can share that command to
> our peers on the gossip overlay, and unless all our peers are colluding, it
> is likely that a command gets out somehow.
>
> This design also uses P2WSH, so 51% miners, at least, cannot censor
> Numerifides commands: all they see is a hash of something which could be an
> LN fundchannel or an M-of-N SegWit script, etc.  We wait for the transaction
> to confirm (which starts the CSV relative-locktime countdown anyway), after
> which the miner cannot "take back" its confirmation of your Numerifides
> command without losing costly work, and only THEN reveal the P2WSH preimage
> on the Numerifides gossip overlay network.
>
> The gossip overlay then provides censorship resistance on top of that,
> revealing the preimage of the P2WSH (AFTER it has been confirmed onchain)
> and revealing your Numerifide command.  It is unlikely that anyone can stop
> the gossip overlay unless they control your entire Internet connection, in
> which case you have more serious problems and might not even be able to
> have a current view of the Bitcoin blockchain anyway.
>
> Already today any user that includes a commensurate miner's fee can use
> the pushdata opcodes and add whatever data they want to the blockchain.
>
>
> Granted.  It still makes bitcoin-dev cry when this is done.  And in any
> case, reducing the blockchain footprint has real benefits of reducing the
> amount that gets given to miners and increasing what can be put into
> command bids anyway.
>
>
> One thing that the design requires is a separate method of communicating
> bindings and not being censored - if it were onchain, a DNS lookup could
> simply be no more than a light client requesting the relevant block.
>
>
> Possibly.  Note however that the "publish everything onchain" design
> requires cooperation of a Bitcoin miner, since it seems you are using
> scriptpubkey rather than P2WSH.  In particular the IsStandard() check will
> mean your transaction will not get transmitted on the normal Bitcoin peer
> network and you will need direct connection and cooperation of a Bitcoin
> miner, to get your non-standard script in a scriptpubkey.
>
> If you intend to use P2SH or P2WSH, then you will need a gossip layer to
> reveal the script preimage anyway, so you might as well use the more
> efficient P2WSH-based construction I showed.
>
> I think anything that gets seriously far along will need to have some data
> crunched, and if only 100 users per day would fill up blocks, then of course
> constraints would necessitate other avenues.
>
>
> Yes.  Knowing that, we might as well start efficient.
>
> Regards,
> ZmnSCPxj
>