Re: [Lightning-dev] Link-level payment splitting via intermediary rendezvous nodes

2018-11-12 Thread ZmnSCPxj via Lightning-dev
Good morning Christian,

I am nowhere near a mathematician, and thus cannot countercheck your expertise 
here (nor give a counterproposal).

But I want to point out the below scenarios:

1.  C is the payer.  He is in contact with an unknown payee (who in reality is 
E).  E provides the onion-wrapped route D->E with ephemeral key and other data 
necessary, as well as informing C that D is the rendez-vous point.  Then C 
creates a route from itself to D (via channel C->D or via C->A->D).

2.  B is the payer.  He knows the entire route B->C->D->E and knows that the 
payee is E.  Unfortunately the C<->D channel is low-capacity or down, etc.  To 
C, B has provided the onion-wrapped route D->E with ephemeral key and other 
data necessary, as well as informing C that D is the next node.  Then C 
either pays via C->D or via C->A->D.

Even if there is an off-by-one error in our thinking about rendez-vous nodes, 
could it not be compensated also by an off-by-one in the link-level payment 
splitting via intermediary rendez-vous node?
In short, D is the one that switches keys instead of A.

The operation of processing a hop would be:

1.  Unwrap the onion with current ephemeral key.
2.  Dispatch based on realm byte.
2.1.  If realm byte 0:
2.1.1.  Normal routing behavior, extract HMAC, etc etc
2.2.  If realm byte 2 "switch ephemeral keys":
2.2.1.  Set current ephemeral key to bytes 1 -> 32 of packet.
2.2.2.  Shift onion by one hop packet.
2.2.3.  Goto 1.
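The dispatch loop above can be sketched as follows (a toy model in Python; actual decryption and HMAC verification are elided, and only the realm-byte dispatch and key switch are shown):

```python
# Toy model of the proposed hop processing: realm byte 0 routes normally,
# realm byte 2 switches the current ephemeral key and re-processes.
HOP_SIZE = 65  # 1 realm byte + 32 payload bytes + 32 HMAC bytes

def process_onion(onion: bytes, ephemeral_key: bytes):
    """Return (ephemeral_key, onion) once a realm-0 hop is reached."""
    while True:
        hop = onion[:HOP_SIZE]
        realm = hop[0]
        if realm == 0:
            # 2.1: normal routing behavior -- extract HMAC etc. (elided).
            return ephemeral_key, onion
        if realm == 2:
            # 2.2.1: set current ephemeral key to bytes 1..32 of the packet.
            ephemeral_key = hop[1:33]
            # 2.2.2: shift the onion by one hop packet; 2.2.3: goto 1.
            onion = onion[HOP_SIZE:]
        else:
            raise ValueError("unknown realm byte: %d" % realm)
```

This is only the control flow; in a real implementation step 1 ("unwrap the onion with the current ephemeral key") would decrypt before each dispatch.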

Would that not work?
(I am being naive here, as I am not a mathist and I did not understand half of 
what you wrote, sorry)

Then at C, we have the onion from D->E, and we also know the next ephemeral key 
to use (we can derive it, since we would pass it to D anyway).
C right-shifts the onion by one hop, storing the next ephemeral key in the new 
hop it just allocated.
Then it encrypts the onion using a new ephemeral key that it will use to 
generate the C->A->D part of the onion.

Regards,
ZmnSCPxj


Sent with ProtonMail Secure Email.

‐‐‐ Original Message ‐‐‐
On Tuesday, November 13, 2018 11:45 AM, Christian Decker wrote:

> Great proposal ZmnSCPxj, but I think I need to raise a small issue with
> it. While writing up the proposal for rendez-vous I came across a
> problem with the mechanism I described during the spec meeting: the
> padding at the rendez-vous point would usually be zero-padded and then
> encrypted in one go with the shared secret that was generated from the
> previous ephemeral key (i.e., the one before the switch). That ephemeral
> key is not known to the recipient (barring additional rounds of
> communication) so the recipient would be unable to compute the correct
> MACs. There are a number of solutions to this, basically setting the
> padding to something that the recipient could know when generating its
> half onion.
>
> My current favorite goes like this:
>
> 1.  Rendez-vous RV receives an onion, performs ECDH like normal to get
> the shared secret, decrypts its payload, simultaneously encrypts
> the padding.
>
> 2.  It extracts its per-hop payload and shifts the entire packet over
> (shift its payload out and the newly generated padding in)
>
> 3.  It then notices that it should perform an ephemeral key switch, now
> deviating from the normal protocol (which would just be to generate
> the new ephemeral key, serialize and forward)
> 3.1. It zero-fills the padding that it just added (so we are in a
> state that the recipient knew when generating its partial onion)
> 3.2 It performs ECDH with the switched-in ephemeral key to get a new
> shared secret, which is then used to unwrap one additional
> layer of encryption, and most importantly to encrypt the padding so
> the next hop doesn't see the zero-filled padding.
> 3.3 Only then will it generate the new ephemeral key for the next
> hop, based on the switched in ephemeral key and the newly
> generated shared secret, serialize the packet and forward it.
>
> This has the advantage of reusing all the existing machinery but
> assembling it a bit differently, by adding a little detour when
> generating the next onion. It involves one additional ECDH at the
> rendez-vous, one ChaCha20 encryption and one scalar multiplication to
> generate the next ephemeral keys. It does not need more space than the
> single ephemeral key in the per-hop payload.
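The detour described in steps 3.1-3.3 can be sketched with a toy model (this is NOT the real Sphinx construction: the group, the cipher, and all function names below are illustrative stand-ins chosen only to show the data flow; real implementations use secp256k1 ECDH and ChaCha20):

```python
# Toy sketch of the rendez-vous ephemeral-key-switch detour.  DH is modeled
# over a small multiplicative group (NOT secure) and the stream cipher as a
# SHA256-keyed XOR keystream, purely to demonstrate the flow of data.
import hashlib

P = 2**127 - 1  # toy group modulus (insecure, for illustration only)
G = 5

def ecdh(pub: int, priv: int) -> bytes:
    """Toy ECDH: hash of the shared group element."""
    return hashlib.sha256(str(pow(pub, priv, P)).encode()).digest()

def stream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR with a SHA256-derived keystream."""
    out, ctr = b"", 0
    while len(out) < len(data):
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, out))

def rendezvous_switch(packet: bytes, switched_pub: int, node_priv: int,
                      hop_size: int = 65) -> bytes:
    # 3.1: shift out our payload and zero-fill the padding we just added,
    #      reaching a state the recipient knew when building its half-onion.
    shifted = packet[hop_size:] + bytes(hop_size)
    # 3.2: ECDH with the switched-in ephemeral key; the new shared secret
    #      unwraps one more layer and, crucially, encrypts the zero padding.
    shared = ecdh(switched_pub, node_priv)
    return stream_xor(shared, shifted)
```

Step 3.3 (deriving the next ephemeral key from the switched-in key and the new shared secret) is omitted, since it depends on the concrete blinding scheme.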
>
> And now for the reason that I write this as a reply to your post: with
> this scheme it is not possible for C to find an ephemeral key that would
> end up identical to the one that D would require to decrypt the onion
> correctly. This would not be an issue if D is informed about this split
> and would basically accept whatever it gets, but that kind of defeats
> the transparency that you were going for with your proposal.
>
> I'm open for other proposals but I currently can't think of a way to
> make sure that a) the recipient can 

Re: [Lightning-dev] Packet switching via intermediary rendezvous node

2018-11-12 Thread Christian Decker
Hi ZmnSCPxj,

As I mentioned in the other mailing thread, we have a minor
complication in order to get rendez-vous working.

If I'm not mistaken it'll not be possible for us to have spontaneous
ephemeral key switches while forwarding a payment. Specifically either
the sender or the recipient have to know the switchover points in their
respective parts of the onion. Otherwise it'll not be possible to cover
the padding in the HMAC, for the same reason that we couldn't meet up
with the same ephemeral key at the rendez-vous point.

Sorry about not noticing this before.

Cheers,
Christian

ZmnSCPxj via Lightning-dev writes:
> Good morning list,
>
> Although packet switching was part of the agenda, we decided that we would 
> defer this to some later version of the BOLT spec.
>
> Interestingly, some sort of packet switching becomes possible due to the 
> below features we did not defer:
>
> 1.  Multi-hop onion packets (i.e. s/realm/packettype/)
> 2.  Identify "next" by node-id instead of short-channel-id (actually, we 
> solved this by "short-channel-id is not binding" and next hop is identified 
> by short-channel-id still).
> 3.  Onion ephemeral key switching (required by rendez-vous routing).
>
> ---
>
> Suppose we define the below packettypes (notice the type numbers below are 
> even, but I am uncertain how "it's OK to be odd" applies to this):
>
> packettype 0: same as current realm 0
> packettype 2: ephemeral key switch (use ephemeral key in succeeding 65-byte 
> packet)
> packettype 4: identify next node by node-id on succeeding 65-byte packet
>
> Suppose I were to receive a packettype 0 in an onion.  It identifies a 
> short-channel-id.  Now suppose this particular channel has no capacity.  As I 
> showed in the thread "Link-level payment splitting via intermediary rendezvous 
> nodes" 
> (https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-November/001547.html),
> it is possible that I can route it via some other route *composed of 
> multiple channels*, by using packettype 4 at the end of this route to connect 
> it to the rest of the onion I receive.
>
> However, in this case, in effect, the short-channel-id simply identifies the 
> "next" node along this route.
>
> Suppose we also identify a new packettype (packettype 4) where the "next" 
> node is identified by its node-id.
>
> Let us make the below scenarios.
>
> 1.  Suppose I have a channel with the node-id so identified, and suppose this 
> channel has capacity.  Then I can send the payment directly to that 
> node.  This is no different from today.
> 2.  Suppose I have a channel with the node-id so identified, but this channel 
> has no capacity.  However, I can look for an alternate route, and 
> by using the rendez-vous feature "switch ephemeral key" I can generate a route 
> that is multiple hops, in order to reach the identified node-id, and connect 
> the rest of the onion to this.  This case is the same as if the node were 
> identified by short-channel-id.
> 3.  Suppose I do not have a channel with the node-id so identified.  
> However, I can again look for an alternate route.  Again, by using the "switch 
> ephemeral key" feature, I can generate a route that is multiple hops, in 
> order to reach the identified node-id, and again connect the rest of the 
> onion to this.
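The three cases above can be sketched as a forwarding decision (channel state and pathfinding are stubbed out; every name here is illustrative, not proposed spec):

```python
# Sketch of the forwarding decision for a packettype-4 "next node by node-id"
# hop, covering the three cases described in the text.
def route_to_node(next_node_id, channels, find_route, pick_likely_peer):
    """Return the hops to traverse before re-attaching the rest of the onion."""
    chan = channels.get(next_node_id)
    if chan is not None and chan["capacity_ok"]:
        # Case 1: direct channel with capacity -- no different from today.
        return [next_node_id]
    # Cases 2 and 3: find a multi-hop route to the identified node; the rest
    # of the onion is connected via the switch-ephemeral-key packettype.
    route = find_route(next_node_id)
    if route is not None:
        return route
    # No known route: guess a peer likely to know the destination and
    # forward the same packettype-4 onion to it (packet switching).
    return [pick_likely_peer(next_node_id)]
```

The last branch is what turns case 3 into packet switching: the decision is delegated hop by hop rather than fixed by the sender.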
>
> Now, case 3 above can build up packet switching.  I might have a 
> routemap that contains the destination node-id and an accurate route 
> through the network, and identify the path directly to the next node.  If 
> not, I could guess/use statistics that one of my peers is likely to know how 
> to route to that node, and forward a packettype 4 to the same node-id to 
> my peer.
>
> This particular packet switching also allows some uncertainty about the 
> destination.  For instance, even if I wish to pay CJP, I actually make an 
> onion with packettype 4 Rene, packettype 4 CJP, packettype 0 HMAC=0.  Then I 
> send the above onion (appropriately layered-encrypted) to my direct peer 
> cdecker, who attempts to make an effort to route to Rene.  When Rene receives 
> it, it sees packettype 4 CJP, and then makes an effort to route to CJP, who 
> sees packettype 0 HMAC=0 meaning CJP is the recipient.
>
> Further, this is yet another use of the switch-ephemeral-key packettype.
>
> Thus:
>
> 1.  It allows packet switching
> 2.  It increases the anonymity set of rendez-vous routing.  A node that sees 
> packettype 2 (switch ephemeral key) does not know whether it is forwarding a 
> packet-switched payment or a link-level payment rerouting, or whether it is 
> the rendez-vous for a deniable payment.
> 3.  Mapless Lightning nodes 
> (https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-November/001552.html)
> could ask a peer to be their pathfinding provider, with some amount of 
> uncertainty (it is possible that somebody else sent a packettype 4 to me, and 
> I selected you as the peer who might know the destination; also, the 
> destination specified 

Re: [Lightning-dev] Approximate assignment of option names: please fix!

2018-11-12 Thread ZmnSCPxj via Lightning-dev
Good Morning Rusty,

OG AMP is inherently spontaneous in nature, so an invoice might not exist to 
put the feature on.
Thus it should be a global feature.

Do we tie spontaneous payment to OG AMP or do we support one which is payable 
by base AMP or normal singlepath?

Given that both `option_switch_ephkey` and `option_og_amp` require 
understanding extended onion packet types, would it not be better to merge them 
into `option_extra_onion_packet_types`?
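A merged feature bit would be tested like any other, following the even/odd companion-bit convention ("it's OK to be odd": even = compulsory, odd = optional). A small sketch, with a purely hypothetical bit number:

```python
# Sketch of checking a merged feature bit under the even/odd convention.
# The bit position is hypothetical; actual numbers would be assigned in the
# spec process described in the quoted mail below.
OPTION_EXTRA_ONION_PACKET_TYPES = 1 << 10  # hypothetical even (compulsory) bit

def supports(features: int, even_bit: int) -> bool:
    # A node supports the option if either the even (compulsory) bit or its
    # odd (optional) companion, one position higher, is set.
    return bool(features & (even_bit | (even_bit << 1)))
```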



Sent with ProtonMail Secure Email.

‐‐‐ Original Message ‐‐‐
On Tuesday, November 13, 2018 7:49 AM, Rusty Russell wrote:

> Hi all,
>
> I went through the wiki and made up option names (not yet
> numbers, that comes next). I re-read our description of global vs local
> bits:
>
> The feature masks are split into local features (which only
> affect the protocol between these two nodes) and global features
> (which can affect HTLCs and are thus also advertised to other
> nodes).
>
> You might want to promote your local bit to a global bit so you can
> advertize them (wumbo?)? But if it's expected that every node will
> eventually support a bit, then it should probably stay local.
>
> Please edit your bits as appropriate, so I can assign bit numbers soon:
>
> https://github.com/lightningnetwork/lightning-rfc/wiki/Lightning-Specification-1.1-Proposal-States
>
> Thanks!
> Rusty.
>
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev




[Lightning-dev] Approximate assignment of option names: please fix!

2018-11-12 Thread Rusty Russell
Hi all,

I went through the wiki and made up option names (not yet
numbers, that comes next).  I re-read our description of global vs local
bits:

The feature masks are split into local features (which only
affect the protocol between these two nodes) and global features
(which can affect HTLCs and are thus also advertised to other
nodes).

You *might* want to promote your local bit to a global bit so you can
advertize them (wumbo?)?  But if it's expected that every node will
eventually support a bit, then it should probably stay local.

Please edit your bits as appropriate, so I can assign bit numbers soon:


https://github.com/lightningnetwork/lightning-rfc/wiki/Lightning-Specification-1.1-Proposal-States

Thanks!
Rusty.


Re: [Lightning-dev] Recovering protocol with watchtowers

2018-11-12 Thread ZmnSCPxj via Lightning-dev
Good morning Margherita,

How does this scheme protect the privacy of the node?
I would at least suggest that the pubkey used be different from the node's 
normal pubkey.

If I find out the pubkey of A, can I spoof A and give a higher nonce value with 
a blob containing random data (which is indistinguishable from a 
properly-implemented ciphertext) to the watchtowers of A?  Presumably part of 
the protocol will require that any updates be signed using the key of A, 
otherwise I can easily corrupt the data being backed up for A.

In case of a breach while node A is offline, can the Watchtowers do anything?
Please note that the main purpose of Watchtowers is to handle breaches while 
the client is offline, not backup.
It would be pointless for a Watchtower to exist if it can only provide data 
while A is online.
The best time to attack A is when A is unable to defend itself.
Please refer to 
https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-April/001196.html
 for information on what we consider the best design for watchtowers so far.

Please note also that you cannot make a single channel with multiple peers; 
you must make a channel for each peer.
(Well, it is theoretically possible, with eltoo, to make a multi-party channel, 
but it requires all members to be online at every update, such that if a single 
party is offline, the channel is unusable; having multiple 2-party channels 
with each peer is far better, since if one of your peers drops, you can use the 
channels with other peers.)

Regards,
ZmnSCPxj

‐‐‐ Original Message ‐‐‐
On Tuesday, November 13, 2018 2:59 AM, Margherita Favaretto wrote:

> Hello, dev-lightning community,
> I’m writing to you to share an update on my Thesis project (previous e-mail 
> subject: Recovering protocol for Lightning network, 11/01/2018).
> I warn you that this message is quite long, but I hope it could be 
> interesting for you, and it would be great to receive your opinions. :-)
>
> The problem that I'm focusing on is the recovery mechanism for false 
> positives in the Lightning network, which can be re-defined as how to provide 
> a backup solution in the network in the case of false-positive nodes, with 
> particular attention to privacy and security.
> First of all, compared to the solution proposed in the previous e-mail, I've 
> decided not to use the other connected nodes as the backup of recent status, 
> but to use Watchtowers instead. In fact, the problem with using a normal node 
> is that it might be offline, and so it could not guarantee the backup service.
> In my design, I consider a watchtower simply as a full node that is online 
> 24h, but I have not considered the mechanism of monitoring channel status 
> (maybe we can overlap the two functions later).
>
> An example is that in the near future, the main e-commerce organizations may 
> offer a new service of "Watchtower- Recovery", that the nodes can purchase to 
> back up their commitments data. This means that the user can leverage a 
> payment channel with the watchtower offering the service.
>
> This feature strongly suggests using more than one watchtower, to mitigate 
> the risk that a single watchtower is attacked and all data inside are deleted.
>
> In my solution, I define two new concepts:
> - nonce-time Tn, the current value of the nonce-time (a sequential integer 
> number that defines the order of the backups)
> - payload P, which consists of
>   1. a zip of all channel statuses of a node A at a specific time T1
>   2. the nonce-time corresponding to the time T1 of the status contained in 
> the zip
>   3. the channel_id of the channel with A
> This payload is encrypted with the public key of node A, so the watchtowers 
> cannot know the channel status of A. -> {zip(T1), T1, channel_id} pk(A)
>
> The idea is not to send all data to all watchtowers, but to send the actual 
> nonce-time and the actual payload to just one of the watchtowers, and only 
> the new nonce-time to the others. Therefore, we can split the data across 
> different watchtowers, without sending the payload after each transaction to 
> all of them.
>
> To explain the design, let's consider Alice, who has a channel (with the 
> eltoo protocol) with each of Bob, Charlie, and Diana, and three watchtowers 
> W0, W1 and W2.  Every time Alice is online, she is connected to the three 
> watchtowers.
>
> How to send data to the watchtowers
>
> Alice and Bob change their channel status. So, Alice sends the new status to 
> watchtower W0 and shares the current nonce-time with W1 and W2. When 
> Alice sends her information to the three watchtowers, they memorize the 
> node, current nonce-time, and payload:
>
> W0: A T0 P0
> W1: A T0   -
> W2: A T0   -
>
> Alice and Charlie change the status of their channel. So, Alice sends the 
> new status to W1 and sends the new nonce-time to the others, which replaces 
> the previous current nonce-time in the information for A:
>
> W0: A T1 P0
> W1: A T1 P1
> W2: A T1  -
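The two state tables above can be reproduced with a short sketch, assuming the payload-receiving watchtower rotates with the nonce-time (W0, W1, W2, ...), which is what the example sequence suggests:

```python
# Sketch of the backup distribution: each update sends the full payload to
# one watchtower and only the new nonce-time to the others.
def backup_update(watchtowers, nonce_time, payload):
    """Record one channel-state update across the watchtowers."""
    target = nonce_time % len(watchtowers)  # assumed round-robin rotation
    for i, wt in enumerate(watchtowers):
        wt["nonce_time"] = nonce_time
        if i == target:
            wt["payload"] = payload

watchtowers = [{"nonce_time": None, "payload": None} for _ in range(3)]
backup_update(watchtowers, 0, "P0")  # Alice<->Bob update:     W0 stores P0
backup_update(watchtowers, 1, "P1")  # Alice<->Charlie update: W1 stores P1
```

After these two updates the state matches the second table: W0 holds (T1, P0), W1 holds (T1, P1), and W2 holds (T1, no payload).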
>
> Alice and Diana 

Re: [Lightning-dev] Proposal for Advertising Channel Liquidity

2018-11-12 Thread ZmnSCPxj via Lightning-dev
Good morning lisa,

>>> can simply close the channel. So if I'm charging for liquidity, I'd actually
>>> want to charge for the amount (in mSAT/BTC) times time.
>>
>> So perhaps you could make a market here by establishing a channel saying
>> that
>>
>>   "I'll pay 32 msat per 500 satoshi per hour for the first 3 days"
>>
>> When you open the channel with 500,000 satoshi donated by the other guy,
>> you're then obliged to transfer 32 satoshi every hour to the other guy
>> for three days (so a total of 14c or so).
>>
>> If the channel fails beforehand, they don't get paid; if you stop
>> paying you can still theoretically do a mutual close.
>
> I think that this can also be gamed by a second, cooperating node that sends 
> payments through the channel to meet the rate and capture the fees for the 
> first. You can make this less likely by charging higher transmission fees 
> that make such an attack infeasible, and it's less 'damaging' than an 
> immediate close in that there's still open capacity available for some time, 
> at least until the 'bogus' payments have drained the capacity that you 
> solicited in the first place.

I believe not?
I do not see any terms in the contract regarding payments through the channel 
other than the "liveness" payment.
So regardless of activity (or lack of activity) in the channel, the above 
payments should be made.
If the taker misses a payment, the maker closes the channel outright, freeing 
itself from the obligation.
If the maker refuses to route, it loses out on potential routing fees.
Any activity through it does not seem to matter.

This mechanism may actually be superior to the CLTV-encumberance I and Rene 
proposed.

Regards,
ZmnSCPxj


Re: [Lightning-dev] Proposal for Advertising Channel Liquidity

2018-11-12 Thread ZmnSCPxj via Lightning-dev
Good morning lisa,

> As you point out below, at the very least a liquidity providing node would 
> get paid. Another thing worth considering is that the spec, as written, is 
> merely a mechanism for advertising and receiving offers for dual funding. 
> There are no rules about what offers you, as a liquidity advertising node, 
> have to accept. A node operator has the flexibility to reject any offer above 
> or below their stated fee rate, or if they don't relish the idea of funding a 
> badly skewed channel. If you're worried about capital being tied up 
> unnecessarily, you can reject offers without a sizeable input of their own.
>
> There are, however, scenarios where requests for badly skewed channels make 
> sense. Imagine that you're a large vendor, such as Amazon. You're likely not 
> going to ever need much outbound capacity, but you will be perpetually in the 
> market for more inbound capacity.
>
> In fact, as a liquidity provider, I think that you'll probably be delighted 
> to have an open channel with Amazon, as there's a good chance that channel 
> will be highly utilized, which means more fee traffic for you, and a high 
> probability that they'll be requesting more liquidity from you in the future, 
> as the existing channel gets unbalanced.

This is correct and I have since changed my mind.  A true market would only 
impose that the market taker actually pay for the service.

>> A counterpoint to this argument, however, is that if the fee for the 
>> liquidity is high enough, then it does not matter to you whether I use the 
>> 1.0 BTC or not: you have already been paid for it.
>>
>> This however brings up the other potential attack:
>>
>> 1.  I advertise that I have 1.0 BTC available for liquidity requests.
>> 2.  You answer this advertisement, and pay me a good fee for this 1.0 BTC 
>> being locked into a channel.
>> 3.  After the channel is opened, I immediately close it, having earned the 
>> fee for liquidity, without in fact delivering that liquidity.
>>
>> Perhaps we can modify channel commitment transactions for channels opened 
>> via liquidity requests, so that they have an `nSequence` that prevents them 
>> from being claimed for a month or so.  What do you think?
>
> At what point should a liquidity providing node (maker) be able to close the 
> channel? Immediately is not very beneficial to either of you -- you both tied 
> up your money for the time required to push through bitcoin txns through, 
> plus you lose closing + opening fees. Stipulating a length of time isn't 
> necessarily beneficial either -- if you've connected to a high volume payment 
> channel, the liquidity you've provided will be used up rather quickly, 
> rendering the channel itself pretty useless.

Please see the other thread regarding the proposed mechanism that I and Rene 
generated.

https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-November/001555.html

In this mechanism, only the liquidity provider is encumbered by the agreed-upon 
channel lifetime.

In particular, section "Reduction of Licky Obligation", I point out that if the 
merchant has received funds, then the money of the liquidity provider that is 
encumbered by this lifetime is reduced.
That is, the channel balance on the side of the liquidity provider is reduced 
due to the merchant receiving funds.
I pointed out also, that this should be perfectly fine for the merchant, since 
the point of it is to receive payments, and this change in channel balance 
implies that the merchant has received payments.

Once the channel has saturated to the minimum receivable amount, only the 
channel reserve of the liquidity provider remains encumbered with the channel 
lifetime.
It would be quite fine for the liquidity provider to close the channel, as the 
locked funds are now only quite small.
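The reduction of obligation described above can be sketched as follows (the function and parameter names are illustrative, not from the proposal):

```python
# Sketch of the "reduction of obligation" idea: the amount still encumbered
# by the channel lifetime is the provider-side balance, which shrinks as the
# merchant receives payments, down to the provider's channel reserve.
def encumbered_msat(provider_funding_msat: int,
                    merchant_received_msat: int,
                    provider_reserve_msat: int) -> int:
    remaining = provider_funding_msat - merchant_received_msat
    return max(remaining, provider_reserve_msat)
```

Once the channel is saturated to the minimum receivable amount, only the reserve remains encumbered, matching the paragraph above.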

> Considering incentives, keeping a high-traffic channel open should be worth 
> more in routing fees than the liquidity that you've provided. If the 
> liquidity market acts rationally, it should price itself to reflect this 
> reality and the risk of being laolu'd should remain fairly insignificant.

There are two sides of the laolu attack (I actually came up with both sides of 
the attack and wrote it on-list before the summit, but I suppose "being 
laolued" is easier to say than "being ZmnSCPxjed").

1.  If the liquidity market values the routing fees more than the liquidity 
fees, such that liquidity fees are small, then I can attack this market by 
requesting capacity, paying a tiny liquidity fee, then shutting off my node and 
letting the liquidity provider close the channel after it realizes it will 
never earn routing fees from me.
2.  If the liquidity market values the liquidity fees more than the routing 
fees, then I can attack this market by offering capacity, then closing the 
channel and re-offering my freed capacity to another customer.

To prevent one of the above attacks should be sufficient, since this will lead 
the market to 

Re: [Lightning-dev] Proposal for Advertising Channel Liquidity

2018-11-12 Thread lisa neigut
On Wed, Nov 7, 2018 at 11:02 PM Olaoluwa Osuntokun wrote:

> > A node, via their node_announcement,
>
> Most implementations today will ignore node announcements from nodes that
> don't have any channels, in order to maintain the smallest routing set
> possible (no zombies, etc). It seems for this to work, we would need to
> undo
> this at a global scale to ensure these announcements propagate?
>

Right on. I'm not too worried about this tbh; a new node on the network
could easily fix this by taking liquidity from another node that's already
offering it, to create a few balanced channels to itself. This would a) put
it on the map and b) make the liquidity that it's looking to offer more
valuable, as the other channels it's opened make it more likely to be
routed through.


>
> Aside from the incentives for leeches to arise that accept the fee then
> insta-close (they just drain the network and then no one uses this), I
> think
> this is a dope idea in general! In the past, I've mulled over similar
> constructions under a general umbrella of "Channel Liquidity Markets"
> (CLM),
> though via extra-protocol negotiation.
>
> -- Laolu
>
>
> On Wed, Nov 7, 2018 at 2:38 PM lisa neigut wrote:
>
>> Problem
>> 
>> Currently it’s difficult to reliably source inbound capacity for your
>> node. This is incredibly problematic for vendors and nodes hoping to setup
>> shop as a route facilitator. Most solutions at the moment require an
>> element of out of band negotiation in order to find other nodes that can
>> help with your capacity needs.
>>
>> While splicing and dual funding mechanisms will give some relief by
>> allowing for the initial negotiation to give the other node an opportunity
>> to put funds in either at channel open or after the fact, the problem of
>> finding channel liquidity is still left as an offline problem.
>>
>> Proposal
>> =
>> To solve the liquidity discovery problem, I'd like to propose allowing
>> nodes to advertise initial liquidity matching. The goal of this proposal
>> would be to allow nodes to independently source inbound capacity from a
>> 'market' of advertised liquidity rates, as set by other nodes.
>>
>> A node, via their node_announcement, can advertise that they will match
>> liquidity, and the fee rate that they will charge, to any incoming
>> open_channel request that requests it.
>>
>> `node_announcement`:
>> new feature flag: option_liquidity_provider
>> data:
>>  [4 liquidity_fee_proportional_millionths] (option_liquidity_provider)
>> fee charged per satoshi of liquidity added at channel open
>>  [4 liquidity_fee_base_msat] (option_liquidity_provider) base fee charged
>> for providing liquidity at channel open
>>
>> `open_channel`:
>> new feature flag (channel_flags): option_liquidity_buy [2nd least
>> significant bit]
>> push_msat: set to fee payment for requested liquidity
>> [8 liquidity_msat_request]: (option_liquidity_buy) amount of dual funding
>> requested at channel open
>>
>> `accept_channel`:
>> tbd. hinges on a dual funding proposal for how second node would send
>> information about their funding input.
>>
>> If a node cannot provide the liquidity requested in `open_channel`, it
>> must return an error.
>> If the amount listed in `push_msat` does not cover the amount of
>> liquidity provided, the liquidity provider node must return an error.
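The proposal doesn't spell out the fee formula, but assuming it mirrors the base/proportional split used for routing fees (which the two advertised fields suggest), the payment that `push_msat` must cover would be:

```python
# Sketch of the fee a liquidity taker would owe under the advertised rates.
# The formula is an assumption mirroring fee_base_msat +
# fee_proportional_millionths as used for routing fees.
def liquidity_fee_msat(liquidity_msat_request: int,
                       fee_base_msat: int,
                       fee_proportional_millionths: int) -> int:
    return (fee_base_msat +
            liquidity_msat_request * fee_proportional_millionths // 1_000_000)
```

If `push_msat` is below this amount, the liquidity provider returns an error, per the rule above.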
>>
>> Errata
>> ==
>> It's an open question as to whether or not a liquidity advertising node
>> should also include a maximum amount of liquidity that they will
>> match/provide. As currently proposed, the only way to discover if a node
>> can meet your liquidity requirement is by sending an open channel request.
>>
>> This proposal depends on dual funding being possible.
>>
>> Should a node be able to request more liquidity than they put into the
>> channel on their half? In the case of a vendor who wants inbound capacity,
>> capping the liquidity request allowed seems unnecessary.
>>
>> Conclusion
>> ===
>> Allowing nodes to advertise liquidity paves the way for automated node
>> re-balancing. Advertised liquidity creates a market of inbound capacity
>> that any node can take advantage of, reducing the amount of out-of-band
>> negotiation needed to get the inbound capacity that you need.
>>
>>
>> Credit to Casey Rodamor for the initial idea.
>> ___
>> Lightning-dev mailing list
>> Lightning-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>>
>


Re: [Lightning-dev] Proposal for Advertising Channel Liquidity

2018-11-12 Thread lisa neigut
On Wed, Nov 7, 2018 at 10:17 PM Anthony Towns wrote:

> On Wed, Nov 07, 2018 at 06:40:13PM -0800, Jim Posen wrote:
> > can simply close the channel. So if I'm charging for liquidity, I'd
> > actually want to charge for the amount (in mSAT/BTC) times time.
>
> So perhaps you could make a market here by establishing a channel saying
> that
>
>   "I'll pay 32 msat per 500 satoshi per hour for the first 3 days"
>
> When you open the channel with 500,000 satoshi donated by the other guy,
> you're then obliged to transfer 32 satoshi every hour to the other guy
> for three days (so a total of 14c or so).
>
> If the channel fails beforehand, they don't get paid; if you stop
> paying you can still theoretically do a mutual close.
>
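As a sanity check of the quoted arithmetic (the "14c or so" figure assumes a late-2018 exchange rate of roughly $6,400/BTC):

```python
# Checking the example: 32 msat per 500 satoshi per hour, on a
# 500,000-satoshi channel, for 3 days.
rate_msat_per_500sat_hour = 32
channel_sat = 500_000
hours = 3 * 24

per_hour_msat = channel_sat // 500 * rate_msat_per_500sat_hour  # msat per hour
total_sat = per_hour_msat * hours // 1000                       # total satoshi
```

This gives 32 satoshi per hour and 2,304 satoshi over three days, consistent with the quoted "32 satoshi every hour" and roughly 14 US cents at that rate.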

I think that this can also be gamed by a second, cooperating node that
sends payments through the channel to meet the rate and capture the fees
for the first. You can make this less likely by charging higher
transmission fees that make such an attack infeasible, and it's less
'damaging' than an immediate close in that there's still open capacity
available for some time, at least until the 'bogus' payments have drained
the capacity that you solicited in the first place.


> Maybe a complicated addition to the protocol though?
>
> Cheers,
> aj
>
>


Re: [Lightning-dev] Proposal for Advertising Channel Liquidity

2018-11-12 Thread lisa neigut
Hello ZmnSCPxj,

You bring up some good points.

On Wed, Nov 7, 2018, 21:19 ZmnSCPxj wrote:

> Good morning Lisa,
>
> On Wednesday, November 7, 2018 2:17 PM, ZmnSCPxj via Lightning-dev <
> lightning-dev@lists.linuxfoundation.org> wrote:
>
> Good morning Lisa,
>
> >Should a node be able to request more liquidity than they put into the
> channel on their half? In the case of a vendor who wants inbound capacity,
> capping the liquidity request
> >allowed seems unnecessary.
>
> My initial thought is that it would be dangerous to allow the initiator of
> the request to request for arbitrary capacity.
>
> For instance, suppose that, via my legion of captive zombie computers
> (which are entirely fictional and exist only in this example, since I am an
> ordinary human person) I have analyzed the blockchain and discovered that
> you have 1.0 BTC you have reserved for liquidity requests under this
> protocol.  I could then have one of those computers spin up a temporary
> Lightning Node, request 1.0 BTC of incoming capacity for only a nominal
> fee, then shut down the node permanently, leaving your funds in an
> unusable channel, unable to earn routing fees or such.  This loses you
> potential earnings from this 1.0 BTC.
>
> If instead I were obligated to have at least matching capacity tied into
> this channel, then I would also be tying up at least 1.0 BTC into this
> channel as well, making this attack more expensive for me, as it also loses
> me any potential earnings from the 1.0 BTC of my own that I have locked up.
>
As you point out below, at the very least a liquidity providing node would
get paid. Another thing worth considering is that the spec, as written, is
merely a mechanism for advertising and receiving offers for dual funding.
There are no rules about what offers you, as a liquidity advertising node,
have to accept. A node operator has the flexibility to reject any offer
above or below their stated fee rate, or if they don't relish the idea of
funding a badly skewed channel. If you're worried about capital being tied
up unnecessarily, you can reject offers that don't include a sizeable input
of the requester's own.
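To put rough numbers on the zombie-node attack quoted above (every figure here is an assumption of mine, purely for illustration): the victim's loss is the routing income the locked capital could have earned, while the attacker's cost depends on whether matching capital is required.

```python
# Illustrative numbers (all assumed) for the zombie-node attack:
# the victim's 1.0 BTC sits in a dead channel either way, but requiring
# the requester to commit matching capital makes the attack roughly as
# expensive for the attacker as it is for the victim.

victim_capital_btc = 1.0
annual_routing_yield = 0.01     # assumed 1%/yr earnable by deployed capital
lock_months = 3                 # assumed time until the victim gives up and closes

victim_loss_btc = victim_capital_btc * annual_routing_yield * lock_months / 12

attacker_cost_unmatched = 0.0001  # just a nominal liquidity fee, assumed
attacker_cost_matched = victim_loss_btc + attacker_cost_unmatched  # same capital locked

print(victim_loss_btc, attacker_cost_unmatched, attacker_cost_matched)
```

Under these (made-up) numbers the unmatched attack costs the attacker ~25x less than it costs the victim, which is the asymmetry the matching-capacity requirement removes.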

There are, however, scenarios where requests for badly skewed channels make
sense. Imagine that you're a large vendor, such as Amazon. You're likely
not going to ever need much outbound capacity, but you will be perpetually
in the market for more inbound capacity.

In fact, as a liquidity provider, I think that you'll probably be delighted
to have an open channel with Amazon, as there's a good chance that channel
will be highly utilized, which means more fee traffic for you, and a high
probability that they'll be requesting more liquidity from you in the
future, as the existing channel gets unbalanced.


> A counterpoint to this argument, however, is that if the fee for the
> liquidity is high enough, then it does not matter to you whether I use the
> 1.0 BTC or not: you have already been paid for it.
>
> This however brings up the other potential attack:
>
> 1.  I advertise that I have 1.0 BTC available for liquidity requests.
> 2.  You answer this advertisement, and pay me a good fee for this 1.0 BTC
> being locked into a channel.
> 3.  After the channel is opened, I immediately close it, having earned the
> fee for liquidity, without in fact delivering that liquidity.
>
> Perhaps we can modify channel commitment transactions for channels opened
> via liquidity requests, so that they have an `nSequence` that prevents them
> from being claimed for a month or so.  What do you think?
>
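For concreteness, the `nSequence` relative timelock suggested in the quote would be encoded per BIP 68; a sketch of the encoding (my own illustration, not part of the proposal), showing both the block-count and 512-second-granule variants:

```python
# Sketch (my own illustration): encoding a ~1-month relative timelock
# in nSequence, per BIP 68.

SEQUENCE_LOCKTIME_TYPE_FLAG = 1 << 22   # set => value counts 512-second granules
SEQUENCE_LOCKTIME_MASK = 0xFFFF         # low 16 bits hold the lock value

def nsequence_blocks(blocks: int) -> int:
    """Relative lock measured in blocks (type flag clear)."""
    assert 0 <= blocks <= SEQUENCE_LOCKTIME_MASK
    return blocks

def nsequence_seconds(seconds: int) -> int:
    """Relative lock measured in 512-second granules (type flag set)."""
    granules = seconds // 512
    assert 0 <= granules <= SEQUENCE_LOCKTIME_MASK
    return SEQUENCE_LOCKTIME_TYPE_FLAG | granules

one_month_blocks = nsequence_blocks(30 * 144)        # ~30 days at 144 blocks/day = 4320
one_month_time = nsequence_seconds(30 * 24 * 3600)   # ~30 days in 512 s granules
```

Whether the lock would sit on the whole commitment transaction or only on the maker's output (so the taker can still spend normally) is exactly the kind of detail the proposal would need to settle.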

At what point should a liquidity providing node (maker) be able to close
the channel? Immediately is not very beneficial to either of you -- you
both tied up your money for the time required to push bitcoin txns
through, plus you lose closing + opening fees. Stipulating a length of time
isn't necessarily beneficial either -- if you've connected to a high volume
payment channel, the liquidity you've provided will be used up rather
quickly, rendering the channel itself pretty useless.

I think there's definitely some clever things we can do to provide stronger
guarantees around a 'minimum service offer', and they can be investigated
independently of the advertisement mechanism that I've proposed here.
Independent of what guarantees the protocol offers, there's a bunch of
strategies that individual nodes can additionally take to limit potential
losses: soliciting small liquidity offers to start, shopping around
for the best rates, blacklisting IP addresses/node ids of unreliable
nodes, and using a ratcheting mechanism (start with a small liquidity request
that you close/rebalance upward as the incoming capacity is drained).

Considering incentives, keeping a high-traffic channel open should be worth
more in routing fees than the liquidity that you've provided. If the
liquidity market acts rationally, it should price itself to reflect this
reality and the risk of being laolu'd should remain fairly insignificant.
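A toy calculation of that pricing intuition (all rates here are made-up placeholders, not numbers from the proposal): a rational liquidity quote should stay below the routing income the channel is expected to generate while open.

```python
# Toy pricing comparison (all numbers are made-up placeholders):
# a maker should quote a liquidity fee no higher than the routing
# income the channel is expected to earn, or takers will shop elsewhere.

liquidity_sat = 1_000_000
fee_ppm = 1_000                 # assumed 0.1% proportional routing fee
daily_volume_sat = 200_000      # assumed traffic through the channel
expected_days_open = 30

routing_income_sat = daily_volume_sat * fee_ppm // 1_000_000 * expected_days_open
max_rational_quote_sat = routing_income_sat  # upper bound on the liquidity fee

print(routing_income_sat)
```

Under these placeholder numbers the channel earns ~6,000 sat in routing fees over its life, which bounds what a rational market would charge for the 1,000,000 sat of advertised liquidity.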

