Re: [Lightning-dev] Proposal for Advertising Channel Liquidity

2018-11-07 Thread Jim Posen
Thanks for proposing this! I think it is absolutely one of the biggest
onboarding/usability challenges for many use cases.

My first thought is that, as ZmnSCPxj mentioned, the person offering
liquidity can simply close the channel. So if I'm charging for liquidity,
I'd actually want to charge for the amount (in mSAT/BTC) times time --
something like 1 mSAT per satoshi of bandwidth per hour. I don't think
there's a perfect way of enforcing this at the protocol layer, but maybe
you could lock up the fees in the channel reserve, which decreases over
time and gets donated to miners on an early close?
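
To make that pricing rule concrete, here's a rough sketch (Python; the rate,
the units, and the linear reserve decay are illustrative assumptions, not a
concrete proposal):

    MSAT_PER_SAT_PER_HOUR = 1  # hypothetical rate: 1 mSAT per sat of liquidity per hour

    def liquidity_fee_msat(liquidity_sat, hours, rate=MSAT_PER_SAT_PER_HOUR):
        """Fee for renting `liquidity_sat` of inbound capacity for `hours`."""
        return int(liquidity_sat * hours * rate)

    def remaining_reserve_msat(total_fee_msat, elapsed_hours, lease_hours):
        """Portion of the pre-paid fee still locked in the channel reserve.

        The reserve decays linearly; whatever is still locked when the
        liquidity provider closes early would be forfeited (e.g. burned to
        miner fees), so early closes get cheaper as the lease runs out.
        """
        remaining = max(0.0, 1.0 - elapsed_hours / lease_hours)
        return int(total_fee_msat * remaining)

    # Example: rent 1 BTC (100M sat) of inbound liquidity for 30 days.
    fee = liquidity_fee_msat(100_000_000, 30 * 24)
    print(fee, remaining_reserve_msat(fee, elapsed_hours=10 * 24, lease_hours=30 * 24))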

Instead of a flat payment for liquidity, I've considered in the past a
model where you pre-pay on fees. So if I'm a large merchant and I expect to
be receiving lots of volume in payments, it is totally rational for you to
put up liquidity by opening a channel to me, because you will earn fees on
payments routed to me through that channel. So what I could do to convince
you is to say, "I expect if you open a 1 BTC channel to me, you will earn
at least 10 mSAT per minute in routing fees. And if you don't, I'll cover
the difference." So every minute, I'll pay you 10 mSAT up front, then for
all HTLCs that come through the channel to me up to that limit, you'll
forward the fees on to me as reimbursement. I don't think this protocol is
any less vulnerable to attacks, but it perhaps aligns incentives better?
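
As a toy illustration of that settlement as I picture it (the 10 mSAT
guarantee and the function below are just for exposition, not a protocol):

    GUARANTEE_MSAT_PER_MIN = 10  # hypothetical guaranteed routing income

    def settle_minute(routing_fees_earned_msat, guarantee_msat=GUARANTEE_MSAT_PER_MIN):
        """Return (merchant_pays_upfront, liquidity_provider_reimburses).

        The merchant pre-pays the guarantee; the liquidity provider forwards
        back the fees actually earned on HTLCs routed to the merchant, capped
        at the guarantee, so its net income is max(guarantee, actual fees).
        """
        reimbursement = min(routing_fees_earned_msat, guarantee_msat)
        return guarantee_msat, reimbursement

    print(settle_minute(2))   # (10, 2): a quiet minute, provider nets 10 + 2 - 2 = 10
    print(settle_minute(50))  # (10, 10): a busy minute, provider nets 10 + 50 - 10 = 50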

My other concern with this sort of proposal is that it makes it easier to
perform HTLC withholding/loop attacks, which are executed by the receiving
end of a circuit. Currently on the network, there's a nice built-in
protection in that it's not obvious how to convince a victim to open a channel
to you. This is probably something that should get dealt with separately,
but part of me doubts that it'll be possible to create a liquidity market
without factoring in reputation.


Re: [Lightning-dev] Commitment Transaction Format Update Proposals?

2018-10-20 Thread Jim Posen
Instead of leaving an extra output for CPFP, is it not sufficient to just
sign all inputs with ANYONECANPAY and expect the sender to prepare an output
of the exact size to spend as the fee input? It would require an extra tx
assuming they don't
already have a properly sized UTXO handy (which they may!), but I believe
CPFP would require that as well. Am I missing something?
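
To spell out the fee arithmetic I have in mind (plain Python, no real
transactions or signing; it only illustrates why the attached input's full
value becomes fee once all outputs are fixed):

    def fee_after_bump(commitment_input_sat, output_total_sat, extra_input_sat):
        """Fee of the commitment tx once the broadcaster attaches an extra input.

        With SIGHASH_ALL | SIGHASH_ANYONECANPAY the original signature commits
        to all outputs but only to its own input, so another input can be added
        without invalidating it. Because the outputs cannot change, every
        satoshi of the added input becomes fee.
        """
        return (commitment_input_sat + extra_input_sat) - output_total_sat

    # Commitment spends a 1,000,000 sat funding output into 999,000 sat of
    # outputs (1,000 sat built-in fee); the broadcaster attaches a 5,000 sat UTXO.
    print(fee_after_bump(1_000_000, 999_000, 5_000))  # 6000 sat total fee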

I'm a fan of the symmetric delays because it simplifies the game theory
analysis, but I don't think the delays need to be the same for both
participants (max of `to_self_delay` for both sides), just that the delay
is applied equally regardless of who publishes the commitment tx. E.g. your
`to_self_delay` can be what I specify and vice versa; what's the reason for
taking the max?

-jimpo


Re: [Lightning-dev] Mitigations for loop attacks

2018-05-10 Thread Jim Posen
Hmm, I'm not quite following the situation. What do you mean by "directs
normal traffic"? Since the sender constructs the entire circuit, routing
nodes do not get any discretion over which nodes to forward a payment to,
only whether to forward or fail. What an attacker could do is perform a
loop attack: send a payment to another node that they control and delay
the payment on the receiving end. Note that the sending node loses no
reputation, only the receiving node. Since the hops being attacked are the
ones in the middle and they are faithfully enforcing the reputation
protocol, the receiving node's reputation should be penalized properly,
making it unlikely the attack will succeed in a second attempt.

On Thu, May 10, 2018 at 2:56 PM, Chris Gough wrote:

> hello, I'm a curious lurker trying to follow this conversation:
>
> On Thu, 10 May 2018, 2:40 pm ZmnSCPxj via Lightning-dev, <
> lightning-dev@lists.linuxfoundation.org> wrote:
>
>>
>> The concern however is that the CLTV already partly leaks the distance
>> from the payee, whereas the reputation-loss-rate leaks distance from the
>> payer.  It is often not interesting to know that some entity is getting
>> paid, but it is probably far more interesting to know WHO paid WHOM, so
>> leaking both distances simultaneously is more than twice as bad as leaking
>> just one distance.
>>
>
> Consider an asymmetrically-resourced malevolent node that wants the ability
> to harm specific small nodes without acquiring a bad reputation (and is
> willing to pay for it). In preparation, this bad boss node directs normal
> traffic to sacrificial nodes they control, while understating the
> reputation-risk (truthfully as it turns out, because they have out of band
> influence over the node). When the time comes, the sacrificial node
> inflicts delay on the victim node and they both suffer, while the boss
> keeps her nose clean.
>
> Is it the case that understating the risk of legitimate traffic from the boss
> node to the sacrificial node effectively allows transfer of reputation to the
> sacrificial node in preparation for attack, while at the same time
> obscuring their association?
>
> Chris Gough
>
>>
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Mitigations for loop attacks

2018-05-09 Thread Jim Posen
One more point in terms of information leakage is that noise can be added
to the "this is the rate that you'll lose reputation at" field to help
obfuscate the number of upstream hops. I proposed setting it to "this is
the upstream rate that I'm losing reputation at" + downstream HTLC value,
but a node can decide to add noise. If they make it too low, however,
there's a risk of insufficiently punishing bad nodes, and if they make it
too high, there's a heightened risk that the payment fails because the
downstream reputation is insufficient along the route.
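
Here is a toy sketch of how I picture that rate being advertised hop by hop
(field names, the units, and the zero starting rate are assumptions for
illustration only):

    import random

    def advertised_loss_rate(upstream_rate, htlc_value_msat, noise_msat=0):
        """Rate-of-reputation-loss a hop advertises to its downstream peer.

        Baseline rule from this thread: upstream rate + downstream HTLC value.
        A hop may add noise to obscure how many upstream hops there are; too
        little under-punishes a delaying peer, too much risks the payment
        failing on downstream reputation checks.
        """
        return max(0, upstream_rate + htlc_value_msat + noise_msat)

    # Three hops forwarding a 1000-msat HTLC; the second hop adds random noise.
    r1 = advertised_loss_rate(0, 1000)
    r2 = advertised_loss_rate(r1, 1000, noise_msat=random.randint(-200, 200))
    r3 = advertised_loss_rate(r2, 1000)
    print(r1, r2, r3)  # rates grow hop by hop, so a delaying node near the end loses most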

This is why I say it's kind of symmetric to the CLTV delta: if the delta is
too low, there's a risk of loss of funds; if the delta is too high, someone
might decide to fail the payment instead of taking the delay risk.

On Wed, May 9, 2018 at 10:23 AM, Jim Posen <jim.po...@gmail.com> wrote:

> Thanks for the thoughtful responses.
>
> > You missed the vital detail: that you must prove channel closure if you
> > can't unpeel the onion further.  That *will* hit an unresponsive party
> > with a penalty.
>
> Ah, that is a good point. I still find the proposal overall worryingly
> complex in terms of communication overhead, time it takes to prove channel
> closure, all of your points in [1], [2], [3], etc. Furthermore, this
> mandates that immediate channel closure is the only allowed reaction to a
> party delaying an HTLC for a time period above a threshold -- the node
> reputation approach gives more discretion to the preceding hop.
> Deobfuscating the route may turn out to be the right option, but I think
> the reputation system has certain advantages over this.
>
> > The models we tried in Milan all created an incentive to fail payments,
> > which is a non-starter.
>
> Do you mind elaborating or summarizing the reasons? The way I'm analyzing
> it, even if there's a nominal spam fee paid to routing nodes that fail
> payments, as long as it's low enough (say 2-5% for argument's sake), the
> nodes still have more to gain by forwarding the payment and earning the
> full fee on a completed payment, and possibly the reputation boost
> associated with completing a payment if that system was in effect.
> Moreover, a node that constantly fails payments will be blacklisted by the
> sender eventually and stop receiving HTLCs from them at all. Overall, I
> don't think this is a profitable strategy. Furthermore, I think it works
> quite well in combination with the reputation system.
>
> > This seems like we'd need some serious evaluation to show that this
> > works, because the risks are very high.
>
> I agree that it needs to be evaluated. I may start working on some network
> simulations to test various DOS mitigation strategies.
>
> > I can destroy your node's reputation by routing crap through you; even
> if it costs me marginally more reputation than it does you, that just
> > means that the largest players can force failure upon smaller players,
> > centralizing the network.  And I think trying to ensure that it costs me
> > more reputation than the sum of downstream reputation loss leaks too
> > much information
>
> I will add to ZmnSCPxj's response, which is mostly on point. The key here
> is that the only way to lose significant reputation is to delay a payment
> yourself or forward to a malicious downstream that delays -- neither of
> these can be forced by the sender alone. This amounts to a system where you
> are on the hook for any malicious behavior of your downstream peers, which
> is why you must keep a reputation score for each, which they earn over time.
> This should keep all links in the network high quality and quickly
> disconnect delaying nodes if the incentives are right.
>
> While I agree that a lot of reputation is leaked by aggregating the losses
> along the route, this serves exactly to prevent large nodes with high
> reputation from ruining links elsewhere. There are two things a node
> looking to cause reputation loss could do. 1) Identify a node (not itself)
> it thinks will delay a payment and send to them. This locks up funds on
> their behalf, but is actually good behavior because it identifies a faulty
> node and rightfully forces a loss in their reputation, eventually resulting
> in them being booted from the network. Everyone upstream loses some
> reputation for having connectivity to them, but less because of the loss
> aggregation along the route. 2) Delay a payment oneself and force upstream
> reputation loss. This is why I think it's important that the reputation
> loss aggregate so that the malicious party loses the most.
>
> As for the amount of information leaked, yes, it helps determine the
> number of upstream hops in a route. However, the CLTV values help determine
> the number of downstream hops in a route in exactly the same way. I see these
> as symmetric in a sense.

Re: [Lightning-dev] Mitigations for loop attacks

2018-05-09 Thread Jim Posen
Thanks for the thoughtful responses.

> You missed the vital detail: that you must prove channel closure if you
> can't unpeel the onion further.  That *will* hit an unresponsive party
> with a penalty.

Ah, that is a good point. I still find the proposal overall worryingly
complex in terms of communication overhead, time it takes to prove channel
closure, all of your points in [1], [2], [3], etc. Furthermore, this
mandates that immediate channel closure is the only allowed reaction to a
party delaying an HTLC for a time period above a threshold -- the node
reputation approach gives more discretion to the preceding hop.
Deobfuscating the route may turn out to be the right option, but I think
the reputation system has certain advantages over this.

> The models we tried in Milan all created an incentive to fail payments,
> which is a non-starter.

Do you mind elaborating or summarizing the reasons? The way I'm analyzing
it, even if there's a nominal spam fee paid to routing nodes that fail
payments, as long as it's low enough (say 2-5% for argument's sake), the
nodes still have more to gain by forwarding the payment and earning the
full fee on a completed payment, and possibly the reputation boost
associated with completing a payment if that system was in effect.
Moreover, a node that constantly fails payments will be blacklisted by the
sender eventually and stop receiving HTLCs from them at all. Overall, I
don't think this is a profitable strategy. Furthermore, I think it works
quite well in combination with the reputation system.

> This seems like we'd need some serious evaluation to show that this
> works, because the risks are very high.

I agree that it needs to be evaluated. I may start working on some network
simulations to test various DOS mitigation strategies.

> I can destroy your node's reputation by routing crap through you; even
if it costs me marginally more reputation than it does you, that just
> means that the largest players can force failure upon smaller players,
> centralizing the network.  And I think trying to ensure that it costs me
> more reputation than the sum of downstream reputation loss leaks too
> much information

I will add to ZmnSCPxj's response, which is mostly on point. The key here
is that the only way to lose significant reputation is to delay a payment
yourself or forward to a malicious downstream that delays -- neither of
these can be forced by the sender alone. This amounts to a system where you
are on the hook for any malicious behavior of your downstream peers, which
is why you must keep a reputation score for each, which they earn over time.
This should keep all links in the network high quality and quickly
disconnect delaying nodes if the incentives are right.

While I agree that a lot of reputation is leaked by aggregating the losses
along the route, this serves exactly to prevent large nodes with high
reputation from ruining links elsewhere. There are two things a node
looking to cause reputation loss could do. 1) Identify a node (not itself)
it thinks will delay a payment and send to them. This locks up funds on
their behalf, but is actually good behavior because it identifies a faulty
node and rightfully forces a loss in their reputation, eventually resulting
in them being booted from the network. Everyone upstream loses some
reputation for having connectivity to them, but less because of the loss
aggregation along the route. 2) Delay a payment oneself and force upstream
reputation loss. This is why I think it's important that the reputation
loss aggregate so that the malicious party loses the most.

As for the amount of information leaked, yes, it helps determine the number
of upstream hops in a route. However, the CLTV values help determine the
number of downstream hops in a route in exactly the same way. I see these
as symmetric in a sense.

To address ZmnSCPxj's point:

> But it also looks more and more like a policy of "just `update_htlc_fail`"
> keeps our reputation high: indeed never accepting a forwarding attempt would
> ensure reputation.
> However, earning via fees should help provide incentive against "Just
> `update_htlc_fail`" always.  If the goal is "how do I earn money fastest"
> then there is some optimal threshold of risk-of-reputation-loss vs.
> fee-earnings-if-I-forward that is unlikely to be near the "Just fail it"
> spectrum, but somewhere in between.  We hope.

This is exactly the question that your local view of peer reputations helps
solve: are the potential fees here worth the risk of forwarding this
payment to this downstream? If their reputation is high, then you will want
to forward because you think there's a low chance of you incurring
reputation loss. If their reputation is low and the HTLC value is too high,
you will fail it. So I disagree that "just `update_htlc_fail`" is an
optimal strategy. Consider as well that all fees you earn on successful
payments are profit to you as well as a reputation boost in the view of
both of your peers.
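
A rough sketch of how I think about that decision (illustrative thresholds
only; nothing here is a specified algorithm, and the reputation score is just
a locally tracked estimate):

    def should_forward(fee_msat, peer_reputation, loss_rate_msat_per_min,
                       expected_delay_min_if_bad=60):
        """Decide whether to forward an HTLC to a given downstream peer.

        `peer_reputation` is treated as a local probability (0..1) that the
        peer resolves HTLCs promptly. The expected reputation cost is the loss
        rate times how long a misbehaving peer might sit on the HTLC.
        """
        expected_loss = (1.0 - peer_reputation) * loss_rate_msat_per_min * expected_delay_min_if_bad
        return fee_msat >= expected_loss

    # Same 1-msat fee, offered by a long-standing peer vs. a brand new one.
    print(should_forward(1, peer_reputation=0.999, loss_rate_msat_per_min=10))  # True
    print(should_forward(1, peer_reputation=0.5, loss_rate_msat_per_min=10))    # False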

Re: [Lightning-dev] Commitment delay asymmetry

2018-04-15 Thread Jim Posen
>
> It seems to me that adding an entire new attack vector in order to only
> *mitigate* (not eliminate!) another attack vector is not a good enough
> trade-off.  In particular the new attack seems *easier* to perform.  The
> current attack where I annoy the other side until it closes has the risk
> that the other side may have a high tolerance for annoyance, and decide not
> to close the channel unilaterally anyway.  But in a symmetric-delay world,
> I do not have to wait for the other side to get annoyed: I just trigger the
> lockup period immediately in the active attack.
>

I don't see the two attacks in the symmetric case as any different from one
another. In 1.1, you force a unilateral close by becoming unresponsive and
forcing the other side to eventually broadcast the commitment. In this case
you waste the other party's channel balance for the full time of the delay
PLUS the additional time they wait around to determine if you are ever
going to come online. In 1.2, you force a unilateral close by broadcasting
the commitment yourself. This is actually a weaker attack because the other
party only has
to wait for the delay period and there is no uncertainty about when they
will get access to funds. So basically, I see no reason for an attacker to
ever choose 1.2 over 1.1.

So the question is whether 1.1 or 2.1 is a worse DOS. To me it's pretty
clear that it is 2.1, because the attacker does not get penalized and can
far more quickly use any remaining channel balance to open a new channel
with someone else and start over.

I also would not classify either 1.1 or 2.1 as a passive attack -- the attacker
is proactively rebalancing the victim's channel balances in order to waste
the maximal amount of time-money. Passive attacks [1] are where an attacker
does not directly interact with the victim and just eavesdrops or tries to
observe and extract information.


> > For example, in the case where the side unilaterally closing the channel
> has zero balance, the other side gets no delay and symmetry as measured by
> (coins locked) * (duration of lock) equals zero on both sides. When the
> side closing the channel has at least 50% of the balance, both sides must
> wait the full delay. Thoughts?
>
> So on channel setup where I am the funder to a 1 BTC channel I make to
> Daniel:
>
> * Daniel holds a commitment transaction with: ZmnSCPxj=1BTC+no delay,
> Daniel=0BTC+no delay
> * I hold a commitment transaction with: ZmnSCPxj=1BTC+no delay,
> Daniel=0BTC+no delay
>

I rather like Daniel's suggestion to scale the delay in proportion to the
time-money lost by the broadcasting party. Essentially, the delay just
serves as punishment, so we should ensure that the punishment delivered is
no greater than the time-value lost by the initiator of the unilateral
close.

This example is not quite right: the commitment delays do not need to be
the same in both commitment transactions with this scaling strategy. So the
delay for the local output is ALWAYS the to_local_delay, as it is in the
BOLT 3 spec today. When assigning the delay on the remote output, however,
instead of using 0 as BOLT specifies now or to_remote_delay as I originally
proposed, a better rule might be min(to_remote_delay, to_local_delay *
to_local_value / to_remote_value). So the delay is never worse than what
the opposite side would get by broadcasting themselves, but the punishment
duration is reduced if the attacker broadcasts a commitment transaction in
which the balance of funds is skewed towards the victim's end of the
channel. However, I'm not sure how much this matters because, as I argued
above, an attacker should always prefer to become unresponsive rather than
broadcast the commitment themselves.
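
A small sketch of that candidate rule (the numbers and integer arithmetic are
illustrative; to_local/to_remote are from the broadcaster's point of view):

    def remote_output_delay(to_remote_delay, to_local_delay,
                            to_local_value_sat, to_remote_value_sat):
        """CSV delay on the non-broadcaster's output under the scaling rule.

        min(to_remote_delay, to_local_delay * to_local_value / to_remote_value):
        the victim never waits longer than their negotiated delay, and their
        time-money locked up never exceeds the broadcaster's.
        """
        if to_remote_value_sat == 0:
            return 0  # nothing of the victim's is locked, so no delay needed
        scaled = to_local_delay * to_local_value_sat // to_remote_value_sat
        return min(to_remote_delay, scaled)

    # Broadcaster holds 0.2 BTC, victim 0.8 BTC, both negotiated a 144-block delay.
    print(remote_output_delay(144, 144, 20_000_000, 80_000_000))  # 36 blocks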

[1] https://en.wikipedia.org/wiki/Passive_attack


Re: [Lightning-dev] Commitment delay asymmetry

2018-04-15 Thread Jim Posen
I believe that anyone attempting a DOS by forcing on-chain settlement can
do it just as easily with asymmetric delays as with symmetric delays.

If my goal is to waste the time-value of your money in a channel, in a
world with symmetric delays, I could just publish the commitment
transaction and you would have to wait the full delay for access to your
funds. True. But with delays asymmetric as they are now, I can just as
easily refuse to participate in a mutual close, forcing you to close
on-chain. This is just as bad. In fact, I'd argue that it is worse, because
I lose less by doing this (in the sense that I as the attacker get
immediate access to my funds). So in my assessment, it is a very active
attack and symmetric delays are something of a mitigation. You are right
that the balance of funds in the channel becomes a factor too, but note
that there is the reserve balance, so I'm always losing access to some
funds for some time.

-jimpo

On Sun, Apr 15, 2018 at 6:35 AM, ZmnSCPxj  wrote:

> Good morning Daniel,
>
>
> This makes a lot of sense to me as a way to correct the incentives for
> closing channels. I figure that honest nodes that have truly gone offline
> will not require (or be able to take advantage of) immediate access to
> their balance, such that this change shouldn't cause too much inconvenience.
>
> I was trying to think if this could open up a DOS vector - dishonest nodes
> performing unilateral closes even when mutual closes are possible just to
> lock up the other side's coins - but it seems like not much of a concern. I
> figure it's hard to pull off on a large scale.
>
>
>
> Now that you bring this up, I think, it is indeed a concern, and one we
> should not take lightly.
>
> As a purely selfish rational being, it matters not to me whether my
> commitment transaction will delay your output or not; all that matters is
> that it delays mine, and that is enough for me to prefer a bilateral close
> if possible.  I think we do not need to change commitment transactions to
> be symmetrical then --- it is enough that the one holding the commitment
> transaction has its own outputs delayed.
>
> If I had a goal to disrupt rather than cooperate with the Lightning
> Network, and commitment transactions would also delay the side not holding
> the commitment transaction (i.e. "symmetrical delay" commitments), I would
> find it easier to disrupt cheaply if I could wait for a channel to be
> unbalanced in your favor (i.e. you own more money on it than I do), then
> lock up both our funds by doing a unilateral transaction.  Since it is
> unbalanced in your favor, you end up losing more utility than I do.
> Indeed, in the situation where you are funding a new channel to me, I have
> 0 satoshi on the channel and can perform this attack costlessly.
>
> Now perhaps one may argue, in the case of asymmetric delays, that if I
> were evil, I could still disrupt the network by misbehaving and forcing the
> other side to push its commitment transaction.  Indeed I could even just
> accept a channel and then always fail to forward any payment you try to
> make over it, performing a disruption costlessly too (as I have no money in
> this).  But this attack is somewhat more passive than the above attack
> under a symmetrical delay commitment transaction scheme.
>
> Regards,
> ZmnSCPxj
>


Re: [Lightning-dev] Commitment delay asymmetry

2018-04-13 Thread Jim Posen
> By extension, perhaps both sides should use the maximum delay either one
> asks for?
>

I'm not sure that is necessary. As long as both parties have to wait the
same amount of time regardless of whether they publish the commitment or
the other side does, that would resolve the issue.


> I don't think it's urgent, but please put it into the brainstorming part
> of the wiki so we don't lose track?[1]
>

I don't have access to add to the wiki. I'd write a section like:

# Symmetric CSV Delay

Change the script of the remote output of all commitment transactions to
require the full CSV delay. This acts as further incentive for both parties
to mutually close instead of waiting for the other side to unilaterally
close, and serves as punishment to misbehaving or unresponsive nodes that
force the other endpoint to go to chain.
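
For concreteness, a sketch of what a CSV-delayed remote output could look
like, written as a Python list of opcodes rather than real script
serialization (the exact layout is illustrative, not proposed BOLT text):

    # Hypothetical delayed to_remote witness script. Today's to_remote is
    # effectively a plain payment to <remotepubkey>; adding a CSV delay makes
    # the remote party wait out the delay as well, whoever broadcast.
    def delayed_to_remote_script(to_self_delay, remotepubkey_hex):
        return [
            to_self_delay, "OP_CHECKSEQUENCEVERIFY", "OP_DROP",  # enforce the delay
            remotepubkey_hex, "OP_CHECKSIG",                      # then pay the remote
        ]

    print(delayed_to_remote_script(144, "02" + "ab" * 32))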


Re: [Lightning-dev] Pinging a route for capacity

2018-03-02 Thread Jim Posen
Regarding ping flooding, if it is problematic, the best solution is
probably including a small proof-of-work with the ping, similar to BIP 154.
However, the whole purpose of the ping in the first place is to be a
cheaper way to collect routing information than attempting to send a
payment, so I think adding a PoW starts to become counterproductive. Note
that the sender needs to expend a certain amount of computation just
creating the onion packet up front (on the order of a few ms, I believe),
so perhaps that is sufficient.
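
As a rough illustration of what a PoW-carrying ping could look like (the
message body, nonce encoding, and difficulty target are all made up for the
example):

    import hashlib

    def mine_ping_nonce(ping_body, difficulty_bits=16):
        """Find a nonce so sha256(ping_body || nonce) has `difficulty_bits`
        leading zero bits -- a small client-side cost to deter ping flooding."""
        target = 1 << (256 - difficulty_bits)
        nonce = 0
        while True:
            digest = hashlib.sha256(ping_body + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce
            nonce += 1

    def check_ping_pow(ping_body, nonce, difficulty_bits=16):
        digest = hashlib.sha256(ping_body + nonce.to_bytes(8, "big")).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

    nonce = mine_ping_nonce(b"capacity-ping:channel=123x1x0:amount=50000")
    print(check_ping_pow(b"capacity-ping:channel=123x1x0:amount=50000", nonce))  # True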

Also, if someone wanted to DoS the network, there are much better ways than
using this proposed ping mechanism. For example, someone can send payments
along any circuit with a randomly generated payment hash (for which the
preimage is unknown), and force a payment failure at the end of the route.
That is basically a way to ping that works now, but is more expensive for
everyone.
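
Combining that with the binary search mentioned below, a capacity probe might
look roughly like this (send_probe is a stand-in for whatever sends an HTLC
with a random payment hash and reports whether it reached the final hop or
failed earlier for lack of capacity):

    import os

    def probe_route_capacity(send_probe, upper_bound_msat, tolerance_msat=1000):
        """Binary-search the spendable capacity of a route.

        `send_probe(payment_hash, amount_msat)` should return True if an HTLC
        of that amount made it to the final hop (where it fails, since the
        preimage for the random hash is unknown), and False if some channel on
        the way lacked capacity. Returns the largest amount that got through,
        to within `tolerance_msat`.
        """
        lo, hi = 0, upper_bound_msat
        while hi - lo > tolerance_msat:
            mid = (lo + hi) // 2
            if send_probe(os.urandom(32), mid):
                lo = mid  # the route can carry at least `mid`
            else:
                hi = mid
        return lo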

On Thu, Mar 1, 2018 at 4:16 PM, gb <kiw...@yahoo.com> wrote:

>  and any thoughts on protections against flood pinging?
>
> On Thu, 2018-03-01 at 09:45 -0500, Jim Posen wrote:
>
>
> > The main benefit is that this should make it quicker to send a
> > successful payment because latency is lower than sending an actual
> > payment and the sender could ping all possible routes in parallel,
> > whereas they can't send multiple payments in parallel. The main
> > downside I can think of is that, by the same token, it is faster and
> > cheaper for someone to extract information about channel capacities on
> > the network with a binary search.
> >
> >
> > -jimpo


Re: [Lightning-dev] Improving the initial gossip sync

2018-02-07 Thread Jim Posen
I like Christian's proposal of adding a simple announcement cutoff
timestamp with the intention of designing something more sophisticated
given more time.

I prefer the approach of having an optional feature bit signalling that a
`set_gossip_timestamp` message must be sent immediately after `init`, as
Laolu suggested. This way it doesn't conflict with any other possible
handshake extensions.
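
A minimal sketch of what that negotiation might look like (the feature-bit
number and the peer object's send/receive methods are placeholders; only
`init` and `set_gossip_timestamp` come from the proposal):

    GOSSIP_TIMESTAMP_FEATURE_BIT = 1 << 8  # placeholder, not an assigned bit

    def on_connection_established(peer, our_features, last_sync_timestamp):
        """Advertise the optional feature in `init`; if the peer also set it,
        immediately send `set_gossip_timestamp` so it only relays gossip newer
        than our cutoff."""
        peer.send("init", features=our_features | GOSSIP_TIMESTAMP_FEATURE_BIT)
        their_init = peer.receive("init")
        if their_init.features & GOSSIP_TIMESTAMP_FEATURE_BIT:
            peer.send("set_gossip_timestamp", timestamp=last_sync_timestamp)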


On Feb 7, 2018 9:50 AM, "Fabrice Drouin"  wrote:

Hi,

Suppose you partition nodes into 3 generic roles:
- payers: they mostly send payments, are typically small and operated
by end users, and are offline quite a lot
- relayers: they mostly relay payments, and would be online most of
the time (if they're too unreliable other nodes will eventually close
their channels with them)
- payees: they mostly receive payments; how often they can be online
is directly linked to their particular mode of operation (since you
need to be online to receive payments)

Of course most nodes would play more or less all roles. However,
mobile nodes would probably be mostly "payers", and they have specific
properties:
- if they don't relay payments they don't have to be announced. There
could be millions of mobile nodes that would have no impact on the
size of the routing table
- it does not impact the network when they're offline
- but they need an accurate routing table. This is very different from
nodes that mostly relay or accept payments
- they would be connected to a very small number of nodes
- they would typically be online for just a few hours every day, but
could be stopped/paused/restarted many times a day

Laolu wrote:
> So I think the primary distinction between y'alls proposals is that
> cdecker's proposal focuses on eventually synchronizing all the set of
> _updates_, while Fabrice's proposal cares *only* about the newly created
> channels. It only cares about new channels as the rationale is that if one
> tries to route over a channel with a stale channel update for it, then
> you'll get an error with the latest update encapsulated.

If you have one filter per day and they don't match (because your peer has
channels that you missed, or channels have been closed that you were not
aware of), then you will receive all channel announcements for this
particular day, and the associated updates.

Laolu wrote:
> I think he's actually proposing just a general update horizon in which
> vertexes+edges with a lower time stamp just shouldn't be set at all. In the
> case of an old zombie channel which was resurrected, it would eventually be
> re-propagated as the node on either end of the channel should broadcast a
> fresh update along with the original chan ann.

Yes, but it could take a long time. It may be worse on testnet since it
seems that nodes don't change their fees very often. "Payer nodes" need a
good routing table (as opposed to "relayers", which could work without one
if they never initiate payments).

Laolu wrote:
> This seems to assume that both nodes have a strongly synchronized view of
> the network. Otherwise, they'll fall back to sending everything that went on
> during the entire epoch regularly. It also doesn't address the zombie churn
> issue as they may eventually send you very old channels you'll have to deal
> with (or discard).

Yes, I agree that for nodes which have connections to a lot of peers,
strongly synchronized routing tables are harder to achieve since a small
change may invalidate an entire bucket. Real queryable filters would be
much better, but the worst-case scenario is we've sent an additional 30 KB
or so of sync messages.
(A very naive filter would be sort + pack all short ids, for example.)

But we focus on nodes which are connected to a very small number of peers,
and in this particular case it is not an unrealistic expectation.
We have built a prototype and on testnet it works fairly well. I also found
nodes which have no direct channel between them but produce the same filters
for 75% of the buckets ("produce" here means that I opened a simple gossip
connection to them, got their routing table and used it to generate filters).


Laolu wrote:
> How far back would this go? Weeks, months, years?
Since forever :)
One filter per day for all announcements that are older than now - 1 week
(modulo 144)
One filter per block for recent announcements
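
A rough sketch of that bucketing as I understand it (the data layout and the
naive sort-and-pack filter are illustrative, not the prototype's actual
format):

    import hashlib

    BLOCKS_PER_DAY = 144
    RECENT_WINDOW_BLOCKS = 7 * BLOCKS_PER_DAY  # "now - 1 week"

    def bucket_key(announcement_block_height, tip_height):
        """Old announcements are grouped per day, recent ones per block."""
        age = tip_height - announcement_block_height
        if age > RECENT_WINDOW_BLOCKS:
            return ("day", announcement_block_height // BLOCKS_PER_DAY)
        return ("block", announcement_block_height)

    def naive_filter(short_channel_ids):
        """Very naive filter: sort and pack all short ids, hash for comparison."""
        packed = b"".join(scid.to_bytes(8, "big") for scid in sorted(short_channel_ids))
        return hashlib.sha256(packed).digest()

    # Peers exchange one filter per bucket; a mismatched bucket triggers a
    # re-send of that bucket's channel announcements and associated updates.
    print(bucket_key(400_000, tip_height=550_000))  # ('day', 2777)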

>
> FWIW this approach optimizes for just learning of new channels instead of
> learning of the freshest state you haven't yet seen.

I'd say it optimizes the case where you are connected to very few
peers, and are online a few times every day (?)

>
> -- Laolu
>
>
> On Mon, Feb 5, 2018 at 7:08 AM Fabrice Drouin 
> wrote:
>>
>> Hi,
>>
>> On 5 February 2018 at 14:02, Christian Decker
>>  wrote:
>> > Hi everyone
>> >
>> > The feature bit is even, meaning that it is required from the peer,
>> > since we extend the `init` message itself, and a peer that does not
>> > support this feature would be 

Re: [Lightning-dev] QuickMaths for Onions: Linear Construction of Sphinx Shared-Secrets

2018-02-04 Thread Jim Posen
Nice work!

I reread the relevant section in BOLT 4 and it is written in a way that
suggests the quadratic-time algorithm. I have opened a PR to update the
recommendation and reference code:
https://github.com/lightningnetwork/lightning-rfc/pull/374.
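
For anyone skimming, the gist of the optimization as I understand it, sketched
with toy arithmetic (integers mod a constant stand in for EC points,
expensive_mul for scalar multiplication, and the blinding factors are taken as
given, which glosses over how they are actually derived):

    # Schematic only: NOT the lightning-onion code and not a real curve.
    P = 2**127 - 1  # stand-in modulus

    def expensive_mul(element, scalar):
        return (element * scalar) % P  # imagine a costly EC scalar multiplication

    def shared_elements_quadratic(node_keys, ephemeral, blinding_factors):
        """O(n^2) expensive multiplications: re-apply every prior factor per hop."""
        out = []
        for i, key in enumerate(node_keys):
            e = expensive_mul(key, ephemeral)
            for b in blinding_factors[:i]:
                e = expensive_mul(e, b)
            out.append(e)
        return out

    def shared_elements_linear(node_keys, ephemeral, blinding_factors):
        """O(n) expensive multiplications: accumulate the scalar product instead."""
        out = []
        acc = ephemeral
        for i, key in enumerate(node_keys):
            out.append(expensive_mul(key, acc))
            if i < len(blinding_factors):
                acc = (acc * blinding_factors[i]) % P  # cheap scalar-scalar multiply
        return out

    keys, factors = [3, 5, 7, 11], [13, 17, 19]
    assert shared_elements_quadratic(keys, 2, factors) == shared_elements_linear(keys, 2, factors)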

On Fri, Feb 2, 2018 at 6:20 PM, Conner Fromknecht <conner@lightning.engineering> wrote:

> Hello everyone,
>
> While working on some upgrades to our lightning-onion repo [1], roasbeef 
> pointed
> out that all of our implementations use a quadratic algorithm to
> iteratively apply the intermediate blinding factors.
>
> I spent some time working on a linear algorithm that reduces the total
> number of scalar multiplications. Overall, our packet construction
> benchmarks showed an 8x speedup, from 37ms to 4.5ms, and now uses ~70% less
> memory. The diff is only ~15 LOC, and I thought this would be a
> useful optimization for our implementations to have. I can make a PR that
> updates the example source in lightning-rfc if there is interest.
>
> A description, along with the modified source, can be found in my PR to
> lightning-onion [2]. The correctness of the output has been verified
> against the (updated) BOLT 4 test vector [3].
>
> [1] https://github.com/lightningnetwork/lightning-onion
> [2] https://github.com/lightningnetwork/lightning-onion/pull/18
> [3] https://github.com/lightningnetwork/lightning-rfc/pull/372
>
> Cheers,
> Conner