Re: [tor-relays] >23% Tor exit relay capacity found to be malicious - call for support for proposal to limit large scale attacks

2020-07-06 Thread nusenu
> I've written up what I think would be a useful building block:
> https://gitlab.torproject.org/tpo/metrics/relay-search/-/issues/40001

thanks, I'll reply here since I (and probably others) cannot reply there.

> Three highlights from that ticket that tie into this thread:
> 
> (A) Limiting each "unverified" relay family to 0.5% doesn't by itself
> limit the total fraction of the network that's unverified. I see a lot of
> merit in another option, where the total (global, network-wide) influence
> from relays we don't "know" is limited to some fraction, like 50% or 25%.

I like it (it is even stricter than what I proposed): you are basically saying
the "known" pool should always control a fixed (or minimum?) portion - let's
say 75% - of the entire network, no matter what capacity the "unknown" pool
has. It doesn't address the key question, though:
How do you specifically define "known", and how do you verify entities before
you move them to the "known" pool?
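
For illustration, here is a minimal sketch (Python; the `known` set and the
weights are hypothetical) of one way the consensus weights of relays outside
the "known" pool could be rescaled so that pool never exceeds a fixed
fraction:

    def cap_unknown_fraction(weights, known, max_unknown=0.25):
        """Rescale weights of relays not in `known` so the unknown
        pool ends up with at most `max_unknown` of the total."""
        total = sum(weights.values())
        unknown = {fp for fp in weights if fp not in known}
        unknown_total = sum(weights[fp] for fp in unknown)
        if unknown_total <= max_unknown * total:
            return weights  # already within the cap
        # Solve s*u / (k + s*u) = max_unknown for the scale factor s,
        # where k is the known total and u the unknown total.
        known_total = total - unknown_total
        scale = (max_unknown * known_total) / ((1 - max_unknown) * unknown_total)
        return {fp: (w * scale if fp in unknown else w)
                for fp, w in weights.items()}

With max_unknown=0.25 the "known" pool is guaranteed the 75% mentioned above.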


> (B) I don't know what you have in mind with verifying a physical address
> (somebody goes there in person? somebody sends a postal letter and waits
> for a response?)

The process is outlined at the bottom of my first email in this thread
(in short: a random challenge is sent to the address in a letter and returned
via email).
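
As a rough illustration (a sketch only, not an actual implementation; the
function names are made up), the challenge/response part is tiny:

    import secrets, hmac

    def new_challenge():
        # random token printed into the letter sent to the operator
        return secrets.token_urlsafe(16)

    def verify(expected, received):
        # operator mails the token back; compare in constant time
        return hmac.compare_digest(expected, received.strip())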

> but I think it's trying to be a proxy for verifying
> that we trust the relay operator, 

"trust" is a strong word. I wouldn't call them 'trusted' just because they
demonstrated their ability to pay someone to scan letters send
to a physical address.

I would describe it more as a proxy for "less likely to be a random
opportunistic attacker exploiting tor users with zero risk for themselves".

> and I think we should brainstorm more
> options for achieving this trust. In particular, I think "humans knowing
> humans" could provide a stronger foundation.

I'm all ears for better options, but at some point I'd like to see
some actual improvement in practice.

I would hate to be in the same situation a year from now
because we are still discussing the perfect solution.

> More generally, I think we need to very carefully consider the extra
> steps we require from relay operators (plus the work they imply for
> ourselves), and what security we get from them. 

I agree.


> (C) Whichever mechanism(s) we pick for assigning trust to relays,
> one gap that's been bothering me lately is that we lack the tools for
> tracking and visualizing which relays we trust, especially over time,
> and especially with the amount of network churn that the Tor network
> sees. It would be great to have an easier tool where each of us could
> assess the overall network by whichever "trust" mechanisms we pick --
> and then armed with that better intuition, we could pick the ones that
> are most ready for use now and use them to influence network weights.


This reminds me of an Atlas feature request for family-level graphs:
https://trac.torproject.org/projects/tor/ticket/23509
https://lists.torproject.org/pipermail/tor-relays/2017-September/012942.html

I'm generating some timeseries graphs now to see what exit fraction (stacked)
has been managed over time (past 6 months) by
https://torservers.net/partners.html
and the operators mentioned at the bottom of
https://lists.torproject.org/pipermail/tor-relays/2020-January/018022.html
plus some custom additions for operators I have had contact with before.
Spoiler: it used to be >50%, until some malicious actor came along and
reduced it to <50%.

Seeing their usual fraction over time can be used as an input when deciding
what fixed fraction should always be managed by them.
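
A minimal sketch of how such fractions can be pulled from the Onionoo API
(the contact substrings below are just placeholders):

    import requests

    ONIONOO = "https://onionoo.torproject.org/details"

    def exit_fraction(contact_substring):
        """Sum the exit probability of all running relays whose
        ContactInfo contains the given substring."""
        r = requests.get(ONIONOO, params={
            "contact": contact_substring,
            "running": "true",
            "fields": "exit_probability",
        })
        return sum(rel.get("exit_probability", 0)
                   for rel in r.json()["relays"])

    for op in ["example-operator-1", "example-operator-2"]:
        print(op, exit_fraction(op))

Run periodically (or replayed against archived consensuses from CollecTor),
this yields the stacked fractions over time.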

> At the same time, we need to take other approaches to reduce the impact
> and incentives for having evil relays in the network. For examples:
> 
> (1) We need to finish getting rid of v2 onion services, so we stop the
> stupid arms race with threat intelligence companies who run relays in
> order to get the HSDir flag in order to scrape legacy onion addresses.

outlined, planned and announced (great):
https://blog.torproject.org/v2-deprecation-timeline


> (2) We need to get rid of http and other unauthenticated internet protocols:

This is something browser vendors will tackle for us, I hope, but it
will not happen anytime soon.

kind regards,
nusenu




-- 
https://mastodon.social/@nusenu





Re: [tor-relays] >23% Tor exit relay capacity found to be malicious - call for support for proposal to limit large scale attacks

2020-07-06 Thread nusenu


Charly Ghislain:
> I have nothing against this proposal, although I'm not sure it would be
> that effective.
> Especially, how does it make relay operations 'less sustainable' or 'more
> risky'?

I assume you mean "make _malicious_ relay operations 'less sustainable' ..".

It would be less sustainable because they would have to run relays for longer
before they can start exploiting tor users.
And it would be "more risky" because physical letter delivery requires them
to pay someone, and hiding money trails is usually harder than hiding the
origin of a randomly created email address.


-- 
https://mastodon.social/@nusenu





Re: [tor-relays] >23% Tor exit relay capacity found to be malicious - call for support for proposal to limit large scale attacks

2020-07-06 Thread nusenu
Scott Bennett:
> Your proposed method of delaying the problem would impose a labor burden
> on the tor project as well 

If we assume that malicious relay activity is impacted, I'd expect the time
saved by the proposal to outweigh the time spent on bad-relays@.

After implementation the proposal does not require resources from The
Torproject besides publishing the registry.


> Why would
> an automated solution not work? 

I believe the email verification can be automated completely.
The mailing of letters could also be automated, but if only - let's say 10 -
letters/year are sent, I'm not sure it is worth it.

> That would be a fast reaction and would not depend
> upon multiple human actions. 

There is no human interaction involved in enforcing the cap in my proposal.
The cap would be "on by default"
and lifted after verification has passed.
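
In code terms, the cap logic is no more complicated than this sketch (names
are hypothetical; `verified_families` would come from the published registry):

    DEFAULT_CAP = 0.005  # 0.5% exit/guard probability

    def clamp_family_fraction(measured_fraction, family_id, verified_families):
        # no human in the loop: every family starts out capped,
        # and the cap is lifted once the family passed verification
        if family_id in verified_families:
            return measured_fraction
        return min(measured_fraction, DEFAULT_CAP)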

> You might also implement a "repeat offender"
> policy, whereby if the authorities lifted a relay's Exit flag more than n 
> times
> within a month, a BadExit flag would be applied in addition, which then (and
> only then) would require the operator to contact the tor project about it.

Malicious actors usually come back with new relays (new keys, new IPs)
after they get caught.


-- 
https://mastodon.social/@nusenu





Re: [tor-relays] Authority Nodes

2020-07-06 Thread Matt Westfall
LOL, this requirement: "Should be run by somebody that Tor (i.e. Roger)
knows."

One thing that I think would help Tor a lot, and that I have seen some
discussions on, would be a more trustworthy way to measure bandwidth.  I know
it's measured a couple of different ways now, with 'observed' bandwidth and
some testing/probing from the directory authorities. But as outlined in your
e-mail, adding more directory authorities is a tedious process at best. So
could something be set up where Tor maintainers manually put a flag on a
relay to indicate that it can and should initiate bandwidth tests and report
the results back to the actual authorities?

Matt Westfall
President & CIO
ECAN Solutions, Inc.
Everything Computers and Networks
804.592.1672
http://ecansol.com


On Sat, Jun 20, 2020 at 5:59 AM Roger Dingledine wrote:

> On Fri, Jun 19, 2020 at 07:10:43AM -0300, Vitor Milagres wrote:
> > I see the Authority Nodes are located only in North America and Europe.
> > I would like to contribute to the TOR network as much as possible. I am
> > currently running a node and I would like to make it an Authority Node as
> > well.
> > I am from Brazil and I believe it would possibly be a good idea to have a
> > new Authority Node in South America.
> > What are the requirements? What should I do to become one of them?
> > FYI, the node I am running is 79DFB0E1D79D1306AF03A4B094C55A576989ABD1
>
> Thanks for your interest in running a directory authority! Long ago we
> wrote up a set of goals for new directory authorities:
> https://gitweb.torproject.org/torspec.git/tree/attic/authority-policy.txt
>
> It is definitely an informal policy at this point, but it still gets
> across some of the requirements.
>
> If you're able to run an exit relay at your location, that's definitely
> more useful than another directory authority at this point.
>
> Also, because we haven't automated some steps, each new directory
> authority that we add means additional coordination complexity, especially
> when we identify misbehaving relays and need to bump them out of the
> network quickly.
>
> Here are two big changes since that document:
>
> (1) The directory authorities periodically find themselves needing to
> scale to quite large bandwidths -- sustaining 200mbit at a minimum,
> and being able to burst to 400mbit or 500mbit, is pretty much needed at
> this point:
> https://bugs.torproject.org/33018
>
> (2) Tor ships with hundreds of hard-coded relays called Fallback
> Directories, which distribute the load for bootstrapping into the Tor
> network, and which also provide alternate access points if the main
> directory authorities are blocked.
> https://trac.torproject.org/projects/tor/wiki/doc/FallbackDirectoryMirrors
> So while the directory authorities are still a trust bottleneck,
> they are less of a performance bottleneck than they used to be.
>
> In summary: if you want to run a directory authority, your next step
> is to join the Tor community, get to know us and get us to know you,
> come to one of the dev meetings (once the world is able to travel
> again), and see where things go from there.
>
> Thanks,
> --Roger
>


Re: [tor-relays] >23% Tor exit relay capacity found to be malicious - call for support for proposal to limit large scale attacks

2020-07-06 Thread Roger Dingledine
On Sun, Jul 05, 2020 at 06:35:32PM +0200, nusenu wrote:
> To prevent this from happening over and over again
> I'm proposing two simple but to some extent effective relay requirements
> to make malicious relay operations more expensive, time consuming,
> less sustainable and more risky for such actors:
> 
> a) require a verified email address for the exit or guard relay flag.
> (automated verification, many relays)
> 
> b) require a verified physical address for large operators (>=0.5% exit or 
> guard probability)
> (manual verification, low number of operators). 

Thanks Nusenu!

I like the general goals here.

I've written up what I think would be a useful building block:
https://gitlab.torproject.org/tpo/metrics/relay-search/-/issues/40001



Three highlights from that ticket that tie into this thread:

(A) Limiting each "unverified" relay family to 0.5% doesn't by itself
limit the total fraction of the network that's unverified. I see a lot of
merit in another option, where the total (global, network-wide) influence
from relays we don't "know" is limited to some fraction, like 50% or 25%.

(B) I don't know what you have in mind with verifying a physical address
(somebody goes there in person? somebody sends a postal letter and waits
for a response?), but I think it's trying to be a proxy for verifying
that we trust the relay operator, and I think we should brainstorm more
options for achieving this trust. In particular, I think "humans knowing
humans" could provide a stronger foundation.

More generally, I think we need to very carefully consider the extra
steps we require from relay operators (plus the work they imply for
ourselves), and what security we get from them. Is verifying that each
relay corresponds to some email address worth the higher barrier in
being a relay operator? Are there other approaches that achieve a better
balance? The internet has a lot of experience now on sybil-resistance
ideas, especially on ones that center around proving online resources
(and it's mostly not good news).

(C) Whichever mechanism(s) we pick for assigning trust to relays,
one gap that's been bothering me lately is that we lack the tools for
tracking and visualizing which relays we trust, especially over time,
and especially with the amount of network churn that the Tor network
sees. It would be great to have an easier tool where each of us could
assess the overall network by whichever "trust" mechanisms we pick --
and then armed with that better intuition, we could pick the ones that
are most ready for use now and use them to influence network weights.



At the same time, we need to take other approaches to reduce the impact
and incentives for having evil relays in the network. For examples:

(1) We need to finish getting rid of v2 onion services, so we stop the
stupid arms race with threat intelligence companies who run relays in
order to get the HSDir flag in order to scrape legacy onion addresses.

(2) We need to get rid of http and other unauthenticated internet protocols:
I've rebooted this ticket:
https://gitlab.torproject.org/tpo/applications/tor-browser/-/issues/19850
with a suggestion of essentially disabling http connections when the
security slider is set to 'safer' or 'safest', to see if that's usable
enough to eventually make it the default in Tor Browser.

(3) We need bandwidth measuring techniques that are more robust and
harder to game, e.g. the design outlined in FlashFlow:
https://arxiv.org/abs/2004.09583

--Roger



[tor-relays] Work with ISPs

2020-07-06 Thread Charly Ghislain
Hi list,

With the recent warning by nusenu about the malicious relays and the
proposal to work around the issue, I've been wondering:

Did anyone ever try to convince an ISP to put a low-cap tor relay on the
routers of their 'unlimited bandwidth' clients?
Or has there been any discussion on that matter already?

I understand most would be strongly reluctant, but I think some may find a
commercial/marketing value in this.

Thanks,

Charly


Re: [tor-relays] >23% Tor exit relay capacity found to be malicious - call for support for proposal to limit large scale attacks

2020-07-06 Thread Scott Bennett
nusenu wrote:

>
> Pascal Terjan:
> > I am not convinced it would help against large scale attacks.
> > Running 50 relays is not much, and if each was providing 0.49% of
> > capacity that would give them 24.5%...
> > I would expect that an attacker would create more relays than that, and
> > unless there is a good way to find out this is a single entity, they
> > will all be well below 0.5%
>
>
> Yes, they will try to circumvent thresholds by pretending to not be a group.
> The good thing is that this requires additional resources and time on the
> attacker's side to hide the fact that they are adding many relays without
> triggering certain detections.
>
     Your proposed method of delaying the problem would impose a labor burden
on the tor project as well and would be slow to react to changes.  Why would
an automated solution not work?  For example, if the directory authorities
calculate the traffic percentages every hour or so, or even every several
hours, then why not just remove the Guard or Exit flag from any guard or exit
exceeding the publicized percentage?  That would be a fast reaction and would
not depend upon multiple human actions.  You might also implement a "repeat
offender" policy, whereby if the authorities lifted a relay's Exit flag more
than n times within a month, a BadExit flag would be applied in addition,
which then (and only then) would require the operator to contact the tor
project about it.
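
A sketch of that policy in Python (assuming the authorities already compute
per-relay traffic fractions; all names and thresholds here are hypothetical):

    from collections import defaultdict

    THRESHOLD = 0.005   # publicized per-relay cap (0.5%)
    MAX_STRIKES = 3     # n: flag removals per month before BadExit

    strikes = defaultdict(int)  # fingerprint -> removals this month

    def hourly_check(relays):
        """relays: dict of fingerprint -> (traffic fraction, set of flags)."""
        for fp, (fraction, flags) in relays.items():
            if fraction > THRESHOLD:
                flags.discard("Exit")
                flags.discard("Guard")
                strikes[fp] += 1
                if strikes[fp] > MAX_STRIKES:
                    # repeat offender: only now does the operator
                    # have to contact the tor project
                    flags.add("BadExit")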


  Scott Bennett, Comm. ASMELG, CFIAG
**
* Internet:   bennett at sdf.org   *xor*   bennett at freeshell.org  *
**
* "A well regulated and disciplined militia, is at all times a good  *
* objection to the introduction of that bane of all free governments *
* -- a standing army."   *
*-- Gov. John Hancock, New York Journal, 28 January 1790 *
**


Re: [tor-relays] >23% Tor exit relay capacity found to be malicious - call for support for proposal to limit large scale attacks

2020-07-06 Thread Michael Gerstacker
On Sun., 5 July 2020 at 18:36, nusenu wrote:

> Hi,
>
> I'm currently writing a follow-up blog post to [1] about a large scale
> malicious tor exit relay operator
> that did run more than 23% of the Tor network's exit capacity (May 2020)
> before (some) of it got reported to the bad-relays team and
> subsequently removed from the network by the Tor directory authorities.
> After the initial removal the malicious actor quickly restored its
> activities and
> was back at >20% of the Tor network's exit capacity within weeks (June
> 2020).
>
> [1]
> https://medium.com/@nusenu/the-growing-problem-of-malicious-relays-on-the-tor-network-2f14198af548
>
> To prevent this from happening over and over again
> I'm proposing two simple but to some extent effective relay requirements
> to make malicious relay operations more expensive, time consuming,
> less sustainable and more risky for such actors:
>
> a) require a verified email address for the exit or guard relay flag.
> (automated verification, many relays)
>
> b) require a verified physical address for large operators (>=0.5% exit or
> guard probability)
> (manual verification, low number of operators).
> It is not required that the address is public or stored after it got
> verified.
> For details see below [2].
>
> 0.5% exit probability is currently about 500-600 Mbit/s of advertised
> bandwidth.
>
>
> Q: How many operators would be affected by the physical address
> verification requirement if we use 0.5% as a threshold?
> A: About 30 operators.
>
> There are currently about 18 exit [3] and 12 guard operators that run
> >0.5% exit/guard capacity
> if we ignore the fresh exit groups from 2020.
> Most exit operators (14 out of these 18) are organizations with public
> addresses or have their address published in WHOIS
> anyway.
>
> At the end of the upcoming blog post I'd like to give people some idea as
> to how much support this proposal has.
>
> Please let me know if you find this idea to limit attackers useful,
> especially if you are a long term relay operator,
> one of the 30 operators running >=0.5% exit/guard capacity, a Tor
> directory authority operator or part of The Torproject.
>
>
> thanks for your support to fight malicious tor relays!
> nusenu
> --
> https://mastodon.social/@nusenu
>
>
> [2]
> Physical address verification procedure could look like this:
>
> The Torproject publishes a central registry of trusted entities that
> agreed to verify addresses of large scale operators.
>
> The registry is broken down by area so no central entity needs to see all
> addresses or is in the position to block all submissions.
> (even though the number of physical address verifications is expected to
> stay below 50 for the time being).
>
> Examples could be:
>
> Riseup.net:  US, ...
> Chaos Computer Club (CCC) :  DE, ...
> DFRI: SE, ...
>
> (these organizations host Tor directory authorities)
>
>
> - Relay operators that would like to run more than 0.5% guard/exit
> fraction select their respective area and contact the entity to
> initiate verification.
>
> - before sending an address verification request the operator verifies
> that they meet the following requirements:
>   - the oldest relay is not younger than two months (
> https://community.torproject.org/relay/community-resources/swag/ )
>   - all relays have a proper MyFamily configuration
>   - relays include the verified email address and PGP key fingerprint in
> the relay's ContactInfo
>   - at least one of their relays gained the exit or guard flag
>   - they have a sustained bandwidth usage of at least 100 Mbit/s
> (accumulated)
>   - intention to run the capacity for at least 4 months
>
> - upon receiving a request the above requirements are verified by the
> verification entity in addition to:
>   - relay(s) are currently running
>   - the address is in the entity's area
>
> - a random string is generated by the address verification entity and sent
> with the welcome t-shirt (if requested) to the operator
>
> - after sending the token the address is deleted
>
> - upon receiving the random string the operator sends it back via email to
> the verification entity
> while signing the email with the PGP key mentioned in the relay's
> ContactInfo
>
> - the verification entity compares the received string with the generated
> and mailed string
>
> - if the string matches the entity sends the relay fingerprint to the
> directory authority list to unlock the cap for the operator
>
> After this one-time procedure the operator can add more relays as long as
> they are in the same family as the approved relay (no new verification
> needed).
>
>
I would not be very happy to be required to give away personally identifying
information, even to a "trusted entity".

Even if Tor is focused on offering anonymity to its users and not
necessarily to its relay operators, a move towards this by an organisation
that supports privacy wherever it can would seem like a strange idea to
me.

I remember that i