Re: [tor-relays] Home Tor Middle Relay Blacklisted

2022-01-12 Thread Gary C. New via tor-relays
Zorc and Sebastian:
I appreciate you sharing your experiences and solutions. Presently, I have a 
couple of Reverse Proxies (domain-based, using dnsmasq) on my router already 
routing to dedicated Split-Tunneling VPNs with Off-Shore Exits (for other 
purposes). I'm not sure why it didn't dawn on me to implement an On-Shore 
configuration for sites that blacklist Tor. My only concern with this type of 
configuration is latency, but it's better than the current forbidden situation.
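For anyone curious what such a domain-based split tunnel looks like, here is a minimal sketch. The ipset name, firewall marks, routing table number, and the `tun0` interface are all assumptions for illustration, not details from Gary's actual setup:

```shell
# dnsmasq.conf side: resolving a listed domain also adds its
# addresses to an ipset as a side effect of the DNS lookup:
#   ipset=/reuters.com/venmo.com/tor_blocked

# Create the set; entries expire so stale addresses age out.
ipset create tor_blocked hash:ip timeout 3600

# Mark packets destined for addresses in the set.
iptables -t mangle -A PREROUTING \
  -m set --match-set tor_blocked dst -j MARK --set-mark 0x1

# Policy routing: marked traffic uses a table whose default
# route is the VPN tunnel interface.
ip rule add fwmark 0x1 table 100
ip route add default dev tun0 table 100
```

With this in place, only lookups for the listed domains are steered through the VPN; everything else continues to use the normal default route.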
I appreciate the heads-up related to Tor and IPv6-only being a non-starter. It 
sounds like if I were to migrate to a Tor Bridge, I would need to have my IPv4 
refreshed to a new IP. I think I'd prefer to contribute as a Tor Relay, though. 
I could possibly continue to operate my Middle Relay on the IPv4 address and 
then use the IPv6 address as my default gateway, but then I'd have to migrate 
all my private network devices to IPv6.
Hmm... Fork in the road. Which to take?
Respectfully,

Gary—
This Message Originated by the Sun.
iBigBlue 63W Solar Array (~12 Hour Charge)
+ 2 x Charmast 26800mAh Power Banks
= iPhone XS Max 512GB (~2 Weeks Charged) 

On Monday, January 10, 2022, 4:41:50 AM PST, zorc via tor-relays wrote:

 > Fellow Tor Operators:
>
> After about 9 months of running Tor as a Middle Relay from my home network, 
> I'm beginning to experience signs of my public semi-static IPv4 address being 
> blacklisted with 403 Forbidden errors from Reuters and Venmo. I've confirmed 
> by successfully accessing both sites with my mobile internet connection.
>
> I'm not surprised that Venmo is blacklisting, but I'm extremely surprised to 
> be blocked by Reuters. You would think such an organization would be a 
> proponent of free speech. I wouldn't be surprised if Reuters used Tor in some 
> capacity. It doesn't make sense.
>
> When Googling my public semi-static IPv4 address, it appears in several Tor 
> blacklists. That being said, I'm at the point that, at a minimum, I will have 
> to ask my ISP to freshen my public semi-static IPv4 address.
>
> Previously, when speaking with my ISP, they mentioned offering a static IPv6 
> address at no cost. I'm wondering if that offer was with the expectation that 
> I would have to give up my existing IPv4 semi-static address? If they 
> provided both IPv4 and IPv6 addresses, at no cost, I'd like to run a Tor 
> Bridge using the semi-static IPv4 address and configure my existing Middle 
> Tor Relay to use the new static IPv6 address. That way, I'll be able to 
> browse unimpeded through the semi-static IPv4 address and not have to be 
> concerned with the static IPv6 address being blacklisted.
>
> Are other Tor Operators experiencing similar issues? Will I continue to 
> experience blacklisting issues, even after migrating to a Tor Bridge? What 
> are best practices in moving an existing Tor Relay to a new address, while 
> avoiding the loss of flags?
>
> As always, I appreciate the feedback.
>
> Respectfully,
>
> Gary

Hi Gary,

I'm having a similar experience. My main solution is (somewhat ironically, I 
guess) to use a VPN. The two together work reasonably well: when the VPN is 
blocked by something, my residential IP usually is not. What I also did once 
was directly contact a service I pay for and ask them to please use the Tor 
exit list instead of the all-relays list, if they really need to block one. 
After some back and forth, the site started working again, although I never 
heard back about whether this was actually due to my enquiry. Maybe this could 
work for Reuters too, if you explain to them how Tor helps journalists in 
particular?
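The distinction zorc is drawing can be made concrete: the exit list contains only relays that allow traffic out to the open internet, so a service blocking it instead of the full relay list leaves middle relays and bridges unaffected. A toy illustration follows; the address sets are invented documentation addresses, and in practice the real lists would come from sources such as check.torproject.org or Onionoo:

```python
def classify(ip, all_relays, exits):
    """Show how a blocklist decision differs between the two lists."""
    if ip in exits:
        return "exit"            # blocked by either list
    if ip in all_relays:
        return "non-exit relay"  # blocked only by the all-relays list
    return "not a relay"

# Toy data: one exit, one middle relay (addresses are invented).
all_relays = {"198.51.100.7", "203.0.113.9"}
exits = {"198.51.100.7"}

print(classify("203.0.113.9", all_relays, exits))  # non-exit relay
```

A service that switches from the all-relays list to the exit list stops returning 403s to middle-relay operators like Gary while still blocking actual Tor exit traffic.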
As for IPv6: my understanding is that IPv6-only relays are currently not 
possible, i.e. your IPv4 address would still show up on the blocklists. In 
addition, it would also show up as a Tor relay, i.e. your bridge would become 
blocked too. I'm not 100% sure on this, so someone please correct me if I'm wrong.
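zorc's caveat matches how relay configuration works: an IPv6 ORPort is only ever additional to a reachable IPv4 one. A sketch of a dual-stack relay torrc fragment (the addresses are RFC 5737/RFC 3849 documentation examples, not real ones):

```
# torrc sketch: a relay must have a reachable IPv4 ORPort;
# IPv6 can only be added alongside it, not substituted for it.
ORPort 203.0.113.9:9001       # IPv4, required
ORPort [2001:db8::9]:9001     # IPv6, optional addition
```

So moving the relay "onto IPv6" would not take the IPv4 address off the public relay lists; both addresses are published in the relay's descriptor.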

Cheers,
zorc

PGP 64c416e08f0575609d6212c075c6b8e01967b659

Sent with ProtonMail Secure Email.
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


[tor-relays] Ansible role to deploy Bridges

2022-01-12 Thread Erasme - Relay Operator via tor-relays
Hi all,

In the effort of deploying obfs4 bridges for the community we are sharing our 
Ansible role that allowed us to deploy multiple nodes:

https://github.com/NewNewYorkBridges/ansible-tor-bridge

For now it is only available on Debian but we will make it available for other 
distributions.

We hope this role is helpful for the community. Feel free to send us your 
feedback!

Kind regards,
Erasme


Re: [tor-relays] cases where relay overload can be a false positive

2022-01-12 Thread Mike Perry

On 1/12/22 5:36 PM, David Goulet wrote:

On 01 Jan (21:12:38), s7r wrote:

One of my relays (guard, not exit) started to report being overloaded since
once week ago for the first time in its life.

The consensus weight and advertised bandwidth are proper as per what they
should be, considering the relay's configuration. More than this, they have
not changed for years. So, I started to look at it more closely.

Apparently the overload is triggered at 5-6 days by flooding it with circuit
creation requests. All I can see in tor.log is:

[warn] Your computer is too slow to handle this many circuit creation
requests! Please consider using the MaxAdvertisedBandwidth config option or
choosing a more restricted exit policy. [68382 similar message(s) suppressed
in last 482700 seconds]

[warn] Your computer is too slow to handle this many circuit creation
requests! Please consider using the MaxAdvertisedBandwidth config option or
choosing a more restricted exit policy. [7882 similar message(s) suppressed
in last 60 seconds]

This message is logged 4 to 6 times, with a 1-minute (60 sec) interval
between each warn entry.

After that, the relay is back to normal. So it feels like it is being probed
or something like this. CPU usage is at 65%, RAM is at under 45%, SSD no
problem, bandwidth no problem.


Very plausible theory, especially in the context of such a "burst" of traffic;
we can rule out that your relay has all of a sudden become the facebook.onion
guard.


Metrics port says:

tor_relay_load_tcp_exhaustion_total 0

tor_relay_load_onionskins_total{type="tap",action="processed"} 52073
tor_relay_load_onionskins_total{type="tap",action="dropped"} 0
tor_relay_load_onionskins_total{type="fast",action="processed"} 0
tor_relay_load_onionskins_total{type="fast",action="dropped"} 0
tor_relay_load_onionskins_total{type="ntor",action="processed"} 8069522
tor_relay_load_onionskins_total{type="ntor",action="dropped"} 273275

So if we compare the dropped ntor circuits against the processed ntor circuits
we end up with a reasonable % (it's >8 million vs <300k).


Yeah, so this is a ~3.38% drop, which immediately triggers the overload signal.
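The percentage comes straight from the two ntor counters above (dropped divided by processed). A small sketch that parses MetricsPort lines of the form shown and computes the drop ratio; it assumes only the text format quoted in this thread:

```python
import re

# Excerpt in the MetricsPort format quoted above.
METRICS = """
tor_relay_load_onionskins_total{type="ntor",action="processed"} 8069522
tor_relay_load_onionskins_total{type="ntor",action="dropped"} 273275
"""

def ntor_drop_percent(text):
    """Return dropped/processed for ntor onionskins, as a percentage."""
    counts = dict(re.findall(r'type="ntor",action="(\w+)"\}\s+(\d+)', text))
    return 100.0 * int(counts["dropped"]) / int(counts["processed"])

print(f"{ntor_drop_percent(METRICS):.2f}%")  # ~3.39%
```

Running this on s7r's numbers reproduces the figure David mentions (about 3.38-3.39%), well above the threshold at which the relay flags itself overloaded.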


So the question here is: does the computed consensus weight of a relay
change if that relay keeps sending reports to directory authorities that it
is being overloaded? If yes, could this be triggered by an attacker, in
order to arbitrary decrease a relay's consensus weight even when it's not
really overloaded (to maybe increase the consensus weights of other
malicious relays that we don't know about)?


Correct, this is a possibility indeed. I'm not entirely certain that this is
the case at the moment, as sbws (the bandwidth authority software) might not
be downgrading the bandwidth weights just yet.

But regardless, the point is that this is where we are headed. We have
control over this, so now is a good time to notice these problems and act.

I'll try to get back to you asap after talking with the network team.


My thinking is that sbws should avoid reducing the weight of a relay that is 
overloaded until it sees a series of these overload lines with fresh 
timestamps. For example, a single report whose timestamp never updates 
again could be tracked but not reacted to, until the timestamp changes N 
times.


We can (and should) also have logic that prevents sbws from demoting the 
capacity of a Guard relay so much that it loses the Guard flag, so DoS 
attacks can't easily cause clients to abandon a Guard, unless it goes 
entirely down.


Both of these things can be done on the sbws side. This would not stop 
short blips of overload from still being reported on the metrics portal, 
but maybe we want to keep that property.



Also, as a side note, I think that if the dropped/processed ratio is not
over 15% or 20% a relay should not consider itself overloaded. Would this be
a good idea?


Plausible that it could be a better idea! It's unclear what the optimal 
percentage is, but personally I'm leaning towards a higher threshold, so the 
warning isn't triggered under normal circumstances.

But I think that even if we raise this to, say, 20%, it might not stop an 
attacker from triggering it. It might just take a bit longer.


Hrmm. Parameterizing this threshold as a consensus parameter might be a 
good idea. I think that if an attack has to be "severe" and "ongoing" long 
enough that a relay has lost capacity and/or the ability to complete 
circuits, and the relay can't do anything about it, then that relay 
unfortunately should not be used as much. It's not likely a circuit through 
it would succeed, or be fast enough to use, in that case anyway.


We need better DoS defenses generally :/

--
Mike Perry


Re: [tor-relays] cases where relay overload can be a false positive

2022-01-12 Thread David Goulet
On 01 Jan (21:12:38), s7r wrote:
> Hello,

Hi s7r!

Sorry for the delay, some vacationing happened for most of us eheh :).

> 
> One of my relays (guard, not exit) started to report being overloaded since
> once week ago for the first time in its life.
> 
> The consensus weight and advertised bandwidth are proper as per what they
> should be, considering the relay's configuration. More than this, they have
> not changed for years. So, I started to look at it more closely.
> 
> Apparently the overload is triggered at 5-6 days by flooding it with circuit
> creation requests. All I can see in tor.log is:
> 
> [warn] Your computer is too slow to handle this many circuit creation
> requests! Please consider using the MaxAdvertisedBandwidth config option or
> choosing a more restricted exit policy. [68382 similar message(s) suppressed
> in last 482700 seconds]
> 
> [warn] Your computer is too slow to handle this many circuit creation
> requests! Please consider using the MaxAdvertisedBandwidth config option or
> choosing a more restricted exit policy. [7882 similar message(s) suppressed
> in last 60 seconds]
> 
> This message is logged 4 to 6 times, with a 1-minute (60 sec) interval
> between each warn entry.
> 
> After that, the relay is back to normal. So it feels like it is being probed
> or something like this. CPU usage is at 65%, RAM is at under 45%, SSD no
> problem, bandwidth no problem.

Very plausible theory, especially in the context of such a "burst" of traffic;
we can rule out that your relay has all of a sudden become the facebook.onion
guard.

> 
> Metrics port says:
> 
> tor_relay_load_tcp_exhaustion_total 0
> 
> tor_relay_load_onionskins_total{type="tap",action="processed"} 52073
> tor_relay_load_onionskins_total{type="tap",action="dropped"} 0
> tor_relay_load_onionskins_total{type="fast",action="processed"} 0
> tor_relay_load_onionskins_total{type="fast",action="dropped"} 0
> tor_relay_load_onionskins_total{type="ntor",action="processed"} 8069522
> tor_relay_load_onionskins_total{type="ntor",action="dropped"} 273275
> 
> So if we compare the dropped ntor circuits against the processed ntor circuits
> we end up with a reasonable % (it's >8 million vs <300k).

Yeah, so this is a ~3.38% drop, which immediately triggers the overload signal.

> 
> So the question here is: does the computed consensus weight of a relay
> change if that relay keeps sending reports to directory authorities that it
> is being overloaded? If yes, could this be triggered by an attacker, in
> order to arbitrary decrease a relay's consensus weight even when it's not
> really overloaded (to maybe increase the consensus weights of other
> malicious relays that we don't know about)?

Correct, this is a possibility indeed. I'm not entirely certain that this is
the case at the moment, as sbws (the bandwidth authority software) might not
be downgrading the bandwidth weights just yet.

But regardless, the point is that this is where we are headed. We have
control over this, so now is a good time to notice these problems and act.

I'll try to get back to you asap after talking with the network team.

> 
> Also, as a side note, I think that if the dropped/processed ratio is not
> over 15% or 20% a relay should not consider itself overloaded. Would this be
> a good idea?

Plausible that it could be a better idea! It's unclear what the optimal 
percentage is, but personally I'm leaning towards a higher threshold, so the 
warning isn't triggered under normal circumstances.

But I think that even if we raise this to, say, 20%, it might not stop an 
attacker from triggering it. It might just take a bit longer.

Thanks for your feedback! We'll get back to this thread asap.

David

-- 
SSS3IvmdWRIFm4XsNZMnPLAvdQnAxoWYx1+Twou0Ay0=




Re: [tor-relays] Bridge showing offline

2022-01-12 Thread torbridge

Hi all,

now it has hit my bridge again: it's reported offline on metrics 
(https://metrics.torproject.org/rs.html#details/811CA1BA98FE0987F552859489EF74B5004BB584) 
with nothing in the logs, except that I have had no unique clients for 
about 3 days.

I have my ports monitored by uptimerobot as well and they are fine.

Does anyone have an idea on how to track this down before I do a restart?
As mentioned below: I am running tor 0.4.6.8 on FreeBSD in a jail.
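One way to watch for exactly this discrepancy (metrics says down, local checks say up) is to compare Onionoo's `running` flag for the bridge against your own port check. This is only a sketch: the document structure is simplified, the hashed fingerprint is the one from the metrics URL above, and in a real script the document would come from an HTTPS request to onionoo.torproject.org rather than a local dict:

```python
def metrics_disagrees(onionoo_doc, port_reachable):
    """True when Onionoo reports the bridge not running but the
    ORPort is locally reachable, i.e. the situation described here."""
    bridges = onionoo_doc.get("bridges", [])
    running = bool(bridges) and bridges[0].get("running", False)
    return port_reachable and not running

# Simulated Onionoo details reply for a bridge metrics thinks is down.
doc = {"bridges": [{"hashed_fingerprint":
                    "811CA1BA98FE0987F552859489EF74B5004BB584",
                    "running": False}]}
print(metrics_disagrees(doc, port_reachable=True))  # True
```

Logging the result on a timer (alongside the uptimerobot port checks) would at least give a timeline of when the mismatch starts, which could help correlate it with anything in the tor or jail logs.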
Thanks for your help!

Cord

On 07.01.2022 09:08, torbri...@wp1180731.server-he.de wrote:

Hi,

I experience a similar problem and ended up restarting the service 
once a week. With this hack, metrics shows it online all the time and 
I get connections. There were no traces in the logs either.
BTW: I am running 0.4.6.8 on FreeBSD. If someone wants to guide me on how 
to analyze this further, I am happy to help.


Cord

On 06.01.2022 19:52, Eddie wrote:
This has now happened again, on the same bridge: it shows off-line on 
the metrics, but the logs show it running with activity taking place.


Guess I'll keep an eye on the metrics to see if it pops back to 
available.


I'm wondering how often this might be happening, because I don't 
check the metrics that often, and the instance documented below didn't 
change the Uptime or Last Restart to indicate anything had happened. 
So unless I notice while it's in this state, like now, I have no way 
of knowing whether it happens regularly or not.


Cheers.




On 10/14/2021 12:11 PM, Eddie wrote:

On 10/14/2021 2:44 AM, Bleedangel Tor Admin wrote:
You are running 0.4.5.8, maybe updating to a newer version of tor 
will help?


Sent from ProtonMail for iOS


On Thu, Oct 14, 2021 at 03:02, Georg Koppen  wrote:

Eddie:
> Looking at tor metrics, one of my bridges is showing as off-line:
> B080140DC1BAB5B86D1CE5A4CA2EF64F20282440
>
> However, the log isn't showing any issues:
>
> Oct 14 00:00:28.000 [notice] Tor 0.4.5.8 opening new log file.
> Oct 14 00:00:28.000 [notice] Configured hibernation. This interval began at 2021-10-14 00:00:00; the scheduled wake-up time was 2021-10-14 00:00:00; we expect to exhaust our quota for this interval around 2021-10-15 00:00:00; the next interval begins at 2021-10-15 00:00:00 (all times local)
> Oct 14 01:49:40.000 [notice] Heartbeat: Tor's uptime is 124 days 6:00 hours, with 0 circuits open. I've sent 907.23 GB and received 922.28 GB. I've received 63052 connections on IPv4 and 8375 on IPv6. I've made 512684 connections with IPv4 and 100974 with IPv6.
> Oct 14 01:49:40.000 [notice] While not bootstrapping, fetched this many bytes: 1801791624 (server descriptor fetch); 175792 (server descriptor upload); 221750347 (consensus network-status fetch); 10974 (authority cert fetch); 19905655 (microdescriptor fetch)
> Oct 14 01:49:40.000 [notice] Heartbeat: Accounting enabled. Sent: 140.52 MB, Received: 140.69 MB, Used: 281.21 MB / 200.00 GB, Rule: sum. The current accounting interval ends on 2021-10-15 00:00:00, in 22:10 hours.
> Oct 14 01:49:40.000 [notice] Heartbeat: In the last 6 hours, I have seen 14 unique clients.
>
> Initially I thought of a previous issue I had with IPv6 connectivity, but I don't think this is the problem here, as the 2nd bridge on the same server is showing on-line. Also, an IPv6 port scan shows the ports for both bridges as accessible.
>
> Ideas ??

I wonder whether that is another instance of
https://gitlab.torproject.org/tpo/core/tor/-/issues/40424. Hard to tell,
though. Does that issue happen regularly?

Georg



I didn't touch anything, and this morning the metrics say the bridge 
is running normally, with an uptime of over 125 days.


Looks like it might have been a metrics issue, not the bridge itself.

Cheers.






