[tor-relays] 1 circuit using 1.5Gig of RAM? [0.3.3.2-alpha]

2018-02-12 Thread starlight . 2017q4
On 12 Feb (19:44:02 UTC), David Goulet wrote:
>Wow... 1599323088 bytes is insane. This should _not_ happen for only 1
>circuit. We actually have checks in place to avoid this, but it seems they
>either totally failed or we have an edge case.
>
>Can you tell me what scheduler you were using? (Look for "Scheduler" in the
>notice log.)
>
>Any warnings in the logs that you could share, or was everything normal?
>
>Finally, can you share the OS you are running this relay on and, if Linux,
>the kernel version?


Don't know if it's relevant, but my relay was hit in a similar fashion in
December. It was running 0.2.9.14 (no KIST) on Linux at the time (no other
related log messages; MaxMemInQueues=1GB, reduced from 2GB after an OOM
termination):

Dec 15 15:28:52 Tor[]: assign_to_cpuworker failed. Ignoring.
Dec 15 15:48:16 Tor[]: assign_to_cpuworker failed. Ignoring.
Dec 15 16:39:44 Tor[]: We're low on memory.  Killing circuits with over-long 
queues. (This behavior is controlled by MaxMemInQueues.)
Dec 15 17:39:45 Tor[]: Removed 442695264 bytes by killing 1 circuits; 18766 
circuits remain alive. Also killed 0 non-linked directory connections.
Dec 15 19:03:22 Tor[]: We're low on memory.  Killing circuits with over-long 
queues. (This behavior is controlled by MaxMemInQueues.)
Dec 15 19:03:23 Tor[]: Removed 1060505952 bytes by killing 1 circuits; 19865 
circuits remain alive. Also killed 0 non-linked directory connections.

More recently (and with a more reasonable MaxMemInQueues=512MB), running 0.3.2.9:

Feb  4 20:12:39 Tor[]: Scheduler type KIST has been enabled.
Feb  6 08:12:41 Tor[]: Heartbeat: Tor's uptime is 1 day 11:59 hours. I've sent 
29.00 MB and received 364.99 MB.
Feb  6 14:04:43 Tor[]: We're low on memory.  Killing circuits with over-long 
queues. (This behavior is controlled by MaxMemInQueues.)
Feb  6 14:04:43 Tor[]: Removed 166298880 bytes by killing 2 circuits; 20213 
circuits remain alive. Also killed 0 non-linked directory connections.
Feb  6 14:11:17 Tor[]: Heartbeat: Tor's uptime is 1 day 17:59 hours, with 20573 
circuits open. I've sent 910.29 GB and received 902.58 GB.
Feb  6 14:11:17 Tor[]: Circuit handshake stats since last time: 1876499/3018306 
TAP, 4322015/4322131 NTor.
Feb  6 14:11:17 Tor[]: Since startup, we have initiated 0 v1 connections, 0 v2 
connections, 1 v3 connections, and 23846 v4 connections; and received 6 v1 
connections, 7844 v2 connections, 11906 v3 connections, and 214565 v4 
connections.
Feb  6 14:12:41 Tor[]: Heartbeat: Tor's uptime is 1 day 17:59 hours. I've sent 
31.62 MB and received 420.63 MB.
Feb  6 14:22:50 Tor[]: We're low on memory.  Killing circuits with over-long 
queues. (This behavior is controlled by MaxMemInQueues.)
Feb  6 14:22:50 Tor[]: Removed 181501584 bytes by killing 2 circuits; 19078 
circuits remain alive. Also killed 0 non-linked directory connections.
Feb  6 15:01:50 Tor[]: We're low on memory.  Killing circuits with over-long 
queues. (This behavior is controlled by MaxMemInQueues.)
Feb  6 15:01:50 Tor[]: Removed 105918912 bytes by killing 1 circuits; 19679 
circuits remain alive. Also killed 0 non-linked directory connections.
Feb  6 15:46:24 Tor[]: Channel padding timeout scheduled 157451ms in the past. 
Feb  6 19:30:36 Tor[]: new bridge descriptor 'Binnacle' (fresh): 
$4F0DB7E687FC7C0AE55C8F243DA8B0EB27FBF1F2~Binnacle at 108.53.208.157
Feb  6 20:11:17 Tor[]: Heartbeat: Tor's uptime is 1 day 23:59 hours, with 18045 
circuits open. I've sent 1043.74 GB and received 1034.65 GB.
Feb  6 20:11:17 Tor[]: Circuit handshake stats since last time: 260970/368918 
TAP, 3957087/3957791 NTor.

Perhaps this indicates some newer KIST mitigation logic is effective.



Re: [tor-relays] 1 circuit using 1.5Gig of RAM? [0.3.3.2-alpha]

2018-02-12 Thread David Goulet
On 12 Feb (21:14:14), Stijn Jonker wrote:
> Hi David,
> 
> On 12 Feb 2018, at 20:44, David Goulet wrote:
> 
> > On 12 Feb (20:09:35), Stijn Jonker wrote:
> >> Hi all,
> >>
> >> So in general, 0.3.3.1-alpha-dev and 0.3.3.2-alpha running on two nodes
> >> without any connection limits on the iptables firewall seem to be a lot
> >> more robust against the recent increase in clients (or a possible [D]DoS).
> >> But tonight, for a short period of time, one of the relays was running a
> >> bit "hot", so to speak.
> >>
> >> Only to be greeted by this log entry:
> >> Feb 12 18:54:55 tornode2 Tor[6362]: We're low on memory (cell queues total
> >> alloc: 1602579792 buffer total alloc: 1388544, tor compress total alloc:
> >> 1586784 rendezvous cache total alloc: 489909). Killing circuits
> >> with over-long queues. (This behavior is controlled by MaxMemInQueues.)
> >> Feb 12 18:54:56 tornode2 Tor[6362]: Removed 1599323088 bytes by killing 1
> >> circuits; 39546 circuits remain alive. Also killed 0 non-linked directory
> >> connections.
> >
> > Wow... 1599323088 bytes is insane. This should _not_ happen for only 1
> > circuit. We actually have checks in place to avoid this, but it seems they
> > either totally failed or we have an edge case.
> Yeah, it felt a "bit" much. A couple of megs I wouldn't have shared :-)
> 
> > Can you tell me what scheduler you were using? (Look for "Scheduler" in the
> > notice log.)
> 
> The scheduler always seems to be KIST (I never played with it or tried to
> change it):
> Feb 11 19:58:24 tornode2 Tor[6362]: Scheduler type KIST has been enabled.
> 
> > Any warnings in the logs that you could share, or was everything normal?
> Besides the ESXi host giving an alarm about CPU usage, I couldn't find
> anything odd in the logs around that time.
> The general syslog logging worked both locally on the host and remotely, as
> the hourly cron jobs surround this entry.
> 
> 
> > Finally, can you share the OS you are running this relay on and, if Linux,
> > the kernel version?
> 
> Debian Stretch, Linux tornode2 4.9.0-5-amd64 #1 SMP Debian 4.9.65-3+deb9u2
> (2018-01-04) x86_64 GNU/Linux
> Not sure it matters, but it's an ESXi-based VM running with 2 vCPUs on an
> i5-5300U, with 4 GB of memory.
> 
> No problem, happy to squash bugs. I guess that's one of the "musts" when
> running alpha code, although this might not be alpha-related (I can't judge).

Thanks for all the information!

I've opened https://bugs.torproject.org/25226

Cheers!
David

-- 
1xYrq8XhE25CKCQqvcX/cqKg04v1HthMMM3PwaRqqdU=




Re: [tor-relays] 1 circuit using 1.5Gig of RAM? [0.3.3.2-alpha]

2018-02-12 Thread Stijn Jonker
Hi David,

On 12 Feb 2018, at 20:44, David Goulet wrote:

> On 12 Feb (20:09:35), Stijn Jonker wrote:
>> Hi all,
>>
>> So in general, 0.3.3.1-alpha-dev and 0.3.3.2-alpha running on two nodes
>> without any connection limits on the iptables firewall seem to be a lot
>> more robust against the recent increase in clients (or a possible [D]DoS).
>> But tonight, for a short period of time, one of the relays was running a
>> bit "hot", so to speak.
>>
>> Only to be greeted by this log entry:
>> Feb 12 18:54:55 tornode2 Tor[6362]: We're low on memory (cell queues total
>> alloc: 1602579792 buffer total alloc: 1388544, tor compress total alloc:
>> 1586784 rendezvous cache total alloc: 489909). Killing circuits
>> with over-long queues. (This behavior is controlled by MaxMemInQueues.)
>> Feb 12 18:54:56 tornode2 Tor[6362]: Removed 1599323088 bytes by killing 1
>> circuits; 39546 circuits remain alive. Also killed 0 non-linked directory
>> connections.
>
> Wow... 1599323088 bytes is insane. This should _not_ happen for only 1
> circuit. We actually have checks in place to avoid this, but it seems they
> either totally failed or we have an edge case.
Yeah, it felt a "bit" much. A couple of megs I wouldn't have shared :-)

> Can you tell me what scheduler you were using? (Look for "Scheduler" in the
> notice log.)

The scheduler always seems to be KIST (I never played with it or tried to
change it):
Feb 11 19:58:24 tornode2 Tor[6362]: Scheduler type KIST has been enabled.
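
(As far as I can tell from the manual, there is also a Schedulers option to
pick it explicitly; a minimal torrc line would look something like the sketch
below, but I've simply left it at the default.)

  Schedulers KIST,KISTLite,Vanilla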

> Any warnings in the logs that you could share, or was everything normal?
Besides the ESXi host giving an alarm about CPU usage, I couldn't find
anything odd in the logs around that time.
The general syslog logging worked both locally on the host and remotely, as
the hourly cron jobs surround this entry.


> Finally, can you share the OS you are running this relay on and, if Linux,
> the kernel version?

Debian Stretch, Linux tornode2 4.9.0-5-amd64 #1 SMP Debian 4.9.65-3+deb9u2
(2018-01-04) x86_64 GNU/Linux
Not sure it matters, but it's an ESXi-based VM running with 2 vCPUs on an
i5-5300U, with 4 GB of memory.

No problem, happy to squash bugs. I guess that's one of the "musts" when
running alpha code, although this might not be alpha-related (I can't judge).

Thx,
Stijn



Re: [tor-relays] 1 circuit using 1.5Gig of RAM? [0.3.3.2-alpha]

2018-02-12 Thread Stijn Jonker

Hi Tor & Others,

On 12 Feb 2018, at 20:29, tor wrote:

> I see this occasionally. It's not specific to 0.3.3.x. I reported it back
> in October 2017:

Thx, I more or less added the version in the subject to clearly indicate
it was on an alpha release.

> https://lists.torproject.org/pipermail/tor-relays/2017-October/013328.html
>
> Roger replied here:
>
> https://lists.torproject.org/pipermail/tor-relays/2017-October/013334.html


Ah thanks, not sure why my google kung-fu missed this one.

> MaxMemInQueues is set to 1.5 GB by default, which is why the problematic
> circuit uses that much RAM before it's killed. You can lower MaxMemInQueues
> in torrc; however, that will obviously have other impacts on your relay. If
> you have plenty of RAM, I'd maybe just leave things alone for now, since Tor
> is already killing the circuit.


My tor nodes have 4 GB of RAM, so I also put MaxMemInQueues at 1.5G whilst
the (D)DoS attacks were more troublesome (I wasn't aware it was the default).


> I agree that in theory some mitigation against this would be nice, but I'm
> not smart enough to offer anything specific. It seems Roger and other devs
> are already thinking about the issue.


Not a coder myself (except for some scripting).

For those looking for the paper as well: the original URL gives a 403. I
believe this is a copy (I can't check for alterations or omitted slides, of
course): http://www.robgjansen.com/talks/sniper-dcaps-20131011.pdf


Thx,
Stijn


Re: [tor-relays] 1 circuit using 1.5Gig of RAM? [0.3.3.2-alpha]

2018-02-12 Thread David Goulet
On 12 Feb (20:09:35), Stijn Jonker wrote:
> Hi all,
> 
> So in general, 0.3.3.1-alpha-dev and 0.3.3.2-alpha running on two nodes
> without any connection limits on the iptables firewall seem to be a lot
> more robust against the recent increase in clients (or a possible [D]DoS).
> But tonight, for a short period of time, one of the relays was running a
> bit "hot", so to speak.
> 
> Only to be greeted by this log entry:
> Feb 12 18:54:55 tornode2 Tor[6362]: We're low on memory (cell queues total
> alloc: 1602579792 buffer total alloc: 1388544, tor compress total alloc:
> 1586784 rendezvous cache total alloc: 489909). Killing circuits
> with over-long queues. (This behavior is controlled by MaxMemInQueues.)
> Feb 12 18:54:56 tornode2 Tor[6362]: Removed 1599323088 bytes by killing 1
> circuits; 39546 circuits remain alive. Also killed 0 non-linked directory
> connections.

Wow... 1599323088 bytes is insane. This should _not_ happen for only 1
circuit. We actually have checks in place to avoid this, but it seems they
either totally failed or we have an edge case.

Can you tell me what scheduler you were using? (Look for "Scheduler" in the
notice log.)
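
(If you log to a file, something along the lines of the command below should
find it; the path here is just an example, so adjust it to wherever your Log
line points, or search your syslog if you log there.)

  grep Scheduler /var/log/tor/notices.log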

Any warnings in the logs that you could share, or was everything normal?

Finally, can you share the OS you are running this relay on and, if Linux,
the kernel version?

Big thanks!
David

-- 
1xYrq8XhE25CKCQqvcX/cqKg04v1HthMMM3PwaRqqdU=




Re: [tor-relays] 1 circuit using 1.5Gig of RAM? [0.3.3.2-alpha]

2018-02-12 Thread tor
I see this occasionally. It's not specific to 0.3.3.x. I reported it back in 
October 2017:

https://lists.torproject.org/pipermail/tor-relays/2017-October/013328.html

Roger replied here:

https://lists.torproject.org/pipermail/tor-relays/2017-October/013334.html

MaxMemInQueues is set to 1.5 GB by default, which is why the problematic
circuit uses that much RAM before it's killed. You can lower MaxMemInQueues in
torrc; however, that will obviously have other impacts on your relay. If you
have plenty of RAM, I'd maybe just leave things alone for now, since Tor is
already killing the circuit.
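
If you do want to lower it, it's a single torrc line; a minimal sketch (the
value below is only an example, pick whatever suits your RAM, then reload or
restart tor):

  MaxMemInQueues 512 MB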

I agree that in theory some mitigation against this would be nice, but I'm not
smart enough to offer anything specific. It seems Roger and other devs are
already thinking about the issue.



[tor-relays] 1 circuit using 1.5Gig of RAM? [0.3.3.2-alpha]

2018-02-12 Thread Stijn Jonker

Hi all,

So in general, 0.3.3.1-alpha-dev and 0.3.3.2-alpha running on two nodes
without any connection limits on the iptables firewall seem to be a lot
more robust against the recent increase in clients (or a possible [D]DoS).
But tonight, for a short period of time, one of the relays was running a
bit "hot", so to speak.

Only to be greeted by this log entry:
Feb 12 18:54:55 tornode2 Tor[6362]: We're low on memory (cell queues 
total alloc: 1602579792 buffer total alloc: 1388544, tor compress total 
alloc: 1586784 rendezvous cache total alloc: 489909). Killing circuits 
with over-long queues. (This behavior is controlled by MaxMemInQueues.)
Feb 12 18:54:56 tornode2 Tor[6362]: Removed 1599323088 bytes by killing 
1 circuits; 39546 circuits remain alive. Also killed 0 non-linked 
directory connections.
Feb 12 19:04:10 tornode2 Tor[6362]: Your network connection speed 
appears to have changed. Resetting timeout to 60s after 18 timeouts and 
1000 buildtimes.


So 1 circuit being able to claim 1.5 GB of RAM seems a bit much, whilst the
DoS protection does seem to be doing something (see below). Now, this could be
a new attack or just an error, etc. However, wouldn't some sort of fair memory
balance between circuits be another mitigation factor to consider? I'm not
saying it should be as strict as "circuit memory" / "# of circuits", but 99.x%
of memory going to one circuit feels wrong for a relay.
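
(Rough arithmetic to illustrate, using the numbers above: a strictly equal
split of the 1.5 GB default MaxMemInQueues across the ~39,546 circuits that
were open here would be roughly 1,599,323,088 / 39,546, i.e. about 40 KB per
circuit, which legitimate bursty circuits would easily exceed. So any fairness
rule would have to be a lot looser than a plain equal division.)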


Feb 12 13:58:34 tornode2 Tor[6362]: DoS mitigation since startup: 910770 
circuits rejected, 10 marked addresses. 25972 connections closed. 324 
single hop clients refused.
Feb 12 19:58:34 tornode2 Tor[6362]: DoS mitigation since startup: 
1222320 circuits rejected, 12 marked addresses. 33359 connections 
closed. 402 single hop clients refused.
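
(For completeness, and only as far as I understand the 0.3.3.x manual: these
defenses have torrc knobs of their own, along the lines of the sketch below,
though the defaults normally come from the consensus, so I haven't set any of
them myself and the values here are purely illustrative.)

  DoSCircuitCreationEnabled 1
  DoSConnectionEnabled 1
  DoSRefuseSingleHopClientRendezvous 1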


Thx,
Stijn


Re: [tor-relays] Publishing bridge contact information

2018-02-12 Thread entensaison

 
On Monday, 12 February 2018 at 12:33, nusenu wrote:

>>> If you block the ORPort, won't the reachability check fail?
>>
>> Fine question. At least this has been the case in the past, though I
>> know there was discussion and maybe development to overcome this
>> weakness. But even if it's not possible yet, having bridge contact
>> information would allow us in the _future_ to reach out to bridge
>> operators to inform them that they don't have to keep their OR port open
>> anymore, and maybe even shouldn't.
>
> https://trac.torproject.org/projects/tor/ticket/7349
> https://trac.torproject.org/projects/tor/ticket/17159

Sorry, I'm not sure where to ask these questions, but reading this thread I
realized that I misunderstood this howto:
https://trac.torproject.org/projects/tor/wiki/doc/PluggableTransports/obfs4proxy

Is it necessary for the ExtOrPort to be random?
Does the port change automatically?
Is it possible to specify the port?
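
For reference, these are the kind of torrc lines I mean; this is my own rough
sketch rather than a copy from the wiki, so the binary path and the port are
just placeholders:

  ExtORPort auto
  ServerTransportPlugin obfs4 exec /usr/bin/obfs4proxy
  ServerTransportListenAddr obfs4 0.0.0.0:9201

In other words: is it the ExtORPort that has to stay on "auto", or can that be
pinned too, like the ServerTransportListenAddr port?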

And how can the wiki pages be changed?

Thanks :)


Re: [tor-relays] Publishing bridge contact information

2018-02-12 Thread nusenu
>> If you block the ORPort, won't the reachability check fail?
> 
> Fine question. At least this has been the case in the past, though I
> know there was discussion and maybe development to overcome this
> weakness. But even if it's not possible yet, having bridge contact
> information would allow us in the _future_ to reach out to bridge
> operators to inform them that they don't have to keep their OR port open
> anymore, and maybe even shouldn't.
> 

https://trac.torproject.org/projects/tor/ticket/7349
https://trac.torproject.org/projects/tor/ticket/17159


-- 
https://mastodon.social/@nusenu
twitter: @nusenu_





Re: [tor-relays] Publishing bridge contact information

2018-02-12 Thread Karsten Loesing
On 2018-02-12 11:19, Alexander Dietrich wrote:
> On 2018-02-11 00:43, nusenu wrote:
> 
>> - we could tell operators that running obfs3 and obfs4 is a bad idea
> 
> Are you saying obfs3 and obfs4 shouldn't run simultaneously on the same
> bridge? That would be good to know indeed.

Quoting from the IMDEA paper that was mentioned later in this thread:

"The combination of PTs with different security properties raises
several security concerns, since the security of the bridge is only as
strong as its weakest link. First, an adversary detecting the weakest
transport and blocking the IP disables also stronger transports for
free, e.g., for the nearly 100 bridges that offer obfs3 or obfs4 in
combination with obfs2, which is deprecated and trivial to identify
through traffic analysis. Second, it allows an adversary to confirm a
bridge, even in presence of transports that implement replay protection.
For example, for the most popular combination obfs3+obfs4+ScrambleSuit,
offered by 524 bridges, an adversary can confirm a bridge, e.g.,
identified through traffic analysis [39], through a vertical scan using
obfs3 on the candidate IP address."

https://software.imdea.org/~juanca/papers/torbridges_ndss17.pdf

>> - we could tell operators that exposing their vanilla ORPort is a bad idea
> 
> If you block the ORPort, won't the reachability check fail?

Fine question. At least this has been the case in the past, though I
know there was discussion and maybe development to overcome this
weakness. But even if it's not possible yet, having bridge contact
information would allow us in the _future_ to reach out to bridge
operators to inform them that they don't have to keep their OR port open
anymore, and maybe even shouldn't.

I see how this doesn't fully answer your question. Maybe somebody else
knows more about the current state of things.

> Kind regards,
> Alexander

All the best,
Karsten





Re: [tor-relays] Publishing bridge contact information

2018-02-12 Thread Karsten Loesing
On 2018-02-12 11:39, nusenu wrote:
> Once the decision has been made to publish contactInfo, people with
> access to the current contactInfo (bridgeDB, isis?) should send current
> bridge operators a pre-notice about the upcoming change so they have a
> chance to react to it.
> 
> I assume you will not implement this change retroactively (only contactInfo
> going forward from a given date will be published, but not from past
> descriptors).

Fine question. I guess there's not much value in having contact
information for bridges running in the past, but only for current bridges.

We could pick a date like March 1 or April 1 and start including contact
information in descriptors published after that date.

Then it also makes sense to reach out to bridge operators with such an
announcement.

> We could also reach out to the IMDEA researchers who wrote the bridge paper
> "Dissecting Tor Bridges: a Security Evaluation of Their Private and Public
> Infrastructures", as they might have some additional ideas why this could be
> a bad idea.

Good idea, I'll do that now.

(Again, adding more context for this list: this is a different research group
than the one that disclosed a list of bridge IP addresses a couple of months
ago.)

All the best,
Karsten





Re: [tor-relays] Publishing bridge contact information

2018-02-12 Thread nusenu
Once the decision has been made to publish contactInfo, people with access to
the current contactInfo (bridgeDB, isis?) should send current bridge operators
a pre-notice about the upcoming change so they have a chance to react to it.

I assume you will not implement this change retroactively (only contactInfo
going forward from a given date will be published, but not from past
descriptors).
 
We could also reach out to the IMDEA researchers who wrote the bridge paper
"Dissecting Tor Bridges: a Security Evaluation of Their Private and Public
Infrastructures", as they might have some additional ideas why this could be a
bad idea.

-- 
https://mastodon.social/@nusenu
twitter: @nusenu_





Re: [tor-relays] Publishing bridge contact information

2018-02-12 Thread Karsten Loesing
On 2018-02-11 00:43, nusenu wrote:
>> Possible advantages are:
>>  - Relay Search would support searching for bridges by contact information.
>>  - People who keep a watching eye on the Tor network could reach out to
>> bridge operators to inform them that they're running an outdated tor/PT
>> version, or that running bridges and exits together is not cool.
> 
> some more come to mind:
> 
> - we could tell operators of obfs2 and obfs3 bridges that they would be much
> more useful if they ran the obfs4 PT (increasing the usefulness of current
> resources)
> 
> - we could tell operators that running obfs3 and obfs4 is a bad idea
> 
> - we could tell operators that exposing their vanilla ORPort is a bad idea

Yes, those all make sense. They're sort of variants of the second bullet
point above, so I think we should just combine them.

I'm summarizing advantages and disadvantages that we have so far below:

Possible advantages are:
 - Relay Search would support searching for bridges by contact information.
 - People who keep a watching eye on the Tor network could reach out to
bridge operators to inform them that they're running an outdated tor/PT
version, that running bridges and exits together is not cool, that they
might be better off running different PTs, or that running a PT together
with another PT or with an exposed vanilla OR port might be a bad idea.
 - If somebody ever revives OnionTip/TorTip, bridges could participate
and receive donations for running a bridge. Or t-shirts, who knows. Note
that I'm not promising either here, but without contact information,
neither would even be possible.
 - We will be able to analyze bridge shares and in particular bridge
operator diversity.

Possible disadvantages are:
 - If somebody runs a relay and a bridge, both with the same contact
information, a censoring adversary might guess that the bridge runs on an IP
address near the relay's. However, they could just as well assume that for all
relays and block or scan the IP space around all known relays.
 - Somebody might use an email address as bridge contact information
that can be linked to an IP address in public sources, e.g. mailing list
archives, forum postings, or whois information. If that IP address is
the same as, or near, a bridge IP address, then the bridge can be located
quite easily.
 - Bridge operators might be surprised to see their contact information
in a public archive. We do have a warning in the tor manual
https://www.torproject.org/docs/tor-manual.html.en#ContactInfo, but
maybe nobody reads the fine manual.

All the best,
Karsten





Re: [tor-relays] Publishing bridge contact information

2018-02-12 Thread Karsten Loesing
On 2018-02-08 19:48, to...@protonmail.com wrote:
> Whatever you decide, I think you should have this mentioned in the setup docs 
> for bridges.

We have the following explanation in the manual:

"Administrative contact information for this relay or bridge. This line
can be used to contact you if your relay or bridge is misconfigured or
something else goes wrong. Note that we archive and publish all
descriptors containing these lines and that Google indexes them, so
spammers might also collect them. You may want to obscure the fact that
it’s an email address and/or generate a new address for this purpose."

https://www.torproject.org/docs/tor-manual.html.en#ContactInfo
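
For bridge operators wondering what the obscured form hinted at above can look
like, a line along these lines (the name and address are just placeholders)
would do:

  ContactInfo Random Person <nobody AT example dot com>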

What other docs do you have in mind that we should change in case we
decide to publish bridge contact information?

All the best,
Karsten


