Re: [tor-relays] MetricsPort: tor_relay_connections_total type confusion

2022-10-28 Thread David Goulet
On 28 Oct (11:04:09), n...@danwin1210.de wrote:
> Hello David,
> 
> again, thanks for your work on adding more metrics to tor's MetricsPort!
> Many relay operators will love this and documentation will be useful [1].
> 
> I reported
> https://gitlab.torproject.org/tpo/core/tor/-/issues/40699
> which got closed yesterday, but there was likely a misunderstanding and
> the changes did not solve the underlying issue.
> 
> The core issue is: The metric called
> tor_relay_connections(_total) [2][3]
> contains a mix of different types (gauge, counter).

The patch I merged makes all the tor_relay_connections_total{} metrics
"gauges": some can go up and down and some might only go up, but I figured
even the ones that only accumulate can also be exposed as gauges.

Is that a problem from a Prometheus standpoint, to your knowledge?

Cheers!
David

> 
> When mixing types in a single metric, the TYPE definition will always be
> wrong for one or the other value.
> For example, Grafana will show this if you use a counter metric without
> rate():
> "Selected metric is a counter. Consider calculating rate of counter by
> adding rate()."
> 
> It is a best practice to avoid mixing different types in a single metric.
> From the prometheus best practices [4]:
> "As a rule of thumb, either the sum() or the avg() over all dimensions of
> a given metric should be meaningful (though not necessarily useful). If it
> is not meaningful, split the data up into multiple metrics. For example,
> having the capacity of various queues in one metric is good, while mixing
> the capacity of a queue with the current number of elements in the queue
> is not."
> 
> An idea to address the underlying issue: one connection metric for
> counters and one for gauges:
> 
> - tor_relay_connections_total for counters, like the current label
> state="created"
> - tor_relay_connections for gauge metrics, like the current label
> state="opened". "rejected" also appears to be a gauge metric.
> 
> Another nice feature of these metrics would be to have a label for what
> type of system is connecting (src="relay", src="non-relay") - more on that
> in yesterday's email.
> A tool by toralf [4] also shows these and uses the source IP but tor
> itself does not need to look at the source IP to determine the type,
> something discussed in last week's relay operator meetup.
> 
> best regards,
> nat
> 
> [1] https://gitlab.torproject.org/tpo/web/support/-/issues/312
> [2]
> https://gitlab.torproject.org/tpo/core/tor/-/commit/06a26f18727d3831339c138ccec07ea2f7935014
> [3]
> https://gitlab.torproject.org/tpo/core/tor/-/commit/6d40e980fb149549bbef5d9e80dbdf886d87d207
> [4] https://prometheus.io/docs/practices/naming/
> 
> 



Re: [tor-relays] relay memory leak?

2022-07-25 Thread David Goulet
On 25 Jul (19:31:16), Toralf Förster wrote:
> On 7/25/22 14:48, David Goulet wrote:
> >   It is usually set around 75% of your total memory
> 
> Is there's a max limit ?

Capped to "SIZE_MAX" which on 64 bit is gigantic, like around 18k Petabytes.
On Linux, we use /proc/meminfo (MemTotal line) and so whatever also max limit
the kernel would put for that.
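
For example, on a Linux box you can see the value tor will read like this
(the number is illustrative):

  $ grep MemTotal /proc/meminfo
  MemTotal:       16314336 kB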

Cheers!
David



Re: [tor-relays] relay memory leak?

2022-07-25 Thread David Goulet
On 22 Jul (23:28:51), Fran via tor-relays wrote:
> Hey,
> 
> new non-exit relay, Debian 11, tor 0.4.7.8-1~d11.bullseye+1, ~ 1 week old
> (-> no guard)
> 
> KVM VM with atm 4 cores, host passthrough AMD EPYC (-> AES HW accel.).
> 
> As can be seen at the attached screenshots memory consumption is irritating
> as well as the quite high CPU load.
> 
> All was fine when it had ~100 Mbit/s but then onion skins exploded (110 per
> second -> up to 4k per second) as well as CPU and memory.
> 
> 
> Tor complains:
> 
> > Your computer is too slow to handle this many circuit creation requests!
> Please consider using the MaxAdvertisedBandwidth config option or choosing a
> more restricted exit policy.
> 
> 
> And from time to time memory killer takes action
> 
> 
> torrc is pretty basic:
> 
> Nickname 123
> ContactInfo 123
> RunAsDaemon 1
> Log notice syslog
> RelayBandwidthRate 2X MBytes
> RelayBandwidthBurst 2X MBytes
> SocksPort 0
> ControlSocket 0
> CookieAuthentication 0
> AvoidDiskWrites 1
> Address  
> OutboundBindAddress 
> ORPort :yyy
> Address  []
> OutboundBindAddress [zzz]
> ORPort [zzz]:xxx
> MetricsPort :sss
> MetricsPortPolicy accept fff
> DirPort yy
> Sandbox 1
> NoExec 1
> CellStatistics 1
> ExtraInfoStatistics 1
> ConnDirectionStatistics 1
> EntryStatistics 1
> ExitPortStatistics 1
> HiddenServiceStatistics 1
> 
> 
> Ideas/suggestions (apart from limiting BW) to fix this?

We are currently seeing huge memory pressure on relays. So far I have been
unsuccessful at finding any kind of memory leak, so there is a distinct
possibility that relays are legitimately accumulating that memory somehow. We
are still heavily investigating all this and coming up with ways to reduce the
footprint.

In the meantime, we know that the "CellStatistics" option is very memory
hungry, so you could disable that one and see if it stabilizes things for
you.

The other option that usually helps with memory pressure is "MaxMemInQueues"
(man 1 tor). Essentially, it tells tor when to start running its out-of-memory
(OOM) handler. It defaults to around 75% of your total memory, but you could
reduce it and see if this helps.

I would, however, in the current network conditions, really NOT put it below
2GB. And if the OOM handler gets triggered too many times and you can spare
the memory, bump it up to 4GB at the very least.
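
As a sketch, the two changes together would look like this in torrc (the 4 GB
value is only an example; adjust it to your available RAM):

  CellStatistics 0
  MaxMemInQueues 4 GB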

The current network conditions are abnormal and, often coupled with other
things, create resource pressure on relays that we rarely experience, so our
team has to look for a needle in a haystack when it happens.

Thanks for the report! And thanks to all who help us through these difficult
times for our relays and users.

Cheers!
David



Re: [tor-relays] tor prometheus metrics -> expected label value, got "INVALID"

2022-01-24 Thread David Goulet
On 23 Jan (11:36:05), Fran via tor-relays wrote:
> Hej,
> 
> taking another look and comparing it to onion_services exporter there is a
> slight difference in the metrics output.
> 
> prometheus-onion-service-exporter:
> 
> onion_service_up{address="foobar:587",name="mail_v3_587",type="tcp"} 1
> 
> vs output from tor:
> 
> tor_hs_app_write_bytes_total{onion=foobar,port=80} 298891

Ah wow... you are indeed right.

I had problems recently with an exporter I made for something unrelated that
was not exporting labels as quoted strings...
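
For reference, the Prometheus exposition format wants label values as
double-quoted strings, so the line above should come out as:

  tor_hs_app_write_bytes_total{onion="foobar",port="80"} 298891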

I have opened a ticket:

https://gitlab.torproject.org/tpo/core/tor/-/issues/40552

Thanks for reporting and investigating!! :)

Cheers!
David



Re: [tor-relays] cases where relay overload can be a false positive

2022-01-12 Thread David Goulet
On 01 Jan (21:12:38), s7r wrote:
> Hello,

Hi s7r!

Sorry for the delay, some vacationing happened for most of us eheh :).

> 
> One of my relays (guard, not exit) started to report being overloaded one
> week ago, for the first time in its life.
> 
> The consensus weight and advertised bandwidth are proper as per what they
> should be, considering the relay's configuration. More than this, they have
> not changed for years. So, I started to look at it more closely.
> 
> Apparently the overload is triggered every 5-6 days by flooding it with
> circuit creation requests. All I can see in tor.log is:
> 
> [warn] Your computer is too slow to handle this many circuit creation
> requests! Please consider using the MaxAdvertisedBandwidth config option or
> choosing a more restricted exit policy. [68382 similar message(s) suppressed
> in last 482700 seconds]
> 
> [warn] Your computer is too slow to handle this many circuit creation
> requests! Please consider using the MaxAdvertisedBandwidth config option or
> choosing a more restricted exit policy. [7882 similar message(s) suppressed
> in last 60 seconds]
> 
> This message is logged some 4-6 times, with a 1 minute (60 sec) difference
> between each warn entry.
> 
> After that, the relay is back to normal. So it feels like it is being probed
> or something like this. CPU usage is at 65%, RAM is at under 45%, SSD no
> problem, bandwidth no problem.

Very plausible theory. Especially in the context of such a "burst" of
traffic, we can rule out that your relay has all of a sudden become the
facebook.onion guard.

> 
> Metrics port says:
> 
> tor_relay_load_tcp_exhaustion_total 0
> 
> tor_relay_load_onionskins_total{type="tap",action="processed"} 52073
> tor_relay_load_onionskins_total{type="tap",action="dropped"} 0
> tor_relay_load_onionskins_total{type="fast",action="processed"} 0
> tor_relay_load_onionskins_total{type="fast",action="dropped"} 0
> tor_relay_load_onionskins_total{type="ntor",action="processed"} 8069522
> tor_relay_load_onionskins_total{type="ntor",action="dropped"} 273275
> 
> So if we compare the dropped ntor circuits with the processed ntor circuits
> we end up with a reasonable % (it's >8 million vs <300k).

Yeah, this is a ~3.38% drop rate, which immediately triggers the overload signal.
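
If you scrape the MetricsPort into Prometheus, a query along these lines (a
sketch) tracks that drop fraction over time:

  rate(tor_relay_load_onionskins_total{type="ntor",action="dropped"}[1h])
    / ignoring(action)
  rate(tor_relay_load_onionskins_total{type="ntor",action="processed"}[1h])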

> 
> So the question here is: does the computed consensus weight of a relay
> change if that relay keeps sending reports to directory authorities that it
> is being overloaded? If yes, could this be triggered by an attacker, in
> order to arbitrary decrease a relay's consensus weight even when it's not
> really overloaded (to maybe increase the consensus weights of other
> malicious relays that we don't know about)?

Correct, this is a possibility indeed. I'm not entirely certain that this is
the case at the moment as sbws (bandwidth authority software) might not be
downgrading the bandwidth weights just yet.

But regardless, the point is that this is where we are heading. We have
control over this, so now is a good time to notice these problems and act.

I'll try to get back to you asap after talking with the network team.

> 
> Also, as a side note, I think that if the dropped/processed ratio is not
> over 15% or 20% a relay should not consider itself overloaded. Would this be
> a good idea?

Plausible that it could be a better idea! It is unclear what the optimal
percentage is but, personally, I'm leaning towards a higher threshold so the
signal is not triggered in normal circumstances.

But I think if we raise this to, let's say, 20%, it might not stop an
attacker from triggering it; it might just take a bit longer.

Thanks for your feedback! We'll get back to this thread asap.

David



Re: [tor-relays] General overload -> DNS timeouts

2021-12-09 Thread David Goulet
On 18 Nov (10:01:09), Arlen Yaroslav via tor-relays wrote:
> > Some folks might consider switching to non-exit nodes to just get rid of
> >
> > the overload message. Please bear with us while we are debugging the
> >
> > problem and don't do that. :) We'll keep this list in the loop.
> 
> The undocumented configuration option 'OverloadStatistics' can be used to
> disable the reporting of an overloaded state. E.g. place the following in
> your torrc:
> 
> OverloadStatistics 0
> 
> May be worth considering until the reporting feature becomes a bit more
> mature and the issues around DNS resolution become a bit clearer.

Greetings everyone!

We wanted to follow up with all of you on this. It has been a while, but we
finally got to the bottom of the problem.

We made this ticket public which is where we pulled together the information
we had from Exit operators helping us in private:

https://gitlab.torproject.org/tpo/network-health/team/-/issues/139

You can find here the summary of the problem:
https://gitlab.torproject.org/tpo/network-health/team/-/issues/139#note_2764965

The gist is that tor imposes a 5 second timeout, basically dictating that
libevent give up on the DNS resolution after 5 seconds. It will do that 3
times before an error is returned to tor, so a resolution can take up to 15
seconds before failing.

That very error is a "DNS TIMEOUT" which is what we expose on the MetricsPort
and also use for the overload general indicator.

The problem lies with that very error. It is in fact _not_ a "real" DNS
timeout but rather just "took too long for the parameters I have". So these
timeouts should be seen more as a "UX issue" than a "network issue".

For that reason, we will remove the DNS timeout from the overload general
indicator, and we will also rename the "dns timeout" metric on the MetricsPort
to something more meaningful.

Operators can still use the DNS metrics to monitor the health of their DNS by
looking at all the other possible errors, especially "serverfailed".

Finally, we will most likely also bring the tor DNS timeout down from 5
seconds to 1 second in order to improve UX:

https://gitlab.torproject.org/tpo/core/tor/-/issues/40312

We will likely fix this in the current 0.4.7.x development version and
backport it to the 0.4.6 stable series. The release timeline is still to come,
but we hope as soon as possible.
possible.

Thanks everyone for your help, feedback and patience with this problem! In
particular, thanks a lot to Anders Trier for their help and for providing us
with an Exit relay we could experiment with, and to toralf for providing so
much useful information from their relays.

Cheers!
David



Re: [tor-relays] Compression bomb from dizum

2021-11-15 Thread David Goulet
On 06 Nov (21:39:44), Logforme wrote:
> Got the following in my log today:
> Nov 06 18:19:01.000 [warn] Possible compression bomb; abandoning stream.
> Nov 06 18:19:01.000 [warn] Unable to decompress HTTP body (tried Zstandard
> compressed, on Directory connection (client reading) with 45.66.33.45:80).
> 45.66.33.45 is tor.dizum.com, a Tor directory authority.
> 
> False positive or a problem generating directory info at dizum?

I would think false positive here, considering that it comes from "dizum".

Let's keep an eye out for more, though.

Thanks!
David



Re: [tor-relays] Crashed Relay

2021-11-10 Thread David Goulet
On 02 Nov (18:20:31), sysmanager7 via tor-relays wrote:
> I was notified by Uptime Robot that relay 1 of 3 had been shut down. 
> Unfortunately I found this out 12 hours after the fact.
> guard flag is lost.
> 
> journalctl -u tor@default
> 
> Nov 01 01:03:18 ubuntu-s-1vcpu-2gb-nyc1-01 Tor[328387]: I learned some more 
> directory information, but not enough to build a circuit: We need more 
> microdescriptors: we have 6178/6618, and can only build 49% of likely paths. 
> (We have 99% of guards bw, 50% of midpoint bw, and 98% of exit bw = 49% of 
> path bw.)
> [...] received 4371.61 GB. I've received 7345604 connections on IPv4 and 0 
> on IPv6. I've made 677780 connections with IPv4 and 0 with IPv6.
> Nov 01 06:47:26 ubuntu-s-1vcpu-2gb-nyc1-01 Tor[328387]: Interrupt: we have 
> stopped accepting new connections, and will shut down in 30 seconds. 
> Interrupt again to exit now.
> Nov 01 06:47:26 ubuntu-s-1vcpu-2gb-nyc1-01 Tor[328387]: Delaying directory 
> fetches: We are hibernating or shutting down.
> Nov 01 06:47:56 ubuntu-s-1vcpu-2gb-nyc1-01 Tor[328387]: Clean shutdown 
> finished. Exiting.
> Nov 01 06:48:04 ubuntu-s-1vcpu-2gb-nyc1-01 systemd[1]: tor@default.service: 
> Succeeded.
> Nov 01 06:48:04 ubuntu-s-1vcpu-2gb-nyc1-01 systemd[1]: Stopped Anonymizing 
> overlay network for TCP.

So this log indicates that your "tor" received a SIGINT and did a clean
shutdown.

Not sure why that happened but it happened... :S

David



Re: [tor-relays] known issues with deb.torproject.org repos?

2021-11-02 Thread David Goulet
On 01 Nov (21:44:53), nusenu wrote:
> 
> 
> David Goulet:
> > On 29 Oct (00:51:15), nusenu wrote:
> > > Hi,
> > > 
> > > 
> > > are there known issues with the nightly master debian package builds?
> > > https://deb.torproject.org/torproject.org/dists/
> > 
> > Not to our knowledge?
> 
> I'm using nightly-master in ansible-relayor CI runs
> and noticed the version says currently [1]
> 
> 0.4.7.1-alpha-dev-20210921T011045Z-1~d11.bullseye+1
> 
> [1] 
> https://deb.torproject.org/torproject.org/dists/tor-nightly-master-bullseye/main/binary-amd64/Packages

This is likely due to
https://gitlab.torproject.org/tpo/core/tor/-/issues/40505, which is blocking
the Debian packages for now and thus why you are still on 0.4.7.1-alpha-dev.

David



Re: [tor-relays] known issues with deb.torproject.org repos?

2021-11-01 Thread David Goulet
On 29 Oct (00:51:15), nusenu wrote:
> Hi,
> 
> 
> are there known issues with the nightly master debian package builds?
> https://deb.torproject.org/torproject.org/dists/

Not to our knowledge?

> 
> and a related question:
> Will the stable packages remain on the 0.4.5.x LTS branch until
> the next LTS branch or will it at some point move to the latest stable again? 
> (as it used to be)
> https://gitlab.torproject.org/tpo/core/team/-/wikis/NetworkTeam/CoreTorReleases#current

Usually, the latest Tor stable is the Debian unstable version, except in some
circumstances, for instance when Debian is in freeze.

So ultimately, that stable should become 0.4.6.x at some point; I can't tell
you exactly when, as that depends on our Debian packager.

Cheers!
David



Re: [tor-relays] Overloaded state indicator on relay-search

2021-10-18 Thread David Goulet
On 17 Oct (13:54:22), Arlen Yaroslav via tor-relays wrote:
> Hi,

Hi Arlen!

> 
> I've done some further analysis on this. The reason my relay is being marked
> as overloaded is because of DNS timeout errors. I had to dive into the
> source code to figure this out.
> 
> In dns.c, a libevent DNS_ERR_TIMEOUT is being recorded as an
> OVERLOAD_GENERAL error. Am I correct in saying that a single DNS timeout
> error within a 72-hour period will result in an overloaded state? If so, it
> seems overly-stringent given that there are no options available to tune the
> DNS timeout, max retry etc. parameters. Some lower-specced servers with less
> than optimal access to DNS resolvers will suffer because of this.

Correct, 1 single DNS timeout will trigger the general overload flag. There
were discussions about requiring N% of all requests to time out before we
report it, with N being around 1%, but unfortunately it was never implemented
that way. So, at the moment, 1 timeout is enough to trigger the flag.

And I think you are right, we would benefit from raising that threshold big
time.

> 
> Also, I was wondering why these timeouts were not being recorded in the
> Metrics output. I've done some digging and I believe there is a bug in the
> evdns_callback() function. The rep_hist_note_dns_error() is being called as
> follows:
> 
> rep_hist_note_dns_error(type, result);
> 
> but I've noticed the 'type' being set to zero whenever libevent returns a
> DNS error which means the correct dns_stats_t structure is never found, as
> zero is outside the expected range of values (DNS_IPv4_A, DNS_PTR,
> DNS_IPv6_). Adding the BUG assertion confirms this.
> 
> Please let me know if I should raise this in the bug tracker or if you need
> anything else.

This is an _excellent_ find!

I have opened:

https://gitlab.torproject.org/tpo/core/tor/-/issues/40490

We'll likely attempt to submit a patch to libevent and then fix that in tor.
Until this is fixed in libevent and the entire network can migrate (which can
take years...), we'll have to live with DNS errors _not_ being per-type on the
MetricsPort, likely going from:

tor_relay_exit_dns_error_total{record="A",reason="timeout"} 0
...

to a line without a "record" because we can't tell:

tor_relay_exit_dns_error_total{reason="timeout"} 0

Note that for a successful request (reason="success") we can tell the record
type, but because of that bug we cannot for errors.

To everyone: expect that API breakage on the MetricsPort in the next 0.4.7.x
version and, evidently, when the stable comes out.

Big thanks for this find!

Cheers!
David



Re: [tor-relays] Overloaded state indicator on relay-search

2021-10-04 Thread David Goulet
On 02 Oct (01:29:56), torix via tor-relays wrote:
> My relays (Aramis) marked overloaded don't make any sense either.  Two of
> the ones marked with orange are the two with the lowest traffic I have (2-5
> MiB/s and 4-9 MiB/s - not pushing any limits here); the third one with that
> host has more traffic and is fine.
> 
> So far this indicator seems to be no help to me.

Keep in mind that the overload state might not only be about traffic
capacity. As this page states, there are other factors, including CPU and
memory pressure:

https://support.torproject.org/relay-operators/relay-bridge-overloaded/

We are in a continuous process of making it better with feedback from the
relay community. It is a hard problem because so many things can change or
influence the signal, and the variety of OSes also makes it challenging.

Another thing to remember here: the overload state will be set for 72 hours
even if only a SINGLE overload event occurred.

For more details:
https://lists.torproject.org/pipermail/tor-relays/2021-September/019844.html

(FYI, we are in the process of adding this information in the support page ^).

If you can't find anything sticking out, that is OK; you can move on and see
if the flag continues to stick. If so, maybe it's worth digging more, and once
0.4.7 is stable you'll be able to enable the MetricsPort (man tor) to go a bit
deeper down the rabbit hole.
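
As a preview, enabling it looks something like this in torrc (the port is an
example; keep the policy restricted to hosts you trust):

  MetricsPort 127.0.0.1:9035
  MetricsPortPolicy accept 127.0.0.1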

Cheers!
David



Re: [tor-relays] Overloaded state indicator on relay-search

2021-10-01 Thread David Goulet
On 01 Oct (03:08:20), Andreas Kempe wrote:
> Hello David!
> 
> On Mon, Sep 27, 2021 at 08:22:08AM -0400, David Goulet wrote:
> > On 24 Sep (12:36:17), li...@for-privacy.net wrote:
> > > On Thursday, September 23, 2021 3:39:08 PM CEST Silvia/Hiro wrote:
> > > 
> > > > When a relay is in the overloaded state we show an amber dot next to the
> > > > relay nickname.
> > > Nice thing. This flag has noticed me a few days ago.
> > > 
> > > > If you noticed your relay is overloaded, please check the following
> > > > support article to find out how you can recover to a "normal" state:
> > > > 
> > > > 
> > > > https://support.torproject.org/relay-operators/relay-bridge-overloaded/
> > > 
> > > A question about Enabling Metricsport. Is definitely Prometheus necessary?
> > > Or can the Metrics Port write into a LogFile / TextFile?
> > 
> > The output format is Prometheus but you don't need a Prometheus server to
> > get it.
> > 
> > Once opened, you can simply fetch it like this:
> > 
> >   wget http://<address>:<port>/metrics -O /tmp/output.txt
> > 
> >   or
> > 
> >   curl http://<address>:<port>/metrics -o /tmp/output.txt
> > 
> 
> I've only ever gotten an empty reply when trying to extract metrics
> and I still do on Tor 0.4.6.7. I have some vague memory of the metrics
> previously only being for hidden services.
> 
> All the metrics mentioned in the overload guide[1] seem to be missing.
> Having a quick look at the Git repository, it seems that only
> 0.4.7.1-alpha and latest master contain the necessary changes for these
> metrics.
> 
> Is my assumption correct or am I doing something wrong?

Correct, relay metrics are only available on >= 0.4.7.1-alpha. Hopefully, we
should have a 0.4.7 stable by the end of the year (or around that time).

David



Re: [tor-relays] Overloaded state indicator on relay-search

2021-09-28 Thread David Goulet
On 27 Sep (14:23:34), Gary C. New via tor-relays wrote:

>  George,
> The referenced support article provides recommendations as to what might be
> causing the overloaded state, but it doesn't provide the metric(s) by which
> Tor decides whether a relay is overloaded. I'm trying to ascertain the
> latter. I would assume the overloaded state metric(s) is/are a maximum
> timeout value and/or reoccurrence value, etc. By knowing what the overloaded
> state metric is, I can tune my Tor relay to stay just short of it. Thank you
> for your reply. Respectfully,

Hi Gary!

I'll try to answer as best I can from what we have worked on for these
overload metrics.

Essentially, there are a few places within a Tor relay where we can easily
notice an "overloaded" state. I'll list them and tell you how we decide:

1. Out-Of-Memory invocation

  Tor has its own OOM handler and it is invoked when 75% of the total memory
  tor thinks it can use is reached. Thus, let's say tor thinks it can use 2GB
  in total; then at 1.5GB of memory usage, it will start freeing memory. That
  is considered an overload state.

  Now the real question here is what is the memory "tor thinks" it has.
  Unfortunately, it is not the greatest estimation but that is what it is.
  When tor starts, it will use MaxMemInQueues for that value or will look at
  the total RAM available on the system and apply this algorithm:

if RAM >= 8GB {
  memory = RAM * 40%
} else {
  memory = RAM * 75%
}
/* Capped. */
memory = min(memory, 8GB)   /* the cap is 8GB on 64-bit and 2GB on 32-bit */
/* Minimum value. */
memory = max(250MB, memory)
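
  For instance, working through that algorithm (the OOM handler then kicks in
  at 75% of the result):

RAM = 16GB -> 16GB * 40% = 6.4GB -> OOM handler starts at ~4.8GB
RAM =  4GB ->  4GB * 75% = 3GB   -> OOM handler starts at ~2.25GB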

  Why we picked those numbers, I can't tell you; they come from the very
  early days of the tor software.

  And so to avoid such an overload state, running a relay with more than 2GB
  of RAM on 64-bit should clearly be the bare minimum in my opinion; 4GB
  would be much, much better. In DDoS circumstances, there is a whole lot of
  memory pressure.

  A keen observer will notice that this approach also has the problem that it
  doesn't shield tor from being killed by the OS OOM killer itself. Because
  we take the total memory on the system when tor starts, if the overall
  system has many other applications using RAM, we end up eating too much
  memory and the OS could OOM-kill tor without tor even noticing memory
  pressure. Fortunately, this is not a problem affecting the overload status
  situation.

2. Onionskins processing

  Tor is sadly single threaded _except_ for when the "onion skins" are
  processed, that is, the cryptographic work that needs to be done on the
  famous "onion layers" of every circuit.

  For that we have a thread pool and outsource all of that work to that pool.
  It can happen that this pool starts dropping work due to back pressure and
  that in turn is an overload state.

  Why can this happen? Essentially, CPU pressure. If your server is running
  at capacity, and not only with your tor, then this is likely to trigger.

3. DNS Timeout

  This applies only to Exits. If tor starts noticing DNS timeouts, you'll get
  the overload flag. This might not be because your relay is overloaded in
  terms of resources but it signals a problem on the network.

  And DNS timeouts at the Exits are a _huge_ UX problem for tor users, so
  Exit operators really need to stay on top of them to help. The overload
  line doesn't make the cause clear, but at least if an operator notices it,
  they can then investigate DNS timeouts in case there is no resource
  pressure.

4. TCP port exhaustion

  This should be extremely rare, though. The idea here is that you ran out of
  TCP ports, a range that on Linux is usually 32768-60999, so having that
  many connections would lead to the overload state.

  However, I think (I might be wrong) that nowadays this range is per source
  IP rather than process wide, so someone would likely have to deliberately
  put your relay in that state.
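
  On Linux you can check (and, if need be, widen) that ephemeral port range;
  for example:

$ sysctl net.ipv4.ip_local_port_range
net.ipv4.ip_local_port_range = 32768    60999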


There are two other overload lines that tor relays report:
"overload-ratelimits" and "overload-fd-exhausted", but they are not yet used
for the overload status on Metrics. You can find them in your relay
descriptor[0] if you are curious.

They are about your relay reaching its global connection limit too often and
your relay running out of file descriptors.

Hope this helps! Overall, as you can see, a lot of factors can influence
these metrics, so the truly ideal situation for a tor relay is to run alone
on a fairly good machine. Any kind of pullback from a tor relay, like being
overloaded, has cascading effects on the network, both in terms of UX and in
terms of load balancing, which tor is not yet very good at (but we are
working hard on making it much better!!).

Cheers!
David

[0]

https://collector.torproject.org/recent/relay-descriptors/server-descriptors/?C=M;O=D


Re: [tor-relays] Overloaded state indicator on relay-search

2021-09-27 Thread David Goulet
On 24 Sep (12:36:17), li...@for-privacy.net wrote:
> On Thursday, September 23, 2021 3:39:08 PM CEST Silvia/Hiro wrote:
> 
> > When a relay is in the overloaded state we show an amber dot next to the
> > relay nickname.
> Nice thing. This flag caught my attention a few days ago.
> 
> > If you noticed your relay is overloaded, please check the following
> > support article to find out how you can recover to a "normal" state:
> > 
> > https://support.torproject.org/relay-operators/relay-bridge-overloaded/
> 
> A question about enabling the MetricsPort: is Prometheus definitely
> necessary? Or can the MetricsPort write into a log file / text file?

The output format is Prometheus, but you don't need a Prometheus server to
get it.

Once opened, you can simply fetch it like this:

  wget http://<address>:<port>/metrics -O /tmp/output.txt

  or

  curl http://<address>:<port>/metrics -o /tmp/output.txt

Cheers!
David



Re: [tor-relays] FallbackDirectoryMirrors relay IP change

2021-08-04 Thread David Goulet
On 02 Aug (20:55:25), Cristian Consonni wrote:
> Hi,
> 
> After 7+ years of running a relay on DigitalOcean, I have decided to
> move it somewhere else, as there are cheaper options.
> 
> I kept the same keys and fingerprint and it seems that it has been
> picked up correctly on Atlas/Tor metrics, as it is showing with the new IP.
> 
> I believe that this relay was listed as a fallback directory mirror -
> how can I check this?

You should have the "Fallbackdir" flag on Metrics website.

> 
> According to the doc[1] I should open a ticket to signal the change or
> write here. The link[2] for opening the ticket seems broken, so here I am.

We have changed how fallback directories are chosen and sadly failed to
update the GitLab wiki (we'll get on that).

It doesn't require intervention from operators anymore as we pick them
randomly from the list of relays that match a series of criteria (uptime,
stability, ...).

And that list is re-generated at every new Tor stable release.

Thanks for running a relay!
David



Re: [tor-relays] Fallback Directories - Upcoming Change

2021-04-08 Thread David Goulet
On 07 Apr (21:43:50), Toralf Förster wrote:
> On 4/7/21 9:04 PM, David Goulet wrote:
> > Over time, we will remove or add more relays at each minor release if the 
> > set
> > of fallback directories not working reaches a 25% threshold or more.
> 
> In the past a fallback dir volunteer committed himself to have address
> and port stable for 2 years.
> 
> If a relay is now removed from the fallback directory list - how long
> shall the Tor relay operator wait before he can change the address
> and/or port?

There are no such requirements anymore. By selecting Stable relays that have
been around for a while, the theory is that the majority of the selected list
will be stable, as in keeping the same address and port.

I do hope that overall most relays in the network do not change port/address
often, as relay stability is pivotal to a healthy network. But if they do, our
monitoring of the fallback directories will trigger an alert once too many
relays stop working as fallbacks, and we'll issue a new list.

> Technically speaking, how long does it usually take until a Tor (Browser)
> version containing the new fallback dirs is mostly spread out?

It is quite fast on the TB side; see the updated graph:

https://metrics.torproject.org/webstats-tb.html

Cheers!
David



[tor-relays] Fallback Directories - Upcoming Change

2021-04-07 Thread David Goulet
Greetings!

This is to announce that the Tor Project network team will soon change how
fallback directories are selected as we are about to update that list.

As a reminder, here is the fallback directory definition from the tor man
page:

  When tor is unable to connect to any directory cache for directory info
  (usually because it doesn’t know about any yet) it tries a hard-coded
  directory. Relays try one directory authority at a time.  Clients try
  multiple directory authorities and FallbackDirs, to avoid hangs on startup
  if a hard-coded directory is down. Clients wait for a few seconds between
  each attempt, and retry FallbackDirs more often than directory authorities,
  to reduce the load on the directory authorities

Previously, we would ask this list for volunteers, then start a lengthy,
weeks-long process of adding/removing volunteers from an official list and
making sure the volunteered relays matched a certain set of criteria.

This process was very time consuming for our small team, so we are making a
change to be more efficient and avoid slipping on the updates like we did in
the previous versions.

We will now select at random, as fallback directories, relays that match
these requirements:

  - Fast flag

"Fast" -- A router is 'Fast' if it is active, and its bandwidth is either
in the top 7/8ths for known active routers or at least 100KB/s.

  - Stable flag

"Stable" -- A router is 'Stable' if it is active, and either its Weighted
 MTBF is at least the median for known active routers or its Weighted MTBF
 corresponds to at least 7 days.

  - DirCache is set to 1 (default)

  - Has been around for at least 90 days.

The above corresponds to more than 4000 relays in the current consensus, from
which we'll randomly pick 200 at each release, so we get regular rotation and
shift the load across the entire network over time.

Over time, we will remove or add relays at each minor release if the share of
fallback directories not working reaches a 25% threshold or more.

Finally, it is very possible that we'll change those requirements over time as
we assess this change. We'll do our best to inform this list in time.

Thanks!
Tor Network Team



Re: [tor-relays] Tor 0.4.5.6 and missing IPv6

2021-03-24 Thread David Goulet
On 11 Mar (19:16:50), s7r wrote:
> On 3/10/2021 5:31 PM, William Kane wrote:
> > Hi,
> > 
> > manually specify IP and port, and set the IPv4Only flag for both
> > ORPort and DirPort.
> > 
> > Reference: https://2019.www.torproject.org/docs/tor-manual.html.en
> > 
> > William
> > 
> 
> I think he has a dynamic IP address which is why, according to the pasted
> torrc, the used config is:
> 
> `Address my.dynamic.dns.url`
> 
> This is a perfectly normal and accepted use-case. I think this is why it
> complained about not being able to find an IPv6 address: because it was
> resolving a hostname instead of parsing an IP address, and most probably it
> did not find any AAAA record.
> 
> Anyway, IPv4Only in the ORPort line is the right config for this use-case
> (where you don't have any IPv6 thus don't want to use IPv6 auto-discovery),
> as David said, most annoying bugs regarding IPv6 auto-discovery were fixed.
> So the suggestion to use IPv4Only in torrc line is not a workaround a bug or
> misbehavior or something, it is a corrected configuration parameter (as per
> manual instructs) and should stay like this even in 0.4.5.7 of course.
> 
> I only find this log message unclear:
> 
> [notice] Unable to find IPv6 address for ORPort 9001. You might want to
> specify IPv6Only to it or set an explicit address or set Address.
> 
> I think it means "... You might want to specify *IPv4Only* to it or set it
> to an explicit address or set configure Address."

Oops... this fell off my radar, misplaced in my Inbox :(

Yes, and this was also fixed in 0.4.5.7, where we flipped the "IPv*Only"
option, so in that case it should have read "IPv4Only".

David



Re: [tor-relays] Active MetricsPort logs "Address already in use"

2021-03-24 Thread David Goulet
On 23 Mar (23:18:32), Alexander Dietrich wrote:
> > On 22.03.2021 at 13:24, David Goulet wrote:
> > 
> > > Sending GET requests to the address returns empty responses.
> > 
> > You should be able to get the metrics with a GET on /metrics.
> > 
> > Let us know if this works for you!
> 
> The empty 200 response is returned from "/metrics", I guess due to the 
> "address already in use" problem. Requests to "/" return a 404.

At the moment, the only metrics exported are those of onion services. We
still need to implement exporting relay and client metrics.

If you set up an onion service, you should get more stuff :) otherwise we
have a bug!

Cheers!
David



Re: [tor-relays] Active MetricsPort logs "Address already in use"

2021-03-22 Thread David Goulet
On 19 Mar (21:11:25), Alexander Dietrich wrote:
> Hello,
> 
> when I activate the "MetricsPort" feature, the Tor log reports that it is
> going to open the port, then it says "Address already in use". According to
> "netstat", the address is indeed in use, but by "tor".

Thanks for this report! I'll open a ticket about this "already in use".

> Sending GET requests to the address returns empty responses.

You should be able to get the metrics with a GET on /metrics.

Let us know if this works for you!

David



Re: [tor-relays] Tor 0.4.5.6 and missing IPv6

2021-03-10 Thread David Goulet
On 10 Mar (15:31:37), William Kane wrote:
> Hi,
> 
> manually specify IP and port, and set the IPv4Only flag for both
> ORPort and DirPort.
> 
> Reference: https://2019.www.torproject.org/docs/tor-manual.html.en

Yes good advice.

Sorry about this. We believe we fixed most of the issues regarding IPv6
address discovery in 0.4.5, and so the next stable release, 0.4.5.7 (next
week around Tuesday), will have those fixes and thus should be better at
address detection.

Thanks for reporting and sticking with us with these very annoying bugs.

Cheers!
David



Re: [tor-relays] IPv6 auto-discovery vs. privacy extensions

2021-03-01 Thread David Goulet
On 25 Feb (23:20:04), Onion Operator wrote:
> 
> > On 25/02/2021 at 14:19, David Goulet wrote:
> > 
> >  
> > On 24 Feb (11:08:15), Onion Operator wrote:
> > > Saluton,
> > > 
> > > My relay started to log this message since 0.4.5.5:
> > > 
> > > Auto-discovered IPv6 address [...]:443 has not been found reachable. 
> > > However, IPv4 address is reachable. Publishing server descriptor without 
> > > IPv6 address. [2 similar message(s) suppressed in last 2400 seconds]
> > > 
> > > I think it started with the introduction of IPv6 auto-discovery.
> > > 
> > > The problem, as I understand it, is that my relay has IPv6 privacy
> > > extensions enabled and therefore the IPv6 detection logic gets
> > > fooled. Indeed the IPv6 I see in the logs is one of the temporary
> > > addresses used as client towards other relays.
> > > 
> > > Relevant config is:
> > > 
> > > ORPort 443 IPv4Only
> > > ORPort [...]:443 IPv6Only
> > > 
> > > I added the IPv{4,6}Only options only in searching a solution to this
> > > problem, before 0.4.5.5 the IPv6 relay worked perfectly without.
> > > 
> > > In reading the documentation of AddressDisableIPv6 I got the
> > > impression that if (any?) ORPort is configured with IPv4Only the
> > > IPv6 auto-discovery gets disabled but evidence does not support my
> > > understanding. Is it a bug?
> > > 
> > > Any other way to disable IPv6 auto-discovery?
> > 
> > "AddressDisableIPv6 1" should do it.
> 
> Isn't this going to completely disable IPv6?

Correct.

> 
> > 
> > Also, "ORPort 443 IPv4Only" _only_ should also not make your tor 
> > auto-discover
> > IPv6 at all. If it does, we have a bug! Sending us debug logs (even in 
> > private
> > to my address) would be helpful in that case.
> 
> I suspect we are in this case.

Any logs you can send towards me would be grand. Thanks!

> 
> > 
> > The last option is to "pin" an IPv6 by using either "Address" or directly
> > in the ORPort with "ORPort IP:PORT".
> 
> The man page does not mention IPv6 in the description of "Address" and about
> pinning the IPv6 address in the ORPort, I think it's what I'm already doing
> (the [...] in the second ORPort above is indeed the IPv6 address) or not?

Indeed. I will update the manpage for "Address" to mention IPv6.

You can now use *two* Address statements, one for each IP type (v4 and v6),
if you want, and tor will figure it out (correctly, hopefully).

David




Re: [tor-relays] IPv6 auto-discovery vs. privacy extensions

2021-02-25 Thread David Goulet
On 24 Feb (11:08:15), Onion Operator wrote:
> Saluton,
> 
> My relay started to log this message since 0.4.5.5:
> 
> Auto-discovered IPv6 address [...]:443 has not been found reachable. However, 
> IPv4 address is reachable. Publishing server descriptor without IPv6 address. 
> [2 similar message(s) suppressed in last 2400 seconds]
> 
> I think it started with the introduction of IPv6 auto-discovery.
> 
> The problem, as I understand it, is that my relay has IPv6 privacy
> extensions enabled and therefore the IPv6 detection logic gets
> fooled. Indeed the IPv6 I see in the logs is one of the temporary
> addresses used as client towards other relays.
> 
> Relevant config is:
> 
> ORPort 443 IPv4Only
> ORPort [...]:443 IPv6Only
> 
> I added the IPv{4,6}Only options only in searching a solution to this
> problem, before 0.4.5.5 the IPv6 relay worked perfectly without.
> 
> In reading the documentation of AddressDisableIPv6 I got the
> impression that if (any?) ORPort is configured with IPv4Only the
> IPv6 auto-discovery gets disabled but evidence does not support my
> understanding. Is it a bug?
> 
> Any other way to disable IPv6 auto-discovery?

"AddressDisableIPv6 1" should do it.

Also, "ORPort 443 IPv4Only" _only_ should also not make your tor auto-discover
IPv6 at all. If it does, we have a bug! Sending us debug logs (even in private
to my address) would be helpful in that case.

The last option is to "pin" an IPv6 address by using either "Address" or the
address directly in the ORPort line with "ORPort IP:PORT".
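
For example, with a documentation address standing in for your real one:

  ORPort 443 IPv4Only
  ORPort [2001:db8::5]:443 IPv6Only

With the IPv6 pinned like this, the auto-discovery (and thus the
privacy-extension addresses) should not come into play.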

Thanks!
David



Re: [tor-relays] IPv6

2021-02-25 Thread David Goulet
On 24 Feb (12:02:11), Dr Gerard Bulger wrote:
> Thinking of IPv6:
> 
> How far has the team got in implementing IPv6 only OR port facility ?

As of tor 0.4.5.x release, IPv6 is fully supported for tor clients and relays.

> 
> Currently you can only run tor relay of any sort if there is open IPv4 OR
> port to the internet.  This is getting a bit quaint.

That is one piece of it. We still require an IPv4 address, as in a relay
cannot run with *only* an IPv6 address at the moment.

One of the properties the network should have (even though it is not always
true) is that every relay should be able to talk to every other relay. Thus,
if we had IPv4-only relays that could not talk to IPv6-only relays, we would
partition the network, and that is no good.

> 
> I am sure I am not alone in having much wasted bandwidth that could be put
> to good Tor use but they are only accessible via IPv6, while they can exit
> of course IPv4 and IPv6
> 
> I realise that so far, despite IPv6 being open on my main exit for some
> years, there is still little IPv6 traffic, but that might suddenly change.

As the network migrates to tor >= 0.4.5.x, inter-relay communication will
start to ramp up on IPv6.

Cheers!
David



Re: [tor-relays] metrics

2021-02-21 Thread David Goulet
On 20 Feb (11:52:33), Manager wrote:
>Hello,
> 
>im trying to enable prometheus metrics, and... something goes wrong:
> 
>torrc:
>MetricsPort 9166
>MetricsPortPolicy accept *
> 
>after tor restart in logs:
>Tor[15368]: Opening Metrics listener on 127.0.0.1:9166
>Tor[15368]: Could not bind to 127.0.0.1:9166: Address already in use. Is
>Tor already running?
> 
>-- before restart, no one listen on this port, as `ss | grep :9166` can
>say.
> 
>there is also backtrace in logs:
>Tor[15368]: connection_finished_flushing(): Bug: got unexpected conn type
>20. (on Tor 0.4.5.6 )
>Tor[15368]: tor_bug_occurred_(): Bug:
>../src/core/mainloop/connection.c:5192: connection_finished_flushing: This
>line should not have been reached. (Future instances of this warning will
>be silenced.) (on Tor 0.4.5.6 )

This was reported 3 days ago:

https://gitlab.torproject.org/tpo/core/tor/-/issues/40295

We pushed a fix upstream, and it will be in the next tor stable release,
0.4.5.7. As for the timeline of that release, it's unclear, but I will make a
point to the network team to make it sooner than usual, because this problem
effectively makes the MetricsPort unusable :S.

Sorry about this. Thanks for the report!!!

David




Re: [tor-relays] Tor relay failed to start after upgrade (0.4.4.6 --> 0.4.5.5) [solved]

2021-02-09 Thread David Goulet
On 09 Feb (11:55:12), tscha...@posteo.de wrote:
> Tonight my Tor relay on Raspberry Pi (buster) was upgraded from
> 0.4.4.6-1~bpo10+1 to 0.4.5.5-rc-1~bpo10+1 automaticly, but failed to start:
> 
> ---
> [notice] Opening OR listener on 0.0.0.0:587
> [notice] Opened OR listener connection (ready) on 0.0.0.0:587
> [notice] Opening OR listener on [::]:587
> [warn] Socket creation failed: Address family not supported by protocol
> [notice] Opening Directory listener on 0.0.0.0:995
> [notice] Closing partially-constructed OR listener connection (ready) on
> 0.0.0.0:587
> [notice] Closing partially-constructed Directory listener connection (ready)
> on 0.0.0.0:995
> ---
> 
> Until then there was no change in my torrc:
> 
>   ORPort 587
>   DIRPORT 995

Oh wow, interesting: so your Pi doesn't support IPv6 for some reason, and tor
failed to start with this config.

I have opened: https://gitlab.torproject.org/tpo/core/tor/-/issues/40283

In 0.4.5.x, things have changed: tor will try to auto-discover an IPv6
address and automatically bind to it if only "ORPort <port>" is used.

In other words, it now binds to 0.0.0.0 and [::] by default. It then
publishes any discovered _usable_ and _reachable_ IPv4/IPv6 address. If it
doesn't find any IPv6, it will still listen but won't publish any IPv6.

> 
> It seems the value [address:] is no longer optional if you're not IPv6 ready
> [1]:
> 
>   ORPort [address:]PORT|auto [flags]
>   DirPort [address:]PORT|auto [flags]
> 
> So I have to change my torrc to:
> 
>   ORPort 0.0.0.0:587
>   DIRPORT 0.0.0.0:995

Another option would have been to add the "IPv4Only" flag to the ORPort line,
which should have done it for you.
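
Something like this, keeping your ports (a sketch; the flag simply tells tor
not to try IPv6 on that port):

  ORPort 587 IPv4Only
  DirPort 995 IPv4Only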

Cheers!
David



Re: [tor-relays] growing guard probability on exits (2020-10-15)

2020-10-16 Thread David Goulet
On 16 Oct (10:49:43), nusenu wrote:
> lets see when this graph stops growing
> https://cryptpad.fr/code/#/2/code/view/1uaA141Mzk91n1EL5w0AGM7zucwFGsLWzt-EsXKzNnE/present/

To help you out here for this line:

"2020-10-15 ?? first Tor dir auths change KISTSchedRunInterval from 10 to 2"

These are the 3 authorities that notified us with the change along with the
most accurate timestamp I have timestamp:

longclaw -> Oct 14 at 16:05:08 UTC
moria1   -> Oct 14 before 16:00 UTC
  (exact consensus time is unknown, would need to dig in the votes but Roger
  said it was changed on moria "earlier today" that is before this time.)
bastet   -> Oct 15 at 15:26:47 UTC

Three are needed for a consensus on parameters, so the Oct 15th 16:00 UTC
consensus was the first one to see the change.

Keep in mind that it would take at maximum ~2h for ALL relays to get that
change.

> 
> 
> why is this relevant?
> It puts more entities into an end-to-end correlation position than there used 
> to be
> https://nusenu.github.io/OrNetStats/#tor-relay-operators-in-end-to-end-correlation-position
> 
> and it might also decrease exit traffic on exits when a tor client chooses an 
> exit as guard

It was pointed out by Jaym on IRC, notice here a bump in Exit capacity around
mid September:

  
http://rougmnvswfsmd4dq.onion/bandwidth-flags.html?start=2020-08-18&end=2020-10-16

That could likely be a reason for this sudden change in probabilities.

Now, _maybe_ the KIST change, which in theory increases bandwidth throughput,
allowed those Exits to push more traffic and thus might have helped increase
that Guard+Exit probability we are seeing in your graph.

Let's keep a close eye on your graph!

Thanks!
David



Re: [tor-relays] Network Performance Experiment - KISTSchedRunInterval - October 2020

2020-10-16 Thread David Goulet
On 15 Oct (19:26:09), Roger Dingledine wrote:
> On Thu, Oct 15, 2020 at 11:40:34PM +0200, nusenu wrote:
> > since it is in effect by now 
> > https://consensus-health.torproject.org/#consensusparams
> > could you publish the exact timestamp when it came into effect?
> 
> One can learn this from the recent consensus documents, e.g. at
> https://collector.torproject.org/recent/relay-descriptors/consensuses/
> 
> And I agree that we should have a central experiment page (e.g. on gitlab)
> that lists the experiments, when we ran them, when the network changes
> occurred, what we expected to find, and what we *did* find.
> 
> David or Mike, can you make sure that page happens?

This is the page with what we planned to work on:

https://gitlab.torproject.org/tpo/core/team/-/wikis/NetworkTeam/Sponsor61/PerformanceExperiments

We are still very early in the KIST experiment here, so we will update the
page with the latest today or very soon, once we start to see/measure the
effects.

David



Re: [tor-relays] Network Performance Experiment - KISTSchedRunInterval - October 2020

2020-10-15 Thread David Goulet
On 15 Oct (23:40:34), nusenu wrote:
> >   KISTSchedRunInterval=2
> > 
> > We are still missing 1 authority to enable this param for it to take effect
> > network wide. Hopefully, it should be today in the coming hours/day.
> 
> since it is in effect by now 
> https://consensus-health.torproject.org/#consensusparams
> could you publish the exact timestamp when it came into effect?

The consensus on October 15th at 16:00 UTC was the first one that was voted
with the change.

> I noticed some unusual things today (exits having a non-zero guard 
> probability),
> did you change more parameters than this one or was this the only one?

We did not.

That's worth looking into!?

David



[tor-relays] Network Performance Experiment - KISTSchedRunInterval - October 2020

2020-10-15 Thread David Goulet
Greetings relay operators!

Tor has now embarked on a 2-year-long scalability project aimed, in part, at
improving the network's performance.

The first steps will be to measure performance on the public network in order
to come up with a baseline. We'll likely be adjusting circuit window size,
cell scheduling (KIST) and circuit build timeout (CBT) parameters over the
next months in both contained experiments and on the live network.

This announcement is about KIST parameters, our cell scheduler.

Roughly a year ago, we discovered that all tor clients are capped at ~3MB/sec
of maximum outbound throughput due to how the scheduler operates. I won't get
into the details, but if you are curious, it is here:

  https://gitlab.torproject.org/tpo/core/tor/-/issues/29427

It turns out that we now believe the entire network, not only clients, is
actually capped at 3MB/sec per channel (a channel is a connection between
client -> relay or relay -> relay, also called an OR connection).

We've recently conducted experiments with chutney [1], which operates on the
loopback interface, and we indeed hit those limits.

KIST has a parameter named KISTSchedRunInterval which is currently set at 10
msec, and that is our culprit. By lowering it to 2 msec, our experiment showed
that the cap goes from 3MB/sec to ~5MB/sec, with bursts a bit higher.

Now, the question is: why was it set to 10 msec in the first place? Again,
without getting into the technical details of the KIST paper[2], our cell
scheduler requires a "grace period" in order to be able to accumulate cells
and then prioritize over many circuits using an EWMA algorithm which tor has
been using for a long time now. Without this, one could clog the pipes (at the
TCP level) with a very loud transfer by always being scheduled and filling the
TCP buffers, leaving nothing for the quieter circuits.

It is important to note that the goal of EWMA in tor is to prioritize quiet
circuits: for example, an SSH session will be prioritized over a bulk HTTP
transfer. This is so likely-interactive connections are not delayed and stay snappy.

But lowering this to 2 msec means less time to accumulate and, in theory,
worse cell prioritization.

However, we think this will not be a problem because we believe the network is
underloaded. And, because of this 3MB/sec cap per channel, tor is sending
bursts of cells instead of a constant stream of cells, and thus relays are
processing less than they possibly could. Again, all this is in theory.

All in all, going to 2 msec should improve speed at the very least and not make
the network worse.

We want to test and measure that for a couple of weeks, then transition to a
higher value, repeating until we get back to 10 msec, so we can properly
compare the effect on EWMA priority and performance.

One possible transition schedule is 2 msec, 5 msec, then 10 msec.

Yesterday, a request was made to our 9 directory authorities to set this
consensus parameter:

  KISTSchedRunInterval=2

We are still missing 1 authority enabling this param for it to take effect
network wide. Hopefully, that should happen in the coming hours or day.
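
Once the parameter is in, you can verify it in the consensus your own relay
fetched. A quick check (a sketch, assuming the Debian default DataDirectory of
/var/lib/tor; adjust the path to your setup):

  grep "^params" /var/lib/tor/cached-consensus | tr ' ' '\n' | grep KISTSchedRunInterval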

This is where we need your help. We would like you to notify us on this thread
about any noticeable changes in CPU, RAM, or BW usage. In other words,
anything that deviates from the "average" you've been seeing is worth
reporting.

We do NOT expect big changes for your relay(s), but there could reasonably be a
change in bandwidth throughput, and thus some of you could see a traffic
increase; how much is unclear at the moment.

Huge thanks to everyone here! We will carefully monitor this change and if
things go bad, we'll revert it as fast as we can! Thus, your help becomes
extremely important!

Cheers!
David

[1] https://git.torproject.org/chutney.git/
[2] https://arxiv.org/pdf/1709.01044.pdf


-- 
7h1/NAPdaaGpI8WG6X4FtryAZZ4EhnznUVVLqIf/04A=


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] Call for Testing - New Feature: Relay IPv6 Address Discovery

2020-08-26 Thread David Goulet
On 22 Jul (15:54:54), David Goulet wrote:
> Greetings everyone!
> 
> We've very recently merged upstream (tor.git) full IPv6 support, which
> implies many things. We are still finalizing the work but most of it is in at
> the moment.

Greetings everyone!

Thanks everyone that is helping testing this!

If you are still running a 0.4.5.0-alpha version (or git master), we would
like to ask you one more thing: please flip this torrc option to 1 so our
metrics team can build proper graphs to learn how many IPv6 connections we
have in the network.

  ConnDirectionStatistics 1
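
After adding it, you can reload the running tor without a full restart, e.g.
(a sketch, assuming a Debian/systemd setup or a single manually-run tor):

  systemctl reload tor          # on systemd setups
  kill -HUP "$(pidof tor)"      # or, for a manually-run tor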

Big thanks everyone!
David

-- 
C/P9tCyvJP95vPJI5IUPHZQ9JQz0ffWlbT270iRUOW0=


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] Become a Fallback Directory Mirror (deadline: July 23)

2020-07-24 Thread David Goulet
On 24 Jul (13:30:31), David Goulet wrote:
> 
> The new list has been generated and can be found here:

Apology, clarification needs to be made:

> 
> https://gitlab.torproject.org/tpo/core/fallback-scripts/-/blob/master/fallback_offer_list
> Diff: 
> https://gitlab.torproject.org/tpo/core/fallback-scripts/-/commit/0aa75bf9eaa39c55074ffaa32845b7399466798a

The offer list ^ contains all the relays that have offered to help since the
dawn of time.

> In tor binary:
> https://gitlab.torproject.org/tpo/core/tor/-/blob/master/src/app/config/fallback_dirs.inc

The generated list that ended up in the binary. Thus this list needs to be
reviewed for accuracy if you ended up in it :).

Sorry for the possible confusion!

David

-- 
bmDVAsOgbs4iVQEG/fcnbYlepXyGLt6h/BHi0kQ5sJg=


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] Become a Fallback Directory Mirror (deadline: July 23)

2020-07-24 Thread David Goulet
On 08 Jul (13:36:57), gus wrote:
> Dear Relay Operators,
> 
> Do you want your relay to be a Tor fallback directory mirror? 
> Will it have the same address and port for the next 2 years? 

Greetings operators!

First off, we got 225 relays to volunteer for this round which is incredible
so thank you so much to all of you!

The new list has been generated and can be found here:

https://gitlab.torproject.org/tpo/core/fallback-scripts/-/blob/master/fallback_offer_list
Diff: 
https://gitlab.torproject.org/tpo/core/fallback-scripts/-/commit/0aa75bf9eaa39c55074ffaa32845b7399466798a
In tor binary:
https://gitlab.torproject.org/tpo/core/tor/-/blob/master/src/app/config/fallback_dirs.inc

This new list will take effect once we release these new stable and alpha
versions:

  - 0.3.5.12
  - 0.4.1.10
  - 0.4.2.9
  - 0.4.3.7
  - 0.4.4.3-alpha

There are many requirements to fulfill to be in this list, so even if you
volunteered your relay, you might not end up in the list. Don't worry: the
more volunteers we have, the better we can ensure stability, so you already
helped greatly!

We'll be tracking any changes to this list here:

https://gitlab.torproject.org/tpo/core/fallback-scripts/-/issues/40001

Please let us know if you need to change/remove your relay from the current
list. Mistakes could have been made, so validation would be welcome!

We'll make a general call around May 2021 to rebuild the list with new
volunteers unless we go below 25% unreachable relays.

Thanks a lot everyone!
David

-- 
bmDVAsOgbs4iVQEG/fcnbYlepXyGLt6h/BHi0kQ5sJg=


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


[tor-relays] Call for Testing - New Feature: Relay IPv6 Address Discovery

2020-07-22 Thread David Goulet
Greetings everyone!

We've very recently merged upstream (tor.git) full IPv6 support, which implies
many things. We are still finalizing the work but most of it is in at the
moment.

This is a call for help if anyone would like to run either git master[1] or
the nightly builds[2] (Debian only) to test a specific feature for us.

The feature we would love for some of you to test is the IPv6 address
discovery. In short, with this new feature, specifying an ORPort without an
address will automatically bind tor to [::]:<port> and attempt to find the
IPv6 address by looking at (in this order):

  1. "Address" from torrc
  2. "ORPort address:port" from torrc
  3. Interface address. First public IPv6 is used.
  4. Local hostname, DNS AAAA query.

If all of these fail, the relay will simply never publish an IPv6 address in
the descriptor, but it will work properly with IPv4 (still mandatory).

The other new thing is that tor now supports *two* "Address" statements, each
of which can be a hostname, an IPv4 address, or an IPv6 address.

Thus this is now valid:

  Address 1.2.3.4
  Address [4242::4242]
  ORPort 9001

Your Tor will bind to 0.0.0.0:9001 and [::]:9001 but will publish 1.2.3.4 as
the IPv4 address and [4242::4242] as the IPv6 address in the descriptor; those
are the addresses used to reach your relay's ORPort.

Now, if you happen to have this configuration which I believe might be common
at the moment:

  ORPort 9001
  ORPort [4242::4242]:9001

The second ORPort line, which specifies an IPv6 address, will supersede the
"ORPort 9001" line's [::], and thus you will bind on 0.0.0.0:9001 and
[4242::4242]:9001. You should get a notice log about this.

Thus the recommended configuration to avoid that log notice would be to bind
to specific addresses per family:

  ORPort <IPv4 address>:9001
  ORPort <IPv6 address>:9001

And of course, if you want your relay to _not_ listen on IPv6:

  ORPort 9001 IPv4Only

In your notice log, you will see which address is used to bind the ORPort,
and then whether the reachability test succeeds on the address that tor
either took from the configuration or auto-discovered, that is, the address
you are supposedly reachable from.

Man page has NOT been updated yet, it will arrive once we stabilize the IPv6
feature and everything around it.

Please do report (on this thread) _anything_ even slightly annoying about
this, like logging or lack of logging and so on. This is a complex feature and
errors can be made, so any testing you can offer is extremely appreciated.

Thanks!!
David

[1] https://gitweb.torproject.org/tor.git/
[2] https://2019.www.torproject.org/docs/debian.html.en

-- 
EeJVrrC/dHQXEXYB1ShOOZ4QuQ8PMnRY2XGq4BYsFq4=


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] Error from LZMA encoder: Unable to allocate memory; NULL stream while compressing

2020-07-20 Thread David Goulet
On 19 Jul (00:16:45), nottryingtobel...@protonmail.com wrote:
> My bridge was running fine, then started throwing the same error over and 
> over. See my last two days of logs here: https://pastebin.com/7FNXC6PZ. 
> Function doesn't seem to be affected as the heartbeats still show users.

Did you run out of memory (RAM) by any chance?

David

> 
> Can or should I do anything about this?

> ___
> tor-relays mailing list
> tor-relays@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


-- 
co2zeI8pm9RUDvGEdjybXSi1uTeVIrZ26+eLOtYRo+o=


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] tor_bug_occured_(): This line should not have been reached

2019-07-08 Thread David Goulet
On 07 Jul (01:16:07), Michael Gerstacker wrote:
> Hey guys,
> 
> my relay
> 79E683340B80676DCEB029E48FBD36BC66EBDA1E
> told me:
> 
> 
> Jul 06 15:22:34.000 [notice] DoS mitigation since startup: 0 circuits
> killed with too many cells. 150515 circuits rejected, 16 marked addresses.
> 0 connections closed. 104 single hop clients refused.
> 
> Jul 06 16:23:25.000 [warn] tor_bug_occurred_(): Bug:
> ../src/core/or/channeltls.c:: channel_tls_handle_cell: This line should
> not have been reached. (Future instances of this warning will be silenced.)
> (on Tor 0.3.5.8 )
> 
> Jul 06 16:23:25.000 [warn] Bug: Line unexpectedly reached at
> channel_tls_handle_cell at ../src/core/or/channeltls.c:. Stack trace:
> (on Tor 0.3.5.8 )
> 
> Jul 06 16:23:25.000 [warn] Bug: /usr/bin/tor(log_backtrace_impl+0x46)
> [0x562903c4faa6] (on Tor 0.3.5.8 )
> 
> Jul 06 16:23:25.000 [warn] Bug: /usr/bin/tor(tor_bug_occurred_+0xc0)
> [0x562903c4b1c0] (on Tor 0.3.5.8 )
> 
> Jul 06 16:23:25.000 [warn] Bug:
> /usr/bin/tor(channel_tls_handle_cell+0x49a) [0x562903ad571a] (on Tor
> 0.3.5.8 )
> 
> Jul 06 16:23:25.000 [warn] Bug: /usr/bin/tor(+0x9a16a) [0x562903af916a]
> (on Tor 0.3.5.8 )
> 
> Jul 06 16:23:25.000 [warn] Bug:
> /usr/bin/tor(connection_handle_read+0x99d) [0x562903ac318d] (on Tor 0.3.5.8
> )
> 
> Jul 06 16:23:25.000 [warn] Bug: /usr/bin/tor(+0x69ebe) [0x562903ac8ebe]
> (on Tor 0.3.5.8 )
> 
> Jul 06 16:23:25.000 [warn] Bug:
> /usr/lib/x86_64-linux-gnu/libevent-2.1.so.6(+0x229ba) [0x7f469cdbf9ba] (on
> Tor 0.3.5.8 )
> 
> Jul 06 16:23:25.000 [warn] Bug:
> /usr/lib/x86_64-linux-gnu/libevent-2.1.so.6(event_base_loop+0x5a7)
> [0x7f469cdc0537] (on Tor 0.3.5.8 )
> 
> Jul 06 16:23:25.000 [warn] Bug: /usr/bin/tor(do_main_loop+0xb0)
> [0x562903aca290] (on Tor 0.3.5.8 )
> 
> Jul 06 16:23:25.000 [warn] Bug: /usr/bin/tor(tor_run_main+0x10e5)
> [0x562903ab80d5] (on Tor 0.3.5.8 )
> 
> Jul 06 16:23:25.000 [warn] Bug: /usr/bin/tor(tor_main+0x3a)
> [0x562903ab530a] (on Tor 0.3.5.8 )
> 
> Jul 06 16:23:25.000 [warn] Bug: /usr/bin/tor(main+0x19)
> [0x562903ab4ec9] (on Tor 0.3.5.8 )
> 
> Jul 06 16:23:25.000 [warn] Bug:
> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xeb) [0x7f469c6a109b]
> (on Tor 0.3.5.8 )
> 
> Jul 06 16:23:25.000 [warn] Bug: /usr/bin/tor(_start+0x2a)
> [0x562903ab4f1a] (on Tor 0.3.5.8 )
> 
> Jul 06 21:22:34.000 [notice] Heartbeat: Tor's uptime is 12 days 6:00 hours,
> with 5113 circuits open. I've sent 2802.42 GB and received 2781.10 GB.
> 
> Jul 06 21:22:34.000 [notice] Circuit handshake stats since last time:
> 13801/13801 TAP, 169020/169020 NTor.
> 
> 
> It's anyway a little bit strange because this is the only relay I have which
> has never deleted its logfiles since I set it up, and where I cannot change
> the language of the operating system.
> I think it's related to the host system but I never really thought about it
> because it seems to work fine.
> 
> Something i should do now?

Nothing much to do here. Fortunately, the relay recovers after that.

I've created this ticket about this issue:
https://trac.torproject.org/projects/tor/ticket/31107

I haven't had time to look into this just yet. However, thanks for the report!

Cheers!
David

-- 
fE4jqrcBXUKBv6iyHwOJ9X/7UU0f02CVPufALtF1L74=


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] Tor memory leak (0.3.5.7)?

2019-06-25 Thread David Goulet
On 24 Jun (01:49:18), starlight.201...@binnacle.cx wrote:
> Hi,
> 
> No leaks.  The VM has 1GB, very light for moderate busy relay.
> 
> Should be 4GB, perhaps 3GB would do.  Settings should include:
> 
> AvoidDiskWrites 1
> DisableAllSwap 1
> MaxMemInQueues 1024MB# perhaps 1536MB
> 
> My relays run about 800MB to 1.2GB.  A nice way to view memory
> usage is with 
> 
>egrep '^Vm' /proc/$(cat /var/run/tor.pid)/status
> 
> which might show something like
> 
>VmPeak:   871696 kB
>VmSize:   710152 kB
>VmLck:710148 kB
>VmHWM:810872 kB
>VmRSS:649328 kB
>VmData:   492052 kB
>VmStk:93 kB
>VmExe:  4704 kB
>VmLib:104724 kB
>VmPTE:  1044 kB
> 
> Notice that current VmSize usage is less than VmPeak,
> indicating that memory was release by glibc malloc.
> 
> Tor consumes substantial skbuf memory in the kernel, which
> accounts for some of the difference in reported size for
> VmRSS and VmSize and total memory consumption.

What OS is this running on?

Can you possibly send back a SIGUSR1 log?

Kernel memory leak means we are buffer bloating it with cells... which we
aren't supposed to do these days :S ...

Thanks!
David

-- 
mtl83aVORGbFMy0uOJnLcNQG64cDwnBUp5hK73imIrw=


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] exit relay "internal error"

2019-06-18 Thread David Goulet
On 18 Jun (12:51:45), dns1...@riseup.net wrote:
> Hello,

Hi!

> 
> I have an exit relay on a remote Debian VM. Yesterday, after I installed the
> latest Linux security update, I rebooted it, and then I had a problem with
> additional IPs that were no longer being assigned. In order to understand if
> it was a problem caused by the last upgrade, I restored a previous backup
> that I made in May with tar.
> 
> I didn't solve the problem by restoring the backup, so I opened a ticket with
> my provider. Now they have fixed this issue, but my tor instance is going in
> a loop. The log says:

Thanks for the report!

This is the first time we've seen this. We've opened
https://trac.torproject.org/projects/tor/ticket/30916 about this.

We hope to get this fixed soon.

Huge thanks for taking the time to report this! :)

Cheers!
David

> 
> ...
> 
> Jun 18 13:33:31.000 [notice] Bootstrapped 0%: Starting
> Jun 18 13:33:32.000 [notice] Starting with guard context "default"
> 
>  T= 1560854012
> INTERNAL ERROR: Raw assertion failed at ../src/lib/ctime/di_ops.c:179: !
> old_val/usr/bin/tor(dump_stack_symbols_to_error_fds+0x33)[0x55a17b410943]
> /usr/bin/tor(tor_raw_assertion_failed_msg_+0x86)[0x55a17b410fd6]
> /usr/bin/tor(dimap_add_entry+0xa0)[0x55a17b411ba0]
> /usr/bin/tor(construct_ntor_key_map+0x69)[0x55a17b357969]
> /usr/bin/tor(server_onion_keys_new+0x4d)[0x55a17b39f4dd]
> /usr/bin/tor(+0x66e27)[0x55a17b287e27]
> /usr/bin/tor(threadpool_new+0x18b)[0x55a17b3b3f0b]
> /usr/bin/tor(cpu_init+0x9d)[0x55a17b28828d]
> /usr/bin/tor(run_tor_main_loop+0x136)[0x55a17b27a496]
> /usr/bin/tor(tor_run_main+0x1215)[0x55a17b27b935]
> /usr/bin/tor(tor_main+0x3a)[0x55a17b278a8a]
> /usr/bin/tor(main+0x19)[0x55a17b278609]
> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf1)[0x7ffb901ba2e1]
> /usr/bin/tor(_start+0x2a)[0x55a17b27865a]
> Jun 18 13:33:33.000 [notice] Tor 0.3.5.8 opening log file.
> 
> ...
> 
> What could be the problem?
> 
> thanks
> 
> Gigi
> 
> 
> ___
> tor-relays mailing list
> tor-relays@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays

-- 
iWmpKF5Ma8MfvMetoDN1BUw5ttwe8pkvgxAvuLMu008=


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] SENDME CELL

2019-04-30 Thread David Goulet
Sorry for the very bad Subject line. Thank you Mutt :).

David

On 30 Apr (15:59:23), David Goulet wrote:
> On 30 Apr (12:06:31), marziyeh latifi wrote:
> > Hello,
> > I have two questions about SENDME cell in TOR:
> > Is SENDME a Control cell or a Relay cell?
> 
> A SENDME cell is a control cell. See:
> 
> https://gitweb.torproject.org/torspec.git/tree/tor-spec.txt#n1531
> 
> However, SENDME also exists at the stream level, which affects only a
> specific stream within the circuit; in that case the StreamID will be
> non-zero, which explains the "sometimes" in the spec.
> 
> > Which node does interpret SENDME cells? The node that receives them or the
> > edge node (client/exit)?
> 
> SENDMEs are always end-to-end so only the end points interpret them.
> 
> For example, when a client starts a download of a large file from an Exit
> node, the Exit will send some data (up to what we call the circuit window
> size) and wait for a SENDME cell _from_ the client to send more. In order
> word, the client sends the SENDME towards the Exit to tell it "please send me
> more data".
> 
> Now, if the client uploads data to the Exit, this is reversed: the Exit
> will send a SENDME when it is ready to receive more data.
> 
> All this is managed by the package and deliver windows maintained at each
> end point. See Section 7 of torspec.txt for more details.
> 
> Cheers!
> David
> 
> -- 
> 71kN/ro+ccyP6zH5RukUX1TNXn7KjZ+E8ffp3xaYOzg=



> ___
> tor-relays mailing list
> tor-relays@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


-- 
71kN/ro+ccyP6zH5RukUX1TNXn7KjZ+E8ffp3xaYOzg=


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] dgoulet - Stockholm 2019 expense request

2019-04-30 Thread David Goulet
On 30 Apr (12:06:31), marziyeh latifi wrote:
> Hello,
> I have two questions about SENDME cell in TOR:
> Is SENDME a Control cell or a Relay cell?

A SENDME cell is a control cell. See:

https://gitweb.torproject.org/torspec.git/tree/tor-spec.txt#n1531

However, SENDME also exists at the stream level, which affects only a specific
stream within the circuit; in that case the StreamID will be non-zero, which
explains the "sometimes" in the spec.

> Which node does interpret SENDME cells? The node that receives them or the
> edge node (client/exit)?

SENDMEs are always end-to-end so only the end points interpret them.

For example, when a client starts a download of a large file from an Exit
node, the Exit will send some data (up to what we call the circuit window
size) and wait for a SENDME cell _from_ the client to send more. In other
words, the client sends the SENDME towards the Exit to tell it "please send me
more data".

Now, if the client uploads data to the Exit, this is reversed: the Exit will
send a SENDME when it is ready to receive more data.

All this is managed by the package and deliver windows maintained at each
end point. See Section 7 of torspec.txt for more details.
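
To make that concrete with the default values from section 7 of torspec.txt:
the circuit-level package window starts at 1000 cells, so the Exit can have at
most 1000 data cells in flight on the circuit. Every 100 cells the client
delivers, it sends one circuit-level SENDME, which adds 100 back to the Exit's
window; if the window reaches 0, the Exit stops sending until a SENDME
arrives. Stream-level windows work the same way, starting at 500 with an
increment of 50.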

Cheers!
David

-- 
71kN/ro+ccyP6zH5RukUX1TNXn7KjZ+E8ffp3xaYOzg=


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] BadExit why?

2019-02-12 Thread David Goulet
On 12 Feb (21:30:10), Olaf Grimm wrote:
> Hello !
> 
> I have been provisioning a new exit for two hours. It is a totally new relay
> in a VM. My other relays at the same provider are ok. Why do I see "BadExit"
> in Nyx??? Now my first bad experience with my 11 relays...
> 
> fingerprint: CCDC4A28392C7448A34E98DF872213BC16DB27CD
> Nickname Hydra10

This relay is not yet on Relay Search:

http://rougmnvswfsmd4dq.onion/rs.html#search/CCDC4A28392C7448A34E98DF872213BC16DB27CD

I'm guessing it is quite new.

That fingerprint is *not* set as a BadExit so this means you might have gotten
the IP address of an old BadExit.

Can you share the address so I can look it up?

Thanks!
David

> 
> At all exits I have the same firewall rules and torrc configs:
> 
> ufw status
> Status: active
> 
> To Action  From
> -- --  
> 22/tcp ALLOW   Anywhere 
> 9001/tcp   ALLOW   Anywhere 
> 9030/tcp   ALLOW   Anywhere 
> 80/tcp ALLOW   Anywhere 
> 443/tcp    ALLOW   Anywhere 
> 1194/tcp   ALLOW   Anywhere 
> 53/tcp ALLOW   Anywhere 
> 53/udp ALLOW   Anywhere 
> 1194/udp   ALLOW   Anywhere 
> 22/tcp (v6)    ALLOW   Anywhere (v6)
> 9001/tcp (v6)  ALLOW   Anywhere (v6)
> 9030/tcp (v6)  ALLOW   Anywhere (v6)
> 80/tcp (v6)    ALLOW   Anywhere (v6)
> 443/tcp (v6)   ALLOW   Anywhere (v6)
> 1194/tcp (v6)  ALLOW   Anywhere (v6)
> 53/tcp (v6)    ALLOW   Anywhere (v6)
> 53/udp (v6)    ALLOW   Anywhere (v6)
> 1194/udp (v6)  ALLOW   Anywhere (v6)
> 
> Please take a look what happens.
> 
> Olaf
> ___
> tor-relays mailing list
> tor-relays@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays

-- 
m14bORVXHT2lvx+QXt1QVjXPHX/hSBZzykB2ifCZFh0=


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] Tor RAM usage (DoS or memory leaks?) - Flood of circuits

2019-02-02 Thread David Goulet
On 02 Feb (04:10:02), petra...@protonmail.ch wrote:

Hi, thanks for the report!

> 
> There is something really strange going on indeed. What I noticed is an 
> increase of circuits and my device running out of memory until it stopped 
> working so I had to reboot it on 31. Jan. Then again the memory usage 
> increased until it leveled out at a rather unusual, high usage. The actual 
> bandwidth usage is not unusual though (always around 2Mbps on my relay).
> 
> Attached a screenshot of my memory usage the last few days (I hope 
> attachments do work here; it's in fact Tor using that memory as could be 
> checked with ps and htop).
> 
> Heartbeat messages of the log are:
> 
> Jan 31 10:57:56.000 [notice] Heartbeat: Tor's uptime is 6:00 hours, with 447 
> circuits open. I've sent 2.66 GB and received 2.65 GB.
> Jan 31 16:57:56.000 [notice] Heartbeat: Tor's uptime is 12:00 hours, with 
> 19764 circuits open. I've sent 9.59 GB and received 9.54 GB.
> Jan 31 22:57:56.000 [notice] Heartbeat: Tor's uptime is 18:00 hours, with 
> 54178 circuits open. I've sent 12.36 GB and received 12.30 GB.
> Feb 01 04:57:56.000 [notice] Heartbeat: Tor's uptime is 23:50 hours, with 
> 79333 circuits open. I've sent 14.89 GB and received 14.81 GB.
> Feb 01 10:57:56.000 [notice] Heartbeat: Tor's uptime is 1 day 5:50 hours, 
> with 110815 circuits open. I've sent 19.55 GB and received 19.45 GB.
> Feb 01 16:57:56.000 [notice] Heartbeat: Tor's uptime is 1 day 11:50 hours, 
> with 141724 circuits open. I've sent 24.03 GB and received 23.90 GB.
> Feb 01 22:57:56.000 [notice] Heartbeat: Tor's uptime is 1 day 17:50 hours, 
> with 12829 circuits open. I've sent 29.96 GB and received 29.75 GB.

Do you see some sort of increase in the DoS mitigation stats during that time
period? It would be the heartbeat line that starts with:

"DoS mitigation since startup:" ...

Bursts of circuits are possible for many reasons. But if that leads to high
memory usage that doesn't come back down to a normal level once the bursts are
over, we may have a problem.

If you end up with any more logs about this or if your relay gets OOMed,
please share so we can investigate what is going on.

Thanks!
David

> 
> ‐‐‐ Original Message ‐‐‐
> On Friday, 1. February 2019 23:50, Roman Mamedov  wrote:
> 
> > Hello,
> >
> > There seems to be an issue with Tor's memory usage.
> > Earlier today, with Tor 0.3.5.7 and 1.5 GB of RAM running two Tor processes, 
> > the
> > machine got 430 MB into swap, slowing down to a crawl from iowait on 
> > accessing
> > the swapped out memory. Typically 1.5 GB is more than enough for these. 
> > "VIRT"
> > in top was ~1GB each, and "RES" was ~512MB each. Which is weird because that
> > doesn't add up to exhausting the 1.5 GB, and there are no other heavy
> > processes on the machine running. I rebooted it without further 
> > investigation.
> >
> > And right now on another machine running 0.2.9.16 I see:
> >
> > PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> > 22432 debian-+ 30 10 5806816 5.157g 10464 R 39.5 33.1 39464:19 tor
> >
> > But not sure if it just accumulated 5.1GB of RAM slowly over time, or shot 
> > up
> > recently.
> >
> > Feb 01 17:00:49.000 [notice] Heartbeat: Tor's uptime is 82 days 23:59 hours,
> > with 70705 circuits open. I've sent 66622.45 GB and received 65906.91 GB.
> > Feb 01 17:00:49.000 [notice] Circuit handshake stats since last time:
> > 11361/11361 TAP, 239752/239752 NTor.
> > Feb 01 17:00:49.000 [notice] Since startup, we have initiated 0 v1
> > connections, 0 v2 connections, 10 v3 connections, and 3385644 v4 
> > connections;
> > and received 14 v1 connections, 78592 v2 connections, 822108 v3 connections,
> > and 8779474 v4 connections.
> > Feb 01 17:00:49.000 [notice] DoS mitigation since startup: 2899572 circuits
> > rejected, 121 marked addresses. 561 connections closed. 21956 single hop 
> > clients refused.
> > Feb 01 17:08:20.000 [warn] connection_edge_process_relay_cell (at origin)
> > failed.
> >
> > 

Re: [tor-relays] AS: "ColoCrossing" - 28 new relays

2018-12-12 Thread David Goulet
On 12 Dec (09:33:58), Toralf Förster wrote:
> On 12/11/18 10:54 PM, nusenu wrote:
> >  from their fingerprints
> I'm just curious that the fingerprints start with the same sequence. I was
> under the impression that the fingerprint is somehow unique, like a hash?

If one wants to position a relay on the hashring at a specific spot, one can
brute-force the key generation until the first bytes of the fingerprint match.
Usually 4 or 5 bytes are enough, and it doesn't take that long to compute.
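
For a rough sense of the cost: matching a fixed 4-byte prefix (8 hex
characters) takes about 2^32, roughly 4.3 billion, attempts on average, and 5
bytes about 2^40. Brute-forcing tools typically make each attempt cheap by
iterating over parts of the key (the RSA public exponent, for example) rather
than generating a fresh key every time.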

And because the position on the hashring is predictable over time for hidden
service *version 2*, anyone can set up relays that, in 5 days, will be at the
right position.

Hence the importance of catching these relays before they get the HSDir flag,
which requires 96 hours of uptime.

Cheers!
David

-- 
WzhUyhDvWQI2JZglnMWl4fhIHYln5DpMG50IrXaHPLU=


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] DoSer is back, Tor dev's please consider

2018-03-23 Thread David Goulet
On 22 Mar (23:20:54), tor wrote:
> > Suggestion: DoSCircuitCreationMinConnections=1 be established in consensus
> 
> 
> The man page for the above option says:
> 
> "Minimum threshold of concurrent connections before a client address can be
> flagged as executing a circuit creation DoS. In other words, once a client
> address reaches the circuit rate and has a minimum of NUM concurrent
> connections, a detection is positive. "0" means use the consensus parameter.
> If not defined in the consensus, the value is 3. (Default: 0)"
> 
> Reading this, I get the impression that lowering the value to 1 would
> negatively impact clients behind carrier NAT. Isn't that the case? If we
> only allow 1 concurrent connection per IP, wouldn't that prevent multiple
> users behind a single IP? I would think the same problem would apply to
> lowering DoSConnectionMaxConcurrentCount as well (which I think is currently
> 50 in the consensus, but I've seen suggestions to lower it to 4).
> 
> Am I misunderstanding?

Yes, lowering DoSCircuitCreationMinConnections to 1 means that only 1
concurrent client TCP connection is needed to start applying the circuit
creation DoS mitigation instead of 3. This will thus impact all types of
clients and *especially* hidden services, which have many clients. They will
open many circuits in a few seconds, so a Guard applying the DoS mitigation
will make them sad.

I would strongly suggest leaving it untouched in your torrc for now and
letting the consensus value be used.
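
If you want to be explicit about it, the man page text quoted above says that
"0" means using the consensus parameter, so this torrc line (or simply
omitting the option) achieves exactly that:

  DoSCircuitCreationMinConnections 0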

Thanks!
David

-- 
hgJe5VGAkZPnC/W4iPXnCuf1HcG2evYQqVjeb8Ugb4Y=


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] 1 circuit using 1.5Gig or ram? [0.3.3.2-alpha]

2018-02-12 Thread David Goulet
On 12 Feb (21:14:14), Stijn Jonker wrote:
> Hi David,
> 
> On 12 Feb 2018, at 20:44, David Goulet wrote:
> 
> > On 12 Feb (20:09:35), Stijn Jonker wrote:
> >> Hi all,
> >>
> >> So in general 0.3.3.1-alpha-dev and 0.3.3.2-alpha running on two nodes
> >> without any connection limits on the iptables firewall seems to be a lot
> >> more robust against the recent increase in clients (or possible [D]DoS). 
> >> But
> >> tonight for a short period of time one of the relays was running a bit 
> >> "hot"
> >> so to say.
> >>
> >> Only to be greeted by this log entry:
> >> Feb 12 18:54:55 tornode2 Tor[6362]: We're low on memory (cell queues total
> >> alloc: 1602579792 buffer total alloc: 1388544, tor compress total alloc:
> >> 1586784 rendezvous cache total alloc: 489909). Killing circuits
> >> with over-long queues. (This behavior is controlled by MaxMemInQueues.)
> >> Feb 12 18:54:56 tornode2 Tor[6362]: Removed 1599323088 bytes by killing 1
> >> circuits; 39546 circuits remain alive. Also killed 0 non-linked directory
> >> connections.
> >
> > Wow... 1599323088 bytes is insane. This should _not_ happen for only 1
> > circuit. We actually have checks in place to avoid this but it seems they
> > either totally failed or we have a edge case.
> Yeah it felt a "bit" much. A couple megs I wouldn't have shared :-)
> 
> > Can you tell me which scheduler you were using (look for "Scheduler" in the
> > notice log)?
> 
> The schedular always seems to be KIST (never played with it/tried to change 
> it)
> Feb 11 19:58:24 tornode2 Tor[6362]: Scheduler type KIST has been enabled.
> 
> > Any warnings in the logs that you could share or everything was normal?
> Besides that ESXi host gave an alarm about CPU usage, nothing odd in the logs 
> around that time I could find.
> The general syslog logging worked both locally on the host and remote as the 
> hourly cron jobs surround this entry.
> 
> 
> > Finally, if you can share the OS you are running this relay and if Linux, 
> > the
> > kernel version.
> 
> Debian Stretch, Linux tornode2 4.9.0-5-amd64 #1 SMP Debian 4.9.65-3+deb9u2 
> (2018-01-04) x86_64 GNU/Linux
> not sure it matters, but ESXi based VM, running with 2 vCPU's based on 
> i5-5300U, 4 Gig of memory
> 
> No problems, happy to squash bugs. I guess one of the "musts" when running 
> Alpha code, although this might not be alpha related (I can't judge).

Thanks for all the information!

I've opened https://bugs.torproject.org/25226

Cheers!
David

-- 
1xYrq8XhE25CKCQqvcX/cqKg04v1HthMMM3PwaRqqdU=


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] 1 circuit using 1.5Gig or ram? [0.3.3.2-alpha]

2018-02-12 Thread David Goulet
On 12 Feb (20:09:35), Stijn Jonker wrote:
> Hi all,
> 
> So in general 0.3.3.1-alpha-dev and 0.3.3.2-alpha running on two nodes
> without any connection limits on the iptables firewall seems to be a lot
> more robust against the recent increase in clients (or possible [D]DoS). But
> tonight for a short period of time one of the relays was running a bit "hot"
> so to say.
> 
> Only to be greeted by this log entry:
> Feb 12 18:54:55 tornode2 Tor[6362]: We're low on memory (cell queues total
> alloc: 1602579792 buffer total alloc: 1388544, tor compress total alloc:
> 1586784 rendezvous cache total alloc: 489909). Killing circuits
> with over-long queues. (This behavior is controlled by MaxMemInQueues.)
> Feb 12 18:54:56 tornode2 Tor[6362]: Removed 1599323088 bytes by killing 1
> circuits; 39546 circuits remain alive. Also killed 0 non-linked directory
> connections.

Wow... 1599323088 bytes is insane. This should _not_ happen for only 1
circuit. We actually have checks in place to avoid this, but it seems they
either totally failed or we have an edge case.

Can you tell me which scheduler you were using (look for "Scheduler" in the
notice log)?

Any warnings in the logs that you could share or everything was normal?

Finally, if you can share the OS you are running this relay and if Linux, the
kernel version.

Big thanks!
David

-- 
1xYrq8XhE25CKCQqvcX/cqKg04v1HthMMM3PwaRqqdU=


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] Exits lost their function

2018-02-09 Thread David Goulet
On 09 Feb (19:06:23), Paul wrote:
> What could bring several exits at different providers and different operating 
> systems (Linux and FreeBSD) down on the same day, Jan 21st?
> 
> Since, while they still run as relays, they don’t show as exits any more 
> without any change from my side.
> 
> They do run on Tor 0.3.1.9 or 0.3.2.9 in the same Family.

They could have been flagged as BadExit.

Can you provide the list of fingerprints or/and IPs of your Exits?

Thanks!
David

> 
> Thanks 
> Paul
> ___
> tor-relays mailing list
> tor-relays@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays

-- 
And8vxUcJVOn9srRjJ3mpKMUC5pScfYMRq9Qv9yt54Y=


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] Experimental DoS mitigation is in tor master

2018-02-01 Thread David Goulet
On 01 Feb (04:01:10), grarpamp wrote:
> > Applications that use a lot of resources will have to rate-limit themselves.
> > Otherwise, relays will rate-limit them.
> 
> It's possible if relays figure that stuff by #2 might not be
> an attack per se, but could be user activities... that relays
> might push back on that one by...
> - Seeking significantly higher default values committed
> - Seeking default action committed as off
> - Setting similar on their own relays if commits don't
> work. And by not being default off, it should be prominently
> documented if #2 affects users activities [1].

That I agree with. We've set up default values for now and they will probably
be adapted over time, so for now this is experimental, to see how unhappy we
make people (well, except for the people doing the DoS ;).

But I do agree that we should document, in some very public way (blog post or
wiki), the "real life" use cases that could trigger defenses at the relay
before this goes wide in the network. A large number of tor clients behind NAT
is one case I have in mind; IPv6 as well...

> 
> Indexers will distribute around it, yielding zero sum gain
> for the network and nodes.
> Multiparty onion p2p protocols could suffer though if #2 is
> expected to affect such things.

I just want to clarify the #2 defense which is the circuit creation
mitigation. The circuit rate is something we can adjust over time and we'll
see how that plays out like I said above.

However, to be identified as malicious, an IP address needs to have at least 3
concurrent TCP connections (also a parameter we can adjust if too low). A
normal "tor" client should never make more than a single TCP connection to its
Guard; everything is multiplexed in that connection.

So let's assume someone wants to "scan the onion space" and fires up 100 tor
clients behind a single IP address, resulting in a massive amount of HS
connections to every .onion they can find. These tor clients in general won't
pick the same Guard, but let's say 3 of them do, which will trigger the
concurrent connection threshold for circuit creation.

Doing 3 circuits a second continuously, up to a burst of 90, is still a
_large_ number that a relay needs to mitigate in some way so it can operate
properly and be fair to the rest of the clients, who do way less in normal
circumstances.

IMO, the fact that someone can buy big servers and big uplinks doesn't mean
they should be allowed to saturate links on the network. Unfortunately, the
network has limitations, and this latest DoS is showing us that relays have to
rate-limit stuff in a fair way if possible.

I bet there will be collateral damage for people currently using the network
in insane or unique ways. But overall, I don't expect it will hurt most use
cases because 1) we made it so that only rare (or unknown) patterns of tor
client usage can trigger this and 2) we can adjust any single parameter
through the consensus if needed.

We'll break some eggs in the beginning and we should act accordingly, but one
thing is certain: the current situation is not sustainable for any user on the
network.

From now on, we can only improve this DoS mitigation subsystem! :)

Cheers!
David

> 
> Was it ever discovered / confirmed what tool / usage was actually
> behind this recent ongoing 'DoS' phase? Whether then manifesting
> itself at the IP or tor protocol level.
> 
> Sorry if I missed this in all these threads.
> 
> [1] There could even be a clear section with simple named
> list of all options for further operator reading that could affect
> users activities / protocols.
> ___
> tor-relays mailing list
> tor-relays@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays

-- 
okhFQbMsSCX2RtIiPaat//YUDaCrKUWiOgBmw0blDzM=


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] MaxMemInQueues - per host, or per instance?

2017-12-22 Thread David Goulet
On 22 Dec (20:37:37), r1610091651 wrote:
> I'm wondering if it is necessary to have a lot of ram assigned to queues?
> Is there some rule of thumb to determine the proper sizing? Based on number
> of circuits maybe?

So there are probably many different answers to this or ways to look at
it but I can speak on how "tor" is built and why it is important to have
this memory limit assigned to queues.

A tor relay gets cells in and most of the time will relay them, that is, send
them outbound. But every cell that comes in needs some processing, which is
mostly decryption work.

So we get them, process them, then put them on a circuit queue. Then tor does
its best to dequeue a "not too big" amount of cells from a circuit and put
them on the outbound connection buffers which, when the socket is writable,
are flushed onto the network (written to the socket).

The MaxMemInQueues parameter basically tells the tor OOM handler when it is
time to start cleaning up allocated memory. But here is the catch: it only
accounts for cells on circuit queues, not connection buffers (it actually
handles other things too, but the majority of allocated data is usually in
cells).

For that reason, we are better off for now keeping relays at a sane value
for MaxMemInQueues so the OOM handler is actually triggered before the load
goes out of control.

If the MaxMemInQueues value is not set in your torrc, tor will pick 3/4 of
the total memory of your system. This is fine for most use cases, but if
your machine has 16GB of RAM and only 4GB are available, you have a problem.
So when setting it, it is not that easy to come up with a good value, but a
rule of thumb for now is to look at how much memory you normally have
available and estimate around that. It is also important not to go too low:
a fast relay limited to 1GB, for instance, will start to degrade performance
by killing circuits more often if it sees 20MB/s (empirically speaking).
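
One rough way to pick a number on Linux (a sketch, assuming a kernel recent
enough to report MemAvailable) is to check what the kernel considers available
and leave some headroom:

  grep MemAvailable /proc/meminfo

and then set MaxMemInQueues somewhere below that, say around half to three
quarters of it.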

I think we could do a better job at estimating it when not set, we could do a
better job with the OOM handler, we could do a lot more, but unfortunately for
now this is the state of things we need to deal with. We'll be trying to work
on more DoS resistance features, hopefully in the near future.

Hope this helps!

Cheers!
David

> 
> Do the wise-minds have a guidance on this one?
> 
> On Fri, 22 Dec 2017 at 21:08 Igor Mitrofanov 
> wrote:
> 
> > Thanks. I do have the IP space.
> >
> > It is a pity multiple instances cannot watch the overall RAM
> > remaining. I have quite a bit of RAM left, but there are large
> > discrepancies in terms of how much RAM different relays are using (>3
> > GB for some, <1 GB for others), so it will be tricky to set
> > MaxMemInQueues without making it too conservative.
> >
> > On Fri, Dec 22, 2017 at 11:46 AM, r1610091651 
> > wrote:
> > > It would expect it to be per instance. Instances are independent of each
> > > other. Further one can only run 2 instances max / ip.
> > >
> > > On Fri, 22 Dec 2017 at 20:40 Igor Mitrofanov <
> > igor.n.mitrofa...@gmail.com>
> > > wrote:
> > >>
> > >> Hi,
> > >>
> > >> Is MaxMemInQueues parameter per-host (global) or per-instance?
> > >> Say, there are 10 relays on the same 24 GB host. Should I set
> > >> MaxMemInQueues to 20 GB, or 2 GB in each torrc?
> > >>
> > >> Thanks,
> > >> Igor
> > >> ___
> > >> tor-relays mailing list
> > >> tor-relays@lists.torproject.org
> > >> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
> > >
> > >
> > > ___
> > > tor-relays mailing list
> > > tor-relays@lists.torproject.org
> > > https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
> > >
> > ___
> > tor-relays mailing list
> > tor-relays@lists.torproject.org
> > https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
> >

> ___
> tor-relays mailing list
> tor-relays@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


-- 
tPcuU+9hl1BRjXh3xHhFgg22HULt2edIxY5kAKLBPPA=


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] first impression with 0.3.2.8-rc at a fast exit relay

2017-12-22 Thread David Goulet
On 22 Dec (00:20:38), Toralf Förster wrote:
> With 0.3.2.7-rc the command
>   /usr/sbin/iftop -B -i eth0 -P -N -n -m 320M
> showed every then and when (few times in a hour) for 10-20 sec a traffic 
> value of nearly 0 bytes for the short-term period (the left of the 3 values).
> Usuaally I do poberve between 6 and 26 MByte/sec.
> With the Tor version from today now the outage is about 1-2 sec, but does 
> still occur.

Not sure I fully understand what you mean here. For 1 to 2 sec you see 0 bytes
of outbound traffic :| ?

Doing the same on my fast non-Exit relay (~20MB/s) on the latest 0.3.2, I'm
always capped both ways on the connection.

This systematic delay really sounds more like something on the kernel side of things.

Are you on BSD or Linux?

Thanks!
David

> Not sure, if this is an expected behaviour or a local problem.
> 
> -- 
> Toralf
> PGP C4EACDDE 0076E94E
> 




> ___
> tor-relays mailing list
> tor-relays@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


-- 
DMdcRweJVXVbzthX2gDiX2OwwF5dP4HgkREJLd+rUJM=


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] Ongoing DDoS on the Network - Status

2017-12-21 Thread David Goulet
On 21 Dec (22:15:00), Felix wrote:
> > If you are running a relay version >= 0.3.2.x (currently 281 relays in the
> > network), please update as soon as you can with the latest tarball or latest
> > git tag.
> Update as well if HSDir is still present? The network might lose the
> rare ones.

If you are running 0.3.2, I would say yes. Now is a good time, while we still
have ~2000 HSDirs. With the KIST scheduler and this latest release, your relay
will be more resilient to this DDoS.

With <= 0.3.1, setting the option and then sending a HUP will apply it without
restarting.

Thanks!
David

> -- 
> Cheers, Felix
> ___
> tor-relays mailing list
> tor-relays@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays

-- 
DMdcRweJVXVbzthX2gDiX2OwwF5dP4HgkREJLd+rUJM=


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] Ongoing DDoS on the Network - Status

2017-12-21 Thread David Goulet
On 20 Dec (11:21:57), David Goulet wrote:
> Hi everyone!
> 
> I'm David and I'm part of the core development team in Tor. A few minutes ago
> I just sent this to the tor-project@ mailing list about the DDoS the network
> is currently under:
> 
> https://lists.torproject.org/pipermail/tor-project/2017-December/001604.html
> 
> There is not much more to say about this right now, but I wanted to thank
> everyone here for running a relay; this situation is not pleasant for anyone,
> especially for relay operators who need to deal with this attack
> (and extra bonus points during the holidays for some...).
> 
> Second, everyone who provided information, took the time to dig into this
> problem, and sent their findings to this list was a HUGE help to us, so again,
> thank you very much for this.
> 
> We will update everyone as soon as possible on the status of the tor releases
> that hopefully will contain fixes that should help mitigate this DDoS.

Hi again everyone!

We've just released 0.3.2.8-rc that contains critical fixes in order for tor
to deal with the ongoing DDoS:

https://lists.torproject.org/pipermail/tor-talk/2017-December/043844.html

Packagers have been notified also so hopefully we might get them soonish.

If you are running a relay version >= 0.3.2.x (currently 281 relays in the
network), please update as soon as you can with the latest tarball or latest
git tag.

For the others still on <= 0.3.1.x, we do have a fix that hasn't been released
yet and we'll hopefully have more soon.

In the meantime, I will repeat the recommendation we have until we can roll
out more DoS defenses. If you are affected by this DDoS, set MaxMemInQueues to
a value that reflects the amount of *available free* RAM on your machine, not
the total amount of RAM.

For instance, if you have a server with 16GB of RAM but only 8GB are free,
setting the MaxMemInQueues value to or below 8GB is the wise thing to do until
this DDoS is resolved. Of course, the more you can offer the better!
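
For example, in torrc (pick the number that fits your own machine):

  MaxMemInQueues 8 GB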

The reason for this is to force "tor" to trigger its OOM (Out Of Memory)
handler before it is too late. This won't reduce the load, but it will keep
the relay alive, prevent it from running out of memory, and hopefully keep it
in the consensus.

Thanks everyone for your help!
David

-- 
DMdcRweJVXVbzthX2gDiX2OwwF5dP4HgkREJLd+rUJM=


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


[tor-relays] Ongoing DDoS on the Network - Status

2017-12-20 Thread David Goulet
Hi everyone!

I'm David and I'm part of the core development team in Tor. A few minutes ago
I just sent this to the tor-project@ mailing list about the DDoS the network
is currently under:

https://lists.torproject.org/pipermail/tor-project/2017-December/001604.html

There is not much more to say about this right now, but I wanted to thank
everyone here for running a relay; this situation is not pleasant for anyone,
especially for relay operators who need to deal with this attack
(and extra bonus points during the holidays for some...).

Second, everyone who provided information, took the time to dig into this
problem, and sent their findings to this list was a HUGE help to us, so again,
thank you very much for this.

We will update everyone as soon as possible on the status of the tor releases
that hopefully will contain fixes that should help mitigate this DDoS.

Cheers!
David

-- 
aFJe0kbRB1zZXgwFQIvBG0Skn3xAsDGxVQsAiguKjY8=


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] Removed x bytes by killing y circuits

2017-12-14 Thread David Goulet
On 14 Dec (20:41:56), Felix wrote:
> Hi everybody
> 
> Can someone explain the following tor log entry:
> 
> Removed 528 bytes by killing 385780 circuits; 0 circuits remain alive.
> (it's the nicest one, see below)
> 
> 
> Memory stays at 3 - 4 GB before and after. Only tor restart gets rid of
> the memory.
> 
> We love to remove 528 bytes.
> 
> Tor log becomes 370 MB within 18 hours.
> 
> 
> Tor 0.3.2.6-alpha (git-87012d076ef58bb9)
> MaxMemInQueues 2 GB

Do you see a log line indicating which type of "Scheduler" you are using,
such as:

[notice] Scheduler type KIST has been enabled.

That is seriously a huge number of circuits for very little data. For
instance, 13344 bytes (0.013 MB) across 594572 circuits is just weird.

Is there a chance you are being DoSed in some capacity? That is, a bunch of
circuits being opened constantly but with no traffic? You would see that as
many inbound connections, especially if they come from non-relays.
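
One rough way to eyeball the inbound connection count on Linux (a sketch,
assuming your ORPort is 9001; adjust to your setup):

  ss -nt state established '( sport = :9001 )' | tail -n +2 | wc -l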

Another possibility is that Tor is failing to clean up inactive circuits, but
with more information we can eliminate options more easily.

Thanks!
David

> 
> 
> We receive some of these:
> 
> Dec 14 00:11:00.000 [notice] Heartbeat: Tor's uptime is 8 days 3:00
> hours, with 508694 circuits open.
> Dec 14 00:30:08.000 [notice] We're low on memory.  Killing circuits with
> over-long queues. (This behavior is controlled by MaxMemInQueues.)
> Dec 14 00:30:09.000 [notice] Removed 13344 bytes by killing 594572
> circuits; 0 circuits remain alive. Also killed 3 non-linked directory
> connections.
> 
> Dec 14 01:11:00.000 [notice] Heartbeat: Tor's uptime is 8 days 4:00
> hours, with 67530 circuits open.
> Dec 14 02:07:16.000 [notice] We're low on memory.  Killing circuits with
> over-long queues. (This behavior is controlled by MaxMemInQueues.)
> Dec 14 02:07:17.000 [notice] Removed 206448 bytes by killing 484352
> circuits; 0 circuits remain alive. Also killed 2 non-linked directory
> connections.
> 
> Dec 14 03:11:00.000 [notice] Heartbeat: Tor's uptime is 8 days 6:00
> hours, with 379182 circuits open.
> Dec 14 03:13:17.000 [notice] We're low on memory.  Killing circuits with
> over-long queues. (This behavior is controlled by MaxMemInQueues.)
> Dec 14 03:13:17.000 [notice] Removed 528 bytes by killing 385780
> circuits; 0 circuits remain alive. Also killed 0 non-linked directory
> connections.
> 
> Dec 14 05:11:00.000 [notice] Heartbeat: Tor's uptime is 8 days 8:00
> hours, with 303958 circuits open.
> Dec 14 05:15:17.000 [notice] We're low on memory.  Killing circuits with
> over-long queues. (This behavior is controlled by MaxMemInQueues.)
> Dec 14 05:15:18.000 [notice] Removed 528 bytes by killing 317854
> circuits; 0 circuits remain alive. Also killed 0 non-linked directory
> connections.
> 
> and lots of:
> 
> Dec 14 00:30:22.000 [notice] We're low on memory.  Killing circuits with
> over-long queues. (This behavior is controlled by MaxMemInQueues.)
> Dec 14 00:30:22.000 [notice] Removed 528 bytes by killing 221 circuits;
> 0 circuits remain alive. Also killed 0 non-linked directory connections.
> Dec 14 00:30:22.000 [notice] We're low on memory.  Killing circuits with
> over-long queues. (This behavior is controlled by MaxMemInQueues.)
> Dec 14 00:30:22.000 [notice] Removed 528 bytes by killing 297 circuits;
> 0 circuits remain alive. Also killed 0 non-linked directory connections.
> Dec 14 00:30:22.000 [notice] We're low on memory.  Killing circuits with
> over-long queues. (This behavior is controlled by MaxMemInQueues.)
> Dec 14 00:30:22.000 [notice] Removed 528 bytes by killing 395 circuits;
> 0 circuits remain alive. Also killed 0 non-linked directory connections.
> Dec 14 00:30:22.000 [notice] We're low on memory.  Killing circuits with
> over-long queues. (This behavior is controlled by MaxMemInQueues.)
> Dec 14 00:30:22.000 [notice] Removed 528 bytes by killing 468 circuits;
> 0 circuits remain alive. Also killed 0 non-linked directory connections.
> Dec 14 00:30:22.000 [notice] We're low on memory.  Killing circuits with
> over-long queues. (This behavior is controlled by MaxMemInQueues.)
> Dec 14 00:30:22.000 [notice] Removed 528 bytes by killing 509 circuits;
> 0 circuits remain alive. Also killed 0 non-linked directory connections.
> 
> 
> -- 
> Thanks and cheers, Felix
> ___
> tor-relays mailing list
> tor-relays@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays

-- 
PQgdff5S0a51LrwYmq/+PRgWSz+jjvkgZTCn3plzEkY=


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] [metrics-team] Atlas is now Relay Search!

2017-11-14 Thread David Goulet
On 14 Nov (13:24:21), Iain R. Learmonth wrote:
> Hi David,
> 
> On 14/11/17 13:01, David Goulet wrote:
> > Quick question for you. Atlas used to have the search box in the corner at
> > all times, which for me was very useful because I could do many searches
> > without the extra click to go back one level down that the new site requires.
> > 
> > How crazy would it be to bring it back? Always hovering in the top corner? 
> > :)
> > Maybe a ticket would be a better way to ask?
> 
> Please do file a ticket.

Cheers!

https://trac.torproject.org/projects/tor/ticket/24274

> 
> Thanks,
> Iain.
> 
> ___
> tor-relays mailing list
> tor-relays@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


-- 
MxBkRXCYwsjs9XYQ2CdV6AR4pWxGtfzRvkWje9ebIvM=


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] [metrics-team] Atlas is now Relay Search!

2017-11-14 Thread David Goulet
On 14 Nov (12:52:27), Iain R. Learmonth wrote:
> Hi All,
> 
> You may notice that Atlas has a new look, and is no longer called Atlas.
> For now no URLs have changed but this is part of work to merge this tool
> into the Tor Metrics website.

Hi Iain!

Great stuff! Thanks for this, and for letting us know; I would have been very
confused, as I use Atlas all the time :).

Quick question for you. Atlas used to have the search box in the corner at all
times, which for me was very useful because I could do many searches without
the extra click to go back one level down that the new site requires.

How crazy would it be to bring it back? Always hovering in the top corner? :)
Maybe a ticket would be a better way to ask?

Big thanks!
David

> 
> The style is determined by the Tor Metrics Style, and modifications have
> been made to fit this.
> 
> The decision was made to deploy these changes before the actual
> integration into metrics.torproject.org to allow for other issues to be
> worked on. This was a big change and it was tricky maintaining two
> branches of the codebase.
> 
> Issues should still be reported on the Metrics/Atlas component in the
> Tor trac if they arise. When we come to full integration, URLs will
> change, but there will be a period where we maintain redirects to prevent
> any URLs from breaking while they wait to be updated.
> 
> Thanks,
> Iain.
> 




> ___
> metrics-team mailing list
> metrics-t...@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/metrics-team


-- 
M72MGWsMq9KJ+hYLXg8sXrwfexA4QUqnNwWVOMxVBvM=


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] Decline in relays

2017-10-23 Thread David Goulet
On 23 Oct (22:49:55), rasptor 4273 wrote:
> My relay has gone off the consensus.
> Fingerprint: E7FFF8C3D5736AB87215C5DB05620103033E69C3

Interesting. And it is still running right now without any problems? Can you
give me the IP/ORPort tuple?

Do you think you can add this to your torrc and then HUP your relay (very
important: do NOT restart it)?

Log info file 

Then, after some hours (maybe a day), we'll be looking for "Decided to
publish new relay descriptor".

If it appears, we know that your relay keeps uploading to the directory
authorities, and thus chances are the problem is on the dirauth side not
finding you reachable.
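
Concretely, that would look something like this in the torrc (the log path
below is only an example, adjust it to your setup):

  Log info file /var/log/tor/info.log

and then, after it has been running for a while:

  grep "Decided to publish new relay descriptor" /var/log/tor/info.log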

Thanks!
David

> Alias: rasptor4273
> I am running Tor 0.2.5.14 on Debian on a Raspberry Pi 2B. I upgraded to that
> version on September 3rd.
> 
> I grepped through these:
> https://collector.torproject.org/archive/relay-descriptors/consensuses/ and
> the latest entry I found for my alias is in the file
> ./17/2017-09-17-13-00-00-consensus.
> 
> Not sure what other information I can provide. Do let me know if I can do
> anything else to help troubleshoot.
> 
> Best,
> Joep
> 
> On Mon, Oct 23, 2017 at 9:14 PM, George <geo...@queair.net> wrote:
> 
> > David Goulet:
> > > Hello everyone!
> > >
> > > Since July 2017, there has been a steady decline in relays, from ~7k to
> > > now ~6.5k. This is a bit unusual: we don't often see such a steady
> > > pattern of relays going offline (at least not that I can remember...).
> > >
> > > It could certainly be something normal here. However, we shouldn't rule
> > > out a bug in tor as well. The steadiness of the decline makes me a bit
> > > more worried than usual.
> > >
> > > You can see the decline started around July 2017:
> > >
> > > https://metrics.torproject.org/networksize.html?start=2017-06-01&end=2017-10-23
> > >
> > > What happened around July in terms of tor releases:
> > >
> > > 2017-06-08 09:35:17 -0400 802d30d9b7  (tag: tor-0.3.0.8)
> > > 2017-06-08 09:47:44 -0400 e14006a545  (tag: tor-0.2.5.14)
> > > 2017-06-08 09:47:58 -0400 aa89500225  (tag: tor-0.2.9.11)
> > > 2017-06-08 09:55:28 -0400 f833164576  (tag: tor-0.2.4.29)
> > > 2017-06-08 09:55:58 -0400 21a9e5371d  (tag: tor-0.2.6.12)
> > > 2017-06-08 09:56:15 -0400 3db01d3b56  (tag: tor-0.2.7.8)
> > > 2017-06-08 09:58:36 -0400 64ac28ef5d  (tag: tor-0.2.8.14)
> > > 2017-06-08 10:15:41 -0400 dc47d936d4  (tag: tor-0.3.1.3-alpha)
> > > ...
> > > 2017-06-29 16:56:13 -0400 fab91a290d  (tag: tor-0.3.1.4-alpha)
> > > 2017-06-29 17:03:23 -0400 22b3bf094e  (tag: tor-0.3.0.9)
> > > ...
> > > 2017-08-01 11:33:36 -0400 83389502ee  (tag: tor-0.3.1.5-alpha)
> > > 2017-08-02 11:50:57 -0400 c33db290a9  (tag: tor-0.3.0.10)
> > >
> > > Note that on August 1st 2017, 0.2.4, 0.2.6 and 0.2.7 went end of life.
> > >
> > > That being said, I don't have an easy way to list which relays went
> > > offline during the decline (since July, basically) to see if a common
> > > pattern emerges.
> > >
> > > So, a few things. First, if anyone on this list noticed that their
> > > relay went off the consensus while still having tor running, now is a
> > > good time to inform this thread :).
> > >
> > > Second, anyone could have an idea of what is possibly going on, that
> > > is, one or more theories. Even better, if you have some tooling to try
> > > to list which relays went offline, that would be _awesome_.
> > >
> > > Third, knowing what the state of packaging in Debian/Redhat/Ubuntu/...
> > > was around July could be useful. What if a package in distro X is
> > > broken and the update has been killing the relays? Or something like
> > > that...
> > >
> > > Last, looking at the dirauths would be a good idea. Basically, when did
> > > the majority switch to 030 and then 031? Starting in July, what was the
> > > state of the dirauth versions?
> > >
> > > Any help is very welcome! Again, this decline could be from natural
> > > causes, but for now I just don't want to rule out an issue in tor or
> > > packaging.
> >
> > (Replying to OP since it went OT)
> >
> > As some of you know, TDP did a little suite of shell scripts based on
> > OONI data to look at diversity statistics:
> >
> > https://torbsd.github.io/oostats.html
> >
> > With the source here fo

Re: [tor-relays] Decline in relays

2017-10-23 Thread David Goulet
On 23 Oct (09:37:31), Eli wrote:

> I can state the reason I stopped hosting my exit relay was due to tor rpm
> package not being up to date for CentOS 7. The last available version was
> considered out of date and no longer supported. So instead of running a
> relay that was potentially detrimental to the health of the tor network I
> shut down the node.

I've just pinged our Fedora/CentOS packager, and he pointed out this:

https://bodhi.fedoraproject.org/updates/FEDORA-EPEL-2017-abe6f98ebf

Six days ago, the latest up-to-date Tor LTS version was uploaded. :)

Big thanks for running a relay!

Cheers!
David

> 
> On Oct 23, 2017, 9:32 AM, at 9:32 AM, David Goulet <dgou...@torproject.org> 
> wrote:
> >Hello everyone!
> >
> >Since July 2017, there has been a steady decline in relays, from ~7k to
> >now ~6.5k. This is a bit unusual: we don't often see such a steady
> >pattern of relays going offline (at least not that I can remember...).
> >
> >It could certainly be something normal here. However, we shouldn't rule
> >out a bug in tor as well. The steadiness of the decline makes me a bit
> >more worried than usual.
> >
> >You can see the decline started around July 2017:
> >
> >https://metrics.torproject.org/networksize.html?start=2017-06-01&end=2017-10-23
> >
> >What happened around July in terms of tor releases:
> >
> >2017-06-08 09:35:17 -0400 802d30d9b7  (tag: tor-0.3.0.8)
> >2017-06-08 09:47:44 -0400 e14006a545  (tag: tor-0.2.5.14)
> >2017-06-08 09:47:58 -0400 aa89500225  (tag: tor-0.2.9.11)
> >2017-06-08 09:55:28 -0400 f833164576  (tag: tor-0.2.4.29)
> >2017-06-08 09:55:58 -0400 21a9e5371d  (tag: tor-0.2.6.12)
> >2017-06-08 09:56:15 -0400 3db01d3b56  (tag: tor-0.2.7.8)
> >2017-06-08 09:58:36 -0400 64ac28ef5d  (tag: tor-0.2.8.14)
> >2017-06-08 10:15:41 -0400 dc47d936d4  (tag: tor-0.3.1.3-alpha)
> >...
> >2017-06-29 16:56:13 -0400 fab91a290d  (tag: tor-0.3.1.4-alpha)
> >2017-06-29 17:03:23 -0400 22b3bf094e  (tag: tor-0.3.0.9)
> >...
> >2017-08-01 11:33:36 -0400 83389502ee  (tag: tor-0.3.1.5-alpha)
> >2017-08-02 11:50:57 -0400 c33db290a9  (tag: tor-0.3.0.10)
> >
> >Note that on August 1st 2017, 0.2.4, 0.2.6 and 0.2.7 went end of life.
> >
> >That being said, I don't have an easy way to list which relays went
> >offline during the decline (since July, basically) to see if a common
> >pattern emerges.
> >
> >So, a few things. First, if anyone on this list noticed that their relay
> >went off the consensus while still having tor running, now is a good time
> >to inform this thread :).
> >
> >Second, anyone could have an idea of what is possibly going on, that is,
> >one or more theories. Even better, if you have some tooling to try to
> >list which relays went offline, that would be _awesome_.
> >
> >Third, knowing what the state of packaging in Debian/Redhat/Ubuntu/...
> >was around July could be useful. What if a package in distro X is broken
> >and the update has been killing the relays? Or something like that...
> >
> >Last, looking at the dirauths would be a good idea. Basically, when did
> >the majority switch to 030 and then 031? Starting in July, what was the
> >state of the dirauth versions?
> >
> >Any help is very welcome! Again, this decline could be from natural
> >causes, but for now I just don't want to rule out an issue in tor or
> >packaging.
> >
> >Cheers!
> >David
> >
> >--
> >HiTVizeJUSe9JPvs6jBv/6i8YFvEYY/NZmNhD2UixVY=
> >
> >
> >
> >
> >___
> >tor-relays mailing list
> >tor-relays@lists.torproject.org
> >https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays

> ___
> tor-relays mailing list
> tor-relays@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


-- 
HiTVizeJUSe9JPvs6jBv/6i8YFvEYY/NZmNhD2UixVY=


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


[tor-relays] Decline in relays

2017-10-23 Thread David Goulet
Hello everyone!

Since July 2017, there has been a steady decline in relays, from ~7k to now
~6.5k. This is a bit unusual: we don't often see such a steady pattern of
relays going offline (at least not that I can remember...).

It could certainly be something normal here. However, we shouldn't rule out a
bug in tor as well. The steadiness of the decline makes me a bit more worried
than usual.

You can see the decline started around July 2017:

https://metrics.torproject.org/networksize.html?start=2017-06-01&end=2017-10-23

What happened around July in terms of tor releases:

2017-06-08 09:35:17 -0400 802d30d9b7  (tag: tor-0.3.0.8)
2017-06-08 09:47:44 -0400 e14006a545  (tag: tor-0.2.5.14)
2017-06-08 09:47:58 -0400 aa89500225  (tag: tor-0.2.9.11)
2017-06-08 09:55:28 -0400 f833164576  (tag: tor-0.2.4.29)
2017-06-08 09:55:58 -0400 21a9e5371d  (tag: tor-0.2.6.12)
2017-06-08 09:56:15 -0400 3db01d3b56  (tag: tor-0.2.7.8)
2017-06-08 09:58:36 -0400 64ac28ef5d  (tag: tor-0.2.8.14)
2017-06-08 10:15:41 -0400 dc47d936d4  (tag: tor-0.3.1.3-alpha)
...
2017-06-29 16:56:13 -0400 fab91a290d  (tag: tor-0.3.1.4-alpha)
2017-06-29 17:03:23 -0400 22b3bf094e  (tag: tor-0.3.0.9)
...
2017-08-01 11:33:36 -0400 83389502ee  (tag: tor-0.3.1.5-alpha)
2017-08-02 11:50:57 -0400 c33db290a9  (tag: tor-0.3.0.10)

Note that on August 1st 2017, 0.2.4, 0.2.6 and 0.2.7 went end of life.

That being said, I don't have an easy way to list which relays went offline
during the decline (since July, basically) to see if a common pattern emerges.

So, a few things. First, if anyone on this list noticed that their relay went
off the consensus while still having tor running, now is a good time to inform
this thread :).

Second, anyone could have an idea of what is possibly going on, that is, one
or more theories. Even better, if you have some tooling to try to list which
relays went offline, that would be _awesome_.
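
As a very rough starting point, something like this (a sketch against the
public Onionoo "details" API; untested, and the 30-120 day window is just an
example) could list candidates:

  import json
  import urllib.request

  # Ask Onionoo for relays last seen between 30 and 120 days ago,
  # i.e. candidates that dropped out during the decline window.
  url = ("https://onionoo.torproject.org/details"
         "?type=relay&last_seen_days=30-120"
         "&fields=nickname,fingerprint,last_seen,platform")
  with urllib.request.urlopen(url) as resp:
      data = json.load(resp)

  for relay in data.get("relays", []):
      print(relay.get("last_seen"), relay.get("platform"),
            relay.get("fingerprint"), relay.get("nickname"))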

Third, knowing what the state of packaging in Debian/Redhat/Ubuntu/... was
around July could be useful. What if a package in distro X is broken and the
update has been killing the relays? Or something like that...

Last, looking at the dirauths would be a good idea. Basically, when did the
majority switch to 030 and then 031? Starting in July, what was the state of
the dirauth versions?

Any help is very welcome! Again, this decline could be from natural causes,
but for now I just don't want to rule out an issue in tor or packaging.

Cheers!
David

-- 
HiTVizeJUSe9JPvs6jBv/6i8YFvEYY/NZmNhD2UixVY=


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] OR banned?

2017-08-24 Thread David Goulet
On 24 Aug (12:11:47), Marcus Danilo Leite Rodrigues wrote:
> Hello.
> 
> I was running a Tor Relay for the past month (fingerprint
> 71BEBB61D0D35234D57087D035F12971FA315168)
> at my university and it seems that it got banned somehow. I got messages in
> my log like the following:
> 
> http status 400 ("Fingerprint is marked rejected -- please contact us?")
> response from dirserver '171.25.193.9:443'. Please correct.
> 
> I was hoping to get some information regarding this ban and how I could
> correct whatever was done wrong in order to get my relay up and running
> again.

Hello Marcus,

Your relay has been found to be harvesting .onion addresses, which is
strictly prohibited on the network.

See https://blog.torproject.org/blog/ethical-tor-research-guidelines

Were you conducting some research, or...?

Thanks for running a relay!
David

> 
> Best wishes,
> Marcus Rodrigues.

> ___
> tor-relays mailing list
> tor-relays@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


-- 
dI6qBwjRAsZuHbMRuPaXkArKESn4fYnY9Gcn/UW8Dlc=


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] prop224 warning

2017-08-01 Thread David Goulet
On 01 Aug (17:11:24), Logforme wrote:
> Saw a new thing in my tor log today:
> Aug 01 11:07:27.000 [warn] Established prop224 intro point on circuit
> 799774346

Oh my... that is NOT supposed to be a warning at all... We totally forgot to
remove that log statement...

Very sorry about that! I've created a ticket to fix this asap:

https://trac.torproject.org/projects/tor/ticket/23078

You can safely ignore this warning (these warnings), and hopefully you don't
get too many of them.

Thanks for the report!
David

> 
> According to google, prop224 is a new hidden service protocol?
> https://trac.torproject.org/projects/tor/ticket/12424
> 
> Which sounds like a great thing. But why do I get a warning about it?
> 
> My relay:
> https://atlas.torproject.org/#details/855BC2DABE24C861CD887DB9B2E950424B49FC34
> 
> ___
> tor-relays mailing list
> tor-relays@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays

-- 
zRYMiO7Zc20jTgiNCXjJqXM5lVqZ+v2mjc0JNyXnlBw=


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] 2017-06-07 15:37: 65 new tor exits in 30 minutes

2017-06-07 Thread David Goulet
On 07 Jun (19:41:00), nusenu wrote:
> DocTor [1] made me look into this.
> 
> _All_ 65 relays in the following table have the following characteristics:
> (not shown in the table to save some space)

Yah, we got a report on bad-relays@ as well... We are looking into this, but
it seems there is a distinctive pattern for most of them.

David

> 
> - OS: Linux
> - run two instances per IP address (the number of relays is only odd
> because in one case they created 3 keys per IP)
> - ORPort: random
> - DirPort: disabled
> - Tor Version: 0.2.9.10
> - ContactInfo: None
> - MyFamily: None
> - Joined the Tor network between 2017-06-07 15:37:32 and 2017-06-07
> 16:08:54 (UTC)
> - Exit Policy summary: {u'reject': [u'25', u'119', u'135-139', u'445',
> u'563', u'1214', u'4661-4666', u'6346-6429', u'6699', u'6881-6999']}
> - table is sorted by colmns 3,1,2 (in that order)
> 
> 
> - Group diversity:
>  - 20 distinct autonomous systems
>  - 18 distinct countries
> 
> https://gist.githubusercontent.com/nusenu/81337aed747ea5c7dec57899b0e27e94/raw/c7e0c4538e4f424b4cc529f3c2b1cabf6a5df579/2017-06-07_tor_network_65_relays_group.txt
> 
> 
> 
> Relay fingerprints are at the bottom of this file.
> 
> This list of relays is NOT identical to the one from DocTor (even though
> the number is identical (65)):
> [1]
> https://lists.torproject.org/pipermail/tor-consensus-health/2017-June/007968.html
> 
> https://twitter.com/nusenu_/status/872536564647198720
> 
> 
> -- 
> https://mastodon.social/@nusenu
> https://twitter.com/nusenu_
> 




> ___
> tor-relays mailing list
> tor-relays@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


-- 
F3vakg18tijjqFR690AknN2mb+hDT7jRDxYnpDPmVjY=


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


[tor-relays] Public ntop service

2017-03-13 Thread David Goulet
Hello Relay Ops!

A few days ago, the bad relay team found a public ntop service running on
port 3000 on the following relays, for which there is no contact info or an
invalid one. If you are one of the operators, please close it or lock it
down; otherwise, as a safety measure, we'll have to reject those relays from
the network.

https://atlas.torproject.org/#details/D9B536F18046990722D365BFECABBC638B41B165
https://atlas.torproject.org/#details/25990FC54D7268C914170A118EE4EE75025451DA
https://atlas.torproject.org/#details/DB866328A5D55EBD34B5BC293FFFDD43AD81C51A
https://atlas.torproject.org/#details/8FD69D4C0E5CFDCD6831DD6E2141A182FC7DE1EA
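
If you want to keep ntop running for yourself, one option (a sketch assuming
an iptables host firewall; adapt to whatever you actually use) is to drop
external traffic to its port while keeping loopback access:

  iptables -I INPUT -p tcp --dport 3000 ! -i lo -j DROP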

These two have been contacted directly via their contact info:

https://atlas.torproject.org/#details/64EB3CBCCEB760B9F647FA1B9A4053339830B02A
https://atlas.torproject.org/#details/BC497E213E43B51F7A893A563EEF17A99271A0E8

And thank you for running a relay!
David

-- 
pTh3sRks6H+86/1amEcmv0zbvLoS0CTrRupKHYXthlQ=


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] The 9001-9051-v0.2.8.9 Gang: 57 relays and counting...

2017-02-28 Thread David Goulet
On 28 Feb (02:09:00), nusenu wrote:
> 
> 
> Donncha O'Cearbhaill:
> > nusenu:
> >> This group is still growing.
> >>
> >> Note that the following table is _not_ sorted by FP.
> >>
> >> The FP links these relays even across ISPs, and given the FP column
> >> pattern it might be obvious what they are after.
> >>
> >> They do not have the hsdir flag yet.
> >>
> >> https://raw.githubusercontent.com/nusenu/tor-network-observations/master/2017-02-24_9001-9051-v0.2.8.9.txt
> >>
> > 
> > Nusenu, thank you for reporting these relays. They are now in the process
> > of being removed from the network.
> 
> Thanks for letting us know.
> 
> It would be nice if you could share:

Hello!

I'll try to help out as much as I can here.

> - if you reached out to the operator (via abuse contacts)

We do that if a valid contact address is present. In this case, we had only
one, I believe, and still no response. The email was sent yesterday,
~afternoon EST.

> - removal reason

The proximity of the fingerprints indicates a clear attempt at insertion
into the hashring for one (or some) onion address(es). With bad relays we are
*always* better safe than sorry, so even without 100% confirmation, we go
ahead.
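
To illustrate why near-identical fingerprints are a red flag (a toy sketch,
not tor's actual code; for v2 onion services the responsible HSDirs are
roughly the relays whose fingerprints immediately follow the descriptor ID on
a sorted ring, see rend-spec for the real rules):

  def responsible_hsdirs(descriptor_id, fingerprints, n=3):
      # Sort all HSDir fingerprints into a ring and pick the n relays
      # that immediately follow the descriptor ID, wrapping around.
      ring = sorted(fingerprints)
      after = [fp for fp in ring if fp > descriptor_id]
      return (after + ring)[:n]

  # Toy example with 4-hex-digit "fingerprints": an attacker who can
  # grind identity keys until they sort just after a target descriptor
  # ID occupies all n spots.
  print(responsible_hsdirs("8000", ["7fff", "8001", "8002", "8003", "f000"]))
  # -> ['8001', '8002', '8003']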

> - what was removed

That, we don't disclose, for the obvious reason that if the attackers can see
what we removed and when, it makes it easier for them to just adapt in time.
Only subscribers to bad-relays@ can know this.

However, those reject/badexit entries at the directory authority level expire
after a time period, and when they do, they become public in this DocTor
script, which monitors any relay whose entry we've expired and keeps it there
for a 6-month period:

https://gitweb.torproject.org/doctor.git/tree/data/tracked_relays.cfg

After those 6 months, you can find commits like this one that remove a bunch
of them:

https://gitweb.torproject.org/doctor.git/commit/data?id=f89e3dca452a0d776eed5d32136f8a474f892cac

> - method (by FP, IP, IP-range, ...)

We always reject both FP and IP. Sometimes, it can be a full network range.
It depends on the attack.

> - how long they will be blacklisted

The standard time period is 90 days, *but* it's still a human that does it,
so it sometimes goes beyond that period. For a *HUGE* network block, though,
we are more careful not to extend the reject time too much.

> - time of removal

We don't disclose that for now. Only subscribers to bad-relays@ can know this.

There have been *MANY* discussions about having this reject list public and
everything in the open. I believe there wasn't full agreement in the end, but
for now the decision went toward keeping it closed.

Thanks!
David

> 
> thanks,
> nusenu
> 




> ___
> tor-relays mailing list
> tor-relays@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


-- 
F7k4dGBiwJmiegoPb+2QbzdAVSSAfb5AitHDxdxsEV8=


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] Shutdown of TorLand1

2017-02-15 Thread David Goulet
On 15 Feb (21:55:48), tor-ad...@torland.is wrote:
> Hi all,
> 
> after 5 years of operation I will shut down TorLand1 
> (https://atlas.torproject.org/#details/E1E922A20AF608728824A620BADC6EFC8CB8C2B8)
>  
> on February 17 2017. 
> 
> During the time of operation it pumped almost 6 petabytes of exit traffic. 
> Compared to the amount of traffic, the number of complaints was quite low: 
> around 1-2 complaints per week with a reduced exit policy. Twice I was 
> contacted by LE via email. 
> 
> When I started the exit relay there were around 20-30 high-capacity relays 
> available. Today Compass shows 180 exits at 95+ MBit/s. TorLand1 was 
> operated and paid for by me, without an organization like torservers, 
> Nos Oignons, etc. behind it. 
> 
> I hope others will step up and run high-capacity exits. The Tor network needs 
> your help. I will continue to run a meek bridge. 

HUGE thanks for your contribution! It is really incalculable how much that
helped the network, and thus the world :).

Again, BIG thanks!
David

> 
> Regards,
> 
> torland
> ___
> tor-relays mailing list
> tor-relays@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays

-- 
gcUatLyGglBJOYXuAioeOQaDTvKomulP8VedNkVNqAo=


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] consensus-health

2017-01-03 Thread David Goulet
On 03 Jan (18:24:07), Felix wrote:
> https://consensus-health.torproject.org/
> (observed 2017-01-03 16:00:00 and 2017-01-03 17:00:00)
> shows
> * dannenberg: Missing entirely from consensus

The ed25519 key of dannenberg expired, so it has to be fixed to resolve the
situation. I believe Andreas has already been notified of this.

> * faravahar: Missing Signature! Valid-after time of auth's displayed
> consensus: 2017-01-03 15:00:00

This will continue as long as faravahar doesn't update to 0.2.9.8+, as it
doesn't understand the consensus method that the other dirauths have voted
on.

> * moria1: Sees only 2620 relays running

This is still a mystery... this happens sometimes when moria1 is run under
valgrind, or maybe it's some network issue. Or maybe some bug in 0.3.0, so
we'll see about it.

Cheers!
David

> 
> Is
> https://lists.torproject.org/pipermail/tor-consensus-health/2017-January/date.html
> ok?
> 
> -- 
> imho - like always
> ___
> tor-relays mailing list
> tor-relays@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] You dont love me anymore :(

2016-10-18 Thread David Goulet
On 18 Oct (20:11:45), Markus Koch wrote:
>  20:08:18 [WARN] Received http status code 404 ("Not found") from
> server '86.59.21.38:80' while fetching
> "/tor/keys/fp-sk/14C131DFC5C6F93646BE72FA1401C02A8DF2E8B4-692049A2E7868BE9933107A39B1CE0C7CBF1BF65".
>  20:06:18 [WARN] Received http status code 404 ("Not found") from
> server '194.109.206.212:80' while fetching
> "/tor/keys/fp-sk/14C131DFC5C6F93646BE72FA1401C02A8DF2E8B4-692049A2E7868BE9933107A39B1CE0C7CBF1BF65".
>  20:05:18 [WARN] http status 400 ("Authdir is rejecting routers in
> this range.") response from dirserver '171.25.193.9:443'. Please
> correct.
>  20:05:18 [WARN] http status 400 ("Authdir is rejecting routers in
> this range.") response from dirserver '154.35.175.225:80'. Please
> correct.
>  20:05:18 [WARN] http status 400 ("Authdir is rejecting routers in
> this range.") response from dirserver '131.188.40.189:80'. Please
> correct.
>  20:05:18 [WARN] http status 400 ("Authdir is rejecting routers in
> this range.") response from dirserver '86.59.21.38:80'. Please
> correct.
> 
> This is my niftypika server. This is animal abuse! Seriously, WTF is
> going wrong?

Hi!

It turns out that our last change to the dirauth configuration to reject
newly discovered malicious relays had the _wrong_ IPs for the relay
fingerprints... and your relay IP was a victim of this :S ...

My apologies! I'm currently working on fixing this; you should be back in
the consensus once the authorities update past the mistake.

Again, sorry!
David

> 
> Markus
> ___
> tor-relays mailing list
> tor-relays@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays