https://trac.torproject.org/projects/tor/ticket/25461
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Respectfully, I disagree.
https://lists.torproject.org/pipermail/tor-relays/2015-October/007904.html
Thank you for the thought however.
Found other ones: December 24, where egress was much higher than
ingress (but the crypto workers were pegged, not the main thread).
December 28 & 29, an attack like today's. February 1 & 2, like today
with ingress higher than egress.
In today's and the latter two above the main event thread was pegged
On Sun, Mar 4, 2018 at 7:06 PM, Toralf Förster <toralf.foers...@gmx.de> wrote:
> On 03/04/2018 07:41 PM, Dhalgren Tor wrote:
>> the main event-worker thread
>> going from a normal load level of about 30%/core to 100%/core and
>> staying there for about 30 seconds;
Upgraded exit to 0.3.3.3 and now seeing a curious CPU-saturation
attack. Whatever the cause, the result is the main event-worker thread
going from a normal load level of about 30%/core to 100%/core and
staying there for about 30 seconds; then CPU consumption declines back
to 30%. Gradual change on
>Well, it's still going on, and is pretty much ruining Libero :( . Running
>CentOS 6, here.
>
>When it's happening it can look like this:
>
># netstat -n | grep -c SYN
>17696
I run a fast exit and can offer some advice:
1) mitigate bug #18580 (also #21394); it is a DNS denial-of-service and
could
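The mitigation details are cut off above. As a sketch of the kind of
'unbound' tuning that helps when a flood of timing-out queries exhausts
the resolver (the option names are real unbound settings; the values
here are illustrative guesses, not the original advice):

```
server:
    num-queries-per-thread: 4096   # more slots before new queries are dropped
    outgoing-range: 8192           # more sockets for in-flight upstream queries
    jostle-timeout: 200            # evict slow queries sooner under load (msec)
    infra-cache-numhosts: 100000   # track more unresponsive upstream servers
```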
FWIW I run a 100TB (150mbps) exit and am convinced that this size
provides better quality exit connectivity than the highest ranked
monster bandwidth relays. The biggest relays attract the greatest
blacklist treatment due to the volume of abuse emanating from them.
As a user of Tor I frequently
https://www.openssl.org/news/secadv/20160503.txt
In general I understand that padding-oracle attacks are principally a
hazard for browser communications. Am assuming that updating OpenSSL
for this fix is not an urgent priority for a Tor relay.
If anyone knows different, please comment.
I believe I now understand the cause of exit relay failure when
Unbound is the resolver and GoDaddy null-routes the exit.
Both to prevent this DOS from taking out your relay if Unbound is
running and to maximize DNS performance:
with a local instance of Unbound running /etc/resolv.conf should
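The sentence is cut off, but the usual arrangement for a local resolver
is to point /etc/resolv.conf exclusively at it (a sketch, assuming
Unbound is listening on the default loopback address):

```
# /etc/resolv.conf -- send all lookups to the local Unbound instance;
# no fallback entries, so tor never leaks queries to the ISP resolver
nameserver 127.0.0.1
```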
FYI Tor-Relays
GoDaddy AS26496 is null-routing selected Tor Exits, presumably in
response to abuse originating from them. Know of two thus far
FE67A1BA4EF1D13A617AEFB416CB9E44331B223A 2016/01/26 ashtrayhat3
A0F06C2FADF88D3A39AA3072B406F09D7095AC9E 2016/03/16 Dhalgren
though probably more
Possibly this incident is the result of some malware attempting to use
some sort of domain "fast flux" or DGA algorithm. Seems improbable
anyone would be dumb enough to try to DDOS GoDaddy DNS using Tor.
As with the earlier incident, problem came back within hours of
restarting the daemons.
Was able to figure out what's happening. Operators running 'unbound' take note!
Problem appears to be the result of someone attempting to DDOS a DNS
service, in this case GoDaddy.
Ran
lsof -Pn -p
a few
Problem came back again while I was working on the exit.
unbound-control purge_requestlist
does not help but it appears that
unbound-control purge_infra
unbound-control purge_requestlist
will clear up the problem without requiring a daemon restart--at least
temporarily.
Also tried
A Tor user enumerating GoDaddy domains triggers the failure.
On Sat, Mar 19, 2016 at 5:53 AM, Michael McConville <mm...@mykolab.com> wrote:
> Dhalgren Tor wrote:
>> Bug #18580: exit relay fails with 'unbound' DNS resolver when lots of
>> requests time-out
>>
>> htt
Hit a repeat of an earlier incident:
https://lists.torproject.org/pipermail/tor-relays/2016-January/008621.html
message from tor daemon is
Resolved [scrubbed] which was already resolved; ignoring
About 5400 of these messages over 37 hours, during which the relay
dropped down to 30% of usual
Gave up and switched to 'named'; now it's working fine.
Entered BUG: https://trac.torproject.org/projects/tor/ticket/18580
Be advised, anyone running a fast exit with 'unbound' should switch to
using 'named'.
>. . .I have to understand how my ISP reacts to this kind of things.
>For the moment I will keep a low profile and I will block the
>mentioned IP range for a month.
Webiron's system sends notifications to both the abusix.org contact
for the IP and to ab...@base-domain.tld for the reverse-DNS
Is the end of the month. Maybe they ran out of bandwidth and will be
back 11/1. LeaseWeb over-limit rates are terrifying.
BTW the exit policy includes 443.
>snake oil service like webiron
A most excellent characterization!
As a sales maneuver WebIron has been grandstanding
for months, saying that Tor operators are "unwilling
to cleanup" when they know full well that tor operators
cannot / should not filter traffic due to minor brute-
force login
Any exit operators with relays at LeaseWeb who are not enjoying the
new automated abuse-notice system requiring all complaints be acted
upon, send a message directly to the above address. Have a solution.
Routers running at or near their BandwidthRate setting offer terrible
latency and induce connection time-outs, and refuse all DIR-port
requests with "HTTP/1.0 503 Directory busy, try again later".
A 'tc' ingress filter is dramatically better for rate-limiting.
For case and 'tc' example see
final post
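The referenced post isn't reproduced here, but ingress policing with
'tc' generally looks like the following sketch (the device name, rate,
and burst size are illustrative assumptions, not the original filter):

```shell
DEV=eth0
RATE=150mbit
BURST=256k

# attach the special ingress qdisc to the interface
tc qdisc add dev "$DEV" handle ffff: ingress

# police all IPv4 traffic down to RATE, dropping the excess;
# drops happen before the kernel queues packets, so latency stays low
tc filter add dev "$DEV" parent ffff: protocol ip u32 \
    match u32 0 0 \
    police rate "$RATE" burst "$BURST" drop flowid :1
```

Unlike a BandwidthRate cap inside tor, the drops push TCP senders to
back off before queues build, which is why latency improves.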
Was going to wait a few days before reporting back, but early results
are decisive.
The overload situation continued to worsen over a two-day period, with
consensus weight continuing to rise despite the relay often running in
a state of extreme overload and performing its exit function quite
Spent a few minutes activating the DNSSEC trust-anchor for 'unbound'.
Ran 'dig' on a few signed domains and observed that queries that took
under 50 milliseconds without validation took 2000 milliseconds with it.
My attitude toward DNSSEC has deteriorated steadily over time and this
finishes it off for me.
On 10/2/15, jensm1 wrote:
> You're saying that you're on a 1Gbit/s link, but you are only allowed to
> use 100Mbit/s. Is this averaged over some timescale?
More than 100 Mbit/s, which is 60 TB/month total for both directions.
Is 100 TB/month, a common usage tier. Has a FUP (fair usage
On 10/1/15, Yawning Angel <yawn...@schwanenlied.me> wrote:
> On Thu, 1 Oct 2015 19:05:38 +
> Dhalgren Tor <dhalgren@gmail.com> wrote:
>> 3) observing that statistics show elevated cell-queuing delays when
>> the relay has been
On Thu, Oct 1, 2015 at 1:10 PM, Tim Wilson-Brown - teor
wrote:
>
> Can you help me understand what you think the bug is?
Relay is assigned a consensus weight that is too high w/r/t rate
limit. Excess weight appears to be due to high quality of TCP/IP
connectivity and low
>Don't cap the speed if you have bandwidth limits. The better way to do it is
>using AccountingMax in torrc. Just let it run at its full speed less of the
>time and Tor will enter in hibernation once it has no bandwidth left.
Not possible. Will violate the FUP (fair use policy) on the account.
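For contrast, a hypothetical torrc sketch of both approaches (the
values are illustrative, and whether both directions count against the
cap depends on the provider):

```
## (a) steady rate limit -- stays under a FUP by never bursting:
RelayBandwidthRate  18 MBytes
RelayBandwidthBurst 18 MBytes

## (b) the quoted suggestion -- full speed, hibernate when the cap is
## spent (ruled out above because sustained full-speed running
## violates the FUP):
#AccountingStart month 1 00:00
#AccountingMax 50000 GBytes
```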
Have a new exit running in an excellent network on a very fast server
with AES-NI. Server plan is limited to 100TB so have set a limit
slightly above this (1800 bytes/sec) thinking that bandwidth would
run 80-90% of the maximum and average to just below the plan limit.
After three days the
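The sizing above can be sanity-checked with a quick calculation. This
is a sketch (the helper name, the 30-day month, and the assumption
that both directions count against the cap are mine):

```python
TB = 10**12  # providers bill in decimal terabytes

def bytes_per_sec(cap_tb: float, days: float = 30.0,
                  both_directions: bool = True) -> float:
    """Sustained per-direction rate that exactly consumes the monthly cap."""
    seconds = days * 86400
    rate = cap_tb * TB / seconds
    return rate / 2 if both_directions else rate

rate = bytes_per_sec(100)   # ~19.3 MB/s per direction
mbit = rate * 8 / 1e6       # ~154 Mbit/s, consistent with the
                            # "100TB (150mbps)" figure quoted earlier
```

Setting the relay's limit slightly above this figure, expecting
80-90% average utilization, is the reasoning described above.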
On Thu, Oct 1, 2015 at 12:55 PM, Tim Wilson-Brown - teor
<teor2...@gmail.com> wrote:
>
> On 1 Oct 2015, at 14:48, Dhalgren Tor <dhalgren@gmail.com> wrote:
>
> A good number appears to be around 65000 to 7, but 98000 was just
> assigned.
>
>
> Since I
On Thu, Oct 1, 2015 at 12:59 PM, s7r wrote:
> Ouch, that's wrong.
I have it correct. You are mistaken.
See https://www.torproject.org/docs/tor-manual.html.en
and read it closely.
On Thu, Oct 1, 2015 at 1:33 PM, Tim Wilson-Brown - teor
<teor2...@gmail.com> wrote:
>
> On 1 Oct 2015, at 15:22, Dhalgren Tor <dhalgren@gmail.com> wrote:
>
> If the relay stays overloaded I'll try a packet-dropping IPTABLES rule
> to "dirty-up" t
>Maybe use this:
>
>MaxAdvertisedBandwidth
This setting causes the relay to limit the self-measured value
published in the descriptor. Has no effect on the measurement system.
Would be helpful if it did.
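For reference, the option as it would appear in torrc (the value is
illustrative):

```
# Caps only the bandwidth the relay claims in its own descriptor;
# per the note above, the bwauths' measured value is unaffected.
MaxAdvertisedBandwidth 12 MBytes
```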
This relay appears to have the same problem:
sofia
https://atlas.torproject.org/#details/7BB160A8F54BD74F3DA5F2CE701E8772B841859D
On Thu, Oct 1, 2015 at 12:33 PM, Dhalgren Tor <dhalgren@gmail.com> wrote:
> Have a new exit running in an excellent network on a very fast server
>
On Thu, Oct 1, 2015 at 5:12 PM, Moritz Bartl <mor...@torservers.net> wrote:
> On 10/01/2015 06:28 PM, Dhalgren Tor wrote:
>> This relay appears to have the same problem:
>> sofia
>> https://atlas.torproject.org/#details/7BB160A8F54BD74F3DA5F2CE701E8772B841859D
>
On Thu, Oct 1, 2015 at 7:45 PM, Steve Snyder wrote:
>
> Another consumer of bandwidth is name resolution, if this is an exit node.
> And the traffic incurred by the resolutions is not reflected in the relay
> statistics.
>
> An exit node that allocates 100% of its
On Thu, Oct 1, 2015 at 10:17 PM, Yawning Angel wrote:
>
> Using IP tables to drop packets also is going to add queuing delays
> since cwnd will get decreased in response to the loss (CUBIC uses beta
> of 0.2 IIRC).
Unfortunately true. Empirical arrival to a better
On Thu, Oct 1, 2015 at 11:41 PM, Tim Wilson-Brown - teor
wrote:
>
> We could modify the *Bandwidth* options to take TCP overhead into account.
Not practical. TCP/IP overhead varies greatly. I have a guard that
averages 5% while the exit does 10% when saturated and more
when
Is it important to configure the DNSSEC trust-anchor for an instance
of 'unbound' running on an exit node? I put a lot of work into
setting up a new exit and want to take a break, but just noticed this
item. 'unbound' was built from source rather than installed from a
distribution, so this step
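For what it's worth, wiring up the trust anchor in 'unbound' is a
small config change. A sketch (paths vary by system; the key file can
be bootstrapped with the bundled unbound-anchor tool):

```
# unbound.conf -- enable DNSSEC validation with an auto-updating
# root trust anchor (RFC 5011); create the key file first with:
#   unbound-anchor -a /var/lib/unbound/root.key
server:
    auto-trust-anchor-file: "/var/lib/unbound/root.key"
```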