On Mon, Dec 12, 2022 at 10:18:53PM +0100, Anders Trier Olesen wrote:
> > It is surprising, isn't it? It certainly feels like calling connect
> > without first binding to an address should have the same effect as
> > manually binding to an address and then calling connect, especially if
> > the address you bind to is the same as the kernel would have chosen
I am happy to report that we have upgraded all our relays to Tor
0.4.8.0-alpha-dev and for the past 8 days since the upgrade the bind conflict
has ceased. No firewall rules are being used. No sysctl settings helped.
--
Christopher Sheats (yawnbox)
Executive Director
Emerald Onion
Signal: +1
On Freitag, 2. Dezember 2022 16:30:48 CET Chris wrote:
> As I'm sure you've already gathered, your system is maxing out trying to
> deal with all the connection requests. When inet_csk_get_port is called
> and the port is found to be occupied then inet_csk_bind_conflict is
> called to resolve the conflict.
> It is surprising, isn't it? It certainly feels like calling connect
> without first binding to an address should have the same effect as
> manually binding to an address and then calling connect, especially if
> the address you bind to is the same as the kernel would have chosen
>
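The claim above is easy to check directly. Here is a minimal sketch (my own, not from the thread) showing that connect() without a prior bind() leaves source address and port selection to the kernel, which picks them at connect time:

```python
import socket

# A throwaway local listener to connect to.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
dst = listener.getsockname()

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(dst)  # no bind() first: the kernel chooses src addr + port here
src_addr, src_port = s.getsockname()
print(src_addr, src_port)  # 127.0.0.1 and a kernel-chosen ephemeral port

s.close()
listener.close()
```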
On Mon, Dec 12, 2022 at 12:39:50AM +0100, Anders Trier Olesen wrote:
> I wrote some tests[1] which showed behaviour I did not expect.
> IP_BIND_ADDRESS_NO_PORT seems to work as it should, but calling bind
> without it enabled turns out to be even worse than I thought.
> This is what I think is
On Saturday, December 10, 2022, 7:23:28 AM PST, David Fifield wrote:
On Sat, Dec 10, 2022 at 09:59:14AM +0100, Anders Trier Olesen wrote:
>> IP_BIND_ADDRESS_NO_PORT did not fix your somewhat similar problem in your
>> Haproxy setup, because all the connections are to the same dst tuple <dstip, dstport>
I wrote some tests[1] which showed behaviour I did not expect.
IP_BIND_ADDRESS_NO_PORT seems to work as it should, but calling bind
without it enabled turns out to be even worse than I thought.
This is what I think is happening: A successful bind() on a socket without
IP_BIND_ADDRESS_NO_PORT
> I urge you to run an experiment yourself, if these observations are not
> what you expect. I was surprised, as well.
Very interesting. I'll run some tests.
We do agree that IP_BIND_ADDRESS_NO_PORT should fix the OP's problem, right?
With it enabled, there's no path to inet_csk_bind_conflict which is
Also see this patch, which introduces net.ipv4.ip_autobind_reuse:
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=4b01a9674231a97553a55456d883f584e948a78d
Enabling net.ipv4.ip_autobind_reuse allows the kernel to bind SO_REUSEADDR-enabled
sockets (which I think they are
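For anyone who wants to poke at this themselves, here is a small sketch of what IP_BIND_ADDRESS_NO_PORT does (my own code, not from the thread; it assumes Linux, and falls back to the raw Linux option value 24 on Python versions that don't export the constant):

```python
import socket

# Newer Pythons expose the constant; 24 is the value on Linux (kernel >= 4.2).
IP_BIND_ADDRESS_NO_PORT = getattr(socket, "IP_BIND_ADDRESS_NO_PORT", 24)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_IP, IP_BIND_ADDRESS_NO_PORT, 1)
s.bind(("127.0.0.1", 0))          # address is bound, port selection is deferred
port_before = s.getsockname()[1]  # 0: no local port consumed yet

# The port is only chosen at connect() time, against the full 4-tuple.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
s.connect(listener.getsockname())
port_after = s.getsockname()[1]   # now a real ephemeral port
print(port_before, port_after)

s.close()
listener.close()
```

Note how getsockname() reports port 0 right after bind(): the source address is pinned, but no port has been claimed from the ephemeral range until the destination is known.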
On Sat, Dec 10, 2022 at 09:59:14AM +0100, Anders Trier Olesen wrote:
> IP_BIND_ADDRESS_NO_PORT did not fix your somewhat similar problem in your
> Haproxy setup, because all the connections are to the same dst tuple <dstip, dstport>
> (i.e. 127.0.0.1:ExtORPort).
> The connect() system call is looking for a
> The connect() system call is looking for a
Hi David
IP_BIND_ADDRESS_NO_PORT did not fix your somewhat similar problem in your
Haproxy setup, because all the connections are to the same dst tuple (i.e. 127.0.0.1:ExtORPort).
The connect() system call is looking for a unique 5-tuple. In the Haproxy setup, the only free variable is
srcport,
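To illustrate the contrast, here is a sketch (mine, assuming Linux defaults) of how a plain bind(), without IP_BIND_ADDRESS_NO_PORT, claims a port up front:

```python
import socket

# A plain bind() must pick a local port immediately, before the destination
# is known, so every such bind consumes one port from the ephemeral range.
a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
a.bind(("127.0.0.1", 0))
b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
b.bind(("127.0.0.1", 0))

port_a = a.getsockname()[1]
port_b = b.getsockname()[1]
print(port_a, port_b)  # two distinct nonzero ports, held with no connection yet

a.close()
b.close()
```

Two plain binds can never share a local port here, even though two connects to different destinations could have, which is exactly why bind-per-outgoing-connection burns through the port range so much faster.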
Hi again
I took another look at this problem, and now I'm even more convinced that
what we really need is IP_BIND_ADDRESS_NO_PORT. Here's why.
If torrc OutboundBindAddress is configured, tor calls bind(2) on every
outgoing connection:
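For reference, the relevant torrc line looks like the following (the address is a placeholder, not taken from the thread):

```
OutboundBindAddress 203.0.113.5
```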
On Fri, Dec 09, 2022 at 09:47:07AM +, Alexander Færøy wrote:
> On 2022/12/01 20:35, Christopher Sheats wrote:
> > Does anyone have experience troubleshooting and/or fixing this problem?
>
> Like I wrote in [1], I think it would be interesting to hear if the
> patch from pseudonymisaTor in
On 2022/12/01 20:35, Christopher Sheats wrote:
> Does anyone have experience troubleshooting and/or fixing this problem?
Like I wrote in [1], I think it would be interesting to hear if the
patch from pseudonymisaTor in ticket #26646[2] would be of any help in
the given situation. The patch allows
Excellent. Thank you.
Yes, a blanket iptables rule is not going to work well in this setup as
it pools all connections to all IP addresses into one. So if we accept 4
connections to port 443, a blanket iptables rule accepts 4 connections
to all IP addresses combined and drops everything else and
> May I ask what your setup is?
> Are you running your relays on separate VMs on the main system or are
> you using a different setup like having all IP addresses on the same OS
> and using OutboundBindAddress, routing, etc. to separate them? If I
> know more, I might be able to make a script
server1:~$ ss -s
Total: 454644
TCP:   465840 (estab 368011, closed 36634, orphaned 7619, timewait 11466)

Transport Total     IP        IPv6
RAW       0         0         0
UDP       48        48        0
TCP       429206    413815    15391
INET      429254    413863    15391
FRAG      0         0         0
Sorry to hear it wasn't much help. Even though the additions I suggested
didn't help they certainly couldn't cause any harm and can't be
responsible for the drops in traffic.
As for the torutils scripts, I'm sure toralf would be able to better
investigate that but I have a feeling you have a
Hello,
Thank you for this information. After 24 hours of testing, these configurations
brought Tor to a halt.
At first I started with the sysctl modifications. After a few hours with just
that, there was no improvement in the ~75% inet_csk_bind_conflict utilization. I
then installed Torutils for
Hi Christopher
How many open connections do you have? (`ss -s`)
Do you happen to use OutboundBindAddress in your torrc?
What I think we need is for the Tor developers to include this PR in a
release: https://gitlab.torproject.org/tpo/core/tor/-/merge_requests/579
Once that has happened, I think
Hi,
As I'm sure you've already gathered, your system is maxing out trying to
deal with all the connection requests. When inet_csk_get_port is called
and the port is found to be occupied then inet_csk_bind_conflict is
called to resolve the conflict. So in normal circumstances you shouldn't
see it
Hello tor-relays,
We are using Ubuntu server currently for our exit relays. Occasionally, exit
throughput will drop from ~4Gbps down to ~200Mbps and the only observable data
point that we have is a significant increase in inet_csk_bind_conflict, as seen
via 'perf top', where it will hit 85%