[tor-relays] Is this probing normal for a bridge
Hi,

On the VPS where I run a couple of bridges, I often see the following:

tcp6 0 0 aaa.bbb.cc.dd:443 194.14.247.1:18913 SYN_RECV
tcp6 0 0 aaa.bbb.cc.dd:443 54.93.50.35:18457 SYN_RECV
tcp6 0 0 aaa.bbb.cc.dd:443 194.68.0.1:29917 SYN_RECV
tcp6 0 0 aaa.bbb.cc.dd:443 194.68.0.1:23629 SYN_RECV
tcp6 0 0 aaa.bbb.cc.dd:443 194.14.247.1:8846 SYN_RECV
tcp6 0 0 aaa.bbb.cc.dd:443 194.14.165.1:11833 SYN_RECV
tcp6 0 0 aaa.bbb.cc.dd:443 54.93.50.35:20856 SYN_RECV
tcp6 0 0 aaa.bbb.cc.dd:443 194.14.247.1:38085 SYN_RECV
tcp6 0 0 aaa.bbb.cc.dd:443 194.14.246.1:60957 SYN_RECV
tcp6 0 0 aaa.bbb.cc.dd:443 194.14.165.1:10471 SYN_RECV
tcp6 0 0 aaa.bbb.cc.dd:443 194.68.0.1:60852 SYN_RECV
tcp6 0 0 aaa.bbb.cc.dd:443 194.14.246.1:45321 SYN_RECV
tcp6 0 0 aaa.bbb.cc.dd:443 194.14.246.1:43384 SYN_RECV
tcp6 0 0 aaa.bbb.cc.dd:443 194.14.165.1:31634 SYN_RECV
tcp6 0 0 aaa.bbb.cc.dd:443 194.68.0.1:29895 SYN_RECV
tcp6 0 0 aaa.bbb.cc.dd:443 54.93.50.35:51774 SYN_RECV
tcp6 0 0 aaa.bbb.cc.dd:443 54.93.50.35:27223 SYN_RECV
tcp6 0 0 aaa.bbb.cc.dd:443 194.14.246.1:6 SYN_RECV
tcp6 0 0 aaa.bbb.cc.dd:443 194.14.0.1:31465 SYN_RECV
tcp6 0 0 aaa.bbb.cc.dd:443 54.93.50.35:30646 SYN_RECV
tcp6 0 0 aaa.bbb.cc.dd:443 194.14.246.1:7 SYN_RECV
tcp6 0 0 aaa.bbb.cc.dd:443 194.14.165.1:46609 SYN_RECV
tcp6 0 0 aaa.bbb.cc.dd:443 194.14.247.1:57978 SYN_RECV
tcp6 0 0 aaa.bbb.cc.dd:443 194.68.0.1:59133 SYN_RECV
tcp6 0 0 aaa.bbb.cc.dd:443 54.93.50.35:27371 SYN_RECV
tcp6 0 0 aaa.bbb.cc.dd:443 194.68.0.1:13364 SYN_RECV
tcp6 0 0 aaa.bbb.cc.dd:443 194.14.165.1:50336 SYN_RECV
tcp6 0 0 aaa.bbb.cc.dd:443 194.14.165.1:34511 SYN_RECV
tcp6 0 0 aaa.bbb.cc.dd:80 aa.bb.ccc.dd:59349 ESTABLISHED
tcp6 0 0 aaa.bbb.cc.dd:443 194.14.0.1:20251 SYN_RECV
tcp6 0 0 aaa.bbb.cc.dd:443 194.68.0.1:11573 SYN_RECV
tcp6 0 0 aaa.bbb.cc.dd:443 194.14.0.1:37358 SYN_RECV
tcp6 0 0 aaa.bbb.cc.dd:443 54.93.50.35:44226 SYN_RECV
tcp6 0 0 aaa.bbb.cc.dd:443 54.93.50.35:59194 SYN_RECV
tcp6 0 0 aaa.bbb.cc.dd:443 194.14.165.1:38300 SYN_RECV
tcp6 0 0 aaa.bbb.cc.dd:443 194.68.0.1:18209 SYN_RECV

Is this normal probing by script kiddies, or is it happening specifically because I'm running the bridges?

Cheers.

___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
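One quick way to judge whether a handful of sources account for the SYN_RECV entries above is to group them by remote address. A small sketch (the helper name is mine; it expects `netstat -nt`-style output on stdin):

```shell
# Count half-open (SYN_RECV) connections per remote address from
# `netstat -nt`-style output on stdin. Field $5 is the foreign
# address:port pair and $6 the state; the sub() strips the port.
count_synrecv() {
    awk '$6 == "SYN_RECV" { sub(/:[0-9]+$/, "", $5); print $5 }' \
        | sort | uniq -c | sort -rn
}

# usage: netstat -nt | count_synrecv
```

If a few addresses dominate, it is targeted probing or a SYN flood rather than background internet noise.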
[tor-relays] Doubt related to Bridge relay
Hi,

I'm new to setting up a relay, but I'd like to help the Tor network. I have a few, perhaps silly, questions:

1. Can I use Tor Browser on the same computer where the bridge relay is set up? Will my browsing remain anonymous (at least roughly as anonymous as when using Tor Browser alone)?
2. Will I notice any decrease or increase in browsing speed over Tor? (I will have set a maximum bandwidth for the node beforehand.)
3. Is there anything else I should be aware of?

Thanks!!
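On question 2, the bandwidth cap mentioned is set in the bridge's torrc. A minimal sketch of a bridge configuration with a rate limit (the nickname, contact address, port, and rate values here are placeholder examples, not recommendations):

```
# torrc sketch for a rate-limited obfs4 bridge (values are examples)
BridgeRelay 1
ORPort 443
ServerTransportPlugin obfs4 exec /usr/bin/obfs4proxy
ExtORPort auto
Nickname MyBridge
ContactInfo you@example.com
# cap sustained rate at 1 MB/s, allow short bursts to 2 MB/s
RelayBandwidthRate 1 MBytes
RelayBandwidthBurst 2 MBytes
```

With a cap like this in place, the bridge competes less with your own traffic on the same machine.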
[tor-relays] (FWD) Network is suffering with only two torflows; can you help?
FYI, below is a start at a short-term plan to address the current network weighting issues, now that, as of a few weeks ago, gabelmoo's torflow died and Sebastian hasn't been able to get it going properly again. See also nifty's new ticket: https://bugs.torproject.org/33824

--Roger

- Forwarded message from Roger Dingledine -

Date: Mon, 6 Apr 2020 06:06:26 -0400
From: Roger Dingledine
To: dir-auth
Subject: Network is suffering with only two torflows; can you help?

tl;dr there's a way at the bottom of this mail that you can help even without running torflow yourself.

[...]

So, it looks like we're down to two remaining torflows, and because that's fewer than three, we're suffering much more from the current errors in sbws. Specifically, (a) sbws doesn't vote for some relays, and (b) sbws votes a super low weight for some relays.

There are somewhere between 500 and 1000 relays that are getting left out of sbws's opinions: https://consensus-health.torproject.org/#bwauthstatus and then some thousands more that are getting drastically underweighted, per the "Bandwidth Auth Statistics" graphs at the bottom of https://consensus-health.torproject.org/graphs.html

Relay operators are sad that their relays are being useless, and are thinking about how maybe they should shut them off if nobody's going to use them.

I hear from GeKo that the sbws devs are aiming to fix the sbws issues by end-of-April. That's great, but I wonder if there are some short-term hacks we can do that will help until then.

Option 1: dizum, dannenberg, or tor26 start up a torflow. I'm guessing they would have done this already if they were going to.

Option 2: maatuska, longclaw, or bastet switch from sbws to torflow. Is this a workable option for any of you?

Option 3: Any of you besides Faravahar start hourly importing a copy of moria1's weights: https://freehaven.net/~arma/bwscan.V3BandwidthsFile and you vote them as though you were running your own torflow, but you don't need your own torflow.
Option 3 is pretty wild at first glance -- it violates some trust and independence assumptions. But for those who aren't running a bwauth, you're already not being listened to about bandwidth weights. So I think this would be a fine hack in the short term to rescue the network, while we wait for sbws. Any takers? :)

Here's the simple two-step process for testing the idea:

(1) Set a "45 0-23 * * *" cron line to wget the above freehaven url.
(2) In your torrc, set "V3BandwidthsFile /path/to/that/bwfile"

If you like the idea, and it's just the wget that's too sketchy for you, I can set up an ssh account that just dumps the file at you when you present the right ssh key.

Or for super overkill, you'll notice that moria1's (signed) vote has these two lines in it:

bandwidth-file-headers timestamp=1586162183
bandwidth-file-digest sha256=24+52C1ItwnAyBTtBteyy6tjxk8JnUpQ/XZx3YStEXE

so with some stem or shell programming we should be able to verify the file itself, rather than just verifying the place it came from. We could be getting away from "short-term hack" there though. :)

--Roger

- End forwarded message -
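The "shell programming" verification Roger mentions could look like the sketch below. It assumes the vote's bandwidth-file-digest value is the unpadded base64 encoding of the file's raw SHA-256 (the hex that sha256sum prints won't match directly), and that openssl and base64 are available; the helper name and paths are mine:

```shell
# Sketch: check a fetched bwscan.V3BandwidthsFile against the
# bandwidth-file-digest line from moria1's vote.  The vote's digest
# appears to be unpadded base64 of the raw SHA-256, so compute that
# form directly rather than comparing against sha256sum's hex output.
bwfile_digest() {
    openssl dgst -sha256 -binary "$1" | base64 | tr -d '=\n'
}

# usage, after the cron'd wget:
#   [ "$(bwfile_digest /path/to/bwfile)" = "24+52C1ItwnAyBTtBteyy6tjxk8JnUpQ/XZx3YStEXE" ] \
#       || echo "digest mismatch, refusing to use file"
```

Fetching the digest line out of the signed vote itself (e.g. with stem) is the part that would take this beyond a short-term hack.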
Re: [tor-relays] Relay in a stop/start loop
Hi Roger,

Some quick facts:
- tor is enabled to auto-start when the system boots
- I do have nyx on that box, but I only run it every now and then (i.e. it does not run permanently)
- there is nothing other than tor on the box
- disk space is not an issue: 45G free on /root, 700M free on /boot and 53G free on /home

The other things to investigate are way beyond my Linux competence. My best bet would be to perform a fresh install (OS + tor).

Thanks!

On Mon, Apr 6, 2020 at 11:11 AM Roger Dingledine wrote:
> On Mon, Apr 06, 2020 at 11:01:09AM +0200, Totor be wrote:
> > I have recently upgraded my relays (totorbe*) to CentOS 8 + tor 0.4.2.7 and they all seemed to run properly.
> > This morning, I noticed a weird behavior on one of them (no idea since when this has been going on).
> >
> > When starting tor, it gets to the point of IP identification and then starts shutting down ("Interrupt: we have stopped accepting new connections...").
> > It then attempts to restart immediately and loops: start --> interrupt --> start, etc.
> >
> > Any idea where to start investigating?
>
> Hm!
>
> It sounds like whatever package you have auto-starts Tor if it notices that it's not running. That's not unreasonable, but it ought to have some error count so that it only tries to restart a certain number of times per timeframe. This is something that either the package's init script should do, or that systemd's settings should handle.
>
> The bigger mystery for you is: why does Tor keep deciding to exit? It looks like something is sending it the equivalent of a ^C. What is doing that? This is the main thing to investigate.
>
> Maybe you are running some external tool like nyx, or your own script based on stem, which is killing it somehow?
>
> Maybe systemd is somehow deciding that after some timeout it needs to die?
>
> I find it interesting that you get these two lines right after each other each time:
>
> Apr 06 10:35:13.000 [notice] Signaled readiness to systemd
> Apr 06 10:35:13.000 [notice] Interrupt
>
> It's as though systemd is misinterpreting the signal from Tor, and deciding to kill it rather than be happy.
>
> So: I suspect "bug in CentOS package" as a good place to investigate. Maybe there is already a ticket open in your packaging system?
>
> Be sure also to check other more general issues like "do you have enough disk space?"
>
> --Roger
Re: [tor-relays] Relay in a stop/start loop
On Mon, Apr 06, 2020 at 11:01:09AM +0200, Totor be wrote:
> I have recently upgraded my relays (totorbe*) to CentOS 8 + tor 0.4.2.7 and they all seemed to run properly.
> This morning, I noticed a weird behavior on one of them (no idea since when this has been going on).
>
> When starting tor, it gets to the point of IP identification and then starts shutting down ("Interrupt: we have stopped accepting new connections...").
> It then attempts to restart immediately and loops: start --> interrupt --> start, etc.
>
> Any idea where to start investigating?

Hm!

It sounds like whatever package you have auto-starts Tor if it notices that it's not running. That's not unreasonable, but it ought to have some error count so that it only tries to restart a certain number of times per timeframe. This is something that either the package's init script should do, or that systemd's settings should handle.

The bigger mystery for you is: why does Tor keep deciding to exit? It looks like something is sending it the equivalent of a ^C. What is doing that? This is the main thing to investigate.

Maybe you are running some external tool like nyx, or your own script based on stem, which is killing it somehow?

Maybe systemd is somehow deciding that after some timeout it needs to die?

I find it interesting that you get these two lines right after each other each time:

Apr 06 10:35:13.000 [notice] Signaled readiness to systemd
Apr 06 10:35:13.000 [notice] Interrupt

It's as though systemd is misinterpreting the signal from Tor, and deciding to kill it rather than be happy.

So: I suspect "bug in CentOS package" as a good place to investigate. Maybe there is already a ticket open in your packaging system?

Be sure also to check other more general issues like "do you have enough disk space?"

--Roger
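The restart-rate limit that Roger says systemd's settings should handle can be expressed as a unit drop-in. A sketch, assuming the unit is named tor.service on this CentOS 8 box (StartLimitIntervalSec/StartLimitBurst are standard systemd [Unit] options on systemd >= 230; the path and values are examples):

```
# /etc/systemd/system/tor.service.d/restart-limit.conf (example path)
[Unit]
# give up after 5 automatic restarts within 10 minutes,
# instead of looping start -> interrupt -> start forever
StartLimitIntervalSec=600
StartLimitBurst=5
```

After creating the drop-in, run `systemctl daemon-reload`. This only stops the loop from spinning; the underlying question of what is sending Tor the interrupt still needs investigating.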