Re: [tor-relays] How to reduce tor CPU load on a single bridge?
Hi Kristian,

Thanks for the screenshot. Nice machine! Not everyone is as fortunate as you when it comes to resources for their Tor deployments. While a CPU affinity option isn't high on the priority list, as you point out, many operating systems do a decent job of load management and there are third-party options available for CPU affinity; still, it might be helpful for some to have an application-layer option to tune their implementations natively.

As an aside... Presently, are you using a single public address with many ports, or many public addresses with a single port, for your Tor deployments? Have you ever considered putting all those Tor instances behind a single public address:port (fingerprint) to create one super bridge/relay? I'm just wondering whether it makes sense to conserve and rotate through public address space to stay ahead of the blacklisting curve.

Also... Do you mind disclosing what all your screen instances are for? Are you running your Tor instances manually and not in daemon mode? "Inquiring minds want to know."

As always, it is great to engage in dialogue with you.

Respectfully,
Gary

On Tuesday, December 28, 2021, 1:39:31 PM MST, ab...@lokodlare.com wrote:

Hi Gary,

why would that be needed? Linux has a pretty good thread scheduler imo and will shuffle loads around as needed. Even Windows' thread scheduler is quite decent these days, and tools like "Process Lasso" exist if additional fine-tuning is needed.

Attached is one of my servers running multiple tor instances on a 12/24C platform. The load is spread quite evenly across all cores.

Best Regards,
Kristian

Dec 27, 2021, 22:08 by tor-relays@lists.torproject.org:

BTW... I just fact-checked my post-script, and the CPU affinity configuration I was thinking of is for Nginx (not Tor). Tor should consider adding a CPU affinity configuration option.

What happens if you configure additional Tor instances on the same machine (my Tor instances are on different machines) and start them up?
Do they bind to a different or the same CPU core?

Respectfully,
Gary

On Monday, December 27, 2021, 2:44:59 PM MST, Gary C. New via tor-relays wrote:

David/Roger:

Search the tor-relays mail archive for my previous responses on load balancing Tor relays, which I've been doing successfully for the past 6 months with Nginx (it's possible to do with HAProxy as well). I haven't had time to implement it with a Tor bridge, but I assume it will be very similar. Keep in mind it's critical to configure each Tor instance to use the same DirectoryAuthority and to disable the upstream timeouts on Nginx/HAProxy. Happy Tor load balancing!

Respectfully,
Gary

P.S. I believe there's a torrc config option to specify which CPU core a given Tor instance should use, too.

On Monday, December 27, 2021, 2:00:50 PM MST, Roger Dingledine wrote:

On Mon, Dec 27, 2021 at 12:05:26PM -0700, David Fifield wrote:
> I have the impression that tor cannot use more than one CPU core -- is that
> correct? If so, what can be done to permit a bridge to scale beyond
> 1×100% CPU? We can fairly easily scale the Snowflake-specific components
> around the tor process, but ultimately, a tor client process expects to
> connect to a bridge having a certain fingerprint, and that is the part I
> don't know how to easily scale.
>
> * Surely it's not possible to run multiple instances of tor with the
> same fingerprint? Or is it? Does the answer change if all instances
> are on the same IP address? If the OR ports are never used?

Good timing -- Cecylia pointed out the higher load on Flakey a few days ago, and I've been meaning to post a suggestion somewhere.

You actually *can* run more than one bridge with the same fingerprint. Just set it up in two places, with the same identity key, and then whichever one the client connects to, the client will be satisfied that it's reaching the right bridge.
There are two catches to the idea:

(A) Even though the bridges will have the same identity key, they won't have the same circuit-level onion key, so it will be smart to "pin" each client to a single bridge instance -- so when they fetch the bridge descriptor, which specifies the onion key, they will continue to use that bridge instance with that onion key. Snowflake in particular might also want to pin clients to specific bridges because of the KCP state. (Another option, instead of pinning clients to specific instances, would be to try to share state among all the bridges on the backend, e.g. so they use the same onion key, can resume the same KCP sessions, etc. This option seems hard.)

(B) It's been a long time since anybody tried this, so there might be surprises. :) But it *should* work, so if there are surprises, we should try to fix them.

This overall idea is similar to the "router twins" idea from the distant distant past:
https://lists.torproject.org/pipermail/tor-dev/2002-July/001122.html
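As a concrete sketch of the "same identity key in two places" setup Roger describes: tor keeps its long-term identity keys under DataDirectory/keys, so cloning a bridge amounts to copying two files. The paths, hostnames, and ownership below are illustrative assumptions (Debian-style layout), not from the thread:

```
# On bridge A, the long-term identity keys live under DataDirectory/keys:
#   /var/lib/tor/keys/secret_id_key                  (RSA identity key)
#   /var/lib/tor/keys/ed25519_master_id_secret_key   (ed25519 identity key)
#
# With tor stopped on bridge B, copy those files into B's keys directory,
# restore ownership, and start tor. Both instances then publish the same
# fingerprint, and a client reaching either will pass the identity check.
scp /var/lib/tor/keys/secret_id_key \
    /var/lib/tor/keys/ed25519_master_id_secret_key \
    bridge-b:/var/lib/tor/keys/
ssh bridge-b 'chown -R debian-tor:debian-tor /var/lib/tor/keys && systemctl start tor'
```

Note that catch (A) still applies: each instance generates its own onion key, so clients must keep using whichever instance served them their bridge descriptor.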
Re: [tor-relays] Relay operators meetup @ rC3
On Mon, Dec 27, 2021 at 04:29:10AM +0100, Stefan Leibfarth wrote:
> the annual Tor relay operators meetup will be tomorrow (28th) at 2200 UTC+1.
> No rC3 ticket required

We just finished the meet-up. Thanks Leibi for organizing it, and thanks everybody for participating. I've attached the notes that we used for organizing the topics / discussion. Next meet-up is planned to be around FOSDEM in early February. Stay tuned!

--Roger

Agenda
==

Meetup url: https://jitsi.rc3.world/tor-relay-meetup

network health team ramp-up in 2021 / 2022
- https://gitlab.torproject.org/tpo/network-health/team
- bringing metrics team into network-health
- bumping out end-of-life (EOL) relays
- Community building:
  - relay operator expectations https://gitlab.torproject.org/tpo/community/relays/-/issues/18
  - Tor relay operator survey https://survey.torproject.org/index.php/459487

relay operator non-profits https://torservers.net/partners.html
- our periodic online meetups (original plans of in-person meetups!)
- What should be the role of a torservers.net central coordination / advocacy org?
- Torservers.net still gets press inquiries, even though it has been dormant for a while.
- notes from November 2021 meetup: https://lists.torproject.org/pipermail/tor-project/2021-November/003230.html

Hear from relay operators here (especially exit relay ops)
- I'd like to hear successful experiences of running exit relays from people here. One of my friends hosts one in their home (not a good idea). I think evolution VPS sounds like a good deal for an exit relay, but I'm not sure.
- What are the potential legal consequences of running an exit relay, if someone uses it to post illegal material?
- In DE, the Providerprivileg should insulate you somewhat against legal consequences (Disclaimer: IANAL), but depending on your local police, you might come in contact with them. (Hasn't happened to us so far, though.)
- In the EU, it's legal: https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:32000L0031:En:HTML but you might still have the cops "knocking" at your door at 6am and confiscating all your hardware until they sort things out.
- In .fr, as Nos Oignons, we ~regularly get letters from the cops, and some convocations to the police station.

Trust:
- recent big attackers
- trust in relays, how to quantify, how to label
- what about nusenu's proposal ( https://nusenu.github.io/ContactInfo-Information-Sharing-Specification/ )?
- What can we learn from (/ how do we feel about) Apple Private Relay & trusting companies?
  - https://www.apple.com/privacy/docs/iCloud_Private_Relay_Overview_Dec2021.PDF
  - https://blog.apnic.net/2021/11/26/impact-of-private-relay-netops-isps/

Relay operator support venues / options:
- New Tor forum (https://forum.torproject.net/), useful for individual relay operator support (alternative to private helpdesk), not intending to replace the tor-relays@ list
- tor-relays@ list
- #tor-relays irc channel
- relaunch of TorBSD v2 coming

network performance:
- relay overload indicators
- sbws progress
- new congestion control designs should help with geographic diversity

network blocking:
- The Russia story
- 'Run a bridge' campaign
  - (un?)fortunately, most people here are running relays, so can't run a bridge on the same machines
  - what types are important -> obfs4
  - what internet connection type is best -> static IP
- Belarus, previous blocking stories

ipv6:
- How to run a relay with a dynamic IPv6-only address?
- ipv6-only relays, soon?
  - no :'(
  - chicken & egg problem: as long as most relays are ipv4, tor will stay ipv4
- ipv6-only bridges?
  - (did set up one today ... not yet listed at https://metrics.torproject.org/rs.html#search or https://bridges.torproject.org/status?id=[...] )
  - There's no reason in principle why ipv6-only bridges shouldn't work. People should try them, identify what goes wrong, and help us fix!
- Is anyone seeing *any* ipv6 traffic to obfs4proxy? (No, not handed out yet.)

Offline Master Keys:
- How do you deal with renewal of OMKs? Best practices?
- use https://github.com/nusenu/ansible-relayor to automate it, now with prometheus/MetricsPort support :) -> perfect!

General Q, answer any questions relay operators have:
- Bridges still need to have a reachable ORPort, even if it's never used and dangerous?
  - Yes, alas.
  - There's a ticket: https://gitlab.torproject.org/tpo/core/tor/-/issues/7349
  - The issue is that bridges do self-reachability tests, and won't publish if their ORPort is unreachable. Also, Serge won't give it the Running flag, so bridgedb won't give it out. It's all just engineering fixes, and we should do them.

Next meetup:
- @ FOSDEM (5 & 6 February '22)
- The exact date and time will be posted @ the tor-relays list

- KAX17 and what to do about it? - discussed, thanks
- How to support more diversity in the network, maybe add a flag for relays
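For the 'Run a bridge' campaign item in the notes above (obfs4 bridges on static IPs), a minimal obfs4 bridge torrc looks roughly like the following sketch. The ports, nickname, contact address, and obfs4proxy path are illustrative and distro-dependent:

```
# Minimal obfs4 bridge torrc (illustrative values)
BridgeRelay 1
ORPort 2222                                   # must still be reachable, per the ticket above
ServerTransportPlugin obfs4 exec /usr/bin/obfs4proxy
ServerTransportListenAddr obfs4 0.0.0.0:443   # the port obfs4 clients actually connect to
ExtORPort auto
ContactInfo bridge-op@example.com
Nickname ExampleObfs4Bridge
PublishServerDescriptor bridge
```

The ORPort line is the pain point discussed in the notes: even though obfs4 clients never use it, tor's self-reachability test requires it to be open.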
Re: [tor-relays] How to reduce tor CPU load on a single bridge?
BTW... I just fact-checked my post-script, and the CPU affinity configuration I was thinking of is for Nginx (not Tor). Tor should consider adding a CPU affinity configuration option.

What happens if you configure additional Tor instances on the same machine (my Tor instances are on different machines) and start them up? Do they bind to a different or the same CPU core?

Respectfully,
Gary

On Monday, December 27, 2021, 2:44:59 PM MST, Gary C. New via tor-relays wrote:

David/Roger:

Search the tor-relays mail archive for my previous responses on load balancing Tor relays, which I've been doing successfully for the past 6 months with Nginx (it's possible to do with HAProxy as well). I haven't had time to implement it with a Tor bridge, but I assume it will be very similar. Keep in mind it's critical to configure each Tor instance to use the same DirectoryAuthority and to disable the upstream timeouts on Nginx/HAProxy. Happy Tor load balancing!

Respectfully,
Gary

P.S. I believe there's a torrc config option to specify which CPU core a given Tor instance should use, too.

On Monday, December 27, 2021, 2:00:50 PM MST, Roger Dingledine wrote:

On Mon, Dec 27, 2021 at 12:05:26PM -0700, David Fifield wrote:
> I have the impression that tor cannot use more than one CPU core -- is that
> correct? If so, what can be done to permit a bridge to scale beyond
> 1×100% CPU? We can fairly easily scale the Snowflake-specific components
> around the tor process, but ultimately, a tor client process expects to
> connect to a bridge having a certain fingerprint, and that is the part I
> don't know how to easily scale.
>
> * Surely it's not possible to run multiple instances of tor with the
> same fingerprint? Or is it? Does the answer change if all instances
> are on the same IP address? If the OR ports are never used?

Good timing -- Cecylia pointed out the higher load on Flakey a few days ago, and I've been meaning to post a suggestion somewhere.
You actually *can* run more than one bridge with the same fingerprint. Just set it up in two places, with the same identity key, and then whichever one the client connects to, the client will be satisfied that it's reaching the right bridge.

There are two catches to the idea:

(A) Even though the bridges will have the same identity key, they won't have the same circuit-level onion key, so it will be smart to "pin" each client to a single bridge instance -- so when they fetch the bridge descriptor, which specifies the onion key, they will continue to use that bridge instance with that onion key. Snowflake in particular might also want to pin clients to specific bridges because of the KCP state. (Another option, instead of pinning clients to specific instances, would be to try to share state among all the bridges on the backend, e.g. so they use the same onion key, can resume the same KCP sessions, etc. This option seems hard.)

(B) It's been a long time since anybody tried this, so there might be surprises. :) But it *should* work, so if there are surprises, we should try to fix them.

This overall idea is similar to the "router twins" idea from the distant distant past:
https://lists.torproject.org/pipermail/tor-dev/2002-July/001122.html
https://lists.torproject.org/pipermail/tor-commits/2003-October/024388.html
https://lists.torproject.org/pipermail/tor-dev/2003-August/000236.html

> * Removing the fingerprint from the snowflake Bridge line in Tor Browser
> would permit the Snowflake proxies to round-robin clients over several
> bridges, but then the first hop would be unauthenticated (at the Tor
> layer). It would be nice if it were possible to specify a small set of
> permitted bridge fingerprints.

This approach would also require clients to pin to a particular bridge, right? Because of the different state that each bridge will have?
--Roger

___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
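Gary's Nginx front end isn't spelled out in this thread, but load balancing raw TCP connections across several Tor instances would use Nginx's stream module, along the lines of this sketch. The addresses are placeholders, and the oversized proxy_timeout is one way to approximate his "disable the upstream timeouts" advice:

```
# /etc/nginx/nginx.conf fragment (illustrative sketch)
stream {
    upstream tor_instances {
        server 10.0.0.1:9001;     # backend Tor ORPorts
        server 10.0.0.2:9001;
        server 10.0.0.3:9001;
    }
    server {
        listen 443;               # the single public address:port clients see
        proxy_pass tor_instances;
        proxy_timeout 1d;         # Tor circuits are long-lived: don't cut idle streams
        proxy_connect_timeout 10s;
    }
}
```

As Roger's message notes, this only works if the backends present the same identity key, and clients still need to stay pinned to one backend per descriptor fetch because onion keys differ per instance.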
Re: [tor-relays] Incorrect "first seen" info
Sure,

Search link: https://metrics.torproject.org/rs.html#details/9BC7102D7DD265092CB7C94D38A7E5AF1F24742D

Incorrect (and now present) timestamp for "First Seen": 2021-12-25 17:32:42
Earlier and correct timestamp for "First Seen": 2016-01-04 13:56:50

I took a screenshot of that page before the latest upgrade, so it appears the change should have something to do with that somehow. This is on Debian.

A tor op

‐‐‐ Original Message ‐‐‐
On Monday, December 27th, 2021 at 11:59 PM, nusenu wrote:

> Hi,
>
> could you paste the Relay Search page link
> and state the correct and incorrect first_seen timestamps?
>
> kind regards,
> nusenu
>
> ---
> https://nusenu.github.io
Re: [tor-relays] Relay operators meetup @ rC3
Hi,

On 12/27/21 04:29, Stefan Leibfarth wrote:
> I'll post the public link here as soon as it's available

https://jitsi.rc3.world/tor-relay-meetup

Best
Leibi