Re: [tor-dev] Project *Cute* - draft design and challenges
Resending to tor-dev with correct email address. Sorry to those receiving 2 copies.

On Oct 8, 2013 2:02 AM, SiNA Rabbani s...@redteam.io wrote:

Dear Team,

I have started on a draft design document for Project Cute. Please let me have your kind comments and suggestions.
(https://trac.torproject.org/projects/tor/wiki/org/sponsors/Otter/Cute)

All the best,
SiNA

Cute design and challenges - draft 0.1
By: SiNA Rabbani - sina redteam io

*Overview*

Project Cute is part of the Otter contract. This project is to provide (in the parlance of our time) point-click-publish hidden services, to support more effective documenting and analysis of information and evidence of ongoing human rights violations, and corresponding training and advocacy. Our goal is to improve hidden services so that they're more awesome, and to create a packaged hidden-service blogging platform which is easy for users to run.

*Objective*

To make secure publishing available to activists who are not IT professionals.

*Activities*

Tor offers Hidden Services, the ability to host web sites in the Tor network. Deploying hidden services successfully requires the ability to configure a server securely. Mistakes in setup can be used to unmask site admins and the location of the server. Automating this process will reduce user error. We have to write point-click-publish plugins that work with existing blogging and micro-blogging software.

*Expected results*

The result will be a way to provide portals to submit text, pictures, and video. These sites will not have the ability to log information that can be used to track down citizen journalists or other users, and will be resistant to distributed denial of service (DDoS) attacks.

*Introduction*

This document describes the technical risks associated with running a web-based blog tool and exposing it over a hidden service (.onion address).
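For concreteness, the server-side piece that point-click-publish tooling would automate is small; a minimal hidden service pointing at a locally hosted blog needs only two torrc lines. (The directory path and ports below are illustrative, not a recommendation.)

```text
# Illustrative torrc fragment -- path and ports are examples only.
HiddenServiceDir /var/lib/tor/blog_service/
# Map port 80 of the .onion address to a web server that listens
# only on localhost, so it is never reachable directly.
HiddenServicePort 80 127.0.0.1:8080
```

The hard part, as the rest of this document argues, is not these two lines but configuring the web stack behind them without leaking identity.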
Our goal is to create a packaged blogging platform that is easy to operate for the non-technical user, while protecting the application from a set of known attacks that can reveal and compromise the network identity.

Hidden services make it possible for content providers and citizen journalists to offer web applications such as blogs and websites, hosted completely behind a firewall (NAT network), never revealing the public IP of such services. By design, Hidden Services are resilient to attacks such as traffic analysis and DDoS; it therefore becomes compelling for the adversary to focus on application-layer vulnerabilities.

According to the OWASP Top 10, Injection is the number one security risk for web applications. "Injection flaws, such as SQL, OS, and LDAP injection occur when untrusted data is sent to an interpreter as part of a command or query. The attacker's hostile data can trick the interpreter into executing unintended commands or accessing data without proper authorization." [1]

Running a site such as WordPress involves a LAMP (Linux, Apache, MySQL, PHP) installation. This stack needs to be carefully configured and implemented to meet the desired privacy and security requirements. However, default configuration files are not suitable for privacy and lead to the leakage of identity.

WordPress is the most popular blog platform on the Internet. We select WordPress due to its popularity and active core development. WordPress features a plug-in architecture and a template system. "Plugins can extend WordPress to do almost anything you can imagine. In the directory you can find, download, rate, and comment on all the best plugins the WordPress community has to offer." http://wordpress.org/plugins/

Themes and plugins are mostly developed by programmers with little or no security in mind. New code is easily added to the site without the need for any programming skills.
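To make "default configuration files are not suitable for privacy" concrete, here is the kind of fragment a packaged Apache setup would need to ship. This is a sketch under stated assumptions, not a vetted hardening guide; directive values are illustrative.

```text
# Illustrative Apache privacy fragment -- not complete or audited.

# Don't advertise OS and version details in the Server header.
ServerTokens Prod
# No version banners on server-generated error pages.
ServerSignature Off
# Listen only on localhost, reachable solely through the tor daemon.
Listen 127.0.0.1:8080
# Avoid keeping visitor logs that could identify users.
CustomLog /dev/null common
```

Each of these defaults (full version banners, public listening sockets, persistent access logs) is harmless on an ordinary site but an identity leak behind a .onion address.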
This is a recipe for disaster: a quick look at the publicly available plugin repository and security forums reveals many of the OWASP top 10 vulnerabilities, such as XSS and injection:
http://packetstormsecurity.com/search/?q=wordpress
http://wordpress.org/plugins/

*Adversary Model*

We use the range of attacks available to the adversary as the base for our threat model and proposed requirements.

*Adversary Goals*

*Code Injection*

This is the most direct compromise that can take place; the ability for the adversary to execute code on the host machine is disastrous.

*XSS the front-end, DoS the back-end*

The adversary can overwhelm the database backend or the web server of a dynamically hosted application, denying access to the service.
Re: [tor-dev] Attentive Otter: Analysis of Instantbird/Thunderbird
Mike Perry:
+ Leveraging the work done on TorBirdy, we can distribute Instantbird and Tor (and related components) in a single package, or as a combined addon.
+ Use Tor Launcher as the controller (sukhe recently added Thunderbird support)
+ Will allow seamless zero-configuration Tor usage for the normal case, and will share Tor Browser's future Pluggable Transport support with no additional effort.
+ See the TorBirdy manual for more information: https://trac.torproject.org/projects/tor/wiki/torbirdy#TorBirdywithTorandTorLauncher

I'd like to add that we can probably also leverage the work done on using Mozilla's updater in TBB to perform automatic updates of an Instantbird bundle.

-- Lunar lu...@torproject.org

___ tor-dev mailing list tor-dev@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
[tor-dev] Project Cute - Design document and challenges - draft
Dear Team,

Sorry for multiple emails, I failed to send to tor-dev@ properly. Here is a first attempt at the Cute project. Please let me have your comments and suggestions.

https://trac.torproject.org/projects/tor/wiki/org/sponsors/Otter/Cute

All the best,
SiNA

Cute design and challenges - draft 0.1
By: SiNA Rabbani - sina redteam io

[...]

*Location/Identity information*

Applications that are not designed with privacy in mind tend to reveal information by default. For example, WordPress
Re: [tor-dev] CPAProxy - a thin Objective-C wrapper around Tor
Claudiu-Vlad Ursache:
- CPAProxy sets up Tor's DataDir in a temporary directory of an app's sandbox where it's not accessible by other processes. Should that directory be protected in any other way?

Having the DataDir be temporary means that the client will pick new guards every time it starts. This opens the door to more traffic correlation attacks. See the relevant FAQ for more details about entry guards: https://www.torproject.org/docs/faq.html.en#EntryGuards

-- Lunar lu...@torproject.org
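A sketch of the mitigation Lunar is pointing at: give the tor instance a persistent, app-private DataDirectory instead of a fresh temporary one, so the guard state survives restarts. The path below is illustrative, not CPAProxy's actual sandbox layout.

```text
# Illustrative torrc fragment -- path is an example only.
# A persistent DataDirectory lets tor keep its entry-guard state
# (stored in <DataDirectory>/state) across launches, instead of
# picking new guards every time the app starts.
DataDirectory /var/lib/tor
```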
Re: [tor-dev] Fwd: [rt.torproject.org #14731] Off by one buffer overflow in tor stable
On Mon, Oct 7, 2013 at 5:55 PM, Pedro Ribeiro ped...@gmail.com wrote:

-- Forwarded message --
From: Colin Childs via RT h...@rt.torproject.org
Date: 7 October 2013 14:25
Subject: [rt.torproject.org #14731] Off by one buffer overflow in tor stable
To: ped...@gmail.com

On Mon Oct 07 12:13:21 2013, ped...@gmail.com wrote:

Hi, I think there is a small buffer overflow (off by one) in the current stable version of tor. The function in question is format_helper_exit_status, which returns a formatted hex string. It is in common/util.c, starting at line 3270. The function has a comment header that explains how it works. It specifically says it returns a string that is not null-terminated, but instead terminates with a newline.

The code checks periodically throughout the function whether it has written more bytes than it should. If it has, it errors out and writes a null character at the current character position. This by itself can lead to a buffer overflow, but I believe the checks ensure that it almost never writes over the buffer size - except in one case. After it has finished everything, it then checks again whether there are more than 0 bytes left in the buffer. If there are, it writes two characters - a newline and a null terminator (lines 3342 to 3347). The problem here is that if the buffer only has one byte left, an off-by-one overflow occurs. These are usually very hard to exploit, but can be a security issue nonetheless. However, given that I am not familiar with the Tor codebase, I might be wrong.

I also did a quick security audit on the rest of the tor code and couldn't find anything else. I was inspired by the recent events... Thanks, Pedro!

Thanks for looking at Tor! I agree that this probably isn't exploitable, but let's not take any chances.
(I'm thinking "not exploitable" not because off-by-one buffer overflows are safe, but because in order to get the overflow to happen at all, you would need to have errno be a high-magnitude negative number, which is not usually something that happens on unixy platforms. Moreover, you'd need to arrange for this to happen as a result of launching a pluggable transport proxy, which is not usually something under the attacker's control.)

Nonetheless, this should get fixed. I've opened https://trac.torproject.org/projects/tor/ticket/9928 to track it, and written a possible fix.

best wishes,
-- Nick
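For readers following along, the bug pattern Pedro describes can be reduced to a toy formatter. This is an illustrative reconstruction, not Tor's actual format_helper_exit_status code: the point is that the final write of '\n' plus '\0' needs two free bytes, while the buggy check only required one.

```c
#include <stddef.h>

/* Toy sketch of the bug pattern: terminate a partially filled buffer
 * with "\n\0".  `used` is how many bytes are already written.
 * Returns 0 on success, -1 if the terminator does not fit.
 *
 * The buggy version effectively checked `buflen - used > 0` (at least
 * one byte free) and then wrote TWO bytes, overflowing by one when
 * exactly one byte was left.  The fix is to require room for both the
 * newline and the NUL. */
int terminate_buf(char *buf, size_t buflen, size_t used)
{
    if (used > buflen || buflen - used < 2)
        return -1;              /* not enough room for '\n' + '\0' */
    buf[used] = '\n';
    buf[used + 1] = '\0';
    return 0;
}
```

With a 4-byte buffer and used == 3, the one-byte-free check would pass and the second write would land at buf[4], one past the end; the two-byte check refuses instead.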
[tor-dev] REMINDER: Boisterous Otter helpdesk/l10n meeting in 2h, at 1200 Pacific
Hello all,

This is your irregularly-scheduled reminder that we will be discussing the Helpdesk and translation in just a smidge over two hours (that's 1200h Pacific) in #tor-dev on irc.oftc.net. Highlights will include hearing from Sherief, Lunar, and Phoul about their research into ways to provide support chat accessible through our website. We'll also make a preliminary incursion into the topic of organizing shifts at predictable times. How very exciting!

Other developments: I'm going to slice and dice Boisterous into two separate components which we'll run in parallel; otherwise each of these discussions and whatnot is going to run forever. In this track, we'll continue with helpdesk and l10n. The other track will focus on outreach, training, videos, and documentation. If you're mainly in it for the helpdesk/l10n, you have nothing further to do: your meetings are just shorter. If you're interested in outreach, training, videos, and documentation (and I already know that Kelley, Karen, Runa, and Sherief might be), then look for the meeting-scheduling message to go out to tor-dev, probably today. If you drop me a line, then I'll try to do something clever to make sure that the message hits your inbox too.

That's all from this reminder. Any questions, folks?

-Tom
Re: [tor-dev] Fwd: [rt.torproject.org #14731] Off by one buffer overflow in tor stable
On Oct 8, 2013 4:41 PM, Nick Mathewson ni...@alum.mit.edu wrote:

[...] I agree that this probably isn't exploitable, but let's not take any chances. [...] Nonetheless, this should get fixed. I've opened https://trac.torproject.org/projects/tor/ticket/9928 to track it, and written a possible fix.

No problem Nick. If you issue an advisory, credit to Pedro Ribeiro (ped...@gmail.com) is appreciated.

Regards
Pedro
[tor-dev] Tor Beta area
Hi good people,

I do run a relay [1], but with my work as Team Leader for lubuntu quality / testing [2] with our small, but dedicated, team I cannot allocate the time to install and test your RC. Instead, I put at your disposal a VM (kvm) with a dedicated IPv4 address running ubuntu server 12.04 LTS. This can be changed to whatever is best used for testing. I can take snapshots, so if you crash and burn it, it does not take too long to re-install. It is also possible to pull full logs out of a broken VM using guestfish [3].

If this would be of use for you on alpha / beta testing relay software, please do get in touch.

Regards,
Phill.

1. https://metrics.torproject.org/relay-search.html?search=176.31.156.199
2. https://wiki.ubuntu.com/Lubuntu/Testing
3. http://irclogs.ubuntu.com/2013/06/28/%23ubuntu-classroom.html#t16:01

-- https://wiki.ubuntu.com/phillw
Re: [tor-dev] Hidden Service Scaling
On Tue, Oct 8, 2013 at 1:52 AM, Christopher Baines cbain...@gmail.com wrote:

I have been looking at doing some work on Tor as part of my degree, and more specifically, looking at Hidden Services. One of the issues where I believe I might be able to make some progress is the Hidden Service Scaling issue as described here [1]. So, before I start trying to implement a prototype, I thought I would set out my ideas here to check they are reasonable (I have also been discussing this a bit on #tor-dev). The goal of this is twofold: to reduce the probability of failure of a hidden service, and to increase hidden service scalability. I think what I am planning distils down to two main changes.

Firstly, when an OP initialises a hidden service: currently, if you start a hidden service using an existing keypair and address, the new OP's introduction points replace the existing introduction points [2]. This does provide some redundancy (if slow), but no load balancing. My current plan is to change this such that if the OP has an existing public/private keypair and address, it would attempt to look up the existing introduction points (probably over a Tor circuit). If found, it then establishes introduction circuits to those Tor servers.

Then comes the second problem: following the above, the introduction point would then disconnect from any other connected OP using the same public key (unsure why, as a reason is not given in the rend-spec). This would need to change such that an introduction point can talk to more than one instance of the hidden service.

So, let's figure out all our possibilities before we pick one, and talk about requirements a little.

Alternative 1: Multiple hidden service descriptors. Each instance of a hidden service picks its own introduction points, and uploads a separate hidden service descriptor to a subset of the HSDir nodes handling that service.

Alternative 2: Combined hidden service descriptors in the network. Each instance of a hidden service picks its own introduction points, and uploads something to every appropriate HSDir node. The HSDir nodes combine those somethings, somehow, into a hidden service descriptor.

Alternative 3: Single hidden service descriptor, one service instance per intro point. Each instance of a hidden service picks its introduction points, and somehow they coordinate so that they, together, get a single unified list of all their introduction points. They use this list to make a single signed hidden service descriptor, and upload that to the appropriate HSDirs.

Alternative 4: Single hidden service descriptor, multiple service instances per intro point. This is your design above, where there's one descriptor chosen by a single hidden service instance (or possibly made collaboratively?), and the rest of the service instances fetch it, learn which intro points they're supposed to be at, and parasitically establish fallback introduction circuits there.

There are probably other alternatives too; let's see if we can think of some more. Here are some possible desirable things. I don't know if they're all important, or all worth it. Let's discuss!

Goal 1) Obscure number of hidden service instances.
Goal 2) No master hidden service instance.
Goal 3) If there is a master hidden service instance, clean fail-over from one master to the next, undetectable by the network.
Goal 4) Obscure which instances are up and which are down.

What other goals should we have in this kind of design?

-- Nick
Re: [tor-dev] Status report - Stream-RTT
On Saturday 10 August 2013 02:37:48 Damian Johnson wrote:
Yup. It's unfortunate that tor decided to include an 'Exit' flag with such an unintuitive meaning. You're not the first person to be confused by it.

Is this meaning at least documented somewhere, and have I just read over it?

Here's the relevant part of the spec... https://gitweb.torproject.org/torspec.git/blob/HEAD:/dir-spec.txt#l1738

Patch submitted in ticket 9932 [0].

Best,
Robert

[0] https://trac.torproject.org/projects/tor/ticket/9932
[tor-dev] RESPONSE REQUESTED: scheduling Otter-related outreach, training, documentation &c. kickoff [due wednesday]
People of Earth, heed my call!

Based on feedback that Boisterous Otters has too much to do, and on the fact that current Boisterous meetings are using their full time on only half the topics, we're splitting Boisterous into two components.

The first component is going to stick with the name Boisterous, and deals with the first two activities: the helpdesk, and l10n. If you're already working on these things, nothing is changing for you, and you may be able to stop reading this right here.

The second component is picking up the name **Buoyant**, and will deal with the remaining activities:

* written materials on general communications security
* short multi-language training videos
* outreach to Iranians and the Iranian diaspora
* social-media, radio, and television outreach
* training, and train-the-trainer work

If that sounds like your cup of tea, kettle of fish, or flagon of ale, then you should come help us get started by filling in this [Doodle][] **by Wednesday afternoon** so that I can plan a meeting at the end of this week or the beginning of next. Please fill out the Doodle if you're definitely coming and plan to be part of the core effort. If you just want to show up to spectate or lend a hand, there's no RSVP needed.

I plan to look at the Doodle around 1700-1900h Pacific tomorrow, and I'll use whatever it says then to try and pick the best time. I'll send the time out to tor-dev, bcc-ing everyone who's expressed an interest or answered the poll. That's 24 hours to give me your availability, but only about 16 hours between when I send out the time and the first possible meeting slot, so check your mail tomorrow evening Pacific.

If you have any specific suggestions about venue, format, or other running *of the meeting*, and you plan to be there, please reply publicly or privately before I send out the scheduling message.
If you have specific plans about how to git'er done, you can either email me, or bcc me on a tor-dev-bound email with an appropriately-ottery subject line. Do note, however, that other folks may not be able to read your suggestions if you send them less than 12 hours before the meeting. I'll try to read them anyway, but that's only out of the kindness of my heart.

Questions on a postcard please,
-Tom

[Doodle]: http://doodle.com/pm5cfunbcwuyrwh6
Re: [tor-dev] Hidden Service Scaling
On 08/10/13 23:41, Nick Mathewson wrote:
Here are some possible desirable things. I don't know if they're all important, or all worth it. Let's discuss!

So, I think it makes more sense to cover the goals first.

Goal 1) Obscure number of hidden service instances.

Good to have, as it probably helps with the anonymity of a hidden service. This is a guess based on the assumption that attacks based on traffic analysis are harder if you don't know the number of servers that you are looking for.

Goal 2) No master hidden service instance.
Goal 3) If there is a master hidden service instance, clean fail-over from one master to the next, undetectable by the network.

That sounds reasonable.

Goal 4) Obscure which instances are up and which are down.

I think it would be good to make the failure of a hidden service server (perhaps alone, or one of many for that service) indistinguishable from a breakage in any of the relays. If you don't have this property, distributing the service does little to help with attacks based on correlating server downtime with public events (power outages, network outages, ...). This is a specific form of this goal, which applies if you are in communication with an instance that goes down.

What other goals should we have in this kind of design?

Goal 5) It should cope (all the goals should hold) with taking down instances (planned downtime) and bringing up instances.

Goal 6) Adding instances should not reduce performance. I can see problems if you have a large, powerful server: adding a smaller server could actually reduce the performance if the load is distributed equally.

Goal 7) It should be possible to move between a single-instance and a multiple-instance service easily. (This might be a specific case of goal 5, or the two may just need consolidating.)

Alternative 1: Multiple hidden service descriptors. Each instance of a hidden service picks its own introduction points, and uploads a separate hidden service descriptor to a subset of the HSDir nodes handling that service.

This is close to breaking goal 1, as each instance would have to have >= 1 introduction point, which puts an upper bound on the number of instances. The way the OP picks the number of introduction points to create would have to be thought about with respect to this. Also, goal 4 could be broken: if the service becomes unreachable through a subset of the introduction points, this probably means that one or more of the instances have gone down (assuming that an attacker can discover all the introduction points?).

Alternative 2: Combined hidden service descriptors in the network. Each instance of a hidden service picks its own introduction points, and uploads something to every appropriate HSDir node. The HSDir nodes combine those somethings, somehow, into a hidden service descriptor.

Same problem with goal 4 as alternative 1. Probably also has problems obscuring the number of instances from the HSDirs.

Alternative 3: Single hidden service descriptor, one service instance per intro point. Each instance of a hidden service picks its introduction points, and somehow they coordinate so that they, together, get a single unified list of all their introduction points. They use this list to make a single signed hidden service descriptor, and upload that to the appropriate HSDirs.

Same problem with goal 4 as alternative 1. I don't believe this has the same problem with the number of instances as Alternative 1, though.

Alternative 4: Single hidden service descriptor, multiple service instances per intro point. This is your design above, where there's one descriptor chosen by a single hidden service instance (or possibly made collaboratively?), and the rest of the service instances fetch it, learn which intro points they're supposed to be at, and parasitically establish fallback introduction circuits there.

I don't really see how choosing introduction points collaboratively would work, as it could lead to a separation between single-instance services and multiple-instance services, which could break goal 7. It would also require the instances to interact, which adds some complexity.

As for the fallback circuits, they are probably better off being just circuits; this is what would provide the scaling. The way you do this would have to be thought out, though, to avoid breaking goal 6. A simple algorithm would be for the introduction point to just use a round robin over all the circuits to that service, but allow a service instance to reject a connection (if it has too much load); the introduction point would then continue to the next circuit. The introduction point would also know the number of instances, if each instance only connected once. This could be masked by having instances make multiple connections to each introduction point (both in single-instance and multiple-instance services). While an external attacker might not be able to detect individual instance failure by trying to
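The round-robin-with-rejection idea above could look something like this at the introduction point. This is a toy model, not Tor code: `struct circuit` and the `overloaded` flag stand in for whatever signal the service instance would actually use to refuse an introduction.

```c
#include <stddef.h>

/* Toy model of an intro point's circuit list for one hidden service.
 * Each circuit leads to one service instance; an instance under too
 * much load "rejects" the introduction and we try the next circuit. */
struct circuit {
    int overloaded;   /* stand-in for the instance rejecting us */
};

/* Round-robin starting after the last circuit used.  Returns the
 * index of the circuit that accepted, or -1 if every instance
 * rejected (the service is effectively at capacity). */
int pick_circuit(const struct circuit *circs, size_t n, size_t *last_used)
{
    for (size_t tried = 0; tried < n; tried++) {
        size_t i = (*last_used + 1 + tried) % n;
        if (!circs[i].overloaded) {
            *last_used = i;
            return (int)i;
        }
    }
    return -1;
}
```

Note that this sketch makes the instance-counting concern visible: with one circuit per instance, `n` is exactly the number of instances, which is why the text suggests letting each instance open several circuits.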
Re: [tor-dev] Attentive Otter: Analysis of xmpp-client
On Tue, Oct 8, 2013 at 8:40 PM, Watson Ladd watsonbl...@gmail.com wrote:
... The _easy_ fix is to make go.crypto constant time,

You Keep Using That Word, I Do Not Think It Means What You Think It Means