Re: [tor-talk] A 2nd Tor Browser (partial) profile?

2020-12-19 Thread Matthew Finkel
On Fri, Dec 18, 2020 at 04:36:53PM -0600, JoeB wrote:
> In TorBrowser 10.0.7 why is there a 2nd (partial) "profile.default",
> just above the original profile.default?  I thought TBB's auto update
> function had gone hay wire, but a clean install also creates 2
> profile.default folders.
> 
> Seems odd to create a 2nd profile w/ same name, so close to each other
> in tree view.
> The (partial or smaller) profile.default has *only 4* main folders:
> cache2, safebrowsing, startupCache, thumbnails. Was there a serious need
> to put those in a separate profile, same name, almost next to the
> original profile.default?

From what I see, this directory structure has been present in every Tor
Browser installation for the last six years, so I'm not sure why you are
only seeing it in the most recent version. As for why we add this
"Caches" directory, I don't know, and I don't know whether Tor Browser
actually caches information there temporarily.

For those who are curious, the initial patch was:
https://gitweb.torproject.org/tor-browser.git/commit/?h=tor-browser-24.5.0esr-4.x-1&id=3daa575b7f154f43bc2e2beb34e8ee30f73ac32a

from https://gitlab.torproject.org/legacy/trac/-/issues/9173.

> 
> In Linux, the smaller profile.default is at:
> /.torbrowser/tor-browser_/Browser/TorBrowser/Data/Browser/Caches/profile.default.
> 
> 
> * BTW, during TBB's *auto update* process, does it verify the D/L update
> package using PGP signatures or other method, or verify files & folders
> after they're installed?
> Meaning, verify with something other than checksums?

Yes, all updates are cryptographically signed. The format is described
on Mozilla's wiki: https://wiki.mozilla.org/Software_Update:MAR

Coincidentally, I documented the Tor Browser keys earlier this week:
https://gitlab.torproject.org/tpo/applications/tor-browser/-/wikis/Signing-Keys#releasealpha-channel-primaryearlier

> 
> Another oddity after *auto updating* the existing TBB (10.0.6) to
> 10.0.7, was several folders & files showed modified dates = 1999-12-31. 
> I'm not sure it hurt anything - I didn't test the auto updated v10.0.7
> much, but it connected to the web.
> 
> Doing a full, clean install to an empty ~/.torbrowser folder with a D/L
> full package fixed MOST of the 1999 modified dates.
> 
> So far I only see the NoScript package
> ({73a6fe31-595d-460b-a920-fcc0f8843232}.xpi) still showing the 1999
> modified date.

That sounds like
https://gitlab.torproject.org/tpo/applications/tor-browser-build/-/issues/11506
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Where is the tor signing key?

2020-12-10 Thread Matthew Finkel
On Thu, Dec 10, 2020 at 09:19:46AM +, Colin Baxter wrote:
> 
> The URL https://support.torproject.org/tbb/how-to-verify-signature/
> gives the impression that the signing key email address is
> torbrow...@torproject.org. However
> 
>  gpg2 --search-keys torbrow...@torproject.org 
> 
> gives 
> 
> gpg: key "torbrow...@torproject.org" not found on keyserver.
> 
> What's the correct email address for the signing key?

torbrow...@torproject.org is the correct email address, but you may be
querying the wrong server. On the page you referenced there is a section
for this:

"""
Fetching the Tor Developers key

The Tor Browser team signs Tor Browser releases. Import the Tor Browser
Developers signing key (0xEF6E286DDA85EA2A4BA7DE684E2C6E8793298290):

gpg --auto-key-locate nodefault,wkd --locate-keys torbrow...@torproject.org
"""

The key is available on keys.openpgp.org, as well, if you need it from
another key server:

https://keys.openpgp.org/search?q=torbrowser%40torproject.org
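
Once the key is imported, a typical verification of a downloaded package looks
something like this (a sketch; the package filenames are only an example for
the Linux 10.0.7 release):

  # export the imported key into a standalone keyring, then check the signature
  $ gpg --output ./tor.keyring --export 0xEF6E286DDA85EA2A4BA7DE684E2C6E8793298290
  $ gpgv --keyring ./tor.keyring tor-browser-linux64-10.0.7_en-US.tar.xz.asc \
      tor-browser-linux64-10.0.7_en-US.tar.xz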
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Fake Tor Browser on Google play store

2020-10-08 Thread Matthew Finkel
On Thu, Oct 08, 2020 at 07:23:23PM +, torrio888 wrote:
> On Google play store there is a fake Tor Browser called "Torn Browser" that 
> claims to be "the only official mobile browser supported by the Tor Project, 
> developers of the world’s strongest tool for privacy and freedom online".
> If you install it and check your IP address you see that it doesn't use Tor 
> at all.
> https://play.google.com/store/apps/details?id=com.kaptaigroup.Browser_Mini&hl=en_US&gl=US

Thank you! I'll report this now.

*sigh*
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] TBB-10 cursors, pointers location

2020-10-06 Thread Matthew Finkel
On Tue, Oct 06, 2020 at 05:57:54PM -0500, joebtfs...@gmx.com wrote:
> This is in Linux Mint 18.1 - Cinnamon.
> I know all cursor themes installed in my Mint installation.
> 
> After a fairly recent Tor Browser update - a few months?? ago, the
> "default" mouse pointer TBB used and the weird "select live links"
> pointer & others all changed to icons I've never seen. For years, it
> used to use the Settings/ selected cursor theme -  same as all other apps.
> 
> Now, TBB is the ONLY app that doesn't use the selected cursor theme.
[...]
> 
> Any ideas?

Do you have Firefox installed and does it use a different cursor theme?
Tor Browser does not (intentionally) bundle a special cursor theme; it
should use whichever theme Firefox uses, unless there's a good reason for
us to use a different one.
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Idea for a stand-alone browser based on Tor browser source code

2020-07-09 Thread Matthew Finkel
On Tue, Jul 07, 2020 at 12:12:55PM -0700, joel04g_t5...@secmail.pro wrote:
> Tor browser has good defense against many tracking techniques, however,
> internet users may want a way to browse the web without anonymity but with
> some of the great features Tor browser provides. (Security Level,
> Noscript, Isolate trackers, Anti Fingerprinting, etc)
> 
> Some reasons users may want a browser like this include:
> 
> ~Accessing websites that don't like Tor users
> 
> ~Streaming without latency
> 
> ~Doing online banking (Tor could create suspicion of a possible hack
> attempt and block or hassle users)
> 
> ~An easy and hassle free browser for users who really don't like latency
> or with bad internet
> 
> I believe a stand-alone browser with onion routing removed but based on
> the same code Tor browser uses could be useful to many internet users. I'd
> love to hear everyone's thoughts on this!

While this may be useful for some people, The Tor Project has no plans
for providing such a browser. All of the above reasons for using Tor
Browser without Tor are real usability issues we need to address and
resolve. This includes trade-offs between privacy and performance.

The simple fact is that if you want to use a browser that provides the
privacy and security protections of Tor Browser, then you need to use
Tor Browser. You can't use Tor Browser without Tor and get sufficient
privacy protections. Network-level security is the most important
attribute of Tor Browser. Without tor, you are trackable by your IP
address; cache isolation won't save you.

Tor Browser without Tor is just...Browser (basically Firefox in private
browsing mode). If that is really what you want, then I suggest you look
at ghacks-user.js [0] for modifying a Firefox installation. The
maintainers are excellent. Jeremy's pointer to SecBrowser seems like an
option, too (but make sure you understand what you're getting before you
trust it).

[0] https://github.com/ghacksuserjs/ghacks-user.js/
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Browser want UNIX sys_admin capability

2020-04-15 Thread Matthew Finkel
On Wed, Apr 15, 2020 at 10:49 PM Nicolas Vigier  wrote:
>
> On Wed, 15 Apr 2020, Mr. Bob Dobalina wrote:
>
> > Dear Friends,
> >
> > it happen to see Firefox and tor bundle browser (linux64) both demand for
> > sys_admin capabilities, see man 7 capabilitites.
> >
> > Why exactly they need the sys_admin caps?
>

How are you installing Tor Browser?

> It should not require CAP_SYS_ADMIN
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Revisiting youtube blocking TBB, virtually all 1st attempts to load YT

2020-03-04 Thread Matthew Finkel
On Tue, Mar 03, 2020 at 04:49:16PM -0600, joebtfs...@gmx.com wrote:
[snip]
> It's unclear if the exit relay's country was / is a factor - too little
> testing by me to be conclusive.
> But, I found that manually clearing TBB's cache in about:preferences,
> THEN restarting TBB worked a high percentage of attempts.  Just getting
> new identities was no longer enough; at least forcing new identities 6+
> times, on a number of days.
> 

I assume this means you are running Tor Browser in non-private browsing
mode? Otherwise clearing the cache before restarting shouldn't have any
effect.

[snip]
> YT / Google could also have changed their policy - again - how they were
> going to treat TBB or changed their definition of "abuse," so now there
> are many more sites meeting their criteria of abusive.
> 

This is our current assumption, but we don't have any more information
than what you described and our personal experiences.
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


[tor-talk] .onion alt-svc issues?

2020-03-02 Thread Matthew Finkel
Hello everyone,

I'm curious if anyone has recently experienced issues with websites
offering .onion alternative services. We have a slightly old Tor Browser
ticket for "prioritizing" .onion alt-svc entries over non-.onions, but
in my testing I could not reproduce the described behavior. I am going
to close the ticket as worksforme if this isn't a bug anymore, as it was
probably solved between Firefox 60esr and 68esr.

https://trac.torproject.org/projects/tor/ticket/27502

If you can reliably reproduce it, please provide detailed steps (either
here or on the ticket) and I'll investigate it.

Thanks!
Matt
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Validating the DA authority document

2020-02-23 Thread Matthew Finkel
On Sun, Feb 23, 2020 at 12:13 PM Gary Chapman  wrote:
>
> Hi community,
>
> I've tried reading the TOR docs, but I can't seem to wrap my head around

"Tor" [0]

> how the authority document works (as regards signature validation) ... I've
> gotten circuit building working in a standalone c# library, but I'm
> struggling with validating the directory.
>
> For clarity, the document I am referring to, is the following :
> http://193.23.244.244/tor/keys/authority.z

Good, I assume you already read the dir-spec? [1]

With respect to keys, it's important to understand which keys exist
and how they are used.

In the "how version 3 should be better than version 2" section (0.2):

  * The most sensitive data in the entire network (the identity keys
of the directory authorities) needed to be stored unencrypted so
that the authorities can sign network-status documents on the fly.
Now, the authorities' identity keys are stored offline, and used
to certify medium-term signing keys that can be rotated.

Therefore, we know authorities have (at least) two keys: their identity key and
their signing key.

Looking at section (appendix?) B "General-use HTTP URLs",

   The key certificate for this server (if it is an authority) should be
   available at:

  http://<hostname>/tor/keys/authority.z


This document provides all public keys associated with this authority, so you
are following the correct path.

The document (currently) contains four public keys, along with some metadata
(see 3.1. Creating key certificates for more details):

  1) version number of this certificate
  2) the authority's identity key fingerprint
  3) the period for which this certification is valid
  4) the directory's long-term (RSA) identity key
  5) the directory's medium-term (RSA) signing key
  6) a signature over item 4 using the medium-term signing key
  7) a signature over items 1-6 (plus the header of 7) using the long-term
 identity key

In particular, see section 1.3 for how a signature is computed over a document.

You can take the public keys and process them with openssl if you want to
sanity-check your implementation (either paste the key directly on the command
line as input to this command, or provide it in a file and add the
|-in <filename>| argument):

  $ openssl rsa -RSAPublicKey_in -noout -text
  [snip]
  RSA Public-Key: (3072 bit)
  Modulus:
  00:ed:c6:57:bc:34:71:7e:30:d8:b6:bf:7f:f5:4b:
  10:f3:9d:be:e9:c9:87:32:bf:15:56:1f:06:90:bc:
  1b:ab:74:73:aa:39:14:2f:04:36:47:d9:85:bc:6e:
  05:9e:a3:1c:30:a5:e2:eb:6a:d8:60:0d:df:64:bd:
  5a:7e:14:fc:d3:87:29:bf:2a:7f:0f:77:4f:33:c4:
  [snip]


The important thing to notice here is that this key uses the RSA Public Key
(PKCS#1) PEM format.

>
> I've tried verifying various areas of the document ...
> With various line endings CRLF/LFCR/LF/CR
> With various signature algorithms SHA1withRSA / SHA256withRSA / etc
> With both of the footer signatures
>
> I am using the "dir identity" RSA key at the top of the document as the
> reference key to verify against - I'm assuming this is correct, it's the
> only thing I can find that looks like the top level key.
>
> Unfortunately, no matter what I try, I just get a signature mismatch every
> time and I'm running out of sensible permutations.  Clearly I'm missing
> something.
>
> Could some kind soul please point me in the right direction?

Following from the above, next, if you want to validate the signature on
this certificate using openssl apps, then it is a three-step process (as far
I can figure out):

  1) Obtain the digest of the document to-be-signed
  2) Get the digest from the signature
  3) Verify the two digests are identical

For (1), you can obtain this with:
  $ openssl dgst -sha1 -binary tor26-certificated_text | openssl base64

For (2), you can obtain this with:
  $ openssl rsautl -verify -inkey tor26_id_key -pubin -keyform PEM -in
tor26-certificated_text.bin.sig | openssl base64

(As a side note, I tried very hard to get openssl dgst to validate this
directly, but it expects the signature as a DigestInfo without padding and I
don't know how to transform the sig into that format.)

In the above example, |tor26-certificated_text| is the certificate excluding
the final signature (as defined in section 1.3). |tor26_id_key| is the identity
public key converted from PKCS#1 RSA PEM format to X.509 SubjectPublicKeyInfo
PEM.  |tor26-certificated_text.bin.sig| is the signature at the bottom of
the file, with the SIGNATURE header and trailer stripped, and base64 decoded.

For the RSA PEM (PKCS#1) to SubjectPublicKeyInfo PEM conversion, I used:

  $ openssl rsa -in tor26_rsa_id_key -RSAPublicKey_in -pubout -out tor26_id_key

When you obtain the two digests, then you can byte-for-byte compare them
however you'd like.

In the above example, I chose to encode them in base64 so that I could easily
compare them on the command line. You can leave them as binary data and compare
them however you'd like.

If the two digests match, then the signature is valid.
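
If you want to script the whole comparison, a rough sketch (using the same
file names as above, plus two temporary digest files I made up for the
example) is:

  # digest of the certified text, and digest recovered from the signature
  $ openssl dgst -sha1 -binary tor26-certificated_text | openssl base64 > digest.doc.b64
  $ openssl rsautl -verify -inkey tor26_id_key -pubin -keyform PEM \
      -in tor26-certificated_text.bin.sig | openssl base64 > digest.sig.b64
  $ cmp -s digest.doc.b64 digest.sig.b64 && echo "signature OK" || echo "signature MISMATCH"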

Re: [tor-talk] TBB "Security Level" Question.

2020-02-22 Thread Matthew Finkel
On Sun, Feb 16, 2020 at 2:56 PM  wrote:
>
> Reading documents like https://tb-manual.torproject.org/ answers a lot
> of questions for newer TBB users.  Also, just as Firefox changes
> constantly, TBB has ongoing changes.
>
> On 2/8/20 3:53 PM, mimb...@danwin1210.me wrote:
> > My impression is that the "Security Level" (standard, safer, safest) has
> > somewhat replaced NoScript.
> I don't think that's true.  If you read the differences in the TBB
> safety levels, it's fairly specific.  As for safety levels replacing NS,
> there may be *some* overlap.
>
> Forgetting JS for a moment, there are many things NS does that don't
> involve JS, that are worth using, even if JS is turned on in NS by default.

To be clear, the "Security Levels" are a simple wrapper around NoScript. It
provides three options (Standard, Safer, Safest), therefore Tor Browser users
should only be divided into three groups based on the respective properties.

The situation is more complex than this because Tor Browser reveals more
distinguishing information than the "security level" you selected. Some users
also customize their Tor Browser by installing additional extensions - they are
likely particularly unique.

> > NoScript is still an add-on but the icon does not appear as standard at
> > the top of the browser as used to be the case. Also, the preset
> > customization for "default" sites is to allow everything (except ping).
> Where does the NS icon appear for you?  The icon itself looks much the
> same as in the 1st quantum version.  It used to be placed to the left of
> URL bar - maybe still is, in a fresh install.  I always move it to the
> right of the search bar.

In newer installations the NoScript button doesn't appear next to the
address bar. You can only access the NoScript configuration through
about:addons now.

> > In terms of TBB's "Preferences / Privacy and Security" section, many sites
> > will not work unless the "standard" setting is chosen. Are there any
> > serious security ramifications of "standard" that can undermine the TBB
> > and thus acquire the user's real IP?

Yes, there may be some "security" ramifications. The "security levels" were
created, in part, because in the past some features in the browser had bugs
(vulnerabilities) that were potentially exploitable. Increasing the "security
level" in the browser is a trade-off against increased breakage on the
web - sometimes that breakage means reduced usability (click-to-play) or no
JavaScript on unencrypted connections, etc. If a vulnerability in Tor Browser
is exploited such that a connection can be made that bypasses the tor proxy,
then it will reveal your real IP address (under normal circumstances).

> > I assume not or what would be the point of the TBB? I imagine that browser
> > components that might be dangerous in a normal Firefox won't necessarily
> > be operational in a hardened TBB. Hence, "standard" (which includes JS,
> > WebGL, etc) is not a problem.
> For one very big thing, TBB (and Tor and how the Tor network functions),
> unhardened Firefox gives out much more info than TBB - even if TBB is on
> Safe level.
> It hides your true IP address, if users don't install certain addons
> that sometimes may leak your true IPa.
>
> It spoofs a lot of info given out in normal browswers, so the spoofed
> data is the same for all TBB users.  Other data shown by browsers, TBB
> may not give out at all.

Correct. Tor Browser's default configuration is significantly more
privacy-preserving than any other browser available today. In addition, Mozilla
has made significant progress in hardening Firefox, and consequently this has
improved Tor Browser's hardening. If you are really concerned about your IP
address being revealed, then you should either use the Safer or Safest level,
or not use the Internet at all (or use Tails or Qubes).

>
>
> >
> > Could someone e.g. Roger please clarify this fact. It does feel a bit odd
> > using sites with JS, etc, freely working whereas in my non-TBB Firefox, I
> > have to constantly allow NoScript to "temporarily trust" most sites.

Yes, this is another trade-off between usability and letting everyone create
custom rules for every website they visit.

- Matt
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] loading some content changes Tor Browser 9.0 to full screen

2019-11-19 Thread Matthew Finkel
Hi!

Sorry for the delay, thanks for your questions.

On Tue, Nov 5, 2019 at 9:16 AM Joe  wrote:
>
> In TBB 9.0, should about:config "full-screen-api.enabled" be "true?"
> It is =true by default, in my auto-updated TBB 9.0, in Linux Mint.

Yes.

>
> I also see similar (default value) prefs, that may / may not be involved
> here:
> full-screen-api.allow-trusted-requests-only = true
> (does that refer to "trusted requests" from sites, or something else?)
>
> full-screen-api.transition-duration.enter = 0 0 (zeros separated by a
> space)
> full-screen-api.unprefix.enabled = true

Yes.

>
> TBB 9.0 is the first version I remember that loading anything caused TBB
> to go full screen - links, images, videos [non-flash, but played using
> TBB HTML5 player].  Though apparently some things caused problems years
> ago - see old bug.
>
>   Found a several year old trac.torproject bug where some things caused
> window resizing.
> https://trac.torproject.org/projects/tor/ticket/9881
>
> > So what is your proposed patch for this bug then just doing a
> > |browser.link.open_newwindow.restriction = 0|?
>
> > Yes.
> >
> > Plus |full-screen-api.enabled = false| to fix #12609
> > [note:
> > #12609 is closed]
>
> Is that pref's default value now back to true?

It never changed. That comment was a suggestion; it was never
implemented (as far as I know).

>
> My security level is Safer and java script in NS is disabled.
> But even to load text on some sites, at least the first party scripts
> must be allowed.
>
> Maybe js being enabled plus changes in Firefox allow scripts for some
> content to force the (real) detected full screen size, when js is enabled?

Fullscreen is only available if it is initiated by a user clicking on something.

https://searchfox.org/mozilla-esr68/source/dom/base/Element.cpp#3310 says:

  // Only grant fullscreen requests if this is called from inside a trusted
  // event handler (i.e. inside an event handler for a user initiated event).
  // This stops the fullscreen from being abused similar to the popups of old,
  // and it also makes it harder for bad guys' script to go fullscreen and
  // spoof the browser chrome/window and phish logins etc.
  // Note that requests for fullscreen inside a web app's origin are exempt
  // from this restriction.

This also prevents leaking screen dimensions on a webpage unless you
explicitly click on an element that invokes full screen.

Yes, this still leaks real screen dimensions, as Mike discussed in
https://trac.torproject.org/projects/tor/ticket/12609

Disabling fullscreen is not a good solution. We have another ticket,
where the user is prompted before fullscreen is allowed, for that:
https://trac.torproject.org/projects/tor/ticket/12979

>
> But, I've not seen this problem (since TBB screen size was spoofed)
> until upgrading to TBB 9.0.
>
> For several reasons, like accidentally hitting the maximize window
> button vs. close browser button, seems like there should be a pref ? or
> setting that disables the maximize window icon.  That won't fix the
> issue of some content making TBB go full screen.

The maximize button is not the same as requesting fullscreen, in
general. With letterboxing, maximizing the browser does not (should
not) leak real screen dimensions.
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] TorBrowser is only showing a black windows after today's update

2019-10-23 Thread Matthew Finkel
On Wed, Oct 23, 2019 at 6:28 PM Nirgal  wrote:
>
> Hi
>
> I'm using Tor Browser Bundle on Debian stable, with xfce.
>
> After last update, tor is only showing a black windows.
>
> I get the first small window "Establishing a connexion..." then the regular
> big window "Tor Browser", so I guess some stuff is working. But it's unusable
> because both windows are totally black.
>
> Any one else have that issue?
> Any work around?
>
> When running with --verbose, I get this kind of errors:
> [21969, Main Thread] WARNING: failed to open shm: Permission denied: file 
> /var/tmp/build/firefox-d051ff6e2f60/ipc/chromium/src/base/shared_memory_posix.cc,
>  line 142
> Crash Annotation GraphicsCriticalError: |[0][GFX1-]: Failed to lock new back 
> buffer. (t=0.466201) [GFX1-]: Failed to lock new back buffer.
> [21969, Main Thread] WARNING: failed to open shm: Permission denied: file 
> /var/tmp/build/firefox-d051ff6e2f60/ipc/chromium/src/base/shared_memory_posix.cc,
>  line 142
> Crash Annotation GraphicsCriticalError: |[0][GFX1-]: Failed to lock new back 
> buffer. (t=0.466201) |[1][GFX1-]: Failed to lock new back buffer. 
> (t=0.471717) [GFX1-]: Failed to lock new back buffer.
> [21969, Main Thread] WARNING: failed to open shm: Permission denied: file 
> /var/tmp/build/firefox-d051ff6e2f60/ipc/chromium/src/base/shared_memory_posix.cc,
>  line 142
> Crash Annotation GraphicsCriticalError: |[0][GFX1-]: Failed to lock new back 
> buffer. (t=0.466201) |[1][GFX1-]: Failed to lock new back buffer. 
> (t=0.471717) |[2][GFX1-]: Failed to lock new back buffer. (t=0.494473) 
> [GFX1-]: Failed to lock new back buffer.
> [21969, Main Thread] WARNING: failed to open shm: Permission denied: file 
> /var/tmp/build/firefox-d051ff6e2f60/ipc/chromium/src/base/shared_memory_posix.cc,
>  line 142
> Crash Annotation GraphicsCriticalError: |[0][GFX1-]: Failed to lock new back 
> buffer. (t=0.466201) |[1][GFX1-]: Failed to lock new back buffer. 
> (t=0.471717) |[2][GFX1-]: Failed to lock new back buffer. (t=0.494473) 
> |[3][GFX1-]: Failed to lock new back buffer. (t=0.505148) [GFX1-]: Failed to 
> lock new back buffer.
> [21969, Main Thread] WARNING: failed to open shm: Permission denied: file 
> /var/tmp/build/firefox-d051ff6e2f60/ipc/chromium/src/base/shared_memory_posix.cc,
>  line 142
> Crash Annotation GraphicsCriticalError: |[0][GFX1-]: Failed to lock new back 
> buffer. (t=0.466201) |[1][GFX1-]: Failed to lock new back buffer. 
> (t=0.471717) |[2][GFX1-]: Failed to lock new back buffer. (t=0.494473) 
> |[3][GFX1-]: Failed to lock new back buffer. (t=0.505148) |[4][GFX1-]: Failed 
> to lock new back buffer. (t=0.554666) [GFX1-]: Failed to lock new back buffer.
> [21969, Main Thread] WARNING: failed to open shm: Permission denied: file 
> /var/tmp/build/firefox-d051ff6e2f60/ipc/chromium/src/base/shared_memory_posix.cc,
>  line 142
> Crash Annotation GraphicsCriticalError: |[0][GFX1-]: Failed to lock new back 
> buffer. (t=0.466201) |[1][GFX1-]: Failed to lock new back buffer. 
> (t=0.471717) |[2][GFX1-]: Failed to lock new back buffer. (t=0.494473) 
> |[3][GFX1-]: Failed to lock new back buffer. (t=0.505148) |[4][GFX1-]: Failed 
> to lock new back buffer. (t=0.554666) |[5][GFX1-]: Failed to lock new back 
> buffer. (t=0.704898) [GFX1-]: Failed to lock new back buffer.

Hi, thanks for reporting this!

When you describe seeing a black window, does it look like the
screenshot on this bug?
https://bugzilla.mozilla.org/show_bug.cgi?id=1421353#c0
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] TBB and Importing Cookies.

2019-10-02 Thread Matthew Finkel
Hi,

Interesting question.

On Wed, Oct 2, 2019 at 4:03 PM  wrote:
>
> Just a quick question about how cookies are used in Tor.
>
> Just for an experiment, I exported cookies from Firefox in JSPN format and
> imported then into TBB (via an add-on that allows the import of JSON
> data).
>
> However, while the cookies were imported in the add-on, only some of them
> appeared in Preferences / Manage Cookies and Site Data.
>
> .google.com and accounts.google.com were in Manage Cookies and Site Data
> but mail.google.com did not appear.
>
> Any ideas why?

Nope. Without more information it's difficult to guess why this happened, too.

In addition to installing the new add-on, did you modify Tor Browser in other
ways? Which add-on did you use for exporting/importing the cookies? It sounds
like "importing" the cookies for mail.google.com failed for some reason.
Did you try clearing the cookies in Firefox and importing them there? And was
that successful?
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] stay clear from exit "BSDNow2016"

2019-07-02 Thread Matthew Finkel
On Tue, Jul 2, 2019 at 1:35 PM nusenu  wrote:
>
> Hi,

Hi nusenu,

>
> the tor exit relay "BSDNow2016" [1] reroutes
> traffic back into the tor network instead of exiting it.
> It uses other exits instead of being
> an actual exit. This allows it to inspect traffic without having to deal
> with abuse.
> I've no evidence that this exit actually inspects traffic.
>
> This has been ongoing since the very first day of its operation (2019-02-10) 
> [2].
> It got reported on 2019-02-11 and 2019-03-13 to bad-relays@
> and two attempts to contact the operator remained with no reply.

Thanks for this info!

What was the decision by the bad-relays@ list members? Unfortunately,
"Please avoid this exit node" is not a very good solution because recommending
every user exclude one node does not scale to millions of users (unless Tor
Browser ships this configuration, but that would be a sad path for Tor
Browser). :/

- Matt
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Tor Browser Android 8.5.1 obfs4 Bridges Problem

2019-06-10 Thread Matthew Finkel
On Mon, Jun 10, 2019 at 10:02 AM Georg Koppen  wrote:
>
> Lotta Kallio:
> > Yes, i tried. It is not working. If someone can interest with this issue we 
> > would be appreciated in here.
>
> It is weird that those bridges are working for you on desktop and not on
> mobile. Are you on the same network when it is working on desktop and
> not on mobile? If so, could you file a ticket in our bug tracker at
> https://trac.torproject.org/projects/tor ?
>

There was a chat about this on IRC. The current thought is this relates to
one of the recent bridge bugs, like
https://trac.torproject.org/projects/tor/ticket/29875

I'm most confused because the notice-level logs and CIRC events show
the client successfully established a connection with the bridges, but tor
does not mark them as usable. I haven't looked at the referenced ticket,
so maybe there's a reasonable explanation how they're all related.
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] [tor-relays] ALL GREYPONY SERVERS ARE OPEN FOR DDOS

2019-05-03 Thread Matthew Finkel
On Fri, May 03, 2019 at 10:27:14AM +, Stirling Newberry wrote:
> ATTENTION GREYPONY USERS
> 
> ALL GREYPONY EXITS WILL BE TARGETED FOR DDOS. THIS REQUEST HAS COME FROM 
> PRETTY HIGH UP. GAME OVER BITCHES. YOU GUYS ARE GOING TO BE DONE. LOIC WILL 
> BE USED TO TARGET THE NODES. DON'T SAY YOU WERENT WARNED YOU STUPID SACKS OF 
> FAGGOT SHIT.

Just so we all have the same understanding here, no one should send or
respond to emails like this. Just like harassment, threats (even idle
threats) are not acceptable here.

Thanks, in advance, for your decorum.
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] How Tor Browser changing circuits for tabs?

2019-04-26 Thread Matthew Finkel
On Sat, Apr 27, 2019 at 12:55 AM  wrote:

> How does Tor Browser change circuits for each tabs?
>

You may find this section of the Tor Browser design document
of interest:
https://2019.www.torproject.org/projects/torbrowser/design/#identifier-linkability


>
> In Tor Browser, a circuit for tabs of Facebook is different from a
> circuit for tabs of Wikipedia for example. I checked these by:
> https://i.ibb.co/7bsRbjy/checking-circuits.jpg
>
> I thought that I could never get this behavior without modifying Tor
> Browser source code.
> So I'm searching the code: unzip-ing
> tor-browser_en-US/Browser/browser/omni.ja and grep-ing "circuit". But I
> couldn't find what I wanted. It's reckless of me.
>

> Any suggestion? If you know, parts of the code that changes circuits for
> tabs, or good documents, let me know them please.
>
>
You'll want to look at torbutton. The code for providing first-party
isolation (FPI) is part of the domain isolator (in particular
isolateCircuitsByDomain):
https://gitweb.torproject.org/torbutton.git/tree/src/components/domain-isolator.js

The circuit display is controlled by this:
https://gitweb.torproject.org/torbutton.git/tree/src/chrome/content/tor-circuit-display.js


> My guess:
> I know that it will change the circuits to send "signal NEWNYM" to Tor
> control port. May Tor Browser use the port with some commands like it
> when opening tabs?
>

NEWNYM affects the state of the entire tor process; there isn't a way to
apply NEWNYM to a single circuit or stream. The way Tor Browser controls
the connection is via SOCKS5 username and password authentication. Tor
doesn't care what values you send, but it will isolate connections that use
different usernames and passwords.

I hope this helps.
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] tor in arm

2019-04-16 Thread Matthew Finkel
On Tue, Apr 16, 2019 at 10:53:39AM +, Linklinklink wrote:
> tor browser doesn't work in a arm based system, i tried with raspberri pi 3. 
> Who know a very nice alternative for this shit? thanks

Thanks for your interest! "this shit" isn't appropriate for this
mailing list, please be respectful to everyone here.

For Tor Browser on ARM we have an open ticket for it [0], we may begin
supporting it within the next few months, but we've had too many other
higher priority tickets. If you're comfortable with compiling it
yourself, there is a git branch available for review/testing you can
try (without any guarantees) - but let us know if you're successful :)

Unfortunately, I don't have any other alternatives, sorry.

- Matt

[0] https://trac.torproject.org/projects/tor/ticket/12631
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] tor project website change

2019-03-29 Thread Matthew Finkel
On Thu, Mar 28, 2019 at 05:39:00AM -0700, Mirimir wrote:
> On 03/28/2019 02:27 AM, Nicolas Vigier wrote:
> > On Wed, 27 Mar 2019, Mirimir wrote:
> > 
> >> On 03/27/2019 08:01 AM, Udo van den Heuvel wrote:
> >>> Hello,
> >>>
> >>> Who changed the web content at https://www.torproject.org/download/ ?
> >>> Previously I could relatively easily check for the latest tor version
> >>> but now I get only a number of tor browser options in a page that is way
> >>> too big for what it offers. (and I use a 4K screen)
> >>> Why was this done? What purpose does it serve for tor? (not the browser)
> >>> And where is one supposed to find the tor download page from that (now)
> >>> tor browser page?
> >>>
> >>> Udo
> >>
> >> Yes, the Tor Project site has increasingly focused on Tor browser.
> > 
> > And the blog post is saying:
> > https://blog.torproject.org/meet-new-torprojectorg
> > 
> > In addition to this update, we are also better organizing all the
> > other content into different portals. For instance, last year we
> > launched our support portal to host all the content related to user
> > support. Coming next will be our community.torproject.org portal
> > that will feature content related to the different ways you can join
> > our community and spread the word about Tor. The portal for all of
> > our free software projects will soon be dev.torproject.org.
> > 
> > So there are plans to add the informations that is currently missing
> > from the new website to those new portals. And in the meantime the old
> > website is still available at https://2019.www.torproject.org/.
> 
> Is there a link to that on the new version? I didn't see one on the
> first page.

Not explicitly; the "Documentation" link at the top of the new site
goes to that domain.

- Matt
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] tor project website change

2019-03-27 Thread Matthew Finkel
On Wed, Mar 27, 2019 at 04:01:04PM +0100, Udo van den Heuvel wrote:
> Hello,
> 
> Who changed the web content at https://www.torproject.org/download/ ?
> Previously I could relatively easily check for the latest tor version
> but now I get only a number of tor browser options in a page that is way
> too big for what it offers. (and I use a 4K screen)
> Why was this done? What purpose does it serve for tor? (not the browser)
> And where is one supposed to find the tor download page from that (now)
> tor browser page?

Hi Udo,

The website redesign started at least six years ago, if not earlier. Multiple
people worked on it and made it possible. The new website provides a
more welcoming interface, in particular for new users. Now the page is
concise and provides links for exactly the version a user likely needs.
If the user needs or wants another version (in another language, or the
Alpha version), those are available on a subsequent page.
People don't want too many options; they simply want the correct/best
option. This new design provides that for them.

If you have specific suggestions for improving this new design, then
please let us know. It seems you would like easier access to the tor
download page; I also don't see a link to it. That would be a good
improvement, thanks for letting us know.

- Matt
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Moving stuff between Tor Browsers

2018-12-21 Thread Matthew Finkel
On Thu, Dec 20, 2018 at 04:24:30PM +0100, Robin Lee wrote:
> Hi list
> 
> I'm wondering what is the proper way to move all the
> settings/bookmarks/saved passwords from one instance of Tor Browser to
> an other? It doesn't matter if information gets overwritten on the
> receiving end. Same version 8.0.4 and same OS (Linux) on both.

There isn't a specific way of doing this built into the browser. You can
back up and restore the entire profile (which is what Mozilla suggests)
[0]. In this case, you'll want the profile at
tor-browser_en-US/Browser/TorBrowser/Data/Browser/profile.default/
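
For example (a rough sketch; "old-install" and "new-install" are placeholder
directory names, and both browsers should be closed while copying):

  # Copy the whole profile over the destination's profile; files with the
  # same names in the destination are overwritten.
  $ cp -a old-install/tor-browser_en-US/Browser/TorBrowser/Data/Browser/profile.default \
      new-install/tor-browser_en-US/Browser/TorBrowser/Data/Browser/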

Bookmarks can be exported using the built-in backup/restore process[1].

Tor Browser doesn't provide any additional feature for this. It's simply
a more private and secure version of Firefox.

[0]
https://support.mozilla.org/en-US/kb/back-and-restore-information-firefox-profiles#w_backing-up-your-profile
[1]
https://support.mozilla.org/en-US/kb/restore-bookmarks-from-backup-or-move-them

> 
> /Robin
> 
> -- 
> tor-talk mailing list - tor-talk@lists.torproject.org
> To unsubscribe or change other settings go to
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Get relay information from the "Network Status Consensuses" file?

2018-11-25 Thread Matthew Finkel
On Sun, Nov 25, 2018 at 03:57:34PM +0800, Lucon Yang wrote:
> The "Network Status Consensuses" file contains some relay information I
> need. Now I have a relay's fingerprint, but I did't find a fingerprint
> field in this file.
> How can I get a relay's information from the file by the relay's
> fingerprint?

You'll probably be interested in reading the official consensus document
specification [0].

The short answer is: if you have the relay's fingerprint encoded in
base16 (hex), then you'll need to convert it to base64. The result is on
the r-line in the field next to the relay's name. The section from
the specification says:

   "Identity" is a hash of its identity key, encoded in base64, with
   trailing equals sign(s) removed."

I usually use python for this, because it's relatively easy, but there
may be even easier methods.

If I want to look at maatuska [1] and I know its fingerprint encoded in hex
is BD6A829255CB08E66FBE7D3748363586E46B3810, then I convert the encoding
into base64 with:


$ python
Python 2.7.15 (default, May  9 2018, 11:32:33) 
[GCC 7.3.1 20180130 (Red Hat 7.3.1-2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import binascii
>>> binascii.b2a_base64(binascii.a2b_hex('BD6A829255CB08E66FBE7D3748363586E46B3810'))
'vWqCklXLCOZvvn03SDY1huRrOBA=\n'


This takes the hex string and converts it into binary, and then takes
the binary and converts it into base64. You'll want the result without
the trailing '=' and newline - vWqCklXLCOZvvn03SDY1huRrOBA.

So, looking at the current consensus:

$ torsocks wget 
https://collector.torproject.org/recent/relay-descriptors/consensuses/2018-11-25-15-00-00-consensus

We can see Maatuska has this r-line:

   r maatuska vWqCklXLCOZvvn03SDY1huRrOBA 8Ws9cPzBQWrlqtTZJWKFEixvz1Y 
2018-11-25 09:42:37 171.25.193.9 80 443
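
If you prefer shell tools over python, the same conversion and lookup can be
done like this (a sketch, using the consensus file fetched above):

  $ echo BD6A829255CB08E66FBE7D3748363586E46B3810 | xxd -r -p | base64 | tr -d '=\n'
  vWqCklXLCOZvvn03SDY1huRrOBA
  $ grep '^r maatuska vWqCklXLCOZvvn03SDY1huRrOBA' 2018-11-25-15-00-00-consensus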


Hope this helps,


[0] https://gitweb.torproject.org/torspec.git/tree/dir-spec.txt#n2258
[1]
https://metrics.torproject.org/rs.html#details/BD6A829255CB08E66FBE7D3748363586E46B3810
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Exit nodes can redirect requests?

2018-08-09 Thread Matthew Finkel
On Thu, Aug 09, 2018 at 08:14:03PM +0200, onionsmas...@tutanota.com wrote:
> 
> So I was browsing some old clearnet forum posts using Tails and Tor browser. 
> Some posts had embedded images from a Tor hidden site via onion.casa gateway. 
> That gateway site seems to be inactive nowadays.
> I refreshed the page a few times, and sometimes Tor browser was attempting to 
> load something from the same site but using tor2web.xyz gateway instead.
> I checked page source and didn't spot any references to tor2web.xyz.
> So what happened?

Without seeing the actual website, we can only guess what caused this.
Did you have javascript enabled in Tor Browser? Maybe there was a
javascript file that tries alternative tor2web gateways?

>Can exit nodes redirect requests like this?

It depends. In theory, yes, it could in this case. This would qualify
the exit node as a bad relay, but in practice it could have detected that
onion.casa is a dead website and sent an HTTP redirect to tor2web.xyz.

> I mean, if original request was to site.onion.casa/foo but it was redirected 
> to site.tor2web.xyz/foo?
> This was rather strange and I don't really understand what happened. I think 
> it's very questionable if exit nodes do redirects like this. Is it even 
> possible? What have I not noticed?

It seems more likely this was a feature provided by the forum, but if
the exit relay injected a redirect from onion.casa to tor2web.xyz then
it is a good idea to find which relay this is and investigate it.
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] HSTS forbids "Add an exception" (also, does request URI leak?)

2018-08-08 Thread Matthew Finkel
On Wed, Aug 08, 2018 at 04:27:33PM +, Need Secure Mail wrote:
> On August 8, 2018 1:57 PM, Matthew Finkel  wrote:
> 
> > Right. This is the recommendation in the RFC [0]. It would be
> > counter-productive if the webserver informed the browser that the
> > website should only be loaded over a secure connection, and then the
> > user was given the option of ignore that. That would completely defeat
> > the purpose of HSTS.
> >
> > [0] https://tools.ietf.org/html/rfc6797#page-30
> > Section 12.1
> 
> Thanks, I was already quite familiar with the RFC. I know its rationale.
> 
> But it is an absolute rule that *I* get the final word on what my machine
> does. That is why I run open-source software, after all. I understand that
> most users essentially must be protected from their own bad decisions when
> faced with clickthrough warnings. I have read the pertinent research. It's
> fine that the easy-clickthrough GUI button is removed by HSTS. However,
> if *I* desire to "completely defeat the purpose of HSTS", then I shall
> do so, and my user-agent shall obey me. I understand exactly how HSTS
> works, and I know the implications of overriding it.

Please consider opening a bug with Mozilla for this.

> 
> >> This error made me realize that Tor Browser/Firefox must load at least the
> >> response HTTP headers before displaying the certificate error message. I
> >> did not realize this! I reasonably assumed that it had simply refused to
> >> complete the TLS handshake. No TLS connection, no way to know about HSTS.
> >
> > Why? There are three(?) options here:
> >
> > 1) The domain is preloaded in the browser's STS list, so it knows ahead
> > of time if that site should only use TLS or not.
> 
> Although I did not check the browser's preload list, I have observed
> this on a relatively obscure domain very unlikely to be on it...

Full list (for latest stable Tor Browser):
https://gitweb.torproject.org/tor-browser.git/plain/security/manager/ssl/nsSTSPreloadList.inc?h=tor-browser-52.9.0esr-7.5-2

I don't have a good explanation for why you experienced this.

[snip]

> >> Scary. How much does Tor Browser actually load over an *unauthenticated*
> >> connection? Most importantly, I am curious, does it leak the request
> >> URI path (including query string parameters) this way? Or does it do
> >> something like a `HEAD /` to specifically check for HSTS? No request
> >> headers, no response headers, no way to know about HSTS. Spies running
> >> sslstrip may be interested in that.
> >
> > No? This was one of the main goals of HSTS. It should prevent SSL
> > stripping (for some definitions of prevent).
> 
> Key phrase: "for some definitions of prevent".
> 
> Inductive reasoning: For a site not in the STS preload list and never
> before visited, the only means for the user-agent to know about STS is
> to receive an HTTP response header. The only means to receive an HTTP
> response header, is to send HTTP request headers. Assume that the browser
> does not make an HTTP request. How does it know that the site uses STS?

Correct. HSTS is a TOFU protocol, and it only takes effect on the
second connection. From what I see in the Firefox code, the HSTS value
is only cached after the HTTP response header is parsed. The next time
the website is requested, Firefox checks the cache for a STS entry and
forces TLS if an entry exists.
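
For reference, the header that the server sends (and that Firefox caches)
looks roughly like the following; you can inspect it for any site with curl
(the max-age value shown is only illustrative):

  $ curl -sI https://www.torproject.org/ | grep -i strict-transport-security
  strict-transport-security: max-age=31536000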

Unless the browser includes the domain in the STS preload list, you
shouldn't experience a problem loading a broken-TLS-configured website
until the second request. But, maybe I missed something.
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] HSTS forbids "Add an exception" (also, does request URI leak?)

2018-08-08 Thread Matthew Finkel
On Wed, Aug 08, 2018 at 10:59:23AM +, Need Secure Mail wrote:
> On August 7, 2018 11:14 PM, nusenu  wrote:
> >> did you notice the non-HSTS/HSTS distinction when trying to add an 
> >> exception?
> 
> On August 8, 2018 1:51 AM, grarpamp  wrote:
> > If there is, would have to look closer, thx.
> 
> The following is to help searchers who rammed their heads into this
> problem, as I did when accessing clearnet version of a rather popular
> .onion (LE cert).
> 
> Firefox/Tor Browser disallows adding an exception. The "add an exception"
> button does not even appear! It gives the error message:
> 
> "This site uses HTTP Strict Transport Security (HSTS) to specify that
> Tor Browser may only connect to it securely. As a result, it is not
> possible to add an exception for this certificate."

Right. This is the recommendation in the RFC [0]. It would be
counter-productive if the webserver informed the browser that the
website should only be loaded over a secure connection, and then the
user was given the option of ignoring that. That would completely defeat
the purpose of HSTS.

[0] https://tools.ietf.org/html/rfc6797#page-30 Section 12.1

[snip]
> 
> 
> Topic drift observation:
> 
> This error made me realize that Tor Browser/Firefox must load at least the
> response HTTP headers before displaying the certificate error message. I
> did not realize this! I reasonably assumed that it had simply refused to
> complete the TLS handshake. No TLS connection, no way to know about HSTS.

Why? There are three(?) options here:

1) The domain is preloaded in the browser's STS list, so it knows ahead
of time if that site should only use TLS or not. If it is in the
preloaded list, then the browser establishes a TLS connection as the
first step. If this fails, then none of the HTTP request is leaked. If
the TLS connection fails, then the user is shown an error page and
cannot add an exception.

2) The domain is not in the preloaded list, so the browser learns about
the website setting HSTS on its first successful TLS connection and HTTP
request. This would potentially leak the user's entire request to a MITM
but this (HSTS) would not detect a MITM either. The MITM (or malicious
endpoint) would only be detected if they served an invalid certificate
chain for the domain name. The HSTS header would only prevent the user
from loading the website over an insecure HTTP connection in the future.

3) The user previously loaded the site and the browser cached a STS
value for that domain. If the user tries visiting the website again,
except this time they request an insecure connection, then the browser
will rewrite the URI so it uses TLS port 443 (by default), and then it
will initiate the TLS connection. There isn't any information leaked
before the TLS handshake. Furthermore, if the server and browser cannot
negotiate a valid TLS connection (because the certificate-chain is
invalid, or the ciphersuites don't intersect), then the user is
presented with an error message which they cannot override and add an
exception (as you experienced).

> 
> Scary. How much does Tor Browser actually load over an *unauthenticated*
> connection? Most importantly, I am curious, does it leak the request
> URI path (including query string parameters) this way? Or does it do
> something like a `HEAD /` to specifically check for HSTS? No request
> headers, no response headers, no way to know about HSTS. Spies running
> sslstrip may be interested in that.

No? This was one of the main goals of HSTS. It should prevent SSL
stripping (for some definitions of prevent).

I'm also not sure whether you're referring to public key pinning as well,
where the website can specify the exact hash of its public key in the
HTTP headers. That is another topic, and it relies on
Trust-On-First-Use, too.
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] torjail - run programs in tor network namespace

2018-07-24 Thread Matthew Finkel
On Tue, Jul 24, 2018 at 01:44:36PM -0800, I wrote:
> > "Tor", and not "TOR".
> > --
> > With respect, 
> > Roman
> 
> Is this list for capital punishment?
> Or is this a community for freedom?

This is a list for empowering each other and building a community and
technology for helping those who are in need around the world. Please
refer to our social contract [0], if there's any confusion. Please do
not confuse spelling Tor, as requested, with someone who is very excited
and yelling "TOR!" or maybe a heading on a slideshow saying "TOR
BROWSER".

This isn't about punishment; we're simply asking that people follow the
request of the community and of the people who build this community/technology
and put their time, energy, and lives into it.

[0]
https://gitweb.torproject.org/community/policies.git/tree/social_contract.txt
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] torjail - run programs in tor network namespace

2018-07-23 Thread Matthew Finkel
On Mon, Jul 23, 2018 at 09:51:53AM +0200, bic wrote:
> Hello,
> 
> I want to share a project made in _to hacklab.
> 
> https://github.com/torjail/torjail

Nice! Very interesting.

> 
> We would like to have some feedback about the project, particularly if you
> find some way to deanonimize a program running in torjail, please, submit
> an issue!

A few comments (take or leave them):

1) Tor 0.2.3 was deprecated many years ago; there's no need to check for that
tor version number or to support its torrc options [0].
2) I enjoy the print output when it's configuring the namespaces, but
there's no need for so much yelling :) (s/TOR/Tor/) [1]

> print G " * Resolving via TOR"
> print G " * Traffic via TOR..."
> print G " * Creating the TOR configuration file..."
> print G " * Executing TOR..."

3) Keep in mind, using torsocks is not the same as using Tor's
transproxy.

4) Please be aware of the problem with using "tor" in the project's
name [2].

[0]
https://trac.torproject.org/projects/tor/wiki/org/teams/NetworkTeam/CoreTorReleases#Endoflife
[1] https://www.torproject.org/docs/faq.html.en#WhyCalledTor
[2] https://www.torproject.org/docs/trademark-faq.html.en#combining

> 
> [from readme]
> 
[snip]
> # Firejail support
> 
> We support a nice `-f` flag for uso firejail in pair wit torjail as
> security sandbox.

Have you looked at bubblewrap? It's a nice and simple namespacing
utility, too.

Thanks,
Matt
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] GNOME Is Removing the Ability to Launch Apps from Nautilus

2018-05-18 Thread Matthew Finkel
On Tue, May 15, 2018 at 01:48:00PM +, Nathaniel Suchy (Lunorian) wrote:
> According to recent commits the desktop enviroment GNOME is removing the
> ability to launch apps from Nautilus. This will likely affect all Tor
> Browser users on Ubuntu in the name of "security". What steps will /
> should be taken from now till the time the update is released to protect
> Tor Browser users from losing access?

Do you have any links about this? This is an interesting development.

Thanks,
Matt
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Tor Browser "Sandbox" Development Status?

2018-05-01 Thread Matthew Finkel
On Tue, May 01, 2018 at 05:48:00PM +, Nathaniel Suchy (Lunorian) wrote:
> Hello,
> 
> So I see there is a download available, that's Linux exclusive, for Tor
> Browser "Sandbox", is this a stable or alpha build? Is this considered
> production ready or still an experiment?

Please see both warnings at the top of the page (in particular the
second):
https://trac.torproject.org/projects/tor/wiki/doc/TorBrowser/Sandbox/Linux

Where did you find the download link?

Creating a Sandboxed Tor Browser on all platforms is something we really
want, but it is non-trivial and requires more research and time.

Thanks,
Matt
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Fingerprinting issue in Tor Browser for macOS

2018-04-30 Thread Matthew Finkel
On Mon, Apr 30, 2018 at 04:26:00AM +, Nathaniel Suchy (Lunorian) wrote:
> When using Tor Browser for macOS, the EFF's tool panoptclick shows that
> a large amount of fonts are available, while Tor Browser for Linux
> claims only Wingdings is available. This could allow a website know
> whether a Tor User is using Linux or not helping to create a unique
> fingerprint. Thoughts?

Yes, sadly this is the current state. The three supported platforms for
Tor Browser are distinguishable by looking at a user's available fonts.
This is currently a compromise between fingerprintability and usability.
Please see the Tor Browser Design document for additional details [0].
However, note this isn't the only method for identifying the underlying
platform.

[0]
Section 4.6, Subsection "Specific Fingerprinting Defenses in the Tor
Browser", Item 6. Fonts
https://www.torproject.org/projects/torbrowser/design/#fingerprinting-linkability
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] How do the OBFS4 "built-in" Bridges work?

2018-04-30 Thread Matthew Finkel
On Mon, Apr 30, 2018 at 04:21:00AM +, Nathaniel Suchy (Lunorian) wrote:
> So the concerns I brought up are already addressed in an upcoming update?

Yes, and/but no. Moat is a good step in the correct direction, but it by
no means solves all the problems. Moat simply retrieves bridges from
bridges.torproject.org "automatically", rather than requiring you to do it.
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] How do the OBFS4 "built-in" Bridges work?

2018-04-30 Thread Matthew Finkel
On Sun, Apr 29, 2018 at 03:41:47PM -0400, Nathaniel Suchy (Lunorian) wrote:
> Thank you for clarifying that. The obfs4 bridges you can get at
> bridges.torproject.org also pose an interesting risk, the ports each
> Bridge IP Address is using seem to be non-standard, I'm in the US and
> most networks I am at do not censor although sometimes certain ports at
> public wifi networks are blocked, could a threat actor threatening you
> or tor users in general realize an IP Address was a Tor Bridge by
> identifying a large amount of traffic to a non-standard port on random
> datacenter IP Addresses?

Yes, it is possible. There's nothing magical about how Tor sends the
traffic and none of the currently-deployed pluggable transports
significantly modify a user's traffic pattern. A network operator could
observe strange traffic from a client, where the destination is a rarely
used IP address and the port number is non-standard. This could be a Tor
connection or it could be a brand-new up-and-coming app which could
revolutionize the world. What does the network operator do? Do they
block the traffic because it *could* be a connection into the Tor
network?

Of course, there is the next step the network operator could take -
active probing. If they suspect a connection is into a Tor bridge, then
they can try connecting to it, and if it responds like a Tor relay then
they can classify it as "Tor". The obfs4 pluggable transport includes
active probing protection where the client must have the bridge's
non-public second identity key as a requirement for establishing a
connection with the bridge. If the client does not have this identity
key, then the initial obfs4 connection will fail and the server will
not leak the fact there is a Tor bridge underneath it.
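
For illustration, the client-side bridge line for obfs4 carries that extra
secret in the cert= parameter; the values below are placeholders, not a real
bridge:

    Bridge obfs4 192.0.2.10:443 0123456789ABCDEF0123456789ABCDEF01234567 cert=<base64 value handed out with the bridge> iat-mode=0

Without a valid cert= value the obfs4 handshake simply fails, so an active
prober who only knows the IP address and port learns nothing that marks the
machine as a Tor bridge.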

> 
> You can tell Tor Browser your Firewall only allows connections to
> certain ports which I assume when used with bridges would help further
> hide the fact you are using Tor.

Not necessarily. That option only tells Tor "don't choose a relay as my
first-hop (guard/entry relay) if I know it will be blocked". This simply
avoids choosing a relay listening on a blocked port when we already know the
network firewall only allows ports 443 and 80.
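
For reference, the underlying torrc options look roughly like this (a sketch;
Tor Browser exposes the same setting through its network configuration
dialog):

    ReachableAddresses *:80,*:443
    # or the shortcut, which defaults to ports 80 and 443:
    FascistFirewall 1

Again, this only constrains which entry relays or bridges the client will
pick; it does not hide the fact that the resulting traffic is Tor.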
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] How do the OBFS4 "built-in" Bridges work?

2018-04-29 Thread Matthew Finkel
On Sun, Apr 29, 2018 at 02:06:49PM -0400, Nathaniel Suchy (Lunorian) wrote:
> I see that Tor Browser, for users who are censored in their country,
> work, or school (or have some other reason to use bridges) has a variety
> of built in bridges. Once of those are the OBFS4 bridges. My first
> thought would be these are hard coded, of course giving everyone the
> same set of bridges is bad right?

Currently this is how it works, yes. It is not ideal, and there is
on-going development work for rolling out a more scalable method.

> Then a bad actor could download Tor
> Browser, get the list, and null route the IPs on their network(s). Also
> these bridges could get quite crowded. Are the bridges being used to
> fetch other bridges, or something else? How does Tor Browser handle
> these risks / technical issues?

Indeed "Bad actors" could block the bridges hard-coded in Tor Browser.
It is also true many of those default bridges are overloaded.
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Structured Information on Tor

2018-04-13 Thread Matthew Finkel
On Fri, Apr 13, 2018 at 03:30:15AM -0400, sVYVGcT5jfBv9vnfL0Ey wrote:
> Hey Guys!

Hi! It's great you're interested in this! Just so you know for next time,
as a matter of respect for everyone on this list, we use "y'all" or
"everyone" (or something similar).

> 
> I am a total Tor-Newbie and wanted to know if there is any structured source 
> of information, like in a 200-page book or so, that teaches me everything 
> important? Most of the information I could find is scattered over many sites. 
> I downloaded a few papers from freehaven, which is super interesting, but it 
> is also quite academical information.

Unfortunately, no. There isn't an all-inclusive book about Tor. This is
partly due to no one having any spare time for working on it, and partly
due to Tor evolving so quickly the book would be outdated before it
reached the publishing presses (along with some other reasons). The
current Tor documentation page [0] has nearly all the details you need,
but it's spread across multiple locations.

There are high-level overviews and lower-level specifications, but we are
missing a middle-level description.

The specifications are the core documentation for the different
components. If you want something high-level you can look at [0]
section (a) and the EFF's "Tor and HTTPS" website [1]. If you need more
details about how this all works, [0] section (k) has links for the
various parts of Tor. In particular, the Tor Design paper (The Second
Generation Onion Router) is still relevant, although many parts of Tor
today are different. There is a series of blog posts from a few years
ago describing updates since that paper was published [2][3][4]
(although some of that information is now old, too).

Section (i) at [0] has links for some interesting videos, too.

[0] https://www.torproject.org/docs/documentation
[1] https://www.eff.org/pages/tor-and-https
[2] https://blog.torproject.org/top-changes-tor-2004-design-paper-part-1
[3] https://blog.torproject.org/top-changes-tor-2004-design-paper-part-2
[4] https://blog.torproject.org/top-changes-tor-2004-design-paper-part-3

I hope this helps,
Matt
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Intercept: NSA MONKEYROCKET: Cryptocurrency / AnonBrowser Service - Full Take Tracking Users, Trojan SW

2018-03-25 Thread Matthew Finkel
On Sun, Mar 25, 2018 at 11:21:40AM -0400, Wanderingnet wrote:
> I'm puzzled by the entire basis of this posting. Bitcoin is, in itself, 
> well-known to be completely transparent on the blockchain, making 
> transactions traceable as a matter of course. Considerable study has been 
> done in this regard. Bitcoin anonymity can only be guaranteed where 
> transactions - the purchase of bitcoin itself - cannot be tied to a real 
> world account and identity. 
> The warning also seems to hinge on the use of a particular application in 
> order to provide an exploit, though this is unclear. 
> Have I got this wrong? 

Nope, this was included in the original Bitcoin paper, there's nothing
new about that. The fact that the United States government agencies
targeted the system and private companies is new, but not surprising
given what we know of their (dubious/unethical/illegal) historical
activities.

But this is now off-topic from onion routing, so this thread shouldn't
continue.

For reference:

"
The traditional banking model achieves a level of privacy by limiting
access to information to the parties involved and the trusted third
party. The necessity to announce all transactions publicly precludes
this method, but privacy can still be maintained by breaking the flow
of information in another place: by keeping public keys anonymous. The
public can see that someone is sending an amount to someone else, but
without information linking the transaction to anyone.
"

https://bitcoin.org/bitcoin.pdf


> 
> https://hackernoon.com/privacy-on-the-blockchain-7549b50160ec
> 
> https://cointelegraph.com/news/are-bitcoin-transactions-traceable
> 
> https://www.novetta.com/wp-content/uploads/2015/10/NovettaBiometrics_BitcoinCryptocurrency_WP-W_9182015.pdf
> 
> 
> ​Sent from ProtonMail, Swiss-based encrypted email.​
> 
> ‐‐‐ Original Message ‐‐‐
> 
> On March 20, 2018 6:40 PM, grarpamp  wrote:
> 
> > ​​
> > 
> > https://theintercept.com/2018/03/20/the-nsa-worked-to-track-down-bitcoin-users-snowden-documents-reveal/
> > 
> > https://www.reddit.com/search?sort=top=week=nsa+bitcoin
> > 
> > https://hn.algolia.com/?sort=byPopularity=pastWeek=nsa+bitcoin
> > 
> > https://slashdot.org/index2.pl?fhfilter=nsa+bitcoin
> > 
> > OAKSTAR MONKEYROCKET aka THUNDERISLAND
> > 
> > "agency working to unmask Bitcoin users about six months before
> > 
> > Ulbricht was arrested, and that it had worked to monitor Liberty
> > 
> > Reserve around the same time"
> > 
> > "American spies were also working to crack Liberty Reserve"
> > 
> > "analysts have .. tracked down senders / receivers of Bitcoin"
> > 
> > the agency was interested in surveilling some competing
> > 
> > cryptocurrencies, "Bitcoin is #1 priority,"
> > 
> > "using this “browsing product,” which “the NSA can then exploit.”
> > 
> > "These programs involve ventures with US companies."
> > 
> > Staff travel to "Virginia partner / these extremely patriotic
> > 
> > business associates" highly cloaked.
> > 
> > "
> > 
> > Partner provided first/last name metadata... large influx of
> > 
> > traffic... ftp process... causing disk space to fill up
> > 
> > "
> > 
> > Powers also used for traditional police work
> > 
> > “other targeted users will include those... International / Organised
> > 
> > Crime and Narcotics...Follow-The-Money" missions
> > 
> > "
> > 
> > cyber targets that utilize online e-currency services...
> > 
> > There’s no elaboration on who is considered a “cyber target.”
> > 
> > "
> > 
> > "
> > 
> > that the NSA would “launch an entire operation ... under false
> > 
> > pretenses” just to track targets is “pernicious,” ... Such a practice
> > 
> > could spread distrust of privacy software in general, particularly in
> > 
> > areas like Iran where such tools are desperately needed by dissidents.
> > 
> > This “feeds a narrative that the U.S. is untrustworthy,”
> > 
> > "
> > 
> > "Despite Bitcoin’s reputation for privacy... you should really lower
> > 
> > your expectations of privacy on this network.”
> > 
> > "
> > 
> > financial privacy “is something that matters incredibly” to the
> > 
> > Bitcoin community, and expects that “people who are privacy conscious
> > 
> > will switch to privacy-oriented coins” after learning of the NSA’s
> > 
> > work here.
> > 
> > "
> > 
> > "
> > 
> > If the government’s criminal investigations secretly relied on NSA
> > 
> > spying, that would be a serious concern. Individuals facing criminal
> > 
> > prosecution have a right to know how the government came by its
> > 
> > evidence, so that they can challenge whether the government’s methods
> > 
> > were lawful. That is a basic principle of due process. The government
> > 
> > should not be hiding the true sources for its evidence in court by
> > 
> > inventing a different trail.
> > 
> > "
> > 
> > https://theintercept.com/2017/11/30/nsa-surveillance-fisa-section-702/
> > 
> > "
> > 
> > Civil liberties advocates have long suspected that the 

Re: [tor-talk] Tor4

2018-03-06 Thread Matthew Finkel
On Tue, Mar 06, 2018 at 06:06:21AM -0500, Anon Hyde wrote:
> I've poor slow english and apologize for any possible inaccuracies. Who is
> "shill"?

This isn't related to your below question, and it's not related to Tor,
so this topic should be dropped.

"A shill, also called a plant or a stooge, is a person who publicly
helps or gives credibility to a person or organization without
disclosing that they have a close relationship with the person or
organization."

https://en.wikipedia.org/wiki/Shill

> 
> I mean the Tor has better design for secure data exchange at this moment,
> and any "Talk" can be build on the onion at several days, even hours. It is
> a surprise me almost complete absence of special (using onion address)
> applications. Most famous the TorChat has last edited in 2012. Nobody needs
> something or what?

There is much need and want for more applications using onion services.

Yes, TorChat gained popularity (mostly due to its name) but it was never
a program the community was comfortable promoting or using. Recently,
Ricochet has become a popular application for communicating. OnionShare
is also widely used. There are email providers using onion services, as
well.

https://ricochet.im/
https://onionshare.org/

> 
> I want to draw the community's attention to IPFS.io . This excellent
> technology could be used for the new Tor. Obviously, she has a strong lack
> of secrecy, for another hand for Tor would be very useful to get more
> properties of P2P. It would be great to combine them. On RAM-fs can work
> fast enough.

Indeed, IPFS is an interesting project. It gained support for using
onion services, too. The last comment is very important:

"We want to get all of IPFS and libp2p audited before we start
encouraging people to use it with TOR. Unfortunately, this will take a
while."

Although it would've been nice if they correctly spelled Tor.

https://github.com/ipfs/notes/issues/37

> 
> if I would be have a copy of the file from another Tor-user, I do not need
> to go outside
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Launching new tabs in existing TBB from the shell

2018-02-17 Thread Matthew Finkel
On Sat, Feb 17, 2018 at 09:29:15PM +0200, Lars Noodén wrote:
> On 02/17/2018 07:52 PM, Rusty Bird wrote:
> > Lars Noodén:
> >> I've looked around a bit and wonder how to launch new tabs from the
> >> shell into a running TBB instance.
> > 
> > $ ./Browser/start-tor-browser --allow-remote  # for the master process
> > $ ./Browser/start-tor-browser --allow-remote --new-tab example.com
> > 
> >> Also which documentation should I check for this info?
> > 
> > $ ./Browser/start-tor-browser --help
> > 
> > Rusty
> > 
> Thanks, I've tried those before.  When launching with --new-tab:
> 
> $ ./Browser/start-tor-browser --allow-remote --new-tab  example.com
> 
> It complains with the following popup dialog and does not open any new tabs:
> 
>   "Tor Browser is already running, but is not responding.
>   To open a new window, you must first close the existing
>   Tor Browser process, or restart your system."
> 
> The --allow-remote --new-tab method works fine with unmodified Firefox,
> just not with TBB.

That's surprising. I just tested it with a fresh Tor Browser
installation on 64-bit Linux and it worked without an error. A couple
questions:
  - What version of Tor Browser are you using?
    - Press the Alt key, then select the Help menu at the top, then click
      About Tor Browser
  - Did you already try restarting your computer and trying again?


- Matt
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] company devised process to disable Intel Management Engine

2017-12-10 Thread Matthew Finkel
On Sun, Dec 10, 2017 at 04:29:16PM -0600, Joe wrote:
> Not sure if this info has been posted before
> >"Purism disables intel's flawed management engine on linux-powered laptops
> LINUX PC MAKER Purism has devised a process to disable the flawed Intel
> Management Engine"
> https://www.youtube.com/watch?v=TGE6pABF23s
> 
> It appears Purism is selling laptops with Intel Management Engine disabled
> by...? maybe a proprietary method.
> I didn't catch in the video if they said how Purism is disabling Intel's ME.

This is a bit off-topic, but you might find these interesting:
https://puri.sm/learn/intel-me/
https://puri.sm/posts/coreboot-on-the-librem-13-v2-part-1/
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Need a stable .onion address hosted by the Tor project.

2017-10-25 Thread Matthew Finkel
On Wed, Oct 25, 2017 at 02:36:25PM +0200, Rob van der Hoeven wrote:
> Some extra info...
> 
> > I use expyuzz4wqqyqhjn.onion (www.torproject.org) as test .onion
> > address but this address does not seem very stable to me. Getting a
> > response will frequently take several minutes. Is there an .onion
> > address available with better (and more stable) response times?
> 
> I have also tested with: facebookcorewwwi.onion which gave snappy
> response times. Fast (less than 10 seconds) test responses are
> important because the connection test is optional. Although the
> connection test is performed by default, users are free to skip the
> test.

Something to keep in mind is Facebook intentionally makes their onion
service faster than normal. They are not concerned with location-privacy
so they disable most protections. If you need a stable and fast onion
service for testing, Facebook's may be the best choice (for the near
future, at least).

I'm curious, do you know what is failing? Are the connection requests
timing out? If you attach a controller to the tor client and watch for
circuit events, do you see the connection failing at the same point
frequently?

(I haven't run any tests, if I have some time I will)
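
If it is useful, something like this (a sketch using the stem library,
assuming a tor client with ControlPort 9051 and cookie or password
authentication) prints circuit events as they happen:

    from stem.control import Controller, EventType

    with Controller.from_port(port=9051) as controller:
        controller.authenticate()

        # print every circuit status change (LAUNCHED, EXTENDED, BUILT, FAILED, ...)
        def on_circ(event):
            path = ' -> '.join(fp for fp, nick in event.path)
            print(event.id, event.status, event.purpose, path)

        controller.add_event_listener(on_circ, EventType.CIRC)
        input('Watching circuits, press enter to stop...\n')

Seeing where failed circuits stop extending should show whether the problem
sits at the introduction point, the rendezvous point, or somewhere earlier.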
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] help

2017-08-15 Thread Matthew Finkel
Hi Petey,

On Wed, Aug 16, 2017 at 03:46:59AM +, petey lean wrote:
> I don't understand why i can't connect to the tor browser, i've joined the
> IRC I just kept getting link to the support page only thing on support page
> i saw about loading tor log files to get help when having a problem was in
> these different mailing lists... here is my tor log file.. or please let me
> know where i should be sending it...
> 
> 
> -- Logs begin at Tue 2017-08-15 19:50:52 UTC, end at Tue 2017-08-15
> 20:24:53 UTC. --
> Aug 15 19:50:55 kali systemd[1]: Starting Anonymizing overlay network for
> TCP...
> Aug 15 19:50:56 kali tor[1400]: Aug 15 19:50:56.422 [notice] Tor 0.3.0.9
> (git-100816d92ab5664d) running on Linux with Libevent 2.1.8-stable, OpenSSL
> 1.1.0f and Zlib 1.2.8.
> Aug 15 19:50:56 kali tor[1400]: Aug 15 19:50:56.424 [notice] Tor can't help
> you if you use it wrong! Learn how to be safe at
> https://www.torproject.org/download/download#warning
> Aug 15 19:50:56 kali tor[1400]: Aug 15 19:50:56.433 [notice] Read
> configuration file "/usr/share/tor/tor-service-defaults-torrc".
> Aug 15 19:50:56 kali tor[1400]: Aug 15 19:50:56.433 [notice] Read
> configuration file "/etc/tor/torrc".
> Aug 15 19:50:56 kali tor[1400]: Configuration was valid
> Aug 15 19:50:56 kali tor[1407]: Aug 15 19:50:56.582 [notice] Tor 0.3.0.9
> (git-100816d92ab5664d) running on Linux with Libevent 2.1.8-stable, OpenSSL
> 1.1.0f and Zlib 1.2.8.
[snip]
> Aug 15 19:50:57 kali Tor[1407]: Bootstrapped 0%: Starting
> Aug 15 19:50:58 kali Tor[1407]: Our clock is 6 hours, 9 minutes behind the
> time published in the consensus network status document (2017-08-16
> 02:00:00 UTC).  Tor needs an accurate clock to work correctly. Please check
> your time and date settings!
[snip]
> Aug 15 20:04:54 kali Tor[1407]: Received directory with skewed time
> (DIRSERV:199.254.238.53:443): It seems that our clock is behind by 7 hours,
> 0 minutes, or that theirs is ahead. Tor requires an accurate clock to work:
> please check your time, timezone, and date settings.
> Aug 15 20:04:55 kali Tor[1407]: Our clock is 6 hours, 55 minutes behind the
> time published in the consensus network status document (2017-08-16
> 03:00:00 UTC).  Tor needs an accurate clock to work correctly. Please check
> your time and date settings!
> Aug 15 20:04:55 kali Tor[1407]: I learned some more directory information,
> but not enough to build a circuit: We have no recent usable consensus.

It looks like Tor thinks your system clock is roughly 7 hours behind the
correct time. Is the timezone correctly configured on your system or did
you recently change your timezone or change your clock?

If I take a guess, I'd say your system is configured with UTC as the timezone,
but the clock itself is set for your local time. Do you know how to confirm
this and can you confirm this? 
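
If it helps, this is one way to check on a systemd-based system such as Kali
(commands only; the output is what matters):

    $ date          # local time, as the system believes it
    $ date -u       # the same instant expressed in UTC
    $ timedatectl   # shows the configured timezone and whether the RTC is in local time

If the UTC time from date -u is hours off from the real UTC time, the clock
itself is wrong; if UTC is right but the local time is wrong, the timezone
setting is the likely culprit.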

- Matt

> -- 
> tor-talk mailing list - tor-talk@lists.torproject.org
> To unsubscribe or change other settings go to
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] How stealth onions actually function?

2016-08-06 Thread Matthew Finkel
On Sat, Aug 06, 2016 at 10:24:53AM +0300, Nurmi, Juha wrote:
> Hi,
> 
> I have been playing with stealth onion services[1] to protect some of my
> SSH servers from SSH MITM. I like to keep my servers as hidden as possible.
> 
> Great to have this option on Tor :) I have some questions about it and I
> didn't find much information.

The only documentation I know that exists is the spec[3].

> 
> Could someone tell me how it actually functions? What is the difference
> between basic and stealth? In addition, can an attacker verify that onions
> with stealth option exists and are online?

The spec has more detail, but briefly, both authentication methods rely on
a pre-shared secret between client and service. The distinction is made where
that shared-secret is used.

When a service uses basic authentication, instead of publishing its introduction
points in plaintext, it encrypts the list of intro points with a key chosen at
random and then encrypts that symmetric key multiple times using the shared
secret for each client it has configured. With this, all clients can retrieve
the hidden service descriptor from the HSDir but if a client doesn't have a
valid shared secret then they can't find the intro points from the descriptor.

From the spec:
   When generating a hidden service descriptor, the service encrypts the
   introduction-point part with a single randomly generated symmetric
   128-bit session key using AES-CTR as described for v2 hidden service
   descriptors in rend-spec. Afterwards, the service encrypts the session
   key to all descriptor cookies using AES. Authorized client should be able
   to efficiently find the session key that is encrypted for him/her, so
   that 4 octet long client ID are generated consisting of descriptor cookie
   and initialization vector.

Stealth authentication is similar, except it publishes a hidden service
descriptor for each configured client.

   With all else being equal to the preceding authorization protocol, the
   second protocol publishes hidden service descriptors for each user
   separately and gets along with encrypting the introduction-point part of
   descriptors to a single client. 

   [...]

   A hidden service generates an asymmetric "client key" and a symmetric
   "descriptor cookie" for each client. The client key is used as
   replacement for the service's permanent key, so that the service uses a
   different identity for each of his clients. The descriptor cookie is used
   to store descriptors at changing directory nodes that are unpredictable
   for anyone but service and client, to encrypt the introduction-point
   part, and to be included in INTRODUCE2 cells
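
As a concrete (hypothetical) torrc sketch for a stealth-authenticated SSH
onion service, with the cookie and onion address below being placeholders:

    # on the service side
    HiddenServiceDir /var/lib/tor/ssh_onion/
    HiddenServicePort 22 127.0.0.1:22
    HiddenServiceAuthorizeClient stealth alice,bob

    # tor then writes one onion address and descriptor cookie per client
    # into /var/lib/tor/ssh_onion/hostname; each client adds its own line:
    HidServAuth abcdefghijklmnop.onion ABCDEF0123456789abcdef

A client without a matching HidServAuth line cannot locate or decrypt a
usable descriptor, which is why tor reports the service as unavailable.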

> 
> Moreover, several research papers measure the total number of onions and we
> know that someone is crawling TorHS Directories.
> Does HiddenServiceAuthorizeClient protect you against these measurements?
> 
> I tested my stealth service without the passphrase on Tor client and Tor
> says "Closing stream for '[scrubbed].onion': hidden service is unavailable
> (try again later)."
> 
> Tor manual describes HiddenServiceAuthorizeClient option[2]:
> 
> "If configured, the hidden service is accessible for authorized clients
> only. The auth-type can either be 'basic' for a general-purpose
> authorization protocol or 'stealth' for a less scalable protocol that also
> hides service activity from unauthorized clients. Only clients that are
> listed here are authorized to access the hidden service."
> 

This is exactly one reason why stealth hidden services are great.

> [1] https://github.com/juhanurmi/stealth-ssh
> [2] https://www.torproject.org/docs/tor-manual.html.en
[3] https://gitweb.torproject.org/torspec.git/tree/rend-spec.txt#n844
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Get Tor bridge via python code

2015-10-16 Thread Matthew Finkel
On Fri, Oct 16, 2015 at 02:52:33PM +0330, Farbod Ahmadian wrote:
> Hello every one:
> How can i get tor bridges via a python code?
> I mean i run the python code and it give me my bridges.
> Thank you :)

Hi Farod,

Sadly no, you can not retrieve bridges easily using a python script. The
website (https://bridges.torproject.org) and email
(brid...@bridges.torproject.org) are the only two available methods. The
reason for this is because if an API were available for this then it would
need to support CAPTCHA-like (proof-of-humanity) functionality, and there
was not an easy way to implement this - nor was there any need because no
one had previously requested it. Without some sort of CAPTCHA it would
be extremely easy for anyone to retrieve all available bridges and then
block their IP addresses - thus making the bridges useless.

- Matt
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] ISP CenturyLink Blocking Tor?

2015-02-01 Thread Matthew Finkel
On Sun, Feb 01, 2015 at 04:14:26PM +, nathan...@moltennetworks.co.uk wrote:
 I have been at a close friend's house recently and his provider is
 CenturyLink (at home I use TimeWarner Cable). I tried to download
 Tor Tails (over BitTorrent) and the Internet literally dropped so I
 closed BitTorrrent, afterwards I launched the Tor Browser Bundle and
 then my computer was disconnected from the router and couldn't
 reconnect and no one at the house could access the Internet, we had
 to reboot the router to fix the issue. I then enabled Pluggable
 Transports (meet-google or something like that) and now I'm able to
 connect to Tor without any issues. This really concerns me as I was
 able to repeat the crash by launching Tor Browser Bundle and crash
 the router again.
 

Wow. That is quite a coincidence. Can you ask your friend to contact
CenturyLink and ask them why this happened? It appears no one has
experienced this or, at least, no one has updated the Good/Bad ISP wiki
page with this [0].

You probably chose the meek-google pluggable transport[1]. Basically,
it takes advantage of the fact that someone can run a webserver on
Google's infrastructure (using AppEngine) such that when you establish
an HTTPS session with the webserver, your ISP only sees the connection
to Google's servers and not the specific server you're connecting to;
the specific webserver (meek) is defined within the encrypted portion.
When Google receives the connection, it correctly passes the
connection to the meek webserver. From there, the webserver then sends
your connection to a Tor Bridge, which is your first hop into the Tor
network. It's a very cool idea and it seems to work very well. The
current Tor Browser also supports connecting to a meek instance running
on Amazon's EC2 infrastructure and on Microsoft's Azure infrastructure.

[0] https://trac.torproject.org/projects/tor/wiki/doc/GoodBadISPs
[1] https://trac.torproject.org/projects/tor/wiki/doc/meek#Overview
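
For the curious, the client side of meek is just a pluggable-transport helper
plus a bridge line; the lines below are illustrative (the helper path is
simplified and the exact bridge line shipped with Tor Browser may differ):

    ClientTransportPlugin meek exec /usr/local/bin/meek-client
    Bridge meek 0.0.2.0:1 url=https://meek-reflect.appspot.com/ front=www.google.com

The front= domain is what the network sees in DNS and in the TLS handshake;
the url= value, which names the actual reflector, only travels inside the
encrypted HTTPS payload.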

 This worries me as I can see a direct link (Launch Tor Browsre
 Bundle without Pluggable Transports) and (Router and Internet
 require Reboot). Could this be a form of censorship within the
 United States?
 

Sounds like it. It would be great if someone directly asks CenturyLink
about this.
 I will be returning home today but this is really worrying me as it
 could be a form of censorship within the United States.
 
 Does anyone have any ideas or thoughts about this?

It's interesting that resetting the modem allowed you to access the
internet again. I wonder if the internet connection block you
experienced was based on the modem's IP address, and when you reset
the modem it was given a new dynamic IP address - hence bypassing the
block. This reminds me a little of the way China handles connections[2],
but it's still a little early to make a serious comparison.

The more information we can get about this, the better.

[2] https://blog.torproject.org/blog/closer-look-great-firewall-china


Also, as a general aside, please remember that trying to run a
bittorrent client over Tor is a bad idea[3] (the first half is the
relevant portion). I'm not sure if this was the intention, but just
a friendly reminder :)

[3] https://blog.torproject.org/blog/bittorrent-over-tor-isnt-good-idea
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Tor and solidarity against online harassment

2014-12-11 Thread Matthew Finkel
On Thu, Dec 11, 2014 at 11:56:36PM +, Jonathan Wilkes wrote:
 Hi Gregory, do you stand in solidarity with the Tor devs against online 
 harassment?  A wish to refrain from deflecting a conversation isn't exactly 
 the same thing.
 
 I stand in solidarity with the Tor community against online harassment.  I 
 also wish to point out that I have noticed online harassment of women is 
 particularly ferocious.  It shouldn't be tolerated here or in any free 
 software community.  Thanks for the clear stance in the blog post.

Hi Jonathan,

Thanks for your support, would you like your name signed on the statement?
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] TorBirdy 0.1.3 released - Our fourth beta release!

2014-12-09 Thread Matthew Finkel
On Tue, Dec 09, 2014 at 11:33:27PM +0530, Sukhbir Singh wrote:
  The plugin on AMO has been preliminarily reviewed and we are still in
  the review process. It is again possible (Hooray!) to install TorBirdy
  directly from Thunderbird or by downloading the extension in a web
  browser from Mozilla's website:
 
 TorBirdy 0.1.3 is now available from Mozilla Add-ons. This version has
 passed the full review process.
 
 https://addons.mozilla.org/en-US/thunderbird/addon/torbirdy/

Congrats on the full review!
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] misleading advice in Ars Technica article about patched binaries

2014-11-17 Thread Matthew Finkel
On Mon, Nov 17, 2014 at 07:16:58AM +, wyory wrote:
 Hello Tor friends:
 
 AN article on Ars Technica about the malicious exit nodes contains some
 misleading advice attributed to Tor officials.
 
 From end of article: Tor officials have long counseled people to employ
 a VPN when using the privacy service, and OnionDuke provides a strong
 cautionary tale when users fail to heed that advice.
 
 http://arstechnica.com/security/2014/11/for-a-year-one-rogue-tor-node-added-malware-to-windows-executables/

Thanks wyory. A new week, a new Tor headline. Happy Monday.

 
 Maybe the article was supposed to read HTTPS instead of VPN. Or,
 verify binaries instead of employ a VPN.
 
 Could a real Tor official contact the author and ask for a correction?

Probably a good idea. VPNs are useful, sometimes, but not very often and
definitely not for most use-cases.


- Matt
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Tor Blog: Thoughts and Concerns about Operation Onymous

2014-11-09 Thread Matthew Finkel
On Sun, Nov 09, 2014 at 08:48:35AM -0800, coderman wrote:
 Griffin, Matt, Adam, Roger, David, George, Karen, and Jake worked on a
 wonderful write up of all the questions and concerns regarding this
 Op:
 
 https://blog.torproject.org/blog/thoughts-and-concerns-about-operation-onymous
 

Thanks for sending this!


For those who read this earlier, two new paragraphs were added:

Under Attacks on the Tor network:

*Similarly, there exists the attack where the hidden service selects
the attacker's relay as its guard node. This may happen randomly or
this could occur if the hidden service selects another relay as its
guard and the attacker renders that node unusable, by a denial of
service attack[0] or similar. The hidden service will then be forced to
select a new guard. Eventually, the hidden service will select the
attacker.


And under Advice to concerned hidden service operators

*Another possible suggestion we can provide is manually selecting the
guard node of a hidden service. By configuring the EntryNodes option
in Tor's configuration file you can select a relay in the Tor network
you trust. Keep in mind, however, that a determined attacker will
still be able to determine this relay is your guard and all other
attacks still apply.


* Added information about guard node DoS and EntryNodes option - 2014/11/09 
18:16 UTC
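
A hypothetical torrc fragment for that suggestion (the fingerprint is a
placeholder for a relay the operator actually trusts):

    EntryNodes 0123456789ABCDEF0123456789ABCDEF01234567
    # optionally refuse to build circuits at all if that relay is unreachable:
    StrictNodes 1

As the post notes, this narrows who your guard is, but it does not stop a
determined attacker from learning which relay that is.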

 
 
 also,
 the performance link to doc/TUNING shows it could use much help.
 currently this is minimal, focused on file descriptor limits. more
 tuning guidance is needed!

Yes please!

 
 there is a good thread a few years past on tor-relays,
 https://lists.torproject.org/pipermail/tor-relays/2010-August/000164.html
 , which could provide instruction for additional knobs to turn for a
 solid relay or client under load.
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Facebook brute forcing hidden services

2014-10-31 Thread Matthew Finkel
On Fri, Oct 31, 2014 at 01:07:24PM -0400, Mike wrote:
 Here is an obvious question that I can't figure out.
 Why would you use a service that cares nothing about keeping your details
 secret?
 They'll give you up to the state faster than you can blink.
 
 If you are in a country that blacklists facebook, (china) logging onto
 facebook should be the least of your concerns. TOR and facebook don't
 belong in the same sentence.
 Honestly if I was running an exit node still. I'd just add facebook to
 nullroute right now.

Censorship and network interference is exactly what we're trying to
prevent in this situation. Why would you want to prevent someone from
reaching their destination? We're interested in freedom (and providing
unrestricted access), not controlling where users can and cannot go.
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] New SSLv3 attack: Turn off SSLv3 in your TorBrowser

2014-10-15 Thread Matthew Finkel
On Tue, Oct 14, 2014 at 10:15:26PM -0400, Nick Mathewson wrote:
 Hi!  It's a new month, so that means there's a new attack on TLS.
 
 This time, the attack is that many clients, when they find a server
 that doesn't support TLS, will downgrade to the ancient SSLv3.  And
 SSLv3 is subject to a new padding oracle attack.
 
 There is a readable summary of the issue at
 https://www.imperialviolet.org/2014/10/14/poodle.html .
 
 Tor itself is not affected: all released versions for a long time have
 shipped with TLSv1 enabled, and we have never had a fallback mechanism
 to SSLv3. Furthermore, Tor does not send the same secret encrypted in
 the same way in multiple connection attempts, so even if you could
 make Tor fall back to SSLv3, a padding oracle attack probably wouldn't
 help very much.
 
 TorBrowser, on the other hand, does have the same default fallback
 mechanisms as Firefox.  I expect and hope the TorBrowser team will be
 releasing a new version soon with SSLv3 disabled.  But in the meantime,
 I think you can disable SSLv3 yourself by changing the value of the
 security.tls.version.min preference to 1.

 Obviously, this isn't a convenient way to do this; if you are
 uncertain of your ability to do so, waiting for an upgrade might be a
 good move.  In the meantime, if you have serious security requirements
 and you cannot disable SSLv3, it might be a good idea to avoid using
 the Internet for a week or two while this all shakes out.

Thanks Nick. Interestingly, but mostly uselessly for us, Mozilla
published an extension[0] that does this. Unfortunately they say it
only works on = FF26 (without tweaking it) and Tor Browser 3.6 is
based on FF24.

For what it's worth, the extension[0] should work with the new Tor
Browser 4.0, but this is untested.

If you do make this config change, when you visit a site that only
supports SSLv3 or downgrades to it, you should receive a message that
says:

Cannot communicate securely with peer: no common encryption algorithm(s).

(Error code: ssl_error_no_cypher_overlap)
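
For persistence across restarts, the same preference can go into a user.js
file in the browser profile (a sketch; adjust the path to wherever your Tor
Browser profile lives):

    // append to user.js in the Tor Browser profile directory
    user_pref("security.tls.version.min", 1);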


For those wondering, this works exactly the same on Tails (1.1.2), too.
(and yes, they spelled it cypher).


I'm also curious what Mike, Georg, and the other TB Devs think. It
looks like we need to wait until November when SSLv3 will be disabled in
mainline Firefox[1].


[0] https://addons.mozilla.org/en-US/firefox/addon/ssl-version-control/
[1] 
https://blog.mozilla.org/security/2014/10/14/the-poodle-attack-and-the-end-of-ssl-3-0/

 
 best wishes to other residents of interesting times,
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] orWall 1.0.0 released!

2014-10-10 Thread Matthew Finkel
On Fri, Oct 03, 2014 at 09:37:42AM +0200, CJ wrote:
 Hello!
 
 just a small update regarding orWall: it's released 1.0.0!
 There's still *one* annoying issue regarding the tethering, but it
 should be OK next week. Just have to take some time in order to debug
 this for good.
 
 orWall provides now a brand new UI in order to be easier to handle.
 There's also an integrated help (as a first-start wizard we might call
 later on).
 There are many new features and improvements, like:
 
 - ability to disable all rules and let the device access freely the Net
 - for each app, the possibility to access some advanced settings
 allowing to bypass Tor, or tell orWall the app knows about proxies or Tor
 - better management for the init-script
 - better management for iptables rules
 - translations in French, German and Italian are almost done
 
 Any feedback from Tor/Orbot users interest me in order to improve
 orWall. I think the current release is pretty good, but as the main dev
 I'm maybe not that neutral regarding this statement ;).

Hey CJ,

This looks great, very nice! I'm playing around with it now, so I may
send some PRs in the future. :)

I was also wondering if you keep the content for orwall.org in version
control. Overall it's very good, but I'd like to help improve some
wording, if possible. I can always send you an email, if that is
better for you.

Thanks for working on this.

- Matt
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Why adding Guard-Exit (EE) node in Tor yeild more catch probability than adding guard and exit node separately?

2014-08-05 Thread Matthew Finkel
On Tue, Aug 05, 2014 at 10:40:17AM +0900, saurav dahal wrote:
 I am trying to observe the catch probability i.e. probability that a client
 selects my added guard as well as exit node while making a circuit.
 
 First I inserted certain number of guard nodes and certain number of exit
 nodes in Tor network and performed the simulation in Shadow simulator.
 After completion of simulation, I calculated the catch probability.
 
 Then again I added the same number of Guard-Exit (EE) node and perform the
 simulation and calculated the probability.
 
 I found that catch probability for EE nodes are much higher than those of
 guard and exit node separately.
 
 Could anyone please explain why this happened?

Hi Saurav,

Can you clarify what you mean by Guard-Exit nodes? If I understand
correctly, you ran a simulation where you had x nodes which had the
Guard flag and y exit nodes, then you ran another simulation where
you had (x+y) exit nodes which also had the Guard flag? Is this
correct?

Thanks,
Matt
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Why does requesting for bridges by email require a Yahoo or Gmail address?

2014-07-27 Thread Matthew Finkel
On Sun, Jul 27, 2014 at 02:09:52AM -0400, The Caped Wonderwoman wrote:
 
 The difficulty of obtaining a Riseup account may be prohibitive for a lot of 
 people, especially if they need a bridge quickly for whatever reason. 
 Anecdotally, I requested one under a different identity over a week ago and 
 have yet to hear back. In some situations, that's an eternity, and while I'm 
 sure it would go more quickly with an invite, that presupposes knowing 
 someone who has one to offer.
 

An important point, that I don't think was mentioned previously, is that
Riseup cannot be a substitute for gmail and yahoo mail. The latter
are two service providers which place very few restrictions on the
users. Riseup, on the other hand, only accepts people who either
honestly have similar political and social ideals or who lie. Granted,
if an adversary is trying to surveil or track users then they probably
won't have any problem with deception and lying during the application
process. However, this does raise the bar for entry into retrieving
the specific bridges which are only distributed to riseup users.

 As a side note, I'm always slightly surprised by how few mentions Zoho gets. 
 They're nowhere near perfect, but compared to Google, Yahoo, and such, at 
 least they don't mine your email for targeted advertising, they have a 
 business model where the user is the customer, and their privacy policy is 
 readable and honest (we'll log your IP and fingerprint your browser to see 
 where you go and what you do on our site, but we won't read your mail or 
 follow you around the Internet). http://www.zoho.com/privacy.html
 

I hadn't heard of them. The account creation process seems simple,
but sadly the captchas are not very difficult, either. I'm not saying
they're not usable, only that this seems like an easy target for
powerful adversaries. They also have offices in the US and China,
which could cause other problems.

Before we start whitelisting many new email providers, we should
define exactly which criteria we are looking for and what
percentage of the bridges we should allocate to the provider based
on which criteria they meet. We need a system that is usable by the
masses but also one that doesn't render the majority of the system
useless because someone/something was able to enumerate most of the
bridges.

 
 On July 26, 2014 3:16:03 AM EDT, Mirimir miri...@riseup.net wrote:
 On 07/25/2014 11:31 PM, grarpamp wrote:
 
 SNIP
 
  Do we underestimate the social net in oppressed that gives
  them awareness of tor, and to obtain binary and share bridge
  info in the first place?
 
 Maybe we do. But what about carelessness, poor judgment and the
 prevalence of informers? Wouldn't it be better to have a system that
 protected bridges by design?
 
  Or that oppressor will not burn $cheap govt SIM and IP army
  to get and block bridges from gmail to @getbridges?
 
 Right. Requiring hard-to-get email addresses does make it harder to get
 bridge IPs. But who does that impact the most, potential users or
 adversaries? Is there relevant evidence?
 
  This is difficult.
 
 Indeed.
 
 Please excuse the repetition, but DNS-based fast flux (Proximax) with
 selection-based dropping of domain names associated with bridge
 blocking
 is the best possibility that I've seen. Rather than trying to prevent
 adversaries from joining the system, it recursively isolates based on
 behavior.
 
 SNIP
 --
 tor-talk mailing list - tor-talk@lists.torproject.org
 To unsubscribe or change other settings go to
 https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
 
 - --
 Sent from my Android device with K-9 Mail. Please excuse my brevity. And the 
 cape.
 
 
 -- 
 tor-talk mailing list - tor-talk@lists.torproject.org
 To unsubscribe or change other settings go to
 https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Why does requesting for bridges by email require a Yahoo or Gmail address?

2014-07-26 Thread Matthew Finkel
On Fri, Jul 25, 2014 at 03:44:21PM +, obx wrote:
  Because we need an adequately popular provider that makes it hard to
  generate lots of addresses. Otherwise an attacker could make millions
  of addresses and be millions of different people asking for bridges.
 
 I know this is the reason, but there are still captchas, right?
 

Yes, they do rely on captchas and phone numbers. But luckily, in the
case of gmail, the captcha difficulty is variable. This in no way
solves the problem, but it's certainly better than most alternatives.

 Also, I think this list needs to be expanded.
 
  (Also, it recently became clear that it would be useful for people to
  access this provider via https, rather than http, so a network adversary
  can't just sniff the bridge addresses off the Internet when the user
  reads her mail.
 
 I'm not sure if gmail is safe against this recent adversary, regardless
 of the protocol.
 

Excluding the NSA/US Gov, I think gmail is the best
corporate-controlled service available, right now. This
opinion may change if contradictory information is released, but at
this time, for our purposes, I am happy requiring gmail.

Services like riseup are excellent, but we are abusing their systems
(a little), as well as potentially putting more work/stress/pressure on
the staff. I wish there was a way to combine the requirements and
rigor of riseup with the scalability of gmail. Alas, this isn't
available, as far as I know. Riseup is also special due to existing
personal relationships; it's possible we can expand the whitelist to other
providers such as autistici, but it will be a more involved process.

Suggestions and help are always appreciated.
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Why does requesting for bridges by email require a Yahoo or Gmail address?

2014-07-24 Thread Matthew Finkel
On Thu, Jul 24, 2014 at 10:29:49PM +, ideas buenas wrote:
 I don't trust Gmail nor Yahoo. Roger, found another way. No excuses, please.
 

This actually has very little to do with trust, and (as Roger said)
these providers were chosen because of the difficulty of creating new
accounts. Out of curiosity, what are you actually worried about?
Personally, it is sad that you need a phone number when you create
these accounts over Tor, but if retrieving bridges is important (and
it usually is), then there are usually ways to do this safely.

Another distribution method is currently being written and we will
write others in the future, but please help us provide another way
(yes, you, please help us if the current situation is unsatisfactory!).
The more people we can safely help, the better.

- Matt
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Why does requesting for bridges by email require a Yahoo or Gmail address?

2014-07-24 Thread Matthew Finkel
On Thu, Jul 24, 2014 at 06:37:27PM -0400, krishna e bera wrote:
 On 14-07-24 06:29 PM, ideas buenas wrote:
  I don't trust Gmail nor Yahoo. Roger, found another way. No excuses, please.
 
 I am curious why Riseup.net isnt in the list of popular and relatively
 secure email providers.  Also there must be several large european and
 asian free email providers, but someone from those regions might have to
 recommend/evaluate them.  How about yandex.ru for example?
 

See https://trac.torproject.org/projects/tor/ticket/11139

I haven't looked much at other providers recently. We want to keep the
whitelist as small as possible. We can only make the situation worse by
increasing the attack surface. The email distributor is already
significantly weaker than the website. We'd rather provide more
safe/secure distribution methods.

 Another good method is to get a bridge directly from someone you trust.

This is already done informally. Eventually we will try to make this
safer (to some extent)[0].

[0] https://trac.torproject.org/projects/tor/ticket/7520
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] messing with XKeyScore

2014-07-04 Thread Matthew Finkel
On Fri, Jul 04, 2014 at 09:36:23PM +, isis wrote:
 
 Eugen Leitl transcribed 5.8K bytes:
  
  http://blog.erratasec.com/2014/07/jamming-xkeyscore_4.html?m=1 
  
  Errata Security
  
  Advanced persistent cybersecurity
  
  Friday, July 04, 2014
  
  Jamming XKeyScore
  
  Back in the day there was talk about jamming echelon by adding keywords 
  to email that the echelon system was supposedly looking for. We can do the 
  same thing for XKeyScore: jam the system with more information than it can 
  handle. (I enumerate the bugs I find in the code as xks-00xx).
  
  
  For example, when sending emails, just send from the address 
  brid...@torproject.org and in the email body include:
  
  https://bridges.torproject.org/
  bridge = 0.0.0.1:443
  bridge = 0.0.0.2:443
  bridge = 0.0.0.3:443
  ...
  
  Continue this for megabytes worth of bridges (xks-0001), and it'll totally 
  mess up XKeyScore. It has no defense against getting flooded with 
  information like this, as far as I can see.
  
 
 
 Hi. I maintain and develop BridgeDB.
 
 For what it's worth, the released XKS rules would not have worked against
 BridgeDB for over a year now. I have no knowledge of what regexes are
 currently in use in XKS deployments, nor if the apparent typos are errors in
 the original documents, or rather typos in one of the various levels of
 transcriptions which may have occurred in the editing process. If these typos
 were at some point in the original rules running on XKS systems, then *no*
 bridges would have been harvested due to various faults. None.
 
 Ergo, as Jacob has pointed out to me, the regexes which are released should be
 assumed to be several years out of date, and also shouldn't be assumed to be
 representative of the entire ruleset of any deployed XKS system.
 
 I am willing to implement tricks against specific problems with them, mostly
 for the lulz, because fuck the NSA. But it should be assumed that the actual
 regexes have perhaps been updated, and that highly specific tricks are not
 likely to land.
 
 The ticket for this, by the way, was created by Andrea this afternoon, it's
 #12537: https://trac.torproject.org/projects/tor/ticket/12537

In reality it's a bit silly to try to mess with these rules if they are
n-years old. Based on the pics, simply requesting that all users use
brid...@bridges.torproject.org instead of brid...@torproject.org is the
easiest change that bypasses this specific set of rules. But, I
think it is more realistic that these minor points are moot and the
regexes were fixed long ago and that the ruleset more fully covers
Tor's distributors now.

This problem makes me sad on many levels, and I'm not opposed to
implementing mitigation techniques (within reason) based on the
rulesets, however we shouldn't do anything that will hurt our users nor
should be do anything that makes tor more difficult to use
(unfortunately this includes sending users bogus bridge addresses).

For the use-case of bridges, where a user tries to circumvent local
network interference and implicitly expects they're not fingerprinted
by the NSA, we are mostly failing right now.
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Security concerns with running an exit relay

2014-06-05 Thread Matthew Finkel
On Thu, Jun 05, 2014 at 05:58:46PM -0800, I wrote:
 S7R,
 
 That is a start.
 But where is the full and exemplary answer for someone like me who really 
 wants to get it right but doesn't know how to set the DirFrontPage up or the 
 NTP syncing?
 
 Roger says to try the tor-relay list but that has almost no chance of 
 satisfying the need. Responses to my questions have been condescending and 
 smartarse or illinformed from people speaking beyond their ability which is 
 worse.
 
 There ought to be a detailed guide for Tor being set-up on hired servers well 
 intending people answering the call for more Tor nodes and specifically exits.
 The EFF Challenge does the encouraging but points to the Tor site for what, I 
 find, is inadequate help.
 
 The presumption must be that the person does not know Linux well nor network 
 security.
 
 Robert

tl;dr Thank you for wanting to run a relay. If you think the
documentation is lacking specific information, or if it is confusing,
please say so. It usually doesn't change unless someone says something.


Hi Robert,

There are two unfortunate situations for which we need to account. 1)
It's actually very difficult for the current developers to know what
qualifies as a full and exemplary answer. The documentation can be
written, and maybe this should be, but the reality is that Tor doesn't
have the resources to explain in detail how someone should configure
their server. At this point tor runs on many different systems, but the
only truly supported, plug-and-play OS is Debian GNU/Linux. Roger already
mentioned it, but [0] does describe some basic configuration changes and
does have some good post-installation suggestions. Admittedly, it's
not perfect and is probably lacking some vital information, so if you
can provide some suggestions then that will help everyone.

The OperationalSecurity wiki page that Roger mentioned and that is
linked from [0] is more of an ideal situation. Some of it is absolutely
a good idea to follow (please!), but the most important parts are
generally basic tasks, such as keeping your OS up to date. If you are using
a VPS, or a similar shared hosting environment, then some of the
information will not be applicable, e.g. Physical Security and
Reliability. But that page will probably be confusing to users with
little experience; it isn't written in a way that helps someone learn
how to secure their system, which is sad. (Luckily it's on a Wiki,
so anyone can correct this ;) )

With regard to insufficient documentation about setting DirPortFrontPage
and maintaining a synchronized system clock, it may be a good idea to
add these to the "Step Four: Once it is working" section on [0].
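
For what it's worth, a minimal torrc sketch of the front-page part (the
port number and HTML path are just examples; the notice page itself is
something you write or adapt, e.g. from the sample exit notice shipped
in tor's contrib directory):

  # Serve a notice page to anyone who browses to the relay's DirPort.
  DirPort 9030
  DirPortFrontPage /etc/tor/tor-exit-notice.html

For the clock, installing the distribution's NTP package (e.g.
"apt-get install ntp" on Debian) and leaving it running is usually
enough; tor mainly needs the clock not to drift by minutes.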

Overall, a mix of [1] and [2] is a good combination; unfortunately it
may not be obvious which parts you want to follow from [1] and which you
want to follow from [2] (such as if you are using Debian rather than
Ubuntu). This is a great discussion to have on tor-relays. I'm sorry
that you had bad experiences in the past.

2) Expanding the Tor network is vitally important, but the network
itself and many Tor users have powerful adversaries. There is a balance
to strike between adding a large number of insufficiently secured nodes
and growing the network at a slower, safer rate. Maybe having a
pre-configured, installable OS would make this easier, but the network
also needs diversity (which this would hurt), and creating and
maintaining something like this is not currently feasible. If someone
within the community has the time and ability to write detailed,
step-by-step documentation on the Wiki, then that will be a great step
in the right direction, but until this happens, sites like [3] are
good places to start. Also note that if you aren't comfortable
administering a server then there are other ways you can help Tor and
the Tor network [4] (and the other "Help another way" options).

But, most importantly, if you think the documentation is lacking
specific information, or if it is confusing, please say so. It usually
doesn't change unless someone says something.


Really, though, despite everything else, thank you for wanting to run
a relay.

Thanks,
Matt

[0] https://www.torproject.org/docs/tor-relay-debian.html.en
[1] https://www.torservers.net/wiki/setup/server
[2] https://www.torproject.org/docs/debian.html.en#ubuntu
[3]
https://www.debian.org/doc/manuals/securing-debian-howto/index.en.html
[4] https://www.torproject.org/donate/donate-service.html.en
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Can someone please help me understand section 1.10 of the rendezvous spec

2014-06-03 Thread Matthew Finkel
On Tue, Jun 03, 2014 at 07:47:29PM +, Yaron Goland wrote:
 I'm trying to understand section 1.10 of 
 https://gitweb.torproject.org/torspec.git?a=blob_plain;hb=HEAD;f=rend-spec.txt
 
 
 It seems to say that Alice and Bob directly negotiate a shared symmetric key. 
 Is that true? Does it mean that all communications between Alice and Bob, in 
 the context of a Tor hidden service, are in fact encrypted end to end?
 
 
 I believe that https://www.torproject.org/docs/hidden-services.html.en 
 confirms this point when it says The rendezvous point simply relays 
 (end-to-end encrypted) messages from client to service and vice versa.
 
 
 But this point is really critical for a threat model I'm building so I just 
 want to make sure I've gotten things right. Could anyone confirm?


Hi Yaron,

The short answer is yes. This is how Alice and Bob establish a shared
secret key.

The longer answer is yes, section 1.10 describes how Alice (the client)
and Bob (the hidden service) establish shared secrets. After both Alice
and Bob possess the two respective halves of the Diffie-Hellman keys,
they use the shared secret and a key derivation function to expand the
key material into a byte sequence from which a 5-tuple is extracted (KH,
Df, Db, Kf, Kb). The first element (KH) is used to prove knowledge of
the shared secret, the second (Df) is used when computing the digest of
every cell from Alice to Bob, Db is the same but for cells from Bob to
Alice, Kf is the shared secret key used to {en,de}cipher cells from
Alice to Bob, and Kb is used to {en,de}cipher cells from Bob to Alice.
It sounds like these latter two keys, Kf and Kb, are what you are most
interested in.  Assuming the rendezvous point is unable to break the
security assumptions of the Diffie-Hellman handshake and the KDF is
secure, all messages sent between Alice and Bob are end-to-end
encrypted.
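
If it helps to see the expansion step concretely, here is a rough Python
sketch of the KDF-TOR-style expansion and the split into the 5-tuple.
The digest choice and the exact output sizes below are my reading of the
spec and should be treated as illustrative; rend-spec.txt and
tor-spec.txt are the authoritative references.

  import hashlib

  def kdf_tor(shared_secret, length):
      # Expand the DH shared secret by hashing it with an incrementing
      # one-byte counter and concatenating the digests.
      out = b""
      counter = 0
      while len(out) < length:
          out += hashlib.sha1(shared_secret + bytes([counter])).digest()
          counter += 1
      return out[:length]

  # KH, Df, Db are digest-sized; Kf and Kb are cipher keys.
  material = kdf_tor(b"g^xy from the handshake", 20 * 3 + 16 * 2)
  KH, Df, Db = material[0:20], material[20:40], material[40:60]
  Kf, Kb = material[60:76], material[76:92]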

Does this make sense?

HTH,
Matt
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Does Tor need to be recompiled *after* the openssl update?

2014-04-12 Thread Matthew Finkel
On Sat, Apr 12, 2014 at 05:04:27AM -0400, hi...@safe-mail.net wrote:
 For those of us who compile Tor from source, does Tor need to be recompiled 
 *after* the openssl update from our OS vendors?

Maybe. If you are upgrading OpenSSL from a much older version then you
may need to recompile Tor (so it knows about the newer version and uses
the correct headers and such), but if you're simply upgrading from, say,
1.0.1e to 1.0.1g then you should not need to recompile Tor. If you
restart Tor it should use the newer version of openssl without issue.
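
(One quick sanity check, assuming tor is dynamically linked and on your
PATH: confirm which libssl it actually loads after the upgrade and
restart. Paths and library names vary by distribution.)

  $ ldd "$(command -v tor)" | grep -i ssl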
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] IMPORTANT: Heartbleed vulnerability impact on Hidden Service experiment

2014-04-12 Thread Matthew Finkel
On Sat, Apr 12, 2014 at 12:16:18PM +0300, s...@sky-ip.org wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA256
 
 Hi,
 
 After seeing the challenge done by CloudFlare, to setup a server open
 to the internet with that vulnerable OpenSSL version so everyone could
 try and get its private keys (to see if it's actually possible), after
 speaking earlier with people in #tor IRC channel, we think it's a good
 way to find out for sure if the Hidden Services could have been
 compromised or not. And if yes, make a more serious and visible banner
 to notify them. Because so far nobody has changed the Hidden Service
 address, from all the Hidden Services I am using.
 

Where do you propose the more serious and visible banner be placed?
With all of the attention that heartbleed attracted in the mainstream
media, I would think (but would probably be wrong) that relay and
hidden service operators are aware of the vulnerability and the fact
that their keys were potentially compromised.

 I don't want them to be exposed to risks and when something happens,
 yet another thing which will be blamed on Tor.
 
 So, to developers and special reference to arma, proposition:
 - -- Can we setup a Tor circuit, separate from the Tor network, or
 within it if it's better this way (if we can choose all the relays in
 a circuit via torrc), a circuit in which all the relays are running
 the vulnerable version of OpenSSL with heartbeats enabled?
 

I'm not sure this will accomplish exactly what you think it will
accomplish. Hidden services are merely one-last-proxy, so as far as we
know, the way to retrieve a hidden service's private key is the same way
as retrieving a relay's private key: by connecting to its OR Port and
establishing (most of) a TLS connection. If you connect to a hidden
service and attempt to establish a TLS connection then you're
connecting to that-thing-behind-the-hidden-service (whether that's
apache, nginx, sshd, etc). Due to the way hidden services are designed,
a non-local user/attacker should not be able to interact with the
instance of Tor that runs the hidden service (where "local" in this
situation includes anyone who can directly connect to the server).

But to answer your question more directly, your proposal won't be
extremely easy to do. In order to establish a connection only through
relays using vulnerable versions of OpenSSL it will require some
modifications to Tor on th hidden service-side to guarantee that it
builds such a circuit. On the client side you can use a controller
(Stem, txtorcon(?)) to choose your hops. Is there a reason you
specifically want this, though? Is there added benefit when every hop is
vulnerable?
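
For reference, a rough sketch of the client side with Stem (this assumes
a local ControlPort on 9051 with authentication already configured, and
FP1/FP2/FP3 are placeholder relay fingerprints, not real ones):

  from stem.control import Controller

  with Controller.from_port(port=9051) as controller:
      controller.authenticate()
      # Ask tor to build a circuit through exactly these relays.
      circ_id = controller.new_circuit(["FP1", "FP2", "FP3"],
                                       await_build=True)
      print("Built circuit", circ_id)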

 I have a server and offer it to be the Hidden Service and everyone can
 test and exploit the heartbleed vulnerability and prove if they
 managed to get the private key.
 

Great! If I'm wrong then an attacker only needs the hidden service
address and port number to be able to retrieve the private key. If I'm
not then the attacker really needs your IP address and OR Port (if
you're a relay).

 If you think the experiment is worth it email me directly and let me
 know what do i have to do. I am sure many others will join.
 
 
 s7r
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v2.0.17 (MingW32)
 
 iQEcBAEBCAAGBQJTSQRiAAoJEIN/pSyBJlsRqe4H/3JB7136euT/3tQLJqMjHqZS
 OKyptAUFg6ZnOqGeOnacAqxz79XfNYXDDV8Bxh2erWpVvAIxQjzJFatKtUdjzGBG
 UKHQyNuDRifbaOSAoFcf93hfWvS387I3YMAhHWR5+yQjcucGpcECh8gmlOJNnsZD
 Zt1U1MjzQJfY6t9J5PXMvNDIYXhYE2DYtAmVXRDDNYKssX18Cc/qDid1s1t5OjGr
 wnWWK6lnZ64VJx+U8wsYutLYVUzrXOyp+POK6j8rM22vJlbrdbtGRGscCyaUGVTi
 L+cvFodxn16mL+x+7AjVa1ReHxu0KYXW+3l94Kil9qu2LiW0sPTG358zIOTb1as=
 =zrv8
 -END PGP SIGNATURE-
 -- 
 tor-talk mailing list - tor-talk@lists.torproject.org
 To unsubscribe or change other settings go to
 https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Viewing current Dir Auth votes

2014-04-12 Thread Matthew Finkel
On Sat, Apr 12, 2014 at 11:59:32AM -0400, Michael Wolf wrote:
 Is there a way to view just the most recent directory authority votes?
 I found the archives here:
 https://metrics.torproject.org/data.html
 
 April is 1.5 GB, which is a lot when I only want to see set of votes.
 If there is currently no way to download only the most recent votes, it
 would be a really welcome addition :)
 
 The reason I want only the most recent votes is that, after regenerating
 my keys, my relay has gone almost four days and is still showing
 Unmeasured for bandwidth.  I'm just curious which bwauths have yet to
 measure me.  I think a day or two ago, Moria was the only one to have
 completed a measurement.

It sounds like you're looking for consensus-health[0] :)

[0] https://consensus-health.torproject.org/
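
If you ever want a raw vote rather than the rendered page, each
authority also serves its own current vote over its DirPort; something
like the following should work (replace the placeholders with a real
authority address and DirPort from the consensus):

  $ curl http://<authority-address>:<dirport>/tor/status-vote/current/authority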
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Does Tor need to be recompiled *after* the openssl update?

2014-04-12 Thread Matthew Finkel
On Sat, Apr 12, 2014 at 05:51:46PM +0200, Nicolas Vigier wrote:
 On Sat, 12 Apr 2014, Matthew Finkel wrote:
 
  On Sat, Apr 12, 2014 at 05:04:27AM -0400, hi...@safe-mail.net wrote:
   For those of us who compile Tor from source, does Tor need to be 
   recompiled 
   *after* the openssl update from our OS vendors?
  
  Maybe. If you are upgrading OpenSSL from a much older version then you
  may need to recompile Tor (so it knows about the newer version and uses
  the correct headers and such) but if you're simply upgrading from, say,
  1.0.1e to 1.0.1g then you should not need to recompile Tor. If you
  restart Tor it should use the newer version of openssl without issue.
 
 Unless tor was linked statically to openssl, using for instance the
 --enable-static-openssl or --enable-static-tor configure options.
 
 Checking that tor is not linked statically can be done with ldd:
 
  $ ldd /usr/bin/tor
  [...]
  libssl.so.10 => /usr/lib64/libssl.so.10 (0x7f6081b5c000)

Yes, this is a great point that I forgot to mention. Thanks!
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Tor-ramdisk 2014 20140309 released

2014-03-22 Thread Matthew Finkel
On Sun, Mar 09, 2014 at 06:38:23PM -0400, Anthony G. Basile wrote:
 Hi everyone
 
 I want to announce to the list that a new release of tor-ramdisk is out. 
 Tor-ramdisk is an i686, x86_64 or MIPS uClibc-based micro Linux 
 distribution whose only purpose is to host a Tor server in an 
 environment that maximizes security and privacy. Security is enhanced by 
 hardening the kernel and binaries, and privacy is enhanced by forcing 
 logging to be off at all levels so that even the Tor operator only has 
 access to minimal information. Finally, since everything runs in 
 ephemeral memory, no information survives a reboot, except for the Tor 
 configuration file and the private RSA key, which may be 
 exported/imported by FTP or SCP.
 
 Changelog:
 
 This release bumps tor to version 0.2.4.21 and the kernel to 3.13.5 plus 
 Gentoo's hardened-patches.  All other components are kept at the same 
 versions as the previous release.   We also add haveged, a daemon to 
 help generate entropy on diskless systems, for a more cryptographically 
 sound system.  Testing shows that previous versions of tor-ramdisk were 
 operating at near zero entropy, while haveged easily keeps the available 
 entropy close to 9000 bits. Upgrading is strongly encouraged.
 

Hi!

Is there a good way to send you suggestions for the build script? There
isn't a trac component for tor-ramdisk; should one be created for this?

Thanks,
Matt
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Pissed off about Blacklists, and what to do?

2014-02-07 Thread Matthew Finkel
On Thu, Feb 06, 2014 at 10:46:32PM -0500, grarpamp wrote:
 So many sites that we all use are now blacklisting Tor. It's unclear
 whether it is via their use of tools that blindly utilize blacklists,
 or if they are making a conscious choice to deny Tor users. As far
 as I'm concerned, we are all legitimate users of their services and
 quite frankly, I've had enough... exactly the same as I'm sure you
 have all had.
 

Contacting Project Honey Pot has been on my TODO list for a while now
with the hope that some progress can be made, such as having them handle
exit nodes specially. The anti-spam list ecosystem is much larger than
Project Honey Pot, but it would be a start.

 What can we do, as a collective social entity, to put an end to
 this madness? It is not as if we, as Tor users, present any more
 of a load upon their help/fraud/abuse desks than the wider open
 internet as a whole, even if perhaps adjusted for market share
 of source IP's. So what can we do?
 

Paul's mail was very much on point with this. However, Tor has a
perception/PR problem for this. ipset[0] is a prime example of this.
Regarding its listing of numerous blacklists, and explicitly the list of
Tor exit nodes, it says "...it certainly reduces comment spam on a
WordPress blog and there have been claims from websites owners that
their servers had been attacked through Tor."

[0]
http://trick77.com/2013/10/12/using-ipset-to-ban-bad-ip-addresses-from-project-honey-pot-spamhaus-tor-openbl-and-more/

 This is in re: Hulu (whis is presumably authenticated)... but really,
 it applies to any service which we, the legitimate users of Tor,
 are denied access to.
 
 It has simply gone too far and we should be putting effort into
 reversing this trend by interacting with these deniers to become
 permitters.
 
 What do we do?

Basically what Lunar said.

A more active and vocal community may help. Passively accepting the
current situation doesn't seem to be working. If the services don't
know that legitimate Tor users exist in a significant quantity and
that they are worthwhile to support, then there's no incentive to try.

- Matt
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] question about bridge relays

2014-01-27 Thread Matthew Finkel
On Mon, Jan 27, 2014 at 01:09:39AM -0500, Roger Dingledine wrote:
 On Sun, Jan 26, 2014 at 04:43:02PM -0600, Kevin Nestor wrote:
 all of your posts and videos about setting up Tor to use a bridge rely
 on an older version of bridge that uses vidalia separately.
  
  Now that everyone can only download the Tor browser bundle that opens
 as a single browser (mine being a mac), you can not get anything in the
 settings menu that gives you the option to "find bridges."  What can
 you do to find a local bridge?
 
 The find bridges button was broken on Vidalia anyway, ever since
 https://bridges.torproject.org/ added a captcha to make it harder for
 bad guys to automate pretending to be lots of people and learn lots of
 bridges addresses.
 
 Now the right answer is to go to https://bridges.torproject.org/ and
 learn some bridges. Then you can either choose 'configure' rather than
 'connect' when you start TBB the first time, in which case it will walk
 you through adding the bridges you found, or if you've already started
 TBB, go to 'open network settings' in your Torbutton (the green onion near
 the URL bar) and select 'my ISP blocks connections to the Tor network'.
 

To add to this, be aware that if you go to
https://bridges.torproject.org/ then you will likely receive two types
of bridges. There are vanilla bridges, and there are bridges which the
Tor Project calls Pluggable Transports. If you copy the bridges from the
website into the Tor Browser Bundle, as Roger described, you may only be
able to use some of them, which is fine; you only need one bridge to
work. If you would like all of the bridges on the website to work then
you will want to download the Pluggable Transport-capable Tor Browser
Bundle. It's a little larger in size but it provides additional
functionality that is important if you are somewhere that censors Tor
connections.

The pluggable transport bundle is available from [0]. If you are using a
Mac then you will want to choose one from the first 13 links, the links
that contain osx32 in the file name, and you'll want to choose the link
that provides your preferred language, if it's available (de = German,
en-US = US English, es-ES = Spanish (Spain), fa = Farsi, etc).

There is also a FAQ page which will hopefully answer some of your
questions. One of the sections[1], "How do I use pluggable transports?",
provides instructions and some bridges to help you get started. If
possible, follow Roger's instructions above to add the 'obfsproxy'
bridge lines in that section, if that doesn't work then try to follow
the instructions on that webpage.
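
For reference, the bridge lines you get from the website and paste into
TBB usually look something like the following (the addresses and
fingerprints here are made-up placeholders, not working bridges):

  obfs3 198.51.100.7:35000 0123456789ABCDEF0123456789ABCDEF01234567
  192.0.2.11:443 89ABCDEF0123456789ABCDEF0123456789ABCDEF

The first form is a pluggable transport bridge (note the transport name
in front), the second is a vanilla bridge; only the PT-capable bundle
can use the first kind.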

The announcement that was made for this pluggable transport-capable TBB
can be found on [2]. It provides some more links and additional
information (including the two links I mentioned above).

Sorry if these instructions are difficult to follow or understand.
Whether or not you need to use the pluggable transport-capable bundle
really depends on where you are. If you don't know if you need to use
pluggable transports then try following Roger's directions first. If you
are still unable to connect to the Tor network then try to follow the
instructions for the pluggable transport-capable bundle.

[0] https://people.torproject.org/~dcf/pt-bundle/3.5-pt20131217/
[1] https://www.torproject.org/docs/faq#PluggableTransports
[2] https://blog.torproject.org/blog/tor-browser-bundle-35-released

 If somebody reading this wants to make some updated screenshots for
 https://www.torproject.org/docs/bridges#UsingBridges
 that would be swell.
 

That really would be swell. The easier we can make this, with
step-by-step visual instructions, the better.

I hope this helps,
- Matt
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] still unable to reach StartPage or Ixquick

2013-10-07 Thread Matthew Finkel
On Mon, Oct 07, 2013 at 06:19:57PM -0500, Joe Btfsplk wrote:
 On 10/7/2013 5:27 PM, Roger Dingledine wrote:
  On Mon, Oct 07, 2013 at 05:18:17PM -0500, Joe Btfsplk wrote:
  Haven't been able to reach StartPage or Ixquick sites or do search
  for a week or more, in TBB 2.3.25-12.  Can't even reach their home
  pages through another search engine, like Google or Yahoo.
 
  ** Are others able to access these 2 search engines in the *same TBB
  version* as I'm using?  If so, maybe I need to re-extract the TBB
  files & start over.
 
  TBB has been closed / restarted many times since problem began.
  Tor 0.2.3.x is not so fun to use these days:
  https://blog.torproject.org/blog/how-to-handle-millions-new-tor-clients
 
  I recommend trying the TBB 3.0a4 (assuming you're not on Win XP and you
  don't need pluggable transports):
  https://blog.torproject.org/category/tags/tbb-30
 
  We'll hopefully declare Tor 0.2.4.x stable real soon now. We keep getting
  distracted though. Soon I hope! :)
 
 Thanks, but - Whoa!  Tor 0.2.4.x isn't declared stable, so skip it - go 
 straight to 3.0a?  I know it's got a lot of "unproven as rock solid"
 features, but what about my secret double naught spying duties?

So these are actually two version numbers for two different programs.
0.2.4.x refers to the tor version which is packaged in TBB, and 3.0a
refers to the TBB version. Your current version of TBB just happens to
have a very similar version number to tor's. TBB will be jumping to 3.x
soon; as Roger said, tor will remain on the 0.2.x.y path.

 But, no XP here.
 
 Anyway, what about others reaching Startpage & Ixquick using 2.3.25-12?  
 If they're largely unreachable for others, no point in worrying about 
 that - plenty of other stuff to occupy myself.

I actually experienced this yesterday. I was too busy to troubleshoot
the connection issue, but it appeared that the request timed out. This
happened for both startpage and DDG, but not the other websites I
loaded. T'was strange, but probably just circuit dependent. In short,
if this is what you saw then it isn't only you, but I don't know why
it's happening.
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] obfsproxy failure: obfs3

2013-08-13 Thread Matthew Finkel
On Tue, Aug 13, 2013 at 04:11:18PM -0700, lee colleton wrote:
 I'd like some help getting obfs3 set up. I'm seeing an error when I attempt
 to start an obfsproxy bridge:
 
 Aug 13 06:49:17.000 [notice] Tor 0.2.4.15-rc (git-f41c20b344fb7359)
 opening new log file.
 Aug 13 06:49:17.000 [notice] Configured hibernation.  This interval
 began at 2013-08-13 00:00:00; the scheduled wake-up time was
 2013-08-13 00:00:00; we expect to exhaust our quota for this interval
 around 2013-08-14 00:00:00; the next interval begins at 2013-08-14
 00:00:00 (all times local)
 Aug 13 06:49:18.000 [warn] Server managed proxy encountered a method
 error. (obfs3 could not setup protocol)
 Aug 13 06:49:18.000 [warn] Managed proxy at '/usr/bin/obfsproxy'
 failed the configuration protocol and will be destroyed.
 
 ...
 
 
 Here's the config:
 
 # A unique handle for your server.
 #Nickname ec2$CONFIG$RESERVATION
 Nickname gcedemo
 
 ContactInfo Lee Colleton l...@colleton.net
 
 # Set SocksPort 0 if you plan to run Tor only as a server, and not
 # make any local application connections yourself.
 SocksPort 0
 
 # What port to advertise for Tor connections.
 ORPort 443
 
 # Listen on a port other than the one advertised in ORPort (that is,
 # advertise 443 but bind to 9001).
 ORListenAddress 0.0.0.0:9001
 
 # Start Tor as a bridge.
 BridgeRelay 1
 
 # Run obfsproxy
 ServerTransportPlugin obfs2,obfs3 exec /usr/bin/obfsproxy --managed
 ServerTransportListenAddr obfs2 0.0.0.0:52176
 ServerTransportListenAddr obfs3 0.0.0.0:40872

Hi Lee!

Thanks for running a bridge! How did you install obfsproxy? If you
installed it using your distribution's package manager, which distribution
do you use?
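
If you aren't sure, these generic packaging queries usually show where
the binary came from on a Debian-based system (nothing about them is
obfsproxy-specific):

  $ dpkg -S /usr/bin/obfsproxy   # owned by a distro package?
  $ pip show obfsproxy           # installed via pip?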

Thanks,
Matt
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] So what about Pirate Browser?

2013-08-10 Thread Matthew Finkel
On Sat, Aug 10, 2013 at 05:20:47PM +, adrelanos wrote:
 Jerzy Łogiewa:
  Hello!
  
  It looks that The Pirate Bay will enter secure browser market,
  http://torrentfreak.com/pirate-bay-releases-pirate-browser-to-thwart-censorship-130810/
http://piratebrowser.com/
  
  I do not understand why they do the same as TBB. Anyone know?
 
 With the limited information at hand: they don't.
 
 TBB provides anonymity and circumvention.
 
 PirateBrowser focuses on circumvention, dropping anonymity for other
 preferences. As long they clearly advertise it as such, I have no
 problem with that.

I think I'm confused about what they're actually trying to accomplish.
It sounds like they took TBB, replaced TorBrowser with the latest
version of Firefox portable (with the addition of foxyproxy), updated
the configs, and added some bookmarks. They also added some magic that
allows you to circumvent censorship that certain countries such as
Iran, North Korea, United Kingdom, The Netherlands, Belgium, Finland,
Denmark, Italy and Ireland impose on their citizens.

The one thing I always think about when I hear about the comparison of
censorship circumvention vs. anonymity[0] is something I once heard (maybe
from Jake or Roger, I apologize for not having a citation), but it went
something like "When you ask someone in China why they choose to use
Tor, they do not say it is to circumvent the strict censorship in their
country. They say that they use it for the anonymity aspect, because if
the government can censor what they are doing, then that means the
government *knows* what they are doing. As a result, if they can prevent
the government from tracking them, then they are also able to access
sites that the government does not want them to access."

Assuming I recall the basis of the quote correctly, this is an extremely
important idea that must be understood when dealing with censorship.
Going back to the PirateBrowser, if they are stripping out all of the
fantastic work Mike has done to preserve a user's anonymity (and the
packaging Erinn has done) and they replace it with Portable Firefox, I
don't think it can reach the full potential of "No more censorship!"
that they proclaim. However, I do think it is worth looking at what
magic they use in Iran and North Korea. Is it more than using Tor and a
hidden service?

Just some thoughts,
- Matt

[0] http://piratebrowser.com/#faq
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] [Tails-dev] secure and simple network time (hack)

2013-04-12 Thread Matthew Finkel
I don't really understand your reservation about this project. It's
reasonable to want authenticated time from a non-webserver of one's
choice. Depending on your environment, tlsdate is complementary to the
various other programs. You can (and will) use whatever you decide fits
your needs, but please don't disparage a valid project because it
segfaults after a while. It's a work in progress; it's better to
contribute useful information than to complain.

On Fri, Apr 12, 2013 at 02:43:13PM +0300, Maxim Kammerer wrote:
 On Fri, Jul 20, 2012 at 3:07 AM, Jacob Appelbaum ja...@appelbaum.net wrote:
  Allow me to be very explicit: it is harder to parse an HTTP Date header
  than properly than casting a 32bit integer and flipping their order. The
  attack surface is very small and easy to audit.
 
 Just discovered that tlsdated in tlsdate-0.0.6 is dying with a
 segmentation fault after a while. Not surprised after seeing the code
 — my experimentation with this gimmick is finally over. Turns out that
 “throw something together and wait for patches” is not a sound
 development approach.
 
 --
 Maxim Kammerer
 Liberté Linux: http://dee.su/liberte
 ___
 tor-talk mailing list
 tor-talk@lists.torproject.org
 https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk
___
tor-talk mailing list
tor-talk@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Can't connect to http://jhiwjjlqpyawmpjx.onion/roundcube/

2013-03-27 Thread Matthew Finkel
On Wed, Mar 27, 2013 at 03:15:45PM -0500, Gramps wrote:
 When I attempt to logon to
 
  http://jhiwjjlqpyawmpjx.onion/roundcube/
 
 I get an error message
 
  DATABASE ERROR: CONNECTION FAILED!
 
  Unable to connect to the database!
  Please contact your server-administrator.
 
 Does anybody have any idea what is wrong?

You'll have to contact Tor Mail directly. It looks like the connection to
the database failed...from my reading of the error.

 
 Many TIA!
 Gramps

Matt
___
tor-talk mailing list
tor-talk@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Please help w...@lksne.go-plus.net

2012-12-29 Thread Matthew Finkel
On Sat, Dec 29, 2012 at 07:56:55PM +, W. Alksne wrote:
 As I cannot get the Tor Browser to work I wanted to uninstall it BUT
 nowhere can I find an 'uninstall option'.
 I would appreciate your advice or recommendations.
 Faithfully,
 Wolfram Alksne.

Hi Wolfram,

The Tor Browser Bundle isn't actually installed. If you want to
uninstall it you only need to delete its directory/folder.

If you'd like some help getting it working, please feel free to join us
in the irc channel #tor on irc://irc.oftc.net:6667 (or with SSL on port 6697)
or send an email to h...@rt.torproject.org providing as many details as
possible (version of TBB, operating system (such as Windows XP, Mac OS X 10.7,
Ubuntu Linux 10.4, etc), any error messages you received, etc).

I hope this helps.

All the best,
Matt
___
tor-talk mailing list
tor-talk@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] torsocks is broken and unmaintained

2012-12-01 Thread Matthew Finkel
On 12/01/2012 06:14 PM, John Case wrote:
 
 On Fri, 2 Nov 2012, grarpamp wrote:
 
 I don't agree. torsocks is still useful to prevent identity correlation
 through circuit sharing. Pushing all traffic through Trans- and DnsPort
 is not the answer.

 Also, I don't want all of my applications using Tor -- just some of
 them. Using Tails or TransPort wouldn't allow me to do this.

 Some people do run multiple Tor's, jails, packet
 filters, and apps. Largely to get around current
 Tor limitations. Those people don't have this
 singularity problem/position that you assume.
 Torsocks is not required in that instance.
 
 
 There has to be a better way to simply make an ssh connection over ToR.
 
 I don't want to run all of tails just to make a single ssh connection (2
 minutes to properly fire up vmware, massive cpu use, laptop gets hot,
 fans running, everything else comes to a crawl).
 
 I don't want to run a full-blown tor relay installation with all the
 bells and whistles and then maintain that full blown environment, watch
 advisories, run periodic tests, test for dns leakage, blah blah.
 
 I want this:
 
 cd /usr/ports/net/torssh
 make install
 torssh u...@host.com
 
 Am I the only person that wants/needs this ?
 
 I understand that you can't go down the road of making a custom tor app
 for every possible client app that people want to run, but come on ...
 ssh ? If there was just a single app to do this for, it would be that,
 right ?

The real issue is that once they start providing torified forks of
certain projects, where do they draw the line? torFirefox for TBB, sure
(which may be coming down the pipe anyway)! torssh, why not? Tor Project
is already stretched thin which means third party devs would have to
implement most of the work and who would be able to audit all of them?

Torification integrated into these projects would be a usability
god-send for most people. But that would ultimately be its undoing. At
this point many users don't understand that anonymity is not as simple
as flipping a switch; it's so much more complex than that. One possible
advantage of Tor being a little complex is that it makes people realize
that ensuring one's safety/privacy online is *not* easy, and it's possible
that increasing the usability too much could put more people at risk.

In addition to this, if different projects have tor integrated then that
would mean each one would have to keep state separately and each would
most likely have different guard nodes and such. The result, again,
would be putting the users more at risk.

I understand the appeal of such packages, but if you think about this
then you'll see that running a single daemon and channeling connections
through it probably is the best and most resource-efficient way. Just
think: if x programs you wanted to run were each torified, then you
would essentially be running x instances of tor, which is not ideal.

For now, using built-in proxy support for an application, or torsocks if
it doesn't have it, is the best option we have, and we still need to be
careful when we use any built-in proxy option.
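
As a concrete example for the ssh case upthread, either of these gets
close to the wished-for "torssh" one-liner while still sharing the one
local tor daemon (both assume a tor client already listening on its
default SocksPort 9050; the -X/-x SOCKS flags are OpenBSD netcat's):

  $ torsocks ssh user@example.com
  $ ssh -o ProxyCommand='nc -X 5 -x 127.0.0.1:9050 %h %p' user@example.com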
___
tor-talk mailing list
tor-talk@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] torsocks is broken and unmaintained

2012-11-03 Thread Matthew Finkel
On 11/03/2012 08:38 PM, Nick Mathewson wrote:
 On Fri, Nov 2, 2012 at 11:10 PM, Matthew Finkel 
 matthew.fin...@gmail.comwrote:
 
 On 11/02/2012 07:36 PM, Jacob Appelbaum wrote:
 Nick Mathewson:
 On Fri, Nov 2, 2012 at 1:34 PM, adrelanos adrela...@riseup.net wrote:


 Could you blog it please?


 I'd like to see more discussion from more people here first, and see
 whether somebody steps up to say, "Yeah, I can maintain that here," or
 whether somebody else who knows more than me about the issues has
 something to say.  Otherwise I don't know whether to write a "looking
 for maintainer" post, a "who wants to fork" post, a "don't use
 Torsocks, use XYZZY" post, or what.


 If Robert wants someone to maintain it, I'd be happy to do so. I had
 wanted to extend it to do some various things anyway. I think it would
 be a suitable base for a bunch of things I'd like to do in the next year.

 All the best,
 Jake


 I saw this thread earlier but didn't have a chance to reply. I was
 thinking about volunteering to patch it up and maintain it if no one
 else wanted to take it on, also, but if you want to take the lead on it
 then I'm more than happy to help you wherever possible...assuming this
 is the direction that's decided upon.

 
 Okay, sounds like we've got some enthusiasm.  Let's get started.  I
 volunteer to review commits if people ask me to, and suggest that
 asking me to review stuff for a while might be a smart idea.  I just gave
 myself commit access to the g...@git-rw.torproject.org repo too, in case
 that helps.  I am not planning to be a primary author here.

Thanks for adding one more thing to your plate! I know Jake can handle
this but the more eyes we have looking at these initial changes the
better it'll be.

 
 Given the amount of people asking us to apply and/or warning us that we
 mustn't apply particular patches, I'm going to suggest the following
 principles for a while:
   * LET'S START MINIMAL.  Let's stick to doing only the very major bugfixes
 and obvious fixes for at least the next release or two, so that something
 usable comes out.

Agreed. To be honest, I haven't really looked at the code too much, so
I'll start diving into that in a bit. Can we get a trac component added
(if there isn't one already...I haven't checked) so we can track
progress and such?

   * NO ARCHITECTURAL ASTRONAUTICS. I'm always tempted when I come to a
 codebase for the first time to refactor the heck out of it.  Let's avoid
 doing that till we have a little experience with this codebase.  There
 isn't all that much here: let's

Yes...let's! :)

Was there supposed to be more to that sentence?

   * LOVE MEANS GET TESTED. If at all possible, we should make this codebase
 easier to test (right now it wants you to install before testing), and
 improve the coverage of the tests so that (if as people suspect) we're
 likely to break things on one platform when we fix them on another, we can
 at least find out fast whether a patch works everywhere.
 

Certainly sounds like a good idea. I'm going to have to familiarize
myself with some of the other *nix platforms it does/should support.
Just looking through the current issues on google code, for example, I
don't know the internals of OSX well enough *yet* to know if [1] is even
possible. But once we've compiled a list of all the current critical
patches, from Debian and others (assuming such a list doesn't exist
already), then we can start applying, testing, revising, etc. :)

[1] https://code.google.com/p/torsocks/issues/detail?id=41

- Matt

___
tor-talk mailing list
tor-talk@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] torsocks is broken and unmaintained

2012-11-02 Thread Matthew Finkel
On 11/02/2012 07:36 PM, Jacob Appelbaum wrote:
 Nick Mathewson:
 On Fri, Nov 2, 2012 at 1:34 PM, adrelanos adrela...@riseup.net wrote:


 Could you blog it please?


 I'd like to see more discussion from more people here first, and see
 whether somebody steps up to say, "Yeah, I can maintain that here," or
 whether somebody else who knows more than me about the issues has something
 to say.  Otherwise I don't know whether to write a "looking for maintainer"
 post, a "who wants to fork" post, a "don't use Torsocks, use XYZZY" post,
 or what.

 
 If Robert wants someone to maintain it, I'd be happy to do so. I had
 wanted to extend it to do some various things anyway. I think it would
 be a suitable base for a bunch of things I'd like to do in the next year.
 
 All the best,
 Jake
 

I saw this thread earlier but didn't have a chance to reply. I was
thinking about volunteering to patch it up and maintain it if no one
else wanted to take it on, also, but if you want to take the lead on it
then I'm more than happy to help you wherever possible...assuming this
is the direction that's decided upon.

Matt
___
tor-talk mailing list
tor-talk@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] VPS provider

2012-09-25 Thread Matthew Finkel
On 09/25/2012 01:42 PM, Flo wrote:
 +1
 This.
 
 The problem is especially on container-virtualizations like OpenVZ is
 that the admins of the hostnodes must just type something like 'vzctl
 enter 123' and they have a shell in your VPS...
 
 So you should have at least Xen/KVM where you can use encryption

Yes! Sadly there aren't too many KVM hosts, but providers are slowly
offering more options. Xen has been stable for longer, so there are
more options available for that (Linode, et al.).

I personally have KVM boxes from http://buyvm.net/ and
http://arpnetworks.com/; at times they leave something to be desired
with regard to performance, but overall I have no complaints related to
service or uptime. I don't currently use them for Tor-related purposes,
but if they're not going to serve as exit nodes, anything else shouldn't
cause a problem (except bandwidth, as was noted). I'm planning to
contact them in the future to determine their stance on Tor and see if I
can move forward with some ideas I have, but that remains to be seen.
___
tor-talk mailing list
tor-talk@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] VPS provider

2012-09-25 Thread Matthew Finkel
On 09/25/2012 04:00 PM, irregula...@riseup.net wrote:
 On 09/25/2012 10:18 PM, Matthew Finkel wrote:
 On 09/25/2012 01:42 PM, Flo wrote:
 +1
 This.

 The problem is especially on container-virtualizations like OpenVZ is
 that the admins of the hostnodes must just type something like 'vzctl
 enter 123' and they have a shell in your VPS...

 So you should have at least Xen/KVM where you can use encryption

 Yes! Sadly there aren't too many KVM hosts, but providers are slowly
 offering more options. Xen has been stable for a longer amount of time,
 so there are more options available for that, Linode, et al.

 I personally have KVM boxes from http://buyvm.net/ and
 http://arpnetworks.com/, at times they leave something to be desired
 with regard to performance, but overall I have no complaints related to
 service or uptime. I don't currently use them for Tor related purposes,
 but if they're not going to serve as exit nodes, anything else shouldn't
 cause a  problem (except bandwidth, as was noted). I'm planning to
 contact them in the future to determine their stance on Tor and see if I
 can move forward with some ideas I have, but that remains to be seen.

 
 Hey people
 
 I was under the impression that everyone having physical access to a
 running machine can get access to the operating system as well.
 Encryption makes no difference for a running computer, since cold boot
 attack may be used to dump the keys from memory. What's more, in a
 virtualization environment i guess that would be easier.
 
 If the above statements are generally correct, then you should trust a
 VPS provider, as long as you trust the administrator of the host machine
 *and* everyone else having physical access to it (for example the
 datacenter).

The above is true, as far as I know, for the most part, but it
really depends on the situation and the purpose of the VPS. Using a
container-like VM provides very little guarantee as to who may have
access to data contained within. As you said, this is not limited to the
immediate VPS provider's staff, either.

Similarly, for the emulation implementations the data is nearly never
100% secure. However, the information that is stored on this type of
system is a key factor in whether or not it is safe enough to use a
third-party provider. It's not the case that the data is secure when
using KVM/Xen vs OpenVZ/Linux-VServer, only that it is more secure. ;)
___
tor-talk mailing list
tor-talk@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] http://torbrowser.sourceforge.net

2012-08-16 Thread Matthew Finkel
On Thu, Aug 16, 2012 at 3:47 PM, Randolph D. rdohm...@gmail.com wrote:
 TBB =/ Torbrowser
 website redesigned in blue

Though imitation is the highest form of flattery, you should really
just redesign your site. Create your own CSS and maybe try using a
3-column design instead of just 2...provides much more flexibility. =)
If you open both sites in different web browser tabs and flip back and
forth between them and notice any similarity/overlap then it's
probably too similar.

Also, the blue background looks bad...really bad, and I think your
project deserves better than that!
___
tor-talk mailing list
tor-talk@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] HTTPS to hidden service unecessary?

2012-07-09 Thread Matthew Finkel
On Mon, Jul 9, 2012 at 10:49 PM, Juenca R jue...@yahoo.com wrote:



 Tor HS provide end-to-end encryption, however imho SSL is still maybe
  useful if:
 
  - You use a Tor Gateway (for example in a Lan or WiFi) to reach the
  .onion darknet space and you don't want to trust your Tor Gateway or
  your Lan

 good point. but don't most regular users install Tor on their PC so it's
 local, no gateway?


 It still needs work, but they do exist.
https://trac.torproject.org/projects/tor/wiki/doc/Torouter

In general it's just better practice and safer to provide end-to-end
encryption. There are very few reasons not to use TLS.
___
tor-talk mailing list
tor-talk@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk