Re: [tor-dev] minimizing traffic for IoT Tor node over 3G/LTE

2017-04-08 Thread Razvan Dragomirescu
Thank you, Proposal 140 sounds perfect for what I need; that would minimize
traffic quite a bit! I see some code for it at
https://gitweb.torproject.org/tor.git/log/?qt=grep&q=prop140 , I'm guessing
it's not complete yet.

Thanks again,
Razvan

On Sat, Apr 8, 2017 at 12:43 PM, nusenu  wrote:

> > I am working on a project to create very small Tor nodes on embedded
> > devices connected over LTE or 3G.
>
> since you are concerned about bandwidth usage, I assume you are talking
> about tor clients, not relays.
>
> > I have it working fine with OpenWRT and
> > just 128MB of RAM, but the main issue is now the amount of data needed to
> > download the consensus. The consensus files appear to be around 2.3MB at
> > the moment and I think the default is to re-download every 3 hours, so
> > that's 18.4MB/day or 552MB/month. Is there any way to reduce this while
> > still maintaining good citizenship on the Tor network? Are there any
> > recommended options for low-bandwidth nodes?
>
> There is an ongoing effort to significantly reduce the bandwidth overhead
> for tor clients on metered networks.
>
> Some improvements are supposed to land in tor 0.3.1.x.
>
>
> Relevant proposals:
>
> https://gitweb.torproject.org/torspec.git/tree/proposals/140-consensus-diffs.txt
> https://gitweb.torproject.org/torspec.git/tree/proposals/274-rotate-onion-keys-less.txt
> https://gitweb.torproject.org/torspec.git/tree/proposals/275-md-published-time-is-silly.txt
> https://gitweb.torproject.org/torspec.git/tree/proposals/276-lower-bw-granularity.txt
> https://gitweb.torproject.org/torspec.git/tree/proposals/277-detect-id-sharing.txt
> https://gitweb.torproject.org/torspec.git/tree/proposals/278-directory-compression-scheme-negotiation.txt
>
> --
> https://mastodon.social/@nusenu
> https://twitter.com/nusenu_


[tor-dev] minimizing traffic for IoT Tor node over 3G/LTE

2017-04-08 Thread Razvan Dragomirescu
Hello,

I am working on a project to create very small Tor nodes on embedded
devices connected over LTE or 3G. I have it working fine with OpenWRT and
just 128MB of RAM, but the main issue is now the amount of data needed to
download the consensus. The consensus files appear to be around 2.3MB at
the moment and I think the default is to re-download every 3 hours, so
that's 18.4MB/day or 552MB/month. Is there any way to reduce this while
still maintaining good citizenship on the Tor network? Are there any
recommended options for low-bandwidth nodes?
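
As a sanity check on the numbers, the arithmetic in Python (the 2.3MB size
and 3-hour refresh are the assumptions stated above):

    consensus_mb = 2.3            # observed consensus size
    fetches_per_day = 24 / 3      # default refresh every 3 hours
    print(consensus_mb * fetches_per_day)        # 18.4 MB/day
    print(consensus_mb * fetches_per_day * 30)   # 552.0 MB/month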

Thank you,
Razvan

--
Razvan Dragomirescu
Chief Technology Officer
Cayenne Graphics SRL


Re: [tor-dev] Onioncat and Prop224

2016-09-30 Thread Razvan Dragomirescu
Allow me to second that - for some applications (Internet of Things being
the one I'm working on), the volume of data exchanged is very small, so
there isn't much chance for packets to be lost or retransmitted. OnionCat +
Tor simplify development immensely by giving each node a fixed IPv6
address, even behind NAT. You no longer have to _design_ the service for
IoT, you just run it on the node and it's immediately accessible over IPv6.
It's not perfect in terms of network protocol encapsulation but it's "good
enough". https://en.wikipedia.org/wiki/Perfect_is_the_enemy_of_good :)

Razvan

On Thu, Sep 29, 2016 at 2:23 AM, grarpamp  wrote:

> On Wed, Sep 28, 2016 at 11:30 AM, dawuud  wrote:
> > Are you aware of Tahoe-LAFS?
>
> Don't know if they are, or if they are here, all we have is their short
> post.
>
> If they just need an insert and retrieve filestore for small user
> bases, there are lots of choices. If they need the more global
> and random on demand distribution properties, and even mutually
> co-interested long term storage nature of bittorrent, that's harder.
>
> Today people can use onioncat to escape IPv4 address space limitations,
> provide UDP transport, provide configuration free on demand any node to
> any node IP network semantics for use by existing applications.
> Mass bittorrent / bitcoin / p2p apps over a private network such as
> HS / eep happen to typically need and match that.
>
> > Yes but then you are suggesting TCP on top of TCP via TCP/IPv6/onion/TCP.
>
> Onioncat is only one extra encapsulation layer. Of course if you run a tcp
> app over onioncat instead of a udp app, you have to think about that too.
> But being the top layer, onioncat itself does not have losses, i.e. any
> losses come up from below: clearnet --> tor --> ocat --> user.
>
> > Do you know what happens when you get packet loss with that protocol
> layer cake?
> > Cascading retransmissions. Non-optimal, meaning shitty.
>
> For certain applications, especially bulk background transport, it's
> actually quite usable in practice. And people do use voice / video / irc /
> ssh over hidden / eep services... of course there are non-optimal systemic
> issues there. People will use what they can [tolerate].
>
> > You might be able to
> > partially solve this by using a lossy queueing Tun device/application
> but that
> > just makes me cringe.
>
> That's pretty far beyond anywhere tor network design is
> going anytime soon.
>
> Buffering for reordering datagrams into a queue, maybe partially if the
> user doesn't mind possible additional latency. Lossy... not in tcp layers.
>
> Maybe in ideal world user would supply requirements as ifconfig
> request to network, each interface providing different set, user
> binds apps to interfaces as needed.
> Sliders latency / bandwidth / loss - maybe represented as single
> app type config param: voice, irc, bulk, torrent, network tolerant - or
> by list of app names.
> Or network would monitor and adapt to users traffic.


Re: [tor-dev] onion moshing

2016-09-25 Thread Razvan Dragomirescu
Hello again David,

Sorry to resurrect a year-old thread, but it looks to me like OnionCat is
abandoned code at this point - mailing lists are gone, no development since
mid last year, etc. Since the Tor developers plan to deprecate (and quickly
eliminate) v2 onion names and expect to move to the new longer names/keys
ASAP, I was wondering if you had any plans to adapt your OnionVpn software.

I was thinking of a very generic lookup mechanism for IPv6 to .onion name
lookup, adaptable to anything from blockchain-based name systems to a
centralized file. Possibly simply running an external script given as a
parameter on each IPv6 to name lookup (and checking that the returned name
hashes back to the IPv6 address expected).
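
A minimal sketch of that lookup contract, assuming the legacy OnionCat-style
mapping (the script path is hypothetical; a prop224 variant would compare a
truncated hash of the returned name instead of decoding it):

    import base64, ipaddress, subprocess

    PREFIX = ipaddress.ip_address("fd87:d87e:eb43::").packed[:6]

    def name_to_ip6(onion):
        # Legacy v2 mapping; a v3 scheme would derive the bits from H(name).
        sid = onion.split(".")[0].upper()
        return ipaddress.ip_address(PREFIX + base64.b32decode(sid))

    def resolve(ip6, resolver="./lookup.sh"):   # hypothetical external script
        out = subprocess.run([resolver, str(ip6)], capture_output=True,
                             text=True, check=True)
        name = out.stdout.strip()
        # Only trust the resolver if its answer maps back to what we asked.
        if name_to_ip6(name) != ipaddress.ip_address(ip6):
            raise ValueError("resolver answer does not map back to %s" % ip6)
        return name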

I think OnionVpn may be easier to modify than OnionCat, given that it's
Python.

Thank you,
Razvan

--
Razvan Dragomirescu
Chief Technology Officer
Cayenne Graphics SRL


On Wed, Dec 9, 2015 at 6:59 PM, David Stainton  wrote:

>
> I was inspired by onioncat to write a twisted python implementation.
> Onionvpn doesn't have as many features as onioncat. I've successfully
> tested that onionvpn and onioncat can talk to each other and play nice.
> Both onionvpn and onioncat implement a virtual public network. Anyone can
> send packets to you if they know your onion address or ipv6 address...
> however injection attacks are unlikely since the attacker cannot know the
> contents of your traffic without compromising the tor process managing the
> onion service.
>
> I've also tested with mosh; that is, you can use mosh which only works
> with ipv4 over an ipv4-to-ipv6 tunnel over onionvpn/onioncat. Like this:
>
> mosh-client -> udp/ipv4 -> ipv6 -> tun device -> tcp-to-tor -> onion
> service decodes ipv6 to tun device -> ipv6 -> udp/ipv4 -> mosh-server
>
> https://github.com/david415/onionvpn
>
>
> If an onionvpn/onioncat operator were to NAT the onion ipv6 traffic to the
> Internet then that host essentially becomes a special IPv6 exit node for
> the tor network. The same can be done for IPv4. Obviously operating such an
> exit node might be risky due to the potential for abuse... however don't
> you just love the idea of being able to use low-level network scanners
> over tor? I wonder if Open Observatory of Network Interference would be
> interested in this.
>
>
> david
>
>


Re: [tor-dev] "old style" hidden services after Prop224

2016-09-13 Thread Razvan Dragomirescu
I fully agree with the security points, I was just arguing for keeping the
_option_ to list a v2 service for a longer time (possibly forever). Let's
not make assumptions for the service operators - ok, make them enable the
option explicitly, have them do it at compile time if you want to (like
Tor2Web) but don't remove it just because you think the alternative is
better. Some services (like mine) are not worthy targets for a LEA (unless
they're interested in hacking your fridge); Tor is interesting to us
because of its NAT traversal capabilities and cryptographic service
authentication.

Don't assume that all users have the same goals and they are all fighting
well funded or state-level attackers. Another option, to be honest, would be
for us to just fork Tor at the time v2 services are removed altogether,
spin up a few directory authorities of our own and a few relays around the
world (we send very little traffic), and call it a day. Tor can continue
onwards with the v3-only services while our Tor fork would be happily using
v2, separate from the main network (and less subject to attacks from well
funded attackers and DDOS operators interested in revealing activists'
identities and not in finding out that you're out of cheese in your
fridge). I think there's potential in making a simpler, slightly less
secure version of Tor but with significantly improved user experience.

Oh, and no, I wasn't planning on having the Onion Balance and OnionCat devs
fix bugs for us :). I just didn't want to duplicate effort, so if they have
a plan to adapt their tools to v3, I'd rather wait for their solution than
do a half-baked one of our own.

Razvan

On Tue, Sep 13, 2016 at 10:31 PM, s7r  wrote:

>
> On 9/13/2016 6:13 PM, Razvan Dragomirescu wrote:
> > I disagree with your approach, for comparison's sake, let's say v2 is
> > IPv4 and v3 is IPv6. When IPV6 was introduced, IPv4 was kept around (and
> > still is to this day, although IPv6 is arguably a much better solution
> > in a lot of areas). Expecting _everyone_ to just switch to IPv6 or get
> > cut off is a bit of a pipe dream.
> >
>
> Your analogy with IPv4 and IPv6 is unacceptable. IPv6 exists not because
> IPv4 isn't secure, but because the address space got filled up (the
> internet grew). Of course it has some improvements compared to IPv4, but
> we cannot say IPv4 has questionable security. I don't think we can speak
> about security in the IP context anyway, since there are other protocols
> where this happens (BGP, TCP, etc.). And they do exist in parallel, with
> the prospect of migrating to IPv6 entirely in the future (obviously v2 and
> v3 hidden services will have a migration period also, just not so large
> because we aren't talking about the entire internet here).
>
> > Tor hidden services are a bit "special" because it's hard to poll their
> > owners on their intentions. Some hidden service operators have gone to
> > great lengths to advertise their .onion URLs (v2-style), some have even
> > generated vanity addresses (like Facebook). Forcing a switch to v3 at
> > some point presents a very interesting opportunity for phishing because
> > suddenly a service known and trusted at some address (as opaque as it
> > is) would need to move to an even more opaque address, with no way to
> > determine if the two are really related, run by the same operator, etc.
> > If I were a LE agency, I would immediately grab v3 hidden services,
> > proxy content to existing v2 services and advertise my v3 URL
> > everywhere, then happily monitor traffic.
> >
>
> I am not sure what you mean by grabbing v3 hidden services (generating
> random ed25519 keys?) or how exactly you are going to proxy anything to
> the v2 hidden service without access to v2's private key. But regardless
> of how you have in mind to do this, your points are wrong.
>
> Maintaining v2 services just because operators advertised the v2 onion
> url style is not an argument. RSA1024 will be easily factored in the
> coming years. We have strong reasons to believe that factoring RSA1024 at
> the current moment is not impractical if the target is worth it enough.
> So, if we allow v2 services forever, we increase the chances for a LE to
> hijack v2 hidden services by factoring their private keys - this risk is
> bigger than what you are describing. For the second part, there are plenty
> of ways to prove a v2 hidden service is tied to a v3 one, given you
> control v2's private key. It provides exactly the same level of
> cryptographic certification.
>
> > All I'm saying is don't remove the v2 services, even if you choose to no
> > longer support them. Some operators (like my company) may choose to
> > continue to patch the v2 areas if required and release the patches to
> > the community at large.

Re: [tor-dev] "old style" hidden services after Prop224

2016-09-13 Thread Razvan Dragomirescu
I disagree with your approach, for comparison's sake, let's say v2 is IPv4
and v3 is IPv6. When IPV6 was introduced, IPv4 was kept around (and still
is to this day, although IPv6 is arguably a much better solution in a lot
of areas). Expecting _everyone_ to just switch to IPv6 or get cut off is a
bit of a pipe dream.

Tor hidden services are a bit "special" because it's hard to poll their
owners on their intentions. Some hidden service operators have gone to
great lengths to advertise their .onion URLs (v2-style), some have even
generated vanity addresses (like Facebook). Forcing a switch to v3 at some
point presents a very interesting opportunity for phishing because suddenly
a service known and trusted at some address (as opaque as it is) would need
to move to an even more opaque address, with no way to determine if the two
are really related, run by the same operator, etc. If I were a LE agency, I
would immediately grab v3 hidden services, proxy content to existing v2
services and advertise my v3 URL everywhere, then happily monitor traffic.

All I'm saying is don't remove the v2 services, even if you choose to no
longer support them. Some operators (like my company) may choose to
continue to patch the v2 areas if required and release the patches to the
community at large. Forcing us out altogether would make us drop Tor and
start using an alternative network or expending the additional effort to
make our services network-agnostic (so no more good PR for Tor).

Ivan was right, moving to v3 would be, at least for my project, extremely
complex and unwieldy. Ed25519 is not supported by any smartcards I know
(but can be "hacked" by manually defining Curve25519 params and converting
back and forth). But then we'd have to modify the service re-registration
(or wait for OnionBalance to do it), then add another layer for
OnionCat-like lookups, etc. It would be far easier to just drop the Tor
dependency at that point or centralize it a bit more.

Just my 2 cents, if any hidden service operators wish to chime in, feel
free to do so. After all, it's us (them? :) ) that will have to make the
changes to their services.

Razvan

On Tue, Sep 13, 2016 at 5:40 PM, s7r  wrote:

> On 9/13/2016 3:27 PM, David Goulet wrote:
> [SNIP]
> > Hello!
> >
> > So I 100% share Ivan's concerns. The Hidden Service subsystem of Tor is
> > quite complex, lots of pieces need to be glued together, and prop224
> > will add a lot of new code (in the tens of thousands of lines).
> >
> > We decided a while back to have the two protocols living side by side at
> > first, that is, the current system (v2) and next gen (v3). Relays will
> > need to support v2 for a while after v3 is released because, well, not
> > everybody updates their tor to the latest. Lots of people have current
> > .onion for which they need a transition to the new generation, which
> > includes telling their users about the new 52 character one and SSL
> > certs and so on...
> >
> > The question arises now. Someone running a .onion upgrades her tor that
> > supports v3; should we allow v2 to continue running, or transition it to
> > v3, or make them both happy together...? We haven't discussed this in
> > depth and thus we need to come to a decision before we end up
> > implementing this (which is _soon_). I personally think that we probably
> > want to offer a transition path and thus have maybe a torrc option that
> > controls that behavior, meaning allowing v2: we enable it by default at
> > first, then a subsequent Tor release disables it so the user has to
> > explicitly set it to continue running a v2 .onion, and then finally we
> > rip out v2 entirely in another release, thus offering a deprecation
> > path.
> >
> > However, we are clear that every _new_ service will be v3 and never
> > again v2, unless it already exists, that is, we can find an RSA private
> > key (considering we do the above of course). And considering both will
> > be supported for a while, we'll have to maintain v2 security-wise, but
> > all new features will go in v3.
> >
> > Let's discuss it and together we can come up with a good plan! :)
> >
> > Thanks!
> > David
> >
>
> v2= old-style (RSA1024) hidden services
> v3= prop 224 (ed25519) hidden services
>
> I agree with David - it will be problematic to maintain support for both
> v2 and v3 indefinitely into the future. It's clear that we need to offer a
> reasonable transition period, so everyone can upgrade and move their
> customers/user bases to the new hidden services, but this doesn't mean
> v2 should work forever.
>
> v2 hidden services already provide questionable security (from a crypto
> point of view) and in the future things will only get worse for v2. I
> agree that there are a lot of third party tools working with v2 hidden
> services (OnionCat, OnionBalance) -  these all need to be improved to
> support prop 224 hidden services.
>
> Considerable resources are spent on v3 hidden services. They are better
> vs v2 from all points 
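
As a concrete sketch of the torrc knob discussed above (tor did later ship a
HiddenServiceVersion option along these lines; the paths are placeholders):

    # Keep one legacy service alive explicitly while new services are v3.
    HiddenServiceDir /var/lib/tor/legacy_service/
    HiddenServiceVersion 2
    HiddenServicePort 80 127.0.0.1:8080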

Re: [tor-dev] "old style" hidden services after Prop224

2016-09-13 Thread Razvan Dragomirescu
Hello Ivan,

Breaking existing (possibly automated) systems is a _very good reason_ IMHO
:). Sure, warn the operators that they're using a legacy system, deprecate
the option but don't disable it.
https://trac.torproject.org/projects/tor/ticket/18054 sounds like a pretty
sane solution btw.

Even if it's no longer officially supported, I think OnionCat in its
current incarnation is a great proof of what could be done with the Tor
network, other than privacy protection. I actually have this listed on one
of my slides for SIM4Things - it's good for the Tor network, shows it can
be used for a variety of things while putting very little stress on the
network components. Very little traffic, potentially large PR impact (in a
good way :) ).

Razvan

On Tue, Sep 13, 2016 at 1:29 AM, Ivan Markin  wrote:

> Razvan Dragomirescu:
> > Thank you Ivan! I still don't see a very good reason to _replace_ the
> > current HS system with the one in Prop224 and not run the two in
> > parallel.
>
> For me it's because it would make the overall system more complex and thus
> error-prone and reasonably less secure. It's like using RC4, SHA1, or 3DES
> in TLS and being vulnerable to downgrade attacks and all kinds of stuff
> like Sweet32 and Logjam (export-grade onions, haha).
>
> > Why not let the client decide what format and security
> > features it wants for its services?
>
> It's like dealing with the plethora of ciphers and hashes in GnuPG:
>
> https://moxie.org/blog/gpg-and-me/:
> > It’s up to the user whether the underlying cipher is SERPENT or IDEA
> > or TwoFish. The GnuPG man page is over sixteen thousand words long;
> > for comparison, the novel Fahrenheit 451 is only 40k words.
>
> When a system is complex in that way, someone is going to make huge
> mistake(s). If crypto is bad, just put it in a museum.
>
> So I don't see _any_ reason to manage an outdated and less secure system
> while we have a better option (if we have already deployed it).
>
> --
> Ivan Markin


Re: [tor-dev] "old style" hidden services after Prop224

2016-09-12 Thread Razvan Dragomirescu
Thank you Ivan! I still don't see a very good reason to _replace_ the
current HS system with the one in Prop224 and not run the two in parallel.
Why not let the client decide what format and security features it wants
for its services?

Razvan

On 13 Sep 2016 00:37, "Ivan Markin"  wrote:

> Hi Razvan,
>
> Razvan Dragomirescu:
> > I've developed against the current hidden service infrastructure and it
> > appears to work like a charm, but I'm a bit worried about Prop224. That
> > will break both the OnionBalance-like service re-registration that I'm
> > using _and_ the OnionCat HS to IP6 mapping. I know that efforts are in
> > place to upgrade the two in view of Prop224 but I'm wondering if there's
> > any good reason to drop support for "old style" hidden services once
> > Prop224 is fully deployed.
>
> No worries, prop224 is not going to break OnionBalance-like
> re-registration - it's just going to make it more complicated. One will
> have to perform cross-certification trickery in order to reassemble
> intropoints of another onion service. We want to avoid this plain
> "re-registration" since anyone can do it (for details see #15951 [1]).
> The way out is to add a feature into little-t-tor and to rewrite tools
> like OnionBalance, avant, etc. to fetch the intropoint list from backend
> services directly (via ControlPort or a special onion address), thus
> avoiding posting useless descriptors to HSDirs only to fetch them from
> HSDirs again.
>
> Yes, in the case of the OnionCat onion<->IPv6 mapping we've got a problem.
> It's just because the address length is 80 bits for legacy, 256 bits for
> prop224, and <128 bits for IPv6, so one has to use something additional
> (like the DHT in cjdns) to "resolve" a short IPv6 address into the larger
> Ed25519 key. Apparently IPv6 is good, but not long enough to be used as
> public keys. IMO we need something better* for this.
>
> Also you'll likely have issues with migration from RSA1024 to Ed25519 on
> your smartcards. Most (Java) cards I know have a built-in RSA engine and
> any additional crypto may not fit in or be slow.
>
> So my two cents is to migrate to prop224 as soon as possible and make
> everyone secure (RSA1024 and SHA1 are bad).
>
> * Maybe just hostnames with variable length?
> [1] https://trac.torproject.org/projects/tor/ticket/15951
> --
> Ivan Markin
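
A toy sketch of the "resolve short IPv6 into larger Ed25519" idea Ivan
describes above, assuming a simple registry keyed by an 80-bit truncation (a
real deployment would put the table in a DHT):

    import hashlib, os

    registry = {}

    def register(pubkey):
        # Index the full 32-byte key by the 80 bits that fit the address.
        registry[hashlib.sha256(pubkey).digest()[:10]] = pubkey

    def resolve(short):
        pub = registry[short]
        # Re-derive the truncation so a bogus entry is rejected.
        if hashlib.sha256(pub).digest()[:10] != short:
            raise ValueError("registry entry does not hash back")
        return pub

    key = os.urandom(32)                  # stand-in for an Ed25519 pubkey
    register(key)
    assert resolve(hashlib.sha256(key).digest()[:10]) == key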


[tor-dev] "old style" hidden services after Prop224

2016-09-12 Thread Razvan Dragomirescu
Hello everyone,

I've pretty much completed a proof of concept of my SIM4Things project (an
IP6 overlay for the Internet of Things running on top of Tor with
persistent secure cryptographic identities tied to physical SIM cards).
I've developed against the current hidden service infrastructure and it
appears to work like a charm, but I'm a bit worried about Prop224. That
will break both the OnionBalance-like service re-registration that I'm
using _and_ the OnionCat HS to IP6 mapping. I know that efforts are in
place to upgrade the two in view of Prop224 but I'm wondering if there's
any good reason to drop support for "old style" hidden services once
Prop224 is fully deployed.

What does everyone think? My vote would obviously be to keep it around,
even if it's no longer the default. Do you see any security implications in
doing so? Is there any hard reason to drop the existing model altogether at
some point and if so, what is the plan for transition? My hidden services
would be very low traffic (Internet of Things, sensor measurements, etc)
but hard to upgrade in the field. Also, by design, we have no control over
the endpoints once deployed (only their owner can access them), so we
cannot force an upgrade, we can just suggest it to the owner and come up
with a procedure for them to do so.

Any ideas?

Thank you,
Razvan

--
Razvan Dragomirescu
Chief Technology Officer
Cayenne Graphics SRL


Re: [tor-dev] Reducing initial onion descriptor upload delay (down to 0s?)

2016-09-08 Thread Razvan Dragomirescu
I've just tried the patch from ticket 20082 and it works great for me. I
was actually wondering why it was taking so long for an ephemeral hidden
service to get registered in my SIM4Things project (I register an ephemeral
service first to get Tor to set up the introduction points, then re-register
it a la OnionBalance with a new identity). It's definitely a great
improvement for my project!
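
For reference, the ephemeral registration step looks roughly like this with
stem (control port and target port are assumptions for the sketch):

    from stem.control import Controller

    with Controller.from_port(port=9051) as ctl:
        ctl.authenticate()
        # ADD_ONION: tor generates the key and builds the introduction
        # points; with the 20082 patch the descriptor upload follows sooner.
        svc = ctl.create_ephemeral_hidden_service({80: 8080},
                                                  await_publication=True)
        print(svc.service_id + ".onion")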

Razvan

On Wed, Sep 7, 2016 at 6:40 PM, Ivan Markin  wrote:

> Hi tor-dev@!
>
> Moving the discussion on the future of rendinitialpostdelay from ticket
> #20082 [1] here.
>
> Transcribing the issue:
> > At the moment descriptor is getting posted at
> > MIN_REND_INITIAL_POST_DELAY (30) seconds after onion service
> > initialization. For the use case of real-time one-time services
> > (like OnionShare, etc) one has to wait for 30 seconds until this
> > onion service can be reached. Besides, if a client tries to reach
> > the service before its descriptor is ever published, tor client gets
> > stuck preventing user from reaching this service after descriptor is
> > published. Like this: Could not pick one of the responsible hidden
> > service directories to fetch descriptors, because we already tried
> > them all unsuccessfully.
>
>
> > It has jumped to 30s from 5s due to "load on authorities".
> > 11d89141ac0ae0ff371e8da79abe218474e7365c:
> >
> > +  o Minor bugfixes (hidden services):
> > +    - Upload hidden service descriptors slightly less often, to reduce
> > +      load on authorities.
> >
> > "Load on authorities" is not the point anymore because we don't use
> > V0 since 0.2.2.1-alpha. Thus I think it's safe to drop it back to at
> > least 5s (3s?) for all services. Or even remove it at all?
>
> The questions are:
>   * Can we drop this delay? Why?
>   * Can we set it back to 5s thus avoiding issues that can arise after
> removing the delay?
>   * Should we do something now or postpone it to prop224?
>
> [1] https://trac.torproject.org/projects/tor/ticket/20082
> --
> Ivan Markin


Re: [tor-dev] prop224: Ditching key blinding for shorter onion addresses

2016-07-31 Thread Razvan Dragomirescu
I agree with this; I don't really see the point of making .onion names easy
to remember. If it's a service you access often, you can bookmark it or
alias it locally to something like "myserver.onion" (maybe we should make
it easier for users to do just that - an alias file for .onion lookups,
allowing them to register myserver.onion and point it to
asdlataoireaoiasdasd.onion or whatever).
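
Tor's client config already has a primitive along these lines: MapAddress
rewrites a name locally before the request ever leaves the client. A sketch
in torrc (both names made up):

    # Local, per-client alias; nothing is looked up on the network.
    MapAddress myserver.onion asdlataoireaoiasdasd.onion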

If it's a link on a Wiki or in a search engine, you just click on it, you
don't care what the name is. The only time you'd have to remember an actual
.onion address is if you heard it on the radio or saw a banner on the side
of the street while driving and had to memorize it in a few seconds. Or
maybe if you have to read the address _over the phone_ to a friend (as
opposed to mailing him the link).

What is the exact use case of this? I'm not saying it's useless, I just
don't see the point, maybe I'm missing something.

Razvan

--
Razvan Dragomirescu
Chief Technology Officer
Cayenne Graphics SRL

On Sat, Jul 30, 2016 at 9:44 PM, Lunar  wrote:

> George Kadianakis:
> > this is an experimental mail meant to address legitimate usability
> concerns
> > with the size of onion addresses after proposal 224 gets implemented.
> It's
> > meant for discussion and it's far from a full blown proposal.
>
> Taking a step back here, I believe the size of the address to be a
> really minor usability problem. IPv6 adressses are 128 bits long, and
> plenty of people in this world now access content via IPv6. It's not a
> usability problem because they use a naming—as opposed to
> addressing—scheme to learn about the appropriate IPv6 address.
>
> While I do think we should think of a nicer representation for the new
> addresses than base32, and we should address that, working on a naming
> system sounds like an easier way out to improve onion services
> usability than asking people to remember random addresses (be they 16 or
> 52 characters long).
>
> (I know plenty of people who type “riseup” in the Google search bar of
> their browser to access their mailbox… They don't even want to/can't
> remember a URL. Hardly a chance they will remember an onion address,
> whatever its size.)
>
> Maybe it would be worthwhile to ask the UX team for input on the topic?
>
> --
> Lunar 
>


Re: [tor-dev] HSFETCH fails on basic auth services

2016-06-29 Thread Razvan Dragomirescu
Thank you Tim! For the record, GETINFO works ok in 0.2.8.4-rc (unstable).
HSFETCH still doesn't and I'll file a bug for it.

Razvan

On Thu, Jun 30, 2016 at 1:28 AM, Tim Wilson-Brown - teor  wrote:

>
> > On 30 Jun 2016, at 06:42, Razvan Dragomirescu <
> razvan.dragomire...@veri.fi> wrote:
> >
> > BTW, I have also tried the GETINFO command from the controller to fetch
> the hidden service descriptor directly from the host that has published it,
> but that doesn't work either.  Fetching from the client side (after a
> connection) works fine:
> >
> > AUTHENTICATE
> > 250 OK
> > GETINFO hs/client/desc/id/js2usypscw6y6c5e
> > 250+hs/client/desc/id/js2usypscw6y6c5e=
> > rendezvous-service-descriptor 7codget3fmkzj4z3oqia37iknu5iespk
> > ...
> > .
> > 250 OK
> >
> >
> > Fetching from the server side though 
> >
> > GETINFO hs/service/desc/id/js2usypscw6y6c5e
> > 552 Unrecognized key "hs/service/desc/id/js2usypscw6y6c5e"
> >
> > Any ideas? I'm running Tor 0.2.7.6 btw. This also appears to happen with
> non-authenticated services, but the hs/service/desc/id/ key was supposed
> to have been merged back in 0.2.7.1 (??).
>
> Perhaps GETINFO only looks in the HS cache, but hidden services don't
> cache their own descriptors?
>
> > On Wed, Jun 29, 2016 at 11:14 PM, Razvan Dragomirescu <
> razvan.dragomire...@veri.fi> wrote:
> > Hello everyone,
> >
> > I seem to have found an issue (bug?) with the controller HSFETCH command
> - I can't seem to be able to fetch hidden service descriptors for services
> that use basic authentication. Tor appears to want to decrypt the
> introduction points for some reason and also fails to look at the
> HidServAuth directive. Connections (via SOCKS proxy for instance) to said
> service work fine, so Tor is configured correctly, but HSFETCH fails and
> Tor outputs this in the logs:
> >
> > Jun 29 20:08:53.000 [warn] Failed to parse introduction points. Either
> the service has published a corrupt descriptor or you have provided invalid
> authorization data.
> >
> > Jun 29 20:08:53.000 [warn] Fetching v2 rendezvous descriptor failed.
> Retrying at another directory.
> >
> > Is this a known issue? Is there another way to fetch the descriptor of a
> hidden service? I really don't want it to be published since I'm rewriting
> it anyway, but I need to fetch it somehow. I can use
> "PublishHidServDescriptors 0" to stop it from publishing the service at all
> but I have no idea how to fetch it from the local cache. Any controller
> commands for that?
> >
> > To summarize - HSFETCH appears to fail for hidden services with basic
> auth and I couldn't find a way to obtain the hidden service descriptor from
> the hidden service machine itself before publishing. Any advice would be
> appreciated.
>
> Perhaps HSFETCH only looks in the HS cache, but hidden services don't
> cache their own descriptors?
> Perhaps HSFETCH doesn't look at HidServAuth?
> Perhaps HSFETCH shouldn't try to decrypt the descriptor before delivering
> it? Perhaps it should?
>
> I encourage you to log an issue for each of these in our bug tracker at
> https://trac.torproject.org/
>
> Tim
>
> Tim Wilson-Brown (teor)
>
> teor2345 at gmail dot com
> PGP 968F094B
> ricochet:ekmygaiu4rzgsk6n
>
>
>
>


Re: [tor-dev] HSFETCH fails on basic auth services

2016-06-29 Thread Razvan Dragomirescu
BTW, I have also tried the GETINFO command from the controller to fetch the
hidden service descriptor directly from the host that has published it, but
that doesn't work either.  Fetching from the client side (after a
connection) works fine:

AUTHENTICATE
250 OK
GETINFO hs/client/desc/id/js2usypscw6y6c5e
250+hs/client/desc/id/js2usypscw6y6c5e=
rendezvous-service-descriptor 7codget3fmkzj4z3oqia37iknu5iespk
version 2
permanent-key
-BEGIN RSA PUBLIC KEY-
MIGJAoGBAMPwmou0Pjcmanw3GW7cpXgX3wiKmeND8A7kShodBfqGDIHkkHRpHuwe
NTCtjAsnVzLqtFNCYpwg6HlyDRn557LHCO/GGvVQNvsPSl8v2N+XnuQ6NJ3Jy+AF
bM1vqrFL6p02QRobtHBlbOkD4fWjC7lP6hYOKHQzt7lwDirtPZMdAgMBAAE=
-END RSA PUBLIC KEY-
secret-id-part d7xhm4st3puvu2zz7yjtluwmzt7iafnb
publication-time 2016-06-29 19:00:00
protocol-versions 2,3
introduction-points
-BEGIN MESSAGE-
AQEI7yt3VSr/3LfUtCiXgcm9D3DGCC7Y1fOmB8mLk3ohO8e0OIHKBxtLM01WGq1N
5OHPcpTXD0Vjovc3lplKuoI6aLXVIrSd6lhTLIuybU5mi1GsE+PJXpHdmmDw9vCe
5dH1x6lkX6V0iUgKfqLAbpNvESxi+IQgG7p6VKEOrmMiH/TvCAH3MDdPFv6jjI17
2dju7V69/Mb6wk+KJtZYDLj/jckdzfpntEywg5VO+HR72OGtJ7CjZI49amgG3YF9
SM4ZXCz2XxL9vKXGhwQGZYchFuNbKMOonkw9BZ5Br2moMBl0awOBNoYbwNvCAhf4
iF2xOHKqTj5lV1u4AVwE8GvOPx8lR+qmFsMJQppjgwPbrVayvbMw9TdK+s+kGUiS
3B6tB7c+AMYIbjJ9kL7+sCQWSz00aXuMDJjyxz6NHVYc/x+VdKuMsWiUWj5O6GrA
2kLEEE4N2QvXRPO3w+diLqdT4StYUIpGGUrrWEl+C3yAN7Vb7rllNznaxZdPQJ4T
6Q3e/b8qAOqECqb5RNacY7u31vC3Q5SJtoPZozvpTxN4YWv0OmMhCy5JZ6goAMer
xnwQcDqtRmgmRSoZCxfyaJQ0R7cnnPDN2pEsPNzr///4K69SS9xIchxWwGIxx4Y4
L3td/vrTr24ve78bSirXrjR9tc2w4Ksy3ZMINKR1OWggo2YnJkM3jtkq26njRkgB
7QyuIrBjv3ETWCL0F+7tu6afI895G6jtbS9SpySR3aSeZWFqmDLPbF43PVfbduIr
dST/9mUOXyTW2jmtSm7M+Z8VlbqJw9O7b2PlbDl2lmKNiGUaqq11J3BKXgWQNKk4
qdsccxt0ohPienfVeMQTlLY00+ZL9gWDruJIpIfjq+KeFIvIlqOUSJWip8D/rYWN
xZdkWqnr9Bs+MC+SlM3sbepEhn4hFIx3Jfc3BeatIvfkZB2vj4/sHOfoCNz4KFBV
d+DsDWY6r2/A607ER2uT9oRSaljLhwPAIGS21ROidKKrK2YCCXoUUrKYHsMOLuht
50rUr/Ar0XvKdf7rOX+LmVEjpP05U1AIo8aVYSziMYYlr8qwRizrFbtq6t5M7FuR
ORL74WvtLjHn/tAdm7A5jVwvWPF3vswBy8eFCNMTV3XEwJHmLBk6znsg2RuxdZ2G
TASr2GrBx4J8MxtMN90R3n5RanV2YnAd1RYihK9BzBT5vFHO7wcQ3dOowrolZ83A
Q0MQGoHT2lvvciKHahjn9HsWvuLVo25tnejvAIVGlD/ayfx8pOXztk9l+RN3N0pS
ZkS63XHE9nQleOFrwYebZkeCQOOaVH+//c+agO+4JV82KBmP1/irmNyiIvuWAXxY
/+SFxro7FJU4yROLkGkKJkVg9bdM3QQ+kQfM+Nci/dmrzmzyt41ClPsk1d2WOySw
/Pd71YaNP76BstkpiUCpFtr8PQV+3UkGe5HWmrs0ZabGyzLKEwDjChs7z+zFp6Od
aNNB3bB+Jrrqu8ZBJpwVjXxsaLb2dMB7Wi7b2E/zWZHr2E2Akh0I0lo3XIU86Eeu
tKeM2xTn1yGNc5InPYln1dcfZ67l41zdN3X1P1DilEfT55hg2uIg0f2UbMXd5Db0
I68Zu3n1PWAaNEHE6m3k1PrLSIVs9bfIucuQacQQtkArT9t+lfkv/2CHGCrdHEfU
0R2tzNw+DnO/nZRIRmwxIUqbpvBKnmyfvbekB6JXvSZBWRboV4YHPZSDtZ7C8JpM
YO9mCBbjhboGcujDFBUf+X0ansOIhOrjAPCvE15h0EKJkS8733AvhxwwkUwhL+Pz
/RUCRe+rD5MCEvQhg9+oXhrnZwjzsvsrZGITuv5su34KumPJ3bqvp1lVxr16owwq
KjLREhBBSIvl96fahGnV1ol6Lik5rUI3NhdmPMW3D0bydidYH3u6ZdOEplfoAUlo
DvT7u+0Apl+Rd2jKutCagHjcLjTzOtk6OpWxgfaR3x/Ds+eUt6kS+FAzSrDPx8A7
t84Ga24xKwZowIdJZqjroRnzpZRkV4Y29m47+OpzBS2LYDZR1mPRaywPcX956miV
mY1D9ci4L8l4jQiK/zeY4A/mUJEIlGNaRiUY1UVgiCQeO5fISBjrk0TVKqZaIbZC
G/K5EJsX11XSbJz5+PzWCyZv/JjbHBTzbf2ocafCdz+aDJ5ekWMNfK3dR4PcS9n5
mFX7KLDjpSfkAPMW6LuCXFf732/tsqcxvf87QX2aWchwqTvUgXq2EZD19GPL4sr+
+tEkKNJuQ7wG5zDMlX030jSQ4WKhC5639LHYcg/TDv+CH6GfRZuQSYZgrCevWbLr
GTWRhsXKKqfcysfw7WNa09AKK/3q+ohON0gcFHtLOjGsiLMs6D0UYc0o5U2KOi22
HOlhpVpTuQ85oNixNKhOkJkAleRKY49Jm/JPWbHf0Qhyb55SeIO0l7pfsO41xw5d
aNvQRLDINUj5BRKFGS70vP+D7Aek294pOpJhXDx11AIaWzmUyCge0Y1QdCu7ywnZ
dhKqpMSmCPbuZ5EmcFNovYmtfPzR3q2CKPbYISPsDwqXInEm2IKSN+qHFwxgfWv9
9Q9lyxv3Op/t9aDHmqZwVB3nTMJyDb5lZFkALkyQdHAudHcU8dq53PbwTzRqkW0H
n9not+bht53bZpo9yjJZ5qXmMsT7CNFAL76iKgTRFEopFtPl7clQhTIbhgigvqL9
e9HhGOpXd3fVC3iD6yuvIxRVHJX6YCQ2OLqkvnaKTOCyVz+hVDy45SpkoAh7UjVn
GPnSUKdS0wUBwvqik1GO2etpC+DjZqLqlHQaDiXn/L+1HxNga/HShK+bnkBID5bj
FTMrn1AVyUU29WZlZWRRIFAlQ7FD/JcXALTi0KvFzjlGeuiLOXZo//BNeYdBblFn
GW3wK0BX4AdDHvLcImPRCVUBrz+LOn7687ZQbTUAZ7tq2LQ=
-END MESSAGE-
signature
-BEGIN SIGNATURE-
VWIK/LZRvSeFNpEkgadnNGZb7G/mOsATZ7GN8COif92ytQADTiWr32FBRN5t/UJ/
wVyQXBqxJ9/LeRjEuJcGCKrrRR2DG932ZjK2SUAkgWnodIlBmpPF5r/btKEUVy3b
hbCdWF5ZNCcjLEJ4T25k74TdIUwo8BXvG94EQPl35/g=
-END SIGNATURE-
.
250 OK


Fetching from the server side though 


GETINFO hs/service/desc/id/js2usypscw6y6c5e
552 Unrecognized key "hs/service/desc/id/js2usypscw6y6c5e"

Any ideas? I'm running Tor 0.2.7.6 btw. This also appears to happen with
non-authenticated services, but the hs/service/desc/id/ key was supposed
to have been merged back in 0.2.7.1 (??).
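
The same check is scriptable with stem, for anyone who wants to reproduce it
(control port and service ID as in the transcript above):

    import stem
    from stem.control import Controller

    with Controller.from_port(port=9051) as ctl:
        ctl.authenticate()
        # The client-side cache lookup works once a connection has been made:
        print(ctl.get_info("hs/client/desc/id/js2usypscw6y6c5e").splitlines()[0])
        try:
            ctl.get_info("hs/service/desc/id/js2usypscw6y6c5e")
        except stem.ControllerError as exc:
            print("service-side lookup failed:", exc)   # 552 on 0.2.7.x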

Razvan



On Wed, Jun 29, 2016 at 11:14 PM, Razvan Dragomirescu <
razvan.dragomire...@veri.fi> wrote:

> Hello everyone,
>
> I seem to have found an issue (bug?) with the controller HSFETCH command -
> I can't seem to be able to fetch hidden service descriptors for services
> that use basic authentication. Tor appears to want to decrypt the
> introduction points for some reason and also fails to look at the
> HidServAuth directive. Connections (via SOCKS proxy for instance) to said
> service work fine, so Tor is configured correctly, but HSFETCH fails.

[tor-dev] HSFETCH fails on basic auth services

2016-06-29 Thread Razvan Dragomirescu
Hello everyone,

I seem to have found an issue (bug?) with the controller HSFETCH command -
I can't seem to be able to fetch hidden service descriptors for services
that use basic authentication. Tor appears to want to decrypt the
introduction points for some reason and also fails to look at the
HidServAuth directive. Connections (via SOCKS proxy for instance) to said
service work fine, so Tor is configured correctly, but HSFETCH fails and
Tor outputs this in the logs:

Jun 29 20:08:53.000 [warn] Failed to parse introduction points. Either the
service has published a corrupt descriptor or you have provided invalid
authorization data.

Jun 29 20:08:53.000 [warn] Fetching v2 rendezvous descriptor failed.
Retrying at another directory.

Is this a known issue? Is there another way to fetch the descriptor of a
hidden service? I really don't want it to be published since I'm rewriting
it anyway, but I need to fetch it somehow. I can use
"PublishHidServDescriptors 0" to stop it from publishing the service at all
but I have no idea how to fetch it from the local cache. Any controller
commands for that?

To summarize - HSFETCH appears to fail for hidden services with basic auth
and I couldn't find a way to obtain the hidden service descriptor from the
hidden service machine itself before publishing. Any advice would be
appreciated.
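
In stem terms the failing fetch is a one-liner, which makes the bug easy to
reproduce (the control port is an assumption; the HidServAuth line for the
service is presumed to be in torrc already):

    from stem.control import Controller

    with Controller.from_port(port=9051) as ctl:
        ctl.authenticate()
        # Issues HSFETCH and waits on the HS_DESC events; for a basic-auth
        # service the tor log shows the "Failed to parse introduction
        # points" warning instead of returning the descriptor.
        desc = ctl.get_hidden_service_descriptor("js2usypscw6y6c5e")
        print(desc)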

Thank you,
Razvan

--
Razvan Dragomirescu
Chief Technology Officer
Cayenne Graphics SRL


Re: [tor-dev] is the consensus document unpredictable / unique?

2016-06-29 Thread Razvan Dragomirescu
Fair enough, this needs to be reviewed by a cryptographer once the dust
settles. I'm just trying to get a good grasp of what's possible within the
boundaries of the Tor network and how that translates to
cryptographic/security primitives I can use in this particular project
(like using the consensus hash as a future unpredictable value, etc).

As of last night, I have a very basic proof of concept, it needs a bit of
polish and then I'll start showing it off and submitting it to external
analysis.

Thank you!

Razvan

--
Razvan Dragomirescu
Chief Technology Officer
Cayenne Graphics SRL

On Wed, Jun 29, 2016 at 2:22 AM, Tim Wilson-Brown - teor  wrote:

>
> > On 28 Jun 2016, at 21:36, Razvan Dragomirescu <
> razvan.dragomire...@veri.fi> wrote:
> >
> > Thank you Tim,
> >
> > I've been thinking about it and it looks like an easy fix for the
> problem would be to prevent the rogue DA from farming out the hash
> calculations or generating a lot of them in the 5 min voting window. If the
> hash only depends on public info (consensus data, current date, etc), you
> can't prevent this. But if the hash includes a shared secret key _inside
> the smartcard_, the attacker has to ask the smartcard to compute the hash
> and that's orders of magnitude slower than a computer (and can be made
> artificially slower by burning some CPU cycles inside the card doing RSA
> key generations or something - it has no internal clock so you can't just
> "sleep", but you can give it something time-consuming to do before it
> computes the hash).
> >
> > So here's the new setup I have in mind:
> >
> > 1. Each card can be provisioned with a "network key". This is a value
> that all cards (and nodes) that are part of a given network share. It can
> only be set, not read and can be used in the computation of the Descriptor
> Cookie.
> >
> > 2. The descriptor cookie will be calculated as H ( network-key |
> timestamp (rounded down to a full hour) | H (consensus) )
> >
> > The terminal provides the current consensus hash and a timestamp. The
> card checks that the timestamp is greater than the last timestamp it has
> used. It then concatenates the secret network-key, the given timestamp and
> the consensus hash and returns the hash of the result.
> >
> > This means a few things:
> >
> > 1. An attacker can no longer generate a hash independently or farm it
> out to other computers. It has to ask a smartcard to do it. We can make
> this arbitrarily slow since the operation is only meant to be done once an
> hour, so I could make it take a minute per hash inside the card.
> >
> > 2. Generating a hash for a future date would lock you out until that
> date/time (since decreasing timestamps will be refused by the card). You
> could compute the correct hash for the current timestamp and consensus on
> another card, but you would not be able to generate a signed descriptor
> with that correct hash (you can't inject a hash computed somewhere else
> into the signed descriptor).
> >
> > 3. The card doesn't have to parse the consensus - it just uses it as a
> shared random value (the hash of the consensus).
> >
> > Makes sense?
>
> There are now so many moving parts in this scheme, I think you need to
> specify it all in one place, and then convince a cryptographer to review
> it. (I am not a cryptographer.) And then have your implementation reviewed
> against the spec.
>
> How is the card you're using for side-channels?
> Keys have been extracted using power usage information, or electromagnetic
> emissions, or even audio.
>
> Tim
>
> >
> > Thank you,
> > Razvan
> >
> >
> > On Tue, Jun 28, 2016 at 6:02 AM, Tim Wilson-Brown - teor <
> teor2...@gmail.com> wrote:
> >
> > > On 28 Jun 2016, at 05:30, Razvan Dragomirescu <
> razvan.dragomire...@veri.fi> wrote:
> > >
> > > Thank you Tim,
> > >
> > > As long as a malicious authority cannot choose a hash that is
> identical to an older consensus hash, I think the system should be fine. In
> addition, I can have the the smartcard look at one of the valid-* dates in
> the consensus and hash that into the descriptor cookie as well - I'm
> guessing a rogue authority can mess with the consensus hash but cannot
> change the valid-after, valid-until, etc dates. If I enforce increasing
> dates (so that you cannot go back in time, once you've seen a consensus for
> Jan 2017 for instance you cannot sign another one from June 2016), if you
> attempt to pre-generate a signature for a future date, you lose
> connectivity until that particular date.

Re: [tor-dev] is the consensus document unpredictable / unique?

2016-06-28 Thread Razvan Dragomirescu
Thank you Tim,

I've been thinking about it and it looks like an easy fix for the problem
would be to prevent the rogue DA from farming out the hash calculations or
generating a lot of them in the 5 min voting window. If the hash only
depends on public info (consensus data, current date, etc), you can't
prevent this. But if the hash includes a shared secret key _inside the
smartcard_, the attacker has to ask the smartcard to compute the hash and
that's orders of magnitude slower than a computer (and can be made
artificially slower by burning some CPU cycles inside the card doing RSA
key generations or something - it has no internal clock so you can't just
"sleep", but you can give it something time-consuming to do before it
computes the hash).

So here's the new setup I have in mind:

1. Each card can be provisioned with a "network key". This is a value that
all cards (and nodes) that are part of a given network share. It can only
be set, not read and can be used in the computation of the Descriptor
Cookie.

2. The descriptor cookie will be calculated as H ( network-key | timestamp
(rounded down to a full hour) | H (consensus) )

The terminal provides the current consensus hash and a timestamp. The card
checks that the timestamp is greater than the last timestamp it has used.
It then concatenates the secret network-key, the given timestamp and the
consensus hash and returns the hash of the result.
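
In code terms, a sketch of what the card would compute, with the
monotonic-timestamp check included (the hash function and the timestamp
encoding are my assumptions; the scheme above leaves H unspecified):

    import hashlib

    last_ts = 0   # persisted inside the card in practice

    def descriptor_cookie(network_key, timestamp, consensus):
        global last_ts
        if timestamp <= last_ts:
            raise ValueError("timestamp must increase")  # no going back
        last_ts = timestamp
        hour = timestamp - (timestamp % 3600)   # round down to a full hour
        # H( network-key | timestamp | H(consensus) )
        return hashlib.sha256(network_key +
                              hour.to_bytes(8, "big") +
                              hashlib.sha256(consensus).digest()).digest()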

This means a few things:

1. An attacker can no longer generate a hash independently or farm it out
to other computers. It has to ask a smartcard to do it. We can make this
arbitrarily slow since the operation is only meant to be done once an hour,
so I could make it take a minute per hash inside the card.

2. Generating a hash for a future date would lock you out until that
date/time (since decreasing timestamps will be refused by the card). You
could compute the correct hash for the current timestamp and consensus on
another card, but you would not be able to generate a signed descriptor
with that correct hash (you can't inject a hash computed somewhere else
into the signed descriptor).

3. The card doesn't have to parse the consensus - it just uses it as a
shared random value (the hash of the consensus).

Makes sense?

Thank you,
Razvan


On Tue, Jun 28, 2016 at 6:02 AM, Tim Wilson-Brown - teor  wrote:

>
> > On 28 Jun 2016, at 05:30, Razvan Dragomirescu <
> razvan.dragomire...@veri.fi> wrote:
> >
> > Thank you Tim,
> >
> > As long as a malicious authority cannot choose a hash that is identical
> to an older consensus hash, I think the system should be fine. In addition,
> I can have the the smartcard look at one of the valid-* dates in the
> consensus and hash that into the descriptor cookie as well - I'm guessing a
> rogue authority can mess with the consensus hash but cannot change the
> valid-after, valid-until, etc dates. If I enforce increasing dates (so that
> you cannot go back in time, once you've seen a consensus for Jan 2017 for
> instance you cannot sign another one from June 2016), if you attempt to
> pre-generate a signature for a future date, you lose connectivity until
> that particular date.
>
> >
> > On 28 Jun 2016, at 06:16, Razvan Dragomirescu <
> razvan.dragomire...@veri.fi> wrote:
> >
> > 1. I've just realized that hashing the current valid-* dates into the
> descriptor cookie doesn't help - those values are known to the attacker and
> he can tweak his vote to generate a certain hash regardless of that date.
> The rest of what I've said applies just fine.
>
> You could have your smart card parse a consensus and check the dates are
> on or after previous signed dates, before signing a descriptor for those
> dates. But text parsing is error-prone.
>
> > I also plan to enforce an upper limit on the number of RSA signatures
> the card can perform with a given key. SIM cards already do this to prevent
> brute force attacks.
>
> You might actually want to limit the number of signatures per hour as well.
> But no-one has statistics for the number of hidden service descriptors per
> service per hour, as far as I know.
> It's likely somewhere between 0 and 10.
>
> > If you don't have access to the smartcard and if you've somehow
> pre-generated some signed descriptors, those will only work for 1 hour (a
> very specific hour in the future that you've simulated consensus for and
> somehow tricked an authority into making the consensus hash be exactly the
> one you're expecting).
> >
> > What I like about the consensus (vs shared random value) is that it's
> regenerated every hour, so a successful attack would have very limited
> impact (1 hour sometime in the future). Shared random values are gener

Re: [tor-dev] is the consensus document unpredictable / unique?

2016-06-27 Thread Razvan Dragomirescu
Two clarifications/questions:

1. I've just realized that hashing the current valid-* dates into the
descriptor cookie doesn't help - those values are known to the attacker and
he can tweak his vote to generate a certain hash regardless of that date.
The rest of what I've said applies just fine.

2. How would a Directory Authority be able to tweak its vote to generate
multiple valid consensus documents? I'm not familiar with the voting
process (just read about it a bit at
http://jordan-wright.com/blog/2015/05/14/how-tor-works-part-three-the-consensus/
) . Can a rogue DA pretend that it has knowledge of additional relays that
the other DAs don't know about and tweak their fingerprints to try to match
a precomputed hash? Do the DAs simply merge all relay lists with no other
checks? Is there any legitimate situation where a single DA would know
about a given relay? Or am I missing some random data that the DA includes
in its vote that could be used for this?

Thank you,
Razvan

--
Razvan Dragomirescu
Chief Technology Officer
Cayenne Graphics SRL

On Mon, Jun 27, 2016 at 10:30 PM, Razvan Dragomirescu <
razvan.dragomire...@veri.fi> wrote:

> Thank you Tim,
>
> As long as a malicious authority cannot choose a hash that is identical to
> an older consensus hash, I think the system should be fine. In addition, I
> can have the the smartcard look at one of the valid-* dates in the
> consensus and hash that into the descriptor cookie as well - I'm guessing a
> rogue authority can mess with the consensus hash but cannot change the
> valid-after, valid-until, etc dates. If I enforce increasing dates (so that
> you cannot go back in time, once you've seen a consensus for Jan 2017 for
> instance you cannot sign another one from June 2016), if you attempt to
> pre-generate a signature for a future date, you lose connectivity until
> that particular date.
>
> I also plan to enforce an upper limit on the number of RSA signatures the
> card can perform with a given key. SIM cards already do this to prevent
> brute force attacks.
>
> If you don't have access to the smartcard and if you've somehow
> pre-generated some signed descriptors, those will only work for 1 hour (a
> very specific hour in the future that you've simulated consensus for and
> somehow tricked an authority into making the consensus hash be exactly the
> one you're expecting).
>
> What I like about the consensus (vs shared random value) is that it's
> regenerated every hour, so a successful attack would have very limited
> impact (1 hour sometime in the future). Shared random values are generated
> once per day, so if the attacker somehow guesses them successfully, he can
> pretend to be another node for a full day.
>
> As a second security layer, once the communication is established, the two
> nodes can negotiate a shared symmetrical key (based on the same RSA
> keypairs they use as permanent keys for hidden services or a different
> keypair). This way, a successful attacker can only launch a Denial of
> Service type of attack (preventing the legitimate node from getting the
> traffic) but cannot decrypt or encrypt traffic from/to that node.
>
> Thanks again,
> Razvan
>
>
> On Mon, Jun 27, 2016 at 9:02 AM, Tim Wilson-Brown - teor <
> teor2...@gmail.com> wrote:
>
>> Hi Razvan,
>>
>> > On 26 Jun 2016, at 07:52, Razvan Dragomirescu <
>> razvan.dragomire...@veri.fi> wrote:
>> >
>> > I couldn't find a detailed description of the Tor consensus, so I'm
>> checking that my understanding of it is correct. Basically, would it be
>> correct to assume that the consensus document (or a hash thereof) for a
>> date in the future is an unpredictable value that will also be unique to
>> all nodes inquiring about it at that time?
>>
>> The future values of consensuses can be influenced by each of the 9
>> directory authorities, individually. A malicious authority has ~5 minutes
>> between receiving other authorities' votes and creating its own to
>> calculate a vote that creates a consensus hash that achieves a desired
>> outcome.
>>
>> While it can't control the exact consensus hash, it can choose between
>> many possible hashes, only limited by the available computing time. And it
>> can farm out these computations to other computers.
>>
>> A shared random commit-and-reveal scheme, like the one in proposal 250,
>> gives each authority a single choice: to reveal, or not reveal. This means
>> that a malicious authority can only choose between two different output
>> hashes, rather than choosing between many millions of possible hashes.
>>
> This is why OnionNS switched from using a hash of the consensus to the
> shared random value in the consensus.

Re: [tor-dev] is the consensus document unpredictable / unique?

2016-06-27 Thread Razvan Dragomirescu
Thank you Tim,

As long as a malicious authority cannot choose a hash that is identical to
an older consensus hash, I think the system should be fine. In addition, I
can have the the smartcard look at one of the valid-* dates in the
consensus and hash that into the descriptor cookie as well - I'm guessing a
rogue authority can mess with the consensus hash but cannot change the
valid-after, valid-until, etc dates. If I enforce increasing dates (so that
you cannot go back in time, once you've seen a consensus for Jan 2017 for
instance you cannot sign another one from June 2016), if you attempt to
pre-generate a signature for a future date, you lose connectivity until
that particular date.

I also plan to enforce an upper limit on the number of RSA signatures the
card can perform with a given key. SIM cards already do this to prevent
brute force attacks.
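
The cap itself is a tiny amount of state; a sketch (the budget number is
arbitrary, and a real card would keep the counter in tamper-resistant
storage):

    import hashlib

    MAX_SIGS = 50_000    # arbitrary lifetime budget for one key
    sig_count = 0

    def guarded_sign(sign_fn, digest):
        global sig_count
        if sig_count >= MAX_SIGS:
            raise RuntimeError("signature budget for this key exhausted")
        sig_count += 1
        return sign_fn(digest)

    # Demo with a dummy signer standing in for the card's RSA engine.
    sig = guarded_sign(lambda d: b"sig:" + d, hashlib.sha256(b"msg").digest())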

If you don't have access to the smartcard and if you've somehow
pre-generated some signed descriptors, those will only work for 1 hour (a
very specific hour in the future that you've simulated consensus for and
somehow tricked an authority into making the consensus hash be exactly the
one you're expecting).

What I like about the consensus (vs shared random value) is that it's
regenerated every hour, so a successful attack would have very limited
impact (1 hour sometime in the future). Shared random values are generated
once per day, so if the attacker somehow guesses them successfully, he can
pretend to be another node for a full day.

As a second security layer, once the communication is established, the two
nodes can negotiate a shared symmetrical key (based on the same RSA
keypairs they use as permanent keys for hidden services or a different
keypair). This way, a successful attacker can only launch a Denial of
Service type of attack (preventing the legitimate node from getting the
traffic) but cannot decrypt or encrypt traffic from/to that node.
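
One way to realize that second layer is plain RSA key transport with the
permanent keys; a sketch using the Python `cryptography` package (reusing the
hidden-service keypair and the OAEP parameters are my assumptions):

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Stand-in for the peer's permanent hidden-service keypair.
    peer_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    session_key = os.urandom(32)               # chosen by the initiating node
    wrapped = peer_priv.public_key().encrypt(session_key, oaep)  # sent over
    assert peer_priv.decrypt(wrapped, oaep) == session_key       # recovered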

Thanks again,
Razvan


On Mon, Jun 27, 2016 at 9:02 AM, Tim Wilson-Brown - teor  wrote:

> Hi Razvan,
>
> > On 26 Jun 2016, at 07:52, Razvan Dragomirescu <
> razvan.dragomire...@veri.fi> wrote:
> >
> > I couldn't find a detailed description of the Tor consensus, so I'm
> checking that my understanding of it is correct. Basically, would it be
> correct to assume that the consensus document (or a hash thereof) for a
> date in the future is an unpredictable value that will also be unique to
> all nodes inquiring about it at that time?
>
> The future values of consensuses can be influenced by each of the 9
> directory authorities, individually. A malicious authority has ~5 minutes
> between receiving other authorities' votes and creating its own to
> calculate a vote that creates a consensus hash that achieves a desired
> outcome.
>
> While it can't control the exact consensus hash, it can choose between
> many possible hashes, only limited by the available computing time. And it
> can farm out these computations to other computers.
>
> A shared random commit-and-reveal scheme, like the one in proposal 250,
> gives each authority a single choice: to reveal, or not reveal. This means
> that a malicious authority can only choose between two different output
> hashes, rather than choosing between many millions of possible hashes.
>
> This is why OnionNS switched from using a hash of the consensus, to the
> shared random value in the consensus.
>
> > On 26 Jun 2016, at 08:18, Tom van der Woerdt  wrote:
> >
> > The consensus has signatures from all directory operators on it, and
> computing those ahead of time requires a lot of private keys. Because they
> also all contain the date, they're all unique. So yea, they're both unique
> and unpredictable.
>
> The signed part of the consensus is the (hash of) everything up until the
> first signature.
> So while the consensus eventually contains up to 9 signatures, and some
> legacy signatures, it's not created or initially distributed between
> authorities that way.
>
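
(For reference, an editorial sketch of hashing only the signed portion:
per dir-spec the digest runs from the first character up to and including
the first space after "directory-signature", SHA-1 historically, with a
SHA-256 variant. This is an illustration, not code from the thread.)

    import hashlib

    def signed_portion_digest(consensus_text):
        # the part the authorities actually sign: everything up to and
        # including the first space after "directory-signature"
        end = (consensus_text.index("directory-signature ")
               + len("directory-signature "))
        return hashlib.sha1(consensus_text[:end].encode()).hexdigest()
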
> There are a few reasons you shouldn't rely on the hash of the signatures:
> * while the consensus is signed by up to 9 authorities, each
> signature is only produced by 1 authority;
> * a client only needs 5 of the 9 signatures to use a consensus, so it's
> not guaranteed to have them all;
> * signatures are distributed encoded, in a PKCS1-padded format that
> ignores additional whitespace (and various other extraneous characters),
> so a malicious authority can control (some bits in) the hash of its own
> signature;
> * PKCS1.5 padding allows arbitrary pseudorandom inputs as part of the
> padding, so a malicious authority can try multiple values for this
> pseudorandom input until it gets a hash that it wants;
>
> A malicious au

Re: [tor-dev] is the consensus document unpredictable / unique?

2016-06-26 Thread Razvan Dragomirescu
A better link for the Mediatek Linkit 7688 board I'm using for PoC is
https://www.seeedstudio.com/item_detail.html?p_id=2573 . I'm also doing a
second PoC on a Raspberry Pi Zero -
https://www.raspberrypi.org/products/pi-zero/ - far more powerful than the
Linkit 7688 above and also cheaper, but a lot harder to find.

Razvan

On Sun, Jun 26, 2016 at 3:32 PM, Razvan Dragomirescu <
razvan.dragomire...@veri.fi> wrote:

> Thank you s7r, Tom,
>
> I'll try to explain what I'm doing - I'm working on something called
> SIM4Things - it's an Internet of Things project, giving Internet-connected
> objects a persistent, cryptographically secure identity and a way to reach
> similar objects. The closest analogy is the SIM card in the GSM / mobile
> world (hence the name :) ). The identity is actually an RSA keypair, stored
> in a tamper-resistant microSD form factor secure element (like this one
> https://www.swissbit.com/ps-100u/ ).
>
> The project does multiple things - first, it gives the node an identity -
> the RSA private key inside is used to sign a hidden service descriptor (a
> la OnionBalance) that is then published. As long as the device has access
> to the smartcard, it can sign descriptors. Once the card is removed, it can
> no longer do that.
>
> Second, using hidden services means that the devices become accessible at
> a single .onion address regardless of how they connect to the Internet and
> how many firewalls and/or NAT gateways they are behind.
>
> I'm very close to having a fully functional proof of concept on this tiny
> board
> https://labs.mediatek.com/site/global/developer_tools/mediatek_linkit_smart_7688/whatis_7688/index.gsp
> . It runs OpenWRT. A Python script using STEM connects to Tor and to the
> internal smartcard, fetches the hidden service descriptor as published by
> Tor and modifies / re-signs it to point to the address associated with its
> public key (keeps the introduction points, rewrites everything else). I
> know this will no longer work with Prop 224 but afaik Prop224 is still 1-2
> years away. Once the new descriptor is published, the node can talk to any
> other similar node over the Tor network.
>
> I want to offer the same guarantees that a regular SIM card inside your
> phone would offer - as long as you have the SIM, you can join the network
> and talk to other nodes. Once the SIM is gone, you should no longer be able
> to do so. It should also be impossible (or very hard) to clone such a SIM
> card and it should be impossible (or hard) to generate hidden service
> descriptors in advance (that would allow you to join the network even after
> the SIM has been removed).
>
> So, to summarize - I'm doing a SIM card for the Internet of Things. The
> SIM is a microSD tamper-resistant secure element with an RSA key inside. It
> gives the node an identity (strongly tied to the physical SIM) and a way to
> talk to similar nodes, with no central server or censorship opportunity.
>
> If you have any questions, feel free to ask.
>
> Thanks,
> Razvan
>
> --
> Razvan Dragomirescu
> Chief Technology Officer
> Cayenne Graphics SRL
>
> On Sun, Jun 26, 2016 at 1:29 AM, s7r  wrote:
>
>> Hello,
>>
>> If you hash the consensus entirely, yes that should produce unique
>> hashes every time that are unpredictable until the consensus is available.
>>
>> However, you cannot guarantee it will be the same value for everyone at
>> a given time, because consensus documents overlap and two clients/relays
>> might work properly but not use exactly the same consensus at a given
>> time (at 13:45 "Y" uses consensus document valid after 12:00 and "Z"
>> uses consensus document valid after 13:00. Both are within their
>> valid-until limits).
>>
>> I don't recommend that you rely on what you've suggested, because of
>> poor security properties and an overly complicated design with too many
>> possible failure cases to be worth it.
>> Very soon Tor will include a unique random value we call "Consensus
>> Shared Randomness [SRVALUE]" in every consensus; you can just use that.
>> This is proposal 250. This seems like a better, standardized upstream
>> solution with far better security properties, so I'd use it as a
>> cookie. This has the advantage of having the same unique value on all
>> nodes all the time: just pair the consensus SRVALUE with the consensus
>> valid-after timestamp, and everyone will be requesting the SRVALUE of
>> the same consensus, therefore producing the same result, or failing if
>> there is none.
>>
>> Last but not least, how will your system work in practice? The hidden
>> service private key will be stored on a smartcard and it cannot be
>> copied; it will only sign descriptors at the request of the host.

Re: [tor-dev] is the consensus document unpredictable / unique?

2016-06-26 Thread Razvan Dragomirescu
Thank you s7r, Tom,

I'll try to explain what I'm doing - I'm working on something called
SIM4Things - it's an Internet of Things project, giving Internet-connected
objects a persistent, cryptographically secure identity and a way to reach
similar objects. The closest analogy is the SIM card in the GSM / mobile
world (hence the name :) ). The identity is actually an RSA keypair, stored
in a tamper-resistant microSD form factor secure element (like this one
https://www.swissbit.com/ps-100u/ ).

The project does multiple things - first, it gives the node an identity -
the RSA private key inside is used to sign a hidden service descriptor (a
la OnionBalance) that is then published. As long as the device has access
to the smartcard, it can sign descriptors. Once the card is removed, it can
no longer do that.

Second, using hidden services means that the devices become accessible at a
single .onion address regardless of how they connect to the Internet and
how many firewalls and/or NAT gateways they are behind.

I'm very close to having a fully functional proof of concept on this tiny
board
https://labs.mediatek.com/site/global/developer_tools/mediatek_linkit_smart_7688/whatis_7688/index.gsp
. It runs OpenWRT. A Python script using STEM connects to Tor and to the
internal smartcard, fetches the hidden service descriptor as published by
Tor and modifies / re-signs it to point to the address associated with its
public key (keeps the introduction points, rewrites everything else). I
know this will no longer work with Prop 224 but afaik Prop224 is still 1-2
years away. Once the new descriptor is published, the node can talk to any
other similar node over the Tor network.

I want to offer the same guarantees that a regular SIM card inside your
phone would offer - as long as you have the SIM, you can join the network
and talk to other nodes. Once the SIM is gone, you should no longer be able
to do so. It should also be impossible (or very hard) to clone such a SIM
card and it should be impossible (or hard) to generate hidden service
descriptors in advance (that would allow you to join the network even after
the SIM has been removed).

So, to summarize - I'm doing a SIM card for the Internet of Things. The SIM
is a microSD tamper-resistant secure element with an RSA key inside. It
gives the node an identity (strongly tied to the physical SIM) and a way to
talk to similar nodes, with no central server or censorship opportunity.

If you have any questions, feel free to ask.

Thanks,
Razvan

--
Razvan Dragomirescu
Chief Technology Officer
Cayenne Graphics SRL

On Sun, Jun 26, 2016 at 1:29 AM, s7r  wrote:

> Hello,
>
> If you hash the consensus entirely, yes that should produce unique
> hashes every time that are unpredictable until the consensus is available.
>
> However, you cannot guarantee it will be the same value for everyone at
> a given time, because consensus documents overlap and two clients/relays
> might work properly but not use exactly the same consensus at a given
> time (at 13:45 "Y" uses consensus document valid after 12:00 and "Z"
> uses consensus document valid after 13:00. Both are within their
> valid-until limits).
>
> I don't recommend that you rely on what you've suggested, because of
> poor security properties and an overly complicated design with too many
> possible failure cases to be worth it.
> Very soon Tor will include a unique random value we call "Consensus
> Shared Randomness [SRVALUE]" in every consensus; you can just use that.
> This is proposal 250. This seems like a better, standardized upstream
> solution with far better security properties, so I'd use it as a
> cookie. This has the advantage of having the same unique value on all
> nodes all the time: just pair the consensus SRVALUE with the consensus
> valid-after timestamp, and everyone will be requesting the SRVALUE of
> the same consensus, therefore producing the same result, or failing if
> there is none.
>
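
(An editorial sketch of s7r's suggestion: extract the SRVALUE and the
valid-after timestamp as a pair; line formats per the proposal-250
additions to the consensus.)

    def srvalue_and_valid_after(consensus_text):
        valid_after = srvalue = None
        for line in consensus_text.splitlines():
            if line.startswith("valid-after "):
                valid_after = line.split(" ", 1)[1]
            elif line.startswith("shared-rand-current-value "):
                # line format: "shared-rand-current-value NumReveals Value"
                srvalue = line.split(" ")[2]
        return srvalue, valid_after
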
> Last but not least, how will your system work in practice? The hidden
> service private key will be stored on a smartcard and it cannot be
> copied; it will only sign descriptors at the request of the host. So far
> so good, but the smartcard has to stay plugged into the host all the
> time, or at least all the time the hidden service is running, so what's
> the security property here?
>
> If you think you can plug in the smartcard only rarely, just to sign
> descriptors, and keep it somewhere else physically most of the time, this
> will not work. In the wild, things happen that demand new descriptors to
> be signed at unpredictable times: introduction points go offline; HSDirs
> go offline; too many INTRODUCE2 cells are received on a single
> introduction point circuit.
>
> And if the private key is on a smartcard, and the smartcard is plugged

[tor-dev] is the consensus document unpredictable / unique?

2016-06-25 Thread Razvan Dragomirescu
Hello everyone,

I couldn't find a detailed description of the Tor consensus, so I'm
checking that my understanding of it is correct. Basically, would it be
correct to assume that the consensus document (or a hash thereof) for a
date in the future is an unpredictable value that will also be unique to
all nodes inquiring about it at that time?

I'm thinking of using a hash of the consensus document - like
http://171.25.193.9:443/tor/status-vote/current/consensus - as a descriptor
cookie in a hidden service. This way, an attacker cannot generate or
publish a hidden service descriptor for the future (one with a correct
cookie). A client can fetch the consensus at the time it wants to connect,
hash it, then use that as the descriptor cookie to determine the correct
descriptor id and decrypt the introduction point list.
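
As an editorial illustration, this is roughly how such a cookie feeds into
the v2 descriptor-id computation (rend-spec: descriptor-id =
H(permanent-id | H(time-period | descriptor-cookie | replica)), with
H = SHA-1, a 4-byte big-endian time period and a 1-byte replica):

    import hashlib
    import struct

    def descriptor_id(permanent_id, time_period, cookie, replica):
        # secret-id-part = H(time-period | descriptor-cookie | replica)
        secret_id_part = hashlib.sha1(
            struct.pack(">I", time_period) + cookie + bytes([replica])).digest()
        return hashlib.sha1(permanent_id + secret_id_part).digest()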

Does anyone see any issues with this? In my project, the hidden service
private key is on a smartcard, so it can't be copied, you can only ask the
smartcard to sign something with it for you - and I'm trying to prevent an
attacker from generating hidden service descriptors in advance, to be used
without the smartcard. If future descriptors depend on an unpredictable
future value (the hash of the consensus at that time), an attacker can only
generate descriptors for past and current time periods.

Thank you,
Razvan

--
Razvan Dragomirescu
Chief Technology Officer
Cayenne Graphics SRL
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] getting reliable time-period without a clock

2016-06-20 Thread Razvan Dragomirescu
Thank you Ivan,

I don't want to trust the host, that's why I'm looking for something that
the _network_ agrees upon, not something the host can provide or generate
itself. If the host fetches the Facebook hidden service descriptor and
provides it to the card, the card can check the signature on it, then look
at the time it was generated and compute the time period for it, then set
its internal "clock" to that time. Unless the host can trick Facebook into
using the wrong date (a future date), this should work fine.
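
A sketch of that check (host side; the card would only see the verified
publication time). The parsing assumes a v2 descriptor's publication-time
field, and the one-hour period is illustrative:

    from datetime import datetime, timezone

    def time_period_from_descriptor(descriptor_text, period_seconds=3600):
        # assumes the descriptor's signature has already been verified
        for line in descriptor_text.splitlines():
            if line.startswith("publication-time "):
                ts = datetime.strptime(line.split(" ", 1)[1],
                                       "%Y-%m-%d %H:%M:%S")
                ts = ts.replace(tzinfo=timezone.utc)
                return int(ts.timestamp()) // period_seconds
        raise ValueError("no publication-time in descriptor")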

An alternative that I don't want to use is to simply run a hidden service
of my own that simply returns a signed statement of time ("current time at
this server is HH:MM:DD"  + RSA Signature). But I don't want the system to
depend on a centralized service like this, if the network already agrees on
what is the current time (or time period), I want to use that.

I'll take a look at the doc you've linked to, thank you!

Razvan

--
Razvan Dragomirescu
Chief Technology Officer
Cayenne Graphics SRL


On Mon, Jun 20, 2016 at 7:51 PM, Ivan Markin  wrote:

> Hello Razvan,
>
> Razvan Dragomirescu:
> > I am working on a smartcard-based hidden service publishing solution
> > and since I'm tying the hidden service descriptor to the physical
> > smartcard, I want to make sure that the host is not asking the smartcard
> > to generate hidden service descriptors in advance, to be used when the
> > card is no longer inserted into the host/reader.
>
> Just for the record, currently it's a problem that is going to be solved
> by introducing shared randomness [1].
>
> > The smartcard has no internal clock or time source and it's not
> > supposed to trust the host it's inserted into, so I need an external
> > trusted source that indicates the current time period. I'm not 100%
> > familiar with the Tor protocol (minus the hidden service parts I've
> > been reading about recently), so is there any way to get a feel of what
> > the network thinks is the current time or the current time-period? An
> > idea would be to fetch the Facebook hidden service descriptor or some
> > other trusted 3rd party hidden service at a known address and see if
> > the time period given to the smartcard is valid for that Facebook
> > descriptor too. An operator could set up one or more trusted hidden
> > services to match against the time-period (inside the smartcard) before
> > it signs a given descriptor.
>
> Hmm, you seem to trust the untrusted host here, since you trust the tor
> daemon running on the host to fetch the clock.
> Anyway, you're proposing to offload more tor logic onto the smartcard,
> thus making it a trusted host. That seems unreasonable given the tiny
> amount of resources it has. The only function of a smartcard is to store
> private keys in a secure manner (never expose them, only use them).
>
> I think that a possible solution to this is to have some trusted
> air-gapped host with the smartcard that generates chunks of signed
> descriptors. This trusted host can check if the digest is legit. Then
> you can transfer the digests to a "postman" machine which just uploads
> these descriptors.
> [ha-ha, ironically, I'm currently creating such a setup right now. I'm
> transferring signed digests via UART]
>
>
> [1] https://gitweb.torproject.org/torspec.git/tree/proposals/250-commit-reveal-consensus.txt
> --
> Healthy bulbs,
> Ivan Markin
> ___
> tor-dev mailing list
> tor-dev@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
>
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


[tor-dev] getting reliable time-period without a clock

2016-06-20 Thread Razvan Dragomirescu
Hello everyone,

I am working on a smartcard-based hidden service publishing solution and
since I'm tying the hidden service descriptor to the physical smartcard, I
want to make sure that the host is not asking the smartcard to generate
hidden service descriptors in advance, to be used when the card is no
longer inserted into the host/reader.

The smartcard has no internal clock or time source and it's not supposed to
trust the host it's inserted into, so I need an external trusted source
that indicates the current time period. I'm not 100% familiar with the Tor
protocol (minus the hidden service parts I've been reading about recently),
so is there any way to get a feel of what the network thinks is the current
time or the current time-period? An idea would be to fetch the Facebook
hidden service descriptor or some other trusted 3rd party hidden service at
a known address and see if the time period given to the smartcard is valid
for that Facebook descriptor too. An operator could set up one or more
trusted hidden services to match against the time-period (inside the
smartcard) before it signs a given descriptor.

Is there an easier way? I _think_ I can download the current consensus and
check signatures on it (who signs it? how do I verify these signatures?),
then check the valid-after / valid-until fields inside. The problem with
that is its size - it's about 1.6MB, a bit hard for the card to digest in
one go but doable in small chunks.
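
Incremental hashing keeps the card-side buffer small; a host-side sketch
of feeding the consensus in chunks (the 2 KB chunk size is arbitrary - on
a card, each update would be one command):

    import hashlib

    h = hashlib.sha256()
    with open("cached-consensus", "rb") as f:
        for chunk in iter(lambda: f.read(2048), b""):
            h.update(chunk)  # on the card: one hash-update command per chunk
    print(h.hexdigest())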

Any hints would be appreciated. Thank you!
Razvan

--
Razvan Dragomirescu
Chief Technology Officer
Cayenne Graphics SRL
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] adding smartcard support to Tor

2016-06-03 Thread Razvan Dragomirescu
Hey Ivan, your hidden service appears to be down. Are there any mirrors of
the code or can you bring it back online? My project is starting to take
shape (took your advice and I'm using OpenPGP for now - may move to my own
implementation in the future, but I want to create a small MVP ASAP).

Thanks,
Razvan

--
Razvan Dragomirescu
Chief Technology Officer
Cayenne Graphics SRL





On Mon, May 23, 2016 at 11:25 PM, Ivan Markin  wrote:

> Hey Razvan and tor-dev@!
>
> Razvan Dragomirescu:
> > I wanted to revisit this subject and actually start writing some
> > code, but it looks like Ivan Markin's GitHub account is gone,
> > together with all the code there. Ivan, are your modifications to
> > OnionBalance still available anywhere?
>
> Thanks for your interest!
>
> Yeap, GitHub told me one day that I was blocked because of my
> suspiciousness. Since then I've moved my repos to a lil' cgit box that is
> available over the onions [1].
>
>
> So the code you're looking for is at `keycity` branch at [2]. Also you
> need to fetch a pythonic package called `keycity` from [3]. Basically
> `keycity` is a kind of abstraction to use keys from
> keyfiles/smartcards/whatever in the same manner. It depends on `pyscard`
> for SW codes handling and as a `pcscd` bindings. It also can use
> `scdaemon` from GnuPG if you don't want to use `pcsc-lite` for some reason.
>
> Please note that this version of OnionBalance uses different config
> layout and other incompatibilities I can't recall. Also note that
> despite the fact that smartcard support works fine for me it may not do
> the same for you.
> For me, Python packaging is a total mess with TMF (Too Many Files), as
> are the scripts/interpreter themselves. TMF makes everything really slow
> on machines running from flash cards (USB sticks, or BBB, Raspberry Pi,
> Soekris). This led me to develop `avant` [4][5], to which I'm going to
> add smartcard support someday soon (once there are free-software Go
> bindings for `pcsc-lite` that are not GPL'ed).
> So I want to say that my OnionBalance fork is not maintained and will
> not be. But I can help you out if you run into trouble getting it
> installed.
>
> [1] http://hartwellnogoegst.onion/
> [2] http://hartwellnogoegst.onion/onionbalance
> [3] http://hartwellnogoegst.onion/keycity-py
> [4] http://hartwellnogoegst.onion/avant
> [5] https://lists.torproject.org/pipermail/tor-onions/2016-April/000132.html
>
> --
> Happy hacking!
> Ivan Markin
>
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] adding smartcard support to Tor

2016-05-24 Thread Razvan Dragomirescu
Thank you Ivan, Donncha,

Regarding 1024-bit RSA support, take a look at
http://www.fi.muni.cz/~xsvenda/jcsupport.html - almost all JavaCard cards
support that.

I'm a Java developer but it looks like I'm going to have to switch to (and
learn) Python for this since almost all Tor utilities appear to only be
maintained in Python (and I don't feel like reinventing the wheel in Java).
We'll see...

Thanks Ivan for the .onion links, I'll take a look. I'm still collecting
data, testing hardware, etc. BTW, one of the cheapest options for this is
http://www.ftsafe.com/product/epass/eJavaToken - $12 at
http://javacardos.com/store/smartcard_eJavaToken.php . Unfortunately it has
a bug that prevents OpenPGP from running (something to do with signature
padding, I didn't look much into it). My plan is to write a very small
JavaCard-based applet to load onto the card - that only does RSA key
generation and signing, nothing else. Easy to write and easy to audit.
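
Host side, talking to such an applet can stay equally small. A sketch
using pyscard; the APDU header shown is the OpenPGP-card PSO:COMPUTE
DIGITAL SIGNATURE command for illustration - a custom applet would define
its own instruction:

    from smartcard.System import readers

    conn = readers()[0].createConnection()
    conn.connect()
    digest = bytes(20)  # placeholder for the hash/DigestInfo to be signed
    # CLA INS P1 P2 = 00 2A 9E 9A (OpenPGP PSO:COMPUTE DIGITAL SIGNATURE)
    apdu = [0x00, 0x2A, 0x9E, 0x9A, len(digest)] + list(digest) + [0x00]
    sig, sw1, sw2 = conn.transmit(apdu)
    assert (sw1, sw2) == (0x90, 0x00), "card returned an error"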

Thanks again,
Razvan

--
Razvan Dragomirescu
Chief Technology Officer
Cayenne Graphics SRL

On Mon, May 23, 2016 at 11:26 PM, Ivan Markin  wrote:

> Hello Donncha!
>
> Donncha Ó Cearbhaill:
> > However his code was integrating with a smartcard at a very low
> > level by sending APDU commands manually. I don't think that is the
> > best approach for compatibility.
> >
> > I think a better way would be to interface with the tokens via the
> > PKCS#11 protocol. The majority of smartcards and HSMs implement this
> >  standard and there are compatible implementations available for most
> >  operating systems. The Python pykcs11 module should be a helpful
> > start [1].
>
> Yeah, interfacing with a smartcard directly or via GnuPG scdaemon is not
> the best approach. But PKCS#11 is even worse. Much, much worse. This
> standard is so huge that no one can implement it right. It raises the
> entrance threshold so high that it will only be used by overly
> proprietary entities. The OpenPGP Card spec is pretty small, so that
> anyone can write code within an hour and start to interface with a card.
> So did I. At least I know what's going on under the hood, and this
> transparency and simplicity make the setup more secure.
>
> --
> Ivan Markin
> ___
> tor-dev mailing list
> tor-dev@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
>
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] estimating traffic/bandwidth needed to run a Tor node

2016-05-24 Thread Razvan Dragomirescu
Hello everyone and thanks for the quick answers,

To clarify this a bit, some of my nodes will be simple clients (not
offering any services to the network), others will be hidden services
(offering their services). They can also be a combination of the two
(services may contact other services over the Tor network).

Depending on the available bandwidth, some nodes may elect to become relays
as well - this has no immediate benefit to my project but node owners may
choose to give back to the community by running relays or even exit nodes.
This is an IoT project, so if nodes are hosted on a home network for
instance, on a high speed unmetered cable connection, with permanent power
available (not batteries), they can act as full nodes.

I'm trying to make sure though that the worst case scenario for nodes isn't
too bad for 3G or satellite connections (or maybe warn users of the amount
of traffic they're going to see).

Thank you,
Razvan

On Mon, May 23, 2016 at 1:50 AM, s7r  wrote:

> Razvan,
>
> Your email is confusing. To host a Hidden Service you do not need to be
> a Tor node - we call them relays in the common terminology.
>
> So, a relay relays traffic for Tor clients. This will consume as much as
> you give. You can throttle the relay bandwidth rate / burst or limit the
> traffic consumed by accounting per day/week/month, etc. After the speed
> and traffic limits, next limits are CPU, RAM and so on.
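
(Editorial note: the torrc options s7r refers to look like this - the
values are illustrative, the option names are standard tor:)

    BandwidthRate 100 KBytes
    BandwidthBurst 200 KBytes
    AccountingStart month 1 00:00
    AccountingMax 2 GBytes
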
>
> There is no sense in being a relay just to host a hidden service. In
> fact we do not recommend this, it's better to run the hidden service and
> relay service in two separate Tor processes if hosted on the same device.
>
> To only host a hidden service you can be a normal Tor client. This will
> not consume any traffic or relay traffic for other clients, but it will
> consume as follows:
> a) all traffic generated by that hidden service. This can be only
> estimated by you, since it can be 0, it can be 1 MB per week it can be
> 100 MB per day, etc.
>
> b) consensus data and microdescriptors for relays in the network. I
> don't have exact numbers for how much this is, but count on a few MB
> every 2 hours just to be safe.
>
> On 5/23/2016 12:56 AM, Razvan Dragomirescu wrote:
> > Hello everyone,
> >
> > I'm working on an Internet of Things project and Tor will be a part of
> > it (Hidden Services to be more precise). The nodes however may be
> > battery powered or have slow (or metered) Internet connectivity, so I'm
> > trying to estimate the traffic patterns for a fully functional Tor node.
> > Has this been measured at all? I mean how much traffic should I expect
> > per hour/day/month whatever in order to maintain a good "Tor citizen"
> > node, serving a very low traffic hidden service? I do remember reading
> > something about it needing 4MB per day or something like that, but I
> > can't seem to find that link or page anywhere now... :(.
> >
> > Any hints on where to find this type of info (or maybe how to measure it
> > myself) would be appreciated.
> >
> > Thank you,
> > Razvan
> >
> > --
> > Razvan Dragomirescu
> > Chief Technology Officer
> > Cayenne Graphics SRL
> >
> >
>
>
> ___
> tor-dev mailing list
> tor-dev@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
>
>
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


[tor-dev] estimating traffic/bandwidth needed to run a Tor node

2016-05-22 Thread Razvan Dragomirescu
Hello everyone,

I'm working on an Internet of Things project and Tor will be a part of it
(Hidden Services to be more precise). The nodes however may be battery
powered or have slow (or metered) Internet connectivity, so I'm trying to
estimate the traffic patterns for a fully functional Tor node. Has this
been measured at all? I mean how much traffic should I expect per
hour/day/month whatever in order to maintain a good "Tor citizen" node,
serving a very low traffic hidden service? I do remember reading something
about it needing 4MB per day or something like that, but I can't seem to
find that link or page anywhere now... :(.

Any hints on where to find this type of info (or maybe how to measure it
myself) would be appreciated.

Thank you,
Razvan

--
Razvan Dragomirescu
Chief Technology Officer
Cayenne Graphics SRL
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] adding smartcard support to Tor

2016-05-22 Thread Razvan Dragomirescu
Hello again,

I wanted to revisit this subject and actually start writing some code, but
it looks like Ivan Markin's GitHub account is gone, together with all the
code there. Ivan, are your modifications to OnionBalance still available
anywhere?

Thank you,
Razvan

--
Razvan Dragomirescu
Chief Technology Officer
Cayenne Graphics SRL

On Tue, Oct 20, 2015 at 10:05 PM, Ivan Markin  wrote:

> grarpamp:
> > Yes if you intend to patch tor to use a smartcard as a
> > cryptographic coprocessor offloading anything of interest
> > that needs signed / encrypted / decrypted to it. The card
> > will need to remain plugged in for tor to function.
>
> As I said before, the only thing that actually needs to be protected here
> is the "main"/"frontend" .onion identity. For that purpose all you need
> to do is to sign descriptors. And not to lose the key.
>
> grarpamp:
> > However how is "pin" on swissbit enabled?
> > If it goes from the host (say via ssh or keyboard or some
> > device or app) through usb port through armory to swissbit,
> > that is never secure.
>
> No, it will be secure. An adversary could sniff your PIN and sign
> whatever they want to, true. But revealing the PIN != revealing the key.
> In this case your identity key is still safe even if your PIN is
> "compromised".
>
> --
> Ivan Markin
>
>
> ___
> tor-dev mailing list
> tor-dev@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
>
>
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] adding smartcard support to Tor

2015-10-20 Thread Razvan Dragomirescu
Yes, that's precisely the point - if the card is stolen, the service is
stolen with it. I'm not trying to prevent that, I'm trying to _tie_ the
service to the card - whoever has the card runs the service. If you see
that the card is gone, you know your service is gone too. If the card is
still there, your service keys are safe.

Razvan

On Tue, Oct 20, 2015 at 10:59 PM, grarpamp  wrote:

> On Tue, Oct 20, 2015 at 3:05 PM, Ivan Markin  wrote:
> > No, it will be secure. An adversary could sniff your PIN and sign
> > whatever they want to, true. But revealing the PIN != revealing the key.
> > In this case your identity key is still safe even if your PIN is
> > "compromised".
>
> Yes the private key may be safe, but the smartcard may be stolen or
> removed from your sphere of access and reutilized with the sniffed
> pin, thus your onion or relay or node is no longer under your control,
> which was the point of the project. The enablement of the smartcard
> needs to be out of band, or use some strong one way challenge
> response like pki/totp/hotp/skey/opie.
> ___
> tor-dev mailing list
> tor-dev@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
>
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] adding smartcard support to Tor

2015-10-18 Thread Razvan Dragomirescu
Ivan, if I understand
https://onionbalance.readthedocs.org/en/latest/design.html#next-generation-onion-services-prop-224-compatibility
correctly, the setup I've planned will no longer work once Tor switches to
the next generation hidden services architecture, is this correct? Will
there be any backwards compatibility or will old hidden services simply
stop working at that point?

Thank you,
Razvan

--
Razvan Dragomirescu
Chief Technology Officer
Cayenne Graphics SRL

On Sun, Oct 18, 2015 at 12:08 PM, Razvan Dragomirescu <
razvan.dragomire...@veri.fi> wrote:

> Thank you Ivan!
>
> On Sun, Oct 18, 2015 at 1:44 AM, Ivan Markin  wrote:
>
>> Not exactly. The trick is that keys are not the same. For more details
>> have a look at the specifications [1]. There is a "permanent key"
>> ("holds the name", signs descriptors) and an "onion key" [2] for each
>> Introduction Point to communicate with the HS. So the "nameholder" key
>> ("permanent") is used only for signing descriptor with a list of IPs and
>> corresponding keys.
>>
>>
> Ah, I understand now! That actually makes perfect sense for my
> application. If I understand it correctly, I can simply let Tor register
> the HS by itself (using a random HS name/key), then fetch the introduction
> points and keys and re-register them with a different HS name - this would
> make the service accessible both on the random name that the host has
> chosen (without talking to the card) and on the name that the card holds
> the private key to (registered out of band, directly by a script that looks
> like OnionBalance).
>
>
>> > Regarding bandwidth, this is for an Internet of Things project, there's
>> > very little data going back and forth, I only plan to use the Tor
>> network
>> > because it's a very good way of establishing point to point circuits in
>> a
>> > decentralized manner. The alternative would be to use something like
>> PubNub
>> >  or Amazon's new IoT service, but those would depend on PubNub/Amazon.
>>
>>
>
>
>> If somebody already knows your
>> backend keys then certainly they know any of your data on this machine.
>>
>> No, not exactly :). There's still one thing they don't have access to -
> the smartcard! Even on a completely compromised backend machine, they still
> can't make the smartcard do something it doesn't want to do. In my project,
> it is the smartcard that drives the system - so a smartcard on one system
> can ask the host to connect it to a similar smartcard on a different system
> by giving it the HS name. The host establishes the connection, then the two
> smartcards establish their own encrypted channel over that connection. A
> compromised host can only deny service or redirect traffic somewhere else,
> but still can't make the smartcard accept injected traffic and can't
> extract the keys on it. I'm basically using Tor as a transport layer with
> NAT traversal and want to tie the HS names to smartcards so that I have a
> way to reach a _specific_ card/endpoint.
>
>  Thanks again!
> Razvan
>
> --
> Razvan Dragomirescu
> Chief Technology Officer
> Cayenne Graphics SRL
>
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] adding smartcard support to Tor

2015-10-18 Thread Razvan Dragomirescu
Thank you s7r! I think I'm going to start by simply using a mechanism
similar to OnionBalance - I'm going to let Tor do its HS registration with
a random HS name (and with a key that the host knows), then read the
introduction points and keys and re-register them (a la OnionBalance) with
a new HS name corresponding to the private key on the card. If I understand
this correctly, this will make the hidden service accessible both on the
random name and on the one the card knows the key to.

This way I don't have to modify Tor at all - I just let it do its thing,
then re-register out of band, like OnionBalance does. I just do it from the
same host instead of a frontend machine and I do it by signing with the
smartcard key (and generating the name based on that).
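
A rough sketch of that flow with stem; sign_with_card() is a hypothetical
placeholder for the smartcard call, HSFETCH/HSPOST are real control-port
commands in tor >= 0.2.7, and error handling is omitted:

    from stem.control import Controller

    with Controller.from_port(port=9051) as ctl:
        ctl.authenticate()
        # descriptor tor published under its own (random) onion name
        desc = ctl.get_hidden_service_descriptor("exampleonionaddr")
        # hypothetical: keep the intro points, re-sign under the card's key
        resigned = sign_with_card(str(desc))
        # publish the re-signed descriptor to the responsible HSDirs
        ctl.msg("+HSPOST\n%s\n." % resigned)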

Thanks again,
Razvan

--
Razvan Dragomirescu
Chief Technology Officer
Cayenne Graphics SRL

On Sun, Oct 18, 2015 at 3:31 AM, s7r  wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> Hello Razvan,
>
> What you are trying to achieve is possible. It can be done, but requires
> code to be written. If you are really interested in this feature, you can
> either sponsor someone to write the code for it or code it yourself.
>
> The 1024-bit RSA private key (hidden service key) stored in the
> HiddenServiceDir private_key file is used ONLY to sign descriptors
> containing the introduction points for that hidden service. The signed
> descriptors are then uploaded to the HSDirs responsible for that
> hidden service at that time. Nothing more. This hidden service key has
> nothing to do with the encrypted packets sent to that hidden service,
> that is something different which is unrelated to the topic.
>
> Here is how this could be done, in a very short example (1 feet
> overview):
>
> 1. Create a smartcard with your security parameters (password
> protected or not, etc.), which can hold an encrypted 1024 bit RSA
> private key and sign with it when requested.
>
> 2. Code Tor so that it can do the following:
>
> 2.1 - Can start without a private_key file in HiddenServiceDir, only
> with a known hostname, without exiting with a fatal error. Currently, if
> HiddenServiceDir is set, it won't start without this key and it will
> create a new key if there is none. A torrc setting like
> 'OfflineHiddenServiceKey 1' would make sense so Tor will know it needs
> to behave differently when enabled. It will be 0 by default.
>
> 2.2 - Can normally choose and rotate introduction points as it wants
> or needs to, but instead of signing the descriptors itself and
> publishing them, just send the generated and unsigned descriptors via
> ControlPort to another application or script.
>
> 2.3 - A separate application / script will take the unsigned
> descriptors from Tor's ControlPort, access the smartcard, sign the
> descriptors and return them to the Tor process the same - using
> ControlPort, so that they can be published to the HSDirs. Make sure
> the signing standard is respected as per Tor's specifications (bits,
> encoding, format, etc.).
>
> Easy to say, probably not so easy to implement. It will require a
> proposal, code, some additional control port commands, probably other
> stuff as well, but it is possible.
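
(Editorial note: under that sketch, a torrc might look as follows. Note
that OfflineHiddenServiceKey is s7r's hypothetical option, not something
tor implements:)

    HiddenServiceDir /var/lib/tor/my_service
    HiddenServicePort 80 127.0.0.1:8080
    OfflineHiddenServiceKey 1
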
>
> You can host the Tor instance handling the hidden service on another
> server and do a VPN or SSH tunnel between that server and the server
> having physical access to the smartcard, so they can talk to the
> ControlPort as described above. Or you can connect the both servers
> via other hidden services with authorization required so that each
> servers remains anonymously from the other. You can let your
> imagination go wild here and do plenty of things ...
>
> Hope this helps.
>
>
> On 10/18/2015 12:43 AM, Razvan Dragomirescu wrote:
> > Ivan, according to
> > https://www.torproject.org/docs/hidden-services.html.en (maybe I
> > misunderstood it), at Step 4, the client sends an _encrypted_
> > packet to the hidden service, so the hidden service needs to be
> > able to decrypt that packet. So the key on the card needs to be
> > used both for signing the HS registration and for decrypting the
> > packets during the initial handshake, isn't this correct?
> >
> > As far as I could tell, there is no way to tell Tor to use a
> > smartcard in any phase of the protocol; your OnionBalance tool
> > simply handles the registration by itself (outside of Tor).
> >
> > Regarding bandwidth, this is for an Internet of Things project,
> > there's very little data going back and forth, I only plan to use
> > the Tor network because it's a very good way of establishing point
> > to point circuits in a decentralized manner.

Re: [tor-dev] adding smartcard support to Tor

2015-10-18 Thread Razvan Dragomirescu
Thank you Ivan!

On Sun, Oct 18, 2015 at 1:44 AM, Ivan Markin  wrote:

> Not exactly. The trick is that keys are not the same. For more details
> have a look at the specifications [1]. There is a "permanent key"
> ("holds the name", signs descriptors) and an "onion key" [2] for each
> Introduction Point to communicate with the HS. So the "nameholder" key
> ("permanent") is used only for signing descriptor with a list of IPs and
> corresponding keys.
>
>
Ah, I understand now! That actually makes perfect sense for my application.
If I understand it correctly, I can simply let Tor register the HS by
itself (using a random HS name/key), then fetch the introduction points and
keys and re-register them with a different HS name - this would make the
service accessible both on the random name that the host has chosen
(without talking to the card) and on the name that the card holds the
private key to (registered out of band, directly by a script that looks
like OnionBalance).


> > Regarding bandwidth, this is for an Internet of Things project, there's
> > very little data going back and forth, I only plan to use the Tor network
> > because it's a very good way of establishing point to point circuits in a
> > decentralized manner. The alternative would be to use something like
> PubNub
> >  or Amazon's new IoT service, but those would depend on PubNub/Amazon.
>
>


> If somebody already knows your
> backend keys then certainly they know any of your data on this machine.
>
> No, not exactly :). There's still one thing they don't have access to -
the smartcard! Even on a completely compromised backend machine, they still
can't make the smartcard do something it doesn't want to do. In my project,
it is the smartcard that drives the system - so a smartcard on one system
can ask the host to connect it to a similar smartcard on a different system
by giving it the HS name. The host establishes the connection, then the two
smartcards establish their own encrypted channel over that connection. A
compromised host can only deny service or redirect traffic somewhere else,
but still can't make the smartcard accept injected traffic and can't
extract the keys on it. I'm basically using Tor as a transport layer with
NAT traversal and want to tie the HS names to smartcards so that I have a
way to reach a _specific_ card/endpoint.

 Thanks again!
Razvan

--
Razvan Dragomirescu
Chief Technology Officer
Cayenne Graphics SRL
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] adding smartcard support to Tor

2015-10-17 Thread Razvan Dragomirescu
Exactly - you ask the smartcard to decrypt your traffic (and sign data if
needed); it never tells you the key. It's a black box: it gets plaintext
input and gives you encrypted (or signed) output, without ever revealing
the key it uses. It can also generate the key internally (actually a
keypair - it stores the private key in secure memory, protected from
software _and_ hardware attacks) and give you the public key so that you
can publish it.

Remember, smartcards are not just storage, they are tamper-resistant
embedded computers. Very limited computers, true, but very good at keeping
secret keys secret, both from software attacks and from hardware attacks
(the drop-the-card-in-acid, logic-analyzer kind).

Razvan

--
Razvan Dragomirescu
Chief Technology Officer
Cayenne Graphics SRL

On Sat, Oct 17, 2015 at 11:40 PM, Ivan Markin  wrote:

> Ken Keys:
> >> > The point is that one can't[*] extract a private key from a smartcard
> >> > and because of that even if machine is compromised your private key
> >> > stays safe.
> > If the machine is going to use the HS key, the actual HS key has to be
> > visible to it.
>
> Nope. If the machine is going to use the HS key it can ask a smartcard
> to do so. Of course private key is visible to something/someone anyway.
> But in case of smartcards it is visible to a smartcard only.
>
> > An encrypted container holding a VM could use RSA-style
> > public/private key encryption so that it never has to see the private
> > key used to unlock it. You would still need to trust the VM, but the
> > encrypted container would allow you to establish a chain of custody.
>
> It's OK to unlock some encrypted block device/VM with some 'unpluggable'
> key. But it does nothing to protect your HS' identity.
>
> --
> Ivan Markin
> /"\
> \ /   ASCII Ribbon Campaign
>  Xagainst HTML email & Microsoft
> / \  attachments! http://arc.pasp.de/
>
>
> ___
> tor-dev mailing list
> tor-dev@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
>
>
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] adding smartcard support to Tor

2015-10-17 Thread Razvan Dragomirescu
Ivan, according to https://www.torproject.org/docs/hidden-services.html.en
(maybe I misunderstood it), at Step 4, the client sends an _encrypted_
packet to the hidden service, so the hidden service needs to be able to
decrypt that packet. So the key on the card needs to be used both for
signing the HS registration and for decrypting the packets during the
initial handshake, isn't this correct?

As far as I could tell, there is no way to tell Tor to use a smartcard in
any phase of the protocol; your OnionBalance tool simply handles the
registration by itself (outside of Tor).

Regarding bandwidth, this is for an Internet of Things project, there's
very little data going back and forth, I only plan to use the Tor network
because it's a very good way of establishing point to point circuits in a
decentralized manner. The alternative would be to use something like PubNub
 or Amazon's new IoT service, but those would depend on PubNub/Amazon.

Razvan

--
Razvan Dragomirescu
Chief Technology Officer
Cayenne Graphics SRL

On Sat, Oct 17, 2015 at 10:13 PM, Ivan Markin  wrote:

> Razvan Dragomirescu:
> > Thank you Ivan, I've taken a look but as far as I understand your project
> > only signs the HiddenService descriptors from an OpenPGP card. It still
> > requires each backend instance to have its own copy of the key (where it
> > can be read by an attacker). My goal is to have the HS private key
> > exclusively inside the smartcard and only sign/decrypt with it when
> needed
> > but never reveal it.An attacker should not be able to steal the key and
> > host his own HS at the same address - the address would be effectively
> tied
> > to the smartcard - whoever owns the smartcard can sign HS descriptors and
> > decrypt traffic with it, so he or she is the owner of the service.
>
> Yes, it still requires plain keys on the backend instances for decrypting
> traffic, sure. But you're not right about key "stealing" (copying). A HS
> address is calculated from the key that signs its descriptors. This key
> resides on the smartcard. It's already
> "the-address-would-be-effectively-tied-to-the-smartcard" situation there.
>
> I do not see any reason to decrypt traffic on a smartcard; if an attacker
> can copy your backend key, there is no need to decrypt anything - they
> already have access to the content on your instance. Also, backend
> instances' keys are disposable - you can change them seamlessly.
>
> P.S. Note the bandwidth issue if you decrypt all of the traffic on a
> smartcard (half-duplex, etc.).
>
> --
> Ivan Markin
> /"\
> \ /   ASCII Ribbon Campaign
>  Xagainst HTML email & Microsoft
> / \  attachments! http://arc.pasp.de/
>
>
> ___
> tor-dev mailing list
> tor-dev@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
>
>
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] adding smartcard support to Tor

2015-10-17 Thread Razvan Dragomirescu
Tamper resistance. And the fact that an attacker with access to the machine
running Tor can read your encrypted thumb drive (you need to decrypt it at
some point to load the key into the Tor process since the encrypted
thumbdrive doesn't run crypto algos internally). A smartcard is a small
embedded tamper-resistant _computer_ - you never ask it for the key, you
ask it to _decrypt_ something for you or _sign_ something for you, you can
never extract the key out of the card.

Razvan

--
Razvan Dragomirescu
Chief Technology Officer
Cayenne Graphics SRL


On Sat, Oct 17, 2015 at 9:36 PM, Ken Keys  wrote:

> What is the advantage of a smart card over a standard encrypted thumb
> drive?
>
> On 10/17/2015 11:19 AM, Razvan Dragomirescu wrote:
> > Thank you Ivan, I've taken a look but as far as I understand your
> > project only signs the HiddenService descriptors from an OpenPGP card.
> > It still requires each backend instance to have its own copy of the
> > key (where it can be read by an attacker). My goal is to have the HS
> > private key exclusively inside the smartcard and only sign/decrypt
> > with it when needed but never reveal it. An attacker should not be
> > able to steal the key and host his own HS at the same address - the
> > address would be effectively tied to the smartcard - whoever owns the
> > smartcard can sign HS descriptors and decrypt traffic with it, so he
> > or she is the owner of the service.
> >
> > Best regards,
> > Razvan
> >
> > --
> > Razvan Dragomirescu
> > Chief Technology Officer
> > Cayenne Graphics SRL
> >
> > On Sat, Oct 17, 2015 at 4:43 AM, Ivan Markin  wrote:
> >
> > Hello,
> > Razvan Dragomirescu:
> > > I am not sure if this has been discussed before or how hard it would
> > > be to implement, but I'm looking for a way to integrate a smartcard
> > > with Tor - essentially, I want to be able to host hidden service keys
> > > on the card. I'm trying to bind the hidden service to a hardware
> > > component (the smartcard) so that it can be securely hosted in a
> > > hostile environment as well as impossible to clone/move without
> > > physical access to the smartcard.
> >
> > I'm not sure that this solution is 100% for your purposes. But recently
> > I've added OpenPGP smartcard support to do exactly this into
> > OnionBalance [1]+[2]. What it does is just sign a HS descriptor using an
> > OpenPGP SC (via the 'Signature' or 'Authentication' key). [It's still a
> > pretty dirty hack, there isn't even any exception handling.] You can use
> > it by installing a "manager/front" service with your smartcard in it via
> > OnionBalance and balancing to your actual HS. There is no bandwidth
> > limiting (see the OnionBalance design). You can set up OB and an actual
> > HS on the same machine for sure.
> >
> > > I have Tor running on the USBArmory by InversePath (
> > > http://inversepath.com/usbarmory.html ) and have a microSD form factor
> > > card made by Swissbit (
> > > www.swissbit.com/products/security-products/overwiev/security-products-overview/
> > > ) up and running on it. I am a JavaCard developer myself and I have
> > > developed embedded Linux firmwares before but I have never touched the
> > > Tor source.
> >
> > There is a nice JavaCard applet by Joeri [3]. It's the same applet that
> > Yubikey is using. You can find a well-written tutorial on producing your
> > OpenPGP card at Subgraph [4].
> >
> > >
> > > Is there anyone that is willing to take on a side project doing this?
> > > Would it be just a matter of configuring OpenSSL to use the card (I
> > > haven't tried that yet)?
> >
> > I'm not sure that it is worth implementing card support in
> > little-t-tor itself. As I said, all the logic is about HS descriptor
> > signing. Python and other languages that provide readability will
> > provide security then.
> > I think/hope so.
> >
> > [1] https://github.com/mark-in/onionbalance
> > [2] https://github.com/mark-in/openpgpycard
> > [3] http://sourceforge.net/projects/javacardopenpgp/

Re: [tor-dev] adding smartcard support to Tor

2015-10-17 Thread Razvan Dragomirescu
Thank you Ivan, I've taken a look but as far as I understand your project
only signs the HiddenService descriptors from an OpenPGP card. It still
requires each backend instance to have its own copy of the key (where it
can be read by an attacker). My goal is to have the HS private key
exclusively inside the smartcard and only sign/decrypt with it when needed
but never reveal it. An attacker should not be able to steal the key and
host his own HS at the same address - the address would be effectively tied
to the smartcard - whoever owns the smartcard can sign HS descriptors and
decrypt traffic with it, so he or she is the owner of the service.

Best regards,
Razvan

--
Razvan Dragomirescu
Chief Technology Officer
Cayenne Graphics SRL

On Sat, Oct 17, 2015 at 4:43 AM, Ivan Markin  wrote:

> Hello,
> Razvan Dragomirescu:
> > I am not sure if this has been discussed before or how hard it would be
> > to implement, but I'm looking for a way to integrate a smartcard with
> > Tor - essentially, I want to be able to host hidden service keys on the
> > card. I'm trying to bind the hidden service to a hardware component (the
> > smartcard) so that it can be securely hosted in a hostile environment as
> > well as impossible to clone/move without physical access to the
> > smartcard.
>
> I'm not sure that this solution is 100% for your purposes. But recently
> I've added OpenPGP smartcard support to do exactly this into OnionBalance
> [1]+[2]. What it does is just sign a HS descriptor using an OpenPGP SC
> (via the 'Signature' or 'Authentication' key). [It's still a pretty dirty
> hack, there isn't even any exception handling.] You can use it by
> installing a "manager/front" service with your smartcard in it via
> OnionBalance and balancing to your actual HS. There is no bandwidth
> limiting (see the OnionBalance design). You can set up OB and an actual HS
> on the same machine for sure.
>
> > I have Tor running on the USBArmory by InversePath (
> > http://inversepath.com/usbarmory.html ) and have a microSD form factor
> > card made by Swissbit (
> > www.swissbit.com/products/security-products/overwiev/security-products-overview/
> > ) up and running on it. I am a JavaCard developer myself and I have
> > developed embedded Linux firmwares before but I have never touched the
> > Tor source.
>
> There is a nice JavaCard applet by Joeri [3]. It's the same applet that
> Yubikey is using. You can find a well-written tutorial on producing your
> OpenPGP card at Subgraph [4].
>
> >
> > Is there anyone that is willing to take on a side project doing this?
> > Would it be just a matter of configuring OpenSSL to use the card (I
> > haven't tried that yet)?
>
> I'm not sure that it is worth implementing card support in
> little-t-tor itself. As I said, all the logic is about HS descriptor
> signing. Python and other languages that provide readability will provide
> security then.
> I think/hope so.
>
> [1] https://github.com/mark-in/onionbalance
> [2] https://github.com/mark-in/openpgpycard
> [3] http://sourceforge.net/projects/javacardopenpgp/
> [4] https://subgraph.com/sgos/documentation/smartcards/index.en.html
>
> Hope it helps.
> --
> Ivan Markin
> /"\
> \ /   ASCII Ribbon Campaign
>  Xagainst HTML email & Microsoft
> / \  attachments! http://arc.pasp.de/
>
>
> ___
> tor-dev mailing list
> tor-dev@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
>
>
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] adding smartcard support to Tor

2015-10-17 Thread Razvan Dragomirescu
Thank you grarpamp, but that's not what I'm trying to prevent/achieve. I
simply want to host the private key for a hidden service inside a secure
element (a smartcard) to ensure that only the hardware that has direct
access to my smartcard can publish the descriptors for the service and
decrypt incoming packets. I do realize the host will have complete control
over the Tor instance and that's fine, I simply want to prevent it (or a
different host) from ever publishing this HS without having access to the
smartcard.

The idea is to tie the HS to the physical smart card - whoever holds the
smartcard can publish the service, once the card is removed, the service
moves with it.

An attacker (with or without physical access to the machine running Tor)
would not be able to extract any information that would allow him to
impersonate the service at a later time. Of course, he can change the
_current_ content or serve his own, but cannot permanently compromise the
service by reading its private key.

Thank you,
Razvan

--
Razvan Dragomirescu
Chief Technology Officer
Cayenne Graphics SRL

On Fri, Oct 16, 2015 at 1:56 AM, grarpamp  wrote:

> On Tue, Oct 13, 2015 at 4:08 PM, Razvan Dragomirescu
>  wrote:
> > essentially, I want to be able to host hidden service keys on the card.
> I'm
> > trying to bind the hidden service to a hardware component (the
> smartcard) so
> > that it can be securely hosted in a hostile environment as well as
> > impossible to clone/move without physical access to the smartcard.
>
> The host will have both physical and logical access to your
> process space, therefore you're compromised regardless
> of where you physically keep the keys or how you access
> them.
>
> Though there are trac tickets you can search for involving
> loading keys into the tor controller via a remote tunnel, without the
> need to mount or access physical devices in /dev.
> ___
> tor-dev mailing list
> tor-dev@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
>
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


[tor-dev] adding smartcard support to Tor

2015-10-13 Thread Razvan Dragomirescu
Hello,

I am not sure if this has been discussed before or how hard it would be to
implement, but I'm looking for a way to integrate a smartcard with Tor -
essentially, I want to be able to host hidden service keys on the card. I'm
trying to bind the hidden service to a hardware component (the smartcard)
so that it can be securely hosted in a hostile environment as well as
impossible to clone/move without physical access to the smartcard.

I have Tor running on the USBArmory by InversePath (
http://inversepath.com/usbarmory.html ) and have a microSD form factor card
made by Swissbit (
www.swissbit.com/products/security-products/overwiev/security-products-overview/
) up and running on it. I am a JavaCard developer myself and I have
developed embedded Linux firmwares before but I have never touched the Tor
source.

Is there anyone that is willing to take on a side project doing this? Would
it be just a matter of configuring OpenSSL to use the card (I haven't tried
that yet)?

Thank you,
Razvan

--
Razvan Dragomirescu
Chief Technology Officer
Cayenne Graphics SRL
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev