Hi Elias! 

Nice to meet you and thanks for your interest in this GSOC project! I’m 
Marcela, a mentor on this project as well. I was the main author on the CONIKS 
paper, so I can answer all of your paper-related questions. From some of the 
terminology you’re using and from some of the text you’ve quoted, it seems to 
me that you’ve been reading an older version of the paper, so I would refer you 
to our most recent and peer-reviewed paper published at USENIX Security 2015: 
https://www.cs.princeton.edu/~melara/static/pubs/sec15-paper-melara.pdf

I hope my explanations make sense, and please let us know if you have more 
questions!

Best,
Marcela


>> That's correct, though perhaps the project description
>> should emphasize that the priority is on the server and
>> client components.
>> 
> Ok. The CONIKS paper proposes that an identity provider A publishes its 
> commitments to others, and the user can then query multiple identity 
> providers to check whether they all have the same picture of A's commitment.
> So the functionality of auditing commitments would be a subset of the server 
> anyhow; I guess the auditors could then also just use the server component, 
> configured to act only as an auditor?!

Yes, we propose that the identity providers can act as auditors and 
cross-verify each other in this way, and clients may query any auditor when 
they are checking for equivocation. However, I should note that we intended 
this to be the *ideal deployment* (and I apologize if this doesn’t come through 
clearly enough in the paper). CONIKS also allows third-party auditors whose 
sole job it is to check that the chain of signed tree roots is linear. In 
practice, we expect that auditors and identity providers will be separate in 
most cases (at least until there is a standard for CONIKS protocols and data 
formats, but we’re not there yet). 

It’s not accurate to conflate auditing with being a key server, or say that 
auditing is a subset of the key server’s functionality. Auditors only need to 
keep records of identity providers’ signed tree roots, and they never download 
or record any key directory contents, so it doesn’t really make sense to equip 
auditors with key server functionality and “turn it off” when you’re not acting 
as a key server. Key servers have much richer and more complex functionality: 
they need to maintain significantly larger data structures and perform more 
cryptographic computations. This means that key servers have very different 
space and availability requirements than auditors do.
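
To make the distinction concrete, here is a rough sketch (in Java) of the only 
check a third-party auditor really has to perform: verifying that each newly 
published signed tree root (STR) is correctly signed and extends a linear hash 
chain. The class and field names below are made up for illustration; they are 
not taken from the reference implementation.

    import java.security.MessageDigest;
    import java.security.PublicKey;
    import java.security.Signature;
    import java.util.Arrays;

    // Hypothetical container for the only data an auditor needs to keep.
    class SignedTreeRoot {
        long epoch;          // epoch number this STR covers
        byte[] prevStrHash;  // hash of the previous epoch's STR
        byte[] serialized;   // the exact bytes the provider signed
        byte[] signature;    // provider's signature over `serialized`
    }

    class Auditor {
        // Returns true if `current` is validly signed and extends `previous`,
        // i.e. the provider's chain of STRs is still linear (no fork).
        static boolean extendsChain(SignedTreeRoot previous, SignedTreeRoot current,
                                    PublicKey providerKey) throws Exception {
            Signature verifier = Signature.getInstance("SHA256withRSA");
            verifier.initVerify(providerKey);
            verifier.update(current.serialized);
            if (!verifier.verify(current.signature))
                return false;

            // The new STR must commit to the hash of the previous one.
            byte[] expected = MessageDigest.getInstance("SHA-256")
                                           .digest(previous.serialized);
            return current.epoch == previous.epoch + 1
                    && Arrays.equals(current.prevStrHash, expected);
        }
    }

Note that there is no Merkle tree, no key directory, and no user data anywhere 
in this check; that is essentially all the state an auditor has to maintain.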

>> While I agree that would probably ease deployment
>> for those running ejabberd, it's not very helpful
>> in the general case. We'd like anyone to be able to
>> run an auditor. And, again, I should stress that the
>> other components are the priority and should make up the
>> bulk of the work.
>> 
> I see your point. As stated above: I'd try to design the server so it can be 
> configured to only act as an auditor.

See my point above. Auditing and maintaining keys isn’t just about configuring 
a server differently.


>> Sounds great. Again, thanks for your interest
>> in the project. Feel free to ping Marcela (masomel)
>> and I (arlolra) on irc in tor-dev
>> 
> Ok, thanks a lot, I will!
> Two questions came up in the last days:
> 
> 1. How would the CONIKS client code integrate with the ctypes-otr add-on? It 
> would probably be part of it? So should I also have a look to implement 
> client functionality in a C library and then use the calls via ctypes?

Arlo may have a better answer to this question.

> 2. While reading the paper I had some confusion about a minor point, namely 
> about the way CONIKS calculates the index of a user identity in the merkle 
> prefix tree. I know you neither wrote the paper nor the reference 
> implementation, but maybe you can shed some light onto this issue, even if it 
> is probably a very small one.

How the lookup index is calculated is actually not a minor point. It’s pretty 
important that this is done in a very specific way to ensure that 
authentication paths in the CONIKS Merkle tree can’t allow an adversary to 
easily determine which other usernames have key mappings in the tree. This is 
one of the main privacy features of CONIKS.


> On Page 4 the authors state:
> "We can implement a VUF using any deterministic, existentially unforgeable 
> signature scheme [42]. The signature scheme must be deterministic or else (in 
> our application) the identity provider could insert multiple bindings for a 
> user at different locations each with a valid authentication path. In 
> practice, this could be a classic RSA signature [51] (using a deterministic 
> padding scheme such as PKCS v. 1.5 [26]) or BLS “short signatures” [9], which 
> provide the best space efficiency. Many common discrete-log based signature 
> schemes such as DSA [30] are not immediately applicable as they require a 
> random nonce.
> In CONIKS, we generate the index for a user u as: i = h (sign_K (u))
> where h() is a one-way hash function and sign() is a deterministic, 
> existentially unforgeable signature scheme. The extra hash function is added 
> because some bits of sign_K (u) might otherwise leak through binding validity 
> proofs, and signature schemes are generally not unforgeable given some bits 
> of the valid signature. The hash function prevents this by ensuring the final 
> lookup index leaks no information about the value of sign_K (u).”
> 
> I understand that using the signature of the user identity ensures that no 
> information indicating its existence on the server is leaked. But why are 
> they hashing the signature again? RSA-signatures normally already consist of 
> a signed hash-value. Why would it be bad to leak bits of the signature?

The answer to this question is in the portion of the paper you quote: 
"signature schemes are generally not unforgeable given some bits of the valid 
signature.” There’s some crypto-jargon here, which you may not be familiar with 
if you haven't taken a crypto course. What this sentence is trying to say is 
that, given some bits of a valid signature, an adversary may be able to find 
some message or data and a corresponding digital signature which is valid and 
which has not been generated by the legitimate signer (in this case the 
identity provider). This is called an existential forgery, and it means that an 
adversary can create forged lookup indices if the signature isn’t hashed. The 
hash on top of the signature also ensures that the lookup index can’t be easily 
computed from the VUF.
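
To illustrate the structure of i = h(sign_K(u)), here is a minimal Java sketch. 
One caveat: Java's "SHA256withRSA" hashes the message internally before applying 
the deterministic PKCS#1 v1.5 padding, so strictly speaking this computes 
h(sign_K(h(u))); it is only meant to show the sign-then-hash structure, not to 
be the actual implementation.

    import java.nio.charset.StandardCharsets;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.MessageDigest;
    import java.security.PrivateKey;
    import java.security.Signature;

    public class LookupIndexSketch {
        // i = h(sign_K(u)): deterministic signature over the username,
        // then a one-way hash so no signature bits can leak through
        // the authentication paths in lookup proofs.
        static byte[] lookupIndex(String username, PrivateKey key) throws Exception {
            Signature signer = Signature.getInstance("SHA256withRSA"); // deterministic padding
            signer.initSign(key);
            signer.update(username.getBytes(StandardCharsets.UTF_8));
            byte[] vuf = signer.sign();
            return MessageDigest.getInstance("SHA-256").digest(vuf);
        }

        public static void main(String[] args) throws Exception {
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
            kpg.initialize(2048);
            KeyPair kp = kpg.generateKeyPair();
            byte[] index = lookupIndex("alice", kp.getPrivate());
            System.out.println("lookup index is " + index.length + " bytes");
        }
    }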


> Also, if I see this correctly, they don't do it in their reference 
> implementation 
> (https://github.com/coniks-sys/coniks-ref-implementation/blob/master/coniks_server/src/org/coniks/coniks_server/SignatureOps.java),
> instead they just use SHA256withRSA...
> So, am I misreading the paper? Or do they actually mean sign_K(h(u)), which 
> would be the normal 'hash and sign' operation of RSA? Or does it make sense 
> to rehash the signature and their own implementation is not consistent with 
> it?
> This is probably not an important point; however, I want to make sure I'm not 
> missing a piece concerning the security of the user identities.

Unfortunately, the reference implementation is still a work in progress 
(although I’ve had to take some time off from working on it due to other 
research projects), and is missing several features described in the paper. The 
SignatureOps class in the reference implementation simply implements wrappers 
around Java's signature functions, so that's not where the user lookup index 
is calculated. The lookup index calculation happens in the unameToIndex() 
function in coniks_server.ServerUtils. Now, I will note that the reference 
implementation currently *isn't* doing the proper index calculation, as it 
only hashes the username, so thank you for making me go back and double-check 
my code! But it would be relatively straightforward to change this and add the 
digital signature computation that needs to happen before the hashing. 
