----- Original Message -----
> Hi all,
> 
> I've just finished a big SSL certificate refactoring, whose goal was
> to decouple SSLCertLookup enough for me to write a standalone test
> harness for it.
> 
>       https://github.com/jpeach/trafficserver/tree/jpeach/ssl-linkage
> 
> Since there's a fair amount of code shuffling, it would be great if
> anyone could review it.

It is a big code drop indeed, and I would rather have seen it evolve in
smaller steps. At this size, the generated (optimized?) diff is anything
but optimized for easy reading. That said, comments (from me, anyway)
will be going to the list, not to GitHub, so we can keep focused.

Be prepared for stupid questions.

from the commit message:

    to release them when we reload the contexts. This also sames us

(s/sames/saves/, presumably.)

+struct SSLCertLookup : public ConfigInfo

Why is the destructor virtual?
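
(The usual reason would be deletion through a base pointer. A minimal
sketch of that case, with generic names, just to make the question
concrete; I haven't checked whether SSLCertLookup is actually released
that way:)

    #include <stdio.h>

    struct Base {
      // virtual: deleting through a Base* must run the derived destructor too
      virtual ~Base() { puts("~Base"); }
    };

    struct Derived : public Base {
      ~Derived() { puts("~Derived"); }  // skipped (undefined behaviour, even)
                                        // if ~Base() were not virtual
    };

    int main() {
      Base *p = new Derived();
      delete p;  // prints "~Derived", then "~Base"
      return 0;
    }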



+namespace ssl { namespace detail {

That's a very nice use of namespaces to pull out scoped_config!
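
For readers following along, roughly the shape of the pattern (this is my
own illustrative sketch, not the code from the branch):

    namespace ssl { namespace detail {

    // RAII wrapper: acquire a configuration snapshot on construction and
    // release it on destruction, so callers can't forget the release.
    template <typename Config, typename Params>
    struct scoped_config {
      scoped_config() : p(Config::acquire()) {}
      ~scoped_config() { Config::release(p); }

      const Params * operator->() const { return p; }
      operator const Params *() const { return p; }

    private:
      Params * p;
    };

    }} // namespace ssl::detail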


in
 struct SSLConfigParams : public ConfigInfo

You're making all members public: does that already break the ABI?
And if it does, why not go all the way and reorder them so they align nicely?
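
(To illustrate the reordering point with a generic example, nothing to do
with the actual SSLConfigParams members:)

    #include <stdint.h>

    // On a typical LP64 platform this layout pads after each small member:
    // 1 + 7(pad) + 8 + 1 + 7(pad) + 8 = 32 bytes.
    struct Unordered {
      char    a;
      int64_t b;
      char    c;
      int64_t d;
    };

    // Grouping members by size removes most of the padding:
    // 8 + 8 + 1 + 1 + 6(pad) = 24 bytes.
    struct Ordered {
      int64_t b;
      int64_t d;
      char    a;
      char    c;
    };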


SSLUtils.h

+#if !defined(OPENSSL_THREADS)
+#error Traffic Server requires a OpenSSL library that support threads

grammar aside, shouldn't this be caught at configure time, rather than at compile time?

Please add a @brief to the .h and .cc files.

Just a side note:

+    static const char hextab[16] = {
+      '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f'

This is the fourth time we define a hextab in the code ;)
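
If we ever consolidate them, something small like this in lib/ts would do
(purely a sketch; the names and the location are made up):

    // Hypothetical shared table and helper, not existing code.
    static const char ink_hextab[16] = {
      '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f'
    };

    // Hex-encode src[0..len) into dst, which must hold 2 * len + 1 bytes.
    static char *
    ink_hex_encode(char *dst, const unsigned char *src, unsigned len)
    {
      char *p = dst;
      for (unsigned i = 0; i < len; ++i) {
        *p++ = ink_hextab[src[i] >> 4];
        *p++ = ink_hextab[src[i] & 0x0f];
      }
      *p = '\0';
      return dst;
    }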



-    return SSL_TLSEXT_ERR_NOACK;
+  explicit
+  SSLAddressLookupKey(const IpEndpoint& ip)
+  {

only now do I realize that a number of our constructors would do well
with an explicit prefix...
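
For anyone wondering what it buys us: without explicit, a single-argument
constructor doubles as an implicit conversion, which is easy to trigger by
accident. A generic sketch (not the actual SSLAddressLookupKey interface):

    #include <iostream>
    #include <string>

    struct Key {
      // Without 'explicit', any std::string would silently convert to a Key.
      explicit Key(const std::string &host) : host_(host) {}
      std::string host_;
    };

    static void
    lookup(const Key &key)
    {
      std::cout << "looking up " << key.host_ << "\n";
    }

    int main()
    {
      std::string hostname = "foo.example.com";
      lookup(Key(hostname));  // fine: the conversion is spelled out
      // lookup(hostname);    // only compiles if the constructor is NOT explicit
      return 0;
    }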


Design question:

class SSLContextStorage becomes struct SSLContextStorage, but the
internal struct SSLEntry remains private. Is that the reason it
doesn't get the same treatment as scoped_config?


maybe this:

 reverse_dns_name(const char * hostname, char (&reversed)[TS_MAX_HOST_NAME_LEN+1])

would make more sense in hostdb?



+  box.check(wildcard.match("foo.com") == false, "foo.com is not a wildcard");
+  box.check(wildcard.match("*.foo.com") == true, "*.foo.com not a wildcard");
+  box.check(wildcard.match("bar*.foo.com") == false, "bar*.foo.com not a 
wildcard");
+  box.check(wildcard.match("*") == false, "* is not a wildcard");
+  box.check(wildcard.match("") == false, "'' is not a wildcard");

I'm confused, why are those not wildcards?


-    CACertFilename = CACertPath =
-    clientCertPath = clientKeyPath =
-    clientCACertFilename = clientCACertPath =
+    serverCACertFilename =
+    serverCACertPath =
+    clientCertPath =
+    clientKeyPath =
+    clientCACertFilename =
+    clientCACertPath =

+1 for unifying those names!


-  if (client_ctx)
+  if (client_ctx) {
     SSL_CTX_free(client_ctx);
-  client_ctx = NULL;
+    client_ctx = NULL;
   }

hah: Nice - if it's *already* NULL, why assign NULL to it?


+SSLNetProcessor::start(int number_of_ssl_threads)
+  // This initialization order matters ...
+  SSLInitializeLibrary();
+  SSLConfig::startup();

   if (HttpProxyPort::hasSSL()) {
-    // Only init server stuff if SSL is enabled in the config file
-    SSLCertLookup::startup();
+    SSLCertificateConfig::startup();

I'm confused, why do we startup() the SSLConfig even if we have no SSL?
Who calls SSLNetProcessor::start()?


Another side note: Has anyone considered borrowing httpd's idea of
uniquely numbering each error log message?

Inspired by things like:
+      SSLError("SSL_write");
which would look extremely unhelpful to me, if I were trying to debug
an issue ;)
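
Something in the spirit of httpd's APLOGNO(), purely as a sketch of the
idea (the macro name, format and number are all made up here):

    #include <stdio.h>

    // Tag every log call site with a unique, greppable ID; a small script
    // can hand out fresh numbers and check for duplicates.
    #define TS_LOGNO(n) "[TS-E" #n "] "

    // Stand-in for the real SSLError() so this compiles on its own.
    static void
    SSLError(const char *msg)
    {
      fprintf(stderr, "%s\n", msg);
    }

    int main()
    {
      // Instead of the bare SSLError("SSL_write"):
      SSLError(TS_LOGNO(01023) "SSL_write failed");
      // logs "[TS-E01023] SSL_write failed", which users can actually search for.
      return 0;
    }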


(After almost an hour and a half, I am skipping most of
iocore/net/SSLUtils.cc: it's all just added to a single file, and it's
hard to tell what was there before and what was refactored. This
would've been better as many small commits.)

+SSLDebugBufferPrint(const char * tag, const char * buffer, unsigned buflen, const char * message)
+    for (unsigned ii = 0; ii < buflen; ii++) {
+      putc(buffer[ii], stdout);
+    }
+    putc('\n', stdout);
why do we use putc here?
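
For comparison, the form I would have expected, assuming the only reason
for the loop is that the buffer is not NUL-terminated (a sketch, not the
actual SSLDebugBufferPrint() code):

    #include <stdio.h>

    // Write a non-NUL-terminated buffer in one call instead of one putc() per byte.
    static void
    print_buffer(const char *buffer, unsigned buflen)
    {
      fwrite(buffer, 1, buflen, stdout);
      putc('\n', stdout);
    }

    int main()
    {
      const char msg[] = {'h', 'e', 'l', 'l', 'o'};  // no trailing '\0'
      print_buffer(msg, sizeof(msg));
      return 0;
    }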


I'm noticing that you're abstracting away a lot of OpenSSL's
"A"PI; I'll have more to say about that another time.


Sadly, we have custom configuration parsing here, just as we do
everywhere else :c


+  // On Darwin, fail the tests if we have any memory leaks.
+#if defined(darwin)
+  if (system("xcrun leaks test_certlookup") != 0) {
+    RegressionTest::final_status = REGRESSION_TEST_FAILED;
+  }
+#endif

Wouldn't it be a good idea to do this in general? That is, run valgrind
(or whichever tool is available) over the regression test suite when
it's enabled, e.g. if we build with --enable-debug && --enable-tests.
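
A sketch of what I mean, mirroring the Darwin block above (the guards,
flags and the stand-in for RegressionTest::final_status are illustrative
only):

    #include <stdlib.h>

    static int final_status = 0;  // stand-in for RegressionTest::final_status

    int main()
    {
    #if defined(__linux__) && defined(DEBUG)
      // Re-run the test binary under valgrind when the build enabled debugging.
      if (system("valgrind --leak-check=full --error-exitcode=1 ./test_certlookup") != 0) {
        final_status = 1;  // REGRESSION_TEST_FAILED in the real harness
      }
    #endif
      return final_status;
    }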


> thanks,
> James

That's all from me.

o/

-- 
Igor Galić

Tel: +43 (0) 664 886 22 883
Mail: i.ga...@brainsware.org
URL: http://brainsware.org/
GPG: 6880 4155 74BD FD7C B515  2EA5 4B1D 9E08 A097 C9AE
