On 28.06.2016 at 18:06, therealnewo...@gmail.com wrote:
On Tue, Jun 28, 2016 at 11:51 AM, Rainer Jung <rainer.j...@kippdata.de> wrote:
On 28.06.2016 at 16:07, Mark Thomas wrote:

On 28/06/2016 12:28, Mark Thomas wrote:

On 28/06/2016 11:34, Rainer Jung wrote:

On 28.06.2016 at 11:15, Mark Thomas wrote:


<snip />

Index: src/ssl.c
===================================================================
--- src/ssl.c    (revision 1750259)
+++ src/ssl.c    (working copy)
@@ -420,6 +420,10 @@
     return psaptr->PSATOLD;
 #elif defined(WIN32)
     return (unsigned long)GetCurrentThreadId();
+#elif defined(DARWIN)
+    uint64_t tid;
+    pthread_threadid_np(NULL, &tid);
+    return (unsigned long)tid;
 #else
     return (unsigned long)(apr_os_thread_current());
 #endif


I want to do some similar testing for Linux before adding what I suspect
will be a very similar block using gettid().
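An untested sketch of what I'd expect that Linux block to boil down to,
assuming we call the syscall directly because glibc does not ship a
gettid() wrapper; the ssl_thread_id_linux() helper name below is purely
illustrative. Run standalone, it also shows how the gettid() values
compare to the pthread_self() addresses:

/* Sketch only, Linux: fetch the kernel thread ID via syscall(2). gettid()
 * has no glibc wrapper, unlike pthread_self(), which returns a large,
 * pointer-aligned value. */
#include <sys/syscall.h>
#include <unistd.h>
#include <pthread.h>
#include <stdio.h>

/* Hypothetical helper mirroring what a "#elif defined(__linux__)" branch
 * in the thread-id function might return. */
static unsigned long ssl_thread_id_linux(void)
{
    return (unsigned long)syscall(SYS_gettid);
}

static void *worker(void *arg)
{
    (void)arg;
    printf("worker: gettid=%lu pthread_self=%#lx\n",
           ssl_thread_id_linux(), (unsigned long)pthread_self());
    return NULL;
}

int main(void)
{
    pthread_t t;

    printf("main:   gettid=%lu pthread_self=%#lx\n",
           ssl_thread_id_linux(), (unsigned long)pthread_self());
    pthread_create(&t, NULL, worker, NULL);
    pthread_join(t, NULL);
    return 0;                          /* build with: cc -pthread ... */
}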


We could either add something to configure.in. Untested:

Index: native/configure.in
===================================================================
--- native/configure.in (revision 1750462)
+++ native/configure.in (working copy)
@@ -218,6 +218,9 @@
     *-solaris2*)
         APR_ADDTO(TCNATIVE_LIBS, -lkstat)
         ;;
+    *linux*)
+        APR_ADDTO(CFLAGS, -DTCNATIVE_LINUX)
+        ;;
     *)
         ;;
 esac


and then use a

#ifdef TCNATIVE_LINUX

or we copy some other more direct Linux check from e.g. APR:

#ifdef __linux__

The latter looks simpler, but the version above is based on all the
logic put into config.guess.


I'd go with the __linux__ option as that is consistent with what we
already use in os/unix/system.c

I'm not against the change to configure.in; I just think we should be
consistent with how we do this throughout the code base.


I've confirmed that the same problem occurs with hash bucket selection
on Linux and that switching to gettid() fixes that problem.

I'm going to go ahead with the 1.2.8 release shortly. We can continue to
refine this as necessary and have a more complete fix in 1.2.9.


I did a quick check on Solaris. apr_os_thread_current() uses pthread_self()
on Solaris just as it does on Linux (in fact on any Unix-type OS). But unlike
Linux, where pthread_self() returns an address that is 32- or 64-bit aligned
depending on the address size, on Solaris you get an increasing number: 1 for
the first thread, incremented by one for each subsequent thread. Thread IDs
are not reused within the same process, even after a thread has finished, but
thread IDs are common between different processes because they always start
at 1. So Solaris should be fine as-is.
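To make the hash bucket point above concrete: IDs that are all multiples of
a large power of two (as the pthread_self() addresses on Linux tend to be)
collapse into very few buckets when the bucket index is a modulo over a
power-of-two bucket count, while small sequential IDs (gettid()-style, or
the Solaris counters) spread out evenly. A throwaway sketch with made-up
numbers; the 32 buckets and the address spacing are illustrative only, not
what tcnative or OpenSSL actually use:

/* Sketch: compare bucket distribution for aligned, address-like IDs versus
 * small sequential IDs, using a simple id % N_BUCKETS bucket choice. */
#include <stdio.h>

#define N_THREADS 16
#define N_BUCKETS 32   /* illustrative power-of-two bucket count */

int main(void)
{
    unsigned long i;

    printf("address-like IDs (aligned, pthread_self-style):\n");
    for (i = 0; i < N_THREADS; i++) {
        unsigned long id = 0x10000000UL + i * 0x4000UL; /* made-up spacing */
        printf("  id=%#lx -> bucket %lu\n", id, id % N_BUCKETS);
    }

    printf("small sequential IDs (gettid-style):\n");
    for (i = 0; i < N_THREADS; i++) {
        unsigned long id = 12345UL + i;                 /* made-up base */
        printf("  id=%lu -> bucket %lu\n", id, id % N_BUCKETS);
    }
    return 0;
}

The first loop lands every ID in the same bucket; the second spreads them
across 16 different buckets.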

Does the value have a cap? If not, Solaris will just continue to
use more and more memory as threads are created over the lifetime of
the server.

No cap. I think everyone agrees that cleaning up at thread death is still
needed as a further improvement. Whether there's a problem without it
depends on the executor size and on whether the executor actually destroys
and recreates threads (and thus uses more IDs than the maximum thread count).
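One possible mechanism for that cleanup, purely as a sketch (this is not
what tcnative does today, and the key name and printf are illustrative
only): register a pthread_key_t with a destructor and let the destructor
drop the exiting thread's entry from the ID-keyed table.

/* Sketch: per-thread cleanup at thread death via a TLS key destructor.
 * The destructor runs when a thread that set the key terminates. */
#include <pthread.h>
#include <stdio.h>

static pthread_key_t cleanup_key;

static void thread_exit_cleanup(void *value)
{
    /* 'value' is whatever the thread stored, e.g. the handle needed to
     * remove this thread's entry from the ID-keyed table. */
    printf("cleaning up entry %p at thread exit\n", value);
}

static void *worker(void *arg)
{
    /* Any non-NULL value arms the destructor for this thread. */
    pthread_setspecific(cleanup_key, arg);
    return NULL;
}

int main(void)
{
    pthread_t t;
    int dummy = 42;

    pthread_key_create(&cleanup_key, thread_exit_cleanup);
    pthread_create(&t, NULL, worker, &dummy);
    pthread_join(t, NULL);   /* destructor has run by the time join returns */

    pthread_key_delete(cleanup_key);
    return 0;
}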

Regards,

Rainer
