jarsToSkip/jarsToScan

2020-04-04 Thread Jerry Malcolm
I've encountered some very strange behavior in a new TC instance.  It's 
not actually 'new'.  I cloned an Amazon EC2 instance that had a fully 
functional Apache HTTPD/Tomcat.  I replaced the domain on the clone with a 
new domain.  Otherwise, nothing was changed.  I use JSTL extensively on 
both the original domain and the new domain.  When I brought up TC on the 
clone EC2 with the new domain, I got an error on the first JSP saying it 
couldn't resolve the JSTL taglib reference.  I'm familiar with all of the 
Google posts about this message.  But all of the responses I found said to 
make sure I had the JSTL 1.2 taglib jars, etc.  I verified that the jars 
were there, and I definitely had the right version.


After several hours, I remembered to check the jarsToSkip and jarsToScan 
properties in catalina.properties.  I hadn't touched that file in months, 
and the clone EC2 was using the same version that was working on the 
original EC2.  In all of my TC instances, I use jarsToSkip=*, and I list 
out the specific jars with TLDs in jarsToScan.  When I removed the 
jarsToSkip entry, it took forever to boot, as expected, but this time it 
did find the JSTL taglib jars and everything worked.  I then changed 
jarsToSkip to the massive list of jars in my application that don't have 
tag definitions.  But when I listed the jars out, the scanner ignored the 
list and still processed the listed jars, continually reminding me to add 
each one to the skip list.


The skip/scan lists are working fine on all of the other TC installations 
I manage, but they are not working on this one new installation.  I've 
tried several variations of the skip/scan configuration.  The only way it 
seems to work as expected is if the skip list is "*" or if there is no 
skip list at all.  Listing individual jars in either list is simply 
ignored.


This is on 8.5.x.  I did a yum update, so I assume it's close to the 
latest point release of 8.5.  Maybe I'll have a clearer mind tomorrow 
and see something obvious.  But right now, it's making no sense.  And 
the fact that it started failing with an untouched months-old 
catalina.properties file that had been working is a real point of 
confusion.  Is there any additional debug info I can turn on that shows 
the skip/scan lists being processed?  I've already set 
org.apache.jasper.servlet.TldScanner.level = FINEST.  It's not showing 
anything other than listing the jars it scans and the results.  What 
could be happening to cause the jar scanner to act wonky?
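
For reference, this is the snippet I've been experimenting with in 
conf/logging.properties to get more scanner output (the StandardJarScanner 
line is a guess on my part based on the package name, and the 
ConsoleHandler level has to allow FINE through or nothing extra shows up):

org.apache.jasper.servlet.TldScanner.level = FINEST
org.apache.tomcat.util.scan.StandardJarScanner.level = FINE
java.util.logging.ConsoleHandler.level = FINE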


Here's a subset of jarsToSkip from my catalina.properties (these jars are 
NOT being skipped):


tomcat.util.scan.StandardJarScanFilter.jarsToSkip=\
  annotations-api.jar,\
  ant-junit*.jar,\
  ant-launcher.jar,\
  ant.jar,\
  asm-*.jar,\
  aspectj*.jar,\
  bootstrap.jar,\
<>
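
And for completeness, here's a sketch of the scan side I normally pair 
with jarsToSkip=* (the exact jar names vary by app; the JSTL ones below 
are just an example of the pattern):

tomcat.util.scan.StandardJarScanFilter.jarsToSkip=*
tomcat.util.scan.StandardJarScanFilter.jarsToScan=\
  taglibs-standard-spec-*.jar,\
  taglibs-standard-impl-*.jar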





[OT] Re: JNI memory leak?

2020-04-04 Thread Mark Thomas
On April 4, 2020 7:26:05 PM UTC, calder  wrote:
>On Sat, Apr 4, 2020, 14:14 Frank Tornack  wrote:
>
>> Good evening,
>> I have a question about your e-mail address. Why does the address end
>> on com.INVALID? How do you get such an address?
>>
>
>That question is off topic.

Subject line adjusted accordingly.

>The invalid is to avoid spam email

No, it isn't.  And, to sidetrack for a moment, it is very unhelpful to state 
something as a fact when it is, at best, an educated guess.  Especially when, 
as in this case, that guess is wrong.  Guesses can be acceptable responses to 
questions on this list, but it must be made clear to readers that it is a guess.

The .INVALID is added by the ASF mailing list software (strictly, a custom 
extension written by the ASF does this) when the originator posts from a domain 
that has a strict SPF record.  If the ASF didn't do this, recipients that check 
SPF records would reject the mail, as the originator's domain does not list the 
ASF mail servers as permitted senders.

In short, .INVALID is added to make sure the message is received by all 
subscribers.
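
For illustration only (not any particular ASF domain), a strict SPF record is 
just a DNS TXT entry that ends in "-all":

example.com.   IN TXT   "v=spf1 mx ip4:203.0.113.10 -all"

Mail for that domain relayed through a server not listed in the record fails 
the SPF check, which is what would happen to list traffic if the sender 
address were left untouched.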

Mark




Re: JNI memory leak?

2020-04-04 Thread calder

On Sat, Apr 4, 2020, 14:14 Frank Tornack  wrote:

> Good evening,
> I have a question about your e-mail address. Why does the address end
> on com.INVALID? How do you get such an address?
>

That question is off topic.

The invalid is to avoid spam email.


Re: JNI memory leak?

2020-04-04 Thread Frank Tornack
Good evening,
I have a question about your e-mail address. Why does the address end
in com.INVALID? How do you get such an address?

Sorry for the interjected question,

On Saturday, 04.04.2020, at 01:48, Mark Boon wrote:
> For the past few months we’ve been trying to trace what looks like
> gradual memory creep. After some long-running experiments it seems
> due to memory leaking when
> jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, JNICallType,
> _jmethodID*, JNI_ArgumentPusher*, Thread*) is invoked. Somewhere.
> 
> My environment is Tomcat running a proxy webapp. It does TLS
> termination,  authentication and then forwards the call to local
> services. It doesn’t do much else, it’s a relatively small
> application.
> 
> Some (possibly relevant) versions and config parameters:
> Tomcat 8.5
> Java 8u241 (Oracle)
> Heap size = 360Mb
> MAX_ALLOC_ARENA=2
> MALLOC_TRIM_THRESHOLD_=250048
> jdk.nio.maxCachedBufferSize=25600
> 
> We couldn’t find any proof of memory leaking on the Java side.
> When we turn on NativeMemoryTracking=detail and we take a snapshot
> shortly after starting, we see (just one block shown):
> 
> [0x03530e462f9a] JNIHandleBlock::allocate_block(Thread*)+0xaa
> [0x03530e3f759a] JavaCallWrapper::JavaCallWrapper(methodHandle,
> Handle, JavaValue*, Thread*)+0x6a
> [0x03530e3fa000] JavaCalls::call_helper(JavaValue*,
> methodHandle*, JavaCallArguments*, Thread*)+0x8f0
> [0x03530e4454a1] jni_invoke_static(JNIEnv_*, JavaValue*,
> _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*)
> [clone .isra.96] [clone .constprop.117]+0x1e1
>  (malloc=33783KB type=Internal #110876)
> 
> Then we run it under heavy load for a few weeks and take another
> snapshot:
> 
> [0x03530e462f9a] JNIHandleBlock::allocate_block(Thread*)+0xaa
> [0x03530e3f759a] JavaCallWrapper::JavaCallWrapper(methodHandle,
> Handle, JavaValue*, Thread*)+0x6a
> [0x03530e3fa000] JavaCalls::call_helper(JavaValue*,
> methodHandle*, JavaCallArguments*, Thread*)+0x8f0
> [0x03530e4454a1] jni_invoke_static(JNIEnv_*, JavaValue*,
> _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*)
> [clone .isra.96] [clone .constprop.117]+0x1e1
>  (malloc=726749KB type=Internal #2385226)
> 
> While other blocks also show some variation, none show growth like
> this one. When I do some math on the number (726749KB - 33783KB) /
> (2385226 – 110876) it comes down to a pretty even 312 bytes per
> allocation.
> And we leaked just under 700Mb. While not immediately problematic,
> this does not bode well for our customers who run this service for
> months.
> 
> I’d like to avoid telling them they need to restart this service
> every two weeks to reclaim memory. Has anyone seen something like
> this? Any way it could be avoided?
> 
> Mark Boon
> 
> 
> 





Re: JNI memory leak?

2020-04-04 Thread Mark Boon
I don't have 'proof' that Tomcat is to blame. Hence the question mark. All I 
have managed is to narrow it down to this NMT data, which is not very 
informative. I hoped someone could give me an idea of how or where to 
investigate further, or whether someone had run into this before.

The webapp's connector uses Http11NioProtocol. My understanding is that it uses 
direct byte buffers backed by native memory for the NIO channels. I don't know 
for sure whether that memory gets allocated through a JNI call, but that was my 
assumption.
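
For anyone who wants to check the same thing, here is a minimal sketch of 
reading the direct buffer pool via the standard platform MXBeans. Nothing in 
it is Tomcat-specific, and the class name is just for illustration:

import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

public class DirectBufferReport {
    public static void main(String[] args) {
        // The "direct" and "mapped" pools report the buffer count plus the
        // native memory those buffers currently hold.
        for (BufferPoolMXBean pool :
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.printf("%s: count=%d used=%dKB capacity=%dKB%n",
                    pool.getName(), pool.getCount(),
                    pool.getMemoryUsed() / 1024, pool.getTotalCapacity() / 1024);
        }
    }
}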

I did not consider trying Mission Control or jvisualvm. Isn't Mission Control 
for embedded Java? And AFAIK, jvisualvm is for profiling Java memory usage and 
underneath uses tools like jmap, jstat and jcmd. Through GC logs and jmap 
heap-dumps I can confidently say there's no memory leak on the Java side. The 
NMT data shown comes from jcmd. No type grows beyond control and full GC always 
returns to the same baseline for the heap. Anyway, the Java heap is only 360Mb 
and this memory-block created by jni_invoke_static has grown to 700Mb by 
itself. And I see no out-of-memory messages. The only hint of this happening is 
that the RES memory of the Tomcat process keeps growing over time, as shown by 
'top'. And it seems GC is getting slower over time, but the customers haven't 
noticed it yet. (This is after we switched to ParallelGC. We did see 
considerable slow-down in reference processing when using G1GC, but we couldn't 
figure out why; it would slow to a crawl before the memory leak became obvious.)
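
For reference, this is roughly the jcmd workflow involved (the PID is a 
placeholder; 'detail' tracking has to be enabled at JVM startup and adds a 
little overhead):

-XX:NativeMemoryTracking=detail           (JVM flag at startup)
jcmd <pid> VM.native_memory baseline      (record a baseline shortly after start)
jcmd <pid> VM.native_memory detail.diff   (diff against the baseline after running under load)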

Anyway, I was mostly fishing for hints or tips that could help me figure this 
out or avoid it.

The application is simple to the point I'm hard-pressed to think of any other 
part making JNI calls. The only library I can think of using JNI is 
BouncyCastle doing the SSL encryption/decryption, so maybe I'll switch my focus 
there.

Thanks for taking the time to think along.

Mark
  
On 4/4/20, 5:50 AM, "calder"  wrote:

On Fri, Apr 3, 2020 at 8:48 PM Mark Boon  wrote:
>
    > For the past few months we’ve been trying to trace what looks like
    > gradual memory creep. After some long-running experiments it seems due to
    > memory leaking when
    > jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, JNICallType,
    > _jmethodID*, JNI_ArgumentPusher*, Thread*) is invoked. Somewhere.
    >
    > My environment is Tomcat running a proxy webapp. It does TLS termination,
    > authentication and then forwards the call to local services. It doesn’t do
    > much else, it’s a relatively small application.
    >
    > Some (possibly relevant) versions and config parameters:
    > Tomcat 8.5
    > Java 8u241 (Oracle)
    > Heap size = 360Mb
    > MAX_ALLOC_ARENA=2
    > MALLOC_TRIM_THRESHOLD_=250048
    > jdk.nio.maxCachedBufferSize=25600
    >
    > We couldn’t find any proof of memory leaking on the Java side.
    > When we turn on NativeMemoryTracking=detail and we take a snapshot
    > shortly after starting, we see (just one block shown):
    >
    > [0x03530e462f9a] JNIHandleBlock::allocate_block(Thread*)+0xaa
    > [0x03530e3f759a] JavaCallWrapper::JavaCallWrapper(methodHandle,
    > Handle, JavaValue*, Thread*)+0x6a
    > [0x03530e3fa000] JavaCalls::call_helper(JavaValue*, methodHandle*,
    > JavaCallArguments*, Thread*)+0x8f0
    > [0x03530e4454a1] jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*,
    > JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) [clone .isra.96]
    > [clone .constprop.117]+0x1e1
    >  (malloc=33783KB type=Internal #110876)
    >
    > Then we run it under heavy load for a few weeks and take another snapshot:
    >
    > [0x03530e462f9a] JNIHandleBlock::allocate_block(Thread*)+0xaa
    > [0x03530e3f759a] JavaCallWrapper::JavaCallWrapper(methodHandle,
    > Handle, JavaValue*, Thread*)+0x6a
    > [0x03530e3fa000] JavaCalls::call_helper(JavaValue*, methodHandle*,
    > JavaCallArguments*, Thread*)+0x8f0
    > [0x03530e4454a1] jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*,
    > JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) [clone .isra.96]
    > [clone .constprop.117]+0x1e1
    >  (malloc=726749KB type=Internal #2385226)
    >
    > While other blocks also show some variation, none show growth like this
    > one. When I do some math on the number (726749KB - 33783KB) / (2385226 –
    > 110876) it comes down to a pretty even 312 bytes per allocation.
    > And we leaked just under 700Mb. While not immediately problematic, this
    > does not bode well for our customers who run this service for months.
    >
    > I’d like to avoid telling them they need to restart this service every
    > two weeks to reclaim memory. Has anyone seen something like this? Any way
    > it could be avoided?

I'm a bit confused. Your stated title is "JNI Memory Leak?"
Tomcat, to my intimate knowledge, does not use JNI (correct me if I'm rwong)
( quick check
 user@stimpy:~/Desktop/tomcat-source/apache-tomcat-8.5.53-src> find .
-name *.c -ls
 user

Re: JNI memory leak?

2020-04-04 Thread Thomas Meyer
On 4 April 2020 14:53:17 CEST, calder wrote:
>On Fri, Apr 3, 2020 at 8:48 PM Mark Boon 
>wrote:
>>
>> For the past few months we’ve been trying to trace what looks like
>> gradual memory creep. After some long-running experiments it seems due
>> to memory leaking when
>> jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, JNICallType,
>> _jmethodID*, JNI_ArgumentPusher*, Thread*) is invoked. Somewhere.
>>
>> My environment is Tomcat running a proxy webapp. It does TLS
>> termination, authentication and then forwards the call to local
>> services. It doesn’t do much else, it’s a relatively small application.
>>
>> Some (possibly relevant) versions and config parameters:
>> Tomcat 8.5
>> Java 8u241 (Oracle)
>> Heap size = 360Mb
>> MAX_ALLOC_ARENA=2
>> MALLOC_TRIM_THRESHOLD_=250048
>> jdk.nio.maxCachedBufferSize=25600
>>
>> We couldn’t find any proof of memory leaking on the Java side.
>> When we turn on NativeMemoryTracking=detail and we take a snapshot
>> shortly after starting, we see (just one block shown):
>>
>> [0x03530e462f9a] JNIHandleBlock::allocate_block(Thread*)+0xaa
>> [0x03530e3f759a] JavaCallWrapper::JavaCallWrapper(methodHandle,
>> Handle, JavaValue*, Thread*)+0x6a
>> [0x03530e3fa000] JavaCalls::call_helper(JavaValue*,
>> methodHandle*, JavaCallArguments*, Thread*)+0x8f0
>> [0x03530e4454a1] jni_invoke_static(JNIEnv_*, JavaValue*,
>> _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*)
>> [clone .isra.96] [clone .constprop.117]+0x1e1
>>  (malloc=33783KB type=Internal #110876)
>>
>> Then we run it under heavy load for a few weeks and take another
>> snapshot:
>>
>> [0x03530e462f9a] JNIHandleBlock::allocate_block(Thread*)+0xaa
>> [0x03530e3f759a] JavaCallWrapper::JavaCallWrapper(methodHandle,
>> Handle, JavaValue*, Thread*)+0x6a
>> [0x03530e3fa000] JavaCalls::call_helper(JavaValue*,
>> methodHandle*, JavaCallArguments*, Thread*)+0x8f0
>> [0x03530e4454a1] jni_invoke_static(JNIEnv_*, JavaValue*,
>> _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*)
>> [clone .isra.96] [clone .constprop.117]+0x1e1
>>  (malloc=726749KB type=Internal #2385226)
>>
>> While other blocks also show some variation, none show growth like
>> this one. When I do some math on the number (726749KB - 33783KB) /
>> (2385226 – 110876) it comes down to a pretty even 312 bytes per
>> allocation.
>> And we leaked just under 700Mb. While not immediately problematic,
>> this does not bode well for our customers who run this service for
>> months.
>>
>> I’d like to avoid telling them they need to restart this service
>> every two weeks to reclaim memory. Has anyone seen something like this?
>> Any way it could be avoided?
>
>I'm a bit confused. Your stated title is "JNI Memory Leak?"
>Tomcat, to my intimate knowledge, does not use JNI (correct me if I'm
>rwong)
>( quick check
> user@stimpy:~/Desktop/tomcat-source/apache-tomcat-8.5.53-src> find .
>-name *.c -ls
> user@stimpy:~/Desktop/tomcat-source/apache-tomcat-8.5.53-src> find .
>-name *.cpp -ls
> user@stimpy:~/Desktop/tomcat-source/apache-tomcat-8.5.53-src> find .
>-name *.asm -ls
> user@stimpy:~/Desktop/tomcat-source/apache-tomcat-8.5.53-src> find .
>-name *.pas -ls
>}
>
>a) for the "snapshots" provided, there is NO reference to their
>association, ie, "what" code are those related to?
>b) could you run Mission Control or jvisualvm to locate a stack trace
>for this?
>
>We have two apps that use JNI and run via Tomcat (and another app
>server) - one is "so old" that it is limited to 32-bit . the one
>memory leak we have encountered was related to the "native side" (for
>us, the native-compiled Pascal side of things (we also use Assembly
>code) via Java's JNI code).
>
>So, ultimately, I'm confused why we think Tomcat is "to blame" as
>there is no evidence it uses JNI.
>It's my experience JNI memory issues are related to the Java JNI or
>proprietary native code.
>

Hi,

I think JNI is used via APR in Tomcat.

Do you use the APR HTTP connector?
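
For example, the protocol is visible on the Connector element in server.xml 
(attribute values below are just illustrative):

<!-- pure Java NIO, no JNI involved -->
<Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol" />

<!-- APR/native connector, uses JNI via libtcnative -->
<Connector port="8080" protocol="org.apache.coyote.http11.Http11AprProtocol" />

Note that protocol="HTTP/1.1" auto-selects the APR connector if the native 
library is installed.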
-- 
This message was sent from my Android device with K-9 Mail.




Re: JNI memory leak?

2020-04-04 Thread calder
On Fri, Apr 3, 2020 at 8:48 PM Mark Boon  wrote:
>
> For the past few months we’ve been trying to trace what looks like gradual 
> memory creep. After some long-running experiments it seems due to memory 
> leaking when
> jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, JNICallType, _jmethodID*, 
> JNI_ArgumentPusher*, Thread*) is invoked. Somewhere.
>
> My environment is Tomcat running a proxy webapp. It does TLS termination,  
> authentication and then forwards the call to local services. It doesn’t do 
> much else, it’s a relatively small application.
>
> Some (possibly relevant) versions and config parameters:
> Tomcat 8.5
> Java 8u241 (Oracle)
> Heap size = 360Mb
> MAX_ALLOC_ARENA=2
> MALLOC_TRIM_THRESHOLD_=250048
> jdk.nio.maxCachedBufferSize=25600
>
> We couldn’t find any proof of memory leaking on the Java side.
> When we turn on NativeMemoryTracking=detail and we take a snapshot shortly 
> after starting, we see (just one block shown):
>
> [0x03530e462f9a] JNIHandleBlock::allocate_block(Thread*)+0xaa
> [0x03530e3f759a] JavaCallWrapper::JavaCallWrapper(methodHandle, Handle, 
> JavaValue*, Thread*)+0x6a
> [0x03530e3fa000] JavaCalls::call_helper(JavaValue*, methodHandle*, 
> JavaCallArguments*, Thread*)+0x8f0
> [0x03530e4454a1] jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, 
> JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) [clone .isra.96] 
> [clone .constprop.117]+0x1e1
>  (malloc=33783KB type=Internal #110876)
>
> Then we run it under heavy load for a few weeks and take another snapshot:
>
> [0x03530e462f9a] JNIHandleBlock::allocate_block(Thread*)+0xaa
> [0x03530e3f759a] JavaCallWrapper::JavaCallWrapper(methodHandle, Handle, 
> JavaValue*, Thread*)+0x6a
> [0x03530e3fa000] JavaCalls::call_helper(JavaValue*, methodHandle*, 
> JavaCallArguments*, Thread*)+0x8f0
> [0x03530e4454a1] jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, 
> JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) [clone .isra.96] 
> [clone .constprop.117]+0x1e1
>  (malloc=726749KB type=Internal #2385226)
>
> While other blocks also show some variation, none show growth like this one. 
> When I do some math on the number (726749KB - 33783KB) / (2385226 – 110876) 
> it comes down to a pretty even 312 bytes per allocation.
> And we leaked just under 700Mb. While not immediately problematic, this does 
> not bode well for our customers who run this service for months.
>
> I’d like to avoid telling them they need to restart this service every two 
> weeks to reclaim memory. Has anyone seen something like this? Any way it 
> could be avoided?

I'm a bit confused. Your stated title is "JNI Memory Leak?"
Tomcat, to my intimate knowledge, does not use JNI (correct me if I'm wrong).
( quick check:
 user@stimpy:~/Desktop/tomcat-source/apache-tomcat-8.5.53-src> find . -name *.c -ls
 user@stimpy:~/Desktop/tomcat-source/apache-tomcat-8.5.53-src> find . -name *.cpp -ls
 user@stimpy:~/Desktop/tomcat-source/apache-tomcat-8.5.53-src> find . -name *.asm -ls
 user@stimpy:~/Desktop/tomcat-source/apache-tomcat-8.5.53-src> find . -name *.pas -ls
)

a) for the "snapshots" provided, there is NO reference to their
association, ie, "what" code are those related to?
b) could you run Mission Control or jvisualvm to locate a stack trace for this?

We have two apps that use JNI and run via Tomcat (and another app
server) - one is "so old" that it is limited to 32-bit.  The one memory
leak we have encountered was related to the "native side" (for us, the
native-compiled Pascal side of things, reached via Java's JNI code; we
also use Assembly code).

So, ultimately, I'm confused why we think Tomcat is "to blame", as
there is no evidence it uses JNI.
It's my experience that JNI memory issues are related to the Java JNI
layer or to proprietary native code.
