[ https://issues.apache.org/jira/browse/CAMEL-14327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17002430#comment-17002430 ]

Rafał Gała commented on CAMEL-14327:
------------------------------------

Here's the stack trace; I believe the doStop method is called from the onEvict
lambda passed to the LRUCache instance.

Daemon Thread [ForkJoinPool.commonPool-worker-6] (Suspended (breakpoint at line 212 in Jt400PgmProducer))
        owns: Object  (id=386)
        Jt400PgmProducer.doStop() line: 212
        Jt400PgmProducer(ServiceSupport).stop() line: 159
        ServiceHelper.stopService(Object) line: 119
        AsyncProcessorConverterHelper$ProducerToAsyncProducerBridge(AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge).stop() line: 112
        ServicePool<S>.stop(S) line: 214
        ServicePool$MultiplePool.evict(S) line: 324
        ServicePool$MultiplePool.evict(Object) line: 283
        ServicePool<S>.onEvict(S) line: 88
        451064016.accept(Object) line: not available
        CaffeineLRUCache<K,V>.onRemoval(K, V, RemovalCause) line: 239
        SSLMS<K,V>(BoundedLocalCache<K,V>).lambda$notifyRemoval$1(Object, Object, RemovalCause) line: 333
        38917799.run() line: not available
        ForkJoinTask$RunnableExecuteAction.exec() line: 1402
        ForkJoinTask$RunnableExecuteAction(ForkJoinTask<V>).doExec() line: 289
        ForkJoinPool$WorkQueue.runTask(ForkJoinTask<?>) line: 1056
        ForkJoinPool.runWorker(ForkJoinPool$WorkQueue) line: 1692
        ForkJoinWorkerThread.run() line: 157

I don't think increasing the cache size would do any good. From what I can see,
fewer than 10 instances of Jt400PgmProducer are created.
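The failure mode in the stack above can be sketched with plain JDK classes. All names below are hypothetical stand-ins (not Camel's actual classes): an access-ordered LinkedHashMap plays the role of the LRU pool, and its eviction hook stops a producer that another consumer still holds, just as ServicePool's onEvict callback does.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of the race: an LRU pool evicts and stops a producer
// that another thread still references and later calls process() on.
public class EvictionRaceSketch {

    public static class Producer {
        private Object connection = new Object(); // stands in for the iSeries handle

        public void process() {
            // mirrors Jt400PgmProducer.process(): dereferences the connection
            if (connection == null) {
                throw new NullPointerException("connection was released by doStop()");
            }
        }

        public void doStop() {
            connection = null; // mirrors "iSeries = null" in Jt400PgmProducer.doStop()
        }
    }

    public static void main(String[] args) {
        // LRU cache of capacity 1: inserting a second producer evicts the first
        // and immediately stops it, like ServicePool's onEvict callback.
        Map<String, Producer> pool = new LinkedHashMap<String, Producer>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Producer> eldest) {
                if (size() > 1) {
                    eldest.getValue().doStop(); // evict => stop, even if still in use
                    return true;
                }
                return false;
            }
        };

        Producer p1 = new Producer();
        pool.put("consumer-1", p1);
        pool.put("consumer-2", new Producer()); // evicts and stops p1

        try {
            p1.process(); // consumer-1 still holds a reference and calls process()
            System.out.println("no NPE");
        } catch (NullPointerException e) {
            System.out.println("NPE: " + e.getMessage());
        }
    }
}
```

With a cache smaller than the number of live consumers, the first put past capacity stops a producer that is still reachable, so the later process() call fails exactly as the NPE in the real producer does.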


> Jt400PgmProducer doStop() method called on actively used instance
> -----------------------------------------------------------------
>
>                 Key: CAMEL-14327
>                 URL: https://issues.apache.org/jira/browse/CAMEL-14327
>             Project: Camel
>          Issue Type: Bug
>          Components: camel-jt400
>    Affects Versions: 3.0.0
>            Reporter: Rafał Gała
>            Priority: Major
>
> Today I migrated to version 3.0.0 and there seems to be an issue with service
> pooling for the Jt400PgmProducer.
>  
> Here's what I have:
>  
> {code:java}
>     from("seda:someName?concurrentConsumers=2&size=10")
>         .to("jt400://{{as400.user}}:{{as400.password}}@{{as400.host}}/QSYS.LIB/PROGRAM.LIB/KFKEVR.SRVPGM?fieldsLength=200,2000,4,8,8,1000&outputFieldsIdx=0,1,2,3,4,5&connectionPool=#as400ConnectionPool&format=binary&procedureName=RECEIVEEVENT");
> {code}
> When the concurrentConsumers attribute of the seda endpoint is set to 1,
> everything works fine, but when it is greater than 1, it looks like the evict
> method of the MultiplePool class calls the stop method on a Jt400PgmProducer
> instance that is still in use (its process method is still being called). This
> results in nulling the iSeries object inside the Jt400PgmProducer instance:
> {code:java}
>     @Override
>     protected void doStop() throws Exception {
>         if (iSeries != null) {
>             LOG.info("Releasing connection to {}", getISeriesEndpoint());
>             getISeriesEndpoint().releaseSystem(iSeries);
>             iSeries = null;
>         }
>     }
> {code}
> and when the process method is called later on this instance, it fails with an
> NPE while constructing the ServiceProgramCall:
> {code:java}
>     @Override
>     public void process(Exchange exchange) throws Exception {
> ...
>             pgmCall = new ServiceProgramCall(iSeries);
> ...            
> {code}
>  
> I believe this may be related to producer caching in the ServicePool class,
> perhaps some sort of key issue in the cache map?
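The stop-while-in-use race quoted above suggests one possible mitigation, sketched here with hypothetical code (this is not Camel's actual fix, and RefCountedProducer is an invented name): defer releasing the connection until all in-flight process() calls have drained.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hedged sketch of a possible mitigation: a producer that counts in-flight
// process() calls and postpones releasing its connection until they finish.
public class RefCountedProducer {
    private final AtomicInteger inFlight = new AtomicInteger();
    private volatile boolean stopRequested;
    private volatile Object connection = new Object(); // stands in for iSeries

    public void process() {
        inFlight.incrementAndGet();
        try {
            Object conn = connection;
            if (conn == null) {
                throw new IllegalStateException("producer already stopped");
            }
            // ... use conn here; the real producer would do
            // new ServiceProgramCall(iSeries) at this point
        } finally {
            // last in-flight call performs the deferred release
            if (inFlight.decrementAndGet() == 0 && stopRequested) {
                release();
            }
        }
    }

    public void doStop() {
        stopRequested = true;
        if (inFlight.get() == 0) {
            release();
        }
    }

    private synchronized void release() {
        if (connection != null) {
            // the real producer would call getISeriesEndpoint().releaseSystem(iSeries)
            connection = null;
        }
    }
}
```

Under this scheme an eviction-triggered doStop() on a busy producer only marks it for shutdown; the connection is released by whichever call finishes last, so concurrent consumers never observe a nulled connection mid-call.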



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
