[ https://issues.apache.org/jira/browse/FLINK-14525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16960037#comment-16960037 ]

Saqib commented on FLINK-14525:
-------------------------------

Here is the stack trace of the exception:

java.lang.RuntimeException: Buffer pool is destroyed.
	at org.apache.flink.streaming.runtime.io.RecordWriterOutput.pushToRecordWriter(RecordWriterOutput.java:110)
	at org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:89)
	at org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:45)
	at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:718)
	at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:696)
	at org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:51)
	at com.cs.ib.tarsan.cdds.flink.CMSAccountFilter.flatMap(CMSAccountFilter.java:51)
	at com.cs.ib.tarsan.cdds.flink.CMSAccountFilter.flatMap(CMSAccountFilter.java:15)
	at org.apache.flink.streaming.api.operators.StreamFlatMap.processElement(StreamFlatMap.java:50)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:579)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
	at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:718)
	at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:696)
	at org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:51)
	at com.cs.ib.tarsan.cdds.flink.CddsXMLDocumentCreator.flatMap(CddsXMLDocumentCreator.java:50)
	at com.cs.ib.tarsan.cdds.flink.CddsXMLDocumentCreator.flatMap(CddsXMLDocumentCreator.java:22)
	at org.apache.flink.streaming.api.operators.StreamFlatMap.processElement(StreamFlatMap.java:50)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:579)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
	at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:718)
	at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:696)
	at org.apache.flink.streaming.api.operators.StreamFilter.processElement(StreamFilter.java:40)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:579)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
	at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:718)
	at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:696)
	at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
	at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collectWithTimestamp(StreamSourceContexts.java:111)
	at org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecordWithTimestamp(AbstractFetcher.java:398)
	at org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher.emitRecord(Kafka010Fetcher.java:89)
	at org.apache.flink.streaming.connectors.kafka.internal.Kafka09Fetcher.runFetchLoop(Kafka09Fetcher.java:154)
	at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:665)
	at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:94)
	at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:58)
	at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:99)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:300)
	at org.apache.flink.runtime.taskmanager.Task.run(Task.java:704)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: Buffer pool is destroyed.
	at org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestMemorySegment(LocalBufferPool.java:244)
	at org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestBufferBuilderBlocking(LocalBufferPool.java:218)
	at org.apache.flink.runtime.io.network.api.writer.RecordWriter.requestNewBufferBuilder(RecordWriter.java:236)
	at org.apache.flink.runtime.io.network.api.writer.RecordWriter.getBufferBuilder(RecordWriter.java:229)
	at org.apache.flink.runtime.io.network.api.writer.RecordWriter.copyFromSerializerToTargetChannel(RecordWriter.java:149)
	at org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:128)
	at org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:101)
	at org.apache.flink.streaming.runtime.io.StreamRecordWriter.emit(StreamRecordWriter.java:81)
	at org.apache.flink.streaming.runtime.io.RecordWriterOutput.pushToRecordWriter(RecordWriterOutput.java:107)

(A WARN line from our own logging was interleaved with the trace in the log file:
2019-10-24 16:37:55.734 [Source: Custom Source -> Filter -> Flat Map -> Flat Map (414)] WARN c.c.i.t.cdds.flink.CMSAccountFilter - GPID=30428415 ...Exception= Buffer pool is destroyed.)



 

 

As mentioned, we are not deploying this on a cluster; for now it is a simple Java
app using the in-memory stream environment. The exception is thrown on the
collect call inside the flatMap function; code snippet below:

 

 

public void flatMap(Document doc, Collector<Tuple3<String, String, Document>> out) throws Exception {
    String gpid = "UNKNOWN";
    try {
        gpid = CddsXPathHelper.getGlobalPartyId(doc);
        if (CddsXPathHelper.isTarsanCMSAccountNew(doc)) {
            String ppid = CddsXPathHelper.getPortfolioId(doc);
            Tuple3<String, String, Document> tp = new Tuple3<String, String, Document>();
            tp.f0 = gpid;
            tp.f1 = ppid;
            tp.f2 = doc;
            out.collect(tp);
        } else {
            LOGGER.info("GPID=" + gpid + " Not Valid Tarsan Account For SourceSystem="
                    + CddsXPathHelper.validSourceSystems + " ...ignoring");
        }
    } catch (Exception e) {
        LOGGER.warn("GPID=" + gpid + " ...Exception= " + e.getMessage());
        e.printStackTrace();
    }
}
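One observation on the snippet above: the catch block logs the RuntimeException and continues, so the source keeps pushing records after the network buffer pool has already been torn down, which repeats the "Buffer pool is destroyed" message and can hide whatever failed first. Below is a minimal, self-contained sketch of the alternative (plain Java, no Flink dependency; `FlatMapSketch` and the tiny `Collector` interface are stand-ins I made up to mirror Flink's `org.apache.flink.util.Collector`, not our real classes):

```java
import java.util.ArrayList;
import java.util.List;

public class FlatMapSketch {
    // Stand-in for org.apache.flink.util.Collector
    interface Collector<T> {
        void collect(T record);
    }

    // Sketch: log infrastructure failures for diagnostics, but rethrow them
    // so the runtime can fail/cancel the task instead of looping on a dead
    // buffer pool. Only truly per-record problems should be swallowed.
    static void flatMap(String doc, Collector<String> out) {
        try {
            out.collect(doc.toUpperCase());
        } catch (RuntimeException e) {
            System.err.println("GPID=UNKNOWN ...Exception= " + e.getMessage());
            throw e; // propagate instead of swallowing
        }
    }

    public static void main(String[] args) {
        // Normal path: the record is collected.
        List<String> sink = new ArrayList<>();
        flatMap("doc-1", sink::add);
        System.out.println(sink);

        // Failure path: the collector throws, and flatMap lets it propagate.
        Collector<String> destroyed = r -> {
            throw new IllegalStateException("Buffer pool is destroyed.");
        };
        try {
            flatMap("doc-2", destroyed);
        } catch (IllegalStateException e) {
            System.out.println("propagated: " + e.getMessage());
        }
    }
}
```

With the rethrow in place, the first failure surfaces as the task's actual failure cause rather than as an endless stream of WARN lines.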

 

 

> buffer pool is destroyed
> ------------------------
>
>                 Key: FLINK-14525
>                 URL: https://issues.apache.org/jira/browse/FLINK-14525
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Network
>    Affects Versions: 1.7.2
>            Reporter: Saqib
>            Priority: Blocker
>
> Have a Flink app running in standalone mode. The app runs OK in our non-prod
> environments. However, on our prod server it throws this exception:
> Buffer pool is destroyed.
>  
> This error is thrown as a RuntimeException on the collect call in the
> flatMap function. The flatMap is just collecting a Tuple of Strings and a Document,
> where the Document is an XML Document object.
>  
> As mentioned, this is not happening in our non-prod environments (and we have
> multiple: DEV, QA, UAT). The UAT box is specced exactly like our prod host,
> with 4 CPUs. The Java version is the same too.
>  
> Not sure how to proceed.
>  
> Thanks
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
