coki230 opened a new pull request #3048: "One trace segment has been abandoned, cause by buffer is full" bug fix.
URL: https://github.com/apache/skywalking/pull/3048
 
 
   Please answer these questions before submitting pull request
   
   - Why submit this pull request?
   - [x] Bug fix
   - [ ] New feature provided
   - [ ] Improve performance
   
   - Related issues
   
   ___
   ### Bug fix
   - Bug description.
   
   I lost my agent's trace segment data, but my server was otherwise fine: it could still receive JVM metrics. When I debugged it, I found that the "batchPersistence" method, which bulk updates or saves the data, was never being called.
   The bug is caused by the interaction between the auto flush and the "batchPersistence" method: Elasticsearch needs to acquire the same lock both in the flush and in internalAdd (which is called by batchPersistence). So I removed the auto flush, and the results so far are normal. Below is a brief excerpt of my thread dump:
   
   "DataCarrier.RECORD_PERSISTENT.BulkConsumePool.0.Thread" #24 daemon prio=5 os_prio=0 tid=0x000000002585a800 nid=0xc1c waiting for monitor entry [0x000000002715f000]
   java.lang.Thread.State: BLOCKED (on object monitor)
   at org.elasticsearch.action.bulk.BulkProcessor.internalAdd(BulkProcessor.java:286)
   - waiting to lock <0x00000006c2c52eb0> (a org.elasticsearch.action.bulk.BulkProcessor)
   at org.elasticsearch.action.bulk.BulkProcessor.add(BulkProcessor.java:271)
   at org.elasticsearch.action.bulk.BulkProcessor.add(BulkProcessor.java:267)
   at org.elasticsearch.action.bulk.BulkProcessor.add(BulkProcessor.java:253)
   at org.apache.skywalking.oap.server.storage.plugin.elasticsearch.base.BatchProcessEsDAO.lambda$batchPersistence$0(BatchProcessEsDAO.java:64)
   at org.apache.skywalking.oap.server.storage.plugin.elasticsearch.base.BatchProcessEsDAO$$Lambda$285/523676665.accept(Unknown Source)
   at java.lang.Iterable.forEach(Iterable.java:75)
   at org.apache.skywalking.oap.server.storage.plugin.elasticsearch.base.BatchProcessEsDAO.batchPersistence(BatchProcessEsDAO.java:62)
   at org.apache.skywalking.oap.server.core.analysis.worker.PersistenceWorker.onWork(PersistenceWorker.java:51)
   at org.apache.skywalking.oap.server.core.analysis.worker.RecordPersistentWorker$PersistentConsumer.consume(RecordPersistentWorker.java:103)
   at org.apache.skywalking.apm.commons.datacarrier.consumer.MultipleChannelsConsumer.consume(MultipleChannelsConsumer.java:80)
   at org.apache.skywalking.apm.commons.datacarrier.consumer.MultipleChannelsConsumer.run(MultipleChannelsConsumer.java:49)
   
   "elasticsearch[scheduler][T#1]" #72 daemon prio=5 os_prio=0 tid=0x00000000202c7000 nid=0x850 waiting for monitor entry [0x000000002a85e000]
   java.lang.Thread.State: BLOCKED (on object monitor)
   at org.elasticsearch.action.bulk.BulkProcessor$Flush.run(BulkProcessor.java:367)
   - waiting to lock <0x00000006c2c52eb0> (a org.elasticsearch.action.bulk.BulkProcessor)
   at org.elasticsearch.threadpool.Scheduler$ReschedulingRunnable.doRun(Scheduler.java:182)
   at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
   at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
   at java.util.concurrent.FutureTask.run(FutureTask.java)
   at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
   at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   at java.lang.Thread.run(Thread.java:748)
   
   "pool-14-thread-1" #68 prio=5 os_prio=0 tid=0x00000000202c4000 nid=0xfa8 waiting on condition [0x0000000029b4e000]
   java.lang.Thread.State: WAITING (parking)
   at sun.misc.Unsafe.park(Native Method)
   - parking to wait for <0x00000006c2c53160> (a java.util.concurrent.Semaphore$NonfairSync)
   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
   at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
   at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
   at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
   at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
   at org.elasticsearch.action.bulk.BulkRequestHandler.execute(BulkRequestHandler.java:60)
   at org.elasticsearch.action.bulk.BulkProcessor.execute(BulkProcessor.java:339)
   at org.elasticsearch.action.bulk.BulkProcessor.flush(BulkProcessor.java:358)
   - locked <0x00000006c2c52eb0> (a org.elasticsearch.action.bulk.BulkProcessor)
   at org.apache.skywalking.oap.server.storage.plugin.elasticsearch.base.BatchProcessEsDAO.batchPersistence(BatchProcessEsDAO.java:72)
   at org.apache.skywalking.oap.server.core.storage.PersistenceTimer.extractDataAndSave(PersistenceTimer.java:113)
   at org.apache.skywalking.oap.server.core.storage.PersistenceTimer.lambda$start$0(PersistenceTimer.java:65)
   at org.apache.skywalking.oap.server.core.storage.PersistenceTimer$$Lambda$201/296552796.run(Unknown Source)
   at org.apache.skywalking.apm.util.RunnableWithExceptionProtection.run(RunnableWithExceptionProtection.java:36)
   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
   at java.util.concurrent.FutureTask.runAndReset$$$capture(FutureTask.java:308)
   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java)
   at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
   at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   at java.lang.Thread.run(Thread.java:748)
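   The contention in the dumps can be reproduced with a minimal sketch (hypothetical class and method names, not SkyWalking or Elasticsearch code): one thread parks on a semaphore while still holding the object's monitor, exactly like `BulkProcessor.flush` waiting inside `BulkRequestHandler.execute`, so a second thread entering a synchronized method of the same object blocks on the monitor, like `internalAdd`:

```java
import java.util.concurrent.Semaphore;

// Minimal sketch of the lock interaction described above: flush() holds this
// object's monitor while blocking on a semaphore that is never released (as if
// an in-flight bulk request never completes), so any thread entering the
// synchronized internalAdd() blocks on the same monitor.
public class BulkDeadlockSketch {
    // Zero permits: simulates a bulk request whose completion never arrives.
    private final Semaphore inflight = new Semaphore(0);

    public synchronized void flush() throws InterruptedException {
        // Parks while holding 'this' monitor, like BulkProcessor.flush ->
        // BulkRequestHandler.execute -> Semaphore.acquire in the dump.
        inflight.acquire();
    }

    public synchronized void internalAdd() {
        // Blocks on 'this' monitor whenever flush() is parked above.
    }

    public static void main(String[] args) throws Exception {
        BulkDeadlockSketch p = new BulkDeadlockSketch();
        Thread flusher = new Thread(() -> {
            try { p.flush(); } catch (InterruptedException ignored) { }
        });
        flusher.start();
        Thread.sleep(200); // let flusher take the monitor and park on the semaphore

        Thread adder = new Thread(p::internalAdd);
        adder.start();
        Thread.sleep(200);
        // adder is BLOCKED on the monitor, matching the first thread dump
        System.out.println("adder state: " + adder.getState());

        flusher.interrupt(); // release the monitor so the demo can terminate
        adder.join();
        flusher.join();
    }
}
```

   Running this and taking a thread dump shows the same pair of states as above: the flusher in WAITING (parking) on a `Semaphore$NonfairSync` while holding the monitor, and the adder BLOCKED waiting for it.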
   
   - How to fix?
   Just disable the auto flush; the existing manual flush already commits the data.
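   As a configuration sketch of the idea (hedged: the parameter values and the `client`/`listener` variables are placeholders, not SkyWalking's actual settings; `setFlushInterval` is the Elasticsearch `BulkProcessor.Builder` option that schedules the auto-flush task), the builder simply leaves the flush interval unset, so no scheduled `Flush` task competes for the processor's monitor, and the periodic persistence path calls `flush()` explicitly:

```java
// Hypothetical builder configuration (Elasticsearch 6.x client API);
// 'client' and 'listener' are placeholders.
BulkProcessor bulkProcessor = BulkProcessor.builder(client::bulkAsync, listener)
        .setBulkActions(2000)       // size-based trigger stays in place
        .setConcurrentRequests(2)
        // .setFlushInterval(...)   // removed: no scheduled auto flush
        .build();

// The periodic persistence path commits data explicitly instead:
bulkProcessor.flush();
```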
   ___
   ### New feature or improvement
   - Describe the details and related test reports.
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
