coki230 commented on issue #3048: "One trace segment has been abandoned, cause 
by buffer is full" bug fix.
URL: https://github.com/apache/skywalking/pull/3048#issuecomment-511256529
 
 
   > > @peng-yongsheng There is a manual flush in the code, in the class 
   > > "BatchProcessEsDAO" at line 72, so in my opinion it should not be 
   > > necessary to set a flush interval.
   > > The lock will remain locked until the OAP server is restarted, because the 
   > > auto flush first acquires the BulkProcessor's add lock and then waits on 
   > > another lock that is never released.
   > 
   > @coki230 I guess the real reason is that your Elasticsearch servers are hung 
   > up by the automatic data deletion. This was a design flaw before 6.2.0, and it 
   > has been fixed. When the Elasticsearch servers are very busy deleting history 
   > data, there are no idle resources left to process the data from the bulk 
   > processors, so the OAP servers begin to scramble for bulk resources, and the 
   > OAP servers that fail to obtain resources end up blocked on the lock.
   > 
   > **Suggest upgrading to 6.2.0.**
   > For Elasticsearch server settings that improve write performance, take a look 
   > at this document: 
   > [ES-Server-FAQ](https://github.com/apache/skywalking/blob/master/docs/en/FAQ/ES-Server-FAQ.md)
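The lock ordering described in the quoted comment can be sketched as follows. This is a minimal illustration, not SkyWalking or Elasticsearch client code: `processorLock` stands in for the BulkProcessor's internal add/flush lock, and `neverReleasedLock` for the second lock the auto-flush thread waits on forever.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class FlushDeadlockSketch {
    // Hypothetical stand-ins for the two locks described above.
    static final ReentrantLock processorLock = new ReentrantLock();
    static final ReentrantLock neverReleasedLock = new ReentrantLock();

    // Returns whether a "manual flush" can acquire the processor lock
    // within one second while the "auto flush" thread is stuck holding it.
    public static boolean manualFlushCanLock() throws InterruptedException {
        neverReleasedLock.lock(); // simulate the lock that stays held forever

        Thread autoFlush = new Thread(() -> {
            processorLock.lock();         // step 1: auto flush takes the processor lock
            try {
                neverReleasedLock.lock(); // step 2: blocks forever waiting here
                neverReleasedLock.unlock();
            } finally {
                processorLock.unlock();   // never reached
            }
        });
        autoFlush.setDaemon(true); // let the JVM exit despite the stuck thread
        autoFlush.start();
        Thread.sleep(200); // give the auto-flush thread time to take the lock

        // The "manual flush" now cannot get the processor lock:
        return processorLock.tryLock(1, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("manual flush got lock: " + manualFlushCanLock());
        // prints: manual flush got lock: false
    }
}
```

Once the auto-flush thread is parked at step 2, every caller that needs the processor lock waits indefinitely, which matches the "locked until the OAP server is restarted" symptom.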
   
   The symptoms do look as if the Elasticsearch servers are hung up, but I have 
   updated SkyWalking to version 6.2.0 and I use the new delete-index feature to 
   delete data, and I can't get any error info in my OAP server when it can't 
   submit bulk requests. It might be that my ES cluster has poor performance, so 
   ES hangs during the auto flush or while deleting data. 
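For reference, a flush interval can be configured on the Elasticsearch `BulkProcessor` builder so that buffered actions are pushed out even when the manual flush path is blocked. This is only a configuration sketch against the Elasticsearch 6.x high-level REST client API; `client` and `listener` are placeholders, not names from the SkyWalking code.

```java
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;
import org.elasticsearch.common.unit.TimeValue;

// Sketch only: `client` is an assumed RestHighLevelClient and `listener`
// an assumed BulkProcessor.Listener; neither is defined here.
BulkProcessor bulkProcessor = BulkProcessor.builder(
        (request, bulkListener) ->
            client.bulkAsync(request, RequestOptions.DEFAULT, bulkListener),
        listener)
    .setBulkActions(2000)                                // flush after 2000 actions...
    .setBulkSize(new ByteSizeValue(20, ByteSizeUnit.MB)) // ...or 20 MB of data...
    .setFlushInterval(TimeValue.timeValueSeconds(10))    // ...or every 10 seconds
    .setConcurrentRequests(2)                            // allow 2 in-flight bulk requests
    .build();
```

Whether a flush interval helps here depends on whether the hang is in the flush path itself, as the quoted comment argues, or in an overloaded ES cluster, as the reply suggests.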
   
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
