[GitHub] [activemq-artemis] franz1981 commented on issue #2845: ARTEMIS-2336 Use zero copy to replicate journal/page/large message file (AGAIN)

2019-10-13 Thread GitBox
franz1981 commented on issue #2845: ARTEMIS-2336 Use zero copy to replicate 
journal/page/large message file (AGAIN)
URL: https://github.com/apache/activemq-artemis/pull/2845#issuecomment-541495785
 
 
   @wy96f sadly the rate limiter algorithm in Guava does not behave well on fast-paced 
networks, especially with small token sizes, despite what its documentation claims 
:)
   
   AFAIK Netty considers a FileRegion sent to the outbound buffer to be 0-sized, 
and the chunked-file based one just "unknown"; but the point is that the latter 
will be accumulated and won't back-pressure the channel, because the chunks are 
sent through a background accumulator (the additional outbound handler in the 
pipeline), i.e. the solution of providing a custom estimator makes sense IMHO, but I 
need my computer at hand to try it.
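   A minimal sketch of the custom estimator idea, assuming Netty 4.1's MessageSizeEstimator API (the class name below is purely illustrative, not something in Artemis or Netty):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufHolder;
import io.netty.channel.FileRegion;
import io.netty.channel.MessageSizeEstimator;
import io.netty.handler.stream.ChunkedInput;

// Illustrative estimator that charges FileRegion/ChunkedInput bytes against the
// outbound buffer, so the write-buffer water marks can back-pressure file sends.
public final class FileAwareSizeEstimator implements MessageSizeEstimator {

   public static final FileAwareSizeEstimator INSTANCE = new FileAwareSizeEstimator();

   private final Handle handle = msg -> {
      if (msg instanceof ByteBuf) {
         return ((ByteBuf) msg).readableBytes();
      }
      if (msg instanceof ByteBufHolder) {
         return ((ByteBufHolder) msg).content().readableBytes();
      }
      if (msg instanceof FileRegion) {
         // DefaultMessageSizeEstimator returns 0 here, which disables back-pressure
         return (int) Math.min(Integer.MAX_VALUE, ((FileRegion) msg).count());
      }
      if (msg instanceof ChunkedInput) {
         long length = ((ChunkedInput<?>) msg).length();
         return length < 0 ? 0 : (int) Math.min(Integer.MAX_VALUE, length); // length() is -1 when unknown
      }
      return 0;
   };

   @Override
   public Handle newHandle() {
      return handle;
   }
}
```

   It could then be installed with something like `bootstrap.option(ChannelOption.MESSAGE_SIZE_ESTIMATOR, FileAwareSizeEstimator.INSTANCE)`.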
   
   My point on the existing sync done is that it is not actually correct, because 
it assumes that the previous files have been sent before that sync done is sent, 
which is not guaranteed in any case, because Netty is asynchronous... the chunked 
file case just makes it more evident.
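   As an example of what I mean (just a sketch, not the Artemis code), the ordering could be made explicit by chaining the sync done send on the ChannelFuture of the last file write; `lastChunk` and `syncDoneMessage` here are placeholders:

```java
import io.netty.channel.Channel;
import io.netty.channel.ChannelFuture;

// Illustrative only: send the "sync done" marker once the last file chunk's
// write has actually completed, instead of assuming ordering.
final class SyncDoneOrdering {

   static void sendSyncDoneAfterLastChunk(Channel channel, Object lastChunk, Object syncDoneMessage) {
      ChannelFuture lastWrite = channel.writeAndFlush(lastChunk);
      lastWrite.addListener(f -> {
         if (f.isSuccess()) {
            channel.writeAndFlush(syncDoneMessage);
         } else {
            // the last chunk was not written, so the sync done must not be sent
            channel.close();
         }
      });
   }
}
```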
   My 2 cents :)
   




[GitHub] [activemq-artemis] wy96f commented on issue #2845: ARTEMIS-2336 Use zero copy to replicate journal/page/large message file (AGAIN)

2019-10-13 Thread GitBox
wy96f commented on issue #2845: ARTEMIS-2336 Use zero copy to replicate 
journal/page/large message file (AGAIN)
URL: https://github.com/apache/activemq-artemis/pull/2845#issuecomment-541494236
 
 
   @franz1981
   
   > fix 
https://github.com/netty/netty/blob/ff7a9fa091a8bf2e10020f83fc4df1c44098/transport/src/main/java/io/netty/channel/DefaultMessageSizeEstimator.java#L45
 in order to account for chunked files correctly and flow control them while 
being enqueued into chunk writer
   
   Not sure. I think Netty is just limiting the data held in memory, so it does not 
account for chunked files and file regions?
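   For context, this is roughly the mechanism under discussion (a sketch under my reading of it, not Artemis code): Netty's outbound buffer compares the estimated size of pending writes against the write-buffer water marks and flips `channel.isWritable()` accordingly, so a 0-byte estimate for FileRegion never trips the high water mark:

```java
import io.netty.channel.Channel;
import io.netty.channel.ChannelOption;
import io.netty.channel.WriteBufferWaterMark;

// Illustrative only: Netty flow control is driven by the estimated size of
// queued writes vs. the write-buffer water marks.
final class WritabilityExample {

   static void configure(Channel channel) {
      // low/high marks in bytes, values here are purely illustrative
      channel.config().setOption(ChannelOption.WRITE_BUFFER_WATER_MARK,
                                 new WriteBufferWaterMark(32 * 1024, 64 * 1024));
   }

   static void writeWithBackPressure(Channel channel, Object msg) {
      if (channel.isWritable()) {
         channel.writeAndFlush(msg);
      } else {
         // the sender should pause here and resume when
         // channelWritabilityChanged(...) fires on the pipeline
      }
   }
}
```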
   
   > change how sync done timeout is being calculated and consider the total 
time to replicate (from the first file region/chunk sent) instead of the time 
from when the sync done is being enqueued in Netty
   
   We block all journal-related operations during the sync done interval to make 
sure the replica has received all the data, so we'd better not change it.
   
   Maybe we can:
   
   * We use the original implementation when the broker doesn't support zero copy. 
IMO the chunked file approach is more or less the same as the original one. In some 
respects the original version might even be better, because it reads file data in a 
non-Netty thread, which can be beneficial when the Netty threads are busy taking 
care of newly sent packets.
   
   * Another possible alternative is to use a rate limiter, like Guava's, instead of 
Netty flow control to work around the problem? (a rough sketch of the idea follows below)
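   A minimal sketch of that alternative, assuming Guava's RateLimiter with bytes as permits (and with the caveat @franz1981 mentioned about its behaviour on fast networks with small token sizes); the class and method names here are illustrative only:

```java
import com.google.common.util.concurrent.RateLimiter;
import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;

import java.io.IOException;
import java.io.RandomAccessFile;

// Illustrative only: throttle how fast file chunks are handed to the channel,
// independently of Netty's outbound buffer accounting.
final class ThrottledFileSender {

   static void sendFileThrottled(Channel channel, RandomAccessFile file,
                                 int chunkSize, long bytesPerSecond) throws IOException {
      RateLimiter limiter = RateLimiter.create(bytesPerSecond);
      byte[] chunk = new byte[chunkSize];
      int read;
      while ((read = file.read(chunk)) != -1) {
         // pay for the bytes before sending them; acquire() blocks,
         // so this must not run on a Netty event-loop thread
         limiter.acquire(read);
         channel.writeAndFlush(Unpooled.copiedBuffer(chunk, 0, read));
      }
   }
}
```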
   
   

