[ 
https://issues.apache.org/jira/browse/RATIS-1176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17238163#comment-17238163
 ] 

runzhiwang edited comment on RATIS-1176 at 11/24/20, 11:43 PM:
---------------------------------------------------------------

bq. I suspect it is better than transferTo(..) since it is using 
transferToArbitraryChannel(..).

I agree. I think Ratis streaming is faster than transferToArbitraryChannel, but 
maybe slower than transferToTrustedChannel and transferToDirectly. When the 
primary/peer receives data in Ratis streaming, it needs a DirectByteBuffer in 
Netty to hold the data, as the following image shows. So the primary/peer has to 
read data along the flow socket -> kernel space -> user space; when the primary 
sends data to a peer, the flow is user space -> kernel space -> socket; and when 
the primary/peer saves data to disk, the flow is user space -> kernel space -> 
disk. So I think Ratis streaming is slower than transferToTrustedChannel, which 
does not need a copy between user space and kernel space, but I am not sure.

 !image-2020-11-25-07-40-50-383.png! 
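
To make those copy flows concrete, here is a minimal, self-contained sketch (a 
java.nio Pipe stands in for the socket, and the class and file names are 
invented for illustration, not Ratis code). Each read(..) into the direct 
buffer is a kernel-space -> user-space copy, and each write(..) to the file is 
a user-space -> kernel-space copy:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.Pipe;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ReceivePathSketch {
  // Read everything from the "socket" into a direct buffer and spill it to
  // disk, returning the bytes that ended up in the file.
  static byte[] receive(Pipe.SourceChannel socket, Path out) throws IOException {
    ByteBuffer direct = ByteBuffer.allocateDirect(4096); // like Netty's DirectByteBuffer
    try (FileChannel file = FileChannel.open(out, StandardOpenOption.WRITE)) {
      while (socket.read(direct) >= 0) {  // socket -> kernel space -> user space
        direct.flip();
        file.write(direct);               // user space -> kernel space -> disk
        direct.clear();
      }
    }
    return Files.readAllBytes(out);
  }

  public static void main(String[] args) throws IOException {
    Pipe pipe = Pipe.open();
    pipe.sink().write(ByteBuffer.wrap("hello streaming".getBytes()));
    pipe.sink().close();
    Path out = Files.createTempFile("ratis-sketch", ".bin");
    System.out.println(new String(receive(pipe.source(), out)));
  }
}
```

transferToTrustedChannel/transferToDirectly avoid exactly the two boundary 
crossings marked in the loop above, which is why they should be faster.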

bq. We may pass MappedByteBuffer to our writeAsync(..) method

I agree.
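
A sketch of how that could look (the writeAsync call is the proposed 
DataStreamOutput API and is shown only as a comment; everything else is 
standard NIO, with invented names):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappedWriteSketch {
  // Map the whole file read-only; no explicit read into user space is needed,
  // the file's pages are faulted in on demand.
  static MappedByteBuffer map(Path file) throws IOException {
    try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
      return ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
    }
  }

  public static void main(String[] args) throws IOException {
    Path file = Files.createTempFile("ratis-map", ".bin");
    Files.write(file, "mapped data".getBytes());
    MappedByteBuffer buf = map(file);
    // dataStreamOutput.writeAsync(buf);  // proposed DataStreamOutput API
    byte[] bytes = new byte[buf.remaining()];
    buf.get(bytes);
    System.out.println(new String(bytes));
  }
}
```

Note the mapping stays valid after the channel is closed, so the buffer can be 
handed off asynchronously.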



> Benchmark various ways to stream data
> -------------------------------------
>
>                 Key: RATIS-1176
>                 URL: https://issues.apache.org/jira/browse/RATIS-1176
>             Project: Ratis
>          Issue Type: Sub-task
>          Components: client, Streaming
>            Reporter: Tsz-wo Sze
>            Priority: Major
>         Attachments: image-2020-11-25-07-40-50-383.png
>
>
> In RATIS-1175, we provided a WritableByteChannel view of DataStreamOutput in 
> order to support FileChannel.transferTo.  However, [~runzhiwang] pointed out 
> that sun.nio.ch.FileChannelImpl.transferTo has three submethods
> - transferToDirectly (fastest)
> - transferToTrustedChannel
> - transferToArbitraryChannel (slowest, requires buffer copying)
> Unfortunately, our current implementation is only able to use 
> transferToArbitraryChannel.
> There are several ideas below to improve the performance.  We should 
> benchmark them.
> # Improve the current implementation of WritableByteChannel so that it may be 
> able to use a faster transferTo method.
> # Use 
> [FileChannel.map(..)|https://docs.oracle.com/javase/8/docs/api/java/nio/channels/FileChannel.html#map-java.nio.channels.FileChannel.MapMode-long-long-]
>  and pass MappedByteBuffer to our DataStreamOutput.writeAsync method.
> # Add a new API
> {code}
> //DataStreamOutput
>  CompletableFuture<DataStreamReply> writeAsync(File);
> {code}
> Internally, use Netty DefaultFileRegion for zero-copy file transfer:
> https://github.com/netty/netty/blob/4.1/example/src/main/java/io/netty/example/file/FileServerHandler.java#L53



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
