[ https://issues.apache.org/jira/browse/KUDU-1555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15418145#comment-15418145 ]

Todd Lipcon commented on KUDU-1555:
-----------------------------------

Ah, I can buy that. The benchmarks I ran for 
http://kudu.apache.org/2016/04/26/ycsb.html were on 4/20/16, and e6052ac was 
committed on 4/21. In the blog post, I noted that the change I made allowed 
parallel writes to all disks, but because of this issue, I'm no longer seeing 
that behavior.

Let me try flipping this back in a local build and see how it affects 
performance.

> LogBlockManager FlushDataAsync() synchronously flushes metadata
> ---------------------------------------------------------------
>
>                 Key: KUDU-1555
>                 URL: https://issues.apache.org/jira/browse/KUDU-1555
>             Project: Kudu
>          Issue Type: Bug
>          Components: fs, perf
>    Affects Versions: 0.10.0
>            Reporter: Todd Lipcon
>
> I'm looking at the time spent by a flush in a YCSB workload, and most of the 
> time is spent in stacks like:
> #0  0x0000003a1acdfd9a in sync_file_range () from /lib64/libc.so.6
> #1  0x00000000018a83e4 in ?? ()
> #2  0x00000000018ee741 in kudu::pb_util::WritablePBContainerFile::Flush() ()
> #3  0x000000000187ae56 in 
> kudu::fs::internal::LogWritableBlock::FlushDataAsync() ()
> #4  0x00000000017b177e in 
> kudu::cfile::CFileWriter::FinishAndReleaseBlock(kudu::fs::ScopedWritableBlockCloser*)
>  ()
> #5  0x0000000000905206 in 
> kudu::tablet::MultiColumnWriter::FinishAndReleaseBlocks(kudu::fs::ScopedWritableBlockCloser*)
>  ()
> The purpose of doing the FlushDataAsync() is so that the disks work in 
> parallel, but currently it appears that it ends up blocked in 
> synchronous _metadata_ flushes on each disk in turn. I think this should 
> probably perform an _async_ metadata flush, since we later rely on a 
> SyncMetadata() call to ensure it is complete.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)