[ https://issues.apache.org/jira/browse/HDFS-347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565655#comment-13565655 ]
Brandon Li commented on HDFS-347:
---------------------------------

I did some tests comparing read performance with and without the Unix domain socket enabled. The result is not what I expected.

1. Apply the latest patch to trunk and start a one-datanode cluster on a Linux box (Linux 2.6.18-238, 15 GB memory).
2. Copy a 1 GB local file to HDFS.
3. Read it back 4 times (copyToLocal) without Unix domain socket support enabled. The results:

|| | read 1 | read 2 | read 3 | read 4 ||
||real | 0m6.655s | 0m7.080s | 0m6.680s | 0m6.680s ||
||user | 0m4.340s | 0m4.533s | 0m4.459s | 0m4.563s ||
||sys | 0m3.544s | 0m3.525s | 0m3.484s | 0m3.438s ||

4. Add the following to hdfs-site.xml:

<property>
  <name>dfs.domain.socket.path</name>
  <value>/grid/0/test/unixsocket</value>
</property>
<property>
  <name>dfs.client.domain.socket.data.traffic</name>
  <value>true</value>
</property>

5. Restart the cluster, format the namenode, copy the 1 GB file, and read it back 4 times. The new results:

|| | read 1 | read 2 | read 3 | read 4 ||
||real | 0m8.296s | 0m7.811s | 0m7.960s | 0m7.803s ||
||user | 0m5.129s | 0m5.172s | 0m5.197s | 0m5.018s ||
||sys | 0m3.643s | 0m3.572s | 0m3.701s | 0m3.910s ||

> DFS read performance suboptimal when client co-located on nodes with data
> -------------------------------------------------------------------------
>
>                 Key: HDFS-347
>                 URL: https://issues.apache.org/jira/browse/HDFS-347
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode, hdfs-client, performance
>            Reporter: George Porter
>            Assignee: Colin Patrick McCabe
>         Attachments: 2013.01.28.design.pdf, all.tsv, BlockReaderLocal1.txt, full.patch, HADOOP-4801.1.patch, HADOOP-4801.2.patch, HADOOP-4801.3.patch, HDFS-347-016_cleaned.patch, HDFS-347.016.patch, HDFS-347.017.clean.patch, HDFS-347.017.patch, HDFS-347.018.clean.patch, HDFS-347.018.patch2, HDFS-347.019.patch, HDFS-347.020.patch, HDFS-347.021.patch, HDFS-347.022.patch, HDFS-347.024.patch, HDFS-347.025.patch, HDFS-347.026.patch, HDFS-347.027.patch, HDFS-347.029.patch, HDFS-347.030.patch, HDFS-347.033.patch, HDFS-347.035.patch, HDFS-347-branch-20-append.txt, hdfs-347-merge.txt, hdfs-347-merge.txt, hdfs-347-merge.txt, hdfs-347.png, hdfs-347.txt, local-reads-doc
>
> One of the major strategies Hadoop uses to get scalable data processing is to move the code to the data. However, putting the DFS client on the same physical node as the data blocks it acts on doesn't improve read performance as much as expected.
>
> After looking at Hadoop and O/S traces (via HADOOP-4049), I think the problem is due to the HDFS streaming protocol causing many more read I/O operations (iops) than necessary. Consider the case of a DFSClient fetching a 64 MB disk block from the DataNode process (running in a separate JVM) on the same machine. The DataNode satisfies the single disk-block request by sending data back to the HDFS client in 64 KB chunks. In BlockSender.java this is done in the sendChunk() method, which relies on Java's transferTo() method. Depending on the host O/S and JVM implementation, transferTo() is implemented as either a sendfilev() syscall or a pair of mmap() and write() calls. In either case, each chunk is read from the disk with a separate I/O operation. The result is that the single request for a 64 MB block ends up hitting the disk as over a thousand smaller 64 KB requests.
> Since the DFSClient runs in a different JVM and process than the DataNode, shuttling data from the disk to the DFSClient also results in context switches each time network packets get sent (in this case, each 64 KB chunk turns into a large number of 1500-byte packet send operations). Thus we see a large number of context switches for each block send operation.
>
> I'd like to get some feedback on the best way to address this, but I think the answer is to provide a mechanism for a DFSClient to directly open data blocks that happen to be on the same machine. It could do this by examining the set of LocatedBlocks returned by the NameNode and marking those that should be resident on the local host. Since the DataNode and DFSClient (probably) share the same Hadoop configuration, the DFSClient should be able to find the files holding the block data, open them directly, and serve the data to the caller itself. This would avoid the context switches imposed by the network layer and would allow for much larger read buffers than 64 KB, which should reduce the number of iops imposed by each block read operation.
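For reference, a minimal sketch of how the read-back timing in the comment above could be approximated through the FileSystem API. The two property names and the socket path are taken from the hdfs-site.xml snippet in the comment; the class name, the file paths, and setting the properties programmatically are illustrative assumptions, and plain wall-clock timing will not reproduce the real/user/sys breakdown shown in the tables.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReadBackTiming {
  public static void main(String[] args) throws Exception {
    // Picks up core-site.xml/hdfs-site.xml from the classpath.
    Configuration conf = new Configuration();
    // Same two properties the comment adds to hdfs-site.xml. Setting them here
    // affects only this client; the datanode still reads its own hdfs-site.xml
    // to decide where to create the domain socket.
    conf.set("dfs.domain.socket.path", "/grid/0/test/unixsocket");
    conf.setBoolean("dfs.client.domain.socket.data.traffic", true);

    FileSystem fs = FileSystem.get(conf);
    Path src = new Path("/benchmark/1GB.dat");          // hypothetical HDFS path

    for (int i = 1; i <= 4; i++) {
      Path dst = new Path("file:///tmp/1GB.copy." + i); // hypothetical local path
      long start = System.nanoTime();
      fs.copyToLocalFile(src, dst);                     // roughly what copyToLocal does
      System.out.println("read " + i + ": "
          + (System.nanoTime() - start) / 1_000_000 + " ms wall clock");
    }
    fs.close();
  }
}
{code}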
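To make the chunking in the quoted description concrete, below is a minimal sketch of a transferTo()-based send loop of the kind BlockSender.sendChunk() is described as using. It is not the actual BlockSender code: it ignores HDFS packet framing and checksums, and the class name, chunk constant, and use of stdout as the destination channel are assumptions.

{code:java}
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.channels.Channels;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;

public class ChunkedSendSketch {
  private static final int CHUNK_SIZE = 64 * 1024;  // 64 KB chunks, as described above

  // Streams a block file to the destination one 64 KB chunk at a time.
  // Each transferTo() call turns into its own sendfilev() or mmap()+write()
  // underneath, so a 64 MB block becomes ~1,000 separate I/O requests.
  static void sendBlock(FileChannel blockFile, WritableByteChannel out) throws IOException {
    long pos = 0;
    long remaining = blockFile.size();
    while (remaining > 0) {
      long sent = blockFile.transferTo(pos, Math.min(CHUNK_SIZE, remaining), out);
      if (sent <= 0) {
        break;  // destination closed or nothing transferred; stop instead of spinning
      }
      pos += sent;
      remaining -= sent;
    }
  }

  public static void main(String[] args) throws IOException {
    try (FileInputStream in = new FileInputStream(args[0]);
         WritableByteChannel sink = Channels.newChannel(System.out)) {
      sendBlock(in.getChannel(), sink);
    }
  }
}
{code}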
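By contrast, a minimal sketch of the direct local read proposed at the end of the description: once the client has matched a LocatedBlock against the local host and resolved it to a block file on disk, it can open that file itself and read it with a much larger buffer than 64 KB. The 4 MB buffer, the class name, and passing the block path in as an argument are illustrative assumptions; the actual patches (e.g. BlockReaderLocal) presumably also have to handle checksums and how the block file path is discovered.

{code:java}
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

public class LocalBlockReadSketch {
  private static final int BUFFER_SIZE = 4 * 1024 * 1024;  // e.g. 4 MB instead of 64 KB

  // Reads a block file known to be on the local disk, bypassing the
  // DataNode's streaming protocol (and its per-chunk I/O and context
  // switches) entirely.
  static long readLocalBlock(File blockFile) throws IOException {
    byte[] buf = new byte[BUFFER_SIZE];
    long total = 0;
    try (FileInputStream in = new FileInputStream(blockFile)) {
      int n;
      while ((n = in.read(buf)) > 0) {
        total += n;  // a real reader would hand the bytes to the caller
      }
    }
    return total;
  }

  public static void main(String[] args) throws IOException {
    System.out.println("read " + readLocalBlock(new File(args[0])) + " bytes");
  }
}
{code}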