[ https://issues.apache.org/jira/browse/HADOOP-2434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12552138 ]

Paul Saab commented on HADOOP-2434:
-----------------------------------

It turns out HADOOP-1989 is causing this regression.  stack has verified that 
after reverting to before svn r595509, random reads are once again fast.

Here's the stack trace showing what's causing the one-byte reads:
{code}
Thread 36193 ([EMAIL PROTECTED]):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    java.io.FileInputStream.read(Native Method)
    org.apache.hadoop.dfs.FSDatasetInterface$MetaDataInputStream.read(FSDatasetInterface.java:63)
    java.io.InputStream.read(InputStream.java:163)
    java.io.InputStream.skip(InputStream.java:208)
    java.io.BufferedInputStream.skip(BufferedInputStream.java:349)
    java.io.FilterInputStream.skip(FilterInputStream.java:125)
    org.apache.hadoop.io.IOUtils.skipFully(IOUtils.java:115)
    org.apache.hadoop.dfs.DataNode$BlockSender.<init>(DataNode.java:1326)
    org.apache.hadoop.dfs.DataNode$DataXceiver.readBlock(DataNode.java:888)
    org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:840)
    java.lang.Thread.run(Thread.java:619)
{code}

MetaDataInputStream.read() is called over and over, reading 1 byte at a time.
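The trace is consistent with the default java.io.InputStream behavior: if a stream subclass overrides only the single-byte read(), the inherited skip() fills a scratch buffer via read(byte[], int, int), whose default implementation loops over read() one byte per call. A minimal sketch of that degradation (OneByteStream is a hypothetical stand-in, not the actual Hadoop MetaDataInputStream):

```java
import java.io.IOException;
import java.io.InputStream;

// Hypothetical stand-in for MetaDataInputStream: it overrides ONLY the
// single-byte read(), which is the shape the stack trace above implies.
class OneByteStream extends InputStream {
    long reads = 0;          // counts single-byte read() calls
    private long remaining;

    OneByteStream(long size) { remaining = size; }

    @Override
    public int read() {
        reads++;
        if (remaining <= 0) return -1;
        remaining--;
        return 0;
    }
}

public class SkipDemo {
    public static void main(String[] args) throws IOException {
        OneByteStream in = new OneByteStream(8192);
        // Inherited InputStream.skip(): fills a scratch buffer with
        // read(byte[], off, len), whose default implementation loops
        // over the single-byte read() once per byte.
        long skipped = in.skip(4096);
        System.out.println("skipped=" + skipped + " reads=" + in.reads);
        // prints: skipped=4096 reads=4096 -- one read() call per byte skipped
    }
}
```

Overriding skip() (or read(byte[], int, int)) in the stream would avoid the per-byte calls into the native FileInputStream.read seen at the top of the trace.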

> MapFile.get on HDFS in TRUNK is WAY!!! slower than 0.15.x
> ---------------------------------------------------------
>
>                 Key: HADOOP-2434
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2434
>             Project: Hadoop
>          Issue Type: Bug
>          Components: io
>            Reporter: stack
>         Attachments: TestMapFile.java, TestMapFile.java
>
>
> The stall happens down in SequenceFile in the first call to getPos inside 
> readRecordLength.  I tried the johano patch from HADOOP-2172 that restores 
> the positional cache, but that didn't seem to be the issue here.
> Here is data to support my assertion.
> I wrote a little program to make a MapFile of 1M records.  I then did 1M 
> random reads from the same file.  Below are timings from a 0.15.0 run and a 
> TRUNK run as of this afternoon.
> 0.15.x branch:
> {code}
> [EMAIL PROTECTED] branch-0.15]$ ./bin/hadoop org.apache.hadoop.io.TestMapFile
> .07/12/15 01:29:02 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 07/12/15 01:29:02 INFO io.TestMapFile: Writing 1000000 rows to 
> testMapFileRandomAccess
> 07/12/15 01:32:04 INFO io.TestMapFile: Writing 1000000 records took 182009ms
> 07/12/15 01:32:04 INFO io.TestMapFile: Reading 1000000 random rows
> 07/12/15 01:48:02 INFO io.TestMapFile: Reading 1000000 random records took 
> 958243ms
> Time: 1,140.652
> OK (1 test)
> {code}
> For the below test using TRUNK r604352, I amended the test so it output a log 
> message every 100k reads:
> {code}
> [EMAIL PROTECTED] hadoop-trunk]$ ./bin/hadoop org.apache.hadoop.io.TestMapFile
> .07/12/15 01:56:34 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 07/12/15 01:56:34 INFO io.TestMapFile: Writing 1000000 rows to 
> testMapFileRandomAccess
> 07/12/15 01:59:38 INFO io.TestMapFile: Writing 1000000 records took 183986ms
> 07/12/15 01:59:38 INFO io.TestMapFile: Reading 1000000 random rows
> .........
> {code}
> After 20 minutes it still hadn't printed the first 'read 100k' log message 
> (I had to leave -- will fill in final figures later).

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
