[
https://issues.apache.org/jira/browse/HADOOP-2434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12552130
]
Paul Saab commented on HADOOP-2434:
-----------------------------------
Tracing the DataXceiveServer shows that the reads are occurring one byte at a time.
{code}
8787 recvfrom(42,
"\0\7Q\\\360\342\324\252<\nN\0\0\0\0\0008\350\263\0\0\0\0\0\34\225V", 4096, 0,
NULL, NULL) = 27
8787
stat("/disk3/nutch/data/filesystem/data/current/subdir40/blk_6697102048916933198",
{st_mode=S_IFREG|0664, st_size=5602825, ...}) = 0
8787
stat("/disk3/nutch/data/filesystem/data/current/subdir40/blk_6697102048916933198.meta",
{st_mode=S_IFREG|0664, st_size=43783, ...}) = 0
8787
stat("/disk3/nutch/data/filesystem/data/current/subdir40/blk_6697102048916933198",
{st_mode=S_IFREG|0664, st_size=5602825, ...}) = 0
8787
open("/disk3/nutch/data/filesystem/data/current/subdir40/blk_6697102048916933198.meta",
O_RDONLY) = 75
8787 fstat(75, {st_mode=S_IFREG|0664, st_size=43783, ...}) = 0
8787
stat("/disk3/nutch/data/filesystem/data/current/subdir40/blk_6697102048916933198.meta",
{st_mode=S_IFREG|0664, st_size=43783, ...}) = 0
8787 read(75, "\0", 1) = 1
8787 read(75, "\1", 1) = 1
8787 read(75, "\1", 1) = 1
8787 read(75, "\0", 1) = 1
8787 read(75, "\0", 1) = 1
8787 read(75, "\2", 1) = 1
8787 read(75, "\0", 1) = 1
8787 read(75, "\337", 1) = 1
8787 read(75, "\34", 1) = 1
8787 read(75, "\352", 1) = 1
8787 read(75, "a", 1) = 1
... (continues like this for the rest of the file)
{code}
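The one-byte reads above are what an unbuffered stream looks like at the syscall level: every single-byte read on the .meta file goes straight through to read(2). A minimal, hypothetical sketch (not Hadoop's actual DataNode code) of why wrapping the source in a BufferedInputStream collapses those into block-sized reads:

```java
import java.io.*;

// Hypothetical demo: count how many reads actually reach the underlying
// source, comparing unbuffered vs. buffered single-byte consumption.
public class BufferDemo {

    // Wrapper that counts every read() call that hits the wrapped stream.
    static class CountingInputStream extends FilterInputStream {
        int calls = 0;
        CountingInputStream(InputStream in) { super(in); }
        @Override public int read() throws IOException {
            calls++; return super.read();
        }
        @Override public int read(byte[] b, int off, int len) throws IOException {
            calls++; return super.read(b, off, len);
        }
    }

    // Read n bytes one at a time, optionally through a BufferedInputStream,
    // and report how many reads reached the source.
    static int countReads(boolean buffered, int n) throws IOException {
        CountingInputStream src =
            new CountingInputStream(new ByteArrayInputStream(new byte[n]));
        InputStream in = buffered ? new BufferedInputStream(src, n) : src;
        DataInputStream din = new DataInputStream(in);
        for (int i = 0; i < n; i++) din.readByte();
        return src.calls;
    }

    public static void main(String[] args) throws IOException {
        // Unbuffered: one call to the source per byte, like the strace above.
        System.out.println("unbuffered: " + countReads(false, 4096) + " reads"); // 4096
        // Buffered: the source sees a single block-sized read.
        System.out.println("buffered:   " + countReads(true, 4096) + " reads"); // 1
    }
}
```

Against a real file descriptor each counted call is one read(2) syscall, so buffering would turn thousands of syscalls per .meta file into a handful.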
> MapFile.get on HDFS in TRUNK is WAY!!! slower than 0.15.x
> ---------------------------------------------------------
>
> Key: HADOOP-2434
> URL: https://issues.apache.org/jira/browse/HADOOP-2434
> Project: Hadoop
> Issue Type: Bug
> Components: io
> Reporter: stack
> Attachments: TestMapFile.java
>
>
> Stall happens down in SequenceFile in the first call to getPos inside
> readRecordLength. I tried the johano patch from HADOOP-2172 that restores
> the positional cache but that didn't seem to be the issue here.
> Here is data to support my assertion.
> I wrote a little program to make a MapFile of 1M records. I then did 1M
> random reads from the same file. Below are timings from a 0.15.0 run and
> from a TRUNK run as of this afternoon.
> 0.15.x branch:
> {code}
> [EMAIL PROTECTED] branch-0.15]$ ./bin/hadoop org.apache.hadoop.io.TestMapFile
> .07/12/15 01:29:02 WARN util.NativeCodeLoader: Unable to load native-hadoop
> library for your platform... using builtin-java classes where applicable
> 07/12/15 01:29:02 INFO io.TestMapFile: Writing 1000000 rows to
> testMapFileRandomAccess
> 07/12/15 01:32:04 INFO io.TestMapFile: Writing 1000000 records took 182009ms
> 07/12/15 01:32:04 INFO io.TestMapFile: Reading 1000000 random rows
> 07/12/15 01:48:02 INFO io.TestMapFile: Reading 1000000 random records took
> 958243ms
> Time: 1,140.652
> OK (1 test)
> {code}
> For the below test using TRUNK r604352, I amended the test so it output a log
> message every 100k reads:
> {code}
> [EMAIL PROTECTED] hadoop-trunk]$ ./bin/hadoop org.apache.hadoop.io.TestMapFile
> .07/12/15 01:56:34 WARN util.NativeCodeLoader: Unable to load native-hadoop
> library for your platform... using builtin-java classes where applicable
> 07/12/15 01:56:34 INFO io.TestMapFile: Writing 1000000 rows to
> testMapFileRandomAccess
> 07/12/15 01:59:38 INFO io.TestMapFile: Writing 1000000 records took 183986ms
> 07/12/15 01:59:38 INFO io.TestMapFile: Reading 1000000 random rows
> .........
> {code}
> After 20 minutes it still hadn't printed the 'read 100k' message (I had
> to leave -- will fill in final figures later)
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.