[ https://issues.apache.org/jira/browse/HADOOP-3315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12668792#action_12668792 ]

Hong Tang commented on HADOOP-3315:
-----------------------------------

The results were obtained with an early version of Hadoop 0.19. Can you describe 
your environment so I can retry the tests?

The hardcoded parameters were meant to serve as a unit test. 

The following is the tcsh script that I use to run the seek benchmark:

{code}
foreach compress (none lzo)
    foreach block (128 256 512 1024)
        ./runTFileSeekBench -n 1000 -s 10000 -c $compress -b $block ${ROOT} -f TestTFileSeek.${block}K.$compress -x w
    end
end

foreach i (1 2 3 4 5)
    foreach compress (none lzo)
        foreach block (128 256 512 1024)
            ./runTFileSeekBench -n 1000 -s 10000 -c $compress -b $block ${ROOT} -f TestTFileSeek.${block}K.$compress -x r
        end
    end
end
{code}
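For anyone without tcsh, here is a rough POSIX sh equivalent of the loops above. It is a sketch, not part of the patch: it assumes the same runTFileSeekBench wrapper and ROOT directory as the tcsh script, and the echo makes it a dry run (drop the echo to actually invoke the benchmark):

```shell
#!/bin/sh
# Sketch of the tcsh benchmark driver in POSIX sh.
# Assumes ./runTFileSeekBench and $ROOT exist as in the tcsh script above;
# "echo" prints each command instead of running it (remove to run for real).
ROOT=${ROOT:-/tmp/tfile-bench}

# Write phase: create one TFile per (compression, block-size) pair.
for compress in none lzo; do
    for block in 128 256 512 1024; do
        echo ./runTFileSeekBench -n 1000 -s 10000 -c "$compress" -b "$block" \
            "$ROOT" -f "TestTFileSeek.${block}K.$compress" -x w
    done
done

# Read phase: five passes per pair to smooth out run-to-run variance.
for i in 1 2 3 4 5; do
    for compress in none lzo; do
        for block in 128 256 512 1024; do
            echo ./runTFileSeekBench -n 1000 -s 10000 -c "$compress" -b "$block" \
                "$ROOT" -f "TestTFileSeek.${block}K.$compress" -x r
        done
    done
done
```

The loops produce 2 x 4 = 8 write runs and 5 x 2 x 4 = 40 read runs, i.e. 48 invocations in total.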

> New binary file format
> ----------------------
>
>                 Key: HADOOP-3315
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3315
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: io
>            Reporter: Owen O'Malley
>            Assignee: Amir Youssefi
>             Fix For: 0.21.0
>
>         Attachments: HADOOP-3315_20080908_TFILE_PREVIEW_WITH_LZO_TESTS.patch, 
> HADOOP-3315_20080915_TFILE.patch, hadoop-trunk-tfile.patch, 
> hadoop-trunk-tfile.patch, TFile Specification 20081217.pdf
>
>
> SequenceFile's block compression format is too complex and requires 4 codecs 
> to compress or decompress. It would be good to have a file format that only 
> needs 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
