[ https://issues.apache.org/jira/browse/HADOOP-3460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12602924#action_12602924 ]

Chris Douglas commented on HADOOP-3460:
---------------------------------------

Just a few quick changes and I think this is ready to commit:
# The test case doesn't need a main method; you might want to break the 
check that record compression is forbidden into a separate test, and the 
call to JobConf::setInputPath is generating a warning (replace it with 
FileInputFormat::addInputPath; see the sketch after this list)
# WritableValueBytes::writeCompressedBytes no longer throws 
IllegalArgumentException, so it can be removed from the signature
# SeqFABOF::checkOutputSpecs doesn't need to declare 
InvalidJobConfException, since it is already covered by IOException
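
A minimal sketch of the item 1 fix, with a test class name and input path 
made up for illustration (the names in the actual patch may differ):

{code}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.JobConf;

public class TestJobSetup {
  public static JobConf configure() {
    JobConf job = new JobConf();
    // Deprecated call that generates the warning:
    //   job.setInputPath(new Path("testdata"));
    // Preferred, warning-free replacement:
    FileInputFormat.addInputPath(job, new Path("testdata"));
    return job;
  }
}
{code}

Items 2 and 3 are just throws-clause trims; roughly these signature-only 
fragments (bodies elided):

{code}
// Item 2: IllegalArgumentException is unchecked and no longer thrown,
// so the signature only needs IOException.
public void writeCompressedBytes(DataOutputStream outStream)
    throws IOException {
  // body unchanged
}

// Item 3: InvalidJobConfException extends IOException, so declaring
// IOException alone is sufficient.
public void checkOutputSpecs(FileSystem ignored, JobConf job)
    throws IOException {
  // body unchanged
}
{code}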

> SequenceFileAsBinaryOutputFormat
> --------------------------------
>
>                 Key: HADOOP-3460
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3460
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: mapred
>            Reporter: Koji Noguchi
>            Assignee: Koji Noguchi
>            Priority: Minor
>         Attachments: HADOOP-3460-part1.patch, HADOOP-3460-part2.patch
>
>
> Add an OutputFormat to write raw bytes as keys and values to a SequenceFile.
> In C++ Pipes, we're using SequenceFileAsBinaryInputFormat to read 
> SequenceFiles.
> However, we currently don't have a way to *write* a SequenceFile 
> efficiently without going through extra (de)serializations.
> I'd like to store the correct classnames for the keys/values in the file 
> header, but write through BytesWritable, so that downstream Java or Pig 
> code can read this SequenceFile with the right types (a rough usage 
> sketch follows below).
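
A rough sketch of that intended usage from the Java side; the two static 
setters below are assumptions based on the attached patches, so the final 
names may differ:

{code}
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat;

public class BinaryOutputSetup {
  public static JobConf configure() {
    JobConf job = new JobConf();
    job.setOutputFormat(SequenceFileAsBinaryOutputFormat.class);

    // The job itself emits raw bytes for both keys and values...
    job.setOutputKeyClass(BytesWritable.class);
    job.setOutputValueClass(BytesWritable.class);

    // ...while the SequenceFile header records the real classnames, so
    // the next Java or Pig job reads the file with the right types.
    SequenceFileAsBinaryOutputFormat.setSequenceFileOutputKeyClass(job, Text.class);
    SequenceFileAsBinaryOutputFormat.setSequenceFileOutputValueClass(job, IntWritable.class);
    return job;
  }
}
{code}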
