[ 
https://issues.apache.org/jira/browse/HADOOP-3460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Noguchi updated HADOOP-3460:
---------------------------------

    Attachment: HADOOP-3460-part2.patch

Chris, thanks for the review.

bq. * Different properties for the output key/value classes aren't necessary; 
you can use the existing methods, like JobConf::getOutputKeyClass.

The reason I did it this way is that I want to use this OutputFormat with C++ Pipes.

{noformat}
(1) c++-reducer ---> (2) Java-PipesReducer ---> (3) collector ---> (4) SequenceFile(AsBinary)...
{noformat}

At step (2), mapred/pipes/PipesReducer calls job.getOutputKeyClass() and 
job.getOutputValueClass(), but I want those outputs to be BytesWritable rather 
than the key/value classes of the SequenceFile.

How about this: just as the map output key class falls back to the output key 
class by default, we'll fall back to the output key class whenever 
SequenceFileOutputKeyClass is not defined in the config.
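
A rough sketch of what I mean (the property name and method name below are just 
placeholders, not necessarily what the patch uses):

{noformat}
public static Class<?> getSequenceFileOutputKeyClass(JobConf conf) {
  // Fall back to the job's regular output key class when the
  // SequenceFile-specific property hasn't been set.
  return conf.getClass("mapred.seqbinary.output.key.class",  // placeholder name
                       conf.getOutputKeyClass());
}
{noformat}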

bq. * The generic signature on the RecordWriter can be 
<BytesWritable,BytesWritable> if the signature on SeqFileOF were correct:

Done. Modified SequenceFile.java. Added @SuppressWarnings("unchecked") for 
MultipleSequenceFileOutputFormat.getBaseRecordWriter.
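
Just to show the shape of it, the declaration now reads roughly like this (body 
elided, not copied verbatim from the patch):

{noformat}
public class SequenceFileAsBinaryOutputFormat
    extends SequenceFileOutputFormat<BytesWritable, BytesWritable> {

  public RecordWriter<BytesWritable, BytesWritable> getRecordWriter(
      FileSystem ignored, JobConf job, String name, Progressable progress)
      throws IOException {
    // ... builds the SequenceFile.Writer and returns a RecordWriter
    // that appends the raw key/value bytes ...
  }
}
{noformat}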

bq. * Since record compression is not supported, it might be worthwhile to 
override OutputFormat::checkOutputSpecs and throw if it's attempted

Done. Test added.
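
The override is roughly this (a sketch; the exception type and message in the 
actual patch may differ):

{noformat}
public void checkOutputSpecs(FileSystem ignored, JobConf job)
    throws IOException {
  super.checkOutputSpecs(ignored, job);
  // Record compression isn't supported when writing raw bytes,
  // so fail the job up front instead of at write time.
  if (getOutputCompressionType(job) == CompressionType.RECORD) {
    throw new InvalidJobConfException(
        "SequenceFileAsBinaryOutputFormat doesn't support Record Compression");
  }
}
{noformat}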

bq. * This should be in o.a.h.mapred.lib rather than o.a.h.mapred

Yes, except that SequenceFileAsBinaryInputFormat is in o.a.h.mapred.
For now, I'll leave this one in o.a.h.mapred, and we can create a new Jira to 
move both of them to o.a.h.mapred.lib.

bq. * Keeping a WritableValueBytes instance around (and adding a reset method) 
might be useful, so a new one isn't created for each write.

Done. (Not sure if I did it correctly.)
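
What I had in mind is roughly the following (details assumed; see the patch for 
what I actually did):

{noformat}
protected static class WritableValueBytes implements SequenceFile.ValueBytes {
  private BytesWritable value;

  // Reused across writes: reset() swaps in the next value instead of
  // allocating a new wrapper for every record.
  public void reset(BytesWritable value) {
    this.value = value;
  }

  public void writeUncompressedBytes(DataOutputStream outStream)
      throws IOException {
    outStream.write(value.getBytes(), 0, value.getLength());
  }

  public void writeCompressedBytes(DataOutputStream outStream)
      throws IOException {
    // record compression isn't supported for this format
    throw new UnsupportedOperationException(
        "WritableValueBytes doesn't support record compression");
  }

  public int getSize() {
    return value.getLength();
  }
}

// Inside the RecordWriter, one instance is reused across write() calls:
wvaluebytes.reset(value);
out.appendRaw(key.getBytes(), 0, key.getLength(), wvaluebytes);
wvaluebytes.reset(null);  // drop the reference until the next write
{noformat}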

bq. * The IllegalArgumentException in WritableValueBytes should probably be an 
UnsupportedOperationException

Done.


bq. * WritableValueBytes should be a _static_ inner class

Done.

bq. * The indentation on the anonymous RecordWriter::close should be consistent 
with the standards

Done.



> SequenceFileAsBinaryOutputFormat
> --------------------------------
>
>                 Key: HADOOP-3460
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3460
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: mapred
>            Reporter: Koji Noguchi
>            Assignee: Koji Noguchi
>            Priority: Minor
>         Attachments: HADOOP-3460-part1.patch, HADOOP-3460-part2.patch
>
>
> Add an OutputFormat to write raw bytes as keys and values to a SequenceFile.
> In C++ Pipes, we're using SequenceFileAsBinaryInputFormat to read 
> SequenceFiles.
> However, we currently don't have a way to *write* a SequenceFile efficiently 
> without going through extra (de)serializations.
> I'd like to store the correct class names for the keys/values but write them 
> as BytesWritable
> (so that the next Java or Pig code can read this SequenceFile).

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
