Hi John,

You can extend FileInputFormat (or implement InputFormat directly) and then implement the two methods below.

1. InputSplit[] getSplits(JobConf job, int numSplits) : Splits the input
files logically for the job. If the default FileInputFormat.getSplits(JobConf
job, int numSplits) suits your requirement, you can use it as-is; otherwise
override it to split the input according to your record format.

2. RecordReader<K,V> getRecordReader(InputSplit split, JobConf job, Reporter
reporter) : Returns a RecordReader that reads the records of the given input
split.
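Since your input is a binary file of integers and longs, a minimal sketch might look like the following (old mapred API, matching the signatures above). This assumes a hypothetical fixed-width record layout of one 4-byte int key followed by one 8-byte long value; the class and field names are illustrative, not from any existing library.

```java
import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapred.*;

// Hypothetical example: each record is a 4-byte int followed by an 8-byte long.
public class IntLongInputFormat extends FileInputFormat<IntWritable, LongWritable> {

    private static final int RECORD_SIZE = 4 + 8; // int + long

    @Override
    protected boolean isSplitable(FileSystem fs, Path file) {
        // Simplest option: one split per file, so no record straddles a split boundary.
        return false;
    }

    @Override
    public RecordReader<IntWritable, LongWritable> getRecordReader(
            InputSplit split, JobConf job, Reporter reporter) throws IOException {
        return new IntLongRecordReader((FileSplit) split, job);
    }

    static class IntLongRecordReader implements RecordReader<IntWritable, LongWritable> {
        private final FSDataInputStream in;
        private final long start;
        private final long end;
        private long pos;

        IntLongRecordReader(FileSplit split, JobConf job) throws IOException {
            Path file = split.getPath();
            FileSystem fs = file.getFileSystem(job);
            in = fs.open(file);
            start = split.getStart();
            end = start + split.getLength();
            in.seek(start);
            pos = start;
        }

        public boolean next(IntWritable key, LongWritable value) throws IOException {
            if (pos + RECORD_SIZE > end) {
                return false; // no complete record left in this split
            }
            key.set(in.readInt());
            value.set(in.readLong());
            pos += RECORD_SIZE;
            return true;
        }

        public IntWritable createKey() { return new IntWritable(); }
        public LongWritable createValue() { return new LongWritable(); }
        public long getPos() { return pos; }
        public void close() throws IOException { in.close(); }
        public float getProgress() {
            return end == start ? 0.0f : (pos - start) / (float) (end - start);
        }
    }
}
```

Returning false from isSplitable is the easy way out; if your files are large you would instead allow splits and round the split boundaries to a multiple of RECORD_SIZE in getSplits.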


Thanks
Devaraj

________________________________________
From: John Hancock [jhancock1...@gmail.com]
Sent: Thursday, May 17, 2012 3:40 PM
To: common-user@hadoop.apache.org
Subject: custom FileInputFormat class

All,

Can anyone on the list point me in the right direction as to how to write
my own FileInputFormat class?

Perhaps this is not even the way I should go, but my goal is to write a
MapReduce job that gets its input from a binary file of integers and longs.

-John
