Thank you for the hints. I have checked these two Java files and managed to get my own program working.
Regards,
C.J.

On Fri, Aug 7, 2009 at 5:44 PM, stack <[email protected]> wrote:
> Yeah, use TableInputFormat. You tell it the table name and column(s) you
> want to see in your mapper. For how to set the table name, etc., see
> TableMapReduceUtil in the mapred package, and then see RowCounter in the
> same package for an example that uses TableInputFormat, setting the table
> name and columns read from the command line via TableMapReduceUtil.
>
> St.Ack
>
>
> On Fri, Aug 7, 2009 at 7:41 AM, Xine Jar <[email protected]> wrote:
>
> > Hello,
> > I am using Hadoop 0.19.1, and I was previously writing my application
> > with a text file set as the input path for the job.
> >
> > The four main parameters involved were the following:
> >
> > public static class Map extends MapReduceBase implements
> >     Mapper<LongWritable, Text, Text, Text> {
> >   .......
> >   public void map(LongWritable key, Text value,
> >       OutputCollector<Text, Text> output, Reporter reporter)
> >       throws IOException {
> >     ....
> >   }
> > }
> >
> > public static void main(String[] args) throws Exception {
> >   ....
> >   conf.setInputFormat(TextInputFormat.class);
> >   FileInputFormat.setInputPaths(conf, new Path(args[0]));
> >   ....
> > }
> >
> > Question:
> > I would like to write a MapReduce application that reads from an HBase
> > table, sums some values, etc., and writes the result to a text file. I
> > have already created this HBase table (using version 0.19.3). I now need
> > to pass its name as an input to the job. How can I do it?
> >
> > I have read here and there and found that the InputFormat should be set
> > to "TableInputFormat.class", but what about setInputPaths? And the value
> > class in the mapper?
> >
> > Thank you,
> > CJ
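For reference, below is a minimal sketch of the setup the thread describes,
assuming the HBase 0.19 org.apache.hadoop.hbase.mapred API that
TableMapReduceUtil and RowCounter live in. The table name "mytable" and
column "cf:qual" are placeholders, not names from the thread. The key point
is that TableInputFormat replaces both setInputFormat and setInputPaths: the
table name and columns are passed through TableMapReduceUtil.initTableMapJob,
and the mapper's input types become ImmutableBytesWritable (the row key) and
RowResult (the row) instead of LongWritable and Text.

import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.io.Cell;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.io.RowResult;
import org.apache.hadoop.hbase.mapred.TableMap;
import org.apache.hadoop.hbase.mapred.TableMapReduceUtil;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class HBaseTableExample {

  // TableMap fixes the input types for a TableInputFormat job:
  // the key is the row key, the value is the whole row.
  public static class Map extends MapReduceBase
      implements TableMap<Text, Text> {
    public void map(ImmutableBytesWritable row, RowResult value,
        OutputCollector<Text, Text> output, Reporter reporter)
        throws IOException {
      // "cf:qual" is a hypothetical column; substitute your own.
      Cell cell = value.get("cf:qual".getBytes());
      if (cell != null) {
        output.collect(new Text(row.get()), new Text(cell.getValue()));
      }
    }
  }

  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(new HBaseConfiguration(),
        HBaseTableExample.class);
    conf.setJobName("hbase-table-example");

    // Replaces conf.setInputFormat(...) and FileInputFormat.setInputPaths:
    // sets TableInputFormat plus the table name and columns to scan.
    TableMapReduceUtil.initTableMapJob("mytable", "cf:qual",
        Map.class, Text.class, Text.class, conf);

    // The output is still an ordinary text file on HDFS.
    conf.setNumReduceTasks(0);
    FileOutputFormat.setOutputPath(conf, new Path(args[0]));
    JobClient.runJob(conf);
  }
}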
