Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change 
notification.

The following page has been changed by JimKellerman:
http://wiki.apache.org/hadoop/Hbase/MapReduce

The comment on the change is:
fix broken links

------------------------------------------------------------------------------
  
  = Hbase as MapReduce job data source and sink =
  
- Hbase can be used as a data source, [http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/javadoc/org/apache/hadoop/hbase/mapred/TableInputFormat.html TableInputFormat], and data sink, [http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/javadoc/org/apache/hadoop/hbase/mapred/TableOutputFormat.html TableOutputFormat], for mapreduce jobs.  Writing mapreduce jobs that read or write hbase, you'll probably want to subclass [http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/javadoc/org/apache/hadoop/hbase/mapred/TableMap.html TableMap] and/or [http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/javadoc/org/apache/hadoop/hbase/mapred/TableReduce.html TableReduce].  See the do-nothing passthrough classes [http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/javadoc/org/apache/hadoop/hbase/mapred/IdentityTableMap.html IdentityTableMap] and [http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/javadoc/org/apache/hadoop/hbase/mapred/IdentityTableReduce.html IdentityTableReduce] for basic usage.  For a more involved example, see [http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/javadoc/org/apache/hadoop/hbase/mapred/BuildTableIndex.html BuildTableIndex] from the same package or review the org.apache.hadoop.hbase.mapred.TestTableMapReduce unit test.
+ Hbase can be used as a data source, [http://hadoop.apache.org/hbase/docs/current/api/org/apache/hadoop/hbase/mapred/TableInputFormat.html TableInputFormat], and as a data sink, [http://hadoop.apache.org/hbase/docs/current/api/org/apache/hadoop/hbase/mapred/TableOutputFormat.html TableOutputFormat], for mapreduce jobs.  When writing mapreduce jobs that read or write hbase, you'll probably want to subclass [http://hadoop.apache.org/hbase/docs/current/api/org/apache/hadoop/hbase/mapred/TableMap.html TableMap] and/or [http://hadoop.apache.org/hbase/docs/current/api/org/apache/hadoop/hbase/mapred/TableReduce.html TableReduce].  See the do-nothing passthrough classes [http://hadoop.apache.org/hbase/docs/current/api/org/apache/hadoop/hbase/mapred/IdentityTableMap.html IdentityTableMap] and [http://hadoop.apache.org/hbase/docs/current/api/org/apache/hadoop/hbase/mapred/IdentityTableReduce.html IdentityTableReduce] for basic usage.  For a more involved example, see [http://hadoop.apache.org/hbase/docs/current/api/org/apache/hadoop/hbase/mapred/BuildTableIndex.html BuildTableIndex] from the same package, or review the org.apache.hadoop.hbase.mapred.!TestTableMapReduce unit test.
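  
  A subclass's map method receives a row key and the map of that row's column values.  Below is a minimal sketch in the spirit of !IdentityTableMap; the HStoreKey/!MapWritable key and value types follow the 0.1-era org.apache.hadoop.hbase.mapred API linked above and changed in later releases, so treat the signatures as assumptions and check the javadoc for your version.
  
  {{{
  import java.io.IOException;
  
  import org.apache.hadoop.hbase.HStoreKey;
  import org.apache.hadoop.hbase.mapred.TableMap;
  import org.apache.hadoop.io.MapWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapred.OutputCollector;
  import org.apache.hadoop.mapred.Reporter;
  
  // Sketch of a TableMap subclass: re-emits each row unchanged, keyed by
  // its row key, like IdentityTableMap.  A real job would transform or
  // filter the column map here.  Types assume the 0.1-era mapred API.
  public class MyTableMap extends TableMap {
    public void map(HStoreKey key, MapWritable columns,
        OutputCollector<Text, MapWritable> output, Reporter reporter)
        throws IOException {
      output.collect(key.getRow(), columns);
    }
  }
  }}}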
  
  When running mapreduce jobs that use hbase as a source or sink, you'll need to specify the source/sink table and column names in your job configuration.
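  
  The TableMap and TableReduce classes provide static initJob helpers that set the input/output formats and the table and column properties on the !JobConf.  The driver below is a sketch only: source_table, sink_table, and the column list are placeholder names, and the exact initJob signatures varied across early releases, so verify them against the javadoc linked above.
  
  {{{
  import org.apache.hadoop.hbase.mapred.IdentityTableMap;
  import org.apache.hadoop.hbase.mapred.IdentityTableReduce;
  import org.apache.hadoop.hbase.mapred.TableMap;
  import org.apache.hadoop.hbase.mapred.TableReduce;
  import org.apache.hadoop.mapred.JobClient;
  import org.apache.hadoop.mapred.JobConf;
  
  // Sketch of a driver that copies one table to another using the
  // do-nothing passthrough classes.  Table and column names are
  // placeholders; initJob signatures assume the 0.1-era API.
  public class CopyTable {
    public static void main(String[] args) throws Exception {
      JobConf job = new JobConf(CopyTable.class);
      job.setJobName("copy source_table to sink_table");
      // Source: table name plus a space-delimited list of column names.
      TableMap.initJob("source_table", "contents: anchors:",
          IdentityTableMap.class, job);
      // Sink: the table the reduce phase writes into.
      TableReduce.initJob("sink_table", IdentityTableReduce.class, job);
      JobClient.runJob(job);
    }
  }
  }}}
  
  Column names take the family:qualifier form; a bare family name ending in a colon (as above) selects every member of that family.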
  
