Katta is a tool for managing distributed search; by default it uses Lucene as its search engine. Katta can read indexes from HDFS or S3 and deploy them onto the local disks of the Katta nodes.
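For illustration, deploying an HDFS-built index with Katta's command-line client might look like this (a sketch only; the exact commands depend on your Katta version, and the index name, NameNode address, and paths are made-up examples):

```shell
# Point Katta at a Lucene index already sitting in HDFS; Katta copies the
# shards to the local disks of its nodes. Index name and HDFS path are
# hypothetical; the trailing number is the replication level (how many
# nodes serve each shard).
bin/katta addIndex myIndex hdfs://namenode:9000/indexes/myIndex 3

# List the indexes the cluster currently serves, to confirm deployment.
bin/katta listIndexes
```

These commands assume a running Katta cluster and a reachable NameNode, so they are not something you can try without that setup in place.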
The contrib directory of the Hadoop installation contains a tool for building Lucene indexes. Patch SOLR-1395 provides an integration of Solr with Katta, and patch SOLR-1301 provides a simple way of building Solr indexes in Hadoop (as a parallel to the contrib Lucene index build tool).

On Fri, Sep 25, 2009 at 1:13 AM, Chandan Tamrakar <[email protected]> wrote:

> I was doing a little R&D on integrating Hadoop and Lucene. I could create
> Lucene indexes in HDFS using the index contrib provided for Hadoop.
>
> Is this a proper way to create a Lucene index, and how different is it
> from the KATTA project?
>
> Please suggest what would be the better approach to create distributed
> Lucene indexes.
>
> Thanks in advance

--
Pro Hadoop, a book to guide you from beginner to hadoop mastery,
http://www.amazon.com/dp/1430219424?tag=jewlerymall
www.prohadoopbook.com a community for Hadoop Professionals
