Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Lucene-hadoop Wiki" for 
change notification.

The following page has been changed by OwenOMalley:
http://wiki.apache.org/lucene-hadoop/FAQ

------------------------------------------------------------------------------
  
  [http://lucene.apache.org/hadoop/ Hadoop] is a distributed computing platform 
written in Java.  It incorporates features similar to those of the 
[http://en.wikipedia.org/wiki/Google_File_System Google File System] and of 
[http://en.wikipedia.org/wiki/MapReduce MapReduce].  For some details, see 
HadoopMapReduce.
  
+ == 2. What platform does Hadoop run on? ==
+ 
+   1. Java 1.5.x or higher, preferably from Sun
+   2. Linux and Windows are the supported operating systems, but BSD and Mac OS X are also known to work. (Windows requires the installation of [http://www.cygwin.com/ Cygwin].)
+ 
- == 2. How well does Hadoop scale? ==
+ == 3. How well does Hadoop scale? ==
  
  Hadoop has been demonstrated on clusters of up to 600 nodes.  Sort 
performance is 
[http://www.mail-archive.com/hadoop-dev%40lucene.apache.org/msg01777.html good] 
and still improving.
  
- == 3. Do I have to write my application in Java? ==
+ == 4. Do I have to write my application in Java? ==
  
  No.  There are several ways to incorporate non-Java code.  HadoopStreaming 
permits any shell command to be used as a map or reduce function, and Hadoop is 
also developing [http://svn.apache.org/viewvc/lucene/hadoop/trunk/src/c%2B%2B/ 
C and C++ APIs].
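  
  For example, a word-count job can be run through HadoopStreaming with the mapper and reducer written in Python. This is a minimal sketch; the script name and the streaming jar path shown in the comments are illustrative and vary by release.
  
  {{{
#!/usr/bin/env python
# Hypothetical word-count script for HadoopStreaming (illustrative only).
# Run as a mapper with "wordcount.py map" and as a reducer with "wordcount.py reduce".
import sys

def mapper():
    # Emit one "word<TAB>1" pair per word read from stdin.
    for line in sys.stdin:
        for word in line.split():
            sys.stdout.write("%s\t1\n" % word)

def reducer():
    # Streaming delivers mapper output sorted by key, so counts for a word arrive adjacently.
    current, total = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t", 1)
        if word != current:
            if current is not None:
                sys.stdout.write("%s\t%d\n" % (current, total))
            current, total = word, 0
        total += int(count)
    if current is not None:
        sys.stdout.write("%s\t%d\n" % (current, total))

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()

# Submitted with something like (jar name and path are release-dependent):
#   bin/hadoop jar contrib/hadoop-streaming.jar \
#       -input in -output out \
#       -mapper "wordcount.py map" -reducer "wordcount.py reduce" \
#       -file wordcount.py
  }}}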
  
- == 4. How can I help to make Hadoop better? ==
+ == 5. How can I help to make Hadoop better? ==
  
  If you have trouble figuring out how to use Hadoop, then, once you've figured 
something out (perhaps with the help of the 
[http://lucene.apache.org/hadoop/mailing_lists.html mailing lists]), pass that 
knowledge on to others by adding something to this wiki.
  
  If you find something that you wish were done better, and know how to fix it, 
read HowToContribute, and contribute a patch.
  
- == 5. If I add new data-nodes to the cluster will HDFS move the blocks to the 
newly added nodes in order to balance disk space utilization between the nodes? 
==
+ == 6. If I add new data-nodes to the cluster will HDFS move the blocks to the 
newly added nodes in order to balance disk space utilization between the nodes? 
==
  
  No, HDFS will not move blocks to new nodes automatically. However, newly 
created files will likely have their blocks placed on the new nodes.
  
