Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change 
notification.

The following page has been changed by MarkSchnitzius:
http://wiki.apache.org/hadoop/Running_Hadoop_On_Ubuntu_Linux_%28Single-Node_Cluster%29

------------------------------------------------------------------------------
    [EMAIL PROTECTED]:~$
  }}}
  
- The second line will create an RSA key pair with an empty password. 
Generally, using an empty password is not recommended, but in this case it is 
needed to unlock the key without your interaction (you don't want to enter the 
passphrase everytime Hadoop interacts with its nodes).
+ The second line will create an RSA key pair with an empty password. 
Generally, using an empty password is not recommended, but in this case it is 
needed to unlock the key without your interaction (you don't want to enter the 
passphrase every time Hadoop interacts with its nodes).
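The key-generation step described above can be sketched with standard OpenSSH tooling; a temp directory stands in for ~/.ssh so the sketch is safe to run anywhere:

```shell
# Sketch: create an RSA key pair with an empty passphrase (-P "") so the
# key can be used non-interactively. On a real single-node setup the
# target path would be ~/.ssh/id_rsa; a temp dir stands in here.
key_dir=$(mktemp -d)
ssh-keygen -t rsa -P "" -f "$key_dir/id_rsa" -q
ls "$key_dir"   # id_rsa (private key) and id_rsa.pub (public key)
```

The `-P ""` flag is what makes the passphrase empty, so no prompt appears when Hadoop opens SSH connections to its nodes.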
  
  Second, you have to enable SSH access to your local machine with this newly 
created key.
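Enabling access with the new key amounts to appending the public half to authorized_keys; a minimal sketch, using a temp dir and a placeholder key in place of ~/.ssh and your real id_rsa.pub:

```shell
# Sketch of enabling key-based SSH to localhost: append the public key
# to authorized_keys and set the restrictive permissions sshd requires.
# (A real setup uses ~/.ssh; placeholder key and temp dir used here.)
ssh_dir=$(mktemp -d)
echo "ssh-rsa AAAA...example user@host" > "$ssh_dir/id_rsa.pub"
cat "$ssh_dir/id_rsa.pub" >> "$ssh_dir/authorized_keys"
chmod 600 "$ssh_dir/authorized_keys"
```

Using `>>` rather than `>` preserves any keys already authorized on the machine.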
  
@@ -188, +188 @@

    scheme and authority determine the FileSystem implementation.  The
    uri's scheme determines the config property (fs.SCHEME.impl) naming
    the FileSystem implementation class.  The uri's authority is used to
-   determine the host, port, etc. for a filesystem.</description>
+   determine the host, port, etc. for a FileSystem.</description>
  </property>
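The scheme-to-property mapping the description refers to can be illustrated with a small sketch (the URI value is an example, not taken from this page's config):

```shell
# Sketch of how a filesystem URI selects its config property: for a URI
# such as hdfs://localhost:54310/ (example values), the scheme "hdfs"
# names the property fs.hdfs.impl, and the authority part
# (localhost:54310) supplies the host and port.
uri="hdfs://localhost:54310/"
scheme=${uri%%://*}        # strip everything from "://" on -> hdfs
echo "fs.${scheme}.impl"   # -> fs.hdfs.impl
```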
  
  <property>
@@ -321, +321 @@

  
  == Running a MapReduce job ==
  
- We will now run your first HadoopMapReduce job. We will use the WordCount 
example job which reads text files and counts how often words occur. The input 
is text files and the output is text files, each line of which contains a word 
and the count of how often it occured, separated by a tab. More information of 
what happens behind the scenes is available at the WordCount article.
+ You will now run your first HadoopMapReduce job. We will use the WordCount 
example job, which reads text files and counts how often words occur. The input 
is text files and the output is text files, each line of which contains a word 
and the count of how often it occurred, separated by a tab. More information 
about what happens behind the scenes is available at the WordCount article.
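The tab-separated word/count format described above can be previewed with ordinary Unix tools; this is only a sketch of what WordCount computes, not Hadoop itself:

```shell
# Sketch (not Hadoop): produce the same word<TAB>count output that
# WordCount emits, on a tiny inline sample.
printf 'hello world\nhello hadoop\n' \
  | tr -s ' ' '\n' \
  | sort | uniq -c \
  | awk '{printf "%s\t%s\n", $2, $1}'
# -> hadoop  1
#    hello   2
#    world   1   (each line separated by a tab)
```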
  
  === Download example input data ===
  
@@ -368, +368 @@

  
  === Run the MapReduce job ===
  
- Now, we actually run the WordCount examle job.
+ Now, we actually run the WordCount example job.
  
  {{{
    [EMAIL PROTECTED]:/usr/local/hadoop$ bin/hadoop jar 
hadoop-0.14.2-examples.jar wordcount gutenberg gutenberg-output
