Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Pig Wiki" for change 
notification.

The following page has been changed by FlipKromer:
http://wiki.apache.org/pig/GettingStarted

The comment on the change is:
Removed outdated and redundant info, pointing to the right pages instead.

------------------------------------------------------------------------------
  = Getting Started =
  
+ * Follow the instructions in [BuildPig] to install Pig.
- == Requirements ==
- 
-    1. '''Java 1.5.x''' or newer, preferably from Sun. Set JAVA_HOME to the 
root of your Java installation.
-    2. '''Ant''' build tool: [http://ant.apache.org/].
-    3. To run unit tests, you also need '''JUnit''': 
[http://junit.sourceforge.net/].
-    4. To run pig programs, you need access to a '''Hadoop cluster''': 
[http://lucene.apache.org/hadoop/]. It's also possible to run pig in "local" 
mode, with severely limited performance - this mode doesn't require setting up 
a Hadoop cluster.
- 
- == Building Pig ==
- 
-    1. Check out pig code from svn: `svn co 
http://svn.apache.org/repos/asf/incubator/pig/trunk`.
-    2. Build the code from the top directory: `ant`. If the build is 
successful, you should see `pig.jar` created in that directory.
-    3. To validate your `pig.jar` run unit test: `ant test`
- 
  
  
  == Running Pig Programs ==
  
- [Pi] This needs to be changed. Now users can start Pig by simply running ./bin/pig, and all configuration can be set in ./conf/pig.properties.
+ * Then do a super-simple pig task as described in [RunPig].
  
- There are two ways to run pig. The first way is by using `pig.pl` that can be 
found in the scripts directory of your source tree. Using the script would 
require having Perl installed on your machine. You can use it by issuing the 
following command: `pig.pl -cp pig.jar:HADOOPSITEPATH` where HADOOPSITEPATH is 
the directory in which `hadoop-site.xml` file for your Hadoop cluster is 
located. Example:
  
- `pig.pl -cp pig.jar:/hadoop/conf`
+ * Finally, test your mettle against the real-world task described in the [PigTutorial].
  
- The second way to do this is by using java directly:
- 
- `java -cp pig.jar:HADOOPSITEPATH org.apache.pig.Main`
- 
- This starts pig in the default map-reduce mode. You can also start pig in 
"local" mode:
- 
- `java -cp pig.jar org.apache.pig.Main -x local`
- 
- Or
- 
- `java -jar pig.jar -x local`
- 
- Regardless of how you invoke pig, the commands that are specified above will 
take you to an interactive shell called grunt where you can run DFS and pig 
commands. The documentation about grunt will be posted on wiki soon. If you 
want to run Pig in batch mode, you can append your pig script to either of the 
commands above. Example:
- 
- {{{pig.pl -cp pig.jar:/hadoop/conf myscript.pig}}}
- 
- or
- 
- {{{java -cp pig.jar:/hadoop/conf myscript.pig}}}
-  
- 
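For readers skimming the removed section above: the batch-mode commands all boil down to composing a Java classpath from `pig.jar` plus the directory holding `hadoop-site.xml`. A minimal sketch, using the placeholder paths from the removed text (`/hadoop/conf` and `myscript.pig` are assumptions, not fixed values):

```shell
# Directory containing hadoop-site.xml for your cluster (assumed here).
HADOOPSITEPATH=/hadoop/conf
# Classpath as described in the removed examples: pig.jar plus the Hadoop conf dir.
PIG_CLASSPATH="pig.jar:${HADOOPSITEPATH}"
# Print the map-reduce-mode batch invocation this expands to.
echo "java -cp ${PIG_CLASSPATH} org.apache.pig.Main myscript.pig"
```

In "local" mode no Hadoop conf directory is needed, so the classpath reduces to `pig.jar` alone (or `java -jar pig.jar -x local`).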
