Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Pig Wiki" for change notification.

The following page has been changed by CorinneC:

New page:
You can run Pig locally or on a Hadoop cluster. To run Pig locally (in "local" mode), 
no Hadoop cluster is required. To run Pig on Hadoop, you need access to a 
Hadoop cluster.

== Running Pig Programs ==

This section will be updated shortly ...

[Pi] This needs to be changed. Users can now start Pig simply by running 
./bin/pig, and all configuration can be set in ./conf/
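Assuming the newer layout described in the note above, starting Pig from an unpacked distribution might look like the following sketch (the install path is hypothetical, and the exact options depend on your Pig version):

```shell
# Per the note above (hypothetical paths): Pig can be started with the
# bundled launcher script, which picks up configuration from ./conf/.
PIG_HOME=/path/to/pig                  # assumed install location
PIG_CMD="$PIG_HOME/bin/pig -x local"   # "-x local" selects local mode
echo "$PIG_CMD"                        # the command a user would run
```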

There are two ways to run Pig. The first is to use the `` script found in 
the scripts directory of your source tree; using this script requires Perl 
installed on your machine. Invoke it with the following command: 
` -cp pig.jar:HADOOPSITEPATH`, where HADOOPSITEPATH is the directory in 
which the `hadoop-site.xml` file for your Hadoop cluster is located. Example:

` -cp pig.jar:/hadoop/conf`
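The command the script assembles can be sketched as a tiny shell wrapper (the variable names and the echo are illustrative; the real script's internals may differ):

```shell
# Hypothetical sketch: assemble the classpath and the java command that the
# Perl script described above effectively runs (not the real script).
HADOOPSITEPATH=/hadoop/conf      # directory containing hadoop-site.xml
PIG_CMD="java -cp pig.jar:${HADOOPSITEPATH} org.apache.pig.Main"
echo "$PIG_CMD"
```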

The second way is to invoke Java directly:

`java -cp pig.jar:HADOOPSITEPATH org.apache.pig.Main`

This starts Pig in the default map-reduce mode. You can also start Pig in 
"local" mode:

`java -cp pig.jar org.apache.pig.Main -x local`

or:

`java -jar pig.jar -x local`

Regardless of how you invoke Pig, the commands above will take you to an 
interactive shell called grunt, where you can run DFS and Pig commands. 
Documentation for grunt will be posted on the wiki soon. If you want to run 
Pig in batch mode, append your Pig script to either of the commands above. 
Example:

{{{ -cp pig.jar:/hadoop/conf myscript.pig}}}

or:

{{{java -cp pig.jar:/hadoop/conf org.apache.pig.Main myscript.pig}}}
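For illustration, a batch script might contain a couple of Pig Latin statements like the following (the file name, input path, and statements are hypothetical examples, not from this page):

```shell
# Create a hypothetical batch script (contents are illustrative only).
cat > myscript.pig <<'EOF'
A = LOAD 'input.txt' USING PigStorage() AS (line);
STORE A INTO 'output';
EOF
# Appending the script name to the java command runs it in batch mode:
PIG_CMD="java -cp pig.jar:/hadoop/conf org.apache.pig.Main myscript.pig"
echo "$PIG_CMD"
```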
