Dear Wiki user, You have subscribed to a wiki page or wiki category on "Pig Wiki" for change notification.
The following page has been changed by CorinneC: http://wiki.apache.org/pig/RunPig

New page:

You can run Pig locally or on Hadoop. To run Pig locally (in "local" mode), no Hadoop cluster is required. To run Pig on Hadoop, you need access to a Hadoop cluster: [http://lucene.apache.org/hadoop/].

== Running Pig Programs ==

This section will be updated shortly ... [Pi] This needs to be changed: users can now start Pig simply by running ./bin/pig, and configuration can be set in ./conf/pig.properties.

There are two ways to run Pig. The first is to use `pig.pl`, found in the scripts directory of your source tree; this requires Perl to be installed on your machine. Invoke it with:

`pig.pl -cp pig.jar:HADOOPSITEPATH`

where HADOOPSITEPATH is the directory containing the `hadoop-site.xml` file for your Hadoop cluster. Example:

`pig.pl -cp pig.jar:/hadoop/conf`

The second way is to use java directly:

`java -cp pig.jar:HADOOPSITEPATH org.apache.pig.Main`

This starts Pig in the default map-reduce mode. You can also start Pig in "local" mode:

`java -cp pig.jar org.apache.pig.Main -x local`

or

`java -jar pig.jar -x local`

Regardless of how you invoke Pig, the commands above take you to an interactive shell called grunt, where you can run DFS and Pig commands. Documentation for grunt will be posted on the wiki soon.

To run Pig in batch mode, append your Pig script to either of the commands above. Example:

{{{pig.pl -cp pig.jar:/hadoop/conf myscript.pig}}}

or

{{{java -cp pig.jar:/hadoop/conf org.apache.pig.Main myscript.pig}}}
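To make the classpath convention concrete, here is a small sketch of how the `-cp` argument is assembled: `pig.jar` plus the directory holding `hadoop-site.xml`. The `/hadoop/conf` path is the example value from above, not a required location.

```shell
# Sketch: build the classpath Pig needs for map-reduce mode.
# /hadoop/conf is an illustrative path; substitute the directory
# that actually contains your cluster's hadoop-site.xml.
HADOOPSITEPATH=/hadoop/conf
PIG_CLASSPATH="pig.jar:${HADOOPSITEPATH}"
echo "java -cp ${PIG_CLASSPATH} org.apache.pig.Main"
```

Echoing the command first is a cheap way to verify the classpath before launching the JVM.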
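For batch mode, the appended script is an ordinary Pig Latin file. The following is a hypothetical minimal script for illustration only; the input file, field names, and output path are assumptions, not part of the page above.

```shell
# Write a minimal, hypothetical Pig script to run in batch mode.
# Fields and paths are illustrative assumptions.
cat > myscript.pig <<'EOF'
A = LOAD 'input.txt' USING PigStorage('\t') AS (name, count);
B = FILTER A BY count > 0;
STORE B INTO 'output';
EOF

# It could then be run in local mode (needs pig.jar present), e.g.:
#   java -cp pig.jar org.apache.pig.Main -x local myscript.pig
```

The script loads tab-separated records, keeps rows with a positive count, and stores the result; any of the commands above accepts such a file as its final argument.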