Modified: hadoop/pig/trunk/src/docs/src/documentation/content/xdocs/setup.xml
URL: 
http://svn.apache.org/viewvc/hadoop/pig/trunk/src/docs/src/documentation/content/xdocs/setup.xml?rev=901900&r1=901899&r2=901900&view=diff
==============================================================================
--- hadoop/pig/trunk/src/docs/src/documentation/content/xdocs/setup.xml 
(original)
+++ hadoop/pig/trunk/src/docs/src/documentation/content/xdocs/setup.xml Thu Jan 
21 22:35:08 2010
@@ -71,7 +71,8 @@
 <section>
 <title>Grunt Shell</title>
 <p>Use Pig's interactive shell, Grunt, to enter pig commands manually. See the 
<a href="setup.html#Sample+Code">Sample Code</a> for instructions about the 
passwd file used in the example.</p>
-<p>You can also run or execute script files from the Grunt shell. See the RUN 
and EXEC commands in the <a href="piglatin_reference.html">Pig Latin Reference 
Manual</a>. </p>
+<p>You can also run or execute script files from the Grunt shell. 
+See the <a href="piglatin_ref2.html#run">run</a> and <a 
href="piglatin_ref2.html#exec">exec</a> commands. </p>
 <p><strong>Local Mode</strong></p>
 <source>
 $ pig -x local
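The run and exec usage mentioned above can be sketched as follows (a minimal sketch; myscript.pig is a hypothetical script name). Note that run executes the script in the current Grunt context, so aliases it defines remain visible afterward, while exec runs it in a separate context:

```
grunt> run myscript.pig
grunt> exec myscript.pig
```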

Modified: hadoop/pig/trunk/src/docs/src/documentation/content/xdocs/site.xml
URL: 
http://svn.apache.org/viewvc/hadoop/pig/trunk/src/docs/src/documentation/content/xdocs/site.xml?rev=901900&r1=901899&r2=901900&view=diff
==============================================================================
--- hadoop/pig/trunk/src/docs/src/documentation/content/xdocs/site.xml 
(original)
+++ hadoop/pig/trunk/src/docs/src/documentation/content/xdocs/site.xml Thu Jan 
21 22:35:08 2010
@@ -45,8 +45,8 @@
     <tutorial label="Tutorial"                                 
href="tutorial.html" />
     </docs>  
      <docs label="Guides"> 
-    <plusers label="Pig Latin Users "  href="piglatin_users.html" />
-    <plref label="Pig Latin Reference" href="piglatin_reference.html" />
+    <plref1 label="Pig Latin 1"        href="piglatin_ref1.html" />
+    <plref2 label="Pig Latin 2"        href="piglatin_ref2.html" />
     <cookbook label="Cookbook"                 href="cookbook.html" />
     <udf label="UDFs" href="udf.html" />
     </docs>  

Modified: 
hadoop/pig/trunk/src/docs/src/documentation/content/xdocs/zebra_pig.xml
URL: 
http://svn.apache.org/viewvc/hadoop/pig/trunk/src/docs/src/documentation/content/xdocs/zebra_pig.xml?rev=901900&r1=901899&r2=901900&view=diff
==============================================================================
--- hadoop/pig/trunk/src/docs/src/documentation/content/xdocs/zebra_pig.xml 
(original)
+++ hadoop/pig/trunk/src/docs/src/documentation/content/xdocs/zebra_pig.xml Thu 
Jan 21 22:35:08 2010
@@ -29,7 +29,7 @@
    <section>
    <title>Overview</title>
    <p>With Pig you can load and store data in Zebra format. You can also take 
advantage of sorted Zebra tables for map-side groups and merge joins. When 
working with Pig keep in mind that, unlike MapReduce, you do not need to 
declare Zebra schemas. Zebra automatically converts Zebra schemas to Pig 
schemas (and vice versa) for you.</p>
-
+   
  </section>
  <!-- END OVERVIEW-->
  
@@ -54,19 +54,19 @@
  <ol>
  <li>You need to register a Zebra jar file the same way you would do it for 
any other UDF.</li>
  <li>You need to place the jar on your classpath.</li>
- <li>When using Zebra with Pig, Zebra data is self-described and always 
contains a schema. This means that the AS clause is unnecessary as long as 
-  you know what the column names and types are. To determine the column names 
and types, you can run the DESCRIBE statement right after the load:
+  </ol>
+  
+ <p>Zebra data is self-describing, meaning that the name and type information 
is stored with the data; you don't need to provide an AS clause or perform type 
casting unless you actually need to change the data. To check column names and 
types, you can run the DESCRIBE statement right after the load:</p>
  <source>
 A = LOAD 'studenttab' USING org.apache.hadoop.zebra.pig.TableLoader();
 DESCRIBE A;
-a: {name: chararray,age: int,gpa: float}
+A: {name: chararray,age: int,gpa: float}
 </source>
- </li>
- </ol>
    
-<p>You can provide alternative names to the columns with the AS clause. You 
can also provide types as long as the 
- original type can be converted to the new type. <em>In general</em>, Zebra 
supports Pig type compatibilities 
- (see <a 
href="piglatin_reference.html#Arithmetic+Operators+and+More">Arithmetic 
Operators and More</a>).</p>
+<p>You can provide alternative names for the columns with the AS clause. You 
can also provide alternative types as long as the 
+ original type can be converted to the new type. (One exception to this rule 
is maps, since you can't specify a schema for a map. Zebra always creates map 
values as bytearrays, which must be cast to their real types in the script. 
Note that this is no different from how Pig treats maps for any other kind of 
storage.) For more information, see <a 
href="piglatin_ref2.html#Schemas">Schemas</a> and
+<a href="piglatin_ref2.html#Arithmetic+Operators+and+More">Arithmetic 
Operators and More</a>.
+ </p>
  
 <p>You can provide multiple, comma-separated files to the loader:</p>
 <source>
@@ -186,7 +186,8 @@
    <section>
     <title>HDFS File Globs</title>
         <p>Pig supports HDFS file globs 
-    (for more information about globs, see <a 
href="http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/fs/FileSystem.html";>FileSystem</a>
 and GlobStatus).</p>
+    (for more information 
+    see <a 
href="http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/fs/FileSystem.html#globStatus(org.apache.hadoop.fs.Path)">GlobStatus</a>).</p>
     <p>In this example, all Zebra tables in the directory of 
/path/to/PIG/tables will be loaded as a union (table union). </p>
  <source>
 A = LOAD '/path/to/PIG/tables/*' USING 
org.apache.hadoop.zebra.pig.TableLoader('');
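Putting the pieces above together, a minimal sketch of a load that renames the columns with AS (studenttab and the original column names follow the earlier DESCRIBE example; the alternative names are hypothetical):

```
-- load a Zebra table and rename its columns with AS;
-- the stored Zebra types must be convertible to the declared types
A = LOAD 'studenttab' USING org.apache.hadoop.zebra.pig.TableLoader()
    AS (student_name: chararray, student_age: int, student_gpa: float);
DESCRIBE A;
```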

Modified: 
hadoop/pig/trunk/src/docs/src/documentation/content/xdocs/zebra_users.xml
URL: 
http://svn.apache.org/viewvc/hadoop/pig/trunk/src/docs/src/documentation/content/xdocs/zebra_users.xml?rev=901900&r1=901899&r2=901900&view=diff
==============================================================================
--- hadoop/pig/trunk/src/docs/src/documentation/content/xdocs/zebra_users.xml 
(original)
+++ hadoop/pig/trunk/src/docs/src/documentation/content/xdocs/zebra_users.xml 
Thu Jan 21 22:35:08 2010
@@ -155,7 +155,7 @@
 <section>
 <title>MapReduce Jobs</title>
 <p>
-TableInputFormat has static method, requireSortedTable, that allows the caller 
to specify the behavior of a single sorted table or an order-preserving sorted 
table union as described above. The method ensures all tables in a union are 
sorted. For more information, see <a 
href="zebra_reference.html#TableInputFormat">TableInputFormat</a>.
+TableInputFormat has a static method, requireSortedTable, that allows the 
caller to specify the behavior of a single sorted table or an order-preserving 
sorted table union as described above. The method ensures all tables in a union 
are sorted. For more information, see <a 
href="zebra_mapreduce.html#TableInputFormat">TableInputFormat</a>.
 </p>
 
 <p>One simple example: an order-preserving sorted union of A and B, where A 
and B are sorted tables. </p>

