more - uppercase for SQL keywords

Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/commit/e1fef71e
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/tree/e1fef71e
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/diff/e1fef71e

Branch: refs/heads/develop
Commit: e1fef71e3d879add521e2d1040d2e58e6295ee6d
Parents: c40bcad
Author: Lisa Owen <[email protected]>
Authored: Tue Nov 1 14:01:09 2016 -0700
Committer: Lisa Owen <[email protected]>
Committed: Tue Nov 1 14:01:09 2016 -0700

----------------------------------------------------------------------
 bestpractices/managing_data_bestpractices.html.md.erb |  2 +-
 pxf/HDFSFileDataPXF.html.md.erb                       |  2 +-
 pxf/HivePXF.html.md.erb                               | 14 +++++++-------
 pxf/TroubleshootingPXF.html.md.erb                    |  8 ++++----
 query/gporca/query-gporca-enable.html.md.erb          |  6 +++---
 query/gporca/query-gporca-fallback.html.md.erb        |  6 +++---
 query/gporca/query-gporca-features.html.md.erb        |  4 ++--
 query/query-performance.html.md.erb                   |  6 +++---
 query/query-profiling.html.md.erb                     | 14 +++++++-------
 9 files changed, 31 insertions(+), 31 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/e1fef71e/bestpractices/managing_data_bestpractices.html.md.erb
----------------------------------------------------------------------
diff --git a/bestpractices/managing_data_bestpractices.html.md.erb b/bestpractices/managing_data_bestpractices.html.md.erb
index 01f82cd..11d6e02 100644
--- a/bestpractices/managing_data_bestpractices.html.md.erb
+++ b/bestpractices/managing_data_bestpractices.html.md.erb
@@ -14,7 +14,7 @@ To obtain the best performance during data loading, observe the following best p
 -   If the number of partitions in a table is large, the recommended way to load data into the partitioned table is to load the data partition by partition. For example, you can use query such as the following to load data into only one partition:
 
     ```sql
-    insert into target_partitioned_table_part1 select * from source_table where filter
+    INSERT INTO target_partitioned_table_part1 SELECT * FROM source_table WHERE filter
     ```
 
     where *filter* selects only the data in the target partition.
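
    For context, a more concrete version of the pattern above (all table, column, and partition names here are hypothetical, not taken from the commit) might look like:

    ``` sql
    -- Hypothetical example: load one quarter's data directly into its partition.
    -- "sales_1_prt_q1" and "yr_qtr" are illustrative names only.
    INSERT INTO sales_1_prt_q1
    SELECT * FROM source_table
    WHERE yr_qtr = 20161;   -- the filter must select only rows for this partition
    ```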

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/e1fef71e/pxf/HDFSFileDataPXF.html.md.erb
----------------------------------------------------------------------
diff --git a/pxf/HDFSFileDataPXF.html.md.erb b/pxf/HDFSFileDataPXF.html.md.erb
index 4729fe9..2021565 100644
--- a/pxf/HDFSFileDataPXF.html.md.erb
+++ b/pxf/HDFSFileDataPXF.html.md.erb
@@ -444,7 +444,7 @@ To access external HDFS data in a High Availability HDFS cluster, change the `C
 
 ``` sql
 gpadmin=# CREATE EXTERNAL TABLE <table_name> ( <column_name> <data_type> [, ...] | LIKE <other_table> )
-            LOCATION ('pxf://<HA-nameservice>/<path-to-hdfs-file>?PROFILE=HdfsTextSimple|HdfsTextMulti|Avro|SequenceWritable[&<custom-option>=<value>[...]]')
+            LOCATION ('pxf://<HA-nameservice>/<path-to-hdfs-file>?PROFILE=HdfsTextSimple|HdfsTextMulti|Avro[&<custom-option>=<value>[...]]')
          FORMAT '[TEXT|CSV|CUSTOM]' (<formatting-properties>);
 ```
 

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/e1fef71e/pxf/HivePXF.html.md.erb
----------------------------------------------------------------------
diff --git a/pxf/HivePXF.html.md.erb b/pxf/HivePXF.html.md.erb
index ae68a99..978bf9c 100644
--- a/pxf/HivePXF.html.md.erb
+++ b/pxf/HivePXF.html.md.erb
@@ -131,7 +131,7 @@ Create a Hive table to expose our sample data set.
 2. Load the `pxf_hive_datafile.txt` sample data file into the `sales_info` table you just created:
 
     ``` sql
-    hive> LOAD DATA local INPATH '/tmp/pxf_hive_datafile.txt'
+    hive> LOAD DATA LOCAL INPATH '/tmp/pxf_hive_datafile.txt'
             INTO TABLE sales_info;
     ```
 
@@ -232,7 +232,7 @@ Use the PXF `HiveText` profile to create a queryable HAWQ external table from th
 2. Query the external table:
 
     ``` sql
-    postgres=# SELECT * FROM salesinfo_hivetextprofile where location="Beijing";
+    postgres=# SELECT * FROM salesinfo_hivetextprofile WHERE location="Beijing";
     ```
 
     ``` shell
@@ -391,7 +391,7 @@ When specifying an array field in a Hive table, you must identify the terminator
 4. Load the `pxf_hive_complex.txt` sample data file into the `table_complextypes` table you just created:
 
     ``` sql
-    hive> LOAD DATA local INPATH '/tmp/pxf_hive_complex.txt' INTO TABLE table_complextypes;
+    hive> LOAD DATA LOCAL INPATH '/tmp/pxf_hive_complex.txt' INTO TABLE table_complextypes;
     ```
 
 5. Perform a query on Hive table `table_complextypes` to verify that the data was loaded successfully:
@@ -569,12 +569,12 @@ To take advantage of PXF partition filtering push-down, the Hive and PXF partiti
 PXF partition filtering push-down is enabled by default. To disable PXF partition filtering push-down, set the `pxf_enable_filter_pushdown` HAWQ server configuration parameter to `off`:
 
 ``` sql
-postgres=# show pxf_enable_filter_pushdown;
+postgres=# SHOW pxf_enable_filter_pushdown;
  pxf_enable_filter_pushdown
 -----------------------------
  on
 (1 row)
-postgres=# set pxf_enable_filter_pushdown=off;
+postgres=# SET pxf_enable_filter_pushdown=off;
 ```
 
 ### <a id="example2"></a>Create Partitioned Hive Table
@@ -626,7 +626,7 @@ postgres=# CREATE EXTERNAL TABLE pxf_sales_part(
   delivery_city TEXT
 )
 LOCATION ('pxf://namenode:51200/sales_part?Profile=Hive')
-FORMAT 'custom' (FORMATTER='pxfwritable_import');
+FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
 
 postgres=# SELECT * FROM pxf_sales_part;
 ```
@@ -644,7 +644,7 @@ postgres=# SELECT * FROM pxf_sales_part WHERE delivery_city = 'Sacramento' AND i
 The following HAWQ query reads all the data under `delivery_state` partition `CALIFORNIA`, regardless of the city.
 
 ``` sql
-postgres=# set pxf_enable_filter_pushdown=on;
+postgres=# SET pxf_enable_filter_pushdown=on;
 postgres=# SELECT * FROM pxf_sales_part WHERE delivery_state = 'CALIFORNIA';
 ```
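
To see the effect of push-down, a comparison along these lines (session is illustrative, not part of the commit) can help:

``` sql
-- With push-down off, PXF returns rows from every partition and HAWQ
-- applies the WHERE clause itself; timings should differ noticeably.
SET pxf_enable_filter_pushdown = off;
SELECT * FROM pxf_sales_part WHERE delivery_state = 'CALIFORNIA';
```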
 

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/e1fef71e/pxf/TroubleshootingPXF.html.md.erb
----------------------------------------------------------------------
diff --git a/pxf/TroubleshootingPXF.html.md.erb b/pxf/TroubleshootingPXF.html.md.erb
index d59e361..f38b3c8 100644
--- a/pxf/TroubleshootingPXF.html.md.erb
+++ b/pxf/TroubleshootingPXF.html.md.erb
@@ -164,7 +164,7 @@ $ psql
 ``` sql
 gpadmin=# CREATE EXTERNAL TABLE hivetest(id int, newid int)
     LOCATION ('pxf://namenode:51200/pxf_hive1?PROFILE=Hive')
-    FORMAT 'custom' (formatter='pxfwritable_import');
+    FORMAT 'CUSTOM' (formatter='pxfwritable_import');
 gpadmin=# select * from hivetest;
 <select output>
 ```
@@ -183,8 +183,8 @@ $ psql
 ```
 
 ``` sql
-gpadmin=# set client_min_messages=DEBUG2
-gpadmin=# select * from hivetest;
+gpadmin=# SET client_min_messages=DEBUG2
+gpadmin=# SELECT * FROM hivetest;
 ...
 DEBUG2:  churl http header: cell #19: X-GP-URL-HOST: localhost
 DEBUG2:  churl http header: cell #20: X-GP-URL-PORT: 51200
@@ -199,5 +199,5 @@ Examine/collect the log messages from `stdout`.
 **Note**: `DEBUG2` database session logging has a performance impact. Remember to turn off `DEBUG2` logging after you have collected the desired information.
 
 ``` sql
-gpadmin=# set client_min_messages=NOTICE
+gpadmin=# SET client_min_messages=NOTICE
 ```
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/e1fef71e/query/gporca/query-gporca-enable.html.md.erb
----------------------------------------------------------------------
diff --git a/query/gporca/query-gporca-enable.html.md.erb b/query/gporca/query-gporca-enable.html.md.erb
index 7122c87..a8e6306 100644
--- a/query/gporca/query-gporca-enable.html.md.erb
+++ b/query/gporca/query-gporca-enable.html.md.erb
@@ -23,7 +23,7 @@ When the configuration parameter `optimizer_analyze_root_partition` is set to `o
 1.  Log into the HAWQ master host as `gpadmin`, the HAWQ administrator.
 2.  Set the values of the server configuration parameters. These HAWQ `hawq config` utility commands sets the value of the parameters to `on`:
 
-    ``shell
+    ``` shell
     $ hawq config -c optimizer_analyze_root_partition -v on
     ```
 
@@ -55,7 +55,7 @@ Set the server configuration parameter `optimizer` for the HAWQ system.
 Set the server configuration parameter `optimizer` for individual HAWQ databases with the `ALTER DATABASE` command. For example, this command enables GPORCA for the database *test\_db*.
 
 ``` sql
-> ALTER DATABASE test_db SET OPTIMIZER = ON ;
+> ALTER DATABASE test_db SET optimizer = ON ;
 ```
 
 ## <a id="topic_lx4_vqk_br"></a>Enabling GPORCA for a Session or a Query
@@ -63,7 +63,7 @@ Set the server configuration parameter `optimizer` for individual HAWQ databases
 You can use the `SET` command to set `optimizer` server configuration parameter for a session. For example, after you use the `psql` utility to connect to HAWQ, this `SET` command enables GPORCA:
 
 ``` sql
-> set optimizer = on ;
+> SET optimizer = on ;
 ```
 
 To set the parameter for a specific query, include the `SET` command prior to running the query.
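
A per-query session sketch (illustrative only; the table name is hypothetical) would be:

``` sql
-- Enable GPORCA only for the statement that follows, then turn it back off.
SET optimizer = on;
SELECT count(*) FROM sales;   -- planned by GPORCA
SET optimizer = off;          -- revert for the rest of the session
```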

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/e1fef71e/query/gporca/query-gporca-fallback.html.md.erb
----------------------------------------------------------------------
diff --git a/query/gporca/query-gporca-fallback.html.md.erb b/query/gporca/query-gporca-fallback.html.md.erb
index d949c33..999e9a7 100644
--- a/query/gporca/query-gporca-fallback.html.md.erb
+++ b/query/gporca/query-gporca-fallback.html.md.erb
@@ -62,7 +62,7 @@ CREATE TABLE sales (trans_id int, date date,
 This query against the table is supported by GPORCA and does not generate errors in the log file:
 
 ``` sql
-select * from sales;
+SELECT * FROM sales;
 ```
 
 The `EXPLAIN` plan output lists only the number of selected partitions.
@@ -90,13 +90,13 @@ Output from the log file indicates that GPORCA attempted to optimize the query:
 The following cube query is not supported by GPORCA.
 
 ``` sql
-select count(*) from foo group by cube(a,b);
+SELECT count(*) FROM foo GROUP BY cube(a,b);
 ```
 
 The following EXPLAIN plan output includes the message "Feature not supported by GPORCA."
 
 ``` sql
-postgres=# explain select count(*) from foo group by cube(a,b);
+postgres=# EXPLAIN SELECT count(*) FROM foo GROUP BY cube(a,b);
 ```
 ```
 LOG:  statement: explain select count(*) from foo group by cube(a,b);

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/e1fef71e/query/gporca/query-gporca-features.html.md.erb
----------------------------------------------------------------------
diff --git a/query/gporca/query-gporca-features.html.md.erb b/query/gporca/query-gporca-features.html.md.erb
index 0a89036..4941866 100644
--- a/query/gporca/query-gporca-features.html.md.erb
+++ b/query/gporca/query-gporca-features.html.md.erb
@@ -35,7 +35,7 @@ This example `CREATE TABLE` command creates a range partitioned table.
 ``` sql
 CREATE TABLE sales(order_id int, item_id int, amount numeric(15,2), 
       date date, yr_qtr int)
-   range partitioned by yr_qtr;
+   RANGE PARTITIONED BY yr_qtr;
 ```
 
 GPORCA improves on these types of queries against partitioned tables:
@@ -138,7 +138,7 @@ GPORCA generates more efficient plans for the following types of subqueries:
 GPORCA handles queries that contain the `WITH` clause. The `WITH` clause, also known as a common table expression (CTE), generates temporary tables that exist only for the query. This example query contains a CTE.
 
 ``` sql
-WITH v AS (SELECT a, sum(b) as s FROM T where c < 10 GROUP BY a)
+WITH v AS (SELECT a, sum(b) as s FROM T WHERE c < 10 GROUP BY a)
   SELECT *FROM  v AS v1 ,  v AS v2
   WHERE v1.a <> v2.a AND v1.s < v2.s;
 ```

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/e1fef71e/query/query-performance.html.md.erb
----------------------------------------------------------------------
diff --git a/query/query-performance.html.md.erb b/query/query-performance.html.md.erb
index b4f88fe..e3aa8f7 100644
--- a/query/query-performance.html.md.erb
+++ b/query/query-performance.html.md.erb
@@ -47,9 +47,9 @@ A query is not executing as quickly as you would expect. Here is how to investig
 For visibility into query performance, use the EXPLAIN ANALYZE to obtain data locality statistics. For example:
 
 ``` sql
-postgres=# create table test (i int);
-postgres=# insert into test values(2);
-postgres=# explain analyze select * from test;
+postgres=# CREATE TABLE test (i int);
+postgres=# INSERT INTO test VALUES(2);
+postgres=# EXPLAIN ANALYZE SELECT * FROM test;
 ```
 ```
 QUERY PLAN

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/e1fef71e/query/query-profiling.html.md.erb
----------------------------------------------------------------------
diff --git a/query/query-profiling.html.md.erb b/query/query-profiling.html.md.erb
index b3139cf..ea20e0a 100644
--- a/query/query-profiling.html.md.erb
+++ b/query/query-profiling.html.md.erb
@@ -173,21 +173,21 @@ Perform the following steps to create and run a user-defined PL/pgSQL function.
 4. Create the table `test_tbl` with a single column named `id` of type `integer`:
 
     ``` sql
-    testdb=# create table test_tbl (id int);
+    testdb=# CREATE TABLE test_tbl (id int);
     ```
    
 5. Add some data to the `test_tbl` table:
 
     ``` sql
-    testdb=# insert into test_tbl select generate_series(1,100);
+    testdb=# INSERT INTO test_tbl SELECT generate_series(1,100);
     ```
    
-    This `insert` command adds 100 rows to `test_tbl`, incrementing the `id` for each row.
+    This `INSERT` command adds 100 rows to `test_tbl`, incrementing the `id` for each row.
    
 6. Create a PL/pgSQL function named `explain_plan_func()` by copying and pasting the following text at the `psql` prompt:
 
     ``` sql
-    create or replace function explain_plan_func() returns varchar as $$
+    CREATE OR REPLACE FUNCTION explain_plan_func() RETURNS varchar as $$
    declare
 
      a varchar;
@@ -201,8 +201,8 @@ Perform the following steps to create and run a user-defined PL/pgSQL function.
        return a;
      end;
    $$
-   language plpgsql
-   volatile;
+   LANGUAGE plpgsql
+   VOLATILE;
     ```
 
 7. Verify the `explain_plan_func()` user-defined function was created successfully:
@@ -216,7 +216,7 @@ Perform the following steps to create and run a user-defined PL/pgSQL function.
 8. Perform a query using the user-defined function you just created:
 
     ``` sql
-    testdb=# select explain_plan_func();
+    testdb=# SELECT explain_plan_func();
     ```
 
     The `EXPLAIN` plan results for the query are displayed:
