[GitHub] incubator-hawq-docs pull request #126: HAWQ-1491 - create usage docs for Hiv...

2017-06-27 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/126#discussion_r124423336
  
--- Diff: markdown/pxf/ReadWritePXF.html.md.erb ---
@@ -105,6 +105,18 @@ Note: The DELIMITER 
parameter is mandatory.
 org.apache.hawq.pxf.service.io.GPDBWritable
 
 
+
+HiveVectorizedORC
+Optimized block read of a Hive table where each partition is stored as 
an ORC file.
--- End diff --

People might confuse this with an HDFS block, so maybe we can use "bulk" or "batch" read instead.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq-docs pull request #126: HAWQ-1491 - create usage docs for Hiv...

2017-06-27 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/126#discussion_r124423088
  
--- Diff: markdown/pxf/HivePXF.html.md.erb ---
@@ -495,9 +500,16 @@ Use the `HiveORC` profile to access ORC format data. 
The `HiveORC` profile provi
 - `=`, `>`, `<`, `>=`, `<=`, `IS NULL`, and `IS NOT NULL` operators 
and comparisons between the `float8` and `float4` types
 - `IN` operator on arrays of `int2`, `int4`, `int8`, `boolean`, and 
`text`
 
-- Complex type support - You can access Hive tables composed of array, 
map, struct, and union data types. PXF serializes each of these complex types 
to `text`.
+When choosing an ORC-supporting profile, consider the following:
+
+- The `HiveORC` profile supports complex types. You can access Hive tables 
composed of array, map, struct, and union data types. PXF serializes each of 
these complex types to `text`.  
+
+The `HiveVectorizedORC` profile does not support complex types.
+
+- The `HiveVectorizedORC` profile reads 1024 rows of data, while the 
`HiveORC` profile reads only a single row at a time.
--- End diff --

profile reads 1024 rows of data at once
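
For context, a minimal sketch of the kind of query these pushdown rules apply to (the external table and column names are assumptions based on the sales_info examples elsewhere in the docs):

``` sql
-- Comparison and IN predicates such as these are candidates for pushdown
-- when the external table uses the HiveORC profile.
SELECT location, total_sales
FROM   sales_info_ORC_ext
WHERE  total_sales >= 100.0
  AND  location IN ('Prague', 'Rome');
```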


---


[GitHub] incubator-hawq-docs pull request #126: HAWQ-1491 - create usage docs for Hiv...

2017-06-27 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/126#discussion_r124423164
  
--- Diff: markdown/pxf/HivePXF.html.md.erb ---
@@ -565,6 +577,44 @@ In the following example, you will create a Hive table 
stored in ORC format and
 Time: 425.416 ms
 ```
 
+### Example: Using the HiveVectorizedORC 
Profile
+
+In the following example, you will use the `HiveVectorizedORC` profile to 
query the `sales_info_ORC` Hive table you created in the previous example.
+
+**Note**: The `HiveVectorizedORC` profile does not support the timestamp 
data type and complex types.
+
+1. Start the `psql` subsystem:
+
+``` shell
+$ psql -d postgres
+```
+
+2. Use the PXF `HiveVectorizedORC` profile to create a queryable HAWQ 
external table from the Hive table named `sales_info_ORC` that you created in 
Step 1 of the previous example. The `FORMAT` clause must specify `'CUSTOM'`. 
The `HiveVectorizedORC` `CUSTOM` format supports only the built-in 
`'pxfwritable_import'` `formatter`.
--- End diff --

queryable - maybe readable?
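
For reference, a minimal sketch of the DDL this step describes (the PXF host, port, and column list are assumptions carried over from the earlier sales_info examples; the FORMAT clause matches the text above):

``` sql
CREATE EXTERNAL TABLE salesinfo_hivevectorc (location text, month text, num_orders int, total_sales float8)
  LOCATION ('pxf://namenode:51200/default.sales_info_ORC?PROFILE=HiveVectorizedORC')
  FORMAT 'CUSTOM' (formatter='pxfwritable_import');

SELECT location, total_sales FROM salesinfo_hivevectorc;
```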


---


[GitHub] incubator-hawq-docs pull request #101: HAWQ-1383 - plpgsql page cleanup, res...

2017-03-10 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/101#discussion_r105483266
  
--- Diff: markdown/plext/using_plpgsql.html.md.erb ---
@@ -19,143 +19,283 @@ software distributed under the License is distributed 
on an
 KIND, either express or implied.  See the License for the
 specific language governing permissions and limitations
 under the License.
--->
+--> 
 
-SQL is the language of most other relational databases use as query 
language. It is portable and easy to learn. But every SQL statement must be 
executed individually by the database server. 
+PL/pgSQL is a trusted procedural language that is automatically installed 
and registered in all HAWQ databases. With PL/pgSQL, you can:
 
-PL/pgSQL is a loadable procedural language. PL/SQL can do the following:
+-   Create functions
+-   Add control structures to the SQL language
+-   Perform complex computations
+-   Use all of the data types, functions, and operators defined in SQL
 
--   create functions
--   add control structures to the SQL language
--   perform complex computations
--   inherit all user-defined types, functions, and operators
--   be trusted by the server
+SQL is the language most relational databases use as a query language. 
While it is portable and easy to learn, every SQL statement is individually 
executed by the database server. Your client application sends each query to 
the database server, waits for it to be processed, receives and processes the 
results, does some computation, then sends further queries to the server. This 
back-and-forth requires interprocess communication and incurs network overhead 
if your client is on a different host than the HAWQ master.
 
-You can use functions created with PL/pgSQL with any database that 
supports built-in functions. For example, it is possible to create complex 
conditional computation functions and later use them to define operators or use 
them in index expressions.
+The PL/pgSQL language addresses some of these limitations. When creating 
functions with PL/pgSQL, you can group computation blocks and queries inside 
the database server, combining the power of a procedural language and the ease 
of use of SQL, but with considerable savings of client/server communication 
overhead. With PL/pgSQL:
 
-Every SQL statement must be executed individually by the database server. 
Your client application must send each query to the database server, wait for 
it to be processed, receive and process the results, do some computation, then 
send further queries to the server. This requires interprocess communication 
and incurs network overhead if your client is on a different machine than the 
database server.
+-   Extra round trips between client and server are eliminated
+-   Intermediate, and perhaps unneeded, results do not have to be 
marshaled or transferred between the server and client
+-   Re-using prepared queries avoids multiple rounds of query parsing
+ 
 
-With PL/pgSQL, you can group a block of computation and a series of 
queries inside the database server, thus having the power of a procedural 
language and the ease of use of SQL, but with considerable savings of 
client/server communication overhead.
+## PL/pgSQL Function Syntax
 
--   Extra round trips between client and server are eliminated
--   Intermediate results that the client does not need do not have to be 
marshaled or transferred between server and client
--   Multiple rounds of query parsing can be avoided
+PL/pgSQL is a block-structured language. The complete text of a function 
definition must be a block, which is defined as:
 
-This can result in a considerable performance increase as compared to an 
application that does not use stored functions.
+``` sql
+[ <<label>> ]
+[ DECLARE
+declarations ]
+BEGIN
+statements
+END [ label ];
+```
--- End diff --

got it


---


[GitHub] incubator-hawq-docs pull request #101: HAWQ-1383 - plpgsql page cleanup, res...

2017-03-10 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/101#discussion_r105477434
  
--- Diff: markdown/plext/using_plpgsql.html.md.erb ---
@@ -19,143 +19,283 @@ software distributed under the License is distributed 
on an
 KIND, either express or implied.  See the License for the
 specific language governing permissions and limitations
 under the License.
--->
+--> 
 
-SQL is the language of most other relational databases use as query 
language. It is portable and easy to learn. But every SQL statement must be 
executed individually by the database server. 
+PL/pgSQL is a trusted procedural language that is automatically installed 
and registered in all HAWQ databases. With PL/pgSQL, you can:
 
-PL/pgSQL is a loadable procedural language. PL/SQL can do the following:
+-   Create functions
+-   Add control structures to the SQL language
+-   Perform complex computations
+-   Use all of the data types, functions, and operators defined in SQL
 
--   create functions
--   add control structures to the SQL language
--   perform complex computations
--   inherit all user-defined types, functions, and operators
--   be trusted by the server
+SQL is the language most relational databases use as a query language. 
While it is portable and easy to learn, every SQL statement is individually 
executed by the database server. Your client application sends each query to 
the database server, waits for it to be processed, receives and processes the 
results, does some computation, then sends further queries to the server. This 
back-and-forth requires interprocess communication and incurs network overhead 
if your client is on a different host than the HAWQ master.
 
-You can use functions created with PL/pgSQL with any database that 
supports built-in functions. For example, it is possible to create complex 
conditional computation functions and later use them to define operators or use 
them in index expressions.
+The PL/pgSQL language addresses some of these limitations. When creating 
functions with PL/pgSQL, you can group computation blocks and queries inside 
the database server, combining the power of a procedural language and the ease 
of use of SQL, but with considerable savings of client/server communication 
overhead. With PL/pgSQL:
 
-Every SQL statement must be executed individually by the database server. 
Your client application must send each query to the database server, wait for 
it to be processed, receive and process the results, do some computation, then 
send further queries to the server. This requires interprocess communication 
and incurs network overhead if your client is on a different machine than the 
database server.
+-   Extra round trips between client and server are eliminated
+-   Intermediate, and perhaps unneeded, results do not have to be 
marshaled or transferred between the server and client
+-   Re-using prepared queries avoids multiple rounds of query parsing
+ 
 
-With PL/pgSQL, you can group a block of computation and a series of 
queries inside the database server, thus having the power of a procedural 
language and the ease of use of SQL, but with considerable savings of 
client/server communication overhead.
+## PL/pgSQL Function Syntax
 
--   Extra round trips between client and server are eliminated
--   Intermediate results that the client does not need do not have to be 
marshaled or transferred between server and client
--   Multiple rounds of query parsing can be avoided
+PL/pgSQL is a block-structured language. The complete text of a function 
definition must be a block, which is defined as:
 
-This can result in a considerable performance increase as compared to an 
application that does not use stored functions.
+``` sql
+[ <<label>> ]
+[ DECLARE
+declarations ]
+BEGIN
+statements
+END [ label ];
+```
--- End diff --

Maybe add an EXCEPTION block as well.
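
For illustration, a minimal sketch of a block that includes the optional EXCEPTION section (the function name and logic are invented for the example, not taken from the page under review):

``` sql
CREATE OR REPLACE FUNCTION safe_divide(a numeric, b numeric) RETURNS numeric AS $$
BEGIN
    RETURN a / b;
EXCEPTION
    WHEN division_by_zero THEN
        -- handle the error instead of aborting the enclosing transaction
        RAISE NOTICE 'division by zero; returning NULL';
        RETURN NULL;
END;
$$ LANGUAGE plpgsql;
```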


---


[GitHub] incubator-hawq-docs pull request #94: HAWQ-1304 - multiple doc changes for P...

2017-02-07 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/94#discussion_r99970872
  
--- Diff: markdown/pxf/ReadWritePXF.html.md.erb ---
@@ -131,6 +149,8 @@ Note: The DELIMITER 
parameter is mandatory.
 
 
 
+**Notes**: Metadata identifies the Java class that provides field 
definitions in the relation. OutputFormat identifies the file format for which 
a specific profile is optimized. While the built-in `Hive*` profiles provide 
Metadata and OutputFormat classes, most profiles will have no need to implement 
or specify these classes.
--- End diff --

We could probably mention that the PXF service can produce data in different 
formats (TEXT and GPDBWritable), and that the OutputFormat property indicates which 
output format a given profile is optimized for (TEXT or GPDBWritable).
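
For context, a hedged sketch of how the two output formats typically surface in external table DDL (the hosts, paths, and column lists are assumptions):

``` sql
-- A profile optimized for TEXT output pairs with FORMAT 'TEXT'
CREATE EXTERNAL TABLE ext_text_example (recordkey text, value text)
  LOCATION ('pxf://namenode:51200/data/pxf_examples/simple.txt?PROFILE=HdfsTextSimple')
  FORMAT 'TEXT' (DELIMITER ',');

-- A profile optimized for GPDBWritable output pairs with FORMAT 'CUSTOM'
CREATE EXTERNAL TABLE ext_gpdbwritable_example (location text, month text, num_orders int, total_sales float8)
  LOCATION ('pxf://namenode:51200/default.sales_info?PROFILE=Hive')
  FORMAT 'CUSTOM' (formatter='pxfwritable_import');
```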


---


[GitHub] incubator-hawq-docs pull request #63: HAWQ-1164 hcatalog access restrictions

2016-11-21 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/63#discussion_r89021822
  
--- Diff: pxf/HivePXF.html.md.erb ---
@@ -553,7 +569,7 @@ Alternatively, you can use the `pxf_get_item_fields` 
user-defined function (UDF)
 HCatalog integration has the following limitations:
 
 -   HCatalog integration queries and describe commands do not support 
complex types; only primitive types are supported. Use PXF external tables to 
query complex types in Hive. (See [Complex Types Example](#complex_dt_example).)
--   Even for primitive types, HCatalog metadata descriptions produced by 
`\d` and` \d+` are converted to HAWQ types. For example, the Hive type 
`tinyint` is converted to HAWQ type `int2`. (See [Data Type 
Mapping](#hive_primdatatypes).)
+-   Even for primitive types, HCatalog metadata descriptions produced by 
`\d` are converted to HAWQ types. For example, the Hive type `tinyint` is 
converted to HAWQ type `int2`. (See [Data Type Mapping](#hive_primdatatypes).)
--- End diff --

In this case it is more accurate to say that \d shows HAWQ's interpretation 
of the underlying Hive data types, while \d+ shows both the interpreted and the 
original types.
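
For context, a quick sketch of the two commands being discussed (the database and table names are assumed from the docs' earlier examples):

``` sql
-- \d shows only the HAWQ interpretation of each column's type;
-- \d+ additionally shows the original source (here, Hive) type.
postgres=# \d  hcatalog.default.sales_info
postgres=# \d+ hcatalog.default.sales_info
```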


---


[GitHub] incubator-hawq-docs pull request #63: HAWQ-1164 hcatalog access restrictions

2016-11-21 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/63#discussion_r89021235
  
--- Diff: pxf/HivePXF.html.md.erb ---
@@ -495,24 +511,24 @@ postgres=# SELECT * FROM hcatalog.default.sales_info;
 
 To obtain a description of a Hive table with HCatalog integration, you can 
use the `psql` client interface.
 
--   Within HAWQ, use either the `\d
 hcatalog.hive-db-name.hive-table-name` or `\d+ 
hcatalog.hive-db-name.hive-table-name` commands to describe a single 
table. For example, from the `psql` client interface:
+-   Within HAWQ, use either the `\d
 hcatalog.hive-db-name.hive-table-name` or `\d+ 
hcatalog.hive-db-name.hive-table-name` commands to describe a single 
table. `\d+` displays both the HAWQ and Hive data type, while `\d` displays 
only the HAWQ data type. For example, from the `psql` client interface:
--- End diff --

The \d feature was designed not to be tied to any particular source. In the above 
example it is Hive, but it might be something else in the future.


---


[GitHub] incubator-hawq pull request: HAWQ-178: Add JSON plugin support in ...

2016-05-18 Thread sansanichfb
Github user sansanichfb commented on the pull request:

https://github.com/apache/incubator-hawq/pull/302#issuecomment-220192722
  
Merging this PR; the bytea type will be addressed in a separate story.


---


[GitHub] incubator-hawq pull request: HAWQ-178: Add JSON plugin support in ...

2016-05-17 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/302#discussion_r63605864
  
--- Diff: 
pxf/pxf-json/src/main/java/org/apache/hawq/pxf/plugins/json/JsonResolver.java 
---
@@ -0,0 +1,256 @@
+package org.apache.hawq.pxf.plugins.json;
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hawq.pxf.api.OneField;
+import org.apache.hawq.pxf.api.OneRow;
+import org.apache.hawq.pxf.api.ReadResolver;
+import org.apache.hawq.pxf.api.io.DataType;
+import org.apache.hawq.pxf.api.utilities.ColumnDescriptor;
+import org.apache.hawq.pxf.api.utilities.InputData;
+import org.apache.hawq.pxf.api.utilities.Plugin;
+import org.codehaus.jackson.JsonFactory;
+import org.codehaus.jackson.JsonNode;
+import org.codehaus.jackson.map.ObjectMapper;
+
+/**
+ * This JSON resolver for PXF will decode a given object from the {@link 
JsonAccessor} into a row for HAWQ. It will
+ * decode this data into a JsonNode and walk the tree for each column. It 
supports normal value mapping via projections
+ * and JSON array indexing.
+ */
+public class JsonResolver extends Plugin implements ReadResolver {
+
+   private static final Log LOG = LogFactory.getLog(JsonResolver.class);
+
+   private ArrayList<OneField> oneFieldList;
+   private ColumnDescriptorCache[] columnDescriptorCache;
+   private ObjectMapper mapper;
+
+   /**
+* Row with empty fields. Returned in case of broken or malformed json 
records.
+*/
+   private final List<OneField> emptyRow;
+
+   public JsonResolver(InputData inputData) throws Exception {
+   super(inputData);
+   oneFieldList = new ArrayList<OneField>();
+   mapper = new ObjectMapper(new JsonFactory());
+
+   // Precompute the column metadata. The metadata is used for 
mapping column names to json nodes.
+   columnDescriptorCache = new 
ColumnDescriptorCache[inputData.getColumns()];
+   for (int i = 0; i < inputData.getColumns(); ++i) {
+   ColumnDescriptor cd = inputData.getColumn(i);
+   columnDescriptorCache[i] = new 
ColumnDescriptorCache(cd);
+   }
+
+   emptyRow = createEmptyRow();
+   }
+
+   @Override
+   public List<OneField> getFields(OneRow row) throws Exception {
+   oneFieldList.clear();
+
+   String jsonRecordAsText = row.getData().toString();
+
+   JsonNode root = decodeLineToJsonNode(jsonRecordAsText);
+
+   if (root == null) {
+   LOG.warn("Return empty-fields row due to invalid JSON: 
" + jsonRecordAsText);
+   return emptyRow;
+   }
+
+   // Iterate through the column definition and fetch our JSON data
+   for (ColumnDescriptorCache columnMetadata : 
columnDescriptorCache) {
+
+   JsonNode node = getChildJsonNode(root, 
columnMetadata.getNormalizedProjections());
+
+   // If this node is null or missing, add a null value 
here
+   if (node == null || node.isMissingNode()) {
+   addNullField(columnMetadata.getColumnType());
+   } else if (columnMetadata.isArray()) {
+   // If this column is an array index, ex. 
"tweet.hashtags[0]"
+   if (node.isArray()) {
+   // If the JSON node is an array, then 
add it to our list
+   
addFieldFromJsonArray(columnMetadata.getColumnType(), node, 
columnMetadata.getArrayNodeIndex());
+   } else {
+ 

[GitHub] incubator-hawq pull request: HAWQ-703. Serialize HCatalog Complex ...

2016-05-09 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/633#discussion_r62571923
  
--- Diff: 
pxf/pxf-hive/src/main/java/org/apache/hawq/pxf/plugins/hive/utilities/HiveUtilities.java
 ---
@@ -102,80 +103,49 @@ public static Table getHiveTable(HiveMetaStoreClient 
client, Metadata.Item itemN
  * {@code decimal(precision, scale) -> numeric(precision, 
scale)}
  * {@code varchar(size) -> varchar(size)}
  * {@code char(size) -> bpchar(size)}
+ * {@code array<dataType> -> text}
+ * {@code map<keyDataType, valueDataType> -> text}
+ * {@code struct<field1:dataType,...,fieldN:dataType> -> text}
+ * {@code uniontype<...> -> text}
  * 
  *
- * @param hiveColumn hive column schema
+ * @param hiveColumn
+ *hive column schema
  * @return field with mapped HAWQ type and modifiers
- * @throws UnsupportedTypeException if the column type is not supported
+ * @throws UnsupportedTypeException
+ * if the column type is not supported
+ * @see EnumHiveToHawqType
  */
 public static Metadata.Field mapHiveType(FieldSchema hiveColumn) 
throws UnsupportedTypeException {
 String fieldName = hiveColumn.getName();
-String hiveType = hiveColumn.getType();
-String mappedType;
-String[] modifiers = null;
+String hiveType = hiveColumn.getType(); // Type name and modifiers 
if any
+String hiveTypeName; // Type name
+String[] modifiers = null; // Modifiers
+EnumHiveToHawqType hiveToHawqType = 
EnumHiveToHawqType.getHiveToHawqType(hiveType);
+EnumHawqType hawqType = hiveToHawqType.getHawqType();
 
-// check parameterized types:
-if (hiveType.startsWith("varchar(") ||
-hiveType.startsWith("char(")) {
-String[] toks = hiveType.split("[(,)]");
-if (toks.length != 2) {
-throw new UnsupportedTypeException( "HAWQ does not support 
type " + hiveType + " (Field " + fieldName + "), " +
-"expected type of the form ()");
-}
-mappedType = toks[0];
-if (mappedType.equals("char")) {
-mappedType = "bpchar";
-}
-modifiers = new String[] {toks[1]};
-} else if (hiveType.startsWith("decimal(")) {
-String[] toks = hiveType.split("[(,)]");
-if (toks.length != 3) {
-throw new UnsupportedTypeException( "HAWQ does not support 
type " + hiveType + " (Field " + fieldName + "), " +
-"expected type of the form (,)");
+if (hiveToHawqType.getSplitExpression() != null) {
+String[] tokens = 
hiveType.split(hiveToHawqType.getSplitExpression());
+hiveTypeName = tokens[0];
--- End diff --

EnumHiveToHawqType stores the Hive type name, the corresponding HAWQ type, and a parse 
expression (if any), and a single EnumHiveToHawqType instance can correspond to many 
"raw" Hive types. For example, all parameterized Hive "raw" array types (array<int>, 
array<string>, and so on) correspond to the single enum instance 
EnumHiveToHawqType.ArrayType. So we cannot tie the parsed tokens to an enum instance 
at creation time.


---


[GitHub] incubator-hawq pull request: HAWQ-703. Serialize HCatalog Complex ...

2016-05-09 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/633#discussion_r62565599
  
--- Diff: 
pxf/pxf-hive/src/main/java/org/apache/hawq/pxf/plugins/hive/utilities/EnumHiveToHawqType.java
 ---
@@ -0,0 +1,113 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.hawq.pxf.plugins.hive.utilities;
+
+import org.apache.hawq.pxf.api.utilities.EnumHawqType;
+import org.apache.hawq.pxf.api.UnsupportedTypeException;
+
+/**
+ * 
+ * Hive types, which are supported by plugin, mapped to HAWQ's types
+ * @see EnumHawqType
+ */
+public enum EnumHiveToHawqType {
+
+TinyintType("tinyint", EnumHawqType.Int2Type),
+SmallintType("smallint", EnumHawqType.Int2Type),
+IntType("int", EnumHawqType.Int4Type),
+BigintType("bigint", EnumHawqType.Int8Type),
+BooleanType("boolean", EnumHawqType.BoolType),
+FloatType("float", EnumHawqType.Float4Type),
+DoubleType("double", EnumHawqType.Float8Type),
+StringType("string", EnumHawqType.TextType),
+BinaryType("binary", EnumHawqType.ByteaType),
+TimestampType("timestamp", EnumHawqType.TimestampType),
+DateType("date", EnumHawqType.DateType),
+DecimalType("decimal", EnumHawqType.NumericType, "[(,)]"),
+VarcharType("varchar", EnumHawqType.VarcharType, "[(,)]"),
+CharType("char", EnumHawqType.BpcharType, "[(,)]"),
+ArrayType("array", EnumHawqType.TextType, "[<,>]"),
+MapType("map", EnumHawqType.TextType, "[<,>]"),
+StructType("struct", EnumHawqType.TextType, "[<,>]"),
+UnionType("uniontype", EnumHawqType.TextType, "[<,>]");
+
+private String typeName;
+private EnumHawqType hawqType;
+private String splitExpression;
+
+EnumHiveToHawqType(String typeName, EnumHawqType hawqType) {
+this.typeName = typeName;
+this.hawqType = hawqType;
+}
+
+EnumHiveToHawqType(String typeName, EnumHawqType hawqType, String 
splitExpression) {
+this(typeName, hawqType);
+this.splitExpression = splitExpression;
+}
+
+/**
+ * 
+ * @return name of type
+ */
+public String getTypeName() {
+return this.typeName;
+}
+
+/**
+ * 
+ * @return corresponding HAWQ type
+ */
+public EnumHawqType getHawqType() {
+return this.hawqType;
+}
+
+/**
+ * 
+ * @return split by expression
+ */
+public String getSplitExpression() {
+return this.splitExpression;
+}
+
+/**
+ * Returns Hive to HAWQ type mapping entry for given Hive type 
+ * 
+ * @param hiveType full Hive type with modifiers, for example - 
decimal(10, 0), char(5), binary, array, map<string,float> etc
+ * @return corresponding Hive to HAWQ type mapping entry
+ * @throws UnsupportedTypeException if there is no corresponding HAWQ 
type
+ */
+public static EnumHiveToHawqType getHiveToHawqType(String hiveType) {
+for (EnumHiveToHawqType t : values()) {
+String hiveTypeName = hiveType;
+String splitExpression = t.getSplitExpression();
+if (splitExpression != null) {
+String[] tokens = hiveType.split(splitExpression);
+hiveTypeName = tokens[0];
+}
+
+if 
(t.getTypeName().toLowerCase().equals(hiveTypeName.toLowerCase())) {
--- End diff --

Since this is an enum and instance creation is controlled by the enum itself, 
getTypeName() will always return a non-null value.


---

[GitHub] incubator-hawq pull request: HAWQ-703. Serialize HCatalog Complex ...

2016-05-03 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/633#discussion_r61978916
  
--- Diff: 
pxf/pxf-service/src/test/java/org/apache/hawq/pxf/service/MetadataResponseFormatterTest.java
 ---
@@ -85,26 +86,45 @@ public void formatResponseStringWithModifiers() throws 
Exception {
 List<Metadata.Field> fields = new ArrayList<Metadata.Field>();
 Metadata.Item itemName = new Metadata.Item("default", "table1");
 Metadata metadata = new Metadata(itemName, fields);
-fields.add(new Metadata.Field("field1", "int"));
-fields.add(new Metadata.Field("field2", "numeric",
+fields.add(new Metadata.Field("field1", EnumHawqType.Int8Type, 
"bigint"));
+fields.add(new Metadata.Field("field2", EnumHawqType.NumericType, 
"decimal",
 new String[] {"1349", "1789"}));
-fields.add(new Metadata.Field("field3", "char",
+fields.add(new Metadata.Field("field3", EnumHawqType.BpcharType, 
"char",
 new String[] {"50"}));
 metadataList.add(metadata);
 
 response = MetadataResponseFormatter.formatResponse(metadataList, 
"path.file");
 StringBuilder expected = new StringBuilder("{\"PXFMetadata\":[{");
 
expected.append("\"item\":{\"path\":\"default\",\"name\":\"table1\"},")
 .append("\"fields\":[")
-.append("{\"name\":\"field1\",\"type\":\"int\"},")
-
.append("{\"name\":\"field2\",\"type\":\"numeric\",\"modifiers\":[\"1349\",\"1789\"]},")
-
.append("{\"name\":\"field3\",\"type\":\"char\",\"modifiers\":[\"50\"]}")
+
.append("{\"name\":\"field1\",\"type\":\"int8\",\"sourceType\":\"bigint\"},")
+
.append("{\"name\":\"field2\",\"type\":\"numeric\",\"sourceType\":\"decimal\",\"modifiers\":[\"1349\",\"1789\"]},")
+
.append("{\"name\":\"field3\",\"type\":\"bpchar\",\"sourceType\":\"char\",\"modifiers\":[\"50\"]}")
 .append("]}]}");
 
 assertEquals(expected.toString(), 
convertResponseToString(response));
 }
 
 @Test
+public void formatResponseStringWithSourceType() throws Exception {
+List<Metadata> metadataList = new ArrayList<Metadata>();
+List<Metadata.Field> fields = new ArrayList<Metadata.Field>();
+Metadata.Item itemName = new Metadata.Item("default", "table1");
+Metadata metadata = new Metadata(itemName, fields);
+fields.add(new Metadata.Field("field1", EnumHawqType.Float8Type, 
"double"));
+metadataList.add(metadata);
+
+response = MetadataResponseFormatter.formatResponse(metadataList, 
"path.file");
+StringBuilder expected = new StringBuilder("{\"PXFMetadata\":[{");
+
expected.append("\"item\":{\"path\":\"default\",\"name\":\"table1\"},")
+.append("\"fields\":[")
+
.append("{\"name\":\"field1\",\"type\":\"float8\",\"sourceType\":\"double\"}")
+.append("]}]}");
+
+//assertEquals(expected.toString(), 
convertResponseToString(response));
--- End diff --

yes, sure


---


[GitHub] incubator-hawq pull request: HAWQ-703. Serialize HCatalog Complex ...

2016-05-03 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/633#discussion_r61978803
  
--- Diff: 
pxf/pxf-hive/src/main/java/org/apache/hawq/pxf/plugins/hive/utilities/EnumHiveToHawqType.java
 ---
@@ -0,0 +1,112 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.hawq.pxf.plugins.hive.utilities;
+
+import org.apache.hawq.pxf.api.utilities.EnumHawqType;
+import org.apache.hawq.pxf.api.UnsupportedTypeException;
+
+/**
+ * 
+ * Hive types, which are supported by plugin, mapped to HAWQ's types
+ * @see EnumHawqType
+ */
+public enum EnumHiveToHawqType {
+
+TinyintType("tinyint", EnumHawqType.Int2Type),
+SmallintType("smallint", EnumHawqType.Int2Type),
+IntType("int", EnumHawqType.Int4Type),
+BigintType("bigint", EnumHawqType.Int8Type),
+BooleanType("boolean", EnumHawqType.BoolType),
+FloatType("float", EnumHawqType.Float4Type),
+DoubleType("double", EnumHawqType.Float8Type),
+StringType("string", EnumHawqType.TextType),
+BinaryType("binary", EnumHawqType.ByteaType),
+TimestampType("timestamp", EnumHawqType.TimestampType),
+DateType("date", EnumHawqType.DateType),
+DecimalType("decimal", EnumHawqType.NumericType, "[(,)]"),
+VarcharType("varchar", EnumHawqType.VarcharType, "[(,)]"),
+CharType("char", EnumHawqType.BpcharType, "[(,)]"),
+ArrayType("array", EnumHawqType.TextType, "[<,>]"),
+MapType("map", EnumHawqType.TextType, "[<,>]"),
+StructType("struct", EnumHawqType.TextType, "[<,>]"),
+UnionType("uniontype", EnumHawqType.TextType, "[<,>]");
+
+private String typeName;
+private EnumHawqType hawqType;
+private String splitExpression;
+
+EnumHiveToHawqType(String typeName, EnumHawqType hawqType) {
+this.typeName = typeName;
+this.hawqType = hawqType;
+}
+
+EnumHiveToHawqType(String typeName, EnumHawqType hawqType, String 
splitExpression) {
+this(typeName, hawqType);
+this.splitExpression = splitExpression;
+}
+
+/**
+ * 
+ * @return name of type
+ */
+public String getTypeName() {
+return this.typeName;
+}
+
+/**
+ * 
+ * @return corresponding HAWQ type
+ */
+public EnumHawqType getHawqType() {
+return this.hawqType;
+}
+
+/**
+ * 
+ * @return split by expression
+ */
+public String getSplitExpression() {
+return this.splitExpression;
+}
+
+/**
+ * Returns Hive to HAWQ type mapping entry for given Hive type 
+ * 
+ * @param hiveType full Hive type with modifiers, for example - 
decimal(10, 0), char(5), binary, array, map<string,float> etc
+ * @return corresponding Hive to HAWQ type mapping entry
+ * @throws UnsupportedTypeException if there is no corresponding HAWQ 
type
+ */
+public static EnumHiveToHawqType getHiveToHawqType(String hiveType) {
+for (EnumHiveToHawqType t : values()) {
+String hiveTypeName = hiveType;
+if (t.getSplitExpression() != null) {
--- End diff --

Good catch


---


[GitHub] incubator-hawq pull request: HAWQ-703. Serialize HCatalog Complex ...

2016-05-03 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/633#discussion_r61976525
  
--- Diff: 
pxf/pxf-api/src/main/java/org/apache/hawq/pxf/api/utilities/EnumHawqType.java 
---
@@ -0,0 +1,105 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.hawq.pxf.api.utilities;
+
+import java.io.IOException;
+import org.codehaus.jackson.JsonGenerator;
+import org.codehaus.jackson.map.JsonSerializer;
+import org.codehaus.jackson.map.annotate.JsonSerialize;
+import org.codehaus.jackson.map.SerializerProvider;
+import org.codehaus.jackson.JsonProcessingException;
+
+class EnumHawqTypeSerializer extends JsonSerializer<EnumHawqType> {
+
+@Override
+public void serialize(EnumHawqType value, JsonGenerator generator,
+  SerializerProvider provider) throws IOException,
+  JsonProcessingException {
+  generator.writeString(value.getTypeName());
--- End diff --

Enum instances do not store actual modifier values, just the number and type of 
modifiers.


---


[GitHub] incubator-hawq pull request: HAWQ-703. Serialize HCatalog Complex ...

2016-05-03 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/633#discussion_r61976195
  
--- Diff: 
pxf/pxf-hive/src/main/java/org/apache/hawq/pxf/plugins/hive/utilities/HiveUtilities.java
 ---
@@ -186,7 +155,7 @@ public static Table getHiveTable(HiveMetaStoreClient 
client, Metadata.Item itemN
  * @param modifiers type modifiers to be verified
  * @return whether modifiers are null or integers
  */
-private static boolean verifyModifers(String[] modifiers) {
+private static boolean verifyIntegerModifers(String[] modifiers) {
--- End diff --

fixed


---


[GitHub] incubator-hawq pull request: HAWQ-703. Serialize HCatalog Complex ...

2016-05-03 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/633#discussion_r61975914
  
--- Diff: 
pxf/pxf-hive/src/main/java/org/apache/hawq/pxf/plugins/hive/utilities/HiveUtilities.java
 ---
@@ -88,94 +90,61 @@ public static Table getHiveTable(HiveMetaStoreClient 
client, Metadata.Item itemN
  * Unsupported types will result in an exception.
  * 
  * The supported mappings are:
- * {@code tinyint -> int2}
- * {@code smallint -> int2}
- * {@code int -> int4}
- * {@code bigint -> int8}
- * {@code boolean -> bool}
- * {@code float -> float4}
- * {@code double -> float8}
- * {@code string -> text}
- * {@code binary -> bytea}
- * {@code timestamp -> timestamp}
- * {@code date -> date}
- * {@code decimal(precision, scale) -> numeric(precision, 
scale)}
- * {@code varchar(size) -> varchar(size)}
- * {@code char(size) -> bpchar(size)}
+ * {@code tinyint -> int2}
--- End diff --

thanks, fixed.


---


[GitHub] incubator-hawq pull request: HAWQ-703. Serialize HCatalog Complex ...

2016-05-03 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/633#discussion_r61975316
  
--- Diff: pxf/pxf-api/src/main/java/org/apache/hawq/pxf/api/Metadata.java 
---
@@ -67,36 +68,43 @@ public String toString() {
 }
 
 /**
- * Class representing item field - name and type.
+ * Class representing item field - name, type, source type, modifiers.
+ * Type - exposed type of field
+ * Source type - type of field in underlying source
+ * Modifiers - additional attributes which describe type or field
  */
 public static class Field {
 private String name;
-private String type; // TODO: change to enum
+private EnumHawqType type; // field type which PXF exposes
+private String sourceType; // filed type PXF reads from
 private String[] modifiers; // type modifiers, optional field
 
-public Field(String name, String type) {
-
-if (StringUtils.isBlank(name) || StringUtils.isBlank(type)) {
-throw new IllegalArgumentException("Field name and type 
cannot be empty");
-}
-
-this.name = name;
-this.type = type;
+public Field(String name, EnumHawqType type, String sourceType) {
+if (StringUtils.isBlank(name) || 
StringUtils.isBlank(type.getTypeName())
--- End diff --

Makes sense, updated.


---


[GitHub] incubator-hawq pull request: HAWQ-703. Serialize HCatalog Complex ...

2016-05-03 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/633#discussion_r61972982
  
--- Diff: pxf/pxf-api/src/main/java/org/apache/hawq/pxf/api/Metadata.java 
---
@@ -67,36 +68,43 @@ public String toString() {
 }
 
 /**
- * Class representing item field - name and type.
+ * Class representing item field - name, type, source type, modifiers.
+ * Type - exposed type of field
+ * Source type - type of field in underlying source
+ * Modifiers - additional attributes which describe type or field
  */
 public static class Field {
 private String name;
-private String type; // TODO: change to enum
+private EnumHawqType type; // field type which PXF exposes
+private String sourceType; // filed type PXF reads from
 private String[] modifiers; // type modifiers, optional field
 
-public Field(String name, String type) {
-
-if (StringUtils.isBlank(name) || StringUtils.isBlank(type)) {
-throw new IllegalArgumentException("Field name and type 
cannot be empty");
-}
-
-this.name = name;
-this.type = type;
+public Field(String name, EnumHawqType type, String sourceType) {
--- End diff --

fixed


---


[GitHub] incubator-hawq pull request: HAWQ-703. Serialize HCatalog Complex ...

2016-04-28 Thread sansanichfb
GitHub user sansanichfb opened a pull request:

https://github.com/apache/incubator-hawq/pull/633

HAWQ-703. Serialize HCatalog Complex Types to plain text (as Hive pro…

…file).

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/incubator-hawq HAWQ-703

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/633.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #633


commit 8c5e6f8b2408329f250ad6d69f0487dc5999b597
Author: Oleksandr Diachenko <odiache...@pivotal.io>
Date:   2016-04-22T23:34:42Z

HAWQ-703. Serialize HCatalog Complex Types to plain text (as Hive profile).




---


[GitHub] incubator-hawq pull request: HAWQ-705. Fixed aggregation on psql f...

2016-04-22 Thread sansanichfb
GitHub user sansanichfb opened a pull request:

https://github.com/apache/incubator-hawq/pull/628

HAWQ-705. Fixed aggregation on psql for Hive tables.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/incubator-hawq HAWQ-705

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/628.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #628


commit 395d790a87cec5e11a774572b0ed1ec97dd897f9
Author: Oleksandr Diachenko <odiache...@pivotal.io>
Date:   2016-04-22T23:44:53Z

HAWQ-705. Fixed aggregation on psql for Hive tables.




---


[GitHub] incubator-hawq-site pull request: Add link for HAWQ Extension Fram...

2016-04-21 Thread sansanichfb
Github user sansanichfb commented on the pull request:

https://github.com/apache/incubator-hawq-site/pull/6#issuecomment-213176265
  
+1


---


[GitHub] incubator-hawq-site pull request: Add link for HAWQ Extension Fram...

2016-04-21 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-site/pull/6#discussion_r60676146
  
--- Diff: index.html ---
@@ -233,6 +233,7 @@ Contribute to Advanced Enterprise Technology!
 <a href="https://cwiki.apache.org/confluence/display/HAWQ">HAWQ Wiki</a>
 <a href="http://hdb.docs.pivotal.io/index.html">HAWQ Docs</a>
+   <a href="http://hawq.apache.org/docs/pxf/javadoc">HAWQ Extension Framework API (Jave Doc)</a>
--- End diff --

Java?


---


[GitHub] incubator-hawq-site pull request: Create README.md

2016-04-21 Thread sansanichfb
Github user sansanichfb commented on the pull request:

https://github.com/apache/incubator-hawq-site/pull/3#issuecomment-213148202
  
Merged, @xinzweb you can close it.


---


[GitHub] incubator-hawq-site pull request: Updating doc links to point to h...

2016-04-21 Thread sansanichfb
Github user sansanichfb commented on the pull request:

https://github.com/apache/incubator-hawq-site/pull/5#issuecomment-213130367
  
+1


---


[GitHub] incubator-hawq pull request: HAWQ-683. Fix param name for Protocol...

2016-04-19 Thread sansanichfb
Github user sansanichfb commented on the pull request:

https://github.com/apache/incubator-hawq/pull/625#issuecomment-212172887
  
It might be useful to have an @version tag in each file, but rather than adding it 
manually to each class, substitute it during a build phase by reading the current 
version from the Gradle properties, similar to the way we do it for the PXF API 
version.


---


[GitHub] incubator-hawq pull request: HAWQ-681. Removed hcatalog_enable GUC...

2016-04-19 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/621#discussion_r60287263
  
--- Diff: src/test/regress/input/hcatalog_lookup.source ---
@@ -2,16 +2,9 @@
 -- test hcatalog lookup
 -- --
 
--- Negative test with GUC disabled
-SET hcatalog_enable = false;
-SELECT * from hcatalog.db.t;
-
 SELECT * FROM pxf_get_item_fields('Hive', '*');
--- End diff --

@shivzone good to have, but not related to this JIRA


---


[GitHub] incubator-hawq pull request: HAWQ-681. Removed hcatalog_enable GUC...

2016-04-18 Thread sansanichfb
GitHub user sansanichfb opened a pull request:

https://github.com/apache/incubator-hawq/pull/621

HAWQ-681. Removed hcatalog_enable GUC.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/incubator-hawq HAWQ-681

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/621.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #621


commit bf5e92dfe7cb7a1713ac42e6fe7e40106bcd5612
Author: Oleksandr Diachenko <odiache...@pivotal.io>
Date:   2016-04-15T21:40:24Z

HAWQ-681. Removed hcatalog_enable GUC.




---


[GitHub] incubator-hawq pull request: HAWQ-628. Return -1 instead of error.

2016-04-12 Thread sansanichfb
Github user sansanichfb closed the pull request at:

https://github.com/apache/incubator-hawq/pull/595


---


[GitHub] incubator-hawq pull request: HAWQ-615. Handle incomptible tables w...

2016-04-04 Thread sansanichfb
Github user sansanichfb commented on the pull request:

https://github.com/apache/incubator-hawq/pull/551#issuecomment-205532919
  
LGTM other than small cosmetic comments.


---


[GitHub] incubator-hawq pull request: HAWQ-615. Handle incomptible tables w...

2016-04-04 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/551#discussion_r58462813
  
--- Diff: 
pxf/pxf-hive/src/test/java/org/apache/hawq/pxf/plugins/hive/HiveMetadataFetcherTest.java
 ---
@@ -154,6 +155,115 @@ public void getTableMetadata() throws Exception {
 assertEquals("int4", field.getType());
 }
 
+@Test
+public void getTableMetadataWithMultipleTables() throws Exception {
+prepareConstruction();
+
+fetcher = new HiveMetadataFetcher(inputData);
+
+String tablepattern = "*";
+String dbpattern = "*";
+String dbname = "default";
+String tablenamebase = "regulartable";
+String pattern = dbpattern + "." + tablepattern;
+
+List<String> dbNames = new ArrayList<String>(Arrays.asList(dbname));
+List<String> tableNames = new ArrayList<String>();
+
+// Prepare for tables
+List<FieldSchema> fields = new ArrayList<FieldSchema>();
+fields.add(new FieldSchema("field1", "string", null));
+fields.add(new FieldSchema("field2", "int", null));
+StorageDescriptor sd = new StorageDescriptor();
+sd.setCols(fields);
+
+// Mock hive tables returned from hive client
+for(int index=1;index<=2;index++) {
--- End diff --

Add a comment explaining why the index starts from 1 rather than 0?


---


[GitHub] incubator-hawq pull request: HAWQ-546. Implemented call of pxf_get...

2016-03-31 Thread sansanichfb
GitHub user sansanichfb opened a pull request:

https://github.com/apache/incubator-hawq/pull/542

HAWQ-546. Implemented call of pxf_get_object_fields for Hive on psql.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/incubator-hawq HAWQ-546

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/542.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #542


commit 758d265396252121c914673b71fe6b2fef08dcd0
Author: Oleksandr Diachenko <odiache...@pivotal.io>
Date:   2016-03-31T20:36:48Z

HAWQ-546. Implemented call of pxf_get_object_fields for Hive on psql.




---


[GitHub] incubator-hawq pull request: Hawq 546

2016-03-30 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/529#discussion_r57978149
  
--- Diff: src/bin/psql/describe.c ---
@@ -1152,6 +1157,15 @@ describeTableDetails(const char *pattern, bool 
verbose, bool showSystem)
PGresult   *res;
int i;
 
+   //Hive hook in this method
--- End diff --

Added.


---


[GitHub] incubator-hawq pull request: Hawq 546

2016-03-29 Thread sansanichfb
GitHub user sansanichfb opened a pull request:

https://github.com/apache/incubator-hawq/pull/529

Hawq 546



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/incubator-hawq HAWQ-546

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/529.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #529


commit d89173e7860ac1b6bf1f50d9a660fc7ccf903b73
Author: Oleksandr Diachenko <odiache...@pivotal.io>
Date:   2016-03-22T00:04:32Z

HAWQ-546. Implemented call of pxf_get_object_fields for Hive on psql.

commit cac5e17b10710ebf92972df42d277beff07d2b97
Author: Oleksandr Diachenko <odiache...@pivotal.io>
Date:   2016-03-29T19:00:15Z

Merge remote-tracking branch 'upstream/master' into HAWQ-546

commit d56fbcf858c3a8509b165ba4feafdf0652e1a5d6
Author: Oleksandr Diachenko <odiache...@pivotal.io>
Date:   2016-03-29T21:44:05Z

HAWQ-546. Implemented call of pxf_get_object_fields for Hive on psql.






[GitHub] incubator-hawq pull request: HAWQ-577. Updated PXF metadata api to...

2016-03-28 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/522#discussion_r57662935
  
--- Diff: 
pxf/pxf-service/src/main/java/org/apache/hawq/pxf/service/MetadataResponse.java 
---
@@ -0,0 +1,94 @@
+package org.apache.hawq.pxf.service;
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.util.List;
+
+import javax.ws.rs.WebApplicationException;
+import javax.ws.rs.core.StreamingOutput;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hawq.pxf.api.Metadata;
+
+import org.codehaus.jackson.map.ObjectMapper;
+import org.codehaus.jackson.map.annotate.JsonSerialize.Inclusion;
+
+
+/**
+ * Class for serializing metadata in JSON format. The class implements
+ * {@link StreamingOutput} so the serialization will be done in a stream 
and not
+ * in one bulk, this in order to avoid running out of memory when 
processing a
+ * lot of items.
+ */
+public class MetadataResponse implements StreamingOutput {
+
+private static final Log Log = 
LogFactory.getLog(MetadataResponse.class);
+private static final String METADATA_DEFAULT_RESPONSE = 
"{\"PXFMetadata\":[]}";
+
+private List<Metadata> metadataList;
+
+/**
+ * Constructs metadata response out of a metadata list
+ *
+ * @param metadataList metadata list
+ */
+public MetadataResponse(List<Metadata> metadataList) {
+this.metadataList = metadataList;
+}
+
+/**
+ * Serializes the metadata list in JSON, To be used as the result 
string for HAWQ.
+ */
+@Override
+public void write(OutputStream output) throws IOException,
+WebApplicationException {
+DataOutputStream dos = new DataOutputStream(output);
--- End diff --

Should we close the resource (the DataOutputStream) at the end?
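
For reference, a minimal sketch of the kind of cleanup being asked about, assuming the JAX-RS container owns the underlying OutputStream (so the wrapper is only flushed, not closed); this is not the actual PXF code:

``` java
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Sketch only, not the committed fix: flush the wrapper in a finally block so the
// buffered bytes reach the container-owned stream even if serialization fails.
// Calling close() here would also close the underlying OutputStream.
public class StreamCleanupSketch {
    public void write(OutputStream output) throws IOException {
        DataOutputStream dos = new DataOutputStream(output);
        try {
            dos.writeBytes("{\"PXFMetadata\":[]}"); // placeholder payload
        } finally {
            dos.flush();
        }
    }
}
```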




[GitHub] incubator-hawq pull request: HAWQ-599. Fixed coverity issues.

2016-03-28 Thread sansanichfb
GitHub user sansanichfb opened a pull request:

https://github.com/apache/incubator-hawq/pull/524

 HAWQ-599. Fixed coverity issues.

We were carrying a 64k character buffer on the stack, which was unnecessary because it was only used by the process_request function in pxfutils; it has been moved to a local variable there.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sansanichfb/incubator-hawq HAWQ-599

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/524.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #524


commit 05151041514290426bbf9a7b24ee1240ded9e303
Author: Oleksandr Diachenko <odiache...@pivotal.io>
Date:   2016-03-28T20:32:21Z

HAWQ-599. Fixed coverity issues.






[GitHub] incubator-hawq pull request: HAWQ-465. Implement stored procedure ...

2016-03-23 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/479#discussion_r57248626
  
--- Diff: src/test/regress/output/json_load.source ---
@@ -184,7 +184,7 @@ END TRANSACTION;
 -- negative test: duplicated tables
 BEGIN TRANSACTION;
 SELECT 
load_json_data('@abs_builddir@/data/hcatalog/multi_table_duplicates.json');
-ERROR:  relation "hcatalog.db.t" already exists
+ERROR:  relation "db.t" already exists in namespace with oid=4284481535
--- End diff --

@hornn Updated to print namespace name instead of oid.




[GitHub] incubator-hawq pull request: HAWQ-465. Implement stored procedure ...

2016-03-23 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/479#discussion_r57227829
  
--- Diff: src/backend/utils/adt/pxf_functions.c ---
@@ -0,0 +1,164 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#include "catalog/external/externalmd.h"
+#include "postgres.h"
+#include "fmgr.h"
+#include "funcapi.h"
+#include "utils/builtins.h"
+
+
+typedef struct ObjectContext
+{
+   ListCell *current_object;
+   ListCell *current_field;
+} ObjectContext;
+
+ListCell* pxf_object_fields_enum_start(text *profile, text *pattern);
+ObjectContext* pxf_object_fields_enum_next(ObjectContext *object_context);
+void pxf_object_fields_enum_end(void);
+
+ListCell*
+pxf_object_fields_enum_start(text *profile, text *pattern)
+{
+   List *objects = NIL;
+
+   char *profile_cstr = text_to_cstring(profile);
+   char *pattern_cstr = text_to_cstring(pattern);
+
+   objects = get_pxf_object_metadata(profile_cstr, pattern_cstr, NULL);
+
+   return list_head(objects);
+}
+
+ObjectContext*
+pxf_object_fields_enum_next(ObjectContext *object_context)
+{
+
+   //first time call
+   if (object_context->current_object && !object_context->current_field)
+   object_context->current_field = list_head(((PxfItem *) 
lfirst(object_context->current_object))->fields);
+
+   //next field for the same object
+   else if lnext(object_context->current_field)
+   object_context->current_field = 
lnext(object_context->current_field);
+   //next table
+   else if lnext(object_context->current_object)
+   {
+   object_context->current_object = 
lnext(object_context->current_object);
+   object_context->current_field = list_head(((PxfItem *) 
lfirst(object_context->current_object))->fields);
+
+   //no objects, no fields left
+   } else
+   object_context = NULL;
+
+   return object_context;
+}
+
+void pxf_object_fields_enum_end(void)
+{
+   //cleanup
+}
+
+Datum pxf_get_object_fields(PG_FUNCTION_ARGS)
+{
+   MemoryContext oldcontext;
+   FuncCallContext *funcctx;
+   HeapTuple tuple;
+   Datum result;
+   Datum values[4];
+   bool nulls[4];
+
+   ObjectContext *object_context;
+
+   text *profile = PG_GETARG_TEXT_P(0);
+   text *pattern = PG_GETARG_TEXT_P(1);
+
+   /* stuff done only on the first call of the function */
+   if (SRF_IS_FIRSTCALL())
+   {
+   TupleDesc tupdesc;
+
+   /* create a function context for cross-call persistence */
+   funcctx = SRF_FIRSTCALL_INIT();
+
+   /*
+* switch to memory context appropriate for multiple function 
calls
+*/
+   oldcontext = 
MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+   /* initialize object fileds metadata scanning code */
+   object_context = (ObjectContext *) 
palloc0(sizeof(ObjectContext));
+   object_context->current_object = 
pxf_object_fields_enum_start(profile, pattern);
+   funcctx->user_fctx = (void *) object_context;
+
+   /*
+* build tupdesc for result tuples. This must match this 
function's
+* pg_proc entry!
+*/
+   tupdesc = CreateTemplateTupleDesc(4, false);
+   TupleDescInitEntry(tupdesc, (AttrNumber) 1, "path",
+   TEXTOID, -1, 0);
+   TupleDescInitEntry(tupdesc, (AttrNumber) 2, "objectname",
+   TEXTOID, -1, 0);
+   TupleDescInitEntry(tupdesc, (AttrNumber) 3, "columnname",
+   TEXTOID, -1, 0);
+   TupleDescInitEntry(tupdesc, (AttrNumber) 4, "columntype",

[GitHub] incubator-hawq pull request: HAWQ 459 Enhanced metadata api to sup...

2016-03-23 Thread sansanichfb
Github user sansanichfb commented on the pull request:

https://github.com/apache/incubator-hawq/pull/477#issuecomment-200515990
  
+1, LGTM.




[GitHub] incubator-hawq pull request: HAWQ 459 Enhanced metadata api to sup...

2016-03-23 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/477#discussion_r57224267
  
--- Diff: 
pxf/pxf-hive/src/main/java/org/apache/hawq/pxf/plugins/hive/utilities/HiveUtilities.java
 ---
@@ -203,19 +205,37 @@ private static boolean verifyModifers(String[] 
modifiers) {
  * It can be either table_name or 
db_name.table_name.
  *
  * @param qualifiedName Hive table name
- * @return {@link org.apache.hawq.pxf.api.Metadata.Table} object 
holding the full table name
+ * @return {@link Metadata.Item} object holding the full table name
  */
-public static Metadata.Table parseTableQualifiedName(String 
qualifiedName) {
+public static Metadata.Item extractTableFromName(String qualifiedName) 
{
+List<Metadata.Item> items = extractTablesFromPattern(null, qualifiedName);
+if(items.isEmpty()) {
+throw new IllegalArgumentException("No tables found");
--- End diff --

Yes, I saw that one; it's the handle_special_error method in libchurl.c. We should definitely enhance error handling, but for now the most common case is that no items match the specified pattern, so in that case I think we should just return an empty response with a 200 HTTP code.
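
As a rough illustration of that behaviour (assumed names and wiring, not the real PXF REST resource), the endpoint would hand back an empty collection with a normal 200 status instead of turning "no match" into an error:

``` java
import java.util.Collections;
import java.util.List;
import javax.ws.rs.core.Response;

// Hedged sketch: an empty match list is a normal result, so the response is a 200
// whose entity serializes to an empty metadata array rather than an error payload.
public class MetadataReplySketch {
    public Response reply(List<?> matchedItems) {
        List<?> body = matchedItems;
        if (body == null) {
            body = Collections.emptyList(); // empty result, not an exception
        }
        return Response.ok(body).build(); // HTTP 200 either way
    }
}
```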




[GitHub] incubator-hawq pull request: HAWQ-465. Implement stored procedure ...

2016-03-23 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/479#discussion_r57223075
  
--- Diff: src/include/catalog/namespace.h ---
@@ -95,4 +95,6 @@ extern char *namespace_search_path;
 
 extern List *fetch_search_path(bool includeImplicit);
 
+#define HiveProfileName "Hive"
--- End diff --

Makes sense, updated.




[GitHub] incubator-hawq pull request: HAWQ-465. Implement stored procedure ...

2016-03-22 Thread sansanichfb
Github user sansanichfb commented on the pull request:

https://github.com/apache/incubator-hawq/pull/479#issuecomment-200087621
  
@hornn we are not caching metadata in this stored procedure; we just use the same code path as a SELECT from Hive tables, and we made it more generic. Please refer to JIRA https://issues.apache.org/jira/browse/HAWQ-393 and to the design doc 
https://docs.google.com/document/d/1-P9eEfcS9SZGDMDRNc3xt8cluDL-BghICRhYz9F-MLU/edit.




[GitHub] incubator-hawq pull request: HAWQ-465. Implement stored procedure ...

2016-03-22 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/479#discussion_r57090315
  
--- Diff: src/backend/utils/adt/pxf_functions.c ---
@@ -0,0 +1,164 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#include "catalog/external/externalmd.h"
+#include "postgres.h"
+#include "fmgr.h"
+#include "funcapi.h"
+#include "utils/builtins.h"
+
+
+typedef struct ObjectContext
+{
+   ListCell *current_object;
+   ListCell *current_field;
+} ObjectContext;
+
+ListCell* pxf_object_fields_enum_start(text *profile, text *pattern);
+ObjectContext* pxf_object_fields_enum_next(ObjectContext *object_context);
+void pxf_object_fields_enum_end(void);
+
+ListCell*
+pxf_object_fields_enum_start(text *profile, text *pattern)
+{
+   List *objects = NIL;
+
+   char *profile_cstr = text_to_cstring(profile);
+   char *pattern_cstr = text_to_cstring(pattern);
+
+   objects = get_pxf_object_metadata(profile_cstr, pattern_cstr, NULL);
+
+   return list_head(objects);
+}
+
+ObjectContext*
+pxf_object_fields_enum_next(ObjectContext *object_context)
+{
+
+   //first time call
+   if (object_context->current_object && !object_context->current_field)
+   object_context->current_field = list_head(((PxfItem *) 
lfirst(object_context->current_object))->fields);
+
+   //next field for the same object
+   else if lnext(object_context->current_field)
+   object_context->current_field = 
lnext(object_context->current_field);
+   //next table
+   else if lnext(object_context->current_object)
+   {
+   object_context->current_object = 
lnext(object_context->current_object);
+   object_context->current_field = list_head(((PxfItem *) 
lfirst(object_context->current_object))->fields);
+
+   //no objects, no fields left
+   } else
+   object_context = NULL;
+
+   return object_context;
+}
+
+void pxf_object_fields_enum_end(void)
+{
+   //cleanup
+}
+
+Datum pxf_get_object_fields(PG_FUNCTION_ARGS)
+{
+   MemoryContext oldcontext;
+   FuncCallContext *funcctx;
+   HeapTuple tuple;
+   Datum result;
+   Datum values[4];
+   bool nulls[4];
+
+   ObjectContext *object_context;
+
+   text *profile = PG_GETARG_TEXT_P(0);
+   text *pattern = PG_GETARG_TEXT_P(1);
+
+   /* stuff done only on the first call of the function */
+   if (SRF_IS_FIRSTCALL())
+   {
+   TupleDesc tupdesc;
+
+   /* create a function context for cross-call persistence */
+   funcctx = SRF_FIRSTCALL_INIT();
+
+   /*
+* switch to memory context appropriate for multiple function 
calls
+*/
+   oldcontext = 
MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+   /* initialize object fileds metadata scanning code */
+   object_context = (ObjectContext *) 
palloc0(sizeof(ObjectContext));
+   object_context->current_object = 
pxf_object_fields_enum_start(profile, pattern);
+   funcctx->user_fctx = (void *) object_context;
+
+   /*
+* build tupdesc for result tuples. This must match this 
function's
+* pg_proc entry!
+*/
+   tupdesc = CreateTemplateTupleDesc(4, false);
+   TupleDescInitEntry(tupdesc, (AttrNumber) 1, "path",
+   TEXTOID, -1, 0);
+   TupleDescInitEntry(tupdesc, (AttrNumber) 2, "objectname",
+   TEXTOID, -1, 0);
+   TupleDescInitEntry(tupdesc, (AttrNumber) 3, "columnname",
+   TEXTOID, -1, 0);
+   TupleDescInitEntry(tupdesc, (AttrNumber) 4, "columntype",

[GitHub] incubator-hawq pull request: HAWQ-465. Implement stored procedure ...

2016-03-22 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/479#discussion_r57090072
  
--- Diff: src/include/catalog/namespace.h ---
@@ -95,4 +95,6 @@ extern char *namespace_search_path;
 
 extern List *fetch_search_path(bool includeImplicit);
 
+#define HiveProfileName "Hive"
--- End diff --

Because only namespace.c uses it.




[GitHub] incubator-hawq pull request: HAWQ-465. Implement stored procedure ...

2016-03-22 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/479#discussion_r57089941
  
--- Diff: src/backend/utils/adt/pxf_functions.c ---
@@ -0,0 +1,164 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#include "catalog/external/externalmd.h"
+#include "postgres.h"
+#include "fmgr.h"
+#include "funcapi.h"
+#include "utils/builtins.h"
+
+
+typedef struct ObjectContext
+{
+   ListCell *current_object;
+   ListCell *current_field;
+} ObjectContext;
+
+ListCell* pxf_object_fields_enum_start(text *profile, text *pattern);
+ObjectContext* pxf_object_fields_enum_next(ObjectContext *object_context);
+void pxf_object_fields_enum_end(void);
+
+ListCell*
+pxf_object_fields_enum_start(text *profile, text *pattern)
+{
+   List *objects = NIL;
+
+   char *profile_cstr = text_to_cstring(profile);
+   char *pattern_cstr = text_to_cstring(pattern);
+
+   objects = get_pxf_object_metadata(profile_cstr, pattern_cstr, NULL);
+
+   return list_head(objects);
+}
+
+ObjectContext*
+pxf_object_fields_enum_next(ObjectContext *object_context)
+{
+
+   //first time call
+   if (object_context->current_object && !object_context->current_field)
+   object_context->current_field = list_head(((PxfItem *) 
lfirst(object_context->current_object))->fields);
+
+   //next field for the same object
+   else if lnext(object_context->current_field)
+   object_context->current_field = 
lnext(object_context->current_field);
--- End diff --

fixed




[GitHub] incubator-hawq pull request: HAWQ-465. Implement stored procedure ...

2016-03-22 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/479#discussion_r57089873
  
--- Diff: src/backend/utils/adt/pxf_functions.c ---
@@ -0,0 +1,164 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#include "catalog/external/externalmd.h"
+#include "postgres.h"
+#include "fmgr.h"
+#include "funcapi.h"
+#include "utils/builtins.h"
+
+
+typedef struct ObjectContext
+{
+   ListCell *current_object;
+   ListCell *current_field;
+} ObjectContext;
+
+ListCell* pxf_object_fields_enum_start(text *profile, text *pattern);
+ObjectContext* pxf_object_fields_enum_next(ObjectContext *object_context);
+void pxf_object_fields_enum_end(void);
+
+ListCell*
+pxf_object_fields_enum_start(text *profile, text *pattern)
+{
+   List *objects = NIL;
+
+   char *profile_cstr = text_to_cstring(profile);
+   char *pattern_cstr = text_to_cstring(pattern);
+
+   objects = get_pxf_object_metadata(profile_cstr, pattern_cstr, NULL);
+
+   return list_head(objects);
+}
+
+ObjectContext*
+pxf_object_fields_enum_next(ObjectContext *object_context)
+{
+
+   //first time call
--- End diff --

Updated.




[GitHub] incubator-hawq pull request: HAWQ-465. Implement stored procedure ...

2016-03-22 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/479#discussion_r57089759
  
--- Diff: src/backend/utils/adt/pxf_functions.c ---
@@ -0,0 +1,164 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#include "catalog/external/externalmd.h"
--- End diff --

Sure, thanks.




[GitHub] incubator-hawq pull request: HAWQ-465. Implement stored procedure ...

2016-03-22 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/479#discussion_r57089182
  
--- Diff: src/backend/catalog/external/externalmd.c ---
@@ -45,22 +46,25 @@
 #include "utils/numeric.h"
 #include "utils/guc.h"
 
-static HCatalogTable *ParseHCatalogTable(struct json_object *hcatalogMD);
-static void LoadHCatalogEntry(HCatalogTable *hcatalogTable);
-static Oid LoadHCatalogNamespace(const char *namespaceName);
-static void LoadHCatalogTable(Oid namespaceOid, HCatalogTable 
*hcatalogTable);
-static void LoadHCatalogType(Oid relid, Oid reltypeoid, NameData relname, 
Oid relnamespaceoid);
-static void LoadHCatalogDistributionPolicy(Oid relid, HCatalogTable 
*hcatalogTable);
-static void LoadHCatalogExtTable(Oid relid, HCatalogTable *hcatalogTable);
-static void LoadHCatalogColumns(Oid relid, List *columns);
+
+List *ParsePxfEntries(StringInfo json, char *profile, Oid dboid);
--- End diff --

Sure, made them static.




[GitHub] incubator-hawq pull request: HAWQ 459 Enhanced metadata api to sup...

2016-03-22 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/477#discussion_r57088397
  
--- Diff: 
pxf/pxf-hive/src/main/java/org/apache/hawq/pxf/plugins/hive/utilities/HiveUtilities.java
 ---
@@ -203,19 +205,37 @@ private static boolean verifyModifers(String[] 
modifiers) {
  * It can be either table_name or 
db_name.table_name.
  *
  * @param qualifiedName Hive table name
- * @return {@link org.apache.hawq.pxf.api.Metadata.Table} object 
holding the full table name
+ * @return {@link Metadata.Item} object holding the full table name
  */
-public static Metadata.Table parseTableQualifiedName(String 
qualifiedName) {
+public static Metadata.Item extractTableFromName(String qualifiedName) 
{
+List<Metadata.Item> items = extractTablesFromPattern(null, qualifiedName);
+if(items.isEmpty()) {
+throw new IllegalArgumentException("No tables found");
--- End diff --

It's hard to parse a stack trace on the client side, so we should serialize all errors to JSON.
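
A minimal sketch of that idea, reusing the org.codehaus.jackson ObjectMapper the service already depends on; the payload shape here is illustrative, not the actual PXF error format:

``` java
import java.util.LinkedHashMap;
import java.util.Map;
import org.codehaus.jackson.map.ObjectMapper;

// Sketch only: wrap an exception in a small structured object so the client
// parses JSON instead of a raw Java stack trace.
public class JsonErrorSketch {
    public static String toJson(Throwable t) throws Exception {
        Map<String, String> err = new LinkedHashMap<String, String>();
        err.put("error", t.getClass().getSimpleName());
        err.put("message", String.valueOf(t.getMessage()));
        return new ObjectMapper().writeValueAsString(err);
    }

    public static void main(String[] args) throws Exception {
        // prints {"error":"IllegalArgumentException","message":"No tables found"}
        System.out.println(toJson(new IllegalArgumentException("No tables found")));
    }
}
```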




[GitHub] incubator-hawq pull request: HAWQ-465. Implement stored procedure ...

2016-03-22 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/479#discussion_r57063009
  
--- Diff: src/backend/catalog/external/externalmd.c ---
@@ -23,10 +23,12 @@
  *  Author: antova
  *
  *
- * Utilities for loading external hcatalog metadata
+ * Utilities for loading external PXF metadata
  *
  */
 
+#include "catalog/external/externalmd.h"
--- End diff --

Sure




[GitHub] incubator-hawq pull request: HAWQ-465. Implement stored procedure ...

2016-03-22 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/479#discussion_r57062325
  
--- Diff: src/backend/catalog/external/externalmd.c ---
@@ -438,18 +444,18 @@ void LoadHCatalogColumns(Oid relid, List *columns)
AttrNumber attno = 1;
foreach(lc, columns)
{
-   HCatalogColumn *hcatCol = lfirst(lc);
+   PxfField *hcatCol = lfirst(lc);
--- End diff --

Sure, updated.




[GitHub] incubator-hawq pull request: HAWQ-465. Implement stored procedure ...

2016-03-21 Thread sansanichfb
GitHub user sansanichfb opened a pull request:

https://github.com/apache/incubator-hawq/pull/479

HAWQ-465. Implement stored procedure to return fields metainfo from PXF.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sansanichfb/incubator-hawq HAWQ-465

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/479.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #479


commit 4d4cd047d4d89aaa434fe187bc504d066b512ec6
Author: Oleksandr Diachenko <odiache...@pivotal.io>
Date:   2016-03-08T01:00:00Z

HAWQ-465. Implement stored procedure to return fields metainfo from PXF.






[GitHub] incubator-hawq pull request: HAWQ 459 Enhanced metadata api to sup...

2016-03-21 Thread sansanichfb
Github user sansanichfb commented on the pull request:

https://github.com/apache/incubator-hawq/pull/477#issuecomment-199538669
  
We are introducing an API change, so we should bump up pxfProtocolVersion in gradle.properties.




[GitHub] incubator-hawq pull request: HAWQ 459 Enhanced metadata api to sup...

2016-03-21 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/477#discussion_r56916362
  
--- Diff: 
pxf/pxf-service/src/main/java/org/apache/hawq/pxf/service/utilities/ProtocolData.java
 ---
@@ -95,6 +95,7 @@ public ProtocolData(Map<String, String> paramsMap) {
 accessor = getProperty("ACCESSOR");
 resolver = getProperty("RESOLVER");
 fragmenter = getOptionalProperty("FRAGMENTER");
+metadata = getOptionalProperty("METADATA");
--- End diff --

How are we using this?




[GitHub] incubator-hawq pull request: HAWQ 459 Enhanced metadata api to sup...

2016-03-21 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/477#discussion_r56916449
  
--- Diff: pxf/pxf-service/src/main/resources/pxf-profiles-default.xml ---
@@ -49,11 +49,12 @@ under the License.
 
org.apache.hawq.pxf.plugins.hive.HiveDataFragmenter
 
org.apache.hawq.pxf.plugins.hive.HiveAccessor
 
org.apache.hawq.pxf.plugins.hive.HiveResolver
+
org.apache.hawq.pxf.plugins.hive.HiveMetadataFetcher
 
 
 
 HiveRC
-This profile is suitable only for Hive tables stored 
in RC files
+This profile is suitable only for Hive items stored 
in RC files
--- End diff --

In the current context it can remain "tables".




[GitHub] incubator-hawq pull request: HAWQ 459 Enhanced metadata api to sup...

2016-03-21 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/477#discussion_r56915071
  
--- Diff: 
pxf/pxf-hive/src/main/java/org/apache/hawq/pxf/plugins/hive/utilities/HiveUtilities.java
 ---
@@ -225,15 +245,53 @@ private static boolean verifyModifers(String[] 
modifiers) {
 }
 
 if (toks.size() == 1) {
-dbName = HIVE_DEFAULT_DBNAME;
-tableName = toks.get(0);
+dbPattern = HIVE_DEFAULT_DBNAME;
+tablePattern = toks.get(0);
 } else if (toks.size() == 2) {
-dbName = toks.get(0);
-tableName = toks.get(1);
+dbPattern = toks.get(0);
+tablePattern = toks.get(1);
 } else {
-throw new IllegalArgumentException("\"" + qualifiedName + "\"" 
+ errorMsg);
+throw new IllegalArgumentException("\"" + pattern + "\"" + 
errorMsg);
 }
 
-return new Metadata.Table(dbName, tableName);
+return getTablesFromPattern(client, dbPattern, tablePattern);
+
+}
+
+private static List 
getTablesFromPattern(HiveMetaStoreClient client, String dbPattern, String 
tablePattern) {
+
+List databases = null;
+List itemList = new ArrayList();
+List tables = new ArrayList();
+
+if(client == null || (!dbPattern.contains(WILDCARD) && 
!tablePattern.contains(WILDCARD)) ) {
+/* This case occurs when the call is invoked as part of the 
fragmenter api or when metadata is requested for a specific table name */
+itemList.add(new Metadata.Item(dbPattern, tablePattern));
+return itemList;
+}
+
+try {
+/*if(dbPattern.contains(WILDCARD)) {
+databases.addAll(client.getAllDatabases());
+}*/
+databases = client.getDatabases(dbPattern);
+if(databases.isEmpty()) {
+throw new IllegalArgumentException("no database found for 
the given pattern");
+}
+for(String dbName: databases) {
+for(String tableName: client.getTables(dbName, 
tablePattern)) {
+if (!tableName.isEmpty()) {
+itemList.add(new Metadata.Item(dbName, tableName));
+}
+}
+}
+if(itemList.isEmpty()) {
+throw new IllegalArgumentException("no tables found");
+}
+return itemList;
+
+} catch (MetaException cause) {
+throw new RuntimeException("Failed connecting to Hive 
MetaStore service: " + cause.getMessage(), cause);
--- End diff --

Should we keep exception messages consistent, all starting with either an upper-case or a lower-case letter?
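
For example (a sketch of the suggestion, not code from the patch), centralizing message construction makes it easy to keep a single casing style:

``` java
// Illustrative only: build exception messages in one place, all starting lower-case,
// so individual call sites cannot drift between styles.
public class MessageStyleSketch {
    static IllegalArgumentException noDatabasesFound(String pattern) {
        return new IllegalArgumentException("no database found for pattern: " + pattern);
    }

    static RuntimeException metastoreFailure(Throwable cause) {
        return new RuntimeException("failed connecting to Hive MetaStore service: " + cause.getMessage(), cause);
    }
}
```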




[GitHub] incubator-hawq pull request: HAWQ 459 Enhanced metadata api to sup...

2016-03-21 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/477#discussion_r56914927
  
--- Diff: 
pxf/pxf-hive/src/main/java/org/apache/hawq/pxf/plugins/hive/utilities/HiveUtilities.java
 ---
@@ -225,15 +245,53 @@ private static boolean verifyModifers(String[] 
modifiers) {
 }
 
 if (toks.size() == 1) {
-dbName = HIVE_DEFAULT_DBNAME;
-tableName = toks.get(0);
+dbPattern = HIVE_DEFAULT_DBNAME;
+tablePattern = toks.get(0);
 } else if (toks.size() == 2) {
-dbName = toks.get(0);
-tableName = toks.get(1);
+dbPattern = toks.get(0);
+tablePattern = toks.get(1);
 } else {
-throw new IllegalArgumentException("\"" + qualifiedName + "\"" 
+ errorMsg);
+throw new IllegalArgumentException("\"" + pattern + "\"" + 
errorMsg);
 }
 
-return new Metadata.Table(dbName, tableName);
+return getTablesFromPattern(client, dbPattern, tablePattern);
+
+}
+
+private static List 
getTablesFromPattern(HiveMetaStoreClient client, String dbPattern, String 
tablePattern) {
+
+List databases = null;
+List itemList = new ArrayList();
+List tables = new ArrayList();
+
+if(client == null || (!dbPattern.contains(WILDCARD) && 
!tablePattern.contains(WILDCARD)) ) {
+/* This case occurs when the call is invoked as part of the 
fragmenter api or when metadata is requested for a specific table name */
+itemList.add(new Metadata.Item(dbPattern, tablePattern));
+return itemList;
+}
+
+try {
+/*if(dbPattern.contains(WILDCARD)) {
--- End diff --

Should this commented-out block be removed?




[GitHub] incubator-hawq pull request: HAWQ 459 Enhanced metadata api to sup...

2016-03-21 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/477#discussion_r56914675
  
--- Diff: 
pxf/pxf-hive/src/main/java/org/apache/hawq/pxf/plugins/hive/utilities/HiveUtilities.java
 ---
@@ -203,19 +205,37 @@ private static boolean verifyModifers(String[] 
modifiers) {
  * It can be either table_name or 
db_name.table_name.
  *
  * @param qualifiedName Hive table name
- * @return {@link org.apache.hawq.pxf.api.Metadata.Table} object 
holding the full table name
+ * @return {@link Metadata.Item} object holding the full table name
  */
-public static Metadata.Table parseTableQualifiedName(String 
qualifiedName) {
+public static Metadata.Item extractTableFromName(String qualifiedName) 
{
+List<Metadata.Item> items = extractTablesFromPattern(null, qualifiedName);
+if(items.isEmpty()) {
+return null;
--- End diff --

I would rather throw an exception.
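
A minimal sketch of that preference (illustrative names, not the committed change): fail fast at this boundary instead of handing a null back to callers.

``` java
import java.util.Collections;
import java.util.List;

// Sketch only: an empty match becomes an exception here, so callers of the
// "single table" lookup never need a null check.
public class SingleMatchSketch {
    static String extractSingle(String qualifiedName, List<String> matches) {
        if (matches == null || matches.isEmpty()) {
            throw new IllegalArgumentException("no table found for name: " + qualifiedName);
        }
        return matches.get(0);
    }

    public static void main(String[] args) {
        System.out.println(extractSingle("default.t1", Collections.singletonList("default.t1")));
    }
}
```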




[GitHub] incubator-hawq pull request: HAWQ 459 Enhanced metadata api to sup...

2016-03-21 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/477#discussion_r56913906
  
--- Diff: 
pxf/pxf-api/src/main/java/org/apache/hawq/pxf/api/MetadataFetcher.java ---
@@ -21,25 +26,26 @@
 
 
 /**
- * Abstract class that defines getting metadata of a table.
+ * Abstract class that defines getting metadata of an item.
  */
-public abstract class MetadataFetcher {
-protected Metadata metadata;
+public abstract class MetadataFetcher extends Plugin {
+protected List<Metadata> metadata;
 
 /**
  * Constructs a MetadataFetcher.
  *
+ * @param metaData the input data
  */
-public MetadataFetcher() {
-
+public MetadataFetcher(InputData metaData) {
+super(metaData);
 }
 
 /**
  * Gets a metadata of a given table
--- End diff --

"table"? Should this now say "item"?




[GitHub] incubator-hawq pull request: HAWQ-444. The counter for increasing ...

2016-02-23 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/376#discussion_r53856404
  
--- Diff: src/test/regress/pg_regress.c ---
@@ -91,8 +91,8 @@ static intport = -1;
 static char *user = NULL;
 static char *srcdir = NULL;
 static _stringlist *extraroles = NULL;
-char *initfile = NULL;
-char *expected_statuses_file = NULL;
+static char *initfile = "./init_file";
--- End diff --

Got it, thanks.




[GitHub] incubator-hawq pull request: HAWQ-444. The counter for increasing ...

2016-02-23 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/376#discussion_r53855896
  
--- Diff: src/test/regress/pg_regress.c ---
@@ -91,8 +91,8 @@ static intport = -1;
 static char *user = NULL;
 static char *srcdir = NULL;
 static _stringlist *extraroles = NULL;
-char *initfile = NULL;
-char *expected_statuses_file = NULL;
+static char *initfile = "./init_file";
--- End diff --

Just curious, what is the reason for this change?




[GitHub] incubator-hawq pull request: HAWQ-432. Memory leaks in pg_regress.

2016-02-22 Thread sansanichfb
Github user sansanichfb commented on the pull request:

https://github.com/apache/incubator-hawq/pull/368#issuecomment-187429072
  
@shivzone I ran the Coverity build on a local branch; I'm not sure whether this link is accessible from outside.




[GitHub] incubator-hawq pull request: HAWQ-432. Memory leaks in pg_regress.

2016-02-22 Thread sansanichfb
GitHub user sansanichfb opened a pull request:

https://github.com/apache/incubator-hawq/pull/368

HAWQ-432. Memory leaks in pg_regress.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/incubator-hawq HAWQ-432

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/368.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #368


commit e80c20cffefe66327dba19d1c4094bec4420fa11
Author: Oleksandr Diachenko <odiache...@pivotal.io>
Date:   2016-02-18T23:03:20Z

HAWQ-432. Memory leaks in pg_regress.






[GitHub] incubator-hawq pull request: Hawq 400

2016-02-22 Thread sansanichfb
Github user sansanichfb closed the pull request at:

https://github.com/apache/incubator-hawq/pull/360




[GitHub] incubator-hawq pull request: Hawq 400

2016-02-20 Thread sansanichfb
GitHub user sansanichfb opened a pull request:

https://github.com/apache/incubator-hawq/pull/360

Hawq 400



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/incubator-hawq HAWQ-400

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/360.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #360


commit 73eb65f61c2bd2400cc3283e8fa2f9234807aef5
Author: Oleksandr Diachenko <odiache...@pivotal.io>
Date:   2016-02-13T05:07:33Z

HAWQ-400. Support expected exit codes for regression tests.

commit 686529d34d9f625d70cda270c0361f6b65f5d2c8
Author: Oleksandr Diachenko <odiache...@pivotal.io>
Date:   2016-02-18T23:03:20Z

HAWQ-400. Support expected exit codes for regression tests, fixed leaks.






[GitHub] incubator-hawq pull request: HAWQ-423. Updated gradle and download...

2016-02-18 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/356#discussion_r53419960
  
--- Diff: pxf/build.gradle ---
@@ -434,7 +435,6 @@ task tomcatRpm(type: Rpm) {
 packageName 'apache-tomcat'
 summary = 'Apache Tomcat RPM'
 vendor = project.vendor
-release = buildNumber()
--- End diff --

Why do we need to remove the build number from the RPM name?




[GitHub] incubator-hawq pull request: HAWQ-400. Support expected exit codes...

2016-02-17 Thread sansanichfb
Github user sansanichfb commented on the pull request:

https://github.com/apache/incubator-hawq/pull/347#issuecomment-185471693
  
Merged to master




[GitHub] incubator-hawq pull request: HAWQ-400. Support expected exit codes...

2016-02-17 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/347#discussion_r53245058
  
--- Diff: src/test/regress/GNUmakefile ---
@@ -143,7 +143,7 @@ installcheck-parallel: all upg2-setup ugpart-setup
$(pg_regress_call)  --psqldir=$(PSQLDIR) 
--schedule=$(srcdir)/parallel_schedule --srcdir=$(abs_srcdir)
 
 installcheck-good: all ./current_good_schedule upg2-setup ugpart-setup
-   $(pg_regress_call)  --psqldir=$(PSQLDIR) 
--schedule=./current_good_schedule --srcdir=$(abs_srcdir)
+   $(pg_regress_call)  --psqldir=$(PSQLDIR) 
--schedule=./current_good_schedule --srcdir=$(abs_srcdir) 
--expected-statuses-file=expected_statuses
--- End diff --

Discussed in person; we will stick to the plural form.




[GitHub] incubator-hawq pull request: HAWQ-400. Support expected exit codes...

2016-02-17 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/347#discussion_r53240501
  
--- Diff: src/test/regress/pg_regress.c ---
@@ -1672,19 +1800,9 @@ run_single_test(const char *test, test_function 
tfunc)
differ |= newdiff;
}
 
-   if (differ)
-   {
-   status(_("FAILED"));
-   fail_count++;
-   }
-   else
-   {
-   status(_("ok"));
-   success_count++;
-   }
+   int expected_status = get_expected_status(test);
 
-   if (exit_status != 0)
-   log_child_failure(exit_status);
+   print_test_status(differ, exit_status, expected_status, -1);
--- End diff --

Updated




[GitHub] incubator-hawq pull request: HAWQ-400. Support expected exit codes...

2016-02-17 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/347#discussion_r53240486
  
--- Diff: src/test/regress/pg_regress.c ---
@@ -1333,32 +1455,70 @@ wait_for_tests(PID_TYPE * pids, int *statuses, char 
**names, int num_tests)
 }
 
 /*
- * report nonzero exit code from a test process
+ * Print test status depending on differences, actual, expected statuses
  */
 static void
-log_child_failure(int exitstatus)
+print_test_status(bool differ, int actual_status, int expected_status, 
double diff_secs)
--- End diff --

Updated




[GitHub] incubator-hawq pull request: HAWQ-400. Support expected exit codes...

2016-02-16 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/347#discussion_r53113635
  
--- Diff: src/test/regress/pg_regress.c ---
@@ -1931,8 +2050,8 @@ regression_main(int argc, char *argv[], init_function 
ifunc, test_function tfunc
{"psqldir", required_argument, NULL, 16},
{"srcdir", required_argument, NULL, 17},
{"create-role", required_argument, NULL, 18},
-   {"init-file", required_argument, NULL, 19},
--- End diff --

Fixed index duplication.




[GitHub] incubator-hawq pull request: HAWQ-400. Support expected exit codes...

2016-02-16 Thread sansanichfb
GitHub user sansanichfb opened a pull request:

https://github.com/apache/incubator-hawq/pull/347

HAWQ-400. Support expected exit codes for regression tests.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/incubator-hawq HAWQ-400

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/347.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #347


commit c74ef60b2adf0b73ce7ed365d2ac738111ee330d
Author: Oleksandr Diachenko <odiache...@pivotal.io>
Date:   2016-02-13T05:07:33Z

HAWQ-400. Support expected exit codes for regression tests.






[GitHub] incubator-hawq pull request:

2016-02-04 Thread sansanichfb
Github user sansanichfb commented on the pull request:


https://github.com/apache/incubator-hawq/commit/37a5043aaccfc0fd6471486fca438e8f7bfb9f79#commitcomment-15881028
  
It might look confusing, but it seems to be a limitation of the regression testing framework - we cannot expect custom exit codes other than 0 - so we can ignore this message as long as installcheck-good passes.




[GitHub] incubator-hawq pull request:

2016-02-04 Thread sansanichfb
Github user sansanichfb commented on the pull request:


https://github.com/apache/incubator-hawq/commit/37a5043aaccfc0fd6471486fca438e8f7bfb9f79#commitcomment-15900403
  
@changleicn sure, I created https://issues.apache.org/jira/browse/HAWQ-400 
to improve our regression test framework, to support expectations on exit codes.

The reason the error's level is FATAL: as of now, all errors used in postinit.c are either WARNING or FATAL, and for the "hcatalog" database a WARNING is not sufficient, which is why it is FATAL.




[GitHub] incubator-hawq pull request:

2016-02-04 Thread sansanichfb
Github user sansanichfb commented on the pull request:


https://github.com/apache/incubator-hawq/commit/37a5043aaccfc0fd6471486fca438e8f7bfb9f79#commitcomment-15880801
  
@ztao1987 yes, it's expected, because the command "\connect hcatalog;" returns an error of level FATAL.




[GitHub] incubator-hawq pull request: HAWQ-369. Hcatalog as reserved name.

2016-02-02 Thread sansanichfb
GitHub user sansanichfb opened a pull request:

https://github.com/apache/incubator-hawq/pull/324

HAWQ-369. Hcatalog as reserved name.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/incubator-hawq HAWQ-369

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/324.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #324


commit f63652ab99cc3d4f3568849201b497dc83fc1291
Author: Oleksandr Diachenko <odiache...@pivotal.io>
Date:   2016-01-29T03:31:02Z

HAWQ-369. Hcatalog as reserved name.






[GitHub] incubator-hawq pull request: HAWQ-369. Hcatalog as reserved name.

2016-02-02 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/324#discussion_r51650628
  
--- Diff: src/backend/commands/dbcommands.c ---
@@ -1533,6 +1540,14 @@ RenameDatabase(const char *oldname, const char 
*newname)
cqContext   cqc;
cqContext  *pcqCtx;
 
+
+   /*
+* Make sure "hcatalog" is not used as new name, because it's reserved 
for
+* hcatalog feature integration*/
--- End diff --

sure, fixed




[GitHub] incubator-hawq pull request: HAWQ-369. Hcatalog as reserved name.

2016-02-02 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/324#discussion_r51650643
  
--- Diff: src/backend/commands/dbcommands.c ---
@@ -1555,6 +1570,13 @@ RenameDatabase(const char *oldname, const char 
*newname)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 errmsg("current database may not be 
renamed")));
+   /*
+* "hcatalog" database cannot be renamed
+* */
--- End diff --

fixed




[GitHub] incubator-hawq pull request: HAWQ-369. Hcatalog as reserved name.

2016-02-02 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/324#discussion_r51651033
  
--- Diff: src/test/regress/input/hcatalog_lookup.source ---
@@ -138,16 +138,51 @@ alter table test_schema.p exchange partition p1 with 
table hcatalog.test_schema.
 select pg_catalog.pg_database_size('hcatalog');
 select pg_catalog.pg_database_size(6120);
 
+--positive test: should be able to create table named "hcatalog"
+CREATE TABLE hcatalog(a int);
+
+--negative test: cannot create database named "hcatalog"
+CREATE DATABASE hcatalog;
+
+--allow renaming schemas and databases
+SET gp_called_by_pgdump = true;
+
+--negative test: cannot rename exiting database to "hcatalog"
--- End diff --

thanks, fixed




[GitHub] incubator-hawq pull request: HAWQ-369. Hcatalog as reserved name.

2016-02-02 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/324#discussion_r51652126
  
--- Diff: src/test/regress/input/hcatalog_lookup.source ---
@@ -138,16 +138,51 @@ alter table test_schema.p exchange partition p1 with 
table hcatalog.test_schema.
 select pg_catalog.pg_database_size('hcatalog');
 select pg_catalog.pg_database_size(6120);
 
+--positive test: should be able to create table named "hcatalog"
+CREATE TABLE hcatalog(a int);
+
+--negative test: cannot create database named "hcatalog"
+CREATE DATABASE hcatalog;
+
+--allow renaming schemas and databases
+SET gp_called_by_pgdump = true;
+
+--negative test: cannot rename exiting database to "hcatalog"
+ALTER DATABASE regression RENAME TO hcatalog;
+
+--positive test: can rename exiting schema to "hcatalog"
+CREATE SCHEMA test_schema3;
+ALTER SCHEMA test_schema3 RENAME to hcatalog;
+ALTER SCHEMA hcatalog RENAME to hcatalog1;
+
+--positive test: should be able to create schema named "hcatalog"
+CREATE SCHEMA hcatalog;
+
+--positive test: can rename schema "hcatalog"
+ALTER SCHEMA hcatalog RENAME to hcatalog2;
+
+--negative test: cannot create a database using "hcatalog" as a template
+CREATE DATABASE hcatalog2 TEMPLATE hcatalog;
--- End diff --

correct, just added a positive case to ensure 




[GitHub] incubator-hawq pull request: HAWQ-369. Hcatalog as reserved name.

2016-02-02 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/324#discussion_r51652164
  
--- Diff: src/test/regress/output/hcatalog_lookup.source ---
@@ -266,6 +266,29 @@ select pg_catalog.pg_database_size('hcatalog');
 ERROR:  database hcatalog (OID 6120) is reserved (SOMEFILE:SOMEFUNC)
 select pg_catalog.pg_database_size(6120);
 ERROR:  database hcatalog (OID 6120) is reserved (SOMEFILE:SOMEFUNC)
+--positive test: should be able to create table named "hcatalog"
+CREATE TABLE hcatalog(a int);
+--negative test: cannot create database named "hcatalog"
+CREATE DATABASE hcatalog;
+ERROR:  "hcatalog" is a reserved name for hcatalog feature integration
+--allow renaming schemas and databases
+SET gp_called_by_pgdump = true;
+--negative test: cannot rename exiting database to "hcatalog"
+ALTER DATABASE regression RENAME TO hcatalog;
+ERROR:  "hcatalog" is a reserved name for hcatalog feature integration
+--positive test: can rename exiting schema to "hcatalog"
--- End diff --

fixed




[GitHub] incubator-hawq pull request: HAWQ-369. Hcatalog as reserved name.

2016-02-02 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/324#discussion_r51649904
  
--- Diff: src/test/regress/input/hcatalog_lookup.source ---
@@ -138,16 +138,51 @@ alter table test_schema.p exchange partition p1 with 
table hcatalog.test_schema.
 select pg_catalog.pg_database_size('hcatalog');
 select pg_catalog.pg_database_size(6120);
 
+--positive test: should be able to create table named "hcatalog"
+CREATE TABLE hcatalog(a int);
+
+--negative test: cannot create database named "hcatalog"
+CREATE DATABASE hcatalog;
+
+--allow renaming schemas and databases
--- End diff --

@GodenYao enabling this GUC allows renaming both schemas and databases in 
general, so it's correct. By default HAWQ doesn't support renaming databases 
and schemas; it is only allowed when called from the pg_dump utility. I added 
this to the docs.
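
For readers who have not seen the mechanism, the sketch below shows how such a 
pg_dump-only guard typically looks on the C side. It is only an illustration: 
the boolean mirrors the gp_called_by_pgdump GUC that the test sets above, but 
the variable declaration, helper name, and message wording here are 
assumptions rather than the literal HAWQ source.

    #include "postgres.h"

    /* GUC flag; assumed true only when pg_dump/pg_restore drives the session */
    extern bool gp_called_by_pgdump;

    /* Hypothetical helper: reject RENAME unless invoked on behalf of pg_dump */
    static void
    check_rename_allowed(void)
    {
        if (!gp_called_by_pgdump)
            ereport(ERROR,
                    (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                     errmsg("renaming databases and schemas is only supported for pg_dump")));
    }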




[GitHub] incubator-hawq pull request: HAWQ-369. Hcatalog as reserved name.

2016-02-02 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/324#discussion_r51652151
  
--- Diff: src/test/regress/output/hcatalog_lookup.source ---
@@ -266,6 +266,29 @@ select pg_catalog.pg_database_size('hcatalog');
 ERROR:  database hcatalog (OID 6120) is reserved (SOMEFILE:SOMEFUNC)
 select pg_catalog.pg_database_size(6120);
 ERROR:  database hcatalog (OID 6120) is reserved (SOMEFILE:SOMEFUNC)
+--positive test: should be able to create table named "hcatalog"
+CREATE TABLE hcatalog(a int);
+--negative test: cannot create database named "hcatalog"
+CREATE DATABASE hcatalog;
+ERROR:  "hcatalog" is a reserved name for hcatalog feature integration
+--allow renaming schemas and databases
+SET gp_called_by_pgdump = true;
+--negative test: cannot rename exiting database to "hcatalog"
--- End diff --

thanks, fixed




[GitHub] incubator-hawq pull request: HAWQ-369. Hcatalog as reserved name.

2016-02-02 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/324#discussion_r51666509
  
--- Diff: src/backend/utils/init/postinit.c ---
@@ -368,6 +368,22 @@ InitPostgres(const char *in_dbname, Oid dboid, const 
char *username,
chardbname[NAMEDATALEN];
 
/*
+* User is not supposed to connect to hcatalog database,
+* because it's reserved for hcatalog feature integration
+*/
+   if (!bootstrap)
+   {
+   if (strcmp(in_dbname, HcatalogDbName) == 0)
+   {
+   ereport(ERROR,
+   (errcode(ERRCODE_UNDEFINED_DATABASE),
--- End diff --

The ERRCODE_RESERVED_NAME error code is used when trying to create or update 
an object, so ERRCODE_UNDEFINED_DATABASE should be fine for this case. Also, 
in the current implementation, when a user tries to connect to the "hcatalog" 
database, they get ERRCODE_UNDEFINED_DATABASE.
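
To make the distinction concrete, here are the two cases side by side, 
condensed from the diffs quoted in this thread (fragments only, not a 
compilable unit, and the second errmsg argument is filled in by assumption):

    /* CREATE/RENAME to the reserved name: a user-facing naming error */
    if (strcmp(dbname, HcatalogDbName) == 0)
        ereport(ERROR,
                (errcode(ERRCODE_RESERVED_NAME),
                 errmsg("\"%s\" is a reserved name for hcatalog feature integration",
                        HcatalogDbName)));

    /* Connecting to the reserved database: treated like an unavailable database */
    if (strcmp(in_dbname, HcatalogDbName) == 0)
        ereport(ERROR,
                (errcode(ERRCODE_UNDEFINED_DATABASE),
                 errmsg("\"%s\" database is only for system use", HcatalogDbName)));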




[GitHub] incubator-hawq pull request: HAWQ-369. Hcatalog as reserved name.

2016-02-02 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/324#discussion_r5190
  
--- Diff: src/test/regress/output/hcatalog_lookup.source ---
@@ -266,6 +266,29 @@ select pg_catalog.pg_database_size('hcatalog');
 ERROR:  database hcatalog (OID 6120) is reserved (SOMEFILE:SOMEFUNC)
 select pg_catalog.pg_database_size(6120);
 ERROR:  database hcatalog (OID 6120) is reserved (SOMEFILE:SOMEFUNC)
+--positive test: should be able to create table named "hcatalog"
+CREATE TABLE hcatalog(a int);
+--negative test: cannot create database named "hcatalog"
+CREATE DATABASE hcatalog;
+ERROR:  "hcatalog" is a reserved name for hcatalog feature integration
+--allow renaming schemas and databases
--- End diff --

addressed




[GitHub] incubator-hawq pull request: HAWQ-369. Hcatalog as reserved name.

2016-02-02 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/324#discussion_r5177
  
--- Diff: src/test/regress/input/hcatalog_lookup.source ---
@@ -138,16 +138,49 @@ alter table test_schema.p exchange partition p1 with 
table hcatalog.test_schema.
 select pg_catalog.pg_database_size('hcatalog');
 select pg_catalog.pg_database_size(6120);
 
+--positive test: should be able to create table named "hcatalog"
+CREATE TABLE hcatalog(a int);
+
+--negative test: cannot create database named "hcatalog"
+CREATE DATABASE hcatalog;
+
+--allow renaming schemas and databases
+SET gp_called_by_pgdump = true;
+
+--negative test: cannot rename existing database to "hcatalog"
+ALTER DATABASE regression RENAME TO hcatalog;
+
+--positive test: can rename existing schema to "hcatalog"
+CREATE SCHEMA test_schema3;
+ALTER SCHEMA test_schema3 RENAME to hcatalog;
+
+--positive test: can rename schema "hcatalog"
+ALTER SCHEMA hcatalog RENAME to hcatalog1;
+
+--positive test: should be able to create schema named "hcatalog"
+CREATE SCHEMA hcatalog;
+
+--negative test: cannot create a database using "hcatalog" as a template
+CREATE DATABASE hcatalog2 TEMPLATE hcatalog;
+
+--restrict renaming schemas and databases
+SET gp_called_by_pgdump = false;
+
 -- cleanup
 DROP schema test_schema cascade;
 SELECT convert_to_internal_schema('test_schema');
 DROP schema test_schema cascade;
 DROP schema test_schema2 cascade;
+DROP schema hcatalog1 cascade;
--- End diff --

thanks, added.




[GitHub] incubator-hawq pull request: HAWQ-369. Hcatalog as reserved name.

2016-02-02 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/324#discussion_r51662616
  
--- Diff: doc/src/sgml/ref/alter_database.sgml ---
@@ -154,6 +154,16 @@ ALTER DATABASE name OWNER TO 
+
+  
+   Currently RENAME TO is supported only when called by pgdump utility.
+  
+  
+   User can not use "hcatalog" as a name for database, because it's 
reserved for Hcatalog integration feature.
--- End diff --

Sure, updated all occurrences.





[GitHub] incubator-hawq pull request: HAWQ-369. Hcatalog as reserved name.

2016-02-02 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/324#discussion_r51662919
  
--- Diff: src/backend/commands/dbcommands.c ---
@@ -848,11 +848,18 @@ createdb(CreatedbStmt *stmt)
 * Check for db name conflict.  This is just to give a more friendly 
error
 * message than "unique index violation".  There's a race condition but
 * we're willing to accept the less friendly message in that case.
+* Also check that user is not trying to use "hcatalog" as a database 
name,
+* because it's already reserved for hcatalog feature integration.
--- End diff --

updated




[GitHub] incubator-hawq pull request: HAWQ-369. Hcatalog as reserved name.

2016-02-02 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/324#discussion_r51662958
  
--- Diff: src/backend/commands/dbcommands.c ---
@@ -848,11 +848,18 @@ createdb(CreatedbStmt *stmt)
 * Check for db name conflict.  This is just to give a more friendly 
error
 * message than "unique index violation".  There's a race condition but
 * we're willing to accept the less friendly message in that case.
+* Also check that user is not trying to use "hcatalog" as a database 
name,
+* because it's already reserved for hcatalog feature integration.
 */
if (OidIsValid(get_database_oid(dbname)))
-   ereport(ERROR,
-   (errcode(ERRCODE_DUPLICATE_DATABASE),
-errmsg("database \"%s\" already exists", 
dbname)));
+   if (strcmp(dbname, HcatalogDbName) == 0)
+   ereport(ERROR,
+   (errcode(ERRCODE_RESERVED_NAME),
+errmsg("\"%s\" is a reserved 
name for hcatalog feature integration", HcatalogDbName)));
--- End diff --

updated :)




[GitHub] incubator-hawq pull request: HAWQ-369. Hcatalog as reserved name.

2016-02-02 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/324#discussion_r51662881
  
--- Diff: doc/src/sgml/ref/create_database.sgml ---
@@ -184,6 +184,10 @@ CREATE DATABASE name
connection slot remains for the database, it is possible that
both will fail.  Also, the limit is not enforced against superusers.
   
+
+  
+   User can not create database named "hcatalog", because it's reserved 
for Hcatalog feature integration.
--- End diff --

sure, updated




[GitHub] incubator-hawq pull request: HAWQ-369. Hcatalog as reserved name.

2016-02-02 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/324#discussion_r51663054
  
--- Diff: src/backend/utils/init/postinit.c ---
@@ -368,6 +368,22 @@ InitPostgres(const char *in_dbname, Oid dboid, const 
char *username,
chardbname[NAMEDATALEN];
 
/*
+* User is not supposed to connect to hcatalog database,
+* because it's reserved for hcatalog feature integration
--- End diff --

done




[GitHub] incubator-hawq pull request: HAWQ-369. Hcatalog as reserved name.

2016-02-02 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/324#discussion_r51662992
  
--- Diff: src/backend/commands/dbcommands.c ---
@@ -1533,6 +1540,15 @@ RenameDatabase(const char *oldname, const char 
*newname)
cqContext   cqc;
cqContext  *pcqCtx;
 
+
+   /*
+* Make sure "hcatalog" is not used as new name, because it's reserved 
for
+* hcatalog feature integration
+*/
+   if (strcmp(newname, HcatalogDbName) == 0)
+   ereport(ERROR,
+   (errcode(ERRCODE_RESERVED_NAME),
+   errmsg("\"%s\" is a reserved name for hcatalog 
feature integration", HcatalogDbName)));
--- End diff --

done




[GitHub] incubator-hawq pull request: HAWQ-369. Hcatalog as reserved name.

2016-02-02 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/324#discussion_r51667038
  
--- Diff: src/backend/utils/init/postinit.c ---
@@ -368,6 +368,22 @@ InitPostgres(const char *in_dbname, Oid dboid, const 
char *username,
chardbname[NAMEDATALEN];
 
/*
+* User is not supposed to connect to hcatalog database,
+* because it's reserved for hcatalog feature integration
+*/
+   if (!bootstrap)
+   {
+   if (strcmp(in_dbname, HcatalogDbName) == 0)
+   {
+   ereport(ERROR,
+   (errcode(ERRCODE_UNDEFINED_DATABASE),
+   errmsg("\"%s\" database is only for 
system use",
--- End diff --

Examples of current errors for incorrect connection attempts:
- psql: FATAL:  database "te" does not exist
- psql: FATAL:  database "template0" is not currently accepting connections

Error for "hcatalog":
psql -d hcatalog;
psql: FATAL:  "hcatalog" database is only for system use.

So I think the current error is sufficient and consistent with the other similar errors.




[GitHub] incubator-hawq pull request: HAWQ-178: Add JSON plugin support in ...

2016-01-28 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/302#discussion_r51211413
  
--- Diff: 
pxf/pxf-json/src/test/java/org/apache/pxf/hawq/plugins/json/JsonExtensionTest.java
 ---
@@ -0,0 +1,173 @@
+package org.apache.pxf.hawq.plugins.json;
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hawq.pxf.api.Fragmenter;
+import org.apache.hawq.pxf.api.ReadAccessor;
+import org.apache.hawq.pxf.api.ReadResolver;
+import org.apache.hawq.pxf.api.io.DataType;
+import org.apache.hawq.pxf.plugins.hdfs.HdfsDataFragmenter;
+import org.apache.hawq.pxf.plugins.json.JsonAccessor;
+import org.apache.hawq.pxf.plugins.json.JsonResolver;
+import org.junit.After;
+import org.junit.Test;
+
+public class JsonExtensionTest extends PxfUnit {
+
+   private static List<Pair<String, DataType>> columnDefs = null;
+   private static List<Pair<String, String>> extraParams = new 
ArrayList<Pair<String, String>>();
+
+   static {
+
+   columnDefs = new ArrayList<Pair<String, DataType>>();
+
+   columnDefs.add(new Pair<String, DataType>("created_at", 
DataType.TEXT));
+   columnDefs.add(new Pair<String, DataType>("id", 
DataType.BIGINT));
+   columnDefs.add(new Pair<String, DataType>("text", 
DataType.TEXT));
+   columnDefs.add(new Pair<String, DataType>("user.screen_name", 
DataType.TEXT));
+   columnDefs.add(new Pair<String, 
DataType>("entities.hashtags[0]", DataType.TEXT));
+   columnDefs.add(new Pair<String, 
DataType>("coordinates.coordinates[0]", DataType.FLOAT8));
+   columnDefs.add(new Pair<String, 
DataType>("coordinates.coordinates[1]", DataType.FLOAT8));
+   }
+
+   @After
+   public void cleanup() throws Exception {
+   extraParams.clear();
+   }
+
+   @Test
+   public void testSmallTweets() throws Exception {
+
+   List output = new ArrayList();
+
+   output.add("Fri Jun 07 22:45:02 + 
2013,343136547115253761,REPAIR THE TRUST: REMOVE OBAMA/BIDEN FROM OFFICE. #IRS 
#DOJ #NSA #tcot,SpreadButter,tweetCongress,,");
+   output.add("Fri Jun 07 22:45:02 + 
2013,343136547123646465,@marshafitrie dibagi 1000 aja sha 
:P,patronusdeadly,,,");
+   output.add("Fri Jun 07 22:45:02 + 
2013,343136547136233472,Vaga: Supervisor de Almoxarifado. Confira em 
http://t.co/hK5cy5B2oS,NoSecrets_Vagas,,,;);
+   output.add("Fri Jun 07 22:45:03 + 
2013,343136551322136576,It's Jun 7, 2013 @ 11pm ; Wind = NNE (30,0) 14.0 knots; 
Swell = 2.6 ft @ 5 seconds,SevenStonesBuoy,,-6.1,50.103");
+
+   super.assertOutput(new Path(System.getProperty("user.dir") + 
"/" + "src/test/resources/tweets-small.json"),
--- End diff --

Probably we can use File.separator instead of "/".




[GitHub] incubator-hawq pull request: HAWQ-340. Make getVersion API return ...

2016-01-25 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/290#discussion_r50761090
  
--- Diff: pxf/build.gradle ---
@@ -122,6 +124,42 @@ subprojects { subProject ->
 }
 
 project('pxf-service') {
+
+
+task generateSources {
--- End diff --

Sure, let me add comments.




[GitHub] incubator-hawq pull request: HAWQ-340. Make getVersion API return ...

2016-01-25 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/290#discussion_r50761112
  
--- Diff: 
pxf/pxf-service/src/main/java/org/apache/hawq/pxf/service/rest/VersionResource.java
 ---
@@ -33,7 +34,7 @@
  * version e.g. {@code ...pxf/v14/Bridge}
  */
 class Version {
-final static String PXF_PROTOCOL_VERSION = "v14";
+final static String PXF_PROTOCOL_VERSION = "@pxfProtocolVersion@";
--- End diff --

Will add explanation.




[GitHub] incubator-hawq pull request: HAWQ-340. Make getVersion API return ...

2016-01-25 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/290#discussion_r50788596
  
--- Diff: 
pxf/pxf-service/src/test/java/org/apache/hawq/pxf/service/rest/VersionResourceTest.java
 ---
@@ -22,21 +22,26 @@
 
 import static org.junit.Assert.assertEquals;
 
+import javax.ws.rs.core.HttpHeaders;
+import javax.ws.rs.core.MediaType;
 import javax.ws.rs.core.Response;
 
 import org.junit.Test;
 
 public class VersionResourceTest {
 
-@Test
-public void getProtocolVersion() throws Exception {
+   @Test
+   public void getProtocolVersion() throws Exception {
 
-VersionResource resource = new VersionResource();
-Response result = resource.getProtocolVersion();
+   VersionResource resource = new VersionResource();
--- End diff --

@shivzone thanks, fixed.




[GitHub] incubator-hawq pull request: HAWQ-340. Make getVersion API return ...

2016-01-22 Thread sansanichfb
GitHub user sansanichfb opened a pull request:

https://github.com/apache/incubator-hawq/pull/290

HAWQ-340. Make getVersion API return JSON format.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/incubator-hawq HAWQ-340

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/290.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #290


commit cf03aa9309b93a05fe36949e10cdd6d695440e61
Author: Oleksandr Diachenko <odiache...@pivotal.io>
Date:   2016-01-22T23:58:06Z

HAWQ-340. Make getVersion API return JSON format.



