[GitHub] incubator-hawq-docs pull request #106: add ranger section to logfiles page

2017-03-29 Thread lisakowen
GitHub user lisakowen opened a pull request:

https://github.com/apache/incubator-hawq-docs/pull/106

add ranger section to logfiles page

add a section to ranger log files page with ranger and RPS log directory 
info.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lisakowen/incubator-hawq-docs 
feature/ranger-integration

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq-docs/pull/106.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #106


commit 18205276a354915d45f62af95d9aa99178987e5a
Author: Lisa Owen 
Date:   2017-03-30T00:21:28Z

add ranger section to logfiles page




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq-docs pull request #105: Reconcile Feature/ranger integration ...

2017-03-29 Thread lisakowen
Github user lisakowen commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/105#discussion_r108806756
  
--- Diff: markdown/ranger/ranger-integration-config.html.md.erb ---
@@ -30,9 +30,14 @@ The Ranger Administrative UI is installed when you 
install HDP. You configure th
 
 Installing or upgrading to HAWQ 2.2.0 installs the HAWQ Ranger Plug-in 
Service, but neither configures nor registers the plug-in.  
 
-In order to use Ranger for managing HAWQ authentication events, you must 
first install and register several HAWQ JAR files on the Ranger Administration 
host. This is a one-time configuration that establishes connectivity to your 
HAWQ cluster from the Ranger Administration host. After you have registered the 
JAR files, you enable or disable Ranger integration in HAWQ by setting the 
`hawq_acl_type` configuration parameter. After Ranger integration is enabled, 
you must use the Ranger interface to create all security policies to manage 
access to HAWQ resources. Ranger is pre-populated only with several policies to 
allow `gpadmin` superuser access to default resources. See [Creating HAWQ 
Authorization Policies in Ranger](ranger-policy-creation.html) for information 
about creating policies in Ranger.
+To use Ranger for managing HAWQ authentication events, you must first 
install and register several HAWQ JAR files on the Ranger Administration host. 
This one-time configuration establishes connectivity to your HAWQ cluster from 
the Ranger Administration host. 
+
+The `hawq_acl_type` configuration parameter allows you to shift between 
managing access policies through the HAWQ native interface or the Ranger policy 
manager. Ranger is initially started started with the `hawq_acl_type` parameter 
set to `standalone.` After configuring Ranger access policies, you set the 
`hawq_acl_type` configuration parameter to `ranger` to enable Ranger policy 
management. 
--- End diff --

as this is an intro, something like "the hawq_acl_type server configuration 
parameter controls the mode of authorization in place for hawq.  hawq uses 
native authorization by default. you can enable ranger authorization with this 
parameter."  i don't think you need to get into the values here.




[GitHub] incubator-hawq-docs pull request #105: Reconcile Feature/ranger integration ...

2017-03-29 Thread lisakowen
Github user lisakowen commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/105#discussion_r108807083
  
--- Diff: markdown/ranger/ranger-integration-config.html.md.erb ---
@@ -30,9 +30,14 @@ The Ranger Administrative UI is installed when you 
install HDP. You configure th
 
 Installing or upgrading to HAWQ 2.2.0 installs the HAWQ Ranger Plug-in 
Service, but neither configures nor registers the plug-in.  
 
-In order to use Ranger for managing HAWQ authentication events, you must 
first install and register several HAWQ JAR files on the Ranger Administration 
host. This is a one-time configuration that establishes connectivity to your 
HAWQ cluster from the Ranger Administration host. After you have registered the 
JAR files, you enable or disable Ranger integration in HAWQ by setting the 
`hawq_acl_type` configuration parameter. After Ranger integration is enabled, 
you must use the Ranger interface to create all security policies to manage 
access to HAWQ resources. Ranger is pre-populated only with several policies to 
allow `gpadmin` superuser access to default resources. See [Creating HAWQ 
Authorization Policies in Ranger](ranger-policy-creation.html) for information 
about creating policies in Ranger.
+To use Ranger for managing HAWQ authentication events, you must first 
install and register several HAWQ JAR files on the Ranger Administration host. 
This one-time configuration establishes connectivity to your HAWQ cluster from 
the Ranger Administration host. 
+
+The `hawq_acl_type` configuration parameter allows you to shift between 
managing access policies through the HAWQ native interface or the Ranger policy 
manager. Ranger is initially started started with the `hawq_acl_type` parameter 
set to `standalone.` After configuring Ranger access policies, you set the 
`hawq_acl_type` configuration parameter to `ranger` to enable Ranger policy 
management. 
+
+Once HAWQ Ranger is enabled, access to HAWQ resources is controlled by 
security policies on Ranger. Access policies must be explicitly set for all 
groups and users, as Ranger has no knowledge of any access policies set up in 
the HAWQ native interface and its default is to disallow access. When first 
integrated, Ranger is only pre-populated with policies that allow `gpadmin` 
superuser access to default resources. When Ranger is enabled, you cannot 
manage HAWQ access  through its native interface. 
+See [Creating HAWQ Authorization Policies in 
Ranger](ranger-policy-creation.html) for information about creating policies in 
Ranger.
 
-The following procedures describe each configuration activity.
+Perform the following procedures to configure your Ranger interface.
--- End diff --

to "register the HAWQ Ranger Plug-in Service and enable Ranger 
authorization for HAWQ."




[GitHub] incubator-hawq-docs pull request #105: Reconcile Feature/ranger integration ...

2017-03-29 Thread lisakowen
Github user lisakowen commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/105#discussion_r108809069
  
--- Diff: markdown/ranger/ranger-integration-config.html.md.erb ---
@@ -84,19 +105,28 @@ The following procedures describe each configuration 
activity.
 gpadmin@master$ hawq stop cluster --reload
 ```
 
-7. To validate connectivity between Ranger and HAWQ, access the Ranger 
Admin UI in Ambari, click the edit icon associated with the `hawq` service 
definition. Ensure that the Active Status is set to Enabled, and click the 
**Test Connection** button. You should receive a message that Ranger connected 
succesfully.  If it fails to connect, edit your HAWQ connectivity properties 
directly in the Ranger Admin UI and re-test the connection.
+7.  When setup is complete, use the fully-qualified domain name to log 
into the Ambari server. Use the Ranger link in the left nav to bring up the 
Ranger Summary pane in the HAWQ Ambari interface. Use the Quick Links to access 
Ranger. This link will take you to the Ranger Login interface. 
+
+8.  Log into the Ranger Access Manager. You will see a list of icons under 
the Service Manager. Click the click the icon marked `hawq` under the HAWQ icon 
to validate connectivity between Ranger and HAWQ. A list of HAWQ policies will 
appear. 
+
+9.  Now return to the Service Manager and click the Edit icon on the 
right, under the HAWQ service icon. Ensure that the Active Status is set to 
Enabled, and click the **Test Connection** button. You should receive a message 
that Ranger connected succesfully.  If it fails to connect, you may need to 
edit your Ranger connection in  `pg_hba.conf,` perform 
+  ``` bash
+   hawq restart cluster
+   ```
+  and re-test the connection.
 
 
 ## Step 2: Configure HAWQ to Use Ranger Policy 
Management
 
-The default Ranger service definition for HAWQ assigns the HAWQ user 
(typically `gpadmin`) all privileges to all objects. 
+The default Ranger service definition for HAWQ assigns the HAWQ 
administrator (typically `gpadmin`) all privileges to all objects. 
 
-**Warning**: If you enable HAWQ-Ranger authorization with only the default 
HAWQ service policies defined, other HAWQ users will have no privileges, even 
for HAWQ objects (databases, tables) that they own.
-
-1. Select the **HAWQ** Service, and then select the **Configs** tab.
+Once the connection between HAWQ and Ranger is configured, you can either 
set up policies for the HAWQ users according to the procedures in [Creating 
HAWQ Authorization Policies in Ranger](ranger-policy-creation.html) or enable 
Ranger with only the default policies. 
--- End diff --

i don't think we want to imply it is ok to enable ranger with just the 
default policies in place.  maybe we want to enhance the warning.
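
to make the concern concrete, this is roughly what a non-`gpadmin` user would hit if Ranger authorization were enabled with only the default policies in place (user, database, and table names here are hypothetical):

```bash
# "alice" owns the table, but with only the default gpadmin policies defined
# in Ranger, her query is denied until an explicit policy grants her access.
psql -U alice -d testdb -c "SELECT count(*) FROM sales;"
```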




[GitHub] incubator-hawq-docs pull request #105: Reconcile Feature/ranger integration ...

2017-03-29 Thread lisakowen
Github user lisakowen commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/105#discussion_r108808162
  
--- Diff: markdown/ranger/ranger-integration-config.html.md.erb ---
@@ -84,19 +105,28 @@ The following procedures describe each configuration 
activity.
 gpadmin@master$ hawq stop cluster --reload
 ```
 
-7. To validate connectivity between Ranger and HAWQ, access the Ranger 
Admin UI in Ambari, click the edit icon associated with the `hawq` service 
definition. Ensure that the Active Status is set to Enabled, and click the 
**Test Connection** button. You should receive a message that Ranger connected 
succesfully.  If it fails to connect, edit your HAWQ connectivity properties 
directly in the Ranger Admin UI and re-test the connection.
+7.  When setup is complete, use the fully-qualified domain name to log 
into the Ambari server. Use the Ranger link in the left nav to bring up the 
Ranger Summary pane in the HAWQ Ambari interface. Use the Quick Links to access 
Ranger. This link will take you to the Ranger Login interface. 
+
+8.  Log into the Ranger Access Manager. You will see a list of icons under 
the Service Manager. Click the click the icon marked `hawq` under the HAWQ icon 
to validate connectivity between Ranger and HAWQ. A list of HAWQ policies will 
appear. 
--- End diff --

not sure why we want to have them look at the policies here?




[GitHub] incubator-hawq-docs pull request #105: Reconcile Feature/ranger integration ...

2017-03-29 Thread lisakowen
Github user lisakowen commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/105#discussion_r108807764
  
--- Diff: markdown/ranger/ranger-integration-config.html.md.erb ---
@@ -70,9 +75,25 @@ The following procedures describe each configuration 
activity.
 gpadmin@master$ ./enable-ranger-plugin.sh -r ranger_host:6080 -u admin 
-p admin -h hawq_master:5432 -w gpadmin -q gpadmin
 ```
 
+***Note*** You can also enter the short form of the command: 
`./enable-ranger-plugin.sh -r` and the script will prompt you for entries. 
+
 When the script completes, the default HAWQ service definition is 
registered in the Ranger Admin UI. This service definition is named `hawq`.
 
-6. Edit the `pg_hba.conf` file on the HAWQ master node to configure HAWQ 
access for \ on the \. For example, you would 
add an entry similar to the following for the example `enable-ranger-plugin.sh` 
call above:
+6. Locate the `pg_hba.conf` file on the HAWQ master node:
+ 
+``` bash
+$ hawq config --show hawq_master_directory
+ GUC   : hawq_master_directory
+ Value : /data/hawq/master
+ $ ls /data/hawq/master
--- End diff --

 will listing the directory contents help the user?  i find it kind of 
distracting.




[GitHub] incubator-hawq-docs pull request #105: Reconcile Feature/ranger integration ...

2017-03-29 Thread lisakowen
Github user lisakowen commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/105#discussion_r108807397
  
--- Diff: markdown/ranger/ranger-integration-config.html.md.erb ---
@@ -30,9 +30,14 @@ The Ranger Administrative UI is installed when you 
install HDP. You configure th
 
 Installing or upgrading to HAWQ 2.2.0 installs the HAWQ Ranger Plug-in 
Service, but neither configures nor registers the plug-in.  
 
-In order to use Ranger for managing HAWQ authentication events, you must 
first install and register several HAWQ JAR files on the Ranger Administration 
host. This is a one-time configuration that establishes connectivity to your 
HAWQ cluster from the Ranger Administration host. After you have registered the 
JAR files, you enable or disable Ranger integration in HAWQ by setting the 
`hawq_acl_type` configuration parameter. After Ranger integration is enabled, 
you must use the Ranger interface to create all security policies to manage 
access to HAWQ resources. Ranger is pre-populated only with several policies to 
allow `gpadmin` superuser access to default resources. See [Creating HAWQ 
Authorization Policies in Ranger](ranger-policy-creation.html) for information 
about creating policies in Ranger.
+To use Ranger for managing HAWQ authentication events, you must first 
install and register several HAWQ JAR files on the Ranger Administration host. 
This one-time configuration establishes connectivity to your HAWQ cluster from 
the Ranger Administration host. 
+
+The `hawq_acl_type` configuration parameter allows you to shift between 
managing access policies through the HAWQ native interface or the Ranger policy 
manager. Ranger is initially started started with the `hawq_acl_type` parameter 
set to `standalone.` After configuring Ranger access policies, you set the 
`hawq_acl_type` configuration parameter to `ranger` to enable Ranger policy 
management. 
+
+Once HAWQ Ranger is enabled, access to HAWQ resources is controlled by 
security policies on Ranger. Access policies must be explicitly set for all 
groups and users, as Ranger has no knowledge of any access policies set up in 
the HAWQ native interface and its default is to disallow access. When first 
integrated, Ranger is only pre-populated with policies that allow `gpadmin` 
superuser access to default resources. When Ranger is enabled, you cannot 
manage HAWQ access  through its native interface. 
--- End diff --

"When Ranger authorization for HAWQ is enabled,"  

i think the original text that was in place here looks good.




[GitHub] incubator-hawq-docs pull request #105: Reconcile Feature/ranger integration ...

2017-03-29 Thread lisakowen
Github user lisakowen commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/105#discussion_r108808070
  
--- Diff: markdown/ranger/ranger-integration-config.html.md.erb ---
@@ -84,19 +105,28 @@ The following procedures describe each configuration 
activity.
 gpadmin@master$ hawq stop cluster --reload
 ```
 
-7. To validate connectivity between Ranger and HAWQ, access the Ranger 
Admin UI in Ambari, click the edit icon associated with the `hawq` service 
definition. Ensure that the Active Status is set to Enabled, and click the 
**Test Connection** button. You should receive a message that Ranger connected 
succesfully.  If it fails to connect, edit your HAWQ connectivity properties 
directly in the Ranger Admin UI and re-test the connection.
+7.  When setup is complete, use the fully-qualified domain name to log 
into the Ambari server. Use the Ranger link in the left nav to bring up the 
Ranger Summary pane in the HAWQ Ambari interface. Use the Quick Links to access 
Ranger. This link will take you to the Ranger Login interface. 
--- End diff --

should we just identify the direct ranger URL here?

in any case, could bold the specific ambari items you are talking about.
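
for reference, the direct Ranger URL would presumably be the Ranger Admin address already passed to `enable-ranger-plugin.sh` earlier on the page, something along these lines (host and port are the placeholders from that example):

```bash
# Ranger Admin UI at the host:port given to enable-ranger-plugin.sh -r
curl -s -o /dev/null -w "%{http_code}\n" http://ranger_host:6080/
```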




[GitHub] incubator-hawq-docs pull request #105: Reconcile Feature/ranger integration ...

2017-03-29 Thread lisakowen
Github user lisakowen commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/105#discussion_r108808330
  
--- Diff: markdown/ranger/ranger-integration-config.html.md.erb ---
@@ -84,19 +105,28 @@ The following procedures describe each configuration 
activity.
 gpadmin@master$ hawq stop cluster --reload
 ```
 
-7. To validate connectivity between Ranger and HAWQ, access the Ranger 
Admin UI in Ambari, click the edit icon associated with the `hawq` service 
definition. Ensure that the Active Status is set to Enabled, and click the 
**Test Connection** button. You should receive a message that Ranger connected 
succesfully.  If it fails to connect, edit your HAWQ connectivity properties 
directly in the Ranger Admin UI and re-test the connection.
+7.  When setup is complete, use the fully-qualified domain name to log 
into the Ambari server. Use the Ranger link in the left nav to bring up the 
Ranger Summary pane in the HAWQ Ambari interface. Use the Quick Links to access 
Ranger. This link will take you to the Ranger Login interface. 
+
+8.  Log into the Ranger Access Manager. You will see a list of icons under 
the Service Manager. Click the click the icon marked `hawq` under the HAWQ icon 
to validate connectivity between Ranger and HAWQ. A list of HAWQ policies will 
appear. 
+
+9.  Now return to the Service Manager and click the Edit icon on the 
right, under the HAWQ service icon. Ensure that the Active Status is set to 
Enabled, and click the **Test Connection** button. You should receive a message 
that Ranger connected succesfully.  If it fails to connect, you may need to 
edit your Ranger connection in  `pg_hba.conf,` perform 
+  ``` bash
--- End diff --

formatting issue and spelling error (successfully)

also, when updating pg_hba.conf, should be able to do a reload (don't have 
to restart).
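
a sketch of what that fix-up might look like on the HAWQ master, using the example master data directory shown earlier in the diff (the exact pg_hba.conf entry depends on your environment):

```bash
# Adjust the Ranger Plug-in Service entry in pg_hba.conf, then reload;
# a full cluster restart should not be required for pg_hba.conf changes.
vi /data/hawq/master/pg_hba.conf
hawq stop cluster --reload
```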




[GitHub] incubator-hawq-docs pull request #105: Reconcile Feature/ranger integration ...

2017-03-29 Thread lisakowen
Github user lisakowen commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/105#discussion_r108807630
  
--- Diff: markdown/ranger/ranger-integration-config.html.md.erb ---
@@ -70,9 +75,25 @@ The following procedures describe each configuration 
activity.
 gpadmin@master$ ./enable-ranger-plugin.sh -r ranger_host:6080 -u admin 
-p admin -h hawq_master:5432 -w gpadmin -q gpadmin
 ```
 
+***Note*** You can also enter the short form of the command: 
`./enable-ranger-plugin.sh -r` and the script will prompt you for entries. 
+
 When the script completes, the default HAWQ service definition is 
registered in the Ranger Admin UI. This service definition is named `hawq`.
 
-6. Edit the `pg_hba.conf` file on the HAWQ master node to configure HAWQ 
access for \ on the \. For example, you would 
add an entry similar to the following for the example `enable-ranger-plugin.sh` 
call above:
+6. Locate the `pg_hba.conf` file on the HAWQ master node:
--- End diff --

lets use the shell prompt and formatting like previous commands.
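
formatted like the previous commands, the step might look roughly like this (the directory value is the example from the diff and is illustrative only):

```bash
gpadmin@master$ hawq config --show hawq_master_directory
GUC   : hawq_master_directory
Value : /data/hawq/master
gpadmin@master$ vi /data/hawq/master/pg_hba.conf
```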




[GitHub] incubator-hawq-docs pull request #105: Reconcile Feature/ranger integration ...

2017-03-29 Thread janebeckman
GitHub user janebeckman opened a pull request:

https://github.com/apache/incubator-hawq-docs/pull/105

Reconcile Feature/ranger integration branches



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/janebeckman/incubator-hawq-docs 
feature/ranger-integration

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq-docs/pull/105.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #105


commit d9beb05e9a627891f80a550956a5850709bc43bd
Author: Jane Beckman 
Date:   2017-03-28T22:32:45Z

Expanding info on Ranger config

commit 589e7e5511aecb0c20903ef5cef076d72a4398d3
Author: Jane Beckman 
Date:   2017-03-29T18:57:19Z

Merge branch 'feature/ranger-integration' of 
https://github.com/apache/incubator-hawq-docs into feature/ranger-integration
Update with latest on branch.

commit 863d1030cc9376f11887eca89b37365977bf9548
Author: Jane Beckman 
Date:   2017-03-29T21:11:46Z

Remove link to removed section

commit f02d8abc125a0b24df554a33ea3ffea303575ec0
Author: Jane Beckman 
Date:   2017-03-29T22:38:37Z

Grammar fix






[jira] [Created] (HAWQ-1421) Improve PXF rpm package name format and dependencies

2017-03-29 Thread Radar Lei (JIRA)
Radar Lei created HAWQ-1421:
---

 Summary: Improve PXF rpm package name format and dependencies
 Key: HAWQ-1421
 URL: https://issues.apache.org/jira/browse/HAWQ-1421
 Project: Apache HAWQ
  Issue Type: Improvement
  Components: Build
Reporter: Radar Lei
Assignee: Ed Espino
 Fix For: 2.2.0.0-incubating


If we build the PXF rpm packages with 'make rpm', we get the packages below:
{quote}
  apache-tomcat-7.0.62-el6.noarch.rpm
  pxf-3.2.1.0-root.el6.noarch.rpm
  pxf-hbase_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
  pxf-hdfs_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
  pxf-hive_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
  pxf-jdbc_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
  pxf-json_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
  pxf-service_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
{quote}

These rpm packages declare dependencies on Apache Hadoop components only, which some 
other Hadoop distributions cannot satisfy. For example:
{quote}
rpm -ivh pxf-hdfs_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
error: Failed dependencies:
pxf-service_3_2_1_0 >= 3.2.1.0 is needed by 
pxf-hdfs_3_2_1_0-0:3.2.1.0-root.el6.noarch
hadoop >= 2.7.1 is needed by pxf-hdfs_3_2_1_0-0:3.2.1.0-root.el6.noarch
hadoop-mapreduce >= 2.7.1 is needed by 
pxf-hdfs_3_2_1_0-0:3.2.1.0-root.el6.noarch
{quote}

We should improve the rpm package name format and dependencies (see the sketch after this list):
  1. Remove the version string like '3_2_1_0' from the package names.
  2. Remove the build user name picked up from the build environment.
  3. Consider whether we need to include the apache-tomcat rpm package in the HAWQ rpm 
release tarball.
  4. Improve the hard-coded 'el6' string. (This might be optional.)
  5. Improve the dependencies, including the dependencies between these pxf rpm 
packages.
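
As a sketch of where this could land, the commands below use a hypothetical cleaned-up filename (no '3_2_1_0' infix, no build user name) purely for illustration:

```bash
# Inspect a rebuilt package whose name no longer embeds the version string or build user.
rpm -qpi pxf-hdfs-3.2.1.0-1.el6.noarch.rpm

# Review the declared requirements while reworking the Hadoop dependencies.
rpm -qp --requires pxf-hdfs-3.2.1.0-1.el6.noarch.rpm
```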





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Need help in debugging test_GPHD_HA_load_nodes.

2017-03-29 Thread Namrata Bhave
Hi,

I am working on building and testing HAWQ on the s390x platform.
I am hitting the errors below while executing make unittest-check in 
src/backend/access/external:


[ RUN ] test_GPHD_HA_load_nodes_UnknownNameservice
[ OK ] test_GPHD_HA_load_nodes_UnknownNameservice

[ RUN ] test_GPHD_HA_load_nodes_OneNN
No entries for symbol hdfsFreeNamenodeInformation.
ERROR: ha_config_mock.c:38 - Could not get value to mock function 
hdfsFreeNamenodeInformation
Previously returned mock value was declared at ha_config_test.c:93
[ FAILED ] test_GPHD_HA_load_nodes_OneNN

[ RUN ] test_GPHD_HA_load_nodes_RpcDelimMissing
No entries for symbol hdfsFreeNamenodeInformation.
ERROR: ha_config_mock.c:38 - Could not get value to mock function 
hdfsFreeNamenodeInformation
Previously returned mock value was declared at ha_config_test.c:127
[ FAILED ] test_GPHD_HA_load_nodes_RpcDelimMissing

[ RUN ] test_GPHD_HA_load_nodes_PxfServicePortIsAssigned
port_to_str() has remaining non-returned values.
Remaining item(s) declared at...
ha_config_test.c:163
ha_config_test.c:168
port_to_str() has remaining non-assigned out-values.
Remaining item(s) declared at...
port:0
port_to_str. port parameter still has values that haven't been checked.
Remaining item(s) declared at...
ha_config_test.c:164
ha_config_test.c:169
new_port parameter still has values that haven't been checked.
Remaining item(s) declared at...
ha_config_test.c:165
ha_config_test.c:170
[ FAILED ] test_GPHD_HA_load_nodes_PxfServicePortIsAssigned

[ RUN ] test_GPHD_HA_load_nodes_HostMissing
port_to_str() has remaining non-returned values.
Remaining item(s) declared at...
ha_config_test.c:197
ha_config_test.c:202
port_to_str() has remaining non-assigned out-values.
Remaining item(s) declared at...
port:0
port_to_str. port parameter still has values that haven't been checked.
Remaining item(s) declared at...
ha_config_test.c:198
ha_config_test.c:203
new_port parameter still has values that haven't been checked.
Remaining item(s) declared at...
ha_config_test.c:199
ha_config_test.c:204
[ FAILED ] test_GPHD_HA_load_nodes_HostMissing

[ RUN ] test_GPHD_HA_load_nodes_PortMissing
port_to_str() has remaining non-returned values.
Remaining item(s) declared at...
ha_config_test.c:238
ha_config_test.c:242
port_to_str. port parameter still has values that haven't been checked.
Remaining item(s) declared at...
ha_config_test.c:239
ha_config_test.c:243
new_port parameter still has values that haven't been checked.
Remaining item(s) declared at...
ha_config_test.c:240
ha_config_test.c:244
[ FAILED ] test_GPHD_HA_load_nodes_PortMissing

[ RUN ] test_GPHD_HA_load_nodes_PortIsInvalidNumber
port_to_str() has remaining non-returned values.
Remaining item(s) declared at...
ha_config_test.c:277
ha_config_test.c:282
port_to_str() has remaining non-assigned out-values.
Remaining item(s) declared at...
port:0
port_to_str. port parameter still has values that haven't been checked.
Remaining item(s) declared at...
ha_config_test.c:278
ha_config_test.c:283
new_port parameter still has values that haven't been checked.
Remaining item(s) declared at...
ha_config_test.c:279
ha_config_test.c:284
[ FAILED ] test_GPHD_HA_load_nodes_PortIsInvalidNumber

[ RUN ] test_GPHD_HA_load_nodes_PortIsNotNumber_TakeOne
port_to_str() has remaining non-returned values.
Remaining item(s) declared at...
ha_config_test.c:318
ha_config_test.c:323
port_to_str() has remaining non-assigned out-values.
Remaining item(s) declared at...
port:0
port_to_str. port parameter still has values that haven't been checked.
Remaining item(s) declared at...
ha_config_test.c:319
ha_config_test.c:324
new_port parameter still has values that haven't been checked.
Remaining item(s) declared at...
ha_config_test.c:320
ha_config_test.c:325
[ FAILED ] test_GPHD_HA_load_nodes_PortIsNotNumber_TakeOne

[ RUN ] test_GPHD_HA_load_nodes_PortIsNotNumber_TakeTwo
port_to_str() has remaining non-returned values.
Remaining item(s) declared at...
ha_config_test.c:358
ha_config_test.c:363
port_to_str() has remaining non-assigned out-values.
Remaining item(s) declared at...
port:0
port_to_str. port parameter still has values that haven't been checked.
Remaining item(s) declared at...
ha_config_test.c:359
ha_config_test.c:364
new_port parameter still has values that haven't been checked.
Remaining item(s) declared at...
ha_config_test.c:360
ha_config_test.c:365
[ FAILED ] test_GPHD_HA_load_nodes_PortIsNotNumber_TakeTwo

[=] 9 tests ran
[ PASSED ] 1 tests
[ FAILED ] 8 tests, listed below
[ FAILED ] test_GPHD_HA_load_nodes_OneNN
[ FAILED ] test_GPHD_HA_load_nodes_RpcDelimMissing
[ FAILED ] test_GPHD_HA_load_nodes_PxfServicePortIsAssigned
[ FAILED ] test_GPHD_HA_load_nodes_HostMissing
[ FAILED ] test_GPHD_HA_load_nodes_PortMissing
[ FAILED ] test_GPHD_HA_load_nodes_PortIsInvalidNumber
[ FAILED ] test_GPHD_HA_load_nodes_PortIsNotNumber_TakeOne
[ FAILED ] test_GPHD_HA_load_nodes_PortIsNotNumber_TakeTwo
make[1]: *** [ha_config-check] Error 8
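
In case it helps with reproduction: the final make error names the failing target, so the ha_config group can presumably be rerun on its own (assuming the Makefile exposes that target directly):

```bash
cd src/backend/access/external
make unittest-check     # full run, as reported above
make ha_config-check    # just the ha_config tests named in the make error
```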


While debugging 

RE: Test Case Failures : HAWQ unit test pxffilters

2017-03-29 Thread Ketan Kunde
Hi ,

We have already reported a JIRA issue; please refer to the link below:
https://issues.apache.org/jira/browse/HAWQ-1391


Please comment or provide some input if you have resolved it.

Regards
Ketan Kunde

From: Alex (Oleksandr) Diachenko [mailto:odiache...@pivotal.io]
Sent: Wednesday, March 29, 2017 2:35 PM
To: Ketan Kunde
Cc: dev@hawq.incubator.apache.org
Subject: Re: Test Case Failures : HAWQ unit test pxffilters

Ketan,

Thanks for details, please go ahead and report a JIRA here - 
https://issues.apache.org/jira/browse/HAWQ.

Regards, Alex.

On Wed, Mar 29, 2017 at 1:41 AM, Ketan Kunde wrote:
Hi,

The issue below is reproducible on the s390x platform.
Here are some steps that will help you reproduce it:

1. Use a RHEL 7.1 host with an s390x processor.
2. Stack trace below:


DirectFunctionCall1() has remaining non-returned values.
  Remaining item(s) declared at...
pxffilters_test.c:219
pxffilters_test.c:219
pxffilters_test.c:219
DirectFunctionCall1.func parameter still has values that 
haven't been checked.
  Remaining item(s) declared at...
pxffilters_test.c:217
pxffilters_test.c:217
pxffilters_test.c:217
arg1 parameter still has values that haven't been checked.
  Remaining item(s) declared at...
pxffilters_test.c:218
pxffilters_test.c:218
pxffilters_test.c:218
[  FAILED ] test__list_const_to_str__int
[=] 1 tests ran
[ PASSED  ] 0 tests
[ FAILED  ] 1 tests, listed below
[ FAILED  ] test__list_const_to_str__int



Do let me know if you have any more questions about reproducing it.

Regards
Ketan Kunde



-Original Message-
From: Alex (Oleksandr) Diachenko 
[mailto:odiache...@pivotal.io]
Sent: Wednesday, March 29, 2017 12:53 PM
To: dev@hawq.incubator.apache.org
Subject: Re: Test Case Failures : HAWQ unit test pxffilters

Hi Ketan,

I think I will be able to help you in resolving this issue.
Would you be able to report this issue and attach a stack trace and an environment 
to reproduce it (like Docker, Vagrant, etc.)?

Regards, Alex.

On Tue, Mar 28, 2017 at 10:03 PM, Ketan Kunde wrote:

> Hi ,
>
>
>
> The above setup issue for HAWQ is also resolved.
> I am encountering unit test case failures in module pxffilters
> on the s390x platform.
>
> The tests below fail:
> unit_test(test__list_const_to_str__int),
> unit_test(test__list_const_to_str__boolean),
> unit_test(test__list_const_to_str__text),
>
> On debugging the first of these, it is observed that in file
> 'src/backend/access/external/pxffilters.c',
> in function list_const_to_str(),
> the call at line 1078 to deconstruct_array(arr, INT2OID,
> sizeof (value), true, 's', , NULL, ); sets len = 0. Hence
> the 'for' loop starting at line 1084 does not execute and the test fails.
>
> The same test on x86 (Intel) is seen to set len to a value
> other than 0, and hence the test cases pass.
>
> Any inputs on the above cause would be helpful to resolve the failure
> on s390x.
>
> Thanks
> Ketan Kunde
>
>


RE: Test Case Failures : HAWQ unit test pxffilters

2017-03-29 Thread Ketan Kunde
Hi,

The issue below is reproducible on the s390x platform.
Here are some steps that will help you reproduce it:

1. Use a RHEL 7.1 host with an s390x processor.
2. Stack trace below:


DirectFunctionCall1() has remaining non-returned values.
  Remaining item(s) declared at...
pxffilters_test.c:219
pxffilters_test.c:219
pxffilters_test.c:219
DirectFunctionCall1.func parameter still has values that 
haven't been checked.
  Remaining item(s) declared at...
pxffilters_test.c:217
pxffilters_test.c:217
pxffilters_test.c:217
arg1 parameter still has values that haven't been checked.
  Remaining item(s) declared at...
pxffilters_test.c:218
pxffilters_test.c:218
pxffilters_test.c:218
[  FAILED ] test__list_const_to_str__int
[=] 1 tests ran
[ PASSED  ] 0 tests
[ FAILED  ] 1 tests, listed below
[ FAILED  ] test__list_const_to_str__int



Do let me know if you have any more questions about reproducing it.

Regards
Ketan Kunde



-Original Message-
From: Alex (Oleksandr) Diachenko [mailto:odiache...@pivotal.io] 
Sent: Wednesday, March 29, 2017 12:53 PM
To: dev@hawq.incubator.apache.org
Subject: Re: Test Case Failures : HAWQ unit test pxffilters

Hi Ketan,

I think I will be able to help you in resolving this issue.
Would you be able to report this issue and attach a stack trace and an environment 
to reproduce it (like Docker, Vagrant, etc.)?

Regards, Alex.

On Tue, Mar 28, 2017 at 10:03 PM, Ketan Kunde 
wrote:

> Hi ,
>
>
>
> The above setup issue for HAWQ is also resolved.
> I am encountering unit test case failures in module pxffilters
> on the s390x platform.
>
> The tests below fail:
> unit_test(test__list_const_to_str__int),
> unit_test(test__list_const_to_str__boolean),
> unit_test(test__list_const_to_str__text),
>
> On debugging the first of these, it is observed that in file
> 'src/backend/access/external/pxffilters.c',
> in function list_const_to_str(),
> the call at line 1078 to deconstruct_array(arr, INT2OID,
> sizeof (value), true, 's', , NULL, ); sets len = 0. Hence
> the 'for' loop starting at line 1084 does not execute and the test fails.
>
> The same test on x86 (Intel) is seen to set len to a value
> other than 0, and hence the test cases pass.
>
> Any inputs on the above cause would be helpful to resolve the failure 
> on s390x.
>
> Thanks
> Ketan Kunde
>
>