[jira] [Commented] (HAWQ-1504) Namenode hangs during restart of docker environment configured using incubator-hawq/contrib/hawq-docker/

2017-07-17 Thread Shubham Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16091098#comment-16091098
 ] 

Shubham Sharma commented on HAWQ-1504:
--

Submitted [PR 1267|https://github.com/apache/incubator-hawq/pull/1267]

> Namenode hangs during restart of docker environment configured using 
> incubator-hawq/contrib/hawq-docker/
> 
>
> Key: HAWQ-1504
> URL: https://issues.apache.org/jira/browse/HAWQ-1504
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Shubham Sharma
>Assignee: Radar Lei
>Priority: Minor
>
> After setting up an environment using the instructions provided under 
> incubator-hawq/contrib/hawq-docker/, restarting the docker containers causes 
> the namenode to hang because it attempts a namenode -format during every start.
> Steps to reproduce this issue - 
> - Navigate to incubator-hawq/contrib/hawq-docker
> - make stop
> - make start
> - docker exec -it centos7-namenode bash
> - ps -ef | grep java
> You can see namenode -format running.
> {code}
> [gpadmin@centos7-namenode data]$ ps -ef | grep java
> hdfs1110  1 00:56 ?00:00:06 
> /etc/alternatives/java_sdk/bin/java -Dproc_namenode -Xmx1000m 
> -Dhdfs.namenode=centos7-namenode -Dhadoop.log.dir=/var/log/hadoop/hdfs 
> -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.5.0.0-1245/hadoop 
> -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console 
> -Djava.library.path=:/usr/hdp/2.5.0.0-1245/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.5.0.0-1245/hadoop/lib/native
>  -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true 
> -Dhadoop.security.logger=INFO,NullAppender 
> org.apache.hadoop.hdfs.server.namenode.NameNode -format
> {code}
> Since namenode -format runs in interactive mode and at this stage it is 
> waiting for a (Yes/No) response, the namenode will remain stuck forever. This 
> makes hdfs unavailable.
> Root cause of the problem - 
> In the dockerfiles present under 
> incubator-hawq/contrib/hawq-docker/centos6-docker/hawq-test and 
> incubator-hawq/contrib/hawq-docker/centos7-docker/hawq-test, the docker 
> directive ENTRYPOINT executes entrypoint.sh during startup.
> The entrypoint.sh in turn executes start-hdfs.sh, which checks for the 
> following - 
> {code}
> if [ ! -d /tmp/hdfs/name/current ]; then
>   su -l hdfs -c "hdfs namenode -format"
> fi
> {code}
> My assumption is that it looks for the fsimage and edit logs. If they are not 
> present, the script assumes this is a first-time initialization and that a 
> namenode format should be done. However, the path /tmp/hdfs/name/current does 
> not exist on the namenode.
> From namenode logs it is clear that fsimage and edit logs are written under 
> /tmp/hadoop-hdfs/dfs/name/current.
> {code}
> 2017-07-18 00:55:20,892 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
> No edit log streams selected.
> 2017-07-18 00:55:20,893 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
> Planning to load image: 
> FSImageFile(file=/tmp/hadoop-hdfs/dfs/name/current/fsimage_000,
>  cpktTxId=000)
> 2017-07-18 00:55:20,995 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 1 INodes.
> 2017-07-18 00:55:21,064 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf: Loaded FSImage 
> in 0 seconds.
> 2017-07-18 00:55:21,065 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
> Loaded image for txid 0 from 
> /tmp/hadoop-hdfs/dfs/name/current/fsimage_000
> 2017-07-18 00:55:21,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Need to save fs image? 
> false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
> 2017-07-18 00:55:21,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 1
> {code}
> Thus the wrong path in 
> incubator-hawq/contrib/hawq-docker/centos*-docker/hawq-test/start-hdfs.sh 
> causes the namenode to hang during each restart of the containers, making 
> hdfs unavailable.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] incubator-hawq pull request #1263: HAWQ-1495 Corrected answer file to match ...

2017-07-17 Thread outofmem0ry
Github user outofmem0ry commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1263#discussion_r127881478
  
--- Diff: src/test/feature/README.md ---
@@ -16,7 +16,10 @@ Before building the code of feature tests part, just 
make sure your compiler sup
 2. Load environment configuration by running `source 
$INSTALL_PREFIX/greenplum_path.sh`.
 3. Load hdfs configuration. For example, `export 
HADOOP_HOME=/Users/wuhong/hadoop-2.7.2 && export 
PATH=${PATH}:${HADOOP_HOME}/bin`. Since some test cases need `hdfs` and 
`hadoop` command, just ensure these commands work before running. Otherwise you 
will get failure.
 4. Run the cases with`./parallel-run-feature-test.sh 8 ./feature-test`(in 
this case 8 threads in parallel), you could use `--gtest_filter` option to 
filter test cases(both positive and negative patterns are supported). Please 
see more options by running `./feature-test --help`. 
-5.You can also run cases with `./parallel-run-feature-test.sh 8 
./feature-test --gtest_schedule` (eg. --gtest_schedule=./full_tests.txt) if you 
want to run cases in both parallel way and serial way.The schedule file sample 
is full_tests.txt which stays in the same directory.
+5. You can also run cases with `./parallel-run-feature-test.sh 8 
./feature-test --gtest_schedule` (eg. --gtest_schedule=./full_tests.txt) if you 
want to run cases in both parallel way and serial way.The schedule file sample 
is full_tests.txt which stays in the same 
+directory.
--- End diff --

@paul-guo- Done.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq pull request #1263: HAWQ-1495 Corrected answer file to match ...

2017-07-17 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1263#discussion_r127877655
  
--- Diff: src/test/feature/README.md ---
@@ -16,7 +16,10 @@ Before building the code of feature tests part, just 
make sure your compiler sup
 2. Load environment configuration by running `source 
$INSTALL_PREFIX/greenplum_path.sh`.
 3. Load hdfs configuration. For example, `export 
HADOOP_HOME=/Users/wuhong/hadoop-2.7.2 && export 
PATH=${PATH}:${HADOOP_HOME}/bin`. Since some test cases need `hdfs` and 
`hadoop` command, just ensure these commands work before running. Otherwise you 
will get failure.
 4. Run the cases with`./parallel-run-feature-test.sh 8 ./feature-test`(in 
this case 8 threads in parallel), you could use `--gtest_filter` option to 
filter test cases(both positive and negative patterns are supported). Please 
see more options by running `./feature-test --help`. 
-5.You can also run cases with `./parallel-run-feature-test.sh 8 
./feature-test --gtest_schedule` (eg. --gtest_schedule=./full_tests.txt) if you 
want to run cases in both parallel way and serial way.The schedule file sample 
is full_tests.txt which stays in the same directory.
+5. You can also run cases with `./parallel-run-feature-test.sh 8 
./feature-test --gtest_schedule` (eg. --gtest_schedule=./full_tests.txt) if you 
want to run cases in both parallel way and serial way.The schedule file sample 
is full_tests.txt which stays in the same 
+directory.
--- End diff --

It looks like 4 and 5 could be combined.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq issue #1268: HAWQ-1273 - Removed incorrect references from gp...

2017-07-17 Thread radarwave
Github user radarwave commented on the issue:

https://github.com/apache/incubator-hawq/pull/1268
  
Thanks @outofmem0ry for fixing this.

LGTM +1


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq pull request #1268: HAWQ-1273 - Removed incorrect references ...

2017-07-17 Thread outofmem0ry
GitHub user outofmem0ry opened a pull request:

https://github.com/apache/incubator-hawq/pull/1268

HAWQ-1273 - Removed incorrect references from gplogfilter



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/outofmem0ry/incubator-hawq 
feature/HAWQ-1273-gplogfilter

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/1268.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1268


commit 993a918fbd24487cde88831b194977f4c8d43fbf
Author: Shubham Sharma 
Date:   2017-07-18T03:10:14Z

HAWQ-1273 - Removed incorrect references from gplogfilter




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (HAWQ-1504) Namenode hangs during restart of docker environment configured using incubator-hawq/contrib/hawq-docker/

2017-07-17 Thread Shubham Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shubham Sharma updated HAWQ-1504:
-
Priority: Minor  (was: Major)

> Namenode hangs during restart of docker environment configured using 
> incubator-hawq/contrib/hawq-docker/
> 
>
> Key: HAWQ-1504
> URL: https://issues.apache.org/jira/browse/HAWQ-1504
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Shubham Sharma
>Assignee: Radar Lei
>Priority: Minor
>
> After setting up an environment using the instructions provided under 
> incubator-hawq/contrib/hawq-docker/, restarting the docker containers causes 
> the namenode to hang because it attempts a namenode -format during every start.
> Steps to reproduce this issue - 
> - Navigate to incubator-hawq/contrib/hawq-docker
> - make stop
> - make start
> - docker exec -it centos7-namenode bash
> - ps -ef | grep java
> You can see namenode -format running.
> {code}
> [gpadmin@centos7-namenode data]$ ps -ef | grep java
> hdfs1110  1 00:56 ?00:00:06 
> /etc/alternatives/java_sdk/bin/java -Dproc_namenode -Xmx1000m 
> -Dhdfs.namenode=centos7-namenode -Dhadoop.log.dir=/var/log/hadoop/hdfs 
> -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.5.0.0-1245/hadoop 
> -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console 
> -Djava.library.path=:/usr/hdp/2.5.0.0-1245/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.5.0.0-1245/hadoop/lib/native
>  -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true 
> -Dhadoop.security.logger=INFO,NullAppender 
> org.apache.hadoop.hdfs.server.namenode.NameNode -format
> {code}
> Since namenode -format runs in interactive mode and at this stage it is 
> waiting for a (Yes/No) response, the namenode will remain stuck forever. This 
> makes hdfs unavailable.
> Root cause of the problem - 
> In the dockerfiles present under 
> incubator-hawq/contrib/hawq-docker/centos6-docker/hawq-test and 
> incubator-hawq/contrib/hawq-docker/centos7-docker/hawq-test, the docker 
> directive ENTRYPOINT executes entrypoint.sh during startup.
> The entrypoint.sh in turn executes start-hdfs.sh, which checks for the 
> following - 
> {code}
> if [ ! -d /tmp/hdfs/name/current ]; then
>   su -l hdfs -c "hdfs namenode -format"
> fi
> {code}
> My assumption is that it looks for the fsimage and edit logs. If they are not 
> present, the script assumes this is a first-time initialization and that a 
> namenode format should be done. However, the path /tmp/hdfs/name/current does 
> not exist on the namenode.
> From namenode logs it is clear that fsimage and edit logs are written under 
> /tmp/hadoop-hdfs/dfs/name/current.
> {code}
> 2017-07-18 00:55:20,892 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
> No edit log streams selected.
> 2017-07-18 00:55:20,893 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
> Planning to load image: 
> FSImageFile(file=/tmp/hadoop-hdfs/dfs/name/current/fsimage_000,
>  cpktTxId=000)
> 2017-07-18 00:55:20,995 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 1 INodes.
> 2017-07-18 00:55:21,064 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf: Loaded FSImage 
> in 0 seconds.
> 2017-07-18 00:55:21,065 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
> Loaded image for txid 0 from 
> /tmp/hadoop-hdfs/dfs/name/current/fsimage_000
> 2017-07-18 00:55:21,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Need to save fs image? 
> false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
> 2017-07-18 00:55:21,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 1
> {code}
> Thus the wrong path in 
> incubator-hawq/contrib/hawq-docker/centos*-docker/hawq-test/start-hdfs.sh 
> causes the namenode to hang during each restart of the containers, making 
> hdfs unavailable.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HAWQ-1504) Namenode hangs during restart of docker environment configured using incubator-hawq/contrib/hawq-docker/

2017-07-17 Thread Shubham Sharma (JIRA)
Shubham Sharma created HAWQ-1504:


 Summary: Namenode hangs during restart of docker environment 
configured using incubator-hawq/contrib/hawq-docker/
 Key: HAWQ-1504
 URL: https://issues.apache.org/jira/browse/HAWQ-1504
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Command Line Tools
Reporter: Shubham Sharma
Assignee: Radar Lei


After setting up an environment using the instructions provided under 
incubator-hawq/contrib/hawq-docker/, restarting the docker containers causes the 
namenode to hang because it attempts a namenode -format during every start.

Steps to reproduce this issue - 

- Navigate to incubator-hawq/contrib/hawq-docker
- make stop
- make start
- docker exec -it centos7-namenode bash
- ps -ef | grep java

You can see namenode -format running.
{code}
[gpadmin@centos7-namenode data]$ ps -ef | grep java
hdfs1110  1 00:56 ?00:00:06 
/etc/alternatives/java_sdk/bin/java -Dproc_namenode -Xmx1000m 
-Dhdfs.namenode=centos7-namenode -Dhadoop.log.dir=/var/log/hadoop/hdfs 
-Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.5.0.0-1245/hadoop 
-Dhadoop.id.str= -Dhadoop.root.logger=INFO,console 
-Djava.library.path=:/usr/hdp/2.5.0.0-1245/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.5.0.0-1245/hadoop/lib/native
 -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true 
-Dhadoop.security.logger=INFO,NullAppender 
org.apache.hadoop.hdfs.server.namenode.NameNode -format
{code}

Since namenode -format runs in interactive mode and at this stage it is waiting 
for a (Yes/No) response, the namenode will remain stuck forever. This makes 
hdfs unavailable.

Root cause of the problem - 

In the dockerfiles present under 
incubator-hawq/contrib/hawq-docker/centos6-docker/hawq-test and 
incubator-hawq/contrib/hawq-docker/centos7-docker/hawq-test, the docker 
directive ENTRYPOINT executes entrypoint.sh during startup.

The entrypoint.sh in turn executes start-hdfs.sh, which checks for the 
following - 

{code}
if [ ! -d /tmp/hdfs/name/current ]; then
  su -l hdfs -c "hdfs namenode -format"
fi
{code}

My assumption is that it looks for the fsimage and edit logs. If they are not 
present, the script assumes this is a first-time initialization and that a 
namenode format should be done. However, the path /tmp/hdfs/name/current does 
not exist on the namenode.

From namenode logs it is clear that fsimage and edit logs are written under 
/tmp/hadoop-hdfs/dfs/name/current.

{code}
2017-07-18 00:55:20,892 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: No 
edit log streams selected.
2017-07-18 00:55:20,893 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Planning to load image: 
FSImageFile(file=/tmp/hadoop-hdfs/dfs/name/current/fsimage_000, 
cpktTxId=000)
2017-07-18 00:55:20,995 INFO 
org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 1 INodes.
2017-07-18 00:55:21,064 INFO 
org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf: Loaded FSImage in 
0 seconds.
2017-07-18 00:55:21,065 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Loaded image for txid 0 from 
/tmp/hadoop-hdfs/dfs/name/current/fsimage_000
2017-07-18 00:55:21,084 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Need to save fs image? 
false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
2017-07-18 00:55:21,084 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Starting log segment at 1
{code}

Thus the wrong path in 
incubator-hawq/contrib/hawq-docker/centos*-docker/hawq-test/start-hdfs.sh 
causes the namenode to hang during each restart of the containers, making hdfs 
unavailable.
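
A minimal sketch of one possible fix is below. It assumes the check should 
target the metadata directory the namenode actually writes to, 
/tmp/hadoop-hdfs/dfs/name, as shown in the logs above; this is a sketch under 
those assumptions, not necessarily the exact change submitted in the PR.

{code}
# Directory the namenode actually uses for fsimage/edits (per the logs above).
NAME_DIR=/tmp/hadoop-hdfs/dfs/name

# Format only on a true first-time initialization, i.e. when the directory
# the namenode writes to has no 'current' subdirectory yet.
if [ ! -d "${NAME_DIR}/current" ]; then
  # -nonInteractive makes the format fail fast instead of blocking on a
  # (Y/N) prompt if metadata unexpectedly exists, so the container cannot
  # hang waiting for input.
  su -l hdfs -c "hdfs namenode -format -nonInteractive"
fi
{code}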



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1504) Namenode hangs during restart of docker environment configured using incubator-hawq/contrib/hawq-docker/

2017-07-17 Thread Shubham Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16090935#comment-16090935
 ] 

Shubham Sharma commented on HAWQ-1504:
--

Submitting a PR shortly

> Namenode hangs during restart of docker environment configured using 
> incubator-hawq/contrib/hawq-docker/
> 
>
> Key: HAWQ-1504
> URL: https://issues.apache.org/jira/browse/HAWQ-1504
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Shubham Sharma
>Assignee: Radar Lei
>
> After setting up an environment using the instructions provided under 
> incubator-hawq/contrib/hawq-docker/, restarting the docker containers causes 
> the namenode to hang because it attempts a namenode -format during every start.
> Steps to reproduce this issue - 
> - Navigate to incubator-hawq/contrib/hawq-docker
> - make stop
> - make start
> - docker exec -it centos7-namenode bash
> - ps -ef | grep java
> You can see namenode -format running.
> {code}
> [gpadmin@centos7-namenode data]$ ps -ef | grep java
> hdfs1110  1 00:56 ?00:00:06 
> /etc/alternatives/java_sdk/bin/java -Dproc_namenode -Xmx1000m 
> -Dhdfs.namenode=centos7-namenode -Dhadoop.log.dir=/var/log/hadoop/hdfs 
> -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.5.0.0-1245/hadoop 
> -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console 
> -Djava.library.path=:/usr/hdp/2.5.0.0-1245/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.5.0.0-1245/hadoop/lib/native
>  -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true 
> -Dhadoop.security.logger=INFO,NullAppender 
> org.apache.hadoop.hdfs.server.namenode.NameNode -format
> {code}
> Since namenode -format runs in interactive mode and at this stage it is 
> waiting for a (Yes/No) response, the namenode will remain stuck forever. This 
> makes hdfs unavailable.
> Root cause of the problem - 
> In the dockerfiles present under 
> incubator-hawq/contrib/hawq-docker/centos6-docker/hawq-test and 
> incubator-hawq/contrib/hawq-docker/centos7-docker/hawq-test, the docker 
> directive ENTRYPOINT executes entrypoint.sh during startup.
> The entrypoint.sh in turn executes start-hdfs.sh, which checks for the 
> following - 
> {code}
> if [ ! -d /tmp/hdfs/name/current ]; then
>   su -l hdfs -c "hdfs namenode -format"
> fi
> {code}
> My assumption is that it looks for the fsimage and edit logs. If they are not 
> present, the script assumes this is a first-time initialization and that a 
> namenode format should be done. However, the path /tmp/hdfs/name/current does 
> not exist on the namenode.
> From namenode logs it is clear that fsimage and edit logs are written under 
> /tmp/hadoop-hdfs/dfs/name/current.
> {code}
> 2017-07-18 00:55:20,892 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
> No edit log streams selected.
> 2017-07-18 00:55:20,893 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
> Planning to load image: 
> FSImageFile(file=/tmp/hadoop-hdfs/dfs/name/current/fsimage_000,
>  cpktTxId=000)
> 2017-07-18 00:55:20,995 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 1 INodes.
> 2017-07-18 00:55:21,064 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf: Loaded FSImage 
> in 0 seconds.
> 2017-07-18 00:55:21,065 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
> Loaded image for txid 0 from 
> /tmp/hadoop-hdfs/dfs/name/current/fsimage_000
> 2017-07-18 00:55:21,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Need to save fs image? 
> false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
> 2017-07-18 00:55:21,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 1
> {code}
> Thus the wrong path in 
> incubator-hawq/contrib/hawq-docker/centos*-docker/hawq-test/start-hdfs.sh 
> causes the namenode to hang during each restart of the containers, making 
> hdfs unavailable.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HAWQ-1497) docs - refactor the kerberos sections

2017-07-17 Thread Lisa Owen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisa Owen resolved HAWQ-1497.
-
   Resolution: Fixed
Fix Version/s: 2.3.0.0-incubating

PR merged; closing and resolving.

> docs - refactor the kerberos sections
> -
>
> Key: HAWQ-1497
> URL: https://issues.apache.org/jira/browse/HAWQ-1497
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
> Fix For: 2.3.0.0-incubating
>
>
> the kerberos docs do not really distinguish between enabling kerberos at the 
> HDFS filesystem level vs. enabling kerberos user authentication for HAWQ. Also 
> missing content for configuring HAWQ/PXF for secure HDFS.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (HAWQ-1479) document hawq/ranger kerberos support

2017-07-17 Thread Lisa Owen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisa Owen closed HAWQ-1479.
---

> document hawq/ranger kerberos support
> -
>
> Key: HAWQ-1479
> URL: https://issues.apache.org/jira/browse/HAWQ-1479
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
> Fix For: 2.3.0.0-incubating
>
>
> add some doc content addressing hawq/ranger/rps kerberos config and any other 
> considerations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (HAWQ-1497) docs - refactor the kerberos sections

2017-07-17 Thread Lisa Owen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisa Owen closed HAWQ-1497.
---

> docs - refactor the kerberos sections
> -
>
> Key: HAWQ-1497
> URL: https://issues.apache.org/jira/browse/HAWQ-1497
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
> Fix For: 2.3.0.0-incubating
>
>
> the kerberos docs do not really distinguish between enabling kerberos at the 
> HDFS filesystem level vs. enabling kerberos user authentication for HAWQ. Also 
> missing content for configuring HAWQ/PXF for secure HDFS.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1497) docs - refactor the kerberos sections

2017-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16090299#comment-16090299
 ] 

ASF GitHub Bot commented on HAWQ-1497:
--

Github user asfgit closed the pull request at:

https://github.com/apache/incubator-hawq-docs/pull/127


> docs - refactor the kerberos sections
> -
>
> Key: HAWQ-1497
> URL: https://issues.apache.org/jira/browse/HAWQ-1497
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> the kerberos docs do not really distinguish between enabling kerberos at the 
> HDFS filesystem level vs. enabling kerberos user authentication for HAWQ. Also 
> missing content for configuring HAWQ/PXF for secure HDFS.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] incubator-hawq issue #1265: HAWQ-1500. HAWQ-1501. HAWQ-1502. Support TDE wri...

2017-07-17 Thread amyrazz44
Github user amyrazz44 commented on the issue:

https://github.com/apache/incubator-hawq/pull/1265
  
This only supports the TDE write function on a single data node. Multiple 
data nodes will be supported in the next pull request.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (HAWQ-1502) Support TDE write function.

2017-07-17 Thread Amy (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16089545#comment-16089545
 ] 

Amy commented on HAWQ-1502:
---

The TDE write function will be supported in two stages. First stage: implement 
the write function on a single data node. Second stage: implement the write 
function on multiple data nodes.
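
For context, a minimal CLI sketch of what a TDE write looks like from the 
client side is below, assuming a configured Hadoop KMS and using hypothetical 
names (key "mykey", zone /enc_zone). The work in this issue targets the 
Libhdfs3 write path rather than the CLI, but the client-visible semantics are 
the same: writes into an encryption zone are transparently encrypted.

{code}
# Create an encryption key in the Hadoop KMS.
hadoop key create mykey

# Make an empty directory an encryption zone bound to that key (HDFS superuser).
hdfs dfs -mkdir /enc_zone
hdfs crypto -createZone -keyName mykey -path /enc_zone

# Writing into the zone needs no client-side changes; data is encrypted
# on write and transparently decrypted on authorized reads.
hdfs dfs -put localfile.txt /enc_zone/
hdfs dfs -cat /enc_zone/localfile.txt
{code}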

> Support TDE write function.
> ---
>
> Key: HAWQ-1502
> URL: https://issues.apache.org/jira/browse/HAWQ-1502
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: libhdfs, Security
>Reporter: Amy
>Assignee: Amy
> Fix For: backlog
>
>
> Users can use the Libhdfs3 API to write files, whether small or massive, to an 
> HDFS directory with TDE enabled.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-1502) Support TDE write function

2017-07-17 Thread Amy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amy updated HAWQ-1502:
--
Summary: Support TDE write function  (was: Support TDE write function. )

> Support TDE write function
> --
>
> Key: HAWQ-1502
> URL: https://issues.apache.org/jira/browse/HAWQ-1502
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: libhdfs, Security
>Reporter: Amy
>Assignee: Amy
> Fix For: backlog
>
>
> Users can use the Libhdfs3 API to write files, whether small or massive, to an 
> HDFS directory with TDE enabled.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-1502) Support TDE write function.

2017-07-17 Thread Amy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amy updated HAWQ-1502:
--
Summary: Support TDE write function.  (was: Support TDE write function)

> Support TDE write function.
> ---
>
> Key: HAWQ-1502
> URL: https://issues.apache.org/jira/browse/HAWQ-1502
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: libhdfs, Security
>Reporter: Amy
>Assignee: Amy
> Fix For: backlog
>
>
> Users can use the Libhdfs3 API to write files, whether small or massive, to an 
> HDFS directory with TDE enabled.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)