[jira] [Updated] (HDFS-13942) [JDK10] Fix javadoc errors in hadoop-hdfs module

2018-10-30 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-13942:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks [~dineshchitlangia]!

> [JDK10] Fix javadoc errors in hadoop-hdfs module
> 
>
> Key: HDFS-13942
> URL: https://issues.apache.org/jira/browse/HDFS-13942
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-13942.001.patch, HDFS-13942.002.patch, 
> HDFS-13942.003.patch, HDFS-13942.004.patch, HDFS-13942.005.patch
>
>
> There are 212 javadoc errors in the hadoop-hdfs module.
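Most of these failures come from the stricter doclint checks in the JDK 9+
javadoc tool. A hypothetical before/after, not taken from the actual patches,
for two of the most common error classes:

{code:java}
// Before: JDK 10 javadoc fails with "error: self-closing element not allowed"
// for <p/> and "error: malformed HTML" for the bare '<'.
/**
 * Returns blocks with replication < minReplication.<p/>
 */

// After: escape the comparison operator and use a valid <p> element.
/**
 * Returns blocks with replication &lt; minReplication.
 * <p>
 */
{code}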



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13942) [JDK10] Fix javadoc errors in hadoop-hdfs module

2018-10-30 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16669638#comment-16669638
 ] 

Akira Ajisaka commented on HDFS-13942:
--

+1, checking this in.

> [JDK10] Fix javadoc errors in hadoop-hdfs module
> 
>
> Key: HDFS-13942
> URL: https://issues.apache.org/jira/browse/HDFS-13942
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDFS-13942.001.patch, HDFS-13942.002.patch, 
> HDFS-13942.003.patch, HDFS-13942.004.patch, HDFS-13942.005.patch
>
>
> There are 212 javadoc errors in the hadoop-hdfs module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-755) ContainerInfo and ContainerReplica protobuf changes

2018-10-30 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-755:
-
   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

> ContainerInfo and ContainerReplica protobuf changes
> ---
>
> Key: HDDS-755
> URL: https://issues.apache.org/jira/browse/HDDS-755
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode, SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-755.000.patch, HDDS-755.001.patch
>
>
> We have different classes that maintain container-related information; we can 
> consolidate them so that the code is easier to read.
> Proposal:
> In SCM: used in communication between SCM and Client, and also for storing in 
> the DB
> * ContainerInfoProto
> * ContainerInfo
>  
> In Datanode: Used in communication between Datanode and SCM
> * ContainerReplicaProto
> * ContainerReplica
>  
> In Datanode: Used in communication between Datanode and Client
> * ContainerDataProto
> * ContainerData
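As a rough illustration of the proposed class/proto pairing (class shape and
field names are assumed here, not taken from the committed patch), each wrapper
class would convert to and from its generated protobuf counterpart:

{code:java}
// Hypothetical sketch: ContainerReplica wraps the generated
// ContainerReplicaProto used between Datanode and SCM.
public final class ContainerReplica {
  private final long containerId;
  private final String datanodeUuid;

  private ContainerReplica(long containerId, String datanodeUuid) {
    this.containerId = containerId;
    this.datanodeUuid = datanodeUuid;
  }

  // Build the wire (protobuf) form for the Datanode -> SCM report.
  public ContainerReplicaProto getProtobuf() {
    return ContainerReplicaProto.newBuilder()
        .setContainerID(containerId)
        .setDatanodeUuid(datanodeUuid)
        .build();
  }

  // Reconstruct the in-memory form from the wire format.
  public static ContainerReplica fromProtobuf(ContainerReplicaProto proto) {
    return new ContainerReplica(proto.getContainerID(),
        proto.getDatanodeUuid());
  }
}
{code}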



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-755) ContainerInfo and ContainerReplica protobuf changes

2018-10-30 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16669611#comment-16669611
 ] 

Nanda kumar commented on HDDS-755:
--

Thanks [~linyiqun] and [~msingh] for the review. I have committed this to trunk.

> ContainerInfo and ContainerReplica protobuf changes
> ---
>
> Key: HDDS-755
> URL: https://issues.apache.org/jira/browse/HDDS-755
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode, SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-755.000.patch, HDDS-755.001.patch
>
>
> We have different classes that maintain container-related information; we can 
> consolidate them so that the code is easier to read.
> Proposal:
> In SCM: used in communication between SCM and Client, and also for storing in 
> the DB
> * ContainerInfoProto
> * ContainerInfo
>  
> In Datanode: Used in communication between Datanode and SCM
> * ContainerReplicaProto
> * ContainerReplica
>  
> In Datanode: Used in communication between Datanode and Client
> * ContainerDataProto
> * ContainerData



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-767) OM should not search for STDOUT root logger for audit logging

2018-10-30 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-767:
---
Component/s: Ozone Manager

> OM should not search for STDOUT root logger for audit logging
> -
>
> Key: HDDS-767
> URL: https://issues.apache.org/jira/browse/HDDS-767
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: logging
> Attachments: HDDS-767.001.patch
>
>
> When we start ozone, the .out file shows the following line:
> {noformat}
> 2018-10-31 00:48:04,141 main ERROR Unable to locate appender "STDOUT" for 
> logger config "root"{noformat}
> This is because the console appender has been disabled by default, but the 
> incorrect log4j2 config still tries to find the console appender.
>  
> This Jira aims to comment out the following config lines to avoid this issue:
> {code:java}
> rootLogger.appenderRefs=stdout
> rootLogger.appenderRef.stdout.ref=STDOUT
> {code}
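For illustration, the affected section of the log4j2 properties file would then
look roughly like this (a sketch; the surrounding file content is assumed):

{code:java}
# The console appender is disabled by default, so the root logger must not
# reference it; the two references are commented out.
#rootLogger.appenderRefs=stdout
#rootLogger.appenderRef.stdout.ref=STDOUT
{code}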



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-767) OM should not search for STDOUT root logger for audit logging

2018-10-30 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-767:
---
Labels: logging  (was: )

> OM should not search for STDOUT root logger for audit logging
> -
>
> Key: HDDS-767
> URL: https://issues.apache.org/jira/browse/HDDS-767
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: logging
> Attachments: HDDS-767.001.patch
>
>
> When we start ozone, the .out file shows the following line:
> {noformat}
> 2018-10-31 00:48:04,141 main ERROR Unable to locate appender "STDOUT" for 
> logger config "root"{noformat}
> This is because the console appender has been disabled by default, but the 
> incorrect log4j2 config still tries to find the console appender.
>  
> This Jira aims to comment out the following config lines to avoid this issue:
> {code:java}
> rootLogger.appenderRefs=stdout
> rootLogger.appenderRef.stdout.ref=STDOUT
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-767) OM should not search for STDOUT root logger for audit logging

2018-10-30 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-767:
---
Attachment: HDDS-767.001.patch
Status: Patch Available  (was: Open)

> OM should not search for STDOUT root logger for audit logging
> -
>
> Key: HDDS-767
> URL: https://issues.apache.org/jira/browse/HDDS-767
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
> Attachments: HDDS-767.001.patch
>
>
> When we start ozone, the .out file shows the following line:
> {noformat}
> 2018-10-31 00:48:04,141 main ERROR Unable to locate appender "STDOUT" for 
> logger config "root"{noformat}
> This is because the console appender has been disabled by default, but the 
> incorrect log4j2 config still tries to find the console appender.
>  
> This Jira aims to comment out the following config lines to avoid this issue:
> {code:java}
> rootLogger.appenderRefs=stdout
> rootLogger.appenderRef.stdout.ref=STDOUT
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-767) OM should not search for STDOUT root logger for audit logging

2018-10-30 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-767:
---
Description: 
When we start ozone, the .out file shows the following line:
{noformat}
2018-10-31 00:48:04,141 main ERROR Unable to locate appender "STDOUT" for 
logger config "root"{noformat}
This is because the console appender has been disabled by default, but the 
incorrect log4j2 config still tries to find the console appender.

 

This Jira aims to comment out the config lines to avoid this issue.

  was:
When we start ozone, the .out file shows the following line:
{noformat}
2018-10-31 00:48:04,141 main ERROR Unable to locate appender "STDOUT" for 
logger config "root"{noformat}
This is because the console appender has been disabled by default however 
incorrect log4j2 config is still trying to find the console appender.

 

This Jira aims to remove the config lines to avoid this issue.


> OM should not search for STDOUT root logger for audit logging
> -
>
> Key: HDDS-767
> URL: https://issues.apache.org/jira/browse/HDDS-767
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
>
> When we start ozone, the .out file shows the following line:
> {noformat}
> 2018-10-31 00:48:04,141 main ERROR Unable to locate appender "STDOUT" for 
> logger config "root"{noformat}
> This is because the console appender has been disabled by default, but the 
> incorrect log4j2 config still tries to find the console appender.
>  
> This Jira aims to comment out the config lines to avoid this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-767) OM should not search for STDOUT root logger for audit logging

2018-10-30 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-767:
---
Description: 
When we start ozone, the .out file shows the following line:
{noformat}
2018-10-31 00:48:04,141 main ERROR Unable to locate appender "STDOUT" for 
logger config "root"{noformat}
This is because the console appender has been disabled by default, but the 
incorrect log4j2 config still tries to find the console appender.

 

This Jira aims to comment out the following config lines to avoid this issue:
{code:java}
rootLogger.appenderRefs=stdout
rootLogger.appenderRef.stdout.ref=STDOUT
{code}

  was:
When we start ozone, the .out file shows the following line:
{noformat}
2018-10-31 00:48:04,141 main ERROR Unable to locate appender "STDOUT" for 
logger config "root"{noformat}
This is because the console appender has been disabled by default, but the 
incorrect log4j2 config still tries to find the console appender.

 

This Jira aims to comment out the config lines to avoid this issue.


> OM should not search for STDOUT root logger for audit logging
> -
>
> Key: HDDS-767
> URL: https://issues.apache.org/jira/browse/HDDS-767
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
>
> When we start ozone, the .out file shows the following line:
> {noformat}
> 2018-10-31 00:48:04,141 main ERROR Unable to locate appender "STDOUT" for 
> logger config "root"{noformat}
> This is because the console appender has been disabled by default, but the 
> incorrect log4j2 config still tries to find the console appender.
>  
> This Jira aims to comment out the following config lines to avoid this issue:
> {code:java}
> rootLogger.appenderRefs=stdout
> rootLogger.appenderRef.stdout.ref=STDOUT
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-755) ContainerInfo and ContainerReplica protobuf changes

2018-10-30 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16669606#comment-16669606
 ] 

Nanda kumar commented on HDDS-755:
--

[~linyiqun], I will take care of the checkstyle issues while committing.

> ContainerInfo and ContainerReplica protobuf changes
> ---
>
> Key: HDDS-755
> URL: https://issues.apache.org/jira/browse/HDDS-755
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode, SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-755.000.patch, HDDS-755.001.patch
>
>
> We have different classes that maintain container-related information; we can 
> consolidate them so that the code is easier to read.
> Proposal:
> In SCM: used in communication between SCM and Client, and also for storing in 
> the DB
> * ContainerInfoProto
> * ContainerInfo
>  
> In Datanode: Used in communication between Datanode and SCM
> * ContainerReplicaProto
> * ContainerReplica
>  
> In Datanode: Used in communication between Datanode and Client
> * ContainerDataProto
> * ContainerData



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-767) OM should not search for STDOUT root logger for audit logging

2018-10-30 Thread Dinesh Chitlangia (JIRA)
Dinesh Chitlangia created HDDS-767:
--

 Summary: OM should not search for STDOUT root logger for audit 
logging
 Key: HDDS-767
 URL: https://issues.apache.org/jira/browse/HDDS-767
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Dinesh Chitlangia
Assignee: Dinesh Chitlangia


When we start ozone, the .out file shows the following line:
{noformat}
2018-10-31 00:48:04,141 main ERROR Unable to locate appender "STDOUT" for 
logger config "root"{noformat}
This is because the console appender has been disabled by default, but the 
incorrect log4j2 config still tries to find the console appender.

 

This Jira aims to remove the config lines to avoid this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13942) [JDK10] Fix javadoc errors in hadoop-hdfs module

2018-10-30 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16669599#comment-16669599
 ] 

Dinesh Chitlangia commented on HDFS-13942:
--

[~ajisakaa] san - Did you get a chance to review the patch? The test failures 
are unrelated to the patch.

> [JDK10] Fix javadoc errors in hadoop-hdfs module
> 
>
> Key: HDFS-13942
> URL: https://issues.apache.org/jira/browse/HDFS-13942
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDFS-13942.001.patch, HDFS-13942.002.patch, 
> HDFS-13942.003.patch, HDFS-13942.004.patch, HDFS-13942.005.patch
>
>
> There are 212 javadoc errors in the hadoop-hdfs module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-524) log4j is added with root to apache/hadoop:2 and apache/hadoop:3 images

2018-10-30 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-524:
---
Attachment: HDDS-524-docker-hadoop-2.001.patch
HDDS-524-docker-hadoop-3.001.patch

> log4j is added with root to apache/hadoop:2 and apache/hadoop:3 images
> --
>
> Key: HDDS-524
> URL: https://issues.apache.org/jira/browse/HDDS-524
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-524-docker-hadoop-2.001.patch, 
> HDDS-524-docker-hadoop-3.001.patch, HDDS-524-docker-hadoop-runner.001.patch, 
> HDDS-524-docker-hadoop-runner.002.patch
>
>
> {code}
> docker run -it apache/hadoop:2 ls -lah  /opt/hadoop/etc/hadoop
> total 152K
> drwxr-xr-x 1 hadoop users 4.0K Aug 13 17:08 .
> drwxr-xr-x 1 hadoop users 4.0K Nov 13  2017 ..
> -rw-r--r-- 1 hadoop users 7.7K Nov 13  2017 capacity-scheduler.xml
> ...
> -rw-r--r-- 1 hadoop users 5.8K Nov 13  2017 kms-site.xml
> -rw-r--r-- 1 root   root  1023 Aug 13 17:04 log4j.properties
> -rw-r--r-- 1 hadoop users 1.1K Nov 13  2017 mapred-env.cmd
> ...
> {code}
> The owner of the log4j.properties file is root instead of hadoop. For this 
> reason we can't use the images for acceptance tests, as the launcher script 
> can't overwrite the log4j properties based on the environment variables.
> Same is true with 
> {code}
> docker run -it apache/hadoop:3 ls -lah  /opt/hadoop/etc/hadoop
> {code}
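A plausible shape of the fix, sketched here as an assumption rather than the
actual patch, is to preserve the hadoop owner when the file is added to the
image:

{code}
# Hypothetical Dockerfile fragment: add log4j.properties owned by hadoop so
# the launcher script can rewrite it from environment variables at startup.
ADD --chown=hadoop:users log4j.properties /opt/hadoop/etc/hadoop/log4j.properties
{code}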



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-524) log4j is added with root to apache/hadoop:2 and apache/hadoop:3 images

2018-10-30 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16669598#comment-16669598
 ] 

Dinesh Chitlangia commented on HDDS-524:


[~elek] Thanks for the review. I apologize for the delay in response. Attached 
patches for both docker-hadoop-2 and docker-hadoop-3.

> log4j is added with root to apache/hadoop:2 and apache/hadoop:3 images
> --
>
> Key: HDDS-524
> URL: https://issues.apache.org/jira/browse/HDDS-524
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-524-docker-hadoop-2.001.patch, 
> HDDS-524-docker-hadoop-3.001.patch, HDDS-524-docker-hadoop-runner.001.patch, 
> HDDS-524-docker-hadoop-runner.002.patch
>
>
> {code}
> docker run -it apache/hadoop:2 ls -lah  /opt/hadoop/etc/hadoop
> total 152K
> drwxr-xr-x 1 hadoop users 4.0K Aug 13 17:08 .
> drwxr-xr-x 1 hadoop users 4.0K Nov 13  2017 ..
> -rw-r--r-- 1 hadoop users 7.7K Nov 13  2017 capacity-scheduler.xml
> ...
> -rw-r--r-- 1 hadoop users 5.8K Nov 13  2017 kms-site.xml
> -rw-r--r-- 1 root   root  1023 Aug 13 17:04 log4j.properties
> -rw-r--r-- 1 hadoop users 1.1K Nov 13  2017 mapred-env.cmd
> ...
> {code}
> The owner of the log4j.properties file is root instead of hadoop. For this 
> reason we can't use the images for acceptance tests, as the launcher script 
> can't overwrite the log4j properties based on the environment variables.
> Same is true with 
> {code}
> docker run -it apache/hadoop:3 ls -lah  /opt/hadoop/etc/hadoop
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-14012) Add diag info in RetryInvocationHandler

2018-10-30 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia resolved HDFS-14012.
--
Resolution: Not A Bug

> Add diag info in RetryInvocationHandler
> ---
>
> Key: HDFS-14012
> URL: https://issues.apache.org/jira/browse/HDFS-14012
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Yongjun Zhang
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> RetryInvocationHandler does the following logging:
> {code:java}
> } else {
>   LOG.warn("A failover has occurred since the start of this method"
>       + " invocation attempt.");
> }{code}
> It would be helpful to report the method name and the call stack in this message.
> Thanks.
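A minimal sketch of the kind of enriched message being asked for (illustrative
only; it assumes the reflective Method variable "method" is in scope, as it is
in an InvocationHandler):

{code:java}
} else {
  // Name the invoked method so the log is actionable, and pass a Throwable
  // as the last argument so the logger records the call stack.
  LOG.warn("A failover has occurred since the start of this method"
      + " invocation attempt: " + method.getName(),
      new Exception("stack trace for diagnostics"));
}
{code}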



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14012) Add diag info in RetryInvocationHandler

2018-10-30 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16669592#comment-16669592
 ] 

Dinesh Chitlangia commented on HDFS-14012:
--

[~yzhangal] - No problem. I am closing this issue.

> Add diag info in RetryInvocationHandler
> ---
>
> Key: HDFS-14012
> URL: https://issues.apache.org/jira/browse/HDFS-14012
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Yongjun Zhang
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> RetryInvocationHandler does the following logging:
> {code:java}
> } else {
>   LOG.warn("A failover has occurred since the start of this method"
>       + " invocation attempt.");
> }{code}
> It would be helpful to report the method name and the call stack in this message.
> Thanks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-120) Adding HDDS datanode Audit Log

2018-10-30 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-120:
---
Attachment: HDDS-120.001.patch
Status: Patch Available  (was: In Progress)

[~xyao], [~anu], [~jnp] - attached patch 001 for review

> Adding HDDS datanode Audit Log
> --
>
> Key: HDDS-120
> URL: https://issues.apache.org/jira/browse/HDDS-120
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
> Attachments: HDDS-120.001.patch
>
>
> This can be useful to find users who overload the DNs. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-755) ContainerInfo and ContainerReplica protobuf changes

2018-10-30 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16669575#comment-16669575
 ] 

Mukul Kumar Singh commented on HDDS-755:


Thanks for working on this [~nandakumar131].
+1, the v1 patch looks good to me.

> ContainerInfo and ContainerReplica protobuf changes
> ---
>
> Key: HDDS-755
> URL: https://issues.apache.org/jira/browse/HDDS-755
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode, SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-755.000.patch, HDDS-755.001.patch
>
>
> We have different classes that maintain container-related information; we can 
> consolidate them so that the code is easier to read.
> Proposal:
> In SCM: used in communication between SCM and Client, and also for storing in 
> the DB
> * ContainerInfoProto
> * ContainerInfo
>  
> In Datanode: Used in communication between Datanode and SCM
> * ContainerReplicaProto
> * ContainerReplica
>  
> In Datanode: Used in communication between Datanode and Client
> * ContainerDataProto
> * ContainerData



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-759) Create config settings for SCM and OM DB directories

2018-10-30 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16669567#comment-16669567
 ] 

Yiqun Lin commented on HDDS-759:


Thanks [~arpitagarwal] for updating the patch. +1 pending Jenkins.

> Create config settings for SCM and OM DB directories
> 
>
> Key: HDDS-759
> URL: https://issues.apache.org/jira/browse/HDDS-759
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-759.01.patch, HDDS-759.02.patch
>
>
> Currently SCM, OM and DN all use {{ozone.metadata.dirs}} for storing 
> metadata. 
> Looking more closely, it appears that SCM and OM have no option to choose 
> separate locations. We should provide custom config settings. For most 
> production clusters, admins will want to carefully choose where they place OM 
> and SCM metadata, similar to how they choose locations for NN metadata.
> To avoid disruption, we can have them fall back to {{ozone.metadata.dirs}}.
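A sketch of what the resulting ozone-site.xml might look like (property names
assumed; both would fall back to {{ozone.metadata.dirs}} when unset):

{code}
<!-- Hypothetical keys for separate OM and SCM database locations. -->
<property>
  <name>ozone.om.db.dirs</name>
  <value>/data/disk1/om</value>
</property>
<property>
  <name>ozone.scm.db.dirs</name>
  <value>/data/disk2/scm</value>
</property>
{code}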



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-755) ContainerInfo and ContainerReplica protobuf changes

2018-10-30 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16669554#comment-16669554
 ] 

Yiqun Lin commented on HDDS-755:


Can you fix the checkstyle issues reported by Jenkins? +1 once addressed.

> ContainerInfo and ContainerReplica protobuf changes
> ---
>
> Key: HDDS-755
> URL: https://issues.apache.org/jira/browse/HDDS-755
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode, SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-755.000.patch, HDDS-755.001.patch
>
>
> We have different classes that maintain container-related information; we can 
> consolidate them so that the code is easier to read.
> Proposal:
> In SCM: used in communication between SCM and Client, and also for storing in 
> the DB
> * ContainerInfoProto
> * ContainerInfo
>  
> In Datanode: Used in communication between Datanode and SCM
> * ContainerReplicaProto
> * ContainerReplica
>  
> In Datanode: Used in communication between Datanode and Client
> * ContainerDataProto
> * ContainerData



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-754) VolumeInfo#getScmUsed throws NPE

2018-10-30 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-754:
--
   Resolution: Fixed
Fix Version/s: 0.4.0
   0.3.0
   Status: Resolved  (was: Patch Available)

[~msingh] thanks for filing this issue. [~arpitagarwal] Thanks for the reviews. 
[~hanishakoneru] Thanks for fixing this issue. I have committed this change to 
trunk and ozone-0.3.

> VolumeInfo#getScmUsed throws NPE
> 
>
> Key: HDDS-754
> URL: https://issues.apache.org/jira/browse/HDDS-754
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Mukul Kumar Singh
>Assignee: Hanisha Koneru
>Priority: Blocker
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-754.001.patch, HDDS-754.002.patch, 
> HDDS-754.003.patch, HDDS-754.004.patch
>
>
> The failure can be seen at the following Jenkins run:
> https://builds.apache.org/job/PreCommit-HDDS-Build/1540/testReport/org.apache.hadoop.hdds.scm.pipeline/TestNodeFailure/testPipelineFail/
> {code}
> 2018-10-29 13:44:11,984 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(50)) - Execution exception 
> when running task in Datanode ReportManager Thread - 3
> 2018-10-29 13:44:11,984 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(63)) - Caught exception in 
> thread Datanode ReportManager Thread - 3: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:107)
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:379)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:225)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:39)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
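The trace shows VolumeInfo#getScmUsed dereferencing a usage object that can be
null once a volume has failed or been shut down while the report thread is
still running. A defensive sketch (field and method names assumed, not the
committed fix):

{code:java}
// Hypothetical guard in VolumeInfo: the usage calculator may already be
// shut down when the Datanode ReportManager thread asks for a node report.
public long getScmUsed() throws IOException {
  if (usage == null) {
    throw new IOException("Volume usage information is unavailable; "
        + "the volume may have failed or been shut down.");
  }
  return usage.getScmUsed();
}
{code}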



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-762) Fix unit test failure for TestContainerSQLCli & TestSCMMetrics

2018-10-30 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-762:
--
   Resolution: Fixed
Fix Version/s: 0.4.0
   0.3.0
   Status: Resolved  (was: Patch Available)

[~ljain], [~nandakumar131] Thanks for the reviews. [~msingh] Thanks for the 
contribution. I have committed this change to the trunk and ozone-0.3 branches.

> Fix unit test failure for TestContainerSQLCli & TestSCMMetrics
> --
>
> Key: HDDS-762
> URL: https://issues.apache.org/jira/browse/HDDS-762
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Blocker
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-762-ozone-0.3.001.patch, HDDS-762.001.patch
>
>
> TestContainerSQLCli & TestCSMMetrics are currently failing consistently 
> because of a mismatch in the metrics register name. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-762) Fix unit test failure for TestContainerSQLCli & TestSCMMetrics

2018-10-30 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-762:
--
Summary: Fix unit test failure for TestContainerSQLCli & TestSCMMetrics  
(was: Fix unit test failure for TestContainerSQLCli & TestCSMMetrics)

> Fix unit test failure for TestContainerSQLCli & TestSCMMetrics
> --
>
> Key: HDDS-762
> URL: https://issues.apache.org/jira/browse/HDDS-762
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Blocker
> Attachments: HDDS-762-ozone-0.3.001.patch, HDDS-762.001.patch
>
>
> TestContainerSQLCli & TestCSMMetrics are currently failing consistently 
> because of a mismatch in the metrics register name. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13404) RBF: TestRouterWebHDFSContractAppend.testRenameFileBeingAppended fails

2018-10-30 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16669486#comment-16669486
 ] 

Takanobu Asanuma commented on HDFS-13404:
-

Thanks for your thoughts in HDFS-13964, [~elgoiri] and [~ayushtkn]. I think the 
problem is that webhdfs doesn't seem to assure sequential consistency. We may 
need to use a delay or skip this method using 
{{fs.contract.create-visibility-delayed}}.
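For reference, contract options of this kind are toggled in the per-filesystem
contract XML; enabling it for the router WebHDFS contract might look like this
(file location assumed):

{code}
<property>
  <name>fs.contract.create-visibility-delayed</name>
  <value>true</value>
</property>
{code}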

> RBF: TestRouterWebHDFSContractAppend.testRenameFileBeingAppended fails
> --
>
> Key: HDFS-13404
> URL: https://issues.apache.org/jira/browse/HDFS-13404
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: detailed_error.log
>
>
> This is reported by [~elgoiri].
> {noformat}
> java.io.FileNotFoundException: 
> Failed to append to non-existent file /test/test/target for client 127.0.0.1
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp.appendFile(FSDirAppendOp.java:104)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2621)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:805)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:485)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
> ...
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:549)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:527)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:136)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$FsPathOutputStreamRunner$1.close(WebHdfsFileSystem.java:1013)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractAppendTest.testRenameFileBeingAppended(AbstractContractAppendTest.java:139)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14041) NegativeArraySizeException when PROVIDED replication >1

2018-10-30 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14041:
---
Status: Patch Available  (was: Open)

> NegativeArraySizeException when PROVIDED replication >1
> ---
>
> Key: HDFS-14041
> URL: https://issues.apache.org/jira/browse/HDFS-14041
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14041.000.patch
>
>
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(java.lang.NegativeArraySizeException): 
> java.lang.NegativeArraySizeException
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlock(BlockManager.java:1274)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlock(BlockManager.java:1225)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlockList(BlockManager.java:1196)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlocks(BlockManager.java:1346)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:176)
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14041) NegativeArraySizeException when PROVIDED replication >1

2018-10-30 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14041:
---
Summary: NegativeArraySizeException when PROVIDED replication >1  (was: 
NegativeArraySizeException when PROVIDED replicaiton >1)

> NegativeArraySizeException when PROVIDED replication >1
> ---
>
> Key: HDFS-14041
> URL: https://issues.apache.org/jira/browse/HDFS-14041
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14041.000.patch
>
>
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(java.lang.NegativeArraySizeException): 
> java.lang.NegativeArraySizeException
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlock(BlockManager.java:1274)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlock(BlockManager.java:1225)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlockList(BlockManager.java:1196)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlocks(BlockManager.java:1346)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:176)
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14041) NegativeArraySizeException when PROVIDED replicaiton >1

2018-10-30 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14041:
---
Attachment: HDFS-14041.000.patch

> NegativeArraySizeException when PROVIDED replicaiton >1
> ---
>
> Key: HDFS-14041
> URL: https://issues.apache.org/jira/browse/HDFS-14041
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14041.000.patch
>
>
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(java.lang.NegativeArraySizeException): 
> java.lang.NegativeArraySizeException
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlock(BlockManager.java:1274)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlock(BlockManager.java:1225)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlockList(BlockManager.java:1196)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlocks(BlockManager.java:1346)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:176)
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14041) NegativeArraySizeException when PROVIDED replicaiton >1

2018-10-30 Thread JIRA
Íñigo Goiri created HDFS-14041:
--

 Summary: NegativeArraySizeException when PROVIDED replicaiton >1
 Key: HDFS-14041
 URL: https://issues.apache.org/jira/browse/HDFS-14041
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Íñigo Goiri
Assignee: Íñigo Goiri


Caused by: 
org.apache.hadoop.ipc.RemoteException(java.lang.NegativeArraySizeException): 
java.lang.NegativeArraySizeException
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlock(BlockManager.java:1274)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlock(BlockManager.java:1225)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlockList(BlockManager.java:1196)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlocks(BlockManager.java:1346)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:176)
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-3743) QJM: improve formatting behavior for JNs

2018-10-30 Thread Hrishikesh Gadre (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-3743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre reassigned HDFS-3743:
--

Assignee: Hrishikesh Gadre

> QJM: improve formatting behavior for JNs
> 
>
> Key: HDFS-3743
> URL: https://issues.apache.org/jira/browse/HDFS-3743
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: QuorumJournalManager (HDFS-3077)
>Reporter: Todd Lipcon
>Assignee: Hrishikesh Gadre
>Priority: Major
>
> Currently, the JournalNodes automatically format themselves when a new writer 
> takes over, if they don't have any data for that namespace. However, this has 
> a few problems:
> 1) if the administrator accidentally points a new NN at the wrong quorum (e.g. 
> corresponding to another cluster), it will auto-format a directory on those 
> nodes. This doesn't cause any data loss, but it would be better to bail out 
> with an error indicating that they need to be formatted.
> 2) if a journal node crashes and needs to be reformatted, it should be able 
> to re-join the cluster and start storing new segments without having to fail 
> over to a new NN.
> 3) if 2/3 JNs get accidentally reformatted (e.g. the mount point becomes 
> undone), and the user starts the NN, it should fail to start, because it may 
> end up missing edits. If it auto-formats in this case, the user might have a 
> silent "rollback" of the most recent edits.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-659) Implement pagination in GET bucket (object list) endpoint

2018-10-30 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16669389#comment-16669389
 ] 

Bharat Viswanadham edited comment on HDDS-659 at 10/30/18 10:50 PM:


Hi [~elek]

I have rebased the patch on top of HDDS-712.

I have added a few more test cases for continuation token and start after.

Patch is dependent on HDDS-712. 

 

Testing with the aws cli is not working properly. I think the aws cli is broken 
for list-object: when I give max-items it is not considered as max-keys, and 
with a continuation token it does not allow me to list max-keys. (Same behavior 
with S3 and S3Gateway.) So I have tested with curl against both the S3 and our 
S3 Gateway endpoints.

 


was (Author: bharatviswa):
I have rebased the patch on top of HDDS-712.

I have added a few more test cases for continuation token and start after.

Patch is dependent on HDDS-712.

 

Testing with the aws cli is not working properly. I think the aws cli is broken 
for list-object: when I give max-items it is not considered as max-keys, and 
with a continuation token it does not allow me to list max-keys. (Same behavior 
with S3 and S3Gateway.) So I have tested with curl against both the S3 and our 
S3 Gateway endpoints.

 

> Implement pagination in GET bucket (object list) endpoint
> -
>
> Key: HDDS-659
> URL: https://issues.apache.org/jira/browse/HDDS-659
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Attachments: HDDS-659.00-WIP.patch, HDDS-659.01.patch, 
> HDDS-659.02.patch, HDDS-659.03.patch
>
>
> The current implementation always returns all the elements. We need to 
> support paging by supporting the following parameters:
>  * {{start-after}}
>  * {{continuation-token}}
>  * {{max-keys}}
>  
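A rough usage sketch of the paging flow with curl, using ListObjectsV2-style
query parameters (host, port, and bucket are made-up values):

{code}
# First page: at most 2 keys.
curl "http://localhost:9878/mybucket?list-type=2&max-keys=2"

# Next page: pass the NextContinuationToken from the previous response.
curl "http://localhost:9878/mybucket?list-type=2&max-keys=2&continuation-token=TOKEN"

# Alternatively, start listing after a given key.
curl "http://localhost:9878/mybucket?list-type=2&start-after=key1"
{code}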



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-762) Fix unit test failure for TestContainerSQLCli & TestCSMMetrics

2018-10-30 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-762:
---
Priority: Blocker  (was: Major)

> Fix unit test failure for TestContainerSQLCli & TestCSMMetrics
> --
>
> Key: HDDS-762
> URL: https://issues.apache.org/jira/browse/HDDS-762
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Blocker
> Attachments: HDDS-762-ozone-0.3.001.patch, HDDS-762.001.patch
>
>
> TestContainerSQLCli & TestCSMMetrics are currently failing consistently 
> because of a mismatch in the metrics register name. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-602) Release Ozone 0.3.0

2018-10-30 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-602:
---
Priority: Blocker  (was: Major)

> Release Ozone 0.3.0
> ---
>
> Key: HDDS-602
> URL: https://issues.apache.org/jira/browse/HDDS-602
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>
> Similar to HDDS-214, I am opening this issue to discuss all of the 
> release-related issues in this jira.
> The jira id could also be used in the commit message of the technical commits 
> (such as tag/version bump).
> As a summary, ozone 0.3.0 could be released in the same way as ozone 0.2.1. We 
> don't need to upload the artifacts to the maven repository (it requires 
> additional work).
> Roadmap is here: 
> [https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Road+Map]
> Branch is ozone-0.3



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-659) Implement pagination in GET bucket (object list) endpoint

2018-10-30 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-659:
---
Priority: Blocker  (was: Major)

> Implement pagination in GET bucket (object list) endpoint
> -
>
> Key: HDDS-659
> URL: https://issues.apache.org/jira/browse/HDDS-659
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Attachments: HDDS-659.00-WIP.patch, HDDS-659.01.patch, 
> HDDS-659.02.patch, HDDS-659.03.patch
>
>
> The current implementation always returns all the elements. We need to 
> support paging by supporting the following parameters:
>  * {{start-after}}
>  * {{continuation-token}}
>  * {{max-keys}}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-742) Handle object list requests (GET bucket) without prefix parameter

2018-10-30 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-742:
---
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

Included as part of HDDS-659 per [~bharatviswa].

> Handle object list requests (GET bucket) without prefix parameter
> -
>
> Key: HDDS-742
> URL: https://issues.apache.org/jira/browse/HDDS-742
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-742.001.patch, HDDS-742.02.patch
>
>
> In the s3 gateway the GET bucket endpoint is already implemented. It can 
> return the available objects based on a given prefix 
> ([https://docs.aws.amazon.com/AmazonS3/latest/API/v2-RESTBucketGET.html]).
> As defined there, the Delimiter parameter is used to reduce the response by 
> returning only the first-level keys and prefixes (aka directories):
> {code:java}
> <?xml version="1.0" encoding="UTF-8"?>
> <ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
>   <Name>example-bucket</Name>
>   <Prefix></Prefix>
>   <KeyCount>2</KeyCount>
>   <MaxKeys>1000</MaxKeys>
>   <Delimiter>/</Delimiter>
>   <IsTruncated>false</IsTruncated>
>   <Contents>
>     <Key>sample.jpg</Key>
>     <LastModified>2011-02-26T01:56:20.000Z</LastModified>
>     <ETag>"bf1d737a4d46a19f3bced6905cc8b902"</ETag>
>     <Size>142863</Size>
>     <StorageClass>STANDARD</StorageClass>
>   </Contents>
>   <CommonPrefixes>
>     <Prefix>photos/</Prefix>
>   </CommonPrefixes>
> </ListBucketResult>
> {code}
> Here we can have multiple additional objects with the photos/ prefix, but 
> they are not added to the response.
>  
> The main problem in the ozone s3 implementation is that the Delimiter 
> parameter *should be optional.* In case the delimiter is missing we should 
> always return all the keys without any common prefix simplification.
> This is required for the recursive directory listing which is used by the s3a 
> adapter, as illustrated below.
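Illustratively, the two behaviors differ only in the presence of the delimiter
query parameter (hypothetical endpoint and bucket):

{code}
# With a delimiter: first-level keys plus common prefixes such as photos/.
curl "http://localhost:9878/example-bucket?list-type=2&delimiter=/"

# Without a delimiter (so it must be optional): every key is returned with no
# prefix grouping, which the s3a adapter's recursive listing relies on.
curl "http://localhost:9878/example-bucket?list-type=2"
{code}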



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-659) Implement pagination in GET bucket (object list) endpoint

2018-10-30 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16669389#comment-16669389
 ] 

Bharat Viswanadham commented on HDDS-659:
-

I have rebased the patch on top of HDDS-712.

I have added a few more test cases for continuation token and start after.

Patch is dependent on HDDS-712.

 

Testing with the aws cli is not working properly. I think the aws cli is broken 
for list-object: when I give max-items it is not considered as max-keys, and 
with a continuation token it does not allow me to list max-keys. (Same behavior 
with S3 and S3Gateway.) So I have tested with curl against both the S3 and our 
S3 Gateway endpoints.

 

> Implement pagination in GET bucket (object list) endpoint
> -
>
> Key: HDDS-659
> URL: https://issues.apache.org/jira/browse/HDDS-659
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-659.00-WIP.patch, HDDS-659.01.patch, 
> HDDS-659.02.patch, HDDS-659.03.patch
>
>
> The current implementation always returns all the elements. We need to 
> support paging by supporting the following parameters:
>  * {{start-after}}
>  * {{continuation-token}}
>  * {{max-keys}}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-659) Implement pagination in GET bucket (object list) endpoint

2018-10-30 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-659:

Attachment: HDDS-659.03.patch

> Implement pagination in GET bucket (object list) endpoint
> -
>
> Key: HDDS-659
> URL: https://issues.apache.org/jira/browse/HDDS-659
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-659.00-WIP.patch, HDDS-659.01.patch, 
> HDDS-659.02.patch, HDDS-659.03.patch
>
>
> The current implementation always returns all the elements. We need to 
> support paging by supporting the following parameters:
>  * {{start-after}}
>  * {{continuation-token}}
>  * {{max-keys}}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-753) Fix failure in TestSecureOzoneCluster

2018-10-30 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16669375#comment-16669375
 ] 

Anu Engineer commented on HDDS-753:
---

+1, pending Jenkins.

> Fix failure in TestSecureOzoneCluster
> -
>
> Key: HDDS-753
> URL: https://issues.apache.org/jira/browse/HDDS-753
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-753-HDDS-4.00.patch, HDDS-753-HDDS-4.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13996) Make HttpFS' ACLs RegEx configurable

2018-10-30 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16669369#comment-16669369
 ] 

Siyao Meng commented on HDFS-13996:
---

[~jojochuang] By the way, I had to put 
testFileAclsCustomizedUserAndGroupNames() before testFileAcls() in FILEACLS, 
because testFileAcls() calls getHttpFSFileSystem() with no args, which ruins 
the HttpFS setting for testFileAclsCustomizedUserAndGroupNames() and would 
lead to a test failure.

> Make HttpFS' ACLs RegEx configurable
> 
>
> Key: HDFS-13996
> URL: https://issues.apache.org/jira/browse/HDFS-13996
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.6.5, 3.0.3, 2.7.7
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13996.001.patch, HDFS-13996.002.patch, 
> HDFS-13996.003.patch
>
>
> Previously in HDFS-11421, WebHDFS' ACLs RegEx was made configurable, but it 
> is not configurable yet in HttpFS. For now in HttpFS, the ACL permission 
> pattern is fixed to DFS_WEBHDFS_ACL_PERMISSION_PATTERN_DEFAULT.
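A minimal sketch of what making the pattern configurable could look like; the 
property key below is hypothetical, not necessarily the one used by the patch:
{code:java}
import java.util.regex.Pattern;
import org.apache.hadoop.conf.Configuration;

// Sketch: compile the ACL permission pattern from configuration instead
// of a compile-time constant, mirroring what HDFS-11421 did for WebHDFS.
class HttpFSAclPattern {
  // Hypothetical property key, for the example only.
  static final String ACL_PATTERN_KEY =
      "httpfs.webhdfs.acl.permission.pattern";

  static Pattern load(Configuration conf, String defaultPattern) {
    // defaultPattern would be DFS_WEBHDFS_ACL_PERMISSION_PATTERN_DEFAULT.
    return Pattern.compile(conf.get(ACL_PATTERN_KEY, defaultPattern));
  }
}
{code}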



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-766) Ozone shell create volume fails if volume name does not have a leading slash

2018-10-30 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-766:
---

 Summary: Ozone shell create volume fails if volume name does not 
have a leading slash
 Key: HDDS-766
 URL: https://issues.apache.org/jira/browse/HDDS-766
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Hanisha Koneru


After HDDS-682, volume creation through the shell fails if the volume name does 
not have a leading slash.
{code:java}
$ ./ozone sh volume create volume1
Volume name is required
$ ./ozone sh volume create /volume1
2018-10-30 14:07:58,078 INFO rpc.RpcClient: Creating Volume: volume1, with hdds 
as owner and quota set to 1152921504606846976 bytes.{code}
In {{OzoneAddress#stringToUri}}, when creating a new URI, the path parameter is 
expected to have a leading slash. Otherwise, the path gets mixed with the 
authority.

To fix this, we should add a leading slash to the path variable, if it does not 
exist, before constructing the URI object.
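A minimal sketch of that normalization (the wrapper class and method names are 
assumed for the example):
{code:java}
import java.net.URI;
import java.net.URISyntaxException;

// Sketch: ensure the path component carries a leading slash before
// building the URI, so "volume1" is not parsed as the authority.
class OzoneAddressSketch {
  static URI toUri(String scheme, String authority, String path)
      throws URISyntaxException {
    if (!path.startsWith("/")) {
      path = "/" + path;               // "volume1" -> "/volume1"
    }
    return new URI(scheme, authority, path, null, null);
  }
}
{code}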



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13996) Make HttpFS' ACLs RegEx configurable

2018-10-30 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13996:
--
Attachment: HDFS-13996.003.patch
Status: Patch Available  (was: In Progress)

> Make HttpFS' ACLs RegEx configurable
> 
>
> Key: HDFS-13996
> URL: https://issues.apache.org/jira/browse/HDFS-13996
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.7.7, 3.0.3, 2.6.5
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13996.001.patch, HDFS-13996.002.patch, 
> HDFS-13996.003.patch
>
>
> Previously in HDFS-11421, WebHDFS' ACLs RegEx was made configurable, but it 
> is not configurable yet in HttpFS. For now in HttpFS, the ACL permission 
> pattern is fixed to DFS_WEBHDFS_ACL_PERMISSION_PATTERN_DEFAULT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13996) Make HttpFS' ACLs RegEx configurable

2018-10-30 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13996:
--
Status: In Progress  (was: Patch Available)

[~jojochuang] Thanks for the review. Uploading rev 003 to address checkstyle 
and move test under FILEACLS.

> Make HttpFS' ACLs RegEx configurable
> 
>
> Key: HDFS-13996
> URL: https://issues.apache.org/jira/browse/HDFS-13996
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.7.7, 3.0.3, 2.6.5
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13996.001.patch, HDFS-13996.002.patch
>
>
> Previously in HDFS-11421, WebHDFS' ACLs RegEx was made configurable, but it 
> is not configurable yet in HttpFS. For now in HttpFS, the ACL permission 
> pattern is fixed to DFS_WEBHDFS_ACL_PERMISSION_PATTERN_DEFAULT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-697) update and validate the BCSID for PutSmallFile/GetSmallFile command

2018-10-30 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669321#comment-16669321
 ] 

Jitendra Nath Pandey commented on HDDS-697:
---

+1 for the patch, pending Jenkins.

> update and validate the BCSID for PutSmallFile/GetSmallFile command
> ---
>
> Key: HDDS-697
> URL: https://issues.apache.org/jira/browse/HDDS-697
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-697.000.patch, HDDS-697.001.patch, 
> HDDS-697.002.patch
>
>
> Similar to putBlock/getBlock, the putSmallFile transaction in Ratis needs to 
> update the BCSID in the container db on the datanode. getSmallFile should 
> validate the BCSID while reading the block, similar to getBlock.
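The intended behaviour can be modelled roughly as follows; the types and 
method names are illustrative, not the actual datanode handler code:
{code:java}
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Toy model of BCSID handling for small files.
class SmallFileBcsIdModel {

  static class BlockData {
    long localId;
    long bcsId;                        // block commit sequence id
  }

  private final Map<Long, BlockData> blockDb = new HashMap<>();

  // putSmallFile: persist the BCSID of the Ratis transaction with the block.
  void putSmallFile(BlockData block, long txBcsId) {
    block.bcsId = txBcsId;
    blockDb.put(block.localId, block);
  }

  // getSmallFile: refuse to serve a replica older than the requested
  // BCSID, mirroring the validation getBlock already performs.
  BlockData getSmallFile(long localId, long requestedBcsId)
      throws IOException {
    BlockData block = blockDb.get(localId);
    if (block == null || block.bcsId < requestedBcsId) {
      throw new IOException("Block " + localId + " with bcsId >= "
          + requestedBcsId + " not found");
    }
    return block;
  }
}
{code}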



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-753) Fix failure in TestSecureOzoneCluster

2018-10-30 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669310#comment-16669310
 ] 

Ajay Kumar commented on HDDS-753:
-

patch v1 to fix checkstyle.

> Fix failure in TestSecureOzoneCluster
> -
>
> Key: HDDS-753
> URL: https://issues.apache.org/jira/browse/HDDS-753
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-753-HDDS-4.00.patch, HDDS-753-HDDS-4.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-753) Fix failure in TestSecureOzoneCluster

2018-10-30 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-753:

Attachment: HDDS-753-HDDS-4.01.patch

> Fix failure in TestSecureOzoneCluster
> -
>
> Key: HDDS-753
> URL: https://issues.apache.org/jira/browse/HDDS-753
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-753-HDDS-4.00.patch, HDDS-753-HDDS-4.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-754) VolumeInfo#getScmUsed throws NPE

2018-10-30 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669291#comment-16669291
 ] 

Anu Engineer commented on HDDS-754:
---

bq. are you okay with the latest patch?
Yes, +1.

> VolumeInfo#getScmUsed throws NPE
> 
>
> Key: HDDS-754
> URL: https://issues.apache.org/jira/browse/HDDS-754
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Mukul Kumar Singh
>Assignee: Hanisha Koneru
>Priority: Blocker
> Attachments: HDDS-754.001.patch, HDDS-754.002.patch, 
> HDDS-754.003.patch, HDDS-754.004.patch
>
>
> The failure can be seen at the following jenkins run
> https://builds.apache.org/job/PreCommit-HDDS-Build/1540/testReport/org.apache.hadoop.hdds.scm.pipeline/TestNodeFailure/testPipelineFail/
> {code}
> 2018-10-29 13:44:11,984 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(50)) - Execution exception 
> when running task in Datanode ReportManager Thread - 3
> 2018-10-29 13:44:11,984 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(63)) - Caught exception in 
> thread Datanode ReportManager Thread - 3: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:107)
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:379)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:225)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:39)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
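The thread shows the failure but not the final fix; below is only a defensive 
sketch, assuming the usage source can be cleared by a volume shutdown while 
the Datanode ReportManager thread is still publishing reports:
{code:java}
import java.io.IOException;
import java.util.concurrent.atomic.AtomicReference;

// Sketch: guard the usage read so a concurrently shut-down volume
// surfaces as an IOException instead of an NPE in the report thread.
class VolumeUsageHolder {
  private final AtomicReference<Long> scmUsed = new AtomicReference<>();

  void update(long used) {
    scmUsed.set(used);
  }

  void shutdown() {
    scmUsed.set(null);                 // volume no longer tracks usage
  }

  long getScmUsed() throws IOException {
    Long used = scmUsed.get();         // single read avoids a racy recheck
    if (used == null) {
      throw new IOException("Volume usage information is unavailable");
    }
    return used;
  }
}
{code}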



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13996) Make HttpFS' ACLs RegEx configurable

2018-10-30 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669282#comment-16669282
 ] 

Wei-Chiu Chuang commented on HDFS-13996:


Thanks [~smeng].
The patch mostly makes sense to me. One suggestion to improve maintainability: 
could you remove the operation ACLS_CUSTOM_USER_GROUP_NAMES? It is not an 
HttpFS operation after all.
Instead, you can have operation FILEACLS run testCustomizedUserAndGroupNames() 
after testFileAcls(). That is, run two tests for the FILEACLS case.

> Make HttpFS' ACLs RegEx configurable
> 
>
> Key: HDFS-13996
> URL: https://issues.apache.org/jira/browse/HDFS-13996
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.6.5, 3.0.3, 2.7.7
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13996.001.patch, HDFS-13996.002.patch
>
>
> Previously in HDFS-11421, WebHDFS' ACLs RegEx was made configurable, but it 
> is not configurable yet in HttpFS. For now in HttpFS, the ACL permission 
> pattern is fixed to DFS_WEBHDFS_ACL_PERMISSION_PATTERN_DEFAULT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-754) VolumeInfo#getScmUsed throws NPE

2018-10-30 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669260#comment-16669260
 ] 

Arpit Agarwal commented on HDDS-754:


+1 pending Jenkins.

[~anu] are you okay with the latest patch?

> VolumeInfo#getScmUsed throws NPE
> 
>
> Key: HDDS-754
> URL: https://issues.apache.org/jira/browse/HDDS-754
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Mukul Kumar Singh
>Assignee: Hanisha Koneru
>Priority: Blocker
> Attachments: HDDS-754.001.patch, HDDS-754.002.patch, 
> HDDS-754.003.patch, HDDS-754.004.patch
>
>
> The failure can be seen at the following jenkins run
> https://builds.apache.org/job/PreCommit-HDDS-Build/1540/testReport/org.apache.hadoop.hdds.scm.pipeline/TestNodeFailure/testPipelineFail/
> {code}
> 2018-10-29 13:44:11,984 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(50)) - Execution exception 
> when running task in Datanode ReportManager Thread - 3
> 2018-10-29 13:44:11,984 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(63)) - Caught exception in 
> thread Datanode ReportManager Thread - 3: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:107)
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:379)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:225)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:39)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-754) VolumeInfo#getScmUsed throws NPE

2018-10-30 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669197#comment-16669197
 ] 

Hanisha Koneru edited comment on HDDS-754 at 10/30/18 7:58 PM:
---

Thanks for the review [~arpitagarwal].
 Patch v04 addresses your comment and fixes the checkstyle and related junit 
failures.

Filed a Jira, HDDS-765, for improving the unit test 
{{TestNodeFailure#testPipelineFail}}.


was (Author: hanishakoneru):
Thanks for the review [~arpitagarwal].
Patch v04 addresses your comment and fixes the checkstyle and related junit 
failures.

> VolumeInfo#getScmUsed throws NPE
> 
>
> Key: HDDS-754
> URL: https://issues.apache.org/jira/browse/HDDS-754
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Mukul Kumar Singh
>Assignee: Hanisha Koneru
>Priority: Blocker
> Attachments: HDDS-754.001.patch, HDDS-754.002.patch, 
> HDDS-754.003.patch, HDDS-754.004.patch
>
>
> The failure can be seen at the following jenkins run
> https://builds.apache.org/job/PreCommit-HDDS-Build/1540/testReport/org.apache.hadoop.hdds.scm.pipeline/TestNodeFailure/testPipelineFail/
> {code}
> 2018-10-29 13:44:11,984 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(50)) - Execution exception 
> when running task in Datanode ReportManager Thread - 3
> 2018-10-29 13:44:11,984 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(63)) - Caught exception in 
> thread Datanode ReportManager Thread - 3: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:107)
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:379)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:225)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:39)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-765) TestNodeFailure#testPipelineFail is dependent on long timeout

2018-10-30 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-765:
---

 Summary: TestNodeFailure#testPipelineFail is dependent on long 
timeout
 Key: HDDS-765
 URL: https://issues.apache.org/jira/browse/HDDS-765
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Hanisha Koneru


The unit test {{TestNodeFailure#testPipelineFail}} depends on having a long 
timeout. The timeout interval to detect that a node is stale is 90s by 
default, while the timeout interval for the cluster to get ready (in 
{{MiniOzoneClusterImpl}}) is 60s. So the cluster can time out waiting to reach 
the ready state if a DN is restarted. This is temporarily fixed in HDDS-754 by 
increasing the {{MiniOzoneCluster#waitForClusterToBeReady}} timeout. 

The test should be improved so that it is not restricted by the long timeout.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-755) ContainerInfo and ContainerReplica protobuf changes

2018-10-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669210#comment-16669210
 ] 

Hadoop QA commented on HDDS-755:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 14 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
44s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
38s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
19s{color} | {color:red} tools in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 16m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m  
4s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 38s{color} | {color:orange} root: The patch generated 3 new + 4 unchanged - 
1 fixed = 7 total (was 5) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
34s{color} | {color:red} tools in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
32s{color} | {color:red} tools in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
8s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
50s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
43s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} tools in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 36s{color} 
| {color:red} integration-test in the patch failed. {color} |
| 

[jira] [Commented] (HDDS-754) VolumeInfo#getScmUsed throws NPE

2018-10-30 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669197#comment-16669197
 ] 

Hanisha Koneru commented on HDDS-754:
-

Thanks for the review [~arpitagarwal].
Patch v04 addresses your comment and fixes the checkstyle and related junit 
failures.

> VolumeInfo#getScmUsed throws NPE
> 
>
> Key: HDDS-754
> URL: https://issues.apache.org/jira/browse/HDDS-754
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Mukul Kumar Singh
>Assignee: Hanisha Koneru
>Priority: Blocker
> Attachments: HDDS-754.001.patch, HDDS-754.002.patch, 
> HDDS-754.003.patch, HDDS-754.004.patch
>
>
> The failure can be seen at the following jenkins run
> https://builds.apache.org/job/PreCommit-HDDS-Build/1540/testReport/org.apache.hadoop.hdds.scm.pipeline/TestNodeFailure/testPipelineFail/
> {code}
> 2018-10-29 13:44:11,984 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(50)) - Execution exception 
> when running task in Datanode ReportManager Thread - 3
> 2018-10-29 13:44:11,984 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(63)) - Caught exception in 
> thread Datanode ReportManager Thread - 3: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:107)
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:379)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:225)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:39)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-754) VolumeInfo#getScmUsed throws NPE

2018-10-30 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-754:

Attachment: HDDS-754.004.patch

> VolumeInfo#getScmUsed throws NPE
> 
>
> Key: HDDS-754
> URL: https://issues.apache.org/jira/browse/HDDS-754
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Mukul Kumar Singh
>Assignee: Hanisha Koneru
>Priority: Blocker
> Attachments: HDDS-754.001.patch, HDDS-754.002.patch, 
> HDDS-754.003.patch, HDDS-754.004.patch
>
>
> The failure can be seen at the following jenkins run
> https://builds.apache.org/job/PreCommit-HDDS-Build/1540/testReport/org.apache.hadoop.hdds.scm.pipeline/TestNodeFailure/testPipelineFail/
> {code}
> 2018-10-29 13:44:11,984 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(50)) - Execution exception 
> when running task in Datanode ReportManager Thread - 3
> 2018-10-29 13:44:11,984 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(63)) - Caught exception in 
> thread Datanode ReportManager Thread - 3: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:107)
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:379)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:225)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:39)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-762) Fix unit test failure for TestContainerSQLCli & TestCSMMetrics

2018-10-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669187#comment-16669187
 ] 

Hadoop QA commented on HDDS-762:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} ozone-0.3 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  6m  
5s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
31s{color} | {color:green} ozone-0.3 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
36s{color} | {color:green} ozone-0.3 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
34s{color} | {color:green} ozone-0.3 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} ozone-0.3 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} ozone-0.3 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} ozone-0.3 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
24s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 16s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.TestContainerReplication |
|   | hadoop.hdds.scm.pipeline.TestNodeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-762 |
| JIRA Patch URL | 

[jira] [Updated] (HDDS-761) Create S3 subcommand to run S3 related operations

2018-10-30 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-761:

Issue Type: Bug  (was: Sub-task)
Parent: (was: HDDS-434)

> Create S3 subcommand to run S3 related operations
> -
>
> Key: HDDS-761
> URL: https://issues.apache.org/jira/browse/HDDS-761
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> This Jira is added to create the S3 subcommand, which will be used for all 
> S3-related operations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-761) Create S3 subcommand to run S3 related operations

2018-10-30 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-761:

Issue Type: Sub-task  (was: Bug)
Parent: HDDS-763

> Create S3 subcommand to run S3 related operations
> -
>
> Key: HDDS-761
> URL: https://issues.apache.org/jira/browse/HDDS-761
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> This Jira is added to create the S3 subcommand, which will be used for all 
> S3-related operations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-764) Run S3 smoke tests with replication STANDARD.

2018-10-30 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-764:

Labels: newbie  (was: )

> Run S3 smoke tests with replication STANDARD.
> -
>
> Key: HDDS-764
> URL: https://issues.apache.org/jira/browse/HDDS-764
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
>
> This Jira is created from the comment from [~elek]
> 1. I think sooner or later we need to run ozone tests with real replication. 
> We can add a 'scale up' to the hadoop-ozone/dist/src/main/smoketest/test.sh
> {code:java}
> docker-compose -f "$COMPOSE_FILE" down
> docker-compose -f "$COMPOSE_FILE" up -d
> docker-compose -f "$COMPOSE_FILE" scale datanode=3
> {code}
> And with this modification we don't need the '--storage-class 
> REDUCED_REDUNDANCY'. (But we can do that in a separate jira.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-764) Run S3 smoke tests with replication STANDARD.

2018-10-30 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-764:

Issue Type: Sub-task  (was: Bug)
Parent: HDDS-763

> Run S3 smoke tests with replication STANDARD.
> -
>
> Key: HDDS-764
> URL: https://issues.apache.org/jira/browse/HDDS-764
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
>
> This Jira is created from the comment from [~elek]
> 1. I think sooner or later we need to run ozone tests with real replication. 
> We can add a 'scale up' to the hadoop-ozone/dist/src/main/smoketest/test.sh
> {code:java}
> docker-compose -f "$COMPOSE_FILE" down
> docker-compose -f "$COMPOSE_FILE" up -d
> docker-compose -f "$COMPOSE_FILE" scale datanode=3
> {code}
> And with this modification we don't need the '--storage-class 
> REDUCED_REDUNDANCY'. (But we can do that in a separate jira.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-764) Run S3 smoke tests with replication STANDARD.

2018-10-30 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-764:

Description: 
This Jira is created from the comment from [~elek]

1. I think sooner or later we need to run ozone tests with real replication. We 
can add a 'scale up' to the hadoop-ozone/dist/src/main/smoketest/test.sh
{code:java}
docker-compose -f "$COMPOSE_FILE" down
docker-compose -f "$COMPOSE_FILE" up -d
docker-compose -f "$COMPOSE_FILE" scale datanode=3
{code}
And with this modification we don't need the '--storage-class 
REDUCED_REDUNDANCY'. (But we can do that in a separate jira.)

  was:
This Jira is created from the comment from [~elek]

1. I think sooner or later we need to run ozone tests with real replication. We 
can add a 'scale up' to the adoop-ozone/dist/src/main/smoketest/test.sh
{code:java}
docker-compose -f "$COMPOSE_FILE" down
docker-compose -f "$COMPOSE_FILE" up -d
docker-compose -f "$COMPOSE_FILE" scale datanode=3
{code}
And with this modification we don't need the '--storage-class 
REDUCED_REDUNDANCY'. (But we can do it in separated jira)


> Run S3 smoke tests with replication STANDARD.
> -
>
> Key: HDDS-764
> URL: https://issues.apache.org/jira/browse/HDDS-764
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
>
> This Jira is created from the comment from [~elek]
> 1. I think sooner or later we need to run ozone tests with real replication. 
> We can add a 'scale up' to the hadoop-ozone/dist/src/main/smoketest/test.sh
> {code:java}
> docker-compose -f "$COMPOSE_FILE" down
> docker-compose -f "$COMPOSE_FILE" up -d
> docker-compose -f "$COMPOSE_FILE" scale datanode=3
> {code}
> And with this modification we don't need the '--storage-class 
> REDUCED_REDUNDANCY'. (But we can do that in a separate jira.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13998) ECAdmin NPE with -setPolicy -replicate

2018-10-30 Thread Zsolt Venczel (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669157#comment-16669157
 ] 

Zsolt Venczel commented on HDFS-13998:
--

[~ayushtkn] I have started making progress on it.

> ECAdmin NPE with -setPolicy -replicate
> --
>
> Key: HDFS-13998
> URL: https://issues.apache.org/jira/browse/HDFS-13998
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Xiao Chen
>Assignee: Zsolt Venczel
>Priority: Major
>
> HDFS-13732 tried to improve the output of the console tool. But we missed the 
> fact that for replication, {{getErasureCodingPolicy}} would return null.
> This jira is to fix it in ECAdmin, and add a unit test.
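A minimal sketch of the null handling needed; the wrapper class and the 
message text are illustrative, not the committed change:
{code:java}
import java.io.IOException;
import java.io.PrintStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;

// Sketch: when -setPolicy -replicate was used, getErasureCodingPolicy
// returns null, so report replication instead of dereferencing the policy.
class EcPolicyPrinter {
  static void printPolicy(DistributedFileSystem dfs, Path path,
      PrintStream out) throws IOException {
    ErasureCodingPolicy policy = dfs.getErasureCodingPolicy(path);
    String name = (policy == null) ? "replication" : policy.getName();
    out.println("Set " + name + " erasure coding policy on " + path);
  }
}
{code}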



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-764) Run S3 smoke tests with replication STANDARD.

2018-10-30 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-764:
---

 Summary: Run S3 smoke tests with replication STANDARD.
 Key: HDDS-764
 URL: https://issues.apache.org/jira/browse/HDDS-764
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


This Jira is created from the comment from [~elek]

1. I think sooner or later we need to run ozone tests with real replication. We 
can add a 'scale up' to the hadoop-ozone/dist/src/main/smoketest/test.sh
{code:java}
docker-compose -f "$COMPOSE_FILE" down
docker-compose -f "$COMPOSE_FILE" up -d
docker-compose -f "$COMPOSE_FILE" scale datanode=3
{code}
And with this modification we don't need the '--storage-class 
REDUCED_REDUNDANCY'. (But we can do that in a separate jira.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-13998) ECAdmin NPE with -setPolicy -replicate

2018-10-30 Thread Zsolt Venczel (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-13998 started by Zsolt Venczel.

> ECAdmin NPE with -setPolicy -replicate
> --
>
> Key: HDFS-13998
> URL: https://issues.apache.org/jira/browse/HDFS-13998
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Xiao Chen
>Assignee: Zsolt Venczel
>Priority: Major
>
> HDFS-13732 tried to improve the output of the console tool. But we missed the 
> fact that for replication, {{getErasureCodingPolicy}} would return null.
> This jira is to fix it in ECAdmin, and add a unit test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12478) [PROVIDED Phase 2] Command line tools for managing Provided Storage Backup mounts

2018-10-30 Thread Ewan Higgs (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669154#comment-16669154
 ] 

Ewan Higgs edited comment on HDFS-12478 at 10/30/18 6:24 PM:
-

005
- minor fixes while walking [~virajith] through the code over the phone.

Comment from [~virajith]: move SnapshotDiff.INodeType changes out of this patch 
and into another changeset.
Also: add a test or command line in TestDFSAdmin that calls DFSAdmin to run the 
commands (catching Unsupported exceptions right now).


was (Author: ehiggs):
005
- minor fixes while walking [~virajith] through the code over the phone.

Comment from [~virajith]: move SnapshotDiff.INodeType changes out of this patch 
and into another changeset.

> [PROVIDED Phase 2] Command line tools for managing Provided Storage Backup 
> mounts
> -
>
> Key: HDFS-12478
> URL: https://issues.apache.org/jira/browse/HDFS-12478
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Attachments: HDFS-12478-HDFS-12090.004.patch, 
> HDFS-12478-HDFS-12090.005.patch, HDFS-12478-HDFS-9806.001.patch, 
> HDFS-12478-HDFS-9806.002.patch, HDFS-12478-HDFS-9806.003.patch
>
>
> This is a task for implementing the command line interface for attaching a 
> PROVIDED storage backup system (see HDFS-9806, HDFS-12090).
> # The administrator should be able to mount a PROVIDED storage volume from 
> the command line. 
> {code}hdfs attach -create [-name <name>] <path (internal)> <path (external)>{code}
> # Whitelist of users who are able to manage mounts (create, attach, detach).
> # Be able to interrogate the status of the attached storage (last time a 
> snapshot was taken, files being backed up).
> # The administrator should be able to remove an attached PROVIDED storage 
> volume from the command line. This simply means that the synchronization 
> process no longer runs. If the administrator has configured their setup to no 
> longer have local copies of the data, the blocks in the subtree are simply no 
> longer accessible as the external file store system is currently inaccessible.
> {code}hdfs attach -remove <path> [-force | -flush]{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-712) Use x-amz-storage-class to specify replication type and replication factor

2018-10-30 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669153#comment-16669153
 ] 

Bharat Viswanadham commented on HDDS-712:
-

Thank you, [~elek], for the review.

I have addressed your review comments.
 # Agreed, will file a new Jira for this. I have not taken it up in this jira.
 # Done.
 # Done. (Removed LOG.info)

> Use x-amz-storage-class to specify replication type and replication factor
> --
>
> Key: HDDS-712
> URL: https://issues.apache.org/jira/browse/HDDS-712
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-712.00.patch, HDDS-712.01.patch
>
>
>  
> This has been a comment in the Jira in HDDS-693 from [~anu]
> @DefaultValue("STAND_ALONE") @QueryParam("replicationType")
> Just an opportunistic comment. Not part of this patch: this query param will 
> not be sent by S3, hence it will always default to Stand_Alone. At some 
> point we need to move to RATIS. Perhaps we have to read this via 
> x-amz-storage-class.
> *I propose below solution for this:*
> Currently, in code we take the query params replicationType and 
> replicationFactor and default them to Stand alone and 1. But these query 
> params cannot be passed from the aws cli.
> We want to use the x-amz-storage-class header to pass the values. For S3, if 
> you don't specify this header it defaults to Standard. So, in Ozone over S3, 
> we want to default to RATIS and a replication factor of three.
> We can use the mapping Standard to RATIS and REDUCED_REDUNDANCY to Stand 
> alone.
>  
> There are two more values, STANDARD_IA and ONEZONE_IA; how we want to use 
> them needs to be considered later. Initially we are considering using 
> Standard and Reduced_Redundancy.
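The proposed mapping can be sketched like this; the enum and helper are 
illustrative only, though the value names follow the proposal above:
{code:java}
// Sketch: map the x-amz-storage-class header to Ozone replication
// settings; an absent header defaults to STANDARD, i.e. RATIS with
// a replication factor of three.
enum S3StorageType {
  STANDARD("RATIS", 3),
  REDUCED_REDUNDANCY("STAND_ALONE", 1);

  final String replicationType;
  final int replicationFactor;

  S3StorageType(String replicationType, int replicationFactor) {
    this.replicationType = replicationType;
    this.replicationFactor = replicationFactor;
  }

  static S3StorageType fromHeader(String xAmzStorageClass) {
    if (xAmzStorageClass == null || xAmzStorageClass.isEmpty()) {
      return STANDARD;                 // S3 defaults to Standard
    }
    // STANDARD_IA / ONEZONE_IA are deliberately unmapped for now.
    return valueOf(xAmzStorageClass);
  }
}
{code}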



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12478) [PROVIDED Phase 2] Command line tools for managing Provided Storage Backup mounts

2018-10-30 Thread Ewan Higgs (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669154#comment-16669154
 ] 

Ewan Higgs commented on HDFS-12478:
---

005
- minor fixes while walking [~virajith] through the code over the phone.

Comment from [~virajith]: move SnapshotDiff.INodeType changes out of this patch 
and into another changeset.

> [PROVIDED Phase 2] Command line tools for managing Provided Storage Backup 
> mounts
> -
>
> Key: HDFS-12478
> URL: https://issues.apache.org/jira/browse/HDFS-12478
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Attachments: HDFS-12478-HDFS-12090.004.patch, 
> HDFS-12478-HDFS-12090.005.patch, HDFS-12478-HDFS-9806.001.patch, 
> HDFS-12478-HDFS-9806.002.patch, HDFS-12478-HDFS-9806.003.patch
>
>
> This is a task for implementing the command line interface for attaching a 
> PROVIDED storage backup system (see HDFS-9806, HDFS-12090).
> # The administrator should be able to mount a PROVIDED storage volume from 
> the command line. 
> {code}hdfs attach -create [-name <name>] <path (internal)> <path (external)>{code}
> # Whitelist of users who are able to manage mounts (create, attach, detach).
> # Be able to interrogate the status of the attached storage (last time a 
> snapshot was taken, files being backed up).
> # The administrator should be able to remove an attached PROVIDED storage 
> volume from the command line. This simply means that the synchronization 
> process no longer runs. If the administrator has configured their setup to no 
> longer have local copies of the data, the blocks in the subtree are simply no 
> longer accessible as the external file store system is currently inaccessible.
> {code}hdfs attach -remove <path> [-force | -flush]{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12478) [PROVIDED Phase 2] Command line tools for managing Provided Storage Backup mounts

2018-10-30 Thread Ewan Higgs (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HDFS-12478:
--
Status: Patch Available  (was: Open)

> [PROVIDED Phase 2] Command line tools for managing Provided Storage Backup 
> mounts
> -
>
> Key: HDFS-12478
> URL: https://issues.apache.org/jira/browse/HDFS-12478
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Attachments: HDFS-12478-HDFS-12090.004.patch, 
> HDFS-12478-HDFS-12090.005.patch, HDFS-12478-HDFS-9806.001.patch, 
> HDFS-12478-HDFS-9806.002.patch, HDFS-12478-HDFS-9806.003.patch
>
>
> This is a task for implementing the command line interface for attaching a 
> PROVIDED storage backup system (see HDFS-9806, HDFS-12090).
> # The administrator should be able to mount a PROVIDED storage volume from 
> the command line. 
> {code}hdfs attach -create [-name <name>] <path (internal)> <path (external)>{code}
> # Whitelist of users who are able to manage mounts (create, attach, detach).
> # Be able to interrogate the status of the attached storage (last time a 
> snapshot was taken, files being backed up).
> # The administrator should be able to remove an attached PROVIDED storage 
> volume from the command line. This simply means that the synchronization 
> process no longer runs. If the administrator has configured their setup to no 
> longer have local copies of the data, the blocks in the subtree are simply no 
> longer accessible as the external file store system is currently inaccessible.
> {code}hdfs attach -remove <path> [-force | -flush]{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12478) [PROVIDED Phase 2] Command line tools for managing Provided Storage Backup mounts

2018-10-30 Thread Ewan Higgs (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HDFS-12478:
--
Status: Open  (was: Patch Available)

> [PROVIDED Phase 2] Command line tools for managing Provided Storage Backup 
> mounts
> -
>
> Key: HDFS-12478
> URL: https://issues.apache.org/jira/browse/HDFS-12478
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Attachments: HDFS-12478-HDFS-12090.004.patch, 
> HDFS-12478-HDFS-12090.005.patch, HDFS-12478-HDFS-9806.001.patch, 
> HDFS-12478-HDFS-9806.002.patch, HDFS-12478-HDFS-9806.003.patch
>
>
> This is a task for implementing the command line interface for attaching a 
> PROVIDED storage backup system (see HDFS-9806, HDFS-12090).
> # The administrator should be able to mount a PROVIDED storage volume from 
> the command line. 
> {code}hdfs attach -create [-name <name>] <path (internal)> <path (external)>{code}
> # Whitelist of users who are able to manage mounts (create, attach, detach).
> # Be able to interrogate the status of the attached storage (last time a 
> snapshot was taken, files being backed up).
> # The administrator should be able to remove an attached PROVIDED storage 
> volume from the command line. This simply means that the synchronization 
> process no longer runs. If the administrator has configured their setup to no 
> longer have local copies of the data, the blocks in the subtree are simply no 
> longer accessible as the external file store system is currently inaccessible.
> {code}hdfs attach -remove <path> [-force | -flush]{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12478) [PROVIDED Phase 2] Command line tools for managing Provided Storage Backup mounts

2018-10-30 Thread Ewan Higgs (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HDFS-12478:
--
Attachment: HDFS-12478-HDFS-12090.005.patch

> [PROVIDED Phase 2] Command line tools for managing Provided Storage Backup 
> mounts
> -
>
> Key: HDFS-12478
> URL: https://issues.apache.org/jira/browse/HDFS-12478
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Attachments: HDFS-12478-HDFS-12090.004.patch, 
> HDFS-12478-HDFS-12090.005.patch, HDFS-12478-HDFS-9806.001.patch, 
> HDFS-12478-HDFS-9806.002.patch, HDFS-12478-HDFS-9806.003.patch
>
>
> This is a task for implementing the command line interface for attaching a 
> PROVIDED storage backup system (see HDFS-9806, HDFS-12090).
> # The administrator should be able to mount a PROVIDED storage volume from 
> the command line. 
> {code}hdfs attach -create [-name <name>] <path (internal)> <path (external)>{code}
> # Whitelist of users who are able to manage mounts (create, attach, detach).
> # Be able to interrogate the status of the attached storage (last time a 
> snapshot was taken, files being backed up).
> # The administrator should be able to remove an attached PROVIDED storage 
> volume from the command line. This simply means that the synchronization 
> process no longer runs. If the administrator has configured their setup to no 
> longer have local copies of the data, the blocks in the subtree are simply no 
> longer accessible as the external file store system is currently inaccessible.
> {code}hdfs attach -remove <path> [-force | -flush]{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12478) [PROVIDED Phase 2] Command line tools for managing Provided Storage Backup mounts

2018-10-30 Thread Ewan Higgs (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HDFS-12478:
--
Attachment: HDFS-12478-HDFS-12090.004.patch

> [PROVIDED Phase 2] Command line tools for managing Provided Storage Backup 
> mounts
> -
>
> Key: HDFS-12478
> URL: https://issues.apache.org/jira/browse/HDFS-12478
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Attachments: HDFS-12478-HDFS-12090.004.patch, 
> HDFS-12478-HDFS-9806.001.patch, HDFS-12478-HDFS-9806.002.patch, 
> HDFS-12478-HDFS-9806.003.patch
>
>
> This is a task for implementing the command line interface for attaching a 
> PROVIDED storage backup system (see HDFS-9806, HDFS-12090).
> # The administrator should be able to mount a PROVIDED storage volume from 
> the command line. 
> {code}hdfs attach -create [-name <name>] <path (HDFS)> <path (external)>{code}
> # Whitelist of users who are able to manage mounts (create, attach, detach).
> # Be able to interrogate the status of the attached storage (last time a 
> snapshot was taken, files being backed up).
> # The administrator should be able to remove an attached PROVIDED storage 
> volume from the command line. This simply means that the synchronization 
> process no longer runs. If the administrator has configured their setup to no 
> longer have local copies of the data, the blocks in the subtree are simply no 
> longer accessible as the external file store system is currently inaccessible.
> {code}hdfs attach -remove <name> [-force | -flush]{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-712) Use x-amz-storage-class to specify replication type and replication factor

2018-10-30 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-712:

Attachment: HDDS-712.01.patch

> Use x-amz-storage-class to specify replication type and replication factor
> --
>
> Key: HDDS-712
> URL: https://issues.apache.org/jira/browse/HDDS-712
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-712.00.patch, HDDS-712.01.patch
>
>
>  
> This comes from a comment by [~anu] on HDDS-693:
> @DefaultValue("STAND_ALONE") @QueryParam("replicationType")
> Just an opportunistic comment. Not part of this patch, but this query param 
> will not be sent by S3, hence this will always default to STAND_ALONE. At 
> some point we need to move to RATIS; perhaps we have to read this via 
> x-amz-storage-class.
> *I propose the following solution:*
> Currently, the code takes the query params replicationType and 
> replicationFactor and defaults them to STAND_ALONE and 1, but these query 
> params cannot be passed from the aws cli.
> We want to use the x-amz-storage-class header to pass these values. In S3, 
> if you don't specify this header it defaults to Standard, so in Ozone over 
> S3 we likewise want to default to RATIS with a replication factor of three.
> We can map STANDARD to RATIS and REDUCED_REDUNDANCY to STAND_ALONE.
>  
> There are two more values, STANDARD_IA and ONEZONE_IA; how to use them needs 
> to be considered later. Initially we are considering only STANDARD and 
> REDUCED_REDUNDANCY.
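
A minimal sketch of the proposed mapping, assuming placeholder names (this is 
not Ozone's actual API, just an illustration of the header-to-type translation):

{code:java}
// Illustrative sketch only: map the S3 x-amz-storage-class header to the
// proposed Ozone defaults (STANDARD -> RATIS/3, REDUCED_REDUNDANCY ->
// STAND_ALONE/1). All names here are placeholders.
enum ReplicationType { RATIS, STAND_ALONE }

final class StorageClassMapper {
  static ReplicationType toType(String storageClass) {
    // S3 treats a missing header as STANDARD, so we do the same.
    String sc = (storageClass == null) ? "STANDARD" : storageClass;
    switch (sc) {
      case "STANDARD":           return ReplicationType.RATIS;       // factor 3
      case "REDUCED_REDUNDANCY": return ReplicationType.STAND_ALONE; // factor 1
      default:
        throw new IllegalArgumentException("Unsupported storage class: " + sc);
    }
  }
}
{code}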



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12478) [PROVIDED Phase 2] Command line tools for managing Provided Storage Backup mounts

2018-10-30 Thread Ewan Higgs (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HDFS-12478:
--
Attachment: (was: HDFS-12478-HDFS-12090.004.patch)

> [PROVIDED Phase 2] Command line tools for managing Provided Storage Backup 
> mounts
> -
>
> Key: HDFS-12478
> URL: https://issues.apache.org/jira/browse/HDFS-12478
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Attachments: HDFS-12478-HDFS-12090.004.patch, 
> HDFS-12478-HDFS-9806.001.patch, HDFS-12478-HDFS-9806.002.patch, 
> HDFS-12478-HDFS-9806.003.patch
>
>
> This is a task for implementing the command line interface for attaching a 
> PROVIDED storage backup system (see HDFS-9806, HDFS-12090).
> # The administrator should be able to mount a PROVIDED storage volume from 
> the command line. 
> {code}hdfs attach -create [-name <name>] <path (HDFS)> <path (external)>{code}
> # Whitelist of users who are able to manage mounts (create, attach, detach).
> # Be able to interrogate the status of the attached storage (last time a 
> snapshot was taken, files being backed up).
> # The administrator should be able to remove an attached PROVIDED storage 
> volume from the command line. This simply means that the synchronization 
> process no longer runs. If the administrator has configured their setup to no 
> longer have local copies of the data, the blocks in the subtree are simply no 
> longer accessible as the external file store system is currently inaccessible.
> {code}hdfs attach -remove <name> [-force | -flush]{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14016) ObserverReadProxyProvider should enable observer read by default

2018-10-30 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669145#comment-16669145
 ] 

Konstantin Shvachko commented on HDFS-14016:


+1

> ObserverReadProxyProvider should enable observer read by default
> 
>
> Key: HDFS-14016
> URL: https://issues.apache.org/jira/browse/HDFS-14016
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14016-HDFS-12943.001.patch
>
>
> Currently in {{ObserverReadProxyProvider#invoke}}, the code only checks 
> whether to talk to the Observer when {{observerReadEnabled && 
> isRead(method)}} is true; otherwise it always talks to the Active. The 
> issue is that the flag can currently only be set through 
> {{setObserverReadEnabled}}, which is used by tests only, so observer reads 
> are always disabled in deployment with no way to enable them. We may want 
> to either expose a configuration key, hard-code the flag to true so it can 
> only be changed for testing purposes, or simply remove this variable. This 
> is closely related to HDFS-13923.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14040) Add hadoop.token configuration parameter to load tokens

2018-10-30 Thread JIRA
Íñigo Goiri created HDFS-14040:
--

 Summary: Add hadoop.token configuration parameter to load tokens
 Key: HDFS-14040
 URL: https://issues.apache.org/jira/browse/HDFS-14040
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Íñigo Goiri
Assignee: Íñigo Goiri


Currently, Hadoop allows passing files containing tokens.
WebHDFS provides base64 delegation tokens that can be used directly.
This JIRA adds the option to pass base64 tokens directly without using files.
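
As a rough sketch of how the loading could work (the {{hadoop.token}} property 
name is the proposal here; {{Token#decodeFromUrlString}} and 
{{UserGroupInformation#addToken}} are existing APIs):

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;

final class TokenLoader {
  // Sketch: read a base64-encoded delegation token from the proposed
  // "hadoop.token" property and attach it to the current user's credentials.
  static void loadFromConf(Configuration conf) throws IOException {
    String encoded = conf.get("hadoop.token");
    if (encoded == null) {
      return;  // no inline token configured
    }
    Token<?> token = new Token<>();
    token.decodeFromUrlString(encoded);  // WebHDFS-style base64 form
    UserGroupInformation.getCurrentUser().addToken(token);
  }
}
{code}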



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13794) [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method.

2018-10-30 Thread Virajith Jalaparti (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-13794:
--
Status: Open  (was: Patch Available)

> [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method.
> --
>
> Key: HDFS-13794
> URL: https://issues.apache.org/jira/browse/HDFS-13794
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13794-HDFS-12090.001.patch, 
> HDFS-13794-HDFS-12090.002.patch, HDFS-13794-HDFS-12090.003.patch, 
> HDFS-13794-HDFS-12090.004.patch
>
>
> When updating the BlockAliasMap we may need to deal with deleted blocks. 
> Otherwise the BlockAliasMap will grow indefinitely(!).
> Therefore, the BlockAliasMap.Writer needs a method for removing blocks.
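
A minimal sketch of what the added method might look like, mirroring the shape 
of the existing {{Writer#store}} (the interface below stands in for the real 
{{BlockAlias}}; the actual patch may differ):

{code:java}
import java.io.Closeable;
import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.Block;

// Stand-in for the real BlockAlias type, for illustration only.
interface BlockAlias {
  Block getBlock();
}

// Sketch of the Writer with the proposed remove() counterpart to store(),
// so entries for deleted blocks can be dropped instead of piling up.
abstract class WriterSketch<U extends BlockAlias> implements Closeable {
  public abstract void store(U alias) throws IOException;

  // Proposed: drop the alias map entry for a deleted block.
  public abstract void remove(Block block) throws IOException;
}
{code}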



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13794) [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method.

2018-10-30 Thread Virajith Jalaparti (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-13794:
--
Status: Patch Available  (was: Open)

> [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method.
> --
>
> Key: HDFS-13794
> URL: https://issues.apache.org/jira/browse/HDFS-13794
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13794-HDFS-12090.001.patch, 
> HDFS-13794-HDFS-12090.002.patch, HDFS-13794-HDFS-12090.003.patch, 
> HDFS-13794-HDFS-12090.004.patch
>
>
> When updating the BlockAliasMap we may need to deal with deleted blocks. 
> Otherwise the BlockAliasMap will grow indefinitely(!).
> Therefore, the BlockAliasMap.Writer needs a method for removing blocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14036) RBF: Add hdfs-rbf-default.xml to HdfsConfiguration by default

2018-10-30 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669131#comment-16669131
 ] 

Íñigo Goiri commented on HDFS-14036:


Yetus hasn't executed the RBF tests.
As I said, the main tests now fail with this change because we get port 
conflicts.

> RBF: Add hdfs-rbf-default.xml to HdfsConfiguration by default
> -
>
> Key: HDFS-14036
> URL: https://issues.apache.org/jira/browse/HDFS-14036
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14036.001.patch
>
>
> Currently, the default values from hdfs-rbf-default.xml are not being set by 
> default.
> We should add them to HdfsConfiguration by default.
> This may break some unit tests, so we would need to tune some RBF unit tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14036) RBF: Add hdfs-rbf-default.xml to HdfsConfiguration by default

2018-10-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669129#comment-16669129
 ] 

Hadoop QA commented on HDFS-14036:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
32s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14036 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946165/HDFS-14036.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b4078ea09273 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 62d98ca |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25391/testReport/ |
| Max. process+thread count | 339 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25391/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.




[jira] [Commented] (HDFS-14038) Expose HdfsDataOutputStreamBuilder to include Spark in LimitedPrivate

2018-10-30 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669112#comment-16669112
 ] 

Xiao Chen commented on HDFS-14038:
--

Thanks for the comments, and sorry if I wasn't clear.

Yes, the goal of this jira is to investigate a reasonable way for downstream 
projects to construct the stream. We could add 'Spark' to DFS's 
LimitedPrivate, or better yet figure out a reasonable way to expose this on 
FSDataOutputStreamBuilder. Currently {{replicate}} is purely an HDFS concept, 
separate from {{replication}}, which sets the replication factor but only 
when the file uses replication rather than EC. That said, the replication 
factor is also an HDFS-only concept, so I don't see why we can't move that 
up. We would need great clarity so as not to confuse users, of course.

> Expose HdfsDataOutputStreamBuilder to include Spark in LimitedPrivate
> -
>
> Key: HDFS-14038
> URL: https://issues.apache.org/jira/browse/HDFS-14038
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Priority: Major
>
> In SPARK-25855 / 
> https://github.com/apache/spark/pull/22881#issuecomment-434359237, Spark 
> prefers to create its event log files with replication (instead of EC). 
> Currently this has to be done by some casting / reflection to get a 
> DistributedFileSystem object (or the {{HdfsDataOutputStreamBuilder}} 
> subclass of it).
> We should officially expose this for Spark's usage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-710) Make Block Commit Sequence (BCS) opaque to clients

2018-10-30 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669098#comment-16669098
 ] 

Jitendra Nath Pandey edited comment on HDDS-710 at 10/30/18 5:38 PM:
-

This is addressed by HDDS-749. The BCS is added to BlockID, so that client code 
doesn't see the BCS. We should probably re-purpose this Jira to focus on 
reducing the footprint of the three IDs.


was (Author: jnp):
This is addressed by HDDS-749. The BCS is added to BlockID.

> Make Block Commit Sequence (BCS) opaque to clients
> --
>
> Key: HDDS-710
> URL: https://issues.apache.org/jira/browse/HDDS-710
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Critical
>
> An immutable block is identified by the following:
> - Container ID
> - Local Block ID
> - BCS (Block Commit Sequence ID)
> All of these values are currently exposed to the client. Instead we can have 
> a composite block ID that hides these details from the client. A first 
> thought is a naive implementation that generates a 192-bit (3x64-bit) block 
> ID.
> Proposed by [~anu] in HDDS-676.
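
As a naive illustration of the composite-ID idea (class and method names are 
hypothetical, not the HDDS-749 implementation):

{code:java}
// Naive sketch: pack the three 64-bit parts into one opaque value object so
// clients round-trip a single ID and never touch the parts individually.
final class CompositeBlockId {
  private final long containerId;
  private final long localId;
  private final long bcsId;  // block commit sequence id

  CompositeBlockId(long containerId, long localId, long bcsId) {
    this.containerId = containerId;
    this.localId = localId;
    this.bcsId = bcsId;
  }

  // Opaque wire form; only the server ever parses it back apart.
  String encode() {
    return containerId + ":" + localId + ":" + bcsId;
  }
}
{code}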



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14039) ec -listPolicies doesn't show correct state for the default policy when the default is not RS(6,3)

2018-10-30 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-14039:


 Summary: ec -listPolicies doesn't show correct state for the 
default policy when the default is not RS(6,3)
 Key: HDFS-14039
 URL: https://issues.apache.org/jira/browse/HDFS-14039
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding
Affects Versions: 3.0.0
Reporter: Xiao Chen
Assignee: Kitti Nanasi


{noformat}
$ hdfs ec -listPolicies
Erasure Coding Policies:
ErasureCodingPolicy=[Name=RS-10-4-1024k, Schema=[ECSchema=[Codec=rs, 
numDataUnits=10, numParityUnits=4]], CellSize=1048576, Id=5], State=DISABLED
ErasureCodingPolicy=[Name=RS-3-2-1024k, Schema=[ECSchema=[Codec=rs, 
numDataUnits=3, numParityUnits=2]], CellSize=1048576, Id=2], State=DISABLED
ErasureCodingPolicy=[Name=RS-6-3-1024k, Schema=[ECSchema=[Codec=rs, 
numDataUnits=6, numParityUnits=3]], CellSize=1048576, Id=1], State=ENABLED
ErasureCodingPolicy=[Name=RS-LEGACY-6-3-1024k, 
Schema=[ECSchema=[Codec=rs-legacy, numDataUnits=6, numParityUnits=3]], 
CellSize=1048576, Id=3], State=DISABLED
ErasureCodingPolicy=[Name=XOR-2-1-1024k, Schema=[ECSchema=[Codec=xor, 
numDataUnits=2, numParityUnits=1]], CellSize=1048576, Id=4], State=DISABLED
$ hdfs ec -enablePolicy -policy XOR-2-1-1024k
Erasure coding policy XOR-2-1-1024k is enabled
$ hdfs ec -listPolicies
Erasure Coding Policies:
ErasureCodingPolicy=[Name=RS-10-4-1024k, Schema=[ECSchema=[Codec=rs, 
numDataUnits=10, numParityUnits=4]], CellSize=1048576, Id=5], State=DISABLED
ErasureCodingPolicy=[Name=RS-3-2-1024k, Schema=[ECSchema=[Codec=rs, 
numDataUnits=3, numParityUnits=2]], CellSize=1048576, Id=2], State=DISABLED
ErasureCodingPolicy=[Name=RS-6-3-1024k, Schema=[ECSchema=[Codec=rs, 
numDataUnits=6, numParityUnits=3]], CellSize=1048576, Id=1], State=ENABLED
ErasureCodingPolicy=[Name=RS-LEGACY-6-3-1024k, 
Schema=[ECSchema=[Codec=rs-legacy, numDataUnits=6, numParityUnits=3]], 
CellSize=1048576, Id=3], State=DISABLED
ErasureCodingPolicy=[Name=XOR-2-1-1024k, Schema=[ECSchema=[Codec=xor, 
numDataUnits=2, numParityUnits=1]], CellSize=1048576, Id=4], State=ENABLED
--
$ #set default to be RS-3-2 for dfs.namenode.ec.system.default.policy, and 
restart NN
(this seems to be what's triggering the failure)
---
$ hdfs ec -listPolicies
Erasure Coding Policies:
ErasureCodingPolicy=[Name=RS-10-4-1024k, Schema=[ECSchema=[Codec=rs, 
numDataUnits=10, numParityUnits=4]], CellSize=1048576, Id=5], State=DISABLED
ErasureCodingPolicy=[Name=RS-3-2-1024k, Schema=[ECSchema=[Codec=rs, 
numDataUnits=3, numParityUnits=2]], CellSize=1048576, Id=2], State=DISABLED
ErasureCodingPolicy=[Name=RS-6-3-1024k, Schema=[ECSchema=[Codec=rs, 
numDataUnits=6, numParityUnits=3]], CellSize=1048576, Id=1], State=ENABLED
ErasureCodingPolicy=[Name=RS-LEGACY-6-3-1024k, 
Schema=[ECSchema=[Codec=rs-legacy, numDataUnits=6, numParityUnits=3]], 
CellSize=1048576, Id=3], State=DISABLED
ErasureCodingPolicy=[Name=XOR-2-1-1024k, Schema=[ECSchema=[Codec=xor, 
numDataUnits=2, numParityUnits=1]], CellSize=1048576, Id=4], State=ENABLED
$ hdfs ec -enablePolicy -policy RS-3-2-1024k
Erasure coding policy RS-3-2-1024k is enabled
$ hdfs ec -listPolicies
Erasure Coding Policies:
ErasureCodingPolicy=[Name=RS-10-4-1024k, Schema=[ECSchema=[Codec=rs, 
numDataUnits=10, numParityUnits=4]], CellSize=1048576, Id=5], State=DISABLED
ErasureCodingPolicy=[Name=RS-3-2-1024k, Schema=[ECSchema=[Codec=rs, 
numDataUnits=3, numParityUnits=2]], CellSize=1048576, Id=2], State=DISABLED
ErasureCodingPolicy=[Name=RS-6-3-1024k, Schema=[ECSchema=[Codec=rs, 
numDataUnits=6, numParityUnits=3]], CellSize=1048576, Id=1], State=ENABLED
ErasureCodingPolicy=[Name=RS-LEGACY-6-3-1024k, 
Schema=[ECSchema=[Codec=rs-legacy, numDataUnits=6, numParityUnits=3]], 
CellSize=1048576, Id=3], State=DISABLED
ErasureCodingPolicy=[Name=XOR-2-1-1024k, Schema=[ECSchema=[Codec=xor, 
numDataUnits=2, numParityUnits=1]], CellSize=1048576, Id=4], State=ENABLED
{noformat}

The last two listings should show RS-3-2 as ENABLED, and RS-6-3 as DISABLED if 
it was not enabled before.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-710) Make Block Commit Sequence (BCS) opaque to clients

2018-10-30 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669098#comment-16669098
 ] 

Jitendra Nath Pandey commented on HDDS-710:
---

This is addressed by HDDS-749. The BCS is added to BlockID.

> Make Block Commit Sequence (BCS) opaque to clients
> --
>
> Key: HDDS-710
> URL: https://issues.apache.org/jira/browse/HDDS-710
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Critical
>
> An immutable block is identified by the following:
> - Container ID
> - Local Block ID
> - BCS (Block Commit Sequence ID)
> All of these values are currently exposed to the client. Instead we can have 
> a composite block ID that hides these details from the client. A first 
> thought is a naive implementation that generates a 192-bit (3x64-bit) block 
> ID.
> Proposed by [~anu] in HDDS-676.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14038) Expose HdfsDataOutputStreamBuilder to include Spark in LimitedPrivate

2018-10-30 Thread Marcelo Vanzin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669087#comment-16669087
 ] 

Marcelo Vanzin commented on HDFS-14038:
---

No, the problem here is not the use of reflection. That is needed because Spark 
still has to build against Hadoop 2, which doesn't have that API.

The issue raised in that comment is that the method Spark uses is in a 
LimitedPrivate / Unstable API. Which means it can break at any time.

For example, a better approach would be to have a method in 
{{FSDataOutputStreamBuilder}}, which is marked as public. In fact, there's 
already {{replication()}}, to set the replication factor, but it doesn't seem 
related to the {{replicate()}} method in HdfsDataOutputStreamBuilder. Maybe 
they should be merged.
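
For context, the downcast Spark needs today looks roughly like this 
({{DistributedFileSystem#createFile}} and 
{{HdfsDataOutputStreamBuilder#replicate}} are existing APIs; the helper class 
and path are illustrative):

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

final class EventLogOpener {
  // Sketch of the casting approach: replicate() forces a replicated (non-EC)
  // file and currently lives only on the HDFS-specific builder subclass.
  static FSDataOutputStream open(Configuration conf, Path log) throws IOException {
    FileSystem fs = log.getFileSystem(conf);
    if (fs instanceof DistributedFileSystem) {
      return ((DistributedFileSystem) fs).createFile(log).replicate().build();
    }
    return fs.create(log);  // non-HDFS fallback keeps the default behavior
  }
}
{code}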

> Expose HdfsDataOutputStreamBuilder to include Spark in LimitedPrivate
> -
>
> Key: HDFS-14038
> URL: https://issues.apache.org/jira/browse/HDFS-14038
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Priority: Major
>
> In SPARK-25855 / 
> https://github.com/apache/spark/pull/22881#issuecomment-434359237, Spark 
> prefers to create its event log files with replication (instead of EC). 
> Currently this has to be done by some casting / reflection to get a 
> DistributedFileSystem object (or the {{HdfsDataOutputStreamBuilder}} 
> subclass of it).
> We should officially expose this for Spark's usage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14024) RBF: ProvidedCapacityTotal json exception in NamenodeHeartbeatService

2018-10-30 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota updated HDFS-14024:
---
Attachment: HDFS-14024-HDFS-13891.0.patch

> RBF: ProvidedCapacityTotal json exception in NamenodeHeartbeatService
> -
>
> Key: HDFS-14024
> URL: https://issues.apache.org/jira/browse/HDFS-14024
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14024-HDFS-13891.0.patch, HDFS-14024.0.patch
>
>
> Routers may be proxying for a downstream name node that is NOT migrated to 
> understand "ProvidedCapacityTotal". updateJMXParameters method in 
> NamenodeHeartbeatService should handle this without breaking.
>  
> {code:java}
> jsonObject.getLong("MissingBlocks"),
> jsonObject.getLong("PendingReplicationBlocks"),
> jsonObject.getLong("UnderReplicatedBlocks"),
> jsonObject.getLong("PendingDeletionBlocks"),
> jsonObject.getLong("ProvidedCapacityTotal"));
> {code}
> One way to do this is to create a JSON wrapper which gives back some default 
> if the JSON node is not found.
>  
>  
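
A minimal sketch of such a defaulting wrapper, assuming the jettison 
{{JSONObject}} already used by the heartbeat service (the helper name is 
illustrative):

{code:java}
import org.codehaus.jettison.json.JSONException;
import org.codehaus.jettison.json.JSONObject;

final class JsonDefaults {
  // Illustrative helper: fall back to a default when a metric key is missing,
  // so routers tolerate namenodes that predate ProvidedCapacityTotal.
  static long getLong(JSONObject json, String key, long dflt) {
    try {
      return json.has(key) ? json.getLong(key) : dflt;
    } catch (JSONException e) {
      return dflt;  // treat unparsable values like missing ones
    }
  }
}
{code}

With this, the last lookup above could become 
{{JsonDefaults.getLong(jsonObject, "ProvidedCapacityTotal", 0)}}, so older 
namenodes no longer break the heartbeat.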



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14024) RBF: ProvidedCapacityTotal json exception in NamenodeHeartbeatService

2018-10-30 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota updated HDFS-14024:
---
Attachment: (was: HDFS-14024.1.patch)

> RBF: ProvidedCapacityTotal json exception in NamenodeHeartbeatService
> -
>
> Key: HDFS-14024
> URL: https://issues.apache.org/jira/browse/HDFS-14024
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14024.0.patch
>
>
> Routers may be proxying for a downstream name node that is NOT migrated to 
> understand "ProvidedCapacityTotal". updateJMXParameters method in 
> NamenodeHeartbeatService should handle this without breaking.
>  
> {code:java}
> jsonObject.getLong("MissingBlocks"),
> jsonObject.getLong("PendingReplicationBlocks"),
> jsonObject.getLong("UnderReplicatedBlocks"),
> jsonObject.getLong("PendingDeletionBlocks"),
> jsonObject.getLong("ProvidedCapacityTotal"));
> {code}
> One way to do this is to create a JSON wrapper which gives back some default 
> if the JSON node is not found.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14033) [libhdfs++] Disable libhdfs++ build on systems that do not support thread_local

2018-10-30 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669066#comment-16669066
 ] 

Sunil Govindan commented on HDFS-14033:
---

If there are no major concerns on this approach, I could help to get this in 
by today evening.

I would really appreciate a review here as it's specific to a few compilers. 
In my view, the changes look clean.

Thanks

[~vagarychen] [~shv] [~msingh] [~vinayrpet] [~rakeshr]

> [libhdfs++] Disable libhdfs++ build on systems that do not support 
> thread_local
> ---
>
> Key: HDFS-14033
> URL: https://issues.apache.org/jira/browse/HDFS-14033
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>Priority: Major
> Attachments: HDFS-14033.000.patch, HDFS-14033.001.patch
>
>
> In order to still be able to build Hadoop on older systems (such as rhel 6) 
> we need to disable libhdfs++ build on systems that do not support 
> thread_local. We should also emit a warning saying libhdfs++ wasn't built.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14024) RBF: ProvidedCapacityTotal json exception in NamenodeHeartbeatService

2018-10-30 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota updated HDFS-14024:
---
Attachment: HDFS-14024.1.patch

> RBF: ProvidedCapacityTotal json exception in NamenodeHeartbeatService
> -
>
> Key: HDFS-14024
> URL: https://issues.apache.org/jira/browse/HDFS-14024
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14024.0.patch, HDFS-14024.1.patch
>
>
> Routers may be proxying for a downstream name node that is NOT migrated to 
> understand "ProvidedCapacityTotal". updateJMXParameters method in 
> NamenodeHeartbeatService should handle this without breaking.
>  
> {code:java}
> jsonObject.getLong("MissingBlocks"),
> jsonObject.getLong("PendingReplicationBlocks"),
> jsonObject.getLong("UnderReplicatedBlocks"),
> jsonObject.getLong("PendingDeletionBlocks"),
> jsonObject.getLong("ProvidedCapacityTotal"));
> {code}
> One way to do this is to create a JSON wrapper which gives back some default 
> if the JSON node is not found.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14016) ObserverReadProxyProvider should enable observer read by default

2018-10-30 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669065#comment-16669065
 ] 

Chen Liang commented on HDFS-14016:
---

Posted v001 patch to just set it to true. We may want to make it configurable, 
or even remove it in the future; added as a TODO comment.
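
Not the literal v001 patch, but the change described is essentially of this 
shape:

{code:java}
// Sketch of the described change (not the actual patch): default the flag to
// true so eligible read calls are routed to an Observer out of the box.
// TODO: expose via a configuration key, or remove the flag entirely.
private volatile boolean observerReadEnabled = true;
{code}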

> ObserverReadProxyProvider should enable observer read by default
> 
>
> Key: HDFS-14016
> URL: https://issues.apache.org/jira/browse/HDFS-14016
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14016-HDFS-12943.001.patch
>
>
> Currently in {{ObserverReadProxyProvider#invoke}}, the code only checks 
> whether to talk to the Observer when {{observerReadEnabled && 
> isRead(method)}} is true; otherwise it always talks to the Active. The 
> issue is that the flag can currently only be set through 
> {{setObserverReadEnabled}}, which is used by tests only, so observer reads 
> are always disabled in deployment with no way to enable them. We may want 
> to either expose a configuration key, hard-code the flag to true so it can 
> only be changed for testing purposes, or simply remove this variable. This 
> is closely related to HDFS-13923.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14016) ObserverReadProxyProvider should enable observer read by default

2018-10-30 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14016:
--
Status: Patch Available  (was: Open)

> ObserverReadProxyProvider should enable observer read by default
> 
>
> Key: HDFS-14016
> URL: https://issues.apache.org/jira/browse/HDFS-14016
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14016-HDFS-12943.001.patch
>
>
> Currently in {{ObserverReadProxyProvider#invoke}}, the code only checks 
> whether to talk to the Observer when {{observerReadEnabled && 
> isRead(method)}} is true; otherwise it always talks to the Active. The 
> issue is that the flag can currently only be set through 
> {{setObserverReadEnabled}}, which is used by tests only, so observer reads 
> are always disabled in deployment with no way to enable them. We may want 
> to either expose a configuration key, hard-code the flag to true so it can 
> only be changed for testing purposes, or simply remove this variable. This 
> is closely related to HDFS-13923.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14038) Expose HdfsDataOutputStreamBuilder to include Spark in LimitedPrivate

2018-10-30 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-14038:


 Summary: Expose HdfsDataOutputStreamBuilder to include Spark in 
LimitedPrivate
 Key: HDFS-14038
 URL: https://issues.apache.org/jira/browse/HDFS-14038
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Xiao Chen


In SPARK-25855 / 
https://github.com/apache/spark/pull/22881#issuecomment-434359237, Spark 
prefers to create its event log files with replication (instead of EC). 
Currently this has to be done by some casting / reflection to get a 
DistributedFileSystem object (or the {{HdfsDataOutputStreamBuilder}} 
subclass of it).

We should officially expose this for Spark's usage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14016) ObserverReadProxyProvider should enable observer read by default

2018-10-30 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14016:
--
Attachment: HDFS-14016-HDFS-12943.001.patch

> ObserverReadProxyProvider should enable observer read by default
> 
>
> Key: HDFS-14016
> URL: https://issues.apache.org/jira/browse/HDFS-14016
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14016-HDFS-12943.001.patch
>
>
> Currently in {{ObserverReadProxyProvider#invoke}}, the code only checks 
> whether to talk to the Observer when {{observerReadEnabled && 
> isRead(method)}} is true; otherwise it always talks to the Active. The 
> issue is that the flag can currently only be set through 
> {{setObserverReadEnabled}}, which is used by tests only, so observer reads 
> are always disabled in deployment with no way to enable them. We may want 
> to either expose a configuration key, hard-code the flag to true so it can 
> only be changed for testing purposes, or simply remove this variable. This 
> is closely related to HDFS-13923.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14035) NN status discovery does not leverage delegation token

2018-10-30 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14035:
--
Attachment: HDFS-14035-HDFS-12943.002.patch

> NN status discovery does not leverage delegation token
> --
>
> Key: HDFS-14035
> URL: https://issues.apache.org/jira/browse/HDFS-14035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14035-HDFS-12943.001.patch, 
> HDFS-14035-HDFS-12943.002.patch
>
>
> Currently ObserverReadProxyProvider uses 
> {{HAServiceProtocol#getServiceStatus}} to get the status of each NN. 
> However, {{HAServiceProtocol}} does not leverage delegation tokens, so when 
> an application runs on YARN and the YARN node manager makes this 
> getServiceStatus call, token authentication will fail, causing the 
> application to fail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13794) [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method.

2018-10-30 Thread Ewan Higgs (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HDFS-13794:
--
Status: Patch Available  (was: Open)

> [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method.
> --
>
> Key: HDFS-13794
> URL: https://issues.apache.org/jira/browse/HDFS-13794
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13794-HDFS-12090.001.patch, 
> HDFS-13794-HDFS-12090.002.patch, HDFS-13794-HDFS-12090.003.patch, 
> HDFS-13794-HDFS-12090.004.patch
>
>
> When updating the BlockAliasMap we may need to deal with deleted blocks. 
> Otherwise the BlockAliasMap will grow indefinitely(!).
> Therefore, the BlockAliasMap.Writer needs a method for removing blocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13794) [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method.

2018-10-30 Thread Ewan Higgs (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HDFS-13794:
--
Attachment: HDFS-13794-HDFS-12090.004.patch

> [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method.
> --
>
> Key: HDFS-13794
> URL: https://issues.apache.org/jira/browse/HDFS-13794
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13794-HDFS-12090.001.patch, 
> HDFS-13794-HDFS-12090.002.patch, HDFS-13794-HDFS-12090.003.patch, 
> HDFS-13794-HDFS-12090.004.patch
>
>
> When updating the BlockAliasMap we may need to deal with deleted blocks. 
> Otherwise the BlockAliasMap will grow indefinitely(!).
> Therefore, the BlockAliasMap.Writer needs a method for removing blocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13794) [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method.

2018-10-30 Thread Ewan Higgs (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HDFS-13794:
--
Status: Open  (was: Patch Available)

> [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method.
> --
>
> Key: HDFS-13794
> URL: https://issues.apache.org/jira/browse/HDFS-13794
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13794-HDFS-12090.001.patch, 
> HDFS-13794-HDFS-12090.002.patch, HDFS-13794-HDFS-12090.003.patch, 
> HDFS-13794-HDFS-12090.004.patch
>
>
> When updating the BlockAliasMap we may need to deal with deleted blocks. 
> Otherwise the BlockAliasMap will grow indefinitely(!).
> Therefore, the BlockAliasMap.Writer needs a method for removing blocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13794) [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method.

2018-10-30 Thread Ewan Higgs (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669024#comment-16669024
 ] 

Ewan Higgs commented on HDFS-13794:
---

004
- Rebased the code onto updates in HDFS-12090.

> [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method.
> --
>
> Key: HDFS-13794
> URL: https://issues.apache.org/jira/browse/HDFS-13794
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13794-HDFS-12090.001.patch, 
> HDFS-13794-HDFS-12090.002.patch, HDFS-13794-HDFS-12090.003.patch, 
> HDFS-13794-HDFS-12090.004.patch
>
>
> When updating the BlockAliasMap we may need to deal with deleted blocks. 
> Otherwise the BlockAliasMap will grow indefinitely(!).
> Therefore, the BlockAliasMap.Writer needs a method for removing blocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13989) RBF: Add FSCK to the Router

2018-10-30 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reassigned HDFS-13989:
--

Assignee: Íñigo Goiri

> RBF: Add FSCK to the Router
> ---
>
> Key: HDFS-13989
> URL: https://issues.apache.org/jira/browse/HDFS-13989
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13989.001.patch
>
>
> The namenode supports FSCK.
> The Router should be able to forward FSCK to the right Namenode and aggregate 
> the results.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14024) RBF: ProvidedCapacityTotal json exception in NamenodeHeartbeatService

2018-10-30 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669015#comment-16669015
 ] 

Íñigo Goiri commented on HDFS-14024:


[~crh] can you make the patch for HDFS-13891?

> RBF: ProvidedCapacityTotal json exception in NamenodeHeartbeatService
> -
>
> Key: HDFS-14024
> URL: https://issues.apache.org/jira/browse/HDFS-14024
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14024.0.patch
>
>
> Routers may be proxying for a downstream name node that is NOT migrated to 
> understand "ProvidedCapacityTotal". updateJMXParameters method in 
> NamenodeHeartbeatService should handle this without breaking.
>  
> {code:java}
> jsonObject.getLong("MissingBlocks"),
> jsonObject.getLong("PendingReplicationBlocks"),
> jsonObject.getLong("UnderReplicatedBlocks"),
> jsonObject.getLong("PendingDeletionBlocks"),
> jsonObject.getLong("ProvidedCapacityTotal"));
> {code}
> One way to do this is to create a JSON wrapper which gives back some default 
> if the JSON node is not found.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13852) RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured in RBFConfigKeys.

2018-10-30 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669012#comment-16669012
 ] 

Íñigo Goiri commented on HDFS-13852:


[~hfyang20071] can you provide a patch for HDFS-13891?

> RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured 
> in RBFConfigKeys.
> -
>
> Key: HDFS-13852
> URL: https://issues.apache.org/jira/browse/HDFS-13852
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Affects Versions: 3.1.0, 2.9.1, 3.0.1
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Attachments: HDFS-13852.001.patch, HDFS-13852.002.patch, 
> HDFS-13852.003.patch, HDFS-13852.004.patch
>
>
> In NamenodeBeanMetrics the router invokes 'getDataNodeReport' periodically, 
> and we can set dfs.federation.router.dn-report.time-out and 
> dfs.federation.router.dn-report.cache-expire to avoid timeouts. But when we 
> start the router, FederationMetrics also invokes the method to get node 
> usage; if a timeout error happens there, we cannot adjust the timeout 
> parameter. The timeout in FederationMetrics and NamenodeBeanMetrics should 
> be the same.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13869) RBF: Handle NPE for NamenodeBeanMetrics#getFederationMetrics()

2018-10-30 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669013#comment-16669013
 ] 

Íñigo Goiri commented on HDFS-13869:


[~RANith] can you provide a patch based on HDFS-13891?

> RBF: Handle NPE for NamenodeBeanMetrics#getFederationMetrics()
> --
>
> Key: HDFS-13869
> URL: https://issues.apache.org/jira/browse/HDFS-13869
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Surendra Singh Lilhore
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-13869-002.diff, HDFS-13869-003.diff, 
> HDFS-13869-004.patch, HDFS-13869-005.patch, HDFS-13869-006.patch, 
> HDFS-13869-007.patch, HDFS-13869.patch
>
>
> {code:java}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.federation.metrics.NamenodeBeanMetrics.getUsed(NamenodeBeanMetrics.java:205)
>   at 
> org.apache.hadoop.hdfs.server.federation.metrics.NamenodeBeanMetrics.getCapacityUsed(NamenodeBeanMetrics.java:519)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13912) RBF: Add methods to RouterAdmin to set order, read only, and chown

2018-10-30 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669011#comment-16669011
 ] 

Íñigo Goiri commented on HDFS-13912:


[~ayushtkn], do you still think this is needed after HDFS-13326?

> RBF: Add methods to RouterAdmin to set order, read only, and chown
> --
>
> Key: HDFS-13912
> URL: https://issues.apache.org/jira/browse/HDFS-13912
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13912-01.patch, HDFS-13912-02.patch
>
>
> Presently there are methods only for quotas on existing mount entries.
> Similar methods can be added for the other remaining parameters that are 
> also dynamic and require changes. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13834) RBF: Connection creator thread should catch Throwable

2018-10-30 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669009#comment-16669009
 ] 

Íñigo Goiri commented on HDFS-13834:


[~crh] can you rebase to HDFS-13891 and add the unit test triggering the 
{{UnknownHostException}}?

> RBF: Connection creator thread should catch Throwable
> -
>
> Key: HDFS-13834
> URL: https://issues.apache.org/jira/browse/HDFS-13834
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Critical
> Attachments: HDFS-13834.0.patch, HDFS-13834.1.patch
>
>
> The connection creator thread is a single thread that is responsible for 
> creating all downstream namenode connections.
> This is a very critical thread and hence should not die under 
> exception/error scenarios.
> We saw this behavior in production systems, where the thread died and left 
> the router process in a bad state.
> The thread should also catch generic errors/exceptions.
> {code}
> @Override
> public void run() {
>   while (this.running) {
> try {
>   ConnectionPool pool = this.queue.take();
>   try {
> int total = pool.getNumConnections();
> int active = pool.getNumActiveConnections();
> if (pool.getNumConnections() < pool.getMaxSize() &&
> active >= MIN_ACTIVE_RATIO * total) {
>   ConnectionContext conn = pool.newConnection();
>   pool.addConnection(conn);
> } else {
>   LOG.debug("Cannot add more than {} connections to {}",
>   pool.getMaxSize(), pool);
> }
>   } catch (IOException e) {
> LOG.error("Cannot create a new connection", e);
>   }
> } catch (InterruptedException e) {
>   LOG.error("The connection creator was interrupted");
>   this.running = false;
> }
>   }
> }
> {code}
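
A hedged sketch of the proposed hardening (the existing pool-growth logic is 
elided; only the final catch-all is new):

{code:java}
// Sketch of the proposed fix: a catch-all so unexpected problems (e.g. an
// UnknownHostException surfacing as a RuntimeException, or an Error) are
// logged instead of silently killing the single creator thread.
@Override
public void run() {
  while (this.running) {
    try {
      ConnectionPool pool = this.queue.take();
      try {
        // ... existing pool growth logic (newConnection/addConnection) ...
      } catch (IOException e) {
        LOG.error("Cannot create a new connection", e);
      }
    } catch (InterruptedException e) {
      LOG.error("The connection creator was interrupted");
      this.running = false;
    } catch (Throwable t) {
      LOG.error("Unexpected error in the connection creator, ignoring", t);
    }
  }
}
{code}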



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


