[jira] [Commented] (HDDS-9) Add GRPC protocol interceptors for Ozone Block Token

2018-11-17 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-9?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16690819#comment-16690819
 ] 

Jitendra Nath Pandey commented on HDDS-9:
-

The patch looks ok to me. Just one comment:

    Block token identifier should also contain the length and block commit 
sequence (BCS) of the block. The length at the datanode may be more than what 
is recorded at OM.
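
For illustration, a minimal sketch of what the extended identifier could carry; the class and field names are assumptions for this example, not the actual HDDS-9 patch:

{code:java}
// Hypothetical sketch: a block token identifier extended with the block
// length and block commit sequence id (BCS), per the comment above.
// All names here are illustrative, not the actual HDDS code.
public class BlockTokenIdentifierSketch {
  private String ownerId;   // user the token was issued to
  private String blockId;   // block the token authorizes access to
  private long expiryDate;  // token validity end, millis since epoch
  private long maxLength;   // length recorded at OM; the datanode copy
                            // may be longer, so verify against this
  private long bcsId;       // block commit sequence id of the block

  public long getMaxLength() { return maxLength; }
  public long getBcsId()     { return bcsId; }
}
{code}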

> Add GRPC protocol interceptors for Ozone Block Token
> 
>
> Key: HDDS-9
> URL: https://issues.apache.org/jira/browse/HDDS-9
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-9-HDDS-4.001.patch, HDDS-9-HDDS-4.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-844) Add logic for pipeline teardown after timeout

2018-11-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16690818#comment-16690818
 ] 

Hadoop QA commented on HDDS-844:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 20s{color} | {color:orange} root: The patch generated 1 new + 1 unchanged - 
0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
59s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-hdds_server-scm generated 1 new + 5 unchanged - 
0 fixed = 6 total (was 5) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 45s{color} 
| {color:red} common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
35s{color} | {color:green} client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 40s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 42s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 40s{color} 
| {color:red} tools in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |

[jira] [Commented] (HDDS-844) Add logic for pipeline teardown after timeout

2018-11-17 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16690801#comment-16690801
 ] 

Lokesh Jain commented on HDDS-844:
--

[~msingh] Thanks for reviewing the patch! v5 patch addresses your comments.

> Add logic for pipeline teardown after timeout
> -
>
> Key: HDDS-844
> URL: https://issues.apache.org/jira/browse/HDDS-844
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-844.001.patch, HDDS-844.002.patch, 
> HDDS-844.003.patch, HDDS-844.004.patch, HDDS-844.005.patch
>
>
> On receiving a pipeline action we close the pipeline and wait for all 
> containers to get closed. Currently the pipeline is destroyed on the 
> datanodes only after all the containers have been closed. There is a 
> possibility that containers never reach the CLOSED state if there is a 
> two-node failure. In such scenarios the pipeline needs to be destroyed and 
> removed from SCM after a timeout.
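
As a rough illustration of the flow described above, a minimal sketch with stand-in types; it is not the actual SCM code from the patch (which, per the review thread, lives in classes such as RatisPipelineUtils):

{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Minimal sketch of the timeout-based teardown described above; the
// Pipeline interface is a stand-in for illustration, not the real SCM type.
public final class PipelineTeardownSketch {

  interface Pipeline {
    void close();                  // stop allocating containers, start closing
    boolean allContainersClosed(); // true once every container is CLOSED
    void destroy();                // tear down on datanodes, remove from SCM
  }

  private static final ScheduledExecutorService SCHEDULER =
      Executors.newSingleThreadScheduledExecutor();

  // Close the pipeline now; if its containers are still not CLOSED when the
  // timeout fires (e.g. after a two-node failure), destroy it anyway.
  static void finalizeAndDestroy(Pipeline pipeline, long timeoutMs) {
    pipeline.close();
    SCHEDULER.schedule(() -> {
      if (!pipeline.allContainersClosed()) {
        pipeline.destroy();
      }
    }, timeoutMs, TimeUnit.MILLISECONDS);
  }
}
{code}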



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-844) Add logic for pipeline teardown after timeout

2018-11-17 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-844:
-
Attachment: HDDS-844.005.patch

> Add logic for pipeline teardown after timeout
> -
>
> Key: HDDS-844
> URL: https://issues.apache.org/jira/browse/HDDS-844
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-844.001.patch, HDDS-844.002.patch, 
> HDDS-844.003.patch, HDDS-844.004.patch, HDDS-844.005.patch
>
>
> On receiving a pipeline action we close the pipeline and wait for all 
> containers to get closed. Currently the pipeline is destroyed on the 
> datanodes only after all the containers have been closed. There is a 
> possibility that containers never reach the CLOSED state if there is a 
> two-node failure. In such scenarios the pipeline needs to be destroyed and 
> removed from SCM after a timeout.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-847) TestBlockDeletion is failing

2018-11-17 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain reassigned HDDS-847:


Assignee: Lokesh Jain

> TestBlockDeletion is failing
> 
>
> Key: HDDS-847
> URL: https://issues.apache.org/jira/browse/HDDS-847
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Lokesh Jain
>Priority: Major
>
> {{TestBlockDeletion}} is failing with the below exception
> {code}
> [ERROR] 
> testBlockDeletion(org.apache.hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion)
>   Time elapsed: 28.017 s  <<< FAILURE!
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion.testBlockDeletion(TestBlockDeletion.java:165)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14085) RBF: LS command for root shows wrong owner and permission information.

2018-11-17 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16690800#comment-16690800
 ] 

Ayush Saxena commented on HDFS-14085:
-

Thanks [~elgoiri] for the comment.

HDFS-13891 did change the owner, group and permissions, but it made them the 
same as what we see in dfsrouteradmin.

IIUC, the permission and owner info shown in dfsrouteradmin is admin level: it 
describes who can modify or change those mount table entries, not the file 
system level.

The dfs -ls permissions are at the user level, i.e. who can read or write 
using that path. (For the user that is just a directory in his FS; for us it 
also redirects to an actual directory in some namespace.)

The permissions that dfsrouteradmin shows are not even checked for general 
file system operations; we just take the request and forward it to the NN, 
where the permissions on the actual directory are checked.

If we call -getfacl on the same mount point entry we get a different owner 
and permissions from what dfs -ls now shows (getfacl actually returns the 
real permissions).

If we look at the TODO

{code:java}
  // TODO support users, it should be the user for the pointed folder
{code}

it also says it should be that of the pointed folder, not the admin level for 
the mount point.

Please correct me if I have misunderstood something in the context. :) 
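
To make the distinction concrete, a rough sketch of the behavior argued for above; all types below are stand-ins for illustration, not the actual Router code:

{code:java}
import java.util.Map;

// Rough sketch: for ls on a mount entry, report the owner/group/permission
// of the destination directory in the downstream namespace, not the
// admin-level mount table entry.
public class MountPointStatusSketch {

  interface MountEntry {
    String destNameservice(); // namespace the mount point redirects to
    String destPath();        // actual directory in that namespace
  }

  interface Namespace {
    FileStatusLite getFileStatus(String path); // asks the NN for the status
  }

  static class FileStatusLite {
    String owner;
    String group;
    short permission; // e.g. 0755, the real FS-level permission
  }

  // ls on / should show, for each mount entry, the status of the pointed-to
  // folder (cf. the TODO quoted above), not the mount-table ACLs.
  static FileStatusLite statusForLs(MountEntry entry,
      Map<String, Namespace> namespaces) {
    return namespaces.get(entry.destNameservice())
        .getFileStatus(entry.destPath());
  }
}
{code}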

> RBF: LS command for root shows wrong owner and permission information.
> --
>
> Key: HDFS-14085
> URL: https://issues.apache.org/jira/browse/HDFS-14085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>
> The LS command for / lists all the mount entries, but the permission 
> displayed is the default permission (777) and the owner and group info is 
> the same as that of the user calling it, which actually should be the same 
> as that of the destination of the mount point.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-837) Persist originNodeId as part of .container file in datanode

2018-11-17 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16690736#comment-16690736
 ] 

Jitendra Nath Pandey commented on HDDS-837:
---

+1 for the patch.

> Persist originNodeId as part of .container file in datanode
> ---
>
> Key: HDDS-837
> URL: https://issues.apache.org/jira/browse/HDDS-837
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-837.000.patch, HDDS-837.wip.patch
>
>
> To differentiate the replicas of QUASI_CLOSED containers we need an 
> {{originNodeId}} field. With this field, we can uniquely identify a 
> QUASI_CLOSED container replica. This will be needed when we want to CLOSE a 
> QUASI_CLOSED container.
> This field will be set by the node where the container is created, stored 
> as part of the {{.container}} file, and sent as part of the ContainerReport 
> to SCM.
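
As a minimal sketch of the idea (the class and method names here are illustrative, not the actual ContainerData code):

{code:java}
// Sketch only: persisting originNodeId with the container metadata and
// exposing it for the container report, as the description proposes.
public class ContainerDataSketch {
  private long containerId;
  private String state;        // e.g. QUASI_CLOSED
  private String originNodeId; // datanode that created the container;
                               // written once and never changed

  public String getOriginNodeId() { return originNodeId; }

  // (containerId, originNodeId) uniquely identifies a QUASI_CLOSED replica,
  // which is what SCM needs when deciding how to CLOSE the container.
  public String replicaKey() {
    return containerId + ":" + originNodeId;
  }
}
{code}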



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14083) libhdfs logs errors when opened FS doesn't support ByteBufferReadable

2018-11-17 Thread Pranay Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16690669#comment-16690669
 ] 

Pranay Singh commented on HDFS-14083:
-

There is an existing test case in the file mentioned below that hits this 
case; the problem is that this binary does not seem to be executed as part of 
the test run. So I have made changes so that it is executed as part of the 
routine tests. That said, I have seen some failures while running this test 
binary, so I have filed HDFS-14086 to address them.

src/main/native/libhdfs-tests/test_libhdfs_ops.c

> libhdfs logs errors when opened FS doesn't support ByteBufferReadable
> -
>
> Key: HDFS-14083
> URL: https://issues.apache.org/jira/browse/HDFS-14083
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.0.3
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Minor
> Attachments: HADOOP-15928.001.patch, HADOOP-15928.002.patch, 
> HDFS-14083.003.patch, HDFS-14083.004.patch, HDFS-14083.005.patch
>
>
> Problem:
> 
> There is excessive error logging when a file is opened by libhdfs 
> (DFSClient/HDFS) in an S3 environment. The issue arises because byte-buffer 
> read is not supported in the S3 environment (see HADOOP-14603, "S3A input 
> stream to support ByteBufferReadable").
> The following message is printed repeatedly to the error log / STDERR:
> {code}
> UnsupportedOperationException: Byte-buffer read unsupported by input 
> streamjava.lang.UnsupportedOperationException: Byte-buffer read unsupported 
> by input stream
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
> {code}
> h3. Root cause
> After investigating the issue, it appears that the above exception is 
> printed because, when a file is opened, {{hdfsOpenFileImpl()}} calls 
> {{readDirect()}}, which hits this exception.
> h3. Fix:
> Since the hdfs client does not initiate the byte-buffer read itself (it 
> happens implicitly), we should not generate the error log when opening a 
> file.
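
For reference, the Java-side capability check involved looks roughly like this; a sketch against the public FSDataInputStream API, while the actual libhdfs fix is in the native client:

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;

import org.apache.hadoop.fs.ByteBufferReadable;
import org.apache.hadoop.fs.FSDataInputStream;

// FSDataInputStream.read(ByteBuffer) throws UnsupportedOperationException
// unless the wrapped stream implements ByteBufferReadable (S3A does not,
// pending HADOOP-14603). libhdfs's readDirect() path hits exactly this,
// so probing it at open time should not be logged as an error.
public class ByteBufferReadSketch {
  static int readSome(FSDataInputStream in, ByteBuffer buf)
      throws IOException {
    InputStream wrapped = in.getWrappedStream();
    if (wrapped instanceof ByteBufferReadable) {
      return in.read(buf); // byte-buffer read is supported, use it
    }
    // Fall back to the byte[] read path instead of surfacing the
    // UnsupportedOperationException as an error.
    byte[] tmp = new byte[buf.remaining()];
    int n = in.read(tmp, 0, tmp.length);
    if (n > 0) {
      buf.put(tmp, 0, n);
    }
    return n;
  }
}
{code}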



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14083) libhdfs logs errors when opened FS doesn't support ByteBufferReadable

2018-11-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16690648#comment-16690648
 ] 

Hadoop QA commented on HDFS-14083:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
32m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m 
10s{color} | {color:green} hadoop-hdfs-native-client in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14083 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948605/HDFS-14083.005.patch |
| Optional Tests |  dupname  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 51a419745819 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e56d9f2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25551/testReport/ |
| Max. process+thread count | 443 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25551/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> libhdfs logs errors when opened FS doesn't support ByteBufferReadable
> -
>
> Key: HDFS-14083
> URL: https://issues.apache.org/jira/browse/HDFS-14083
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.0.3
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Minor
> Attachments: HADOOP-15928.001.patch, HADOOP-15928.002.patch, 
> HDFS-14083.003.patch, HDFS-14083.004.patch, HDFS-14083.005.patch
>
>
> Problem:
> 
> There is excessive error logging when a file is opened by libhdfs 
> (DFSClient/HDFS) in S3 

[jira] [Commented] (HDDS-844) Add logic for pipeline teardown after timeout

2018-11-17 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16690644#comment-16690644
 ] 

Mukul Kumar Singh commented on HDDS-844:


Thanks for working on this [~ljain]. The v4 patch looks good to me. Some minor 
comments; I am +1 on the patch once they are addressed.

1) The ASF license is missing for RatisPipelineUtils.java
2) There are some checkstyle issues.

> Add logic for pipeline teardown after timeout
> -
>
> Key: HDDS-844
> URL: https://issues.apache.org/jira/browse/HDDS-844
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-844.001.patch, HDDS-844.002.patch, 
> HDDS-844.003.patch, HDDS-844.004.patch
>
>
> On receiving a pipeline action we close the pipeline and wait for all 
> containers to get closed. Currently the pipeline is destroyed on the 
> datanodes only after all the containers have been closed. There is a 
> possibility that containers never reach the CLOSED state if there is a 
> two-node failure. In such scenarios the pipeline needs to be destroyed and 
> removed from SCM after a timeout.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14083) libhdfs logs errors when opened FS doesn't support ByteBufferReadable

2018-11-17 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HDFS-14083:

Attachment: HDFS-14083.005.patch

> libhdfs logs errors when opened FS doesn't support ByteBufferReadable
> -
>
> Key: HDFS-14083
> URL: https://issues.apache.org/jira/browse/HDFS-14083
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.0.3
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Minor
> Attachments: HADOOP-15928.001.patch, HADOOP-15928.002.patch, 
> HDFS-14083.003.patch, HDFS-14083.004.patch, HDFS-14083.005.patch
>
>
> Problem:
> 
> There is excessive error logging when a file is opened by libhdfs 
> (DFSClient/HDFS) in an S3 environment. The issue arises because byte-buffer 
> read is not supported in the S3 environment (see HADOOP-14603, "S3A input 
> stream to support ByteBufferReadable").
> The following message is printed repeatedly to the error log / STDERR:
> {code}
> UnsupportedOperationException: Byte-buffer read unsupported by input 
> streamjava.lang.UnsupportedOperationException: Byte-buffer read unsupported 
> by input stream
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
> {code}
> h3. Root cause
> After investigating the issue, it appears that the above exception is 
> printed because, when a file is opened, {{hdfsOpenFileImpl()}} calls 
> {{readDirect()}}, which hits this exception.
> h3. Fix:
> Since the hdfs client does not initiate the byte-buffer read itself (it 
> happens implicitly), we should not generate the error log when opening a 
> file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14063) Support noredirect param for CREATE/APPEND/OPEN/GETFILECHECKSUM in HttpFS

2018-11-17 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16690563#comment-16690563
 ] 

Weiwei Yang commented on HDFS-14063:


[~elgoiri] thanks, I guess that's fine as this is not a bug fix. Thanks for the 
contribution.

> Support noredirect param for CREATE/APPEND/OPEN/GETFILECHECKSUM in HttpFS
> -
>
> Key: HDFS-14063
> URL: https://issues.apache.org/jira/browse/HDFS-14063
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: 3.3.0, 3.2.1
>
> Attachments: HDFS-14063.000.patch, HDFS-14063.001.patch, 
> HDFS-14063.002.patch, HDFS-14063.003.patch, HDFS-14063.004.patch
>
>
> Currently HttpFS always redirects the URI. However, the WebUI uses 
> noredirect=true which means it only wants a response with the location. This 
> is properly done in {{NamenodeWebHDFSMethods}}. HttpFS should do the same.
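
Conceptually, the requested handling looks like the following JAX-RS style sketch (not the actual HttpFS code; the JSON shape mirrors what WebHDFS returns for noredirect):

{code:java}
import java.net.URI;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

// Sketch of the requested behavior: when noredirect=true, answer 200 OK
// with a JSON body carrying the location (as NamenodeWebHDFSMethods does)
// instead of issuing a 307 redirect.
public class NoRedirectSketch {
  static Response locationResponse(URI target, boolean noredirect) {
    if (noredirect) {
      String json = "{\"Location\":\"" + target + "\"}";
      return Response.ok(json).type(MediaType.APPLICATION_JSON).build();
    }
    return Response.temporaryRedirect(target).build();
  }
}
{code}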



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-844) Add logic for pipeline teardown after timeout

2018-11-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16690476#comment-16690476
 ] 

Hadoop QA commented on HDDS-844:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
31s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m  
1s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m  7s{color} | {color:orange} root: The patch generated 1 new + 1 unchanged - 
0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-hdds_server-scm generated 1 new + 5 unchanged - 
0 fixed = 6 total (was 5) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 46s{color} 
| {color:red} common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 40s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 39s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 33s{color} 
| {color:red} tools in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
40s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |

[jira] [Updated] (HDDS-844) Add logic for pipeline teardown after timeout

2018-11-17 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-844:
-
Attachment: HDDS-844.004.patch

> Add logic for pipeline teardown after timeout
> -
>
> Key: HDDS-844
> URL: https://issues.apache.org/jira/browse/HDDS-844
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-844.001.patch, HDDS-844.002.patch, 
> HDDS-844.003.patch, HDDS-844.004.patch
>
>
> On receiving a pipeline action we close the pipeline and wait for all 
> containers to get closed. Currently the pipeline is destroyed on the 
> datanodes only after all the containers have been closed. There is a 
> possibility that containers never reach the CLOSED state if there is a 
> two-node failure. In such scenarios the pipeline needs to be destroyed and 
> removed from SCM after a timeout.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-718) Introduce new SCM Commands to list and close Pipelines

2018-11-17 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16690440#comment-16690440
 ] 

Lokesh Jain commented on HDDS-718:
--

Uploaded rebased v2 patch.

> Introduce new SCM Commands to list and close Pipelines
> --
>
> Key: HDDS-718
> URL: https://issues.apache.org/jira/browse/HDDS-718
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Blocker
> Attachments: HDDS-718.001.patch, HDDS-718.002.patch
>
>
> We need to have a tear-down pipeline command in SCM so that an administrator 
> can close/destroy a pipeline in the cluster.
> HDDS-695 brought the commands into branch ozone-0.3; this Jira is for 
> porting them to trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-718) Introduce new SCM Commands to list and close Pipelines

2018-11-17 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-718:
-
Attachment: HDDS-718.002.patch

> Introduce new SCM Commands to list and close Pipelines
> --
>
> Key: HDDS-718
> URL: https://issues.apache.org/jira/browse/HDDS-718
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Blocker
> Attachments: HDDS-718.001.patch, HDDS-718.002.patch
>
>
> We need to have a tear-down pipeline command in SCM so that an administrator 
> can close/destroy a pipeline in the cluster.
> HDDS-695 brought the commands into branch ozone-0.3; this Jira is for 
> porting them to trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-844) Add logic for pipeline teardown after timeout

2018-11-17 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16690438#comment-16690438
 ] 

Lokesh Jain commented on HDDS-844:
--

Uploaded rebased v4 patch.

> Add logic for pipeline teardown after timeout
> -
>
> Key: HDDS-844
> URL: https://issues.apache.org/jira/browse/HDDS-844
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-844.001.patch, HDDS-844.002.patch, 
> HDDS-844.003.patch, HDDS-844.004.patch
>
>
> On receiving a pipeline action we close the pipeline and wait for all 
> containers to get closed. Currently the pipeline is destroyed on the 
> datanodes only after all the containers have been closed. There is a 
> possibility that containers never reach the CLOSED state if there is a 
> two-node failure. In such scenarios the pipeline needs to be destroyed and 
> removed from SCM after a timeout.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org