[jira] [Assigned] (HDFS-14719) Correct the safemode threshold value in BlockManagerSafeMode

2019-08-11 Thread hemanthboyina (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina reassigned HDFS-14719:


Assignee: hemanthboyina

> Correct the safemode threshold value in BlockManagerSafeMode
> 
>
> Key: HDFS-14719
> URL: https://issues.apache.org/jira/browse/HDFS-14719
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: hemanthboyina
>Priority: Major
>
> BlockManagerSafeMode parses the safemode threshold incorrectly. It stores a 
> float value in a double, which can give a different result in some cases. If 
> we store the value "0.999f" in a double, it will be converted to 
> "0.999128746033".
> {code:java}
> this.threshold = conf.getFloat(DFS_NAMENODE_SAFEMODE_THRESHOLD_PCT_KEY,
> DFS_NAMENODE_SAFEMODE_THRESHOLD_PCT_DEFAULT);{code}






[jira] [Created] (HDFS-14719) Correct the safemode threshold value in BlockManagerSafeMode

2019-08-11 Thread Surendra Singh Lilhore (JIRA)
Surendra Singh Lilhore created HDFS-14719:
-

 Summary: Correct the safemode threshold value in 
BlockManagerSafeMode
 Key: HDFS-14719
 URL: https://issues.apache.org/jira/browse/HDFS-14719
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.1.1
Reporter: Surendra Singh Lilhore


BlockManagerSafeMode parses the safemode threshold incorrectly. It stores a 
float value in a double, which can give a different result in some cases. If 
we store the value "0.999f" in a double, it will be converted to 
"0.999128746033".
{code:java}
this.threshold = conf.getFloat(DFS_NAMENODE_SAFEMODE_THRESHOLD_PCT_KEY,
DFS_NAMENODE_SAFEMODE_THRESHOLD_PCT_DEFAULT);{code}
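
A minimal standalone sketch of the float-to-double widening described above (class 
and variable names are illustrative only, not from any patch):
{code:java}
public class FloatThresholdDemo {
  public static void main(String[] args) {
    float thresholdAsFloat = 0.999f;
    // Implicit widening keeps the float's rounding error rather than 0.999 exactly.
    double thresholdAsDouble = thresholdAsFloat;

    System.out.println(thresholdAsFloat);            // 0.999
    System.out.println(thresholdAsDouble);           // 0.9990000128746033
    System.out.println(thresholdAsDouble == 0.999d); // false
  }
}
{code}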






[jira] [Work logged] (HDDS-1105) Add mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone Manager.

2019-08-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1105?focusedWorklogId=292870=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292870
 ]

ASF GitHub Bot logged work on HDDS-1105:


Author: ASF GitHub Bot
Created on: 12/Aug/19 05:24
Start Date: 12/Aug/19 05:24
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1259: HDDS-1105 : Add 
mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone Manager
URL: https://github.com/apache/hadoop/pull/1259#issuecomment-520302152
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 9 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 74 | Maven dependency ordering for branch |
   | +1 | mvninstall | 645 | trunk passed |
   | +1 | compile | 379 | trunk passed |
   | +1 | checkstyle | 71 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 844 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 167 | trunk passed |
   | 0 | spotbugs | 419 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 621 | trunk passed |
   | -0 | patch | 452 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for patch |
   | +1 | mvninstall | 534 | the patch passed |
   | +1 | compile | 365 | the patch passed |
   | +1 | javac | 365 | the patch passed |
   | +1 | checkstyle | 79 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 647 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 84 | hadoop-ozone generated 6 new + 20 unchanged - 0 fixed 
= 26 total (was 20) |
   | +1 | findbugs | 622 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 287 | hadoop-hdds in the patch passed. |
   | -1 | unit | 3810 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 9564 | |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.ozone.client.rpc.TestKeyInputStream |
   |   | hadoop.ozone.TestContainerOperations |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1259/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1259 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux efb479a9583c 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cf5d895 |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1259/4/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1259/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1259/4/testReport/ |
   | Max. process+thread count | 4104 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/ozone-manager hadoop-ozone/ozone-recon U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1259/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the

[jira] [Commented] (HDFS-10323) transient deleteOnExit failure in ViewFileSystem due to close() ordering

2019-08-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-10323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904849#comment-16904849
 ] 

Hadoop QA commented on HDFS-10323:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HDFS-10323 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-10323 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12896812/HDFS-10323.003.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27473/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> transient deleteOnExit failure in ViewFileSystem due to close() ordering
> 
>
> Key: HDFS-10323
> URL: https://issues.apache.org/jira/browse/HDFS-10323
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 2.6.0, 2.7.4, 3.0.0-beta1
>Reporter: Ben Podgursky
>Assignee: Wenxin He
>Priority: Major
> Attachments: HDFS-10323.001.patch, HDFS-10323.002.patch, 
> HDFS-10323.003.patch
>
>
> After switching to a ViewFileSystem, fs.deleteOnExit calls began 
> failing frequently, displaying this error on failure:
> 16/04/21 13:56:24 INFO fs.FileSystem: Ignoring failure to deleteOnExit for 
> path /tmp/delete_on_exit_test_123/a438afc0-a3ca-44f1-9eb5-010ca4a62d84
> Since FileSystem swallows the underlying error, it is difficult to be sure 
> what it is, but I believe the ViewFileSystem's child FileSystems are being 
> close()'d before the ViewFileSystem itself, because ClientFinalizer closes 
> FileSystems in random order. When the ViewFileSystem then tries to close(), it 
> forwards the delete() calls to the appropriate child and fails because that 
> child is already closed.
> I'm unsure how to write an actual Hadoop test to reproduce this, since it 
> involves testing behavior on actual JVM shutdown. However, I can verify that 
> while
> {code:java}
> fs.deleteOnExit(randomTemporaryDir);
> {code}
> regularly (~50% of the time) fails to delete the temporary directory, this 
> code:
> {code:java}
> ViewFileSystem viewfs = (ViewFileSystem) fs1;
> for (FileSystem fileSystem : viewfs.getChildFileSystems()) {
>   if (fileSystem.exists(randomTemporaryDir)) {
>     fileSystem.deleteOnExit(randomTemporaryDir);
>   }
> }
> {code}
> always successfully deletes the temporary directory on JVM shutdown.
> I am not very familiar with FileSystem inheritance hierarchies, but at first 
> glance I see two ways to fix this behavior:
> 1) ViewFileSystem could forward deleteOnExit calls to the appropriate child 
> FileSystem, and not hold onto that path itself.
> 2) FileSystem.Cache.closeAll could first close all ViewFileSystems, then all 
> other FileSystems.
> I would appreciate any thoughts on whether this seems accurate, and thoughts 
> (or help) on the fix.
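
A self-contained rendering of the workaround quoted above, for reference (an 
editorial sketch only, not the committed fix; the path and configuration values 
are placeholders):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.viewfs.ViewFileSystem;

public class DeleteOnExitWorkaround {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path tmpDir = new Path("/tmp/delete_on_exit_example"); // placeholder path

    FileSystem fs = FileSystem.get(conf);
    if (fs instanceof ViewFileSystem) {
      // Register deleteOnExit on the child FileSystem that owns the path, so the
      // registration survives the close() ordering of the ViewFileSystem itself.
      ViewFileSystem viewFs = (ViewFileSystem) fs;
      for (FileSystem child : viewFs.getChildFileSystems()) {
        if (child.exists(tmpDir)) {
          child.deleteOnExit(tmpDir);
        }
      }
    } else {
      fs.deleteOnExit(tmpDir);
    }
  }
}
{code}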






[jira] [Commented] (HDFS-10323) transient deleteOnExit failure in ViewFileSystem due to close() ordering

2019-08-11 Thread Jinglun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-10323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904843#comment-16904843
 ] 

Jinglun commented on HDFS-10323:


Hi [~vincent he], [~bpodgursky], thanks for working on this. ViewFileSystem has 
another problem: because the child filesystems are shared and 
ViewFileSystem.close() does nothing but call super.close(), it breaks the 
semantics of FileSystem.newInstance(). See HADOOP-15565.

I'm considering adding an inner cache to ViewFileSystem. It can also solve this 
deleteOnExit issue. The patch is ready in HADOOP-15565 now; do you have time to 
take a look? Let me know your thoughts, thanks.

> transient deleteOnExit failure in ViewFileSystem due to close() ordering
> 
>
> Key: HDFS-10323
> URL: https://issues.apache.org/jira/browse/HDFS-10323
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 2.6.0, 2.7.4, 3.0.0-beta1
>Reporter: Ben Podgursky
>Assignee: Wenxin He
>Priority: Major
> Attachments: HDFS-10323.001.patch, HDFS-10323.002.patch, 
> HDFS-10323.003.patch
>
>
> After switching to a ViewFileSystem, fs.deleteOnExit calls began 
> failing frequently, displaying this error on failure:
> 16/04/21 13:56:24 INFO fs.FileSystem: Ignoring failure to deleteOnExit for 
> path /tmp/delete_on_exit_test_123/a438afc0-a3ca-44f1-9eb5-010ca4a62d84
> Since FileSystem swallows the underlying error, it is difficult to be sure 
> what it is, but I believe the ViewFileSystem's child FileSystems are being 
> close()'d before the ViewFileSystem itself, because ClientFinalizer closes 
> FileSystems in random order. When the ViewFileSystem then tries to close(), it 
> forwards the delete() calls to the appropriate child and fails because that 
> child is already closed.
> I'm unsure how to write an actual Hadoop test to reproduce this, since it 
> involves testing behavior on actual JVM shutdown. However, I can verify that 
> while
> {code:java}
> fs.deleteOnExit(randomTemporaryDir);
> {code}
> regularly (~50% of the time) fails to delete the temporary directory, this 
> code:
> {code:java}
> ViewFileSystem viewfs = (ViewFileSystem) fs1;
> for (FileSystem fileSystem : viewfs.getChildFileSystems()) {
>   if (fileSystem.exists(randomTemporaryDir)) {
>     fileSystem.deleteOnExit(randomTemporaryDir);
>   }
> }
> {code}
> always successfully deletes the temporary directory on JVM shutdown.
> I am not very familiar with FileSystem inheritance hierarchies, but at first 
> glance I see two ways to fix this behavior:
> 1) ViewFileSystem could forward deleteOnExit calls to the appropriate child 
> FileSystem, and not hold onto that path itself.
> 2) FileSystem.Cache.closeAll could first close all ViewFileSystems, then all 
> other FileSystems.
> I would appreciate any thoughts on whether this seems accurate, and thoughts 
> (or help) on the fix.






[jira] [Assigned] (HDDS-165) Add unit test for OzoneHddsDatanodeService

2019-08-11 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su reassigned HDDS-165:
-

Assignee: kevin su

> Add unit test for OzoneHddsDatanodeService
> --
>
> Key: HDDS-165
> URL: https://issues.apache.org/jira/browse/HDDS-165
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Nanda kumar
>Assignee: kevin su
>Priority: Major
>  Labels: newbie, test
>
> We have to add a unit test for the {{OzoneHddsDatanodeService}} class.






[jira] [Commented] (HDFS-13821) RBF: Add dfs.federation.router.mount-table.cache.enable so that users can disable cache

2019-08-11 Thread xuzq (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904827#comment-16904827
 ] 

xuzq commented on HDFS-13821:
-

Thanks [~elgoiri].

I found the same problem while pressure-testing RBF: with the default settings, 
ProcessingAvgTime is about 20ms/min at 2.48M ops/min.

After disabling the cache, ProcessingAvgTime drops to 0.05ms/min.
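
A minimal illustration of what disabling the cache looks like, assuming only the 
configuration key named in this issue's title (in a real deployment the property 
would normally go in hdfs-site.xml rather than in code):
{code:java}
// Sketch: turn off the Router's mount-table cache.
org.apache.hadoop.conf.Configuration conf =
    new org.apache.hadoop.conf.Configuration();
conf.setBoolean("dfs.federation.router.mount-table.cache.enable", false);
{code}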

> RBF: Add dfs.federation.router.mount-table.cache.enable so that users can 
> disable cache
> ---
>
> Key: HDFS-13821
> URL: https://issues.apache.org/jira/browse/HDFS-13821
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.1.0, 2.9.1, 3.0.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 2.9.2, 3.0.4, 3.1.2
>
> Attachments: HDFS-13821.001.patch, HDFS-13821.002.patch, 
> HDFS-13821.003.patch, HDFS-13821.004.patch, HDFS-13821.005.patch, 
> HDFS-13821.006.patch, HDFS-13821.007.patch, HDFS-13821.008.patch, 
> LocalCacheTest.java, image-2018-08-13-11-27-49-023.png
>
>
> When I tested RBF, I found a performance problem.
> ProxyAvgTime from Ganglia was very high, so I ran jstack on the Router and 
> got the following stack frames:
> {quote}
>    java.lang.Thread.State: WAITING (parking)
>     at sun.misc.Unsafe.park(Native Method)
>     - parking to wait for  <0x0005c264acd8> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
>     at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>     at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>     at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
>     at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
>     at 
> java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
>     at 
> java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
>     at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2249)
>     at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
>     at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
>     at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4764)
>     at 
> org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver.getDestinationForPath(MountTableResolver.java:380)
>     at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2104)
>     at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2087)
>     at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getListing(RouterRpcServer.java:1050)
>     at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:640)
>     at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
> {quote}
> Many threads are blocked on *LocalCache*.
> After disabling the cache, ProxyAvgTime drops as shown below:
>  !image-2018-08-13-11-27-49-023.png! 






[jira] [Commented] (HDFS-14625) Make DefaultAuditLogger class in FSnamesystem to Abstract

2019-08-11 Thread hemanthboyina (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904825#comment-16904825
 ] 

hemanthboyina commented on HDFS-14625:
--

Updated the patch.
Please check, [~jojochuang].

> Make DefaultAuditLogger class in FSnamesystem to Abstract 
> --
>
> Key: HDFS-14625
> URL: https://issues.apache.org/jira/browse/HDFS-14625
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14625 (1).patch, HDFS-14625(2).patch, 
> HDFS-14625.003.patch, HDFS-14625.004.patch, HDFS-14625.patch
>
>
> As per +HDFS-13270+ (Audit logger for Router), we can make DefaultAuditLogger 
> in FSNamesystem abstract and common.






[jira] [Comment Edited] (HDFS-13270) RBF: Router audit logger

2019-08-11 Thread hemanthboyina (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904824#comment-16904824
 ] 

hemanthboyina edited comment on HDFS-13270 at 8/12/19 3:40 AM:
---

Thanks for the comment, [~xuzq_zander].
The updated patch HDFS-13270.002 includes the changes done in -HDFS-14685-.

       _we should support one configuration to close audit log_

Yes, we can do that: when a user no longer requires the audit log, we can add a 
configuration to disable it.


was (Author: hemanthboyina):
Thanks for the comment, [~xuzq_zander].
This patch [link 
title|https://issues.apache.org/jira/secure/attachment/12977228/HDFS-13270.002.patch]
 includes the changes done in -HDFS-14685-.

       _we should support one configuration to close audit log_

Yes, we can do that: when a user no longer requires the audit log, we can add a 
configuration to disable it.

> RBF: Router audit logger
> 
>
> Key: HDFS-13270
> URL: https://issues.apache.org/jira/browse/HDFS-13270
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: maobaolong
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-13270.001.patch, HDFS-13270.002.patch
>
>
> We can use a router audit logger to log the client info and command, because 
> the FSNamesystem#Auditlogger's log thinks the clients are all from the router.






[jira] [Commented] (HDFS-13270) RBF: Router audit logger

2019-08-11 Thread hemanthboyina (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904824#comment-16904824
 ] 

hemanthboyina commented on HDFS-13270:
--

Thanks for the comment, [~xuzq_zander].
This patch [link 
title|https://issues.apache.org/jira/secure/attachment/12977228/HDFS-13270.002.patch]
 includes the changes done in -HDFS-14685-.

       _we should support one configuration to close audit log_

Yes, we can do that: when a user no longer requires the audit log, we can add a 
configuration to disable it.

> RBF: Router audit logger
> 
>
> Key: HDFS-13270
> URL: https://issues.apache.org/jira/browse/HDFS-13270
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: maobaolong
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-13270.001.patch, HDFS-13270.002.patch
>
>
> We can use a router audit logger to log the client info and command, because 
> the FSNamesystem#Auditlogger's log thinks the clients are all from the router.






[jira] [Commented] (HDFS-13821) RBF: Add dfs.federation.router.mount-table.cache.enable so that users can disable cache

2019-08-11 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904823#comment-16904823
 ] 

Íñigo Goiri commented on HDFS-13821:


In the past we always cached, so to keep backwards compatibility we should keep 
this enabled by default.

> RBF: Add dfs.federation.router.mount-table.cache.enable so that users can 
> disable cache
> ---
>
> Key: HDFS-13821
> URL: https://issues.apache.org/jira/browse/HDFS-13821
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.1.0, 2.9.1, 3.0.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 2.9.2, 3.0.4, 3.1.2
>
> Attachments: HDFS-13821.001.patch, HDFS-13821.002.patch, 
> HDFS-13821.003.patch, HDFS-13821.004.patch, HDFS-13821.005.patch, 
> HDFS-13821.006.patch, HDFS-13821.007.patch, HDFS-13821.008.patch, 
> LocalCacheTest.java, image-2018-08-13-11-27-49-023.png
>
>
> When I tested RBF, I found a performance problem.
> ProxyAvgTime from Ganglia was very high, so I ran jstack on the Router and 
> got the following stack frames:
> {quote}
>    java.lang.Thread.State: WAITING (parking)
>     at sun.misc.Unsafe.park(Native Method)
>     - parking to wait for  <0x0005c264acd8> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
>     at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>     at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>     at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
>     at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
>     at 
> java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
>     at 
> java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
>     at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2249)
>     at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
>     at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
>     at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4764)
>     at 
> org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver.getDestinationForPath(MountTableResolver.java:380)
>     at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2104)
>     at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2087)
>     at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getListing(RouterRpcServer.java:1050)
>     at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:640)
>     at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
> {quote}
> Many threads are blocked on *LocalCache*.
> After disabling the cache, ProxyAvgTime drops as shown below:
>  !image-2018-08-13-11-27-49-023.png! 






[jira] [Updated] (HDFS-14708) TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in trunk

2019-08-11 Thread Lisheng Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14708:
---
Attachment: HDFS-14708-002.patch

> TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in 
> trunk
> 
>
> Key: HDFS-14708
> URL: https://issues.apache.org/jira/browse/HDFS-14708
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-14708-001.patch, HDFS-14708-002.patch
>
>
> {code:java}
> [ERROR] 
> testBlockReportSucceedsWithLargerLengthLimit(org.apache.hadoop.hdfs.server.datanode.TestLargeBlockReport)
>   Time elapsed: 47.956 s  <<< ERROR!
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
> java.lang.IllegalStateException: 
> com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
> large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase 
> the size limit.
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.runBlockOp(BlockManager.java:5011)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReport(NameNodeRpcServer.java:1581)
>   at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.blockReport(DatanodeProtocolServerSideTranslatorPB.java:181)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:31664)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:529)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1001)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:929)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2921)
> Caused by: java.lang.IllegalStateException: 
> com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
> large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase 
> the size limit.
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:424)
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:396)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.reportDiffSorted(BlockManager.java:2952)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2787)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2655)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.lambda$blockReport$0(NameNodeRpcServer.java:1582)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:5089)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:5068)
> Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol 
> message was too large.  May be malicious.  Use 
> CodedInputStream.setSizeLimit() to increase the size limit.
>   at 
> com.google.protobuf.InvalidProtocolBufferException.sizeLimitExceeded(InvalidProtocolBufferException.java:110)
>   at 
> com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:755)
>   at 
> com.google.protobuf.CodedInputStream.readRawByte(CodedInputStream.java:769)
>   at 
> com.google.protobuf.CodedInputStream.readRawVarint64(CodedInputStream.java:462)
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:420)
>   ... 8 more
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1553)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1499)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1396)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
>   at com.sun.proxy.$Proxy25.blockReport(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.blockReport(DatanodeProtocolClientSideTranslatorPB.java:218)
>   at 
> 

[jira] [Commented] (HDFS-14708) TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in trunk

2019-08-11 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904817#comment-16904817
 ] 

Ayush Saxena commented on HDFS-14708:
-

[~leosun08] There is a whitespace warning; please give it a check.
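
As an aside on the exception quoted below: the guard it mentions is protobuf's 
default message-size limit, and CodedInputStream.setSizeLimit() is the API it 
names. A minimal sketch of raising that limit before parsing, with a placeholder 
input (editorial illustration only, not the proposed fix for this test):
{code:java}
import com.google.protobuf.CodedInputStream;
import java.io.ByteArrayInputStream;

public class SizeLimitDemo {
  public static void main(String[] args) throws Exception {
    byte[] serialized = new byte[0]; // placeholder for a serialized block report
    CodedInputStream in =
        CodedInputStream.newInstance(new ByteArrayInputStream(serialized));
    // Allow messages larger than the default limit before parsing begins.
    in.setSizeLimit(Integer.MAX_VALUE);
  }
}
{code}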

> TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in 
> trunk
> 
>
> Key: HDFS-14708
> URL: https://issues.apache.org/jira/browse/HDFS-14708
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-14708-001.patch
>
>
> {code:java}
> [ERROR] 
> testBlockReportSucceedsWithLargerLengthLimit(org.apache.hadoop.hdfs.server.datanode.TestLargeBlockReport)
>   Time elapsed: 47.956 s  <<< ERROR!
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
> java.lang.IllegalStateException: 
> com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
> large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase 
> the size limit.
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.runBlockOp(BlockManager.java:5011)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReport(NameNodeRpcServer.java:1581)
>   at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.blockReport(DatanodeProtocolServerSideTranslatorPB.java:181)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:31664)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:529)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1001)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:929)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2921)
> Caused by: java.lang.IllegalStateException: 
> com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
> large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase 
> the size limit.
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:424)
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:396)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.reportDiffSorted(BlockManager.java:2952)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2787)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2655)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.lambda$blockReport$0(NameNodeRpcServer.java:1582)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:5089)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:5068)
> Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol 
> message was too large.  May be malicious.  Use 
> CodedInputStream.setSizeLimit() to increase the size limit.
>   at 
> com.google.protobuf.InvalidProtocolBufferException.sizeLimitExceeded(InvalidProtocolBufferException.java:110)
>   at 
> com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:755)
>   at 
> com.google.protobuf.CodedInputStream.readRawByte(CodedInputStream.java:769)
>   at 
> com.google.protobuf.CodedInputStream.readRawVarint64(CodedInputStream.java:462)
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:420)
>   ... 8 more
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1553)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1499)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1396)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
>   at com.sun.proxy.$Proxy25.blockReport(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.blockReport(DatanodeProtocolClientSideTranslatorPB.java:218)
>   at 
> 

[jira] [Commented] (HDFS-14674) [SBN read] Got an unexpected txid when tail editlog

2019-08-11 Thread zy.jordan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904812#comment-16904812
 ] 

zy.jordan commented on HDFS-14674:
--

Hi [~wangzhaohui], I am still confused: why does the SBN skip the remaining 
transactions? Is it due to the misaligned editlogs?

> [SBN read] Got an unexpected txid when tail editlog
> ---
>
> Key: HDFS-14674
> URL: https://issues.apache.org/jira/browse/HDFS-14674
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Blocker
> Attachments: HDFS-14674-001.patch, HDFS-14674-003.patch, 
> HDFS-14674-004.patch, HDFS-14674-005.patch, HDFS-14674-006.patch, image.png
>
>
> Add the following configuration
> !image-2019-07-26-11-34-23-405.png!
> error:
> {code:java}
> //
> [2019-07-17T11:50:21.048+08:00] [INFO] [Edit log tailer] : replaying edit 
> log: 1/20512836 transactions completed. (0%) [2019-07-17T11:50:21.059+08:00] 
> [INFO] [Edit log tailer] : Edits file 
> http://ip/getJournal?jid=ns1003=232056426162=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232056426162=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232056426162=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH
>  of size 3126782311 edits # 500 loaded in 3 seconds 
> [2019-07-17T11:50:21.059+08:00] [INFO] [Edit log tailer] : Reading 
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@51ceb7bc 
> expecting start txid #232056752162 [2019-07-17T11:50:21.059+08:00] [INFO] 
> [Edit log tailer] : Start loading edits file 
> http://ip/getJournal?ipjid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH
>  maxTxnipsToRead = 500 [2019-07-17T11:50:21.059+08:00] [INFO] [Edit log 
> tailer] : Fast-forwarding stream 
> 'http://ip/getJournal?jid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH'
>  to transaction ID 232056751662 [2019-07-17T11:50:21.059+08:00] [INFO] [Edit 
> log tailer] ip: Fast-forwarding stream 
> 'http://ip/getJournal?jid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH'
>  to transaction ID 232056751662 [2019-07-17T11:50:21.061+08:00] [ERROR] [Edit 
> log tailer] : Unknown error encountered while tailing edits. Shutting down 
> standby NN. java.io.IOException: There appears to be a gap in the edit log. 
> We expected txid 232056752162, but got txid 232077264498. at 
> org.apache.hadoop.hdfs.server.namenode.MetaRecoveryContext.editLogLoaderPrompt(MetaRecoveryContext.java:94)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:239)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:161)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:895) at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:321)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>  at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:414)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
>  [2019-07-17T11:50:21.064+08:00] [INFO] [Edit log tailer] : Exiting with 
> status 1 [2019-07-17T11:50:21.066+08:00] [INFO] [Thread-1] : SHUTDOWN_MSG: 
> / SHUTDOWN_MSG: 
> Shutting down NameNode at ip 
> /
> {code}
>  
> If the dfs.ha.tail-edits.max-txns-per-lock value is 500, then once the namenode 
> has loaded 500 transactions from an editlog it moves on to the next editlog, 
> even though the current editlog contains more than 500 transactions. So the 
> namenode gets an unexpected txid when tailing the editlog.
>  
>  
> {code:java}
> //
> [2019-07-17T11:50:21.059+08:00] [INFO] [Edit log tailer] : Edits file 
> http://ip/getJournal?jid=ns1003=232056426162=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> 

[jira] [Created] (HDDS-1953) Remove pipeline persistent in SCM

2019-08-11 Thread Sammi Chen (JIRA)
Sammi Chen created HDDS-1953:


 Summary: Remove pipeline persistent in SCM
 Key: HDDS-1953
 URL: https://issues.apache.org/jira/browse/HDDS-1953
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Sammi Chen
Assignee: Sammi Chen


Currently, SCM persists pipelines, together with datanode information, in its 
local metastore. After an SCM restart, it reloads all the pipelines from the 
metastore. If any datanode information changes during the SCM's lifecycle, the 
persisted pipeline is not updated.






[jira] [Work logged] (HDDS-1865) Use "ozone.network.topology.aware.read" to control both RPC client and server side logic

2019-08-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1865?focusedWorklogId=292807=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292807
 ]

ASF GitHub Bot logged work on HDDS-1865:


Author: ASF GitHub Bot
Created on: 12/Aug/19 02:24
Start Date: 12/Aug/19 02:24
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on issue #1184: HDDS-1865. Use 
"ozone.network.topology.aware.read" to control both RPC client and server side 
logic
URL: https://github.com/apache/hadoop/pull/1184#issuecomment-520283952
 
 
   Thanks @xiaoyuyao  for the review effort. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 292807)
Time Spent: 2h 20m  (was: 2h 10m)

> Use "ozone.network.topology.aware.read" to control both RPC client and server 
> side logic 
> -
>
> Key: HDDS-1865
> URL: https://issues.apache.org/jira/browse/HDDS-1865
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>







[jira] [Commented] (HDFS-13270) RBF: Router audit logger

2019-08-11 Thread xuzq (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904795#comment-16904795
 ] 

xuzq commented on HDFS-13270:
-

[~hemanthboyina] Thanks for the patch.

There is a bug in the audit log, fixed in 
-[HDFS-14685|https://issues.apache.org/jira/browse/HDFS-14685]-.

We should also support a configuration to disable the audit log. I found that 
the audit log had a great impact on the router's performance.

> RBF: Router audit logger
> 
>
> Key: HDFS-13270
> URL: https://issues.apache.org/jira/browse/HDFS-13270
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: maobaolong
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-13270.001.patch, HDFS-13270.002.patch
>
>
> We can use a router audit logger to log the client info and command, because 
> the FSNamesystem#Auditlogger's log thinks the clients are all from the router.






[jira] [Commented] (HDFS-14708) TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in trunk

2019-08-11 Thread Lisheng Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904793#comment-16904793
 ] 

Lisheng Sun commented on HDFS-14708:


Hi [~ayushtkn], [~jojochuang], could you help review this patch? Thank you.

> TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in 
> trunk
> 
>
> Key: HDFS-14708
> URL: https://issues.apache.org/jira/browse/HDFS-14708
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-14708-001.patch
>
>
> {code:java}
> [ERROR] 
> testBlockReportSucceedsWithLargerLengthLimit(org.apache.hadoop.hdfs.server.datanode.TestLargeBlockReport)
>   Time elapsed: 47.956 s  <<< ERROR!
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
> java.lang.IllegalStateException: 
> com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
> large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase 
> the size limit.
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.runBlockOp(BlockManager.java:5011)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReport(NameNodeRpcServer.java:1581)
>   at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.blockReport(DatanodeProtocolServerSideTranslatorPB.java:181)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:31664)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:529)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1001)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:929)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2921)
> Caused by: java.lang.IllegalStateException: 
> com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
> large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase 
> the size limit.
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:424)
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:396)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.reportDiffSorted(BlockManager.java:2952)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2787)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2655)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.lambda$blockReport$0(NameNodeRpcServer.java:1582)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:5089)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:5068)
> Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol 
> message was too large.  May be malicious.  Use 
> CodedInputStream.setSizeLimit() to increase the size limit.
>   at 
> com.google.protobuf.InvalidProtocolBufferException.sizeLimitExceeded(InvalidProtocolBufferException.java:110)
>   at 
> com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:755)
>   at 
> com.google.protobuf.CodedInputStream.readRawByte(CodedInputStream.java:769)
>   at 
> com.google.protobuf.CodedInputStream.readRawVarint64(CodedInputStream.java:462)
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:420)
>   ... 8 more
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1553)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1499)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1396)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
>   at com.sun.proxy.$Proxy25.blockReport(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.blockReport(DatanodeProtocolClientSideTranslatorPB.java:218)
>   at 
> 

[jira] [Commented] (HDFS-13821) RBF: Add dfs.federation.router.mount-table.cache.enable so that users can disable cache

2019-08-11 Thread xuzq (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904781#comment-16904781
 ] 

xuzq commented on HDFS-13821:
-

I'm sorry to raise this again, but I think 
FEDERATION_MOUNT_TABLE_CACHE_ENABLE_DEFAULT should be false, i.e. the cache 
should be turned off by default.

> RBF: Add dfs.federation.router.mount-table.cache.enable so that users can 
> disable cache
> ---
>
> Key: HDFS-13821
> URL: https://issues.apache.org/jira/browse/HDFS-13821
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.1.0, 2.9.1, 3.0.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 2.9.2, 3.0.4, 3.1.2
>
> Attachments: HDFS-13821.001.patch, HDFS-13821.002.patch, 
> HDFS-13821.003.patch, HDFS-13821.004.patch, HDFS-13821.005.patch, 
> HDFS-13821.006.patch, HDFS-13821.007.patch, HDFS-13821.008.patch, 
> LocalCacheTest.java, image-2018-08-13-11-27-49-023.png
>
>
> When I tested RBF, I found a performance problem.
> ProxyAvgTime from Ganglia was very high, so I ran jstack on the Router and 
> got the following stack frames:
> {quote}
>    java.lang.Thread.State: WAITING (parking)
>     at sun.misc.Unsafe.park(Native Method)
>     - parking to wait for  <0x0005c264acd8> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
>     at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>     at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>     at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
>     at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
>     at 
> java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
>     at 
> java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
>     at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2249)
>     at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
>     at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
>     at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4764)
>     at 
> org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver.getDestinationForPath(MountTableResolver.java:380)
>     at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2104)
>     at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2087)
>     at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getListing(RouterRpcServer.java:1050)
>     at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:640)
>     at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
> {quote}
> Many threads are blocked on *LocalCache*.
> After disabling the cache, ProxyAvgTime drops as shown below:
>  !image-2018-08-13-11-27-49-023.png! 






[jira] [Comment Edited] (HDFS-14423) Percent (%) and plus (+) characters no longer work in WebHDFS

2019-08-11 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904728#comment-16904728
 ] 

Masatake Iwasaki edited comment on HDFS-14423 at 8/12/19 1:06 AM:
--

java.net.URLDecoder converts "\+" into " " (since java.net.URLEncoder converts 
" " into "\+"). The code path of {{create}} seems to convert "a+b" into "a b" 
while {{mkdir}} does not.


was (Author: iwasakims):
java.net.URLDecoder converts "\+" into " " (since java.net.URLEncoder converts 
" " into "\+"). The code path of {{open}} seems to convert "a+b" into "a b" 
while {{mkdir}} does not.

> Percent (%) and plus (+) characters no longer work in WebHDFS
> -
>
> Key: HDFS-14423
> URL: https://issues.apache.org/jira/browse/HDFS-14423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0, 3.1.2
> Environment: Ubuntu 16.04, but I believe this is irrelevant.
>Reporter: Jing Wang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HDFS-14423.001.patch
>
>
> The following commands with percent (%) no longer work starting with version 
> 3.1:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/%
> $ hadoop/bin/hdfs dfs -cat webhdfs://localhost/%
> cat: URLDecoder: Incomplete trailing escape (%) pattern
> {code}
> Also, plus (+) characters get turned into spaces when doing DN operations:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/a+b
> $ hadoop/bin/hdfs dfs -mkdir webhdfs://localhost/c+d
> $ hadoop/bin/hdfs dfs -ls /
> Found 4 items
> -rw-r--r--   1 jing supergroup  0 2019-04-12 11:20 /a b
> drwxr-xr-x   - jing supergroup  0 2019-04-12 11:21 /c+d
> {code}
> I can confirm that these commands work correctly on 2.9 and 3.0. Also, the 
> usual hdfs:// client works as expected.
> I suspect a relation with HDFS-13176 or HDFS-13582, but I'm not sure what the 
> right fix is. Note that Hive uses % to escape special characters in partition 
> values, so banning % might not be a good option. For example, Hive will 
> create paths like {{table_name/partition_key=%2F}} when 
> {{partition_key='/'}}.






[jira] [Commented] (HDFS-14423) Percent (%) and plus (+) characters no longer work in WebHDFS

2019-08-11 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904774#comment-16904774
 ] 

Masatake Iwasaki commented on HDFS-14423:
-

The client debug log shows the expected URL, in which "+" is not converted.
{noformat}
$ bin/hdfs dfs -touchz 'webhdfs://localhost/a+b'
2019-08-12 09:10:04,929 TRACE web.WebHdfsFileSystem: 
url=http://localhost:9870/webhdfs/v1/a+b?op=GETFILESTATUS=iwasakims
2019-08-12 09:10:06,896 TRACE web.WebHdfsFileSystem: 
url=http://localhost:9870/webhdfs/v1/?op=GETFILESTATUS=iwasakims
2019-08-12 09:10:06,995 TRACE web.WebHdfsFileSystem: 
url=http://localhost:9870/webhdfs/v1/a+b?op=CREATE=iwasakims=134217728=4096=true=644=1=666
{noformat}

The namenode log shows that GETFILESTATUS (for checking whether the file already 
exists) was applied to "a b" because URLDecoder#decode is called in 
NamenodeWebHdfsMethods#get.
{noformat}
2019-08-12 09:10:06,638 DEBUG 
org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger: --- 
logged event for top service: allowed=true ugi=iwasakims (auth:SIMPLE) 
ip=/127.0.0.1   cmd=getfileinfo src=/a bdst=nullper
m=null
2019-08-12 09:10:06,639 TRACE 
org.apache.hadoop.hdfs.web.resources.ExceptionHandler: GOT EXCEPITION
java.io.FileNotFoundException: File does not exist: /a b
at 
org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.get(NamenodeWebHdfsMethods.java:1158)
at 
org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$4.run(NamenodeWebHdfsMethods.java:1075)
at 
org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$4.run(NamenodeWebHdfsMethods.java:1070)
at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:78)
at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:30)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2921)
{noformat}

The succeeding CREATE was given "a+b" as the path since NamenodeWebHdfsMethods#put 
does not call URLDecoder#decode. The NameNode just redirects the request to the 
chosen DataNode.
{noformat}
2019-08-12 09:10:07,016 TRACE 
org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods: 
HTTP PUT: op=CREATE, path=a+b, ugi=iwasakims (auth:SIMPLE), 
user.name=iwasakims, doas=null, accesstime=-1, blocksize=134217728, 
buffersize=4096, createflag=, flag=, modificationtime=-1, noredirect=false, 
overwrite=true, permission=644, renameoptions=, replication=1, 
unmaskedpermission=666
2019-08-12 09:10:07,051 TRACE 
org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods: 
redirectURI=http://localhost:9864/webhdfs/v1/a+b?op=CREATE=iwasakims=localhost:8020=134217728=4096==true=true=644=1=666
{noformat}

The DFSClient on the redirected datanode created "a b" because 
WebHdfsHandler#channelRead0 always calls URLDecoder#decode on the path.
{noformat}
2019-08-12 09:10:07,725 DEBUG org.apache.hadoop.hdfs.DFSClient: /a b: masked={ 
masked: rw-r--r--, unmasked: rw-rw-rw- }
2019-08-12 09:10:07,860 DEBUG org.apache.hadoop.hdfs.DFSClient: 
computePacketChunkSize: src=/a b, chunkSize=516, chunksPerPacket=126, 
packetSize=65016
2019-08-12 09:10:07,867 DEBUG org.apache.hadoop.hdfs.client.impl.LeaseRenewer: 
Lease renewer daemon for [DFSClient_NONMAPREDUCE_683464448_30] with renew id 1 
started
2019-08-12 09:10:07,869 INFO datanode.webhdfs: 127.0.0.1 PUT 
/webhdfs/v1/a+b?op=CREATE=iwasakims=localhost:8020=134217728=4096==true=true=644=1=666
 201
{noformat}
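
In short, a minimal sketch of the asymmetry traced above (the decode calls are 
simplified stand-ins for the code paths named in the logs, not the actual Hadoop 
code):

{code:java}
import java.net.URLDecoder;

public class WebHdfsPathAsymmetry {
  public static void main(String[] args) throws Exception {
    String raw = "/a+b";
    // NamenodeWebHdfsMethods#get decodes, so GETFILESTATUS looks up "/a b"
    String nnGetPath = URLDecoder.decode(raw, "UTF-8");          // "/a b"
    // NamenodeWebHdfsMethods#put does not decode, so CREATE is redirected with "/a+b"
    String nnPutPath = raw;                                       // "/a+b"
    // WebHdfsHandler#channelRead0 on the DataNode decodes again, creating "/a b"
    String dnCreatePath = URLDecoder.decode(nnPutPath, "UTF-8");  // "/a b"
    System.out.println(nnGetPath + " / " + nnPutPath + " / " + dnCreatePath);
  }
}
{code}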

> Percent (%) and plus (+) characters no longer work in WebHDFS
> -
>
> Key: HDFS-14423
> URL: https://issues.apache.org/jira/browse/HDFS-14423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0, 3.1.2
> Environment: Ubuntu 16.04, but I believe this is irrelevant.
>Reporter: Jing Wang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HDFS-14423.001.patch
>
>
> The following commands with percent (%) no longer work starting with version 
> 3.1:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/%
> $ hadoop/bin/hdfs dfs -cat webhdfs://localhost/%
> cat: URLDecoder: Incomplete trailing escape (%) pattern
> {code}
> Also, plus (+ ) characters get turned into spaces when doing DN operations:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/a+b
> $ hadoop/bin/hdfs dfs -mkdir webhdfs://localhost/c+d
> $ hadoop/bin/hdfs dfs -ls /
> Found 4 items
> -rw-r--r--   1 jing supergroup  0 2019-04-12 11:20 /a b
> drwxr-xr-x   - jing supergroup  0 

[jira] [Work logged] (HDDS-1105) Add mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone Manager.

2019-08-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1105?focusedWorklogId=292793=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292793
 ]

ASF GitHub Bot logged work on HDDS-1105:


Author: ASF GitHub Bot
Created on: 12/Aug/19 00:51
Start Date: 12/Aug/19 00:51
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1259: HDDS-1105 : Add 
mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone Manager
URL: https://github.com/apache/hadoop/pull/1259#issuecomment-520275779
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 44 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 9 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 87 | Maven dependency ordering for branch |
   | +1 | mvninstall | 769 | trunk passed |
   | +1 | compile | 480 | trunk passed |
   | +1 | checkstyle | 73 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 923 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 163 | trunk passed |
   | 0 | spotbugs | 433 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 629 | trunk passed |
   | -0 | patch | 473 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for patch |
   | +1 | mvninstall | 552 | the patch passed |
   | +1 | compile | 380 | the patch passed |
   | +1 | javac | 380 | the patch passed |
   | +1 | checkstyle | 80 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 737 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 92 | hadoop-ozone generated 6 new + 20 unchanged - 0 fixed 
= 26 total (was 20) |
   | +1 | findbugs | 643 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 328 | hadoop-hdds in the patch passed. |
   | -1 | unit | 342 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 38 | The patch does not generate ASF License warnings. |
   | | | 6643 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.recon.tasks.TestReconTaskControllerImpl |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1259/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1259 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux d058aaa61991 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cf5d895 |
   | Default Java | 1.8.0_222 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1259/3/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1259/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1259/3/testReport/ |
   | Max. process+thread count | 1226 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/ozone-manager hadoop-ozone/ozone-recon U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1259/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 292793)
Time Spent: 50m  (was: 40m)

> Add mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone 
> Manager.
> 
>
> Key: HDDS-1105
> URL: 

[jira] [Commented] (HDFS-14708) TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in trunk

2019-08-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904758#comment-16904758
 ] 

Hadoop QA commented on HDFS-14708:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}144m 22s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}203m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSUpgrade |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.TestDistributedFileSystem |
|   | hadoop.hdfs.TestReadStripedFileWithDecodingDeletedData |
|   | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
|   | hadoop.hdfs.TestErasureCodingPolicyWithSnapshot |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
|   | hadoop.hdfs.TestReadStripedFileWithDecodingCorruptData |
|   | hadoop.hdfs.TestFileAppend3 |
|   | hadoop.hdfs.TestRenameWhileOpen |
|   | hadoop.hdfs.TestDFSStorageStateRecovery |
|   | hadoop.hdfs.TestReplaceDatanodeFailureReplication |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.TestFileChecksumCompositeCrc |
|   | hadoop.hdfs.TestAclsEndToEnd |
|   | hadoop.hdfs.TestDFSClientFailover |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14708 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977245/HDFS-14708-001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  

[jira] [Commented] (HDFS-14423) Percent (%) and plus (+) characters no longer work in WebHDFS

2019-08-11 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904735#comment-16904735
 ] 

Masatake Iwasaki commented on HDFS-14423:
-

https://tools.ietf.org/html/rfc3986#section-3.3
{quote}
URI producing applications often use the reserved characters allowed in a 
segment to delimit scheme-specific or dereference-handler-specific 
subcomponents.  For example, the semicolon (";") and equals ("=") reserved 
characters are often used to delimit parameters and parameter values applicable 
to that segment.  The comma (",") reserved character is often used for similar 
purposes.  For example, one URI producer might use a segment such as 
"name;v=1.1" to indicate a reference to version 1.1 of "name", whereas another 
might use a segment such as "name,1.1" to indicate the same.  Parameter types 
may be defined by scheme-specific semantics, but in most cases the syntax of a 
parameter is specific to the implementation of the URI's dereferencing 
algorithm.
{quote}
{noformat}
  reserved= gen-delims / sub-delims

  gen-delims  = ":" / "/" / "?" / "#" / "[" / "]" / "@"

  sub-delims  = "!" / "$" / "&" / "'" / "(" / ")"
  / "*" / "+" / "," / ";" / "="

...

  path  = path-abempty; begins with "/" or is empty
/ path-absolute   ; begins with "/" but not "//"
/ path-noscheme   ; begins with a non-colon segment
/ path-rootless   ; begins with a segment
/ path-empty  ; zero characters

  path-abempty  = *( "/" segment )
  path-absolute = "/" [ segment-nz *( "/" segment ) ]
  path-noscheme = segment-nz-nc *( "/" segment )
  path-rootless = segment-nz *( "/" segment )
  path-empty= 0
  segment   = *pchar
  segment-nz= 1*pchar
  segment-nz-nc = 1*( unreserved / pct-encoded / sub-delims / "@" )
; non-zero-length segment without any colon ":"

  pchar = unreserved / pct-encoded / sub-delims / ":" / "@"
{noformat}
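
As a side note, a quick JDK-only check (illustrative only, not part of any patch) 
that "+" is a legal pchar which java.net.URI keeps as-is in a path segment, unlike 
the form-style escaping done by URLEncoder:

{code:java}
import java.net.URI;

public class PlusIsAPchar {
  public static void main(String[] args) throws Exception {
    // the multi-argument constructor quotes only characters illegal in a path
    URI u = new URI("http", "localhost:9870", "/webhdfs/v1/a+b", "op=GETFILESTATUS", null);
    System.out.println(u.toASCIIString());
    // -> http://localhost:9870/webhdfs/v1/a+b?op=GETFILESTATUS  ('+' preserved)
  }
}
{code}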

> Percent (%) and plus (+) characters no longer work in WebHDFS
> -
>
> Key: HDFS-14423
> URL: https://issues.apache.org/jira/browse/HDFS-14423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0, 3.1.2
> Environment: Ubuntu 16.04, but I believe this is irrelevant.
>Reporter: Jing Wang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HDFS-14423.001.patch
>
>
> The following commands with percent (%) no longer work starting with version 
> 3.1:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/%
> $ hadoop/bin/hdfs dfs -cat webhdfs://localhost/%
> cat: URLDecoder: Incomplete trailing escape (%) pattern
> {code}
> Also, plus (+ ) characters get turned into spaces when doing DN operations:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/a+b
> $ hadoop/bin/hdfs dfs -mkdir webhdfs://localhost/c+d
> $ hadoop/bin/hdfs dfs -ls /
> Found 4 items
> -rw-r--r--   1 jing supergroup  0 2019-04-12 11:20 /a b
> drwxr-xr-x   - jing supergroup  0 2019-04-12 11:21 /c+d
> {code}
> I can confirm that these commands work correctly on 2.9 and 3.0. Also, the 
> usual hdfs:// client works as expected.
> I suspect a relation with HDFS-13176 or HDFS-13582, but I'm not sure what the 
> right fix is. Note that Hive uses % to escape special characters in partition 
> values, so banning % might not be a good option. For example, Hive will 
> create a paths like {{table_name/partition_key=%2F}} when 
> {{partition_key='/'}}.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14423) Percent (%) and plus (+) characters no longer work in WebHDFS

2019-08-11 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904731#comment-16904731
 ] 

Masatake Iwasaki edited comment on HDFS-14423 at 8/11/19 8:28 PM:
--

The reason why [WebHdfs file path gets truncated when having semicolon (\;) 
inside|HDFS-13176] is that Jersey treats the part after ";" as parameters. I 
think using URLEncoder/URLDecoder is not the right way for HDFS-13176. I'm 
going to try refactoring here.


was (Author: iwasakims):
The reason why WebHdfs file path gets truncated when having semicolon (\;) 
inside is that Jersey treats the part after ";" as parameters. I think using 
URLEncoder/URLDecoder is not the right way for HDFS-13176. I'm going to try 
refactoring here.
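
A minimal sketch of the alternative hinted at here (an assumption about the 
direction, not the actual patch): percent-encode only the delimiter that confuses 
Jersey's matrix-parameter parsing, instead of round-tripping the whole path through 
URLEncoder/URLDecoder:

{code:java}
// Hypothetical helper: escape only the ';' that Jersey interprets as the start
// of matrix parameters; other characters such as '+' are left untouched.
static String escapeForJersey(String path) {
  return path.replace(";", "%3B");
}
// escapeForJersey("/dir/name;v=1.1") -> "/dir/name%3Bv=1.1"
{code}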

> Percent (%) and plus (+) characters no longer work in WebHDFS
> -
>
> Key: HDFS-14423
> URL: https://issues.apache.org/jira/browse/HDFS-14423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0, 3.1.2
> Environment: Ubuntu 16.04, but I believe this is irrelevant.
>Reporter: Jing Wang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HDFS-14423.001.patch
>
>
> The following commands with percent (%) no longer work starting with version 
> 3.1:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/%
> $ hadoop/bin/hdfs dfs -cat webhdfs://localhost/%
> cat: URLDecoder: Incomplete trailing escape (%) pattern
> {code}
> Also, plus (+ ) characters get turned into spaces when doing DN operations:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/a+b
> $ hadoop/bin/hdfs dfs -mkdir webhdfs://localhost/c+d
> $ hadoop/bin/hdfs dfs -ls /
> Found 4 items
> -rw-r--r--   1 jing supergroup  0 2019-04-12 11:20 /a b
> drwxr-xr-x   - jing supergroup  0 2019-04-12 11:21 /c+d
> {code}
> I can confirm that these commands work correctly on 2.9 and 3.0. Also, the 
> usual hdfs:// client works as expected.
> I suspect a relation with HDFS-13176 or HDFS-13582, but I'm not sure what the 
> right fix is. Note that Hive uses % to escape special characters in partition 
> values, so banning % might not be a good option. For example, Hive will 
> create a paths like {{table_name/partition_key=%2F}} when 
> {{partition_key='/'}}.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14708) TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in trunk

2019-08-11 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14708:

Status: Patch Available  (was: Open)

> TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in 
> trunk
> 
>
> Key: HDFS-14708
> URL: https://issues.apache.org/jira/browse/HDFS-14708
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-14708-001.patch
>
>
> {code:java}
> [ERROR] 
> testBlockReportSucceedsWithLargerLengthLimit(org.apache.hadoop.hdfs.server.datanode.TestLargeBlockReport)
>   Time elapsed: 47.956 s  <<< ERROR!
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
> java.lang.IllegalStateException: 
> com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
> large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase 
> the size limit.
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.runBlockOp(BlockManager.java:5011)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReport(NameNodeRpcServer.java:1581)
>   at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.blockReport(DatanodeProtocolServerSideTranslatorPB.java:181)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:31664)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:529)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1001)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:929)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2921)
> Caused by: java.lang.IllegalStateException: 
> com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
> large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase 
> the size limit.
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:424)
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:396)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.reportDiffSorted(BlockManager.java:2952)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2787)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2655)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.lambda$blockReport$0(NameNodeRpcServer.java:1582)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:5089)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:5068)
> Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol 
> message was too large.  May be malicious.  Use 
> CodedInputStream.setSizeLimit() to increase the size limit.
>   at 
> com.google.protobuf.InvalidProtocolBufferException.sizeLimitExceeded(InvalidProtocolBufferException.java:110)
>   at 
> com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:755)
>   at 
> com.google.protobuf.CodedInputStream.readRawByte(CodedInputStream.java:769)
>   at 
> com.google.protobuf.CodedInputStream.readRawVarint64(CodedInputStream.java:462)
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:420)
>   ... 8 more
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1553)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1499)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1396)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
>   at com.sun.proxy.$Proxy25.blockReport(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.blockReport(DatanodeProtocolClientSideTranslatorPB.java:218)
>   at 
> 

[jira] [Commented] (HDFS-12243) Trash emptier should use Time.monotonicNow()

2019-08-11 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904732#comment-16904732
 ] 

Ayush Saxena commented on HDFS-12243:
-

Hi [~leosun08] [~jojochuang],
It seems a similar type of change (switching to monotonicNow) was tried at 
HADOOP-15901, but there were objections:
https://issues.apache.org/jira/browse/HADOOP-15901?focusedCommentId=16675133=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16675133

Please give it a check once!
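
For context, a minimal sketch of the kind of change under discussion (the 
surrounding names are hypothetical; see the objections in HADOOP-15901 before 
assuming it is the right fix):

{code:java}
import org.apache.hadoop.util.Time;

// Interval measurement with a monotonic clock: immune to wall-clock jumps
// (NTP corrections, manual changes), so the computed elapsed time can never
// go negative the way Time.now() differences can.
long start = Time.monotonicNow();
runEmptierCycle();                         // hypothetical helper
long elapsedMs = Time.monotonicNow() - start;
{code}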

> Trash emptier should use Time.monotonicNow()
> 
>
> Key: HDFS-12243
> URL: https://issues.apache.org/jira/browse/HDFS-12243
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-12243.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14423) Percent (%) and plus (+) characters no longer work in WebHDFS

2019-08-11 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904731#comment-16904731
 ] 

Masatake Iwasaki edited comment on HDFS-14423 at 8/11/19 8:11 PM:
--

The reason why WebHdfs file path gets truncated when having semicolon (\;) 
inside is that Jersey treats the part after ";" as parameters. I think using 
URLEncoder/URLDecoder is not the right way for HDFS-13176. I'm going to try 
refactoring here.


was (Author: iwasakims):
The reason why WebHdfs file path gets truncated when having semicolon (;) 
inside is that Jersey treats the part after ";" as parameters. I think using 
URLEncoder/URLDecoder is not the right way for HDFS-13176. I'm going to try 
refactoring here.

> Percent (%) and plus (+) characters no longer work in WebHDFS
> -
>
> Key: HDFS-14423
> URL: https://issues.apache.org/jira/browse/HDFS-14423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0, 3.1.2
> Environment: Ubuntu 16.04, but I believe this is irrelevant.
>Reporter: Jing Wang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HDFS-14423.001.patch
>
>
> The following commands with percent (%) no longer work starting with version 
> 3.1:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/%
> $ hadoop/bin/hdfs dfs -cat webhdfs://localhost/%
> cat: URLDecoder: Incomplete trailing escape (%) pattern
> {code}
> Also, plus (+ ) characters get turned into spaces when doing DN operations:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/a+b
> $ hadoop/bin/hdfs dfs -mkdir webhdfs://localhost/c+d
> $ hadoop/bin/hdfs dfs -ls /
> Found 4 items
> -rw-r--r--   1 jing supergroup  0 2019-04-12 11:20 /a b
> drwxr-xr-x   - jing supergroup  0 2019-04-12 11:21 /c+d
> {code}
> I can confirm that these commands work correctly on 2.9 and 3.0. Also, the 
> usual hdfs:// client works as expected.
> I suspect a relation with HDFS-13176 or HDFS-13582, but I'm not sure what the 
> right fix is. Note that Hive uses % to escape special characters in partition 
> values, so banning % might not be a good option. For example, Hive will 
> create a paths like {{table_name/partition_key=%2F}} when 
> {{partition_key='/'}}.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14423) Percent (%) and plus (+) characters no longer work in WebHDFS

2019-08-11 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904731#comment-16904731
 ] 

Masatake Iwasaki commented on HDFS-14423:
-

The reason why WebHdfs file path gets truncated when having semicolon (;) 
inside is that Jersey treats the part after ";" as parameters. I think using 
URLEncoder/URLDecoder is not the right way for HDFS-13176. I'm going to try 
refactoring here.

> Percent (%) and plus (+) characters no longer work in WebHDFS
> -
>
> Key: HDFS-14423
> URL: https://issues.apache.org/jira/browse/HDFS-14423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0, 3.1.2
> Environment: Ubuntu 16.04, but I believe this is irrelevant.
>Reporter: Jing Wang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HDFS-14423.001.patch
>
>
> The following commands with percent (%) no longer work starting with version 
> 3.1:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/%
> $ hadoop/bin/hdfs dfs -cat webhdfs://localhost/%
> cat: URLDecoder: Incomplete trailing escape (%) pattern
> {code}
> Also, plus (+ ) characters get turned into spaces when doing DN operations:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/a+b
> $ hadoop/bin/hdfs dfs -mkdir webhdfs://localhost/c+d
> $ hadoop/bin/hdfs dfs -ls /
> Found 4 items
> -rw-r--r--   1 jing supergroup  0 2019-04-12 11:20 /a b
> drwxr-xr-x   - jing supergroup  0 2019-04-12 11:21 /c+d
> {code}
> I can confirm that these commands work correctly on 2.9 and 3.0. Also, the 
> usual hdfs:// client works as expected.
> I suspect a relation with HDFS-13176 or HDFS-13582, but I'm not sure what the 
> right fix is. Note that Hive uses % to escape special characters in partition 
> values, so banning % might not be a good option. For example, Hive will 
> create a paths like {{table_name/partition_key=%2F}} when 
> {{partition_key='/'}}.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14423) Percent (%) and plus (+) characters no longer work in WebHDFS

2019-08-11 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904728#comment-16904728
 ] 

Masatake Iwasaki commented on HDFS-14423:
-

java.net.URLEncoder converts "\+" into " " (since java.net.URLEncoder converts 
" " into "\+"). The code path of {{open}} seems to convert "a+b" into "a b" 
while {{mkdir}} does not.

> Percent (%) and plus (+) characters no longer work in WebHDFS
> -
>
> Key: HDFS-14423
> URL: https://issues.apache.org/jira/browse/HDFS-14423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0, 3.1.2
> Environment: Ubuntu 16.04, but I believe this is irrelevant.
>Reporter: Jing Wang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HDFS-14423.001.patch
>
>
> The following commands with percent (%) no longer work starting with version 
> 3.1:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/%
> $ hadoop/bin/hdfs dfs -cat webhdfs://localhost/%
> cat: URLDecoder: Incomplete trailing escape (%) pattern
> {code}
> Also, plus (+ ) characters get turned into spaces when doing DN operations:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/a+b
> $ hadoop/bin/hdfs dfs -mkdir webhdfs://localhost/c+d
> $ hadoop/bin/hdfs dfs -ls /
> Found 4 items
> -rw-r--r--   1 jing supergroup  0 2019-04-12 11:20 /a b
> drwxr-xr-x   - jing supergroup  0 2019-04-12 11:21 /c+d
> {code}
> I can confirm that these commands work correctly on 2.9 and 3.0. Also, the 
> usual hdfs:// client works as expected.
> I suspect a relation with HDFS-13176 or HDFS-13582, but I'm not sure what the 
> right fix is. Note that Hive uses % to escape special characters in partition 
> values, so banning % might not be a good option. For example, Hive will 
> create a paths like {{table_name/partition_key=%2F}} when 
> {{partition_key='/'}}.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1927) Consolidate add/remove Acl into OzoneAclUtil class

2019-08-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1927?focusedWorklogId=292727=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292727
 ]

ASF GitHub Bot logged work on HDDS-1927:


Author: ASF GitHub Bot
Created on: 11/Aug/19 17:18
Start Date: 11/Aug/19 17:18
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1263: HDDS-1927. 
Consolidate add/remove Acl into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263#issuecomment-520245179
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 14 | Maven dependency ordering for branch |
   | +1 | mvninstall | 566 | trunk passed |
   | +1 | compile | 344 | trunk passed |
   | +1 | checkstyle | 72 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 808 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 149 | trunk passed |
   | 0 | spotbugs | 439 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 636 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | +1 | mvninstall | 537 | the patch passed |
   | +1 | compile | 354 | the patch passed |
   | +1 | javac | 354 | the patch passed |
   | +1 | checkstyle | 72 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 616 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 153 | the patch passed |
   | +1 | findbugs | 643 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 285 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2520 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 8029 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1263 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux d45fbfaf9971 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cf5d895 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/10/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/10/testReport/ |
   | Max. process+thread count | 4431 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client 
hadoop-ozone/ozone-manager hadoop-ozone/objectstore-service 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/10/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 292727)
Time Spent: 2h  (was: 1h 50m)

> Consolidate add/remove Acl into OzoneAclUtil class
> --
>
> Key: HDDS-1927
> URL: https://issues.apache.org/jira/browse/HDDS-1927
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>  

[jira] [Work logged] (HDDS-1927) Consolidate add/remove Acl into OzoneAclUtil class

2019-08-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1927?focusedWorklogId=292726=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292726
 ]

ASF GitHub Bot logged work on HDDS-1927:


Author: ASF GitHub Bot
Created on: 11/Aug/19 17:01
Start Date: 11/Aug/19 17:01
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1263: HDDS-1927. 
Consolidate add/remove Acl into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263#issuecomment-520243888
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 45 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for branch |
   | +1 | mvninstall | 588 | trunk passed |
   | +1 | compile | 390 | trunk passed |
   | +1 | checkstyle | 59 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 815 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 168 | trunk passed |
   | 0 | spotbugs | 410 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 601 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 19 | Maven dependency ordering for patch |
   | +1 | mvninstall | 568 | the patch passed |
   | +1 | compile | 358 | the patch passed |
   | +1 | javac | 358 | the patch passed |
   | +1 | checkstyle | 63 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 631 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 147 | the patch passed |
   | +1 | findbugs | 620 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 289 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2273 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 7866 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.hdds.scm.pipeline.TestNodeFailure |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.scm.node.TestQueryNode |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.hdds.scm.safemode.TestSCMSafeModeWithPipelineRules |
   |   | hadoop.hdds.scm.pipeline.TestNode2PipelineMap |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1263 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 86dd0b03e939 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cf5d895 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/9/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/9/testReport/ |
   | Max. process+thread count | 4968 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client 
hadoop-ozone/ozone-manager hadoop-ozone/objectstore-service 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/9/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 292726)
Time Spent: 1h 50m  (was: 1h 40m)

> Consolidate add/remove Acl into OzoneAclUtil class
> --
>
> Key: HDDS-1927
> URL: https://issues.apache.org/jira/browse/HDDS-1927
> Project: Hadoop Distributed Data Store
>  

[jira] [Commented] (HDFS-14654) RBF: TestRouterRpc tests are flaky

2019-08-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904699#comment-16904699
 ] 

Hadoop QA commented on HDFS-14654:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
55s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m 50s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup |
|   | hadoop.hdfs.server.federation.router.TestRouterAllResolver |
|   | hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14654 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977255/HDFS-14654.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9356d4c380a0 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cf5d895 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27470/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27470/testReport/ |
| Max. process+thread count | 1589 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 

[jira] [Commented] (HDFS-12243) Trash emptier should use Time.monotonicNow()

2019-08-11 Thread Lisheng Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904696#comment-16904696
 ] 

Lisheng Sun commented on HDFS-12243:


Hi [~jojochuang]. I see this Jira has not been updated for a long time. I 
assigned it to myself and hope you don't mind.

Uploaded the v1 patch.

> Trash emptier should use Time.monotonicNow()
> 
>
> Key: HDFS-12243
> URL: https://issues.apache.org/jira/browse/HDFS-12243
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-12243.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12243) Trash emptier should use Time.monotonicNow()

2019-08-11 Thread Lisheng Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-12243:
---
Attachment: HDFS-12243.001.patch

> Trash emptier should use Time.monotonicNow()
> 
>
> Key: HDFS-12243
> URL: https://issues.apache.org/jira/browse/HDFS-12243
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-12243.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12243) Trash emptier should use Time.monotonicNow()

2019-08-11 Thread Lisheng Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun reassigned HDFS-12243:
--

Assignee: Lisheng Sun  (was: Wei-Chiu Chuang)

> Trash emptier should use Time.monotonicNow()
> 
>
> Key: HDFS-12243
> URL: https://issues.apache.org/jira/browse/HDFS-12243
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14711) RBF: RBFMetrics throws NullPointerException if stateStore disabled

2019-08-11 Thread Chen Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904681#comment-16904681
 ] 

Chen Zhang commented on HDFS-14711:
---

Hi [~ayushtkn], I agree with you, we need to add some NULL checks. So this Jira 
looks like a duplicate of HDFS-14656.
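
A minimal hedged sketch of the kind of guard being discussed (field and method 
names are illustrative, not the actual RBFMetrics code):

{code:java}
// Illustrative guard: make each metric getter tolerate a state store that is
// disabled or failed to initialize, instead of letting the JMX call throw an NPE.
public long getFilesTotal() {
  if (membershipStore == null) {   // hypothetical field holding the store handle
    return 0;                      // sensible default when the store is unavailable
  }
  return aggregateFilesTotal(membershipStore);  // hypothetical helper
}
{code}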

> RBF: RBFMetrics throws NullPointerException if stateStore disabled
> --
>
> Key: HDFS-14711
> URL: https://issues.apache.org/jira/browse/HDFS-14711
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-14711.001.patch
>
>
> In current implementation, if \{{stateStore}} initialize fail, only log an 
> error message. Actually RBFMetrics can't work normally at this time.
> {code:java}
> 2019-08-08 22:43:58,024 [qtp812446698-28] ERROR jmx.JMXJsonServlet 
> (JMXJsonServlet.java:writeAttribute(345)) - getting attribute FilesTotal of 
> Hadoop:service=NameNode,name=FSNamesystem-2 threw an exception
> javax.management.RuntimeMBeanException: java.lang.NullPointerException
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:839)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(DefaultMBeanServerInterceptor.java:852)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:651)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
> at 
> org.apache.hadoop.jmx.JMXJsonServlet.writeAttribute(JMXJsonServlet.java:338)
> at org.apache.hadoop.jmx.JMXJsonServlet.listBeans(JMXJsonServlet.java:316)
> at org.apache.hadoop.jmx.JMXJsonServlet.doGet(JMXJsonServlet.java:210)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644)
> at 
> org.apache.hadoop.security.authentication.server.ProxyUserAuthenticationFilter.doFilter(ProxyUserAuthenticationFilter.java:104)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592)
> at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:51)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:110)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1604)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
> at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:539)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
> at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
> at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
> at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
> at 
> 

[jira] [Commented] (HDFS-14654) RBF: TestRouterRpc tests are flaky

2019-08-11 Thread Chen Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904680#comment-16904680
 ] 

Chen Zhang commented on HDFS-14654:
---

Thanks [~elgoiri] for your review. I've added some explanatory javadocs in patch 
v2; I'm not sure if it's enough, please help review it again.

> RBF: TestRouterRpc tests are flaky
> --
>
> Key: HDFS-14654
> URL: https://issues.apache.org/jira/browse/HDFS-14654
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-14654.001.patch, HDFS-14654.002.patch, error.log
>
>
> They sometimes pass and sometimes fail.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14654) RBF: TestRouterRpc tests are flaky

2019-08-11 Thread Chen Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zhang updated HDFS-14654:
--
Attachment: HDFS-14654.002.patch

> RBF: TestRouterRpc tests are flaky
> --
>
> Key: HDFS-14654
> URL: https://issues.apache.org/jira/browse/HDFS-14654
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-14654.001.patch, HDFS-14654.002.patch, error.log
>
>
> They sometimes pass and sometimes fail.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1942) Support copy during S3 multipart upload part creation

2019-08-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1942?focusedWorklogId=292715=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292715
 ]

ASF GitHub Bot logged work on HDDS-1942:


Author: ASF GitHub Bot
Created on: 11/Aug/19 15:19
Start Date: 11/Aug/19 15:19
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1279: HDDS-1942. 
Support copy during S3 multipart upload part creation
URL: https://github.com/apache/hadoop/pull/1279#issuecomment-520236387
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 65 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for branch |
   | +1 | mvninstall | 666 | trunk passed |
   | +1 | compile | 412 | trunk passed |
   | +1 | checkstyle | 73 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 970 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 172 | trunk passed |
   | 0 | spotbugs | 477 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 698 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 19 | Maven dependency ordering for patch |
   | +1 | mvninstall | 584 | the patch passed |
   | +1 | compile | 381 | the patch passed |
   | +1 | javac | 381 | the patch passed |
   | +1 | checkstyle | 71 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 671 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 156 | the patch passed |
   | +1 | findbugs | 704 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 334 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2991 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 9182 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1279 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux a8dd26e8e4a8 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cf5d895 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/1/testReport/ |
   | Max. process+thread count | 5065 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/s3gateway hadoop-ozone/dist U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 292715)
Time Spent: 20m  (was: 10m)

> Support copy during S3 multipart upload part creation
> -
>
> Key: HDDS-1942
> URL: https://issues.apache.org/jira/browse/HDDS-1942
>

[jira] [Work logged] (HDDS-1950) S3 MPU part-list call fails if there are no parts

2019-08-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1950?focusedWorklogId=292714=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292714
 ]

ASF GitHub Bot logged work on HDDS-1950:


Author: ASF GitHub Bot
Created on: 11/Aug/19 14:52
Start Date: 11/Aug/19 14:52
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1278: HDDS-1950. S3 
MPU part-list call fails if there are no parts
URL: https://github.com/apache/hadoop/pull/1278#issuecomment-520234393
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 71 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 1 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 621 | trunk passed |
   | +1 | compile | 369 | trunk passed |
   | +1 | checkstyle | 76 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 946 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 172 | trunk passed |
   | 0 | spotbugs | 434 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 630 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 552 | the patch passed |
   | +1 | compile | 378 | the patch passed |
   | +1 | javac | 378 | the patch passed |
   | +1 | checkstyle | 76 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 721 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 165 | the patch passed |
   | +1 | findbugs | 648 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 329 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2376 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 8336 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1278 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 57d845d3eded 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cf5d895 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/1/testReport/ |
   | Max. process+thread count | 5403 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 292714)
Time Spent: 20m  (was: 10m)

> S3 MPU part-list call fails if there are no parts
> -
>
> Key: HDDS-1950
> URL: https://issues.apache.org/jira/browse/HDDS-1950
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> If an S3 multipart upload is created 

[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-08-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=292707=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292707
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 11/Aug/19 14:07
Start Date: 11/Aug/19 14:07
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1277: 
HDDS-1054. List Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#discussion_r312740496
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/s3/MultipartUpload.robot
 ##
 @@ -200,3 +200,24 @@ Test Multipart Upload with the simplified aws s3 cp API
 Execute AWSS3Clicp s3://${BUCKET}/mpyawscli 
/tmp/part1.result
 Execute AWSS3Clirm s3://${BUCKET}/mpyawscli
 Compare files   /tmp/part1
/tmp/part1.result
+
+Test Multipart Upload list
+${result} = Execute AWSS3APICli create-multipart-upload 
--bucket ${BUCKET} --key listtest/key1
+${uploadID1} =  Execute and checkrc echo '${result}' | jq -r 
'.UploadId'0
+Should contain  ${result}${BUCKET}
+Should contain  ${result}listtest/key1
+Should contain  ${result}UploadId
+
+${result} = Execute AWSS3APICli create-multipart-upload 
--bucket ${BUCKET} --key listtest/key2
+${uploadID2} =  Execute and checkrc echo '${result}' | jq -r 
'.UploadId'0
+Should contain  ${result}${BUCKET}
+Should contain  ${result}listtest/key2
+Should contain  ${result}UploadId
+
+${result} = Execute AWSS3APICli list-multipart-uploads 
--bucket ${BUCKET} --prefix listtest
+Should contain  ${result}${uploadID1}
+Should contain  ${result}${uploadID2}
+
+${count} =  Execute and checkrc  echo '${result}' | jq -r 
'.Uploads | length'  0
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 292707)
Time Spent: 20m  (was: 10m)

> List Multipart uploads in a bucket
> --
>
> Key: HDDS-1054
> URL: https://issues.apache.org/jira/browse/HDDS-1054
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This Jira is to implement listing of in-progress multipart uploads in a bucket 
> in Ozone.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html]
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-08-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=292708=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292708
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 11/Aug/19 14:07
Start Date: 11/Aug/19 14:07
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1277: HDDS-1054. List 
Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#issuecomment-520231221
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 73 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for branch |
   | +1 | mvninstall | 599 | trunk passed |
   | +1 | compile | 371 | trunk passed |
   | +1 | checkstyle | 73 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 997 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | trunk passed |
   | 0 | spotbugs | 432 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 629 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | +1 | mvninstall | 558 | the patch passed |
   | +1 | compile | 371 | the patch passed |
   | +1 | cc | 371 | the patch passed |
   | +1 | javac | 371 | the patch passed |
   | +1 | checkstyle | 77 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | whitespace | 0 | The patch has 1 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 740 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 166 | the patch passed |
   | -1 | findbugs | 443 | hadoop-ozone generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 347 | hadoop-hdds in the patch passed. |
   | -1 | unit | 225 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 6300 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-ozone |
   |  |  Dead store to multipartKey in 
org.apache.hadoop.ozone.om.KeyManagerImpl.listMultipartUploads(String, String, 
String)  At 
KeyManagerImpl.java:org.apache.hadoop.ozone.om.KeyManagerImpl.listMultipartUploads(String,
 String, String)  At KeyManagerImpl.java:[line 1312] |
   | Failed junit tests | hadoop.ozone.om.ratis.TestOzoneManagerRatisServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1277 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 6ec7387069c5 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cf5d895 |
   | Default Java | 1.8.0_222 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/1/artifact/out/whitespace-eol.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/1/artifact/out/new-findbugs-hadoop-ozone.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/1/testReport/ |
   | Max. process+thread count | 1272 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client 
hadoop-ozone/ozone-manager hadoop-ozone/s3gateway hadoop-ozone/dist U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---


[jira] [Assigned] (HDDS-1947) fix naming issue for ScmBlockLocationTestingClient

2019-08-11 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDDS-1947:
--

Assignee: Elek, Marton

> fix naming issue for ScmBlockLocationTestingClient
> --
>
> Key: HDDS-1947
> URL: https://issues.apache.org/jira/browse/HDDS-1947
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: star
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDFS-14489.patch
>
>
> The class 'ScmBlockLocationTestIngClient' does not follow CamelCase naming. Rename 
> it to ScmBlockLocationTestingClient.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14708) TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in trunk

2019-08-11 Thread Lisheng Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904634#comment-16904634
 ] 

Lisheng Sun commented on HDFS-14708:


Attached and uploaded the v1 patch.

> TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in 
> trunk
> 
>
> Key: HDFS-14708
> URL: https://issues.apache.org/jira/browse/HDFS-14708
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-14708-001.patch
>
>
> {code:java}
> [ERROR] 
> testBlockReportSucceedsWithLargerLengthLimit(org.apache.hadoop.hdfs.server.datanode.TestLargeBlockReport)
>   Time elapsed: 47.956 s  <<< ERROR!
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
> java.lang.IllegalStateException: 
> com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
> large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase 
> the size limit.
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.runBlockOp(BlockManager.java:5011)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReport(NameNodeRpcServer.java:1581)
>   at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.blockReport(DatanodeProtocolServerSideTranslatorPB.java:181)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:31664)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:529)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1001)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:929)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2921)
> Caused by: java.lang.IllegalStateException: 
> com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
> large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase 
> the size limit.
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:424)
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:396)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.reportDiffSorted(BlockManager.java:2952)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2787)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2655)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.lambda$blockReport$0(NameNodeRpcServer.java:1582)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:5089)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:5068)
> Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol 
> message was too large.  May be malicious.  Use 
> CodedInputStream.setSizeLimit() to increase the size limit.
>   at 
> com.google.protobuf.InvalidProtocolBufferException.sizeLimitExceeded(InvalidProtocolBufferException.java:110)
>   at 
> com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:755)
>   at 
> com.google.protobuf.CodedInputStream.readRawByte(CodedInputStream.java:769)
>   at 
> com.google.protobuf.CodedInputStream.readRawVarint64(CodedInputStream.java:462)
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:420)
>   ... 8 more
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1553)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1499)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1396)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
>   at com.sun.proxy.$Proxy25.blockReport(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.blockReport(DatanodeProtocolClientSideTranslatorPB.java:218)
>   at 
> 

[jira] [Updated] (HDFS-14708) TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in trunk

2019-08-11 Thread Lisheng Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14708:
---
Attachment: HDFS-14708-001.patch

> TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in 
> trunk
> 
>
> Key: HDFS-14708
> URL: https://issues.apache.org/jira/browse/HDFS-14708
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-14708-001.patch
>
>
> {code:java}
> [ERROR] 
> testBlockReportSucceedsWithLargerLengthLimit(org.apache.hadoop.hdfs.server.datanode.TestLargeBlockReport)
>   Time elapsed: 47.956 s  <<< ERROR!
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
> java.lang.IllegalStateException: 
> com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
> large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase 
> the size limit.
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.runBlockOp(BlockManager.java:5011)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReport(NameNodeRpcServer.java:1581)
>   at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.blockReport(DatanodeProtocolServerSideTranslatorPB.java:181)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:31664)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:529)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1001)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:929)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2921)
> Caused by: java.lang.IllegalStateException: 
> com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
> large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase 
> the size limit.
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:424)
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:396)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.reportDiffSorted(BlockManager.java:2952)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2787)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2655)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.lambda$blockReport$0(NameNodeRpcServer.java:1582)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:5089)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:5068)
> Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol 
> message was too large.  May be malicious.  Use 
> CodedInputStream.setSizeLimit() to increase the size limit.
>   at 
> com.google.protobuf.InvalidProtocolBufferException.sizeLimitExceeded(InvalidProtocolBufferException.java:110)
>   at 
> com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:755)
>   at 
> com.google.protobuf.CodedInputStream.readRawByte(CodedInputStream.java:769)
>   at 
> com.google.protobuf.CodedInputStream.readRawVarint64(CodedInputStream.java:462)
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:420)
>   ... 8 more
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1553)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1499)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1396)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
>   at com.sun.proxy.$Proxy25.blockReport(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.blockReport(DatanodeProtocolClientSideTranslatorPB.java:218)
>   at 
> 

[jira] [Updated] (HDDS-1942) Support copy during S3 multipart upload part creation

2019-08-11 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1942:
---
Status: Patch Available  (was: Open)

> Support copy during S3 multipart upload part creation
> -
>
> Key: HDDS-1942
> URL: https://issues.apache.org/jira/browse/HDDS-1942
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Uploads a part by copying data from an existing object as data source
> Documented here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1942) Support copy during S3 multipart upload part creation

2019-08-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1942?focusedWorklogId=292697=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292697
 ]

ASF GitHub Bot logged work on HDDS-1942:


Author: ASF GitHub Bot
Created on: 11/Aug/19 12:45
Start Date: 11/Aug/19 12:45
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1279: HDDS-1942. 
Support copy during S3 multipart upload part creation
URL: https://github.com/apache/hadoop/pull/1279
 
 
   Uploads a part by copying data from an existing object as data source
   
   Documented here:
   
   https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html
   
   See: https://issues.apache.org/jira/browse/HDDS-1942
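   
   (Illustrative sketch only, not part of the change itself: an example of the
   aws s3api calls this feature is meant to serve. The endpoint port, bucket,
   keys and the placeholder upload id below are assumptions, not values from
   this thread.)
   
   {code}
   # Start a multipart upload for the destination key (placeholder names).
   aws s3api --endpoint http://localhost:9878 create-multipart-upload \
       --bucket=docker --key=copied-key
   
   # Create part 1 by copying the bytes of an existing object instead of
   # re-uploading them; the upload id comes from the previous call.
   aws s3api --endpoint http://localhost:9878 upload-part-copy \
       --bucket=docker --key=copied-key --part-number=1 \
       --upload-id=<UploadId-from-create-multipart-upload> \
       --copy-source=docker/existing-key
   {code}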
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 292697)
Time Spent: 10m
Remaining Estimate: 0h

> Support copy during S3 multipart upload part creation
> -
>
> Key: HDDS-1942
> URL: https://issues.apache.org/jira/browse/HDDS-1942
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Uploads a part by copying data from an existing object as data source
> Documented here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1942) Support copy during S3 multipart upload part creation

2019-08-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1942:
-
Labels: pull-request-available  (was: )

> Support copy during S3 multipart upload part creation
> -
>
> Key: HDDS-1942
> URL: https://issues.apache.org/jira/browse/HDDS-1942
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>
> Uploads a part by copying data from an existing object as data source
> Documented here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14708) TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in trunk

2019-08-11 Thread Lisheng Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904632#comment-16904632
 ] 

Lisheng Sun commented on HDFS-14708:


[~ayushtkn] HADOOP-16452 increased the ipc.maximum.data.length default from 64MB 
to 128MB, which makes 
TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fail. When I 
revert HADOOP-16452 locally, the test passes again. I will fix the UT.
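
(Illustrative sketch only, not the actual patch: one way a test could raise the
configured IPC length limit above the new 128MB default before sending an
oversized block report. The key string "ipc.maximum.data.length" is from this
thread; the class name and the 256MB value are assumptions.)

{code:java}
import org.apache.hadoop.conf.Configuration;

public class IpcLimitSketch {
  public static void main(String[] args) {
    // Raise the IPC length limit used by the test's client and NameNode so a
    // deliberately oversized block report can still be decoded (value illustrative).
    Configuration conf = new Configuration();
    conf.setInt("ipc.maximum.data.length", 256 * 1024 * 1024);
    System.out.println(conf.getInt("ipc.maximum.data.length", -1));
  }
}
{code}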

 

> TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in 
> trunk
> 
>
> Key: HDFS-14708
> URL: https://issues.apache.org/jira/browse/HDFS-14708
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
>
> {code:java}
> [ERROR] 
> testBlockReportSucceedsWithLargerLengthLimit(org.apache.hadoop.hdfs.server.datanode.TestLargeBlockReport)
>   Time elapsed: 47.956 s  <<< ERROR!
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
> java.lang.IllegalStateException: 
> com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
> large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase 
> the size limit.
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.runBlockOp(BlockManager.java:5011)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReport(NameNodeRpcServer.java:1581)
>   at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.blockReport(DatanodeProtocolServerSideTranslatorPB.java:181)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:31664)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:529)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1001)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:929)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2921)
> Caused by: java.lang.IllegalStateException: 
> com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
> large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase 
> the size limit.
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:424)
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:396)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.reportDiffSorted(BlockManager.java:2952)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2787)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2655)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.lambda$blockReport$0(NameNodeRpcServer.java:1582)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:5089)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:5068)
> Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol 
> message was too large.  May be malicious.  Use 
> CodedInputStream.setSizeLimit() to increase the size limit.
>   at 
> com.google.protobuf.InvalidProtocolBufferException.sizeLimitExceeded(InvalidProtocolBufferException.java:110)
>   at 
> com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:755)
>   at 
> com.google.protobuf.CodedInputStream.readRawByte(CodedInputStream.java:769)
>   at 
> com.google.protobuf.CodedInputStream.readRawVarint64(CodedInputStream.java:462)
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:420)
>   ... 8 more
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1553)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1499)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1396)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
>   at com.sun.proxy.$Proxy25.blockReport(Unknown Source)
>   at 
> 

[jira] [Work logged] (HDDS-1950) S3 MPU part-list call fails if there are no parts

2019-08-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1950?focusedWorklogId=292695=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292695
 ]

ASF GitHub Bot logged work on HDDS-1950:


Author: ASF GitHub Bot
Created on: 11/Aug/19 12:32
Start Date: 11/Aug/19 12:32
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1278: HDDS-1950. S3 MPU 
part-list call fails if there are no parts
URL: https://github.com/apache/hadoop/pull/1278
 
 
   If an S3 multipart upload is created but no part has been uploaded, the part-list 
call fails with HTTP 500:
   
   Create an MPU:
   
   {code}
   aws s3api --endpoint http://localhost: create-multipart-upload 
--bucket=docker --key=testkeu 
   {
   "Bucket": "docker",
   "Key": "testkeu",
   "UploadId": "85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234"
   }
   {code}
   
   List the parts:
   
   {code}
   aws s3api --endpoint http://localhost: list-parts  --bucket=docker 
--key=testkeu 
--upload-id=85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234
   {code}
   
   It throws an exception on the server side, because in the 
KeyManagerImpl.listParts the  ReplicationType is retrieved from the first part:
   
   {code}
   HddsProtos.ReplicationType replicationType =
   
partKeyInfoMap.firstEntry().getValue().getPartKeyInfo().getType();
   {code}
   
   Which is not yet available in this use case.
   
   See: https://issues.apache.org/jira/browse/HDDS-1950
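   
   (Illustrative sketch only, not the actual Ozone code or the patch: a minimal,
   self-contained Java example of why reading the replication type from the
   first map entry fails when no part has been uploaded, and the kind of guard
   that avoids it. The class name and the String stand-in for the part info are
   assumptions made for the example.)
   
   {code:java}
   import java.util.TreeMap;
   
   public class ListPartsGuardSketch {
     public static void main(String[] args) {
       // Stand-in for partKeyInfoMap in KeyManagerImpl.listParts: empty because
       // the multipart upload was created but no part has been uploaded yet.
       TreeMap<Integer, String> partKeyInfoMap = new TreeMap<>();
   
       // The reported failure corresponds to calling firstEntry().getValue() on
       // the empty map: firstEntry() returns null, the chained call throws a
       // NullPointerException, and that surfaces to the S3 client as HTTP 500.
       String replicationType = partKeyInfoMap.isEmpty()
           ? "RATIS" // placeholder fallback; the real fix must pick a sensible default
           : partKeyInfoMap.firstEntry().getValue();
   
       System.out.println(replicationType);
     }
   }
   {code}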
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 292695)
Time Spent: 10m
Remaining Estimate: 0h

> S3 MPU part-list call fails if there are no parts
> -
>
> Key: HDDS-1950
> URL: https://issues.apache.org/jira/browse/HDDS-1950
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> If an S3 multipart upload is created but no part has been uploaded, the part-list 
> call fails with HTTP 500:
> Create an MPU:
> {code}
> aws s3api --endpoint http://localhost: create-multipart-upload 
> --bucket=docker --key=testkeu 
> {
> "Bucket": "docker",
> "Key": "testkeu",
> "UploadId": "85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234"
> }
> {code}
> List the parts:
> {code}
> aws s3api --endpoint http://localhost: list-parts  --bucket=docker 
> --key=testkeu 
> --upload-id=85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234
> {code}
> It throws an exception on the server side, because in the 
> KeyManagerImpl.listParts the  ReplicationType is retrieved from the first 
> part:
> {code}
> HddsProtos.ReplicationType replicationType =
> partKeyInfoMap.firstEntry().getValue().getPartKeyInfo().getType();
> {code}
> Which is not yet available in this use case.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1950) S3 MPU part-list call fails if there are no parts

2019-08-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1950:
-
Labels: pull-request-available  (was: )

> S3 MPU part-list call fails if there are no parts
> -
>
> Key: HDDS-1950
> URL: https://issues.apache.org/jira/browse/HDDS-1950
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>
> If an S3 multipart upload is created but no part has been uploaded, the part-list 
> call fails with HTTP 500:
> Create an MPU:
> {code}
> aws s3api --endpoint http://localhost: create-multipart-upload 
> --bucket=docker --key=testkeu 
> {
> "Bucket": "docker",
> "Key": "testkeu",
> "UploadId": "85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234"
> }
> {code}
> List the parts:
> {code}
> aws s3api --endpoint http://localhost: list-parts  --bucket=docker 
> --key=testkeu 
> --upload-id=85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234
> {code}
> It throws an exception on the server side, because in the 
> KeyManagerImpl.listParts the  ReplicationType is retrieved from the first 
> part:
> {code}
> HddsProtos.ReplicationType replicationType =
> partKeyInfoMap.firstEntry().getValue().getPartKeyInfo().getType();
> {code}
> Which is not yet available in this use case.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1950) S3 MPU part-list call fails if there are no parts

2019-08-11 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1950:
---
Status: Patch Available  (was: Open)

> S3 MPU part-list call fails if there are no parts
> -
>
> Key: HDDS-1950
> URL: https://issues.apache.org/jira/browse/HDDS-1950
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> If an S3 multipart upload is created but no part has been uploaded, the part-list 
> call fails with HTTP 500:
> Create an MPU:
> {code}
> aws s3api --endpoint http://localhost: create-multipart-upload 
> --bucket=docker --key=testkeu 
> {
> "Bucket": "docker",
> "Key": "testkeu",
> "UploadId": "85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234"
> }
> {code}
> List the parts:
> {code}
> aws s3api --endpoint http://localhost: list-parts  --bucket=docker 
> --key=testkeu 
> --upload-id=85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234
> {code}
> It throws an exception on the server side, because in the 
> KeyManagerImpl.listParts the  ReplicationType is retrieved from the first 
> part:
> {code}
> HddsProtos.ReplicationType replicationType =
> partKeyInfoMap.firstEntry().getValue().getPartKeyInfo().getType();
> {code}
> Which is not yet available in this use case.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1054) List Multipart uploads in a bucket

2019-08-11 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1054:
---
Status: Patch Available  (was: Open)

> List Multipart uploads in a bucket
> --
>
> Key: HDDS-1054
> URL: https://issues.apache.org/jira/browse/HDDS-1054
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Blocker
>
> This Jira is to implement listing of in-progress multipart uploads in a bucket 
> in Ozone.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html]
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-08-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=292694=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292694
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 11/Aug/19 12:21
Start Date: 11/Aug/19 12:21
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1277: HDDS-1054. List 
Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277
 
 
   This Jira is to implement listing of in-progress multipart uploads in a bucket 
in Ozone.
   
   [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html]
   
    
   
   See: https://issues.apache.org/jira/browse/HDDS-1054
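   
   (Illustrative sketch only: an example of the corresponding aws s3api call.
   The endpoint port, bucket name and prefix are assumptions, not values from
   this thread.)
   
   {code}
   # List all in-progress multipart uploads in a bucket, optionally narrowed by a prefix.
   aws s3api --endpoint http://localhost:9878 list-multipart-uploads \
       --bucket=docker --prefix=listtest
   {code}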
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 292694)
Time Spent: 10m
Remaining Estimate: 0h

> List Multipart uploads in a bucket
> --
>
> Key: HDDS-1054
> URL: https://issues.apache.org/jira/browse/HDDS-1054
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This Jira is to implement listing of in-progress multipart uploads in a bucket 
> in Ozone.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html]
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1054) List Multipart uploads in a bucket

2019-08-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1054:
-
Labels: pull-request-available  (was: )

> List Multipart uploads in a bucket
> --
>
> Key: HDDS-1054
> URL: https://issues.apache.org/jira/browse/HDDS-1054
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>
> This Jira is to implement listing of in-progress multipart uploads in a bucket 
> in Ozone.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html]
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1927) Consolidate add/remove Acl into OzoneAclUtil class

2019-08-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1927?focusedWorklogId=292664=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292664
 ]

ASF GitHub Bot logged work on HDDS-1927:


Author: ASF GitHub Bot
Created on: 11/Aug/19 08:44
Start Date: 11/Aug/19 08:44
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1263: HDDS-1927. 
Consolidate add/remove Acl into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263#issuecomment-520211149
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 44 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for branch |
   | +1 | mvninstall | 607 | trunk passed |
   | +1 | compile | 365 | trunk passed |
   | +1 | checkstyle | 63 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 824 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 153 | trunk passed |
   | 0 | spotbugs | 430 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 627 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 19 | Maven dependency ordering for patch |
   | +1 | mvninstall | 551 | the patch passed |
   | +1 | compile | 363 | the patch passed |
   | +1 | javac | 363 | the patch passed |
   | +1 | checkstyle | 64 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 625 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 84 | hadoop-ozone generated 3 new + 20 unchanged - 0 fixed 
= 23 total (was 20) |
   | +1 | findbugs | 641 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 305 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2081 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 54 | The patch does not generate ASF License warnings. |
   | | | 7725 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1263 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 03d7420c6d75 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1c5b286 |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/7/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/7/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/7/testReport/ |
   | Max. process+thread count | 4501 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client 
hadoop-ozone/ozone-manager hadoop-ozone/objectstore-service 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 292664)
Time Spent: 1h 40m  (was: 1.5h)

> Consolidate add/remove Acl into OzoneAclUtil class
> --
>
> Key: HDDS-1927
> URL: https://issues.apache.org/jira/browse/HDDS-1927
> Project: Hadoop Distributed Data Store
>  Issue 

[jira] [Work logged] (HDDS-1927) Consolidate add/remove Acl into OzoneAclUtil class

2019-08-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1927?focusedWorklogId=292663=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292663
 ]

ASF GitHub Bot logged work on HDDS-1927:


Author: ASF GitHub Bot
Created on: 11/Aug/19 08:42
Start Date: 11/Aug/19 08:42
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1263: HDDS-1927. 
Consolidate add/remove Acl into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263#issuecomment-520211069
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 40 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 12 | Maven dependency ordering for branch |
   | +1 | mvninstall | 591 | trunk passed |
   | +1 | compile | 351 | trunk passed |
   | +1 | checkstyle | 65 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 819 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 148 | trunk passed |
   | 0 | spotbugs | 430 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 625 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 16 | Maven dependency ordering for patch |
   | +1 | mvninstall | 547 | the patch passed |
   | +1 | compile | 351 | the patch passed |
   | +1 | javac | 351 | the patch passed |
   | +1 | checkstyle | 65 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 617 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 155 | the patch passed |
   | +1 | findbugs | 638 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 309 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1837 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 7367 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.dn.scrubber.TestDataScrubber |
   |   | hadoop.hdds.scm.pipeline.TestNode2PipelineMap |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
   |   | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.hdds.scm.safemode.TestSCMSafeModeWithPipelineRules |
   |   | hadoop.ozone.web.TestOzoneWebAccess |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1263 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 49374ebf6f2e 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1c5b286 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/8/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/8/testReport/ |
   | Max. process+thread count | 3686 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client 
hadoop-ozone/ozone-manager hadoop-ozone/objectstore-service 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1263/8/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 292663)
Time Spent: 1.5h  (was: 1h 20m)

> Consolidate add/remove Acl into OzoneAclUtil class
> --
>
> Key: HDDS-1927
> URL: https://issues.apache.org/jira/browse/HDDS-1927
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>

[jira] [Commented] (HDFS-14708) TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in trunk

2019-08-11 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904601#comment-16904601
 ] 

Ayush Saxena commented on HDFS-14708:
-

[~leosun08] any idea why this is failing, or which JIRA broke it?
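
The trace below fails at protobuf's CodedInputStream size check. For reference, a minimal, self-contained sketch of the setSizeLimit() call the exception message points at; the buffer and limit here are illustrative only, not the BlockListAsLongs decoding path and not the fix for this issue:
{code:java}
// Sketch only: raise a CodedInputStream's size limit before decoding.
// The input bytes below are a stand-in for a very large serialized block report.
import com.google.protobuf.CodedInputStream;

import java.io.ByteArrayInputStream;
import java.io.IOException;

public class SizeLimitSketch {
  public static void main(String[] args) throws IOException {
    byte[] serialized = {0x08, 0x2a};   // two sample varint-encoded values
    CodedInputStream cis =
        CodedInputStream.newInstance(new ByteArrayInputStream(serialized));
    // The default limit is 64 MB; streams larger than that fail with
    // "Protocol message was too large. May be malicious." as in the trace below.
    cis.setSizeLimit(Integer.MAX_VALUE);
    while (!cis.isAtEnd()) {
      long value = cis.readRawVarint64();   // block report fields are read as varint longs
      System.out.println(value);
    }
  }
}
{code}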

> TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in 
> trunk
> 
>
> Key: HDFS-14708
> URL: https://issues.apache.org/jira/browse/HDFS-14708
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
>
> {code:java}
> [ERROR] 
> testBlockReportSucceedsWithLargerLengthLimit(org.apache.hadoop.hdfs.server.datanode.TestLargeBlockReport)
>   Time elapsed: 47.956 s  <<< ERROR!
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
> java.lang.IllegalStateException: 
> com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
> large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase 
> the size limit.
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.runBlockOp(BlockManager.java:5011)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReport(NameNodeRpcServer.java:1581)
>   at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.blockReport(DatanodeProtocolServerSideTranslatorPB.java:181)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:31664)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:529)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1001)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:929)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2921)
> Caused by: java.lang.IllegalStateException: 
> com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
> large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase 
> the size limit.
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:424)
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:396)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.reportDiffSorted(BlockManager.java:2952)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2787)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2655)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.lambda$blockReport$0(NameNodeRpcServer.java:1582)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:5089)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:5068)
> Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol 
> message was too large.  May be malicious.  Use 
> CodedInputStream.setSizeLimit() to increase the size limit.
>   at 
> com.google.protobuf.InvalidProtocolBufferException.sizeLimitExceeded(InvalidProtocolBufferException.java:110)
>   at 
> com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:755)
>   at 
> com.google.protobuf.CodedInputStream.readRawByte(CodedInputStream.java:769)
>   at 
> com.google.protobuf.CodedInputStream.readRawVarint64(CodedInputStream.java:462)
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:420)
>   ... 8 more
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1553)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1499)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1396)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
>   at com.sun.proxy.$Proxy25.blockReport(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.blockReport(DatanodeProtocolClientSideTranslatorPB.java:218)
>   at 
> 

[jira] [Commented] (HDFS-14713) RBF: routeradmin support refreshRouterArgs command but it not on display

2019-08-11 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904600#comment-16904600
 ] 

Ayush Saxena commented on HDFS-14713:
-

Completely agree, this should be covered.
[~wangzhaohui] it would be great if you could add a test in {{TestRouterAdminCli}}
to assert the full usage message.
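
A minimal sketch of such a test, under the assumptions that RouterAdmin takes a Configuration in its constructor and prints its usage to System.err when run with no arguments; the class and test names here are made up and the real setup in {{TestRouterAdminCli}} differs:
{code:java}
// Sketch only: capture the usage output and assert every command flag appears.
import static org.junit.Assert.assertTrue;

import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.tools.federation.RouterAdmin;
import org.junit.Test;

public class TestRouterAdminUsageSketch {
  @Test
  public void testUsageListsEveryCommand() throws Exception {
    ByteArrayOutputStream err = new ByteArrayOutputStream();
    PrintStream oldErr = System.err;
    System.setErr(new PrintStream(err, true, "UTF-8"));
    try {
      // Assumption: with no arguments the tool prints the usage and returns.
      new RouterAdmin(new Configuration()).run(new String[0]);
    } finally {
      System.setErr(oldErr);
    }
    String usage = err.toString("UTF-8");
    for (String cmd : new String[] {"-add", "-update", "-rm", "-ls",
        "-getDestination", "-setQuota", "-clrQuota", "-safemode",
        "-nameservice", "-getDisabledNameservices", "-refresh",
        "-refreshRouterArgs"}) {
      assertTrue("usage should mention " + cmd, usage.contains(cmd));
    }
  }
}
{code}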

> RBF: routeradmin support refreshRouterArgs command but it not on display
> 
>
> Key: HDFS-14713
> URL: https://issues.apache.org/jira/browse/HDFS-14713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Major
> Attachments: HDFS-14713-000.patch, after.png, before.png
>
>
> When the cmd command is null, the refreshRouterArgs command is not displayed,
> because one entry is missing from the String[] commands array:
> {code:java}
> //
> if (cmd == null) {
>   String[] commands =
>   {"-add", "-update", "-rm", "-ls", "-getDestination",
>   "-setQuota", "-clrQuota",
>   "-safemode", "-nameservice", "-getDisabledNameservices",
>   "-refresh"};
>   StringBuilder usage = new StringBuilder();
>   usage.append("Usage: hdfs dfsrouteradmin :\n");
>   for (int i = 0; i < commands.length; i++) {
> usage.append(getUsage(commands[i]));
> if (i + 1 < commands.length) {
>   usage.append("\n");
> }
>   }
>   
> }
> {code}
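
For reference, a self-contained sketch of what the fix presumably looks like, with the missing entry appended to the array; the flag spelling -refreshRouterArgs is assumed from the summary, and getUsage() is replaced by a stand-in so the snippet runs on its own:
{code:java}
// Sketch only: the commands array with the assumed missing entry added.
public class UsageSketch {
  public static void main(String[] args) {
    String[] commands =
        {"-add", "-update", "-rm", "-ls", "-getDestination",
            "-setQuota", "-clrQuota",
            "-safemode", "-nameservice", "-getDisabledNameservices",
            "-refresh", "-refreshRouterArgs"};
    StringBuilder usage = new StringBuilder();
    usage.append("Usage: hdfs dfsrouteradmin :\n");
    for (int i = 0; i < commands.length; i++) {
      usage.append("\t[").append(commands[i]).append("]");   // stand-in for getUsage(commands[i])
      if (i + 1 < commands.length) {
        usage.append("\n");
      }
    }
    System.out.println(usage);
  }
}
{code}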



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org