[jira] [Commented] (HDFS-14227) RBF: HDFS "dfsadmin -printTopology" not displaying the rack details properly

2019-07-12 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16884299#comment-16884299
 ] 

Ayush Saxena commented on HDFS-14227:
-

Seems something invalid in the report; I tried the scenario and got:


{noformat}
Rack: /ns0/default-rack
   127.0.0.1:33699 (localhost)
   127.0.0.1:37965 (localhost)
   127.0.0.1:38297 (localhost)
   127.0.0.1:40305 (localhost)
   127.0.0.1:44011 (localhost)
   127.0.0.1:46833 (localhost)

Rack: /ns1/default-rack
   127.0.0.1:34369 (localhost)
   127.0.0.1:35535 (localhost)
   127.0.0.1:37991 (localhost)
   127.0.0.1:38053 (localhost)
   127.0.0.1:43407 (localhost)
   127.0.0.1:43881 (localhost)
{noformat}

The output above matches the expected behavior.

[~elgoiri], I guess we can close this as Not a Problem?


> RBF: HDFS "dfsadmin -printTopology" not displaying the rack details properly
> 
>
> Key: HDFS-14227
> URL: https://issues.apache.org/jira/browse/HDFS-14227
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: venkata ramkumar
>Assignee: venkata ramkumar
>Priority: Minor
>  Labels: RBF
>
> namespaces : hacluster1 ,hacluster2
> under hacluster1 :(IP1, IP2)
> under hacluster2 :(IP3,IP4)
> commands :
> {noformat}
> /router/bin> ./hdfs dfsadmin -printTopology
> 19/01/24 15:12:53 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Rack: /hacluster1/default-rack
>IP1:9866 (BLR121217)
>IP2:9866 (linux-110)
>IP3:9866 (linux111)
>IP4:9866 (linux112)
> {noformat}
> expected o/p:
> {noformat}
> /router/bin> ./hdfs dfsadmin -printTopology
> 19/01/24 15:12:53 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Rack: /hacluster1/default-rack
>IP1:9866 (BLR121217)
>IP2:9866 (linux-110)
> Rack: /hacluster2/default-rack
>IP3:9866 (linux111)
>IP4:9866 (linux112)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13248) RBF: Namenode need to choose block location for the client

2019-07-12 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16884283#comment-16884283
 ] 

Ayush Saxena edited comment on HDFS-13248 at 7/13/19 5:32 AM:
--

Hey Brahma!!!
Honestly, I don't have those stats from an MR job. I guess you are pointing to 
the case where the client resides on the same site as the Router. In that case, 
yes, correct: technically this won't be a problem, as the Router and the client 
node share the same address.
The case to tackle here is when this isn't true and the *client resides on a 
node having a DN but not a Router*. In that scenario the BPP is satisfied with 
respect to the Router, not the client, so these locality problems occur and the 
DNs get sorted with respect to the Router rather than the client. And by far, I 
guess the number of client sites will be larger than the number of Routers. Let 
me know if you require any stats for analysis; we shall try to grab them.
We had a couple of solutions, all stuck as of now:
* Add the proxy address in the IPC connection (HADOOP-16254) --> this had some 
security concerns.
* The RouterRPCServer should transfer CallerContext and client IP to 
NamenodeRpcServer (HDFS-13293) --> this tends to be a little opaque, plus a 
couple more of the problems stated above.
* Favored nodes --> I guess the last patch here: pass the local node as a 
favored node. But this isn't a complete solution; it doesn't take into account 
the fallback in case no local node is available, among other gaps.

Do give it a check if you can help, or give some pointers on any of these 
solutions, or a new one. This has been stuck for quite a long time.


was (Author: ayushtkn):
Hey Brahma!!!
Honestly, I don't have those stats from an MR job. I guess you are pointing to 
the case where the client resides on the same site as the Router. In that case, 
yes, correct: technically this won't be a problem, as the Router and the client 
node share the same address.
The case to tackle here is when this isn't true and the *client resides on a 
node having a DN but not a Router*. In that scenario the BPP is satisfied with 
respect to the Router, not the client, so these locality problems occur and the 
DNs get sorted with respect to the Router rather than the client. And by far, I 
guess the number of client sites will be larger than the number of Routers. Let 
me know if you require any stats for analysis; we shall try to grab them.
We had a couple of solutions, all stuck as of now:
* Add the proxy address in the IPC connection (HADOOP-16254) --> this had some 
security concerns.
* The RouterRPCServer should transfer CallerContext and client IP to 
NamenodeRpcServer (HDFS-13293) --> this tends to be a little opaque, plus a 
couple more of the problems stated above.
* Favored nodes --> I guess the last patch here: pass the local node as a 
favored node. But this isn't a complete solution; it doesn't take into account 
the fallback in case no local node is available, among other gaps.

Do give it a check if you can help, or give some pointers on any of these 
solutions, or a new one. This has been stuck for quite a long time.

> RBF: Namenode need to choose block location for the client
> --
>
> Key: HDFS-13248
> URL: https://issues.apache.org/jira/browse/HDFS-13248
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wu Weiwei
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13248.000.patch, HDFS-13248.001.patch, 
> HDFS-13248.002.patch, HDFS-13248.003.patch, HDFS-13248.004.patch, 
> HDFS-13248.005.patch, HDFS-Router-Data-Locality.odt, RBF Data Locality 
> Design.pdf, clientMachine-call-path.jpeg, debug-info-1.jpeg, debug-info-2.jpeg
>
>
> When executing a put operation via the Router, the NameNode will choose the 
> block location for the Router, not for the real client. This will affect the 
> file's locality.
> I think on both the NameNode and the Router, we should add a new addBlock 
> method, or add a parameter to the current addBlock method, to pass the real 
> client information.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13835) RBF: Unable to add files after changing the order

2019-07-12 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16884291#comment-16884291
 ] 

Ayush Saxena commented on HDFS-13835:
-

Thanx [~ramkumar]

bq. I am unable to change only the order from HASH to RANDOM if a mount entry 
is pointing to multiple namespaces using Update command due to which file is 
not getting added after update command.

Now that HDFS-13853 is in, you will be able to change the order without 
specifying the destination; we made it optional.

I shall resolve this now!!!
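
For reference, a minimal sketch of the update after HDFS-13853 (/data is a 
placeholder mount entry, not from this issue):

{noformat}
# Hypothetical example; /data is a placeholder mount entry.
hdfs dfsrouteradmin -update /data -order RANDOM
{noformat}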

> RBF: Unable to add files after changing the order
> -
>
> Key: HDFS-13835
> URL: https://issues.apache.org/jira/browse/HDFS-13835
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: venkata ramkumar
>Assignee: venkata ramkumar
>Priority: Critical
>
> When a mount point is pointing to multiple sub-clusters, the default order 
> is HASH.
> But after changing the order from HASH to RANDOM, I am unable to add files to 
> that mount point.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13507) RBF: Remove update functionality from routeradmin's add cmd

2019-07-12 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16884288#comment-16884288
 ] 

Ayush Saxena commented on HDFS-13507:
-

Hi [~elgoiri]
What is the plan for this? Do we push it forward (I see you have already 
reviewed it) by rebasing and fixing the error, or do we also change the add cmd 
here to take different destinations, or maybe do that in a follow-up? I guess we 
would also have to handle this in the update cmd too, if we intend to keep the 
behavior.

Not sure whether [~gangli2384] is still active; if not, any of us can take it 
up, if we feel we should remove the update functionality from add.
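
For context, a minimal sketch of the two commands whose overlap is under 
discussion (/data, ns1 and ns2 are placeholders):

{noformat}
# Hypothetical example; add would only create a new entry...
hdfs dfsrouteradmin -add /data ns1 /data
# ...while changes to an existing entry would go through update.
hdfs dfsrouteradmin -update /data ns2 /data
{noformat}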

> RBF: Remove update functionality from routeradmin's add cmd
> ---
>
> Key: HDFS-13507
> URL: https://issues.apache.org/jira/browse/HDFS-13507
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Gang Li
>Priority: Minor
>  Labels: incompatible
> Attachments: HDFS-13507-HDFS-13891.003.patch, 
> HDFS-13507-HDFS-13891.004.patch, HDFS-13507.000.patch, HDFS-13507.001.patch, 
> HDFS-13507.002.patch
>
>
> Following up on the discussion in HDFS-13326: we should remove the "update" 
> functionality from routeradmin's add cmd, to make it consistent with the RPC 
> calls.
> Note that this is an incompatible change.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13248) RBF: Namenode need to choose block location for the client

2019-07-12 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16884283#comment-16884283
 ] 

Ayush Saxena commented on HDFS-13248:
-

Hey Brahma!!!
Honestly, I don't have those stats from an MR job. I guess you are pointing to 
the case where the client resides on the same site as the Router. In that case, 
yes, correct: technically this won't be a problem, as the Router and the client 
node share the same address.
The case to tackle here is when this isn't true and the *client resides on a 
node having a DN but not a Router*. In that scenario the BPP is satisfied with 
respect to the Router, not the client, so these locality problems occur and the 
DNs get sorted with respect to the Router rather than the client. And by far, I 
guess the number of client sites will be larger than the number of Routers. Let 
me know if you require any stats for analysis; we shall try to grab them.
We had a couple of solutions, all stuck as of now:
* Add the proxy address in the IPC connection (HADOOP-16254) --> this had some 
security concerns.
* The RouterRPCServer should transfer CallerContext and client IP to 
NamenodeRpcServer (HDFS-13293) --> this tends to be a little opaque, plus a 
couple more of the problems stated above.
* Favored nodes --> I guess the last patch here: pass the local node as a 
favored node. But this isn't a complete solution; it doesn't take into account 
the fallback in case no local node is available, among other gaps.

Do give it a check if you can help, or give some pointers on any of these 
solutions, or a new one. This has been stuck for quite a long time.
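
For illustration, a minimal client-side sketch of the favored-nodes idea 
(option 3 above), using the existing public DistributedFileSystem#create 
overload that accepts favored nodes; the local address, the port (9866, the 
default DN transfer port) and the path are assumptions, and as noted it does 
not cover the fallback case:

{code:java}
// Hypothetical sketch of passing the client's local node as a favored node.
import java.net.InetAddress;
import java.net.InetSocketAddress;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class FavoredNodeWrite {
  static void write(DistributedFileSystem dfs) throws Exception {
    // Favor the DN co-located with this client (assumed to listen on 9866).
    InetSocketAddress[] favored = {
        new InetSocketAddress(InetAddress.getLocalHost(), 9866) };
    try (FSDataOutputStream out = dfs.create(new Path("/tmp/f"),
        FsPermission.getFileDefault(), true /* overwrite */, 4096,
        (short) 3, 128L * 1024 * 1024, null /* progress */, favored)) {
      out.writeBytes("data");
    }
  }
}
{code}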

> RBF: Namenode need to choose block location for the client
> --
>
> Key: HDFS-13248
> URL: https://issues.apache.org/jira/browse/HDFS-13248
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wu Weiwei
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13248.000.patch, HDFS-13248.001.patch, 
> HDFS-13248.002.patch, HDFS-13248.003.patch, HDFS-13248.004.patch, 
> HDFS-13248.005.patch, HDFS-Router-Data-Locality.odt, RBF Data Locality 
> Design.pdf, clientMachine-call-path.jpeg, debug-info-1.jpeg, debug-info-2.jpeg
>
>
> When executing a put operation via the Router, the NameNode will choose the 
> block location for the Router, not for the real client. This will affect the 
> file's locality.
> I think on both the NameNode and the Router, we should add a new addBlock 
> method, or add a parameter to the current addBlock method, to pass the real 
> client information.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14636) SBN : If you configure the default proxy provider still read Request going to Observer namenode only.

2019-07-12 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16884281#comment-16884281
 ] 

Ayush Saxena commented on HDFS-14636:
-

Hi [~xkrogen]

Isn't it unfair for the Observer to serve the request to such a client? The 
client doesn't intend to get it from the Observer; it didn't even set the state 
id. The Observer should return a StandbyException instead. I know the present 
behavior when the state id isn't set, but I think we need to change it a bit.

Moreover, this would lead to inconsistencies too: sometimes the request would be 
served by the Active and sometimes by the Observer, so the user may get 
different responses depending on the proxy provider, which they won't be 
expecting either.

I tried checking HDFS-13923; I guess there too the intention was to allow some 
clients to go directly to the Active. But IMO, removing the Observer nodes from 
the configs case by case per client would be a rather tedious task.
Maybe handling this on the ONN side can be tried, if there aren't any 
complications around it. :)
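
For reference, a minimal client-side sketch of the two settings in question 
("mycluster" is a placeholder nameservice id):

{code:java}
// Hypothetical sketch; "mycluster" is a placeholder nameservice id.
Configuration conf = new HdfsConfiguration();
// With the default ConfiguredFailoverProxyProvider, every call, including
// reads, is expected to go to the Active:
conf.set("dfs.client.failover.proxy.provider.mycluster",
    "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
// Observer reads should only happen when this is set instead:
// conf.set("dfs.client.failover.proxy.provider.mycluster",
//     "org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider");
{code}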

> SBN : If you configure the default proxy provider still read Request going to 
> Observer namenode only.
> -
>
> Key: HDFS-14636
> URL: https://issues.apache.org/jira/browse/HDFS-14636
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.1
>Reporter: Harshakiran Reddy
>Assignee: Ranith Sardar
>Priority: Major
>  Labels: SBN
>
> {noformat}
> In an Observer cluster, if the default proxy provider is configured instead 
> of "org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider", read 
> requests still go to the Observer namenode only.{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14595) HDFS-11848 breaks API compatibility

2019-07-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16884278#comment-16884278
 ] 

Hadoop QA commented on HDFS-14595:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
51s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
55s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 24s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 24m 24s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
32s{color} | {color:red} The patch generated 85 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}199m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestParallelShortCircuitLegacyRead |
|   | hadoop.hdfs.server.namenode.ha.TestHASafeMode |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.server.namenode.TestReencryption |
|   | hadoop.hdfs.server.namenode.ha.TestXAttrsWithHA |
|   | hadoop.hdfs.TestFileAppend4 |
|   | 

[jira] [Work logged] (HDDS-1689) Implement S3 Create Bucket request to use Cache and DoubleBuffer

2019-07-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1689?focusedWorklogId=276211&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-276211
 ]

ASF GitHub Bot logged work on HDDS-1689:


Author: ASF GitHub Bot
Created on: 13/Jul/19 02:37
Start Date: 13/Jul/19 02:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1088: HDDS-1689. 
Implement S3 Create Bucket request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1088#issuecomment-511080754
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 88 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 77 | Maven dependency ordering for branch |
   | +1 | mvninstall | 550 | trunk passed |
   | +1 | compile | 268 | trunk passed |
   | +1 | checkstyle | 74 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 949 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 168 | trunk passed |
   | 0 | spotbugs | 374 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 604 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 33 | Maven dependency ordering for patch |
   | +1 | mvninstall | 508 | the patch passed |
   | +1 | compile | 271 | the patch passed |
   | +1 | cc | 271 | the patch passed |
   | +1 | javac | 271 | the patch passed |
   | -0 | checkstyle | 39 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 751 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 193 | the patch passed |
   | +1 | findbugs | 678 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 358 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2912 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 53 | The patch does not generate ASF License warnings. |
   | | | 8777 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestKeyInputStream |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1088/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1088 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux d73f0f46d68f 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4a70a0d |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1088/2/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1088/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1088/2/testReport/ |
   | Max. process+thread count | 4875 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1088/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking

[jira] [Work logged] (HDDS-1775) Make OM KeyDeletingService compatible with HA model

2019-07-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1775?focusedWorklogId=276208&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-276208
 ]

ASF GitHub Bot logged work on HDDS-1775:


Author: ASF GitHub Bot
Created on: 13/Jul/19 02:19
Start Date: 13/Jul/19 02:19
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1063: HDDS-1775. Make 
OM KeyDeletingService compatible with HA model
URL: https://github.com/apache/hadoop/pull/1063#issuecomment-511079591
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 102 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for branch |
   | +1 | mvninstall | 697 | trunk passed |
   | +1 | compile | 343 | trunk passed |
   | +1 | checkstyle | 101 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1018 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 192 | trunk passed |
   | 0 | spotbugs | 444 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 706 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for patch |
   | +1 | mvninstall | 735 | the patch passed |
   | +1 | compile | 309 | the patch passed |
   | +1 | cc | 309 | the patch passed |
   | +1 | javac | 309 | the patch passed |
   | -0 | checkstyle | 41 | hadoop-ozone: The patch generated 5 new + 0 
unchanged - 0 fixed = 5 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | whitespace | 0 | The patch has 1 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 741 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 101 | hadoop-ozone generated 1 new + 12 unchanged - 0 fixed 
= 13 total (was 12) |
   | +1 | findbugs | 609 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 364 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2958 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 9387 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.TestSecureOzoneManager |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.om.TestOmMetrics |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.hdds.scm.pipeline.TestSCMRestart |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1063/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1063 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 3edb16226876 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4a70a0d |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1063/3/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1063/3/artifact/out/whitespace-eol.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1063/3/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1063/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1063/3/testReport/ |
   | Max. process+thread count | 4995 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1063/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

[jira] [Work logged] (HDDS-1775) Make OM KeyDeletingService compatible with HA model

2019-07-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1775?focusedWorklogId=276207&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-276207
 ]

ASF GitHub Bot logged work on HDDS-1775:


Author: ASF GitHub Bot
Created on: 13/Jul/19 02:19
Start Date: 13/Jul/19 02:19
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1063: 
HDDS-1775. Make OM KeyDeletingService compatible with HA model
URL: https://github.com/apache/hadoop/pull/1063#discussion_r303188169
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -405,9 +406,12 @@ private OzoneManager(OzoneConfiguration conf) throws 
IOException,
 omRpcServer = getRpcServer(conf);
 omRpcAddress = updateRPCListenAddress(configuration,
 OZONE_OM_ADDRESS_KEY, omNodeRpcAddr, omRpcServer);
+
 this.scmClient = new ScmClient(scmBlockClient, scmContainerClient);
-keyManager = new KeyManagerImpl(scmClient, metadataManager,
-configuration, omStorage.getOmId(), blockTokenMgr, getKmsProvider());
+
+keyManager = new KeyManagerImpl(this, scmClient, configuration,
+omStorage.getOmId());
+
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 276207)
Time Spent: 1h 20m  (was: 1h 10m)

> Make OM KeyDeletingService compatible with HA model
> ---
>
> Key: HDDS-1775
> URL: https://issues.apache.org/jira/browse/HDDS-1775
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Currently the OM KeyDeletingService directly deletes all the keys in the 
> DeletedTable after deleting the corresponding blocks through SCM. For HA 
> compatibility, the key purging should happen through the OM Ratis server. 
> This Jira introduces a PurgeKeys request in the OM protocol. This request 
> will be submitted to the OM's Ratis server after SCM deletes the blocks 
> corresponding to deleted keys.
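
For illustration, a rough sketch of the flow the description outlines (the 
builder and submit names are assumptions modeled on the OM protocol classes, 
not the actual patch):

{code:java}
// Hypothetical sketch: after SCM confirms the blocks are deleted, wrap the
// keys in a PurgeKeys request and submit it through Ratis, so that every
// OM replica purges the same keys from the DeletedTable.
OMRequest purgeRequest = OMRequest.newBuilder()
    .setCmdType(Type.PurgeKeys)
    .setPurgeKeysRequest(PurgeKeysRequest.newBuilder()
        .addAllDeletedKeys(deletedKeyNames)  // keys whose blocks SCM deleted
        .build())
    .setClientId(clientId.toString())
    .build();
ozoneManager.getOmRatisServer().submitRequest(purgeRequest);  // assumed API
{code}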



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1689) Implement S3 Create Bucket request to use Cache and DoubleBuffer

2019-07-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1689?focusedWorklogId=276206&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-276206
 ]

ASF GitHub Bot logged work on HDDS-1689:


Author: ASF GitHub Bot
Created on: 13/Jul/19 01:55
Start Date: 13/Jul/19 01:55
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1088: HDDS-1689. 
Implement S3 Create Bucket request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1088#issuecomment-511077933
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 79 | Maven dependency ordering for branch |
   | +1 | mvninstall | 496 | trunk passed |
   | +1 | compile | 242 | trunk passed |
   | +1 | checkstyle | 61 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 778 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 147 | trunk passed |
   | 0 | spotbugs | 315 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 497 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 74 | Maven dependency ordering for patch |
   | +1 | mvninstall | 452 | the patch passed |
   | +1 | compile | 351 | the patch passed |
   | +1 | cc | 351 | the patch passed |
   | +1 | javac | 351 | the patch passed |
   | -0 | checkstyle | 35 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 625 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 162 | the patch passed |
   | +1 | findbugs | 600 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 282 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2039 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 7252 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1088/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1088 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 22528066b727 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4a70a0d |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1088/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1088/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1088/1/testReport/ |
   | Max. process+thread count | 5307 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1088/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 276206)
Time Spent: 20m  (was: 10m)

> Implement S3 Create Bucket request to use Cache and DoubleBuffer
> 

[jira] [Commented] (HDDS-1773) Add intermittent IO disk test to fault injection test

2019-07-12 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16884241#comment-16884241
 ] 

Eric Yang commented on HDDS-1773:
-

Patch 002 provides setup-acid.sh and cleanup-acid.sh to generate a faulty disk.
These scripts require admin privileges to generate the faulty virtual disk.
The README file contains step-by-step instructions on how to run the ITAcid 
test case to exercise Ozone on the faulty disk.
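
For reference, a rough sketch of the kind of cgroup throttle involved (the 
cgroup v1 layout and the 8:0 device number are assumptions; note that in 
cgroup v1 writing 0 removes a limit, so a tiny rate is used here to stall IO):

{noformat}
# Hypothetical sketch; 8:0 is a placeholder major:minor for the data disk.
mkdir /sys/fs/cgroup/blkio/ozone-fault
echo "8:0 1" > /sys/fs/cgroup/blkio/ozone-fault/blkio.throttle.read_bps_device
echo "8:0 1" > /sys/fs/cgroup/blkio/ozone-fault/blkio.throttle.write_bps_device
echo <datanode-pid> > /sys/fs/cgroup/blkio/ozone-fault/cgroup.procs
{noformat}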

> Add intermittent IO disk test to fault injection test
> -
>
> Key: HDDS-1773
> URL: https://issues.apache.org/jira/browse/HDDS-1773
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Eric Yang
>Priority: Major
> Attachments: HDDS-1773.001.patch, HDDS-1773.002.patch
>
>
> Disk errors can also be simulated by setting the cgroup blkio rate to 0 while 
> the Ozone cluster is running.
> This test will be added to the corruption test project, and it will only be 
> performed if there is write access to the host cgroup to control the 
> throttling of disk IO.
> Expected result:
> When a datanode becomes unresponsive due to slow IO, SCM must flag the node 
> as unhealthy.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1773) Add intermittent IO disk test to fault injection test

2019-07-12 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HDDS-1773:

Attachment: HDDS-1773.002.patch

> Add intermittent IO disk test to fault injection test
> -
>
> Key: HDDS-1773
> URL: https://issues.apache.org/jira/browse/HDDS-1773
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Eric Yang
>Priority: Major
> Attachments: HDDS-1773.001.patch, HDDS-1773.002.patch
>
>
> Disk errors can also be simulated by setting the cgroup blkio rate to 0 while 
> the Ozone cluster is running.
> This test will be added to the corruption test project, and it will only be 
> performed if there is write access to the host cgroup to control the 
> throttling of disk IO.
> Expected result:
> When a datanode becomes unresponsive due to slow IO, SCM must flag the node 
> as unhealthy.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1776) Fix image name in some ozone docker-compose files

2019-07-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1776?focusedWorklogId=276184&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-276184
 ]

ASF GitHub Bot logged work on HDDS-1776:


Author: ASF GitHub Bot
Created on: 12/Jul/19 22:56
Start Date: 12/Jul/19 22:56
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1066: HDDS-1776. 
Fix image name in some ozone docker-compose files. Contrib…
URL: https://github.com/apache/hadoop/pull/1066
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 276184)
Time Spent: 0.5h  (was: 20m)

> Fix image name in some ozone docker-compose files
> -
>
> Key: HDDS-1776
> URL: https://issues.apache.org/jira/browse/HDDS-1776
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The docker-compose files have invalid references to the SCM images, which 
> fail docker-compose up with errors like those below. This ticket is opened to 
> fix them.
> {code:java}
> ERROR: no such image: apache/ozone-runner::20190617-2: invalid reference 
> format}
> or 
> ERROR: no such image: apache/ozone-runner:latest:20190617-2: invalid 
> reference format{code}
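
For comparison, a valid reference has exactly one tag after the image name, 
e.g.:

{noformat}
# hypothetical corrected form: a single tag after the image name
image: apache/ozone-runner:20190617-2
{noformat}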



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1689) Implement S3 Create Bucket request to use Cache and DoubleBuffer

2019-07-12 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1689:
-
Status: Patch Available  (was: Open)

> Implement S3 Create Bucket request to use Cache and DoubleBuffer
> 
>
> Key: HDDS-1689
> URL: https://issues.apache.org/jira/browse/HDDS-1689
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Implement S3 Bucket write requests to use OM Cache, double buffer.
>  
> In this Jira we will add the changes to implement the S3 bucket operations. 
> HA and non-HA will have different code paths, but once all requests are 
> implemented there will be a single code path.
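
For illustration, a rough sketch of the cache-plus-double-buffer write path the 
description refers to (class and method names are assumptions modeled on the OM 
HA request/response classes, not the actual patch):

{code:java}
// Hypothetical sketch (names are assumptions, not the actual patch).
public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
    long transactionLogIndex) {
  // 1. Apply the new bucket to the in-memory table cache immediately, so
  //    subsequent requests see it before it is persisted.
  omMetadataManager.getS3Table().addCacheEntry(
      new CacheKey<>(s3BucketName),
      new CacheValue<>(Optional.of(s3BucketInfo), transactionLogIndex));
  // 2. Return the response to the double buffer, which flushes batched
  //    entries to RocksDB in the background, ordered by transaction index.
  return new S3BucketCreateResponse(omResponse, s3BucketInfo);
}
{code}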



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1795) Implement S3 Delete Bucket request to use Cache and DoubleBuffer

2019-07-12 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1795:
-
Labels:   (was: pull-request-available)

> Implement S3 Delete Bucket request to use Cache and DoubleBuffer
> 
>
> Key: HDDS-1795
> URL: https://issues.apache.org/jira/browse/HDDS-1795
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> Implement S3 Bucket write requests to use OM Cache, double buffer.
>  
> In this Jira we will add the changes to implement the S3 bucket operations. 
> HA and non-HA will have different code paths, but once all requests are 
> implemented there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1795) CLONE - Implement S3 Delete Bucket request to use Cache and DoubleBuffer

2019-07-12 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1795:


 Summary: CLONE - Implement S3 Delete Bucket request to use Cache 
and DoubleBuffer
 Key: HDDS-1795
 URL: https://issues.apache.org/jira/browse/HDDS-1795
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Manager
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


Implement S3 Bucket write requests to use OM Cache, double buffer.

 

In this Jira we will add the changes to implement the S3 bucket operations. 
HA and non-HA will have different code paths, but once all requests are 
implemented there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1795) Implement S3 Delete Bucket request to use Cache and DoubleBuffer

2019-07-12 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1795:
-
Summary: Implement S3 Delete Bucket request to use Cache and DoubleBuffer  
(was: CLONE - Implement S3 Delete Bucket request to use Cache and DoubleBuffer)

> Implement S3 Delete Bucket request to use Cache and DoubleBuffer
> 
>
> Key: HDDS-1795
> URL: https://issues.apache.org/jira/browse/HDDS-1795
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> Implement S3 Bucket write requests to use OM Cache, double buffer.
>  
> In this Jira we will add the changes to implement the S3 bucket operations. 
> HA and non-HA will have different code paths, but once all requests are 
> implemented there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1689) Implement S3 Create Bucket request to use Cache and DoubleBuffer

2019-07-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1689:
-
Labels: pull-request-available  (was: )

> Implement S3 Create Bucket request to use Cache and DoubleBuffer
> 
>
> Key: HDDS-1689
> URL: https://issues.apache.org/jira/browse/HDDS-1689
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> Implement S3 Bucket write requests to use OM Cache, double buffer.
>  
> In this Jira we will add the changes to implement the S3 bucket operations. 
> HA and non-HA will have different code paths, but once all requests are 
> implemented there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1689) Implement S3 Create Bucket request to use Cache and DoubleBuffer

2019-07-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1689?focusedWorklogId=276182&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-276182
 ]

ASF GitHub Bot logged work on HDDS-1689:


Author: ASF GitHub Bot
Created on: 12/Jul/19 22:47
Start Date: 12/Jul/19 22:47
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1088: 
HDDS-1689. Implement S3 Create Bucket request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1088
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 276182)
Time Spent: 10m
Remaining Estimate: 0h

> Implement S3 Create Bucket request to use Cache and DoubleBuffer
> 
>
> Key: HDDS-1689
> URL: https://issues.apache.org/jira/browse/HDDS-1689
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Implement S3 Bucket write requests to use OM Cache, double buffer.
>  
> In this Jira we will add the changes to implement the S3 bucket operations. 
> HA and non-HA will have different code paths, but once all requests are 
> implemented there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1689) Implement S3 Create Bucket request to use Cache and DoubleBuffer

2019-07-12 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1689:
-
Summary: Implement S3 Create Bucket request to use Cache and DoubleBuffer  
(was: Implement S3 Bucket Write Requests to use Cache and DoubleBuffer)

> Implement S3 Create Bucket request to use Cache and DoubleBuffer
> 
>
> Key: HDDS-1689
> URL: https://issues.apache.org/jira/browse/HDDS-1689
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> Implement S3 Bucket write requests to use OM Cache, double buffer.
>  
> In this Jira we will add the changes to implement the S3 bucket operations. 
> HA and non-HA will have different code paths, but once all requests are 
> implemented there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1787) NPE thrown while trying to find DN closest to client

2019-07-12 Thread Siddharth Wagle (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16884213#comment-16884213
 ] 

Siddharth Wagle commented on HDDS-1787:
---

Thanks [~Sammi] for taking this up; do let me know if you need any help running 
the MiniOzoneChaos cluster. Most likely this won't require testing with the 
chaos cluster to debug, but just in case.
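
In case it helps, a hypothetical guard for the lambda at 
ScmBlockLocationProtocolServerSideTranslatorPB.java:215 that the trace points 
to (the lookup and collection names are assumptions, not a confirmed fix):

{code:java}
// Hypothetical sketch: the NPE pattern suggests a datanode lookup inside
// the sortDatanodes() lambda returned null (e.g. a DN not yet registered
// in the network topology). A minimal guard would be:
nodeUuids.forEach(uuid -> {
  Node node = nodeManager.getNodeByUuid(uuid);  // assumed lookup
  if (node != null) {
    nodes.add(node);  // only sort nodes actually present in the topology
  } else {
    LOG.warn("Skipping unknown datanode {} while sorting", uuid);
  }
});
{code}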

> NPE thrown while trying to find DN closest to client
> 
>
> Key: HDDS-1787
> URL: https://issues.apache.org/jira/browse/HDDS-1787
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Sammi Chen
>Priority: Major
>
> cc: [~xyao] This seems related to the client side topology changes, not sure 
> if some other Jira is already addressing this.
> {code}
> 2019-07-10 16:45:53,176 WARN  ipc.Server (Server.java:logException(2724)) - 
> IPC Server handler 14 on 35066, call Call#127037 Retry#0 
> org.apache.hadoop.hdds.scm.protocol.ScmBlockLocationProtocol.send from 
> 172.31.116.73:52540
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.lambda$sortDatanodes$0(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
> at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.sortDatanodes(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.send(ScmBlockLocationProtocolServerSideTranslatorPB.java:124)
> at 
> org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:13157)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> 2019-07-10 16:45:53,176 WARN  om.KeyManagerImpl 
> (KeyManagerImpl.java:lambda$sortDatanodeInPipeline$7(2129)) - Unable to sort 
> datanodes based on distance to client, volume=xqoyzocpse, bucket=vxwajaczqh, 
> key=pool-444-thread-7-201077822, client=127.0.0.1, 
> datanodes=[10f15723-45d7-4a0c-8f01-8b101744a110{ip: 172.31.116.73, host: 
> sid-minichaos.gce.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}, 7ac2777f-0a5c-4414-9e7f-bfbc47d696ea{ip: 172.31.116.73, host: 
> sid-minichaos.gce.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}], exception=java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.lambda$sortDatanodes$0(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
> at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.sortDatanodes(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.send(ScmBlockLocationProtocolServerSideTranslatorPB.java:124)
> at 
> org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:13157)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> {code}



--
This message was sent by Atlassian 

[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=276177&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-276177
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 22:33
Start Date: 12/Jul/19 22:33
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #1074: HDDS-1544. Support 
default Acls for volume, bucket, keys and prefix. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#issuecomment-511053477
 
 
   The TestOzoneNativeAuthorizer failure is related. The Prefix_Lock is not 
reentrant. The fix is to refactor PrefixManagerImpl#getLongestPrefixPath with a 
helper function that does not acquire the prefix lock.
   
   In PrefixManagerImpl#setAcl, we should call that helper function without the 
lock.
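   
   A rough sketch of that refactor (method and lock names are assumptions, not 
the actual patch):
   
   ```java
   // Hypothetical sketch: the public method takes the (non-reentrant) prefix
   // lock; the private helper does the lookup without locking, so callers
   // that already hold the lock (e.g. setAcl) can use it safely.
   public OmPrefixInfo getLongestPrefixPath(String path) {
     prefixLock.acquireReadLock(path);  // assumed lock API
     try {
       return getLongestPrefixPathHelper(path);
     } finally {
       prefixLock.releaseReadLock(path);
     }
   }

   private OmPrefixInfo getLongestPrefixPathHelper(String path) {
     // Original longest-prefix lookup, unchanged; no lock acquisition.
     return prefixTree.getLongestPrefix(path);
   }
   ```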
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 276177)
Time Spent: 7h 50m  (was: 7h 40m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 7h 50m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1775) Make OM KeyDeletingService compatible with HA model

2019-07-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1775?focusedWorklogId=276175=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-276175
 ]

ASF GitHub Bot logged work on HDDS-1775:


Author: ASF GitHub Bot
Created on: 12/Jul/19 22:30
Start Date: 12/Jul/19 22:30
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on issue #1063: HDDS-1775. Make 
OM KeyDeletingService compatible with HA model
URL: https://github.com/apache/hadoop/pull/1063#issuecomment-511052962
 
 
   Fixed failing tests and added unit tests.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 276175)
Time Spent: 1h 10m  (was: 1h)

> Make OM KeyDeletingService compatible with HA model
> ---
>
> Key: HDDS-1775
> URL: https://issues.apache.org/jira/browse/HDDS-1775
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Currently OM KeyDeletingService directly deletes all the keys in DeletedTable 
> after deleting the corresponding blocks through SCM. For HA compatibility, 
> the key purging should happen through the OM Ratis server. This Jira 
> introduces a PurgeKeys request in the OM protocol. This request will be 
> submitted to the OM's Ratis server after SCM deletes the blocks corresponding 
> to the deleted keys.
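
For illustration, a rough sketch of the flow described above; the request and 
method names here are assumptions for illustration, not the actual protocol 
definitions:

{code}
// Sketch: instead of deleting from DeletedTable directly, the OM builds a
// PurgeKeys request and submits it through its Ratis server, so the purge
// is replicated to all OMs in the HA quorum.
List<String> purgedKeys = deleteBlocksThroughScm(deletedTableKeys); // hypothetical helper

OMRequest purgeRequest = OMRequest.newBuilder()
    .setCmdType(Type.PurgeKeys)                       // assumed command type
    .setPurgeKeysRequest(
        PurgeKeysRequest.newBuilder().addAllKeys(purgedKeys))
    .build();

omRatisServer.submitRequest(purgeRequest);            // purge goes via Ratis
{code}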



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14595) HDFS-11848 breaks API compatibility

2019-07-12 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-14595:
--
Attachment: HDFS-14595.002.patch
Status: Patch Available  (was: In Progress)

Uploaded patch rev 002. Updated the test helper function verifyINodeLeaseCounts 
with the new API.

> HDFS-11848 breaks API compatibility
> ---
>
> Key: HDFS-14595
> URL: https://issues.apache.org/jira/browse/HDFS-14595
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.2, 3.2.0
>Reporter: Wei-Chiu Chuang
>Assignee: Siyao Meng
>Priority: Blocker
> Attachments: HDFS-14595.001.patch, HDFS-14595.002.patch, hadoop_ 
> 36e1870eab904d5a6f12ecfb1fdb52ca08d95ac5 to 
> b241194d56f97ee372cbec7062bcf155bc3df662 compatibility report.htm
>
>
> Our internal tool caught an API compatibility issue with HDFS-11848.
> HDFS-11848 adds an additional parameter to 
> DistributedFileSystem.listOpenFiles(), but it doesn't keep the existing API.
> This can cause issues when upgrading from Hadoop 2.9.0/2.8.3/3.0.0 to 
> 3.0.1/3.1.0 and above.
> Suggest:
> (1) Add back the old API (which was added in HDFS-10480), and mark it 
> deprecated.
> (2) Update the release doc to enforce running an API compatibility check for 
> each release.
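
For illustration, a minimal sketch of suggestion (1) inside 
DistributedFileSystem: restore the old zero-argument signature as a deprecated 
overload that delegates to the new API (the enum value used here is an 
assumption for illustration):

{code}
// Sketch: the HDFS-10480 signature is kept, marked deprecated, and simply
// forwards to the HDFS-11848 variant that takes an EnumSet of types.
@Deprecated
public RemoteIterator<OpenFileEntry> listOpenFiles() throws IOException {
  return listOpenFiles(
      EnumSet.of(OpenFilesIterator.OpenFilesType.ALL_OPEN_FILES));
}
{code}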



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14595) HDFS-11848 breaks API compatibility

2019-07-12 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-14595:
--
Status: In Progress  (was: Patch Available)

> HDFS-11848 breaks API compatibility
> ---
>
> Key: HDFS-14595
> URL: https://issues.apache.org/jira/browse/HDFS-14595
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.2, 3.2.0
>Reporter: Wei-Chiu Chuang
>Assignee: Siyao Meng
>Priority: Blocker
> Attachments: HDFS-14595.001.patch, hadoop_ 
> 36e1870eab904d5a6f12ecfb1fdb52ca08d95ac5 to 
> b241194d56f97ee372cbec7062bcf155bc3df662 compatibility report.htm
>
>
> Our internal tool caught an API compatibility issue with HDFS-11848.
> HDFS-11848 adds an additional parameter to 
> DistributedFileSystem.listOpenFiles(), but it doesn't keep the existing API.
> This can cause issues when upgrading from Hadoop 2.9.0/2.8.3/3.0.0 to 
> 3.0.1/3.1.0 and above.
> Suggest:
> (1) Add back the old API (which was added in HDFS-10480), and mark it 
> deprecated.
> (2) Update the release doc to enforce running an API compatibility check for 
> each release.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14579) In refreshNodes, avoid performing a DNS lookup while holding the write lock

2019-07-12 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16884171#comment-16884171
 ] 

Íñigo Goiri commented on HDFS-14579:


[~sodonnell], yes I think the lock is fine.
I think it might be worth opening a separate JIRA to do the DNS resolution in 
parallel.

> In refreshNodes, avoid performing a DNS lookup while holding the write lock
> ---
>
> Key: HDFS-14579
> URL: https://issues.apache.org/jira/browse/HDFS-14579
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-14579.001.patch
>
>
> When refreshNodes is called on a large cluster, or a cluster where DNS is not 
> performing well, it can cause the namenode to hang for a long time. This is 
> because the refreshNodes operation holds the global write lock while it is 
> running. Most of refreshNodes code is simple and hence fast, but 
> unfortunately it performs a DNS lookup for each host in the cluster while the 
> lock is held. 
> Right now, it calls:
> {code}
>   public void refreshNodes(final Configuration conf) throws IOException {
> refreshHostsReader(conf);
> namesystem.writeLock();
> try {
>   refreshDatanodes();
>   countSoftwareVersions();
> } finally {
>   namesystem.writeUnlock();
> }
>   }
> {code}
> The line refreshHostsReader(conf); reads the new config file and does a DNS 
> lookup on each entry - the write lock is not held here. Then the main work is 
> done here:
> {code}
>   private void refreshDatanodes() {
> final Map<String, DatanodeDescriptor> copy;
> synchronized (this) {
>   copy = new HashMap<>(datanodeMap);
> }
> for (DatanodeDescriptor node : copy.values()) {
>   // Check if not include.
>   if (!hostConfigManager.isIncluded(node)) {
> node.setDisallowed(true);
>   } else {
> long maintenanceExpireTimeInMS =
> hostConfigManager.getMaintenanceExpirationTimeInMS(node);
> if (node.maintenanceNotExpired(maintenanceExpireTimeInMS)) {
>   datanodeAdminManager.startMaintenance(
>   node, maintenanceExpireTimeInMS);
> } else if (hostConfigManager.isExcluded(node)) {
>   datanodeAdminManager.startDecommission(node);
> } else {
>   datanodeAdminManager.stopMaintenance(node);
>   datanodeAdminManager.stopDecommission(node);
> }
>   }
>   node.setUpgradeDomain(hostConfigManager.getUpgradeDomain(node));
> }
>   }
> {code}
> All the isIncluded() and isExcluded() methods call node.getResolvedAddress(), 
> which does the DNS lookup. We could probably change things to perform all the 
> DNS lookups outside of the write lock, and then take the lock and process the 
> nodes. Also change or overload isIncluded() etc. to take the InetAddress 
> rather than the DatanodeDescriptor.
> It would not shorten the time the operation takes to run overall, but it 
> would move the long duration out of the write lock and avoid blocking the 
> namenode for the entire time.
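
For illustration, a minimal sketch of the restructuring described above; the 
overloads taking a pre-resolved address are assumptions, not existing APIs, and 
the locking is shown inline for clarity:

{code}
// Sketch of a two-phase refreshDatanodes(): resolve every node's address
// before taking the write lock, so the slow DNS work happens unlocked and
// only cheap bookkeeping runs under the lock.
private void refreshDatanodes() {
  final Map<String, DatanodeDescriptor> copy;
  synchronized (this) {
    copy = new HashMap<>(datanodeMap);
  }

  // Phase 1: DNS lookups for every node, performed without the write lock.
  final Map<DatanodeDescriptor, InetSocketAddress> resolved = new HashMap<>();
  for (DatanodeDescriptor node : copy.values()) {
    resolved.put(node, node.getResolvedAddress());
  }

  // Phase 2: fast updates under the write lock, no DNS involved.
  namesystem.writeLock();
  try {
    for (Map.Entry<DatanodeDescriptor, InetSocketAddress> e : resolved.entrySet()) {
      if (!hostConfigManager.isIncluded(e.getValue())) {      // hypothetical overload
        e.getKey().setDisallowed(true);
      } else if (hostConfigManager.isExcluded(e.getValue())) { // hypothetical overload
        datanodeAdminManager.startDecommission(e.getKey());
      } else {
        datanodeAdminManager.stopDecommission(e.getKey());
      }
      // Maintenance handling elided for brevity.
    }
  } finally {
    namesystem.writeUnlock();
  }
}
{code}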



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14593) RBF: Implement deletion feature for expired records in State Store

2019-07-12 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16884170#comment-16884170
 ] 

Íñigo Goiri commented on HDFS-14593:


The tests related to security are expected to fail; there is a JIRA for fixing 
it already.
TestRouterRpc* looks suspicious though.

> RBF: Implement deletion feature for expired records in State Store
> --
>
> Key: HDFS-14593
> URL: https://issues.apache.org/jira/browse/HDFS-14593
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-14593.001.patch, HDFS-14593.002.patch, 
> HDFS-14593.003.patch, HDFS-14593.004.patch, HDFS-14593.005.patch, 
> HDFS-14593.006.patch, HDFS-14593.007.patch, HDFS-14593.008.patch, 
> HDFS-14593.009.patch, HDFS-14593.010.patch, HDFS-14593.011.patch
>
>
> Currently, every router record seems to remain in the Router Information eternally.
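
For illustration, a rough sketch of what an expiration-based deletion sweep 
could look like (the field and method names are assumptions, not the actual 
patch):

{code}
// Sketch: periodically drop State Store records that have been expired for
// longer than a configured deletion window.
long now = Time.now();
for (RouterState record : getCachedRecords(RouterState.class)) { // hypothetical accessor
  long sinceLastUpdate = now - record.getDateModified();
  if (sinceLastUpdate > expirationMs + deletionMs) {
    remove(record); // the router has been expired long enough; delete it
  }
}
{code}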



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14625) Make DefaultAuditLogger class in FSnamesystem to Abstract

2019-07-12 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16884169#comment-16884169
 ] 

Íñigo Goiri commented on HDFS-14625:


Having the fields as protected is better than having them public.
Checkstyle is asking for private, but as mentioned before, that would be too 
disruptive.
I'm fine with the approach in  [^HDFS-14625.003.patch].
+1

> Make DefaultAuditLogger class in FSnamesystem to Abstract 
> --
>
> Key: HDFS-14625
> URL: https://issues.apache.org/jira/browse/HDFS-14625
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14625 (1).patch, HDFS-14625(2).patch, 
> HDFS-14625.003.patch, HDFS-14625.patch
>
>
> As per +HDFS-13270+ (Audit logger for Router), we can make DefaultAuditLogger 
> in FSNamesystem abstract and common
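
For illustration, a rough sketch of the shape such a refactoring could take 
(the field and method names are assumptions, not the actual patch):

{code}
// Sketch: DefaultAuditLogger becomes an abstract base class so that the
// NameNode and the Router can share the common audit plumbing.
public abstract static class DefaultAuditLogger extends HdfsAuditLogger {
  // Protected rather than public: visible to subclasses, hidden elsewhere.
  protected volatile boolean isCallerContextEnabled;
  protected int callerContextMaxLen;
  protected int callerSignatureMaxLen;

  // Each subsystem supplies its own formatting and destination.
  public abstract void logAuditMessage(String message);
}
{code}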



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1773) Add intermittent IO disk test to fault injection test

2019-07-12 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16884164#comment-16884164
 ] 

Eric Yang commented on HDDS-1773:
-

{quote}I agree that it's easy. The problem is that it can't simulate certain 
types of disk failures.{quote}

Can you give an example?

> Add intermittent IO disk test to fault injection test
> -
>
> Key: HDDS-1773
> URL: https://issues.apache.org/jira/browse/HDDS-1773
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Eric Yang
>Priority: Major
> Attachments: HDDS-1773.001.patch
>
>
> Disk errors can also be simulated by setting the cgroup blkio rate to 0 while 
> the Ozone cluster is running.  
> This test will be added to the corruption test project and will only be 
> performed if there is write access to the host cgroup to control the disk IO 
> throttle.
> Expected result:
> When a datanode becomes unresponsive due to slow IO, SCM must flag the node 
> as unhealthy.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1774) Add disk hang test to fault injection test

2019-07-12 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16884161#comment-16884161
 ] 

Eric Yang commented on HDDS-1774:
-

Patch 001 is based on HDDS-1772 patch 3.  This patch adds a disk hang test that 
throttles the datanode data disk availability, and runs the standard upload and 
download tests.

> Add disk hang test to fault injection test
> --
>
> Key: HDDS-1774
> URL: https://issues.apache.org/jira/browse/HDDS-1774
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Eric Yang
>Priority: Major
> Attachments: HDDS-1774.001.patch
>
>
> When a disk is corrupted, it may appear to hang when accessing data.  One 
> simulation that can be performed is to set the disk IO throughput to 0 
> bytes/sec to simulate a disk hang.  The Ozone file system client can detect 
> the disk access timeout and proceed to read/write data on another datanode.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1774) Add disk hang test to fault injection test

2019-07-12 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HDDS-1774:

Attachment: HDDS-1774.001.patch

> Add disk hang test to fault injection test
> --
>
> Key: HDDS-1774
> URL: https://issues.apache.org/jira/browse/HDDS-1774
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Eric Yang
>Priority: Major
> Attachments: HDDS-1774.001.patch
>
>
> When a disk is corrupted, it may appear to hang when accessing data.  One 
> simulation that can be performed is to set the disk IO throughput to 0 
> bytes/sec to simulate a disk hang.  The Ozone file system client can detect 
> the disk access timeout and proceed to read/write data on another datanode.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14625) Make DefaultAuditLogger class in FSnamesystem to Abstract

2019-07-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16884152#comment-16884152
 ] 

Hadoop QA commented on HDFS-14625:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
52s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 7 new + 177 unchanged - 0 fixed = 184 total (was 177) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}103m 21s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}164m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14625 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12974554/HDFS-14625.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5d676e7bdc23 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4a70a0d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27218/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27218/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27218/testReport/ |
| Max. process+thread count | 2814 (vs. ulimit of 5500) |

[jira] [Commented] (HDDS-1773) Add intermittent IO disk test to fault injection test

2019-07-12 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16884151#comment-16884151
 ] 

Elek, Marton commented on HDDS-1773:


{quote}This is most obvious and transparent approach to simulate intermittent 
disk failure. What could be easier than no additional code to inject faults?
{quote}
I agree that it's easy. The problem is that it can't simulate certain types of 
disk failures.

> Add intermittent IO disk test to fault injection test
> -
>
> Key: HDDS-1773
> URL: https://issues.apache.org/jira/browse/HDDS-1773
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Eric Yang
>Priority: Major
> Attachments: HDDS-1773.001.patch
>
>
> Disk errors can also be simulated by setting the cgroup blkio rate to 0 while 
> the Ozone cluster is running.  
> This test will be added to the corruption test project and will only be 
> performed if there is write access to the host cgroup to control the disk IO 
> throttle.
> Expected result:
> When a datanode becomes unresponsive due to slow IO, SCM must flag the node 
> as unhealthy.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1773) Add intermittent IO disk test to fault injection test

2019-07-12 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16884149#comment-16884149
 ] 

Elek, Marton commented on HDDS-1773:


{quote}We can stay disagree on byteman issue. Some ASF legal said it is ok to 
use GPL tools during build/test time, and leave no trace of it in release 
package. In Ozone's implementation, it references GPL tool in release package, 
and run GPL test tool from release package. This is mis-quoting ASF legal. This 
is the reason that I am not comfortable with this approach.
{quote}
Please don't mix the current usage of byteman in the hadoop-runner with the 
proposed solution to use it during the build.

> Add intermittent IO disk test to fault injection test
> -
>
> Key: HDDS-1773
> URL: https://issues.apache.org/jira/browse/HDDS-1773
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Eric Yang
>Priority: Major
> Attachments: HDDS-1773.001.patch
>
>
> Disk errors can also be simulated by setting the cgroup blkio rate to 0 while 
> the Ozone cluster is running.  
> This test will be added to the corruption test project and will only be 
> performed if there is write access to the host cgroup to control the disk IO 
> throttle.
> Expected result:
> When a datanode becomes unresponsive due to slow IO, SCM must flag the node 
> as unhealthy.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14573) Backport Standby Read to branch-3

2019-07-12 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16884143#comment-16884143
 ] 

Chen Liang commented on HDFS-14573:
---

Committed the 3.1 v003 patch to branch-3.1. Working on the 3.0 backport.

> Backport Standby Read to branch-3
> -
>
> Key: HDFS-14573
> URL: https://issues.apache.org/jira/browse/HDFS-14573
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14573-branch-3.0.001.patch, 
> HDFS-14573-branch-3.1.001.patch, HDFS-14573-branch-3.1.002.patch, 
> HDFS-14573-branch-3.1.003.patch, HDFS-14573-branch-3.2.001.patch, 
> HDFS-14573-branch-3.2.002.patch, HDFS-14573-branch-3.2.003.patch, 
> HDFS-14573-branch-3.2.004.patch
>
>
> This Jira tracks backporting the consistent-read-from-standby feature 
> (HDFS-12943) to branch-3.x, including 3.0, 3.1, and 3.2. This is required for 
> backporting to branch-2.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14593) RBF: Implement deletion feature for expired records in State Store

2019-07-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16884100#comment-16884100
 ] 

Hadoop QA commented on HDFS-14593:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 25m 11s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterRpc |
|   | hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup |
|   | hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken |
|   | hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14593 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12974547/HDFS-14593.011.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 17374ab71f0a 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4a70a0d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27219/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |

[jira] [Work logged] (HDDS-1779) TestWatchForCommit tests are flaky

2019-07-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1779?focusedWorklogId=276080=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-276080
 ]

ASF GitHub Bot logged work on HDDS-1779:


Author: ASF GitHub Bot
Created on: 12/Jul/19 19:07
Start Date: 12/Jul/19 19:07
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1071: HDDS-1779. 
TestWatchForCommit tests are flaky.
URL: https://github.com/apache/hadoop/pull/1071#issuecomment-510999876
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 57 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 473 | trunk passed |
   | +1 | compile | 269 | trunk passed |
   | +1 | checkstyle | 74 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 864 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 163 | trunk passed |
   | 0 | spotbugs | 316 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 505 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 450 | the patch passed |
   | +1 | compile | 268 | the patch passed |
   | +1 | javac | 268 | the patch passed |
   | -0 | checkstyle | 42 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 682 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 162 | the patch passed |
   | +1 | findbugs | 520 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 338 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2469 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 7595 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1071/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1071 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux d6815f71900a 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4a70a0d |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1071/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1071/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1071/1/testReport/ |
   | Max. process+thread count | 5231 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/integration-test U: 
hadoop-ozone/integration-test |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1071/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 276080)
Time Spent: 1h 10m  (was: 1h)

> TestWatchForCommit tests are flaky
> --
>
> Key: HDDS-1779
> URL: 

[jira] [Commented] (HDFS-12746) DataNode Audit Logger

2019-07-12 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16884093#comment-16884093
 ] 

Erik Krogen commented on HDFS-12746:


Cool, thanks for the updates [~anu] and [~dineshchitlangia]!

> DataNode Audit Logger
> -
>
> Key: HDFS-12746
> URL: https://issues.apache.org/jira/browse/HDFS-12746
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, logging
>Reporter: Erik Krogen
>Assignee: hemanthboyina
>Priority: Major
>
> I would like to discuss adding in an audit logger for the Datanodes. We have 
> audit logging on pretty much all other components: Namenode, ResourceManager, 
> NodeManager. It seems the DN should have a similar concept to log, at 
> minimum, all block reads/writes. I think all of the interesting information 
> does already appear in the DN logs at INFO level but it would be nice to have 
> a specific audit class that this gets logged through, a la {{RMAuditLogger}} 
> and {{NMAuditLogger}}, to enable special handling.
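
For illustration, a minimal sketch of what such an audit class could look like 
(the class and method names are assumptions, modeled on RMAuditLogger):

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical DN audit logger in the style of RMAuditLogger/NMAuditLogger;
// nothing like this exists in the DataNode yet.
public final class DNAuditLogger {
  private static final Logger AUDIT = LoggerFactory.getLogger("datanode.audit");

  private DNAuditLogger() {}

  // Key=value, tab-separated fields keep the log easily parseable.
  public static void logBlockEvent(String cmd, String clientAddr,
      String blockId, boolean success) {
    AUDIT.info("cmd={}\tclient={}\tblock={}\tsuccess={}",
        cmd, clientAddr, blockId, success);
  }
}
{code}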



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12746) DataNode Audit Logger

2019-07-12 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16884086#comment-16884086
 ] 

Dinesh Chitlangia commented on HDFS-12746:
--

{quote}if I recall correctly, Dinesh Chitlangia has also written an Audit 
parser and it ships as part of Ozone.{quote}
https://issues.apache.org/jira/browse/HDDS-393
The design was based on one of the custom scripts originally written by [~arp] 
to process Namenode audit logs.

{quote}the lack of easily parseable format on the NN's audit logs has been an 
annoyance for quite some time.{quote} +1 [~xkrogen]

> DataNode Audit Logger
> -
>
> Key: HDFS-12746
> URL: https://issues.apache.org/jira/browse/HDFS-12746
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, logging
>Reporter: Erik Krogen
>Assignee: hemanthboyina
>Priority: Major
>
> I would like to discuss adding in an audit logger for the Datanodes. We have 
> audit logging on pretty much all other components: Namenode, ResourceManager, 
> NodeManager. It seems the DN should have a similar concept to log, at 
> minimum, all block reads/writes. I think all of the interesting information 
> does already appear in the DN logs at INFO level but it would be nice to have 
> a specific audit class that this gets logged through, a la {{RMAuditLogger}} 
> and {{NMAuditLogger}}, to enable special handling.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1772) Add disk full test to fault injection test

2019-07-12 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16884084#comment-16884084
 ] 

Eric Yang commented on HDDS-1772:
-

Rebased patch 003 onto HDDS-1771 patch 003.

> Add disk full test to fault injection test
> --
>
> Key: HDDS-1772
> URL: https://issues.apache.org/jira/browse/HDDS-1772
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Eric Yang
>Priority: Major
> Attachments: HDDS-1772.001.patch, HDDS-1772.002.patch, 
> HDDS-1772.003.patch
>
>
> In the read-only test, one of the simulations to verify is the data disk 
> becoming full.  This can be tested by using a small Docker data disk to 
> simulate a full disk.  When the data disk is full, Ozone should continue to 
> operate and provide read access to the Ozone file system.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1772) Add disk full test to fault injection test

2019-07-12 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HDDS-1772:

Attachment: HDDS-1772.003.patch

> Add disk full test to fault injection test
> --
>
> Key: HDDS-1772
> URL: https://issues.apache.org/jira/browse/HDDS-1772
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Eric Yang
>Priority: Major
> Attachments: HDDS-1772.001.patch, HDDS-1772.002.patch, 
> HDDS-1772.003.patch
>
>
> In the read-only test, one of the simulations to verify is the data disk 
> becoming full.  This can be tested by using a small Docker data disk to 
> simulate a full disk.  When the data disk is full, Ozone should continue to 
> operate and provide read access to the Ozone file system.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12746) DataNode Audit Logger

2019-07-12 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16884080#comment-16884080
 ] 

Anu Engineer edited comment on HDFS-12746 at 7/12/19 6:44 PM:
--

[~xkrogen] we took your advice and created a DN Audit log on the Datanode side 
in Ozone. They are in the format that you suggested that we follow. The work 
was done by [~dineshchitlangia] from the Ozone team.

Here is the Jira which does it for Ozone, 
https://issues.apache.org/jira/browse/HDDS-120

Just updating here for reference.  Thanks

if I recall correctly, [~dineshchitlangia] has also written an Audit parser and 
it ships as part of Ozone.


was (Author: anu):
[~xkrogen] we took your advice and created a DN Audit log on the Datanode side 
in Ozone. They are in the format that you suggested that we follow. The work 
was done by [~dineshchitlangia] from the Ozone team.

Here is the Jira which does it for Ozone, 
https://issues.apache.org/jira/browse/HDDS-120

Just updating here for reference.  Thanks

 

> DataNode Audit Logger
> -
>
> Key: HDFS-12746
> URL: https://issues.apache.org/jira/browse/HDFS-12746
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, logging
>Reporter: Erik Krogen
>Assignee: hemanthboyina
>Priority: Major
>
> I would like to discuss adding in an audit logger for the Datanodes. We have 
> audit logging on pretty much all other components: Namenode, ResourceManager, 
> NodeManager. It seems the DN should have a similar concept to log, at 
> minimum, all block reads/writes. I think all of the interesting information 
> does already appear in the DN logs at INFO level but it would be nice to have 
> a specific audit class that this gets logged through, a la {{RMAuditLogger}} 
> and {{NMAuditLogger}}, to enable special handling.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12746) DataNode Audit Logger

2019-07-12 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16884080#comment-16884080
 ] 

Anu Engineer commented on HDFS-12746:
-

[~xkrogen] we took your advice and created a DN Audit log on the Datanode side 
in Ozone. They are in the format that you suggested that we follow. The work 
was done by [~dineshchitlangia] from the Ozone team.

Here is the Jira which does it for Ozone, 
https://issues.apache.org/jira/browse/HDDS-120

Just updating here for reference.  Thanks

 

> DataNode Audit Logger
> -
>
> Key: HDFS-12746
> URL: https://issues.apache.org/jira/browse/HDFS-12746
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, logging
>Reporter: Erik Krogen
>Assignee: hemanthboyina
>Priority: Major
>
> I would like to discuss adding in an audit logger for the Datanodes. We have 
> audit logging on pretty much all other components: Namenode, ResourceManager, 
> NodeManager. It seems the DN should have a similar concept to log, at 
> minimum, all block reads/writes. I think all of the interesting information 
> does already appear in the DN logs at INFO level but it would be nice to have 
> a specific audit class that this gets logged through, a la {{RMAuditLogger}} 
> and {{NMAuditLogger}}, to enable special handling.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-11994) Hadoop NameNode Web UI throws "Failed to retrieve data from /jmx?qry=java.lang:type=Memory, cause:" When running behind a proxy

2019-07-12 Thread hemanthboyina (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-11994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina reassigned HDFS-11994:


Assignee: hemanthboyina

> Hadoop NameNode Web UI throws "Failed to retrieve data from 
> /jmx?qry=java.lang:type=Memory, cause:" When running behind a proxy
> ---
>
> Key: HDFS-11994
> URL: https://issues.apache.org/jira/browse/HDFS-11994
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ui
>Affects Versions: 2.8.0
> Environment: CentOS release 6.9 (Final)
> OpenJDK version "1.8.0_131"
> Hadoop 2.8.0
>Reporter: Sergey Bahchissaraitsev
>Assignee: hemanthboyina
>Priority: Minor
>
> When running behind a proxy, the Hadoop Web UI throws the following exception 
> because it tries to make Ajax requests to the base server URL:
> {code:java}
> Failed to retrieve data from /jmx?qry=java.lang:type=Memory, cause:
> {code}
> A good solution could be to adjust the Ajax URL based on the actual window 
> URL using the jQuery Ajax "beforeSend" pre-request callback function: 
> http://api.jquery.com/jquery.ajax/



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12746) DataNode Audit Logger

2019-07-12 Thread hemanthboyina (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina reassigned HDFS-12746:


Assignee: hemanthboyina

> DataNode Audit Logger
> -
>
> Key: HDFS-12746
> URL: https://issues.apache.org/jira/browse/HDFS-12746
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, logging
>Reporter: Erik Krogen
>Assignee: hemanthboyina
>Priority: Major
>
> I would like to discuss adding in an audit logger for the Datanodes. We have 
> audit logging on pretty much all other components: Namenode, ResourceManager, 
> NodeManager. It seems the DN should have a similar concept to log, at 
> minimum, all block reads/writes. I think all of the interesting information 
> does already appear in the DN logs at INFO level but it would be nice to have 
> a specific audit class that this gets logged through, a la {{RMAuditLogger}} 
> and {{NMAuditLogger}}, to enable special handling.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1554) Create disk tests for fault injection test

2019-07-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16884020#comment-16884020
 ] 

Hadoop QA commented on HDDS-1554:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
50s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} yamllint {color} | {color:blue}  0m  
0s{color} | {color:blue} yamllint was not available. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
1s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 23 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
4s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  5m 
22s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
34s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
17s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 54s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
56s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 25m 58s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 6s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m 56s{color} | 
{color:black} {color} |
\\

[jira] [Commented] (HDFS-14625) Make DefaultAuditLogger class in FSnamesystem to Abstract

2019-07-12 Thread hemanthboyina (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16884019#comment-16884019
 ] 

hemanthboyina commented on HDFS-14625:
--

Thanks [~elgoiri].
The checkstyle issues are about the visibility modifier; we changed the access 
modifiers of the fields from public to protected.
Do we need to change them back to public?


I have fixed 2 other checkstyle issues and submitted the patch.

> Make DefaultAuditLogger class in FSnamesystem to Abstract 
> --
>
> Key: HDFS-14625
> URL: https://issues.apache.org/jira/browse/HDFS-14625
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14625 (1).patch, HDFS-14625(2).patch, 
> HDFS-14625.003.patch, HDFS-14625.patch
>
>
> As per +HDFS-13270+ (Audit logger for Router), we can make DefaultAuditLogger 
> in FSNamesystem abstract and common



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14625) Make DefaultAuditLogger class in FSnamesystem to Abstract

2019-07-12 Thread hemanthboyina (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-14625:
-
Attachment: HDFS-14625.003.patch

> Make DefaultAuditLogger class in FSnamesystem to Abstract 
> --
>
> Key: HDFS-14625
> URL: https://issues.apache.org/jira/browse/HDFS-14625
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14625 (1).patch, HDFS-14625(2).patch, 
> HDFS-14625.003.patch, HDFS-14625.patch
>
>
> As per +HDFS-13270+ (Audit logger for Router), we can make DefaultAuditLogger 
> in FSNamesystem abstract and common



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1771) Add slow IO disk test to fault injection test

2019-07-12 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16884016#comment-16884016
 ] 

Eric Yang commented on HDDS-1771:
-

Patch 3 rebased to HDDS-1554 patch 13.
The rate can be customized by using:

{code}
mvn clean verify -Ddisk.read.bps=1mb -Ddisk.read.iops=120 -Ddisk.write.bps=300k 
-Ddisk.write.iops=30 -Pit,docker-build
{code}

This will exercise the test with:

# read rate: 1mb/s, read ops: 120/s
# write rate: 300k/s, write ops: 30/s

> Add slow IO disk test to fault injection test
> -
>
> Key: HDDS-1771
> URL: https://issues.apache.org/jira/browse/HDDS-1771
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Eric Yang
>Priority: Major
> Attachments: HDDS-1771.001.patch, HDDS-1771.002.patch, 
> HDDS-1771.003.patch
>
>
> In fault injection testing, one possible simulation is to create slow disk IO.  
> This test can assist in developing a set of timing profiles that work for an 
> Ozone cluster.  When we write to a file, the data travels across a number of 
> buffers and caches before it is effectively written to the disk.  By 
> controlling the cgroup blkio rate in the Linux kernel, we can simulate slow 
> disk reads and writes.  Docker provides the following parameters to control 
> the cgroup:
> {code}
> --device-read-bps=""
> --device-write-bps=""
> --device-read-iops=""
> --device-write-iops=""
> {code}
> The test will be added to the read/write test, with the docker-compose file 
> passing these parameters to exercise the timing profiles.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1771) Add slow IO disk test to fault injection test

2019-07-12 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HDDS-1771:

Attachment: HDDS-1771.003.patch

> Add slow IO disk test to fault injection test
> -
>
> Key: HDDS-1771
> URL: https://issues.apache.org/jira/browse/HDDS-1771
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Eric Yang
>Priority: Major
> Attachments: HDDS-1771.001.patch, HDDS-1771.002.patch, 
> HDDS-1771.003.patch
>
>
> In fault injection testing, one possible simulation is to create slow disk IO.  
> This test can assist in developing a set of timing profiles that work for an 
> Ozone cluster.  When we write to a file, the data travels across a number of 
> buffers and caches before it is effectively written to the disk.  By 
> controlling the cgroup blkio rate in the Linux kernel, we can simulate slow 
> disk reads and writes.  Docker provides the following parameters to control 
> the cgroup:
> {code}
> --device-read-bps=""
> --device-write-bps=""
> --device-read-iops=""
> --device-write-iops=""
> {code}
> The test will be added to the read/write test, with the docker-compose file 
> passing these parameters to exercise the timing profiles.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1767) ContainerStateMachine should have its own executors for executing applyTransaction calls

2019-07-12 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-1767:
--
Status: Patch Available  (was: Open)

> ContainerStateMachine should have its own executors for executing 
> applyTransaction calls
> 
>
> Key: HDDS-1767
> URL: https://issues.apache.org/jira/browse/HDDS-1767
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: pull-request-available
>
> Currently ContainerStateMachine uses the executors provided by
> XceiverServerRatis for executing applyTransaction calls. This results in two
> or more ContainerStateMachines sharing the same set of executors, so delay or
> load in one ContainerStateMachine would adversely affect the performance of
> the other state machines. It is better to have a separate set of executors
> for each ContainerStateMachine.
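
A minimal sketch of the proposed direction, with illustrative names (this is
not the actual patch):

{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: each state machine owns its executors instead of sharing
// the ones provided by XceiverServerRatis.
class PerStateMachineExecutors {
  private final ExecutorService[] executors;

  PerStateMachineExecutors(int poolSize) {
    executors = new ExecutorService[poolSize];
    for (int i = 0; i < poolSize; i++) {
      // single-threaded stripes preserve per-container apply order
      executors[i] = Executors.newSingleThreadExecutor();
    }
  }

  CompletableFuture<Void> applyTransaction(long containerId, Runnable apply) {
    // route by container id so a busy container cannot starve the rest
    int idx = (int) Math.floorMod(containerId, (long) executors.length);
    return CompletableFuture.runAsync(apply, executors[idx]);
  }
}
{code}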



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14593) RBF: Implement deletion feature for expired records in State Store

2019-07-12 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883975#comment-16883975
 ] 

Takanobu Asanuma commented on HDFS-14593:
-

011.patch fixes a checkstyle issue.

> RBF: Implement deletion feature for expired records in State Store
> --
>
> Key: HDFS-14593
> URL: https://issues.apache.org/jira/browse/HDFS-14593
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-14593.001.patch, HDFS-14593.002.patch, 
> HDFS-14593.003.patch, HDFS-14593.004.patch, HDFS-14593.005.patch, 
> HDFS-14593.006.patch, HDFS-14593.007.patch, HDFS-14593.008.patch, 
> HDFS-14593.009.patch, HDFS-14593.010.patch, HDFS-14593.011.patch
>
>
> Currently, every router record seems to remain in the Router Information store forever.
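
As an illustration of the kind of cleanup this implies (the class, method
names, and the seven-day window are assumptions, not the State Store API):

{code:java}
import java.util.List;
import java.util.concurrent.TimeUnit;
import java.util.function.ToLongFunction;

// Sketch: periodically drop records whose last heartbeat is older
// than the expiration window.
class ExpiredRecordCleaner<R> {
  private final long expirationMs = TimeUnit.DAYS.toMillis(7); // assumed policy

  void purge(List<R> records, ToLongFunction<R> lastHeartbeatMs) {
    long now = System.currentTimeMillis();
    records.removeIf(r -> now - lastHeartbeatMs.applyAsLong(r) > expirationMs);
  }
}
{code}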



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14593) RBF: Implement deletion feature for expired records in State Store

2019-07-12 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-14593:

Attachment: HDFS-14593.011.patch

> RBF: Implement deletion feature for expired records in State Store
> --
>
> Key: HDFS-14593
> URL: https://issues.apache.org/jira/browse/HDFS-14593
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-14593.001.patch, HDFS-14593.002.patch, 
> HDFS-14593.003.patch, HDFS-14593.004.patch, HDFS-14593.005.patch, 
> HDFS-14593.006.patch, HDFS-14593.007.patch, HDFS-14593.008.patch, 
> HDFS-14593.009.patch, HDFS-14593.010.patch, HDFS-14593.011.patch
>
>
> Currently, every router record seems to remain in the Router Information store forever.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1735) Create separated unit and integration test executor dev-support scripts

2019-07-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883961#comment-16883961
 ] 

Hudson commented on HDDS-1735:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16909 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16909/])
HDDS-1735. Create separate unit and integration test executor (elek: rev 
0bae9e8ec8b53a3b484eaa01a3fa3f177d56b3e4)
* (edit) hadoop-ozone/dev-support/checks/author.sh
* (edit) hadoop-ozone/dev-support/checks/checkstyle.sh
* (edit) hadoop-ozone/dev-support/checks/unit.sh
* (edit) hadoop-ozone/dev-support/checks/acceptance.sh
* (edit) hadoop-ozone/dev-support/checks/build.sh
* (edit) hadoop-ozone/dev-support/checks/rat.sh
* (add) hadoop-ozone/dev-support/checks/integration.sh
* (edit) hadoop-ozone/dev-support/checks/findbugs.sh


> Create separated unit and integration test executor dev-support scripts
> ---
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screen Shot 2019-07-02 at 3.25.33 PM.png
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper
> scripts to execute different types of testing (findbugs, rat, unit, build).
> They define in a simple way how tests should be executed, with the following
> contract:
>  * the problems should be printed out to the console
>  * in case of test failure a non-zero exit code should be used
>  
> The tests are working well (in fact I have some experiments with executing
> these scripts on k8s and argo where all the shell scripts are executed in
> parallel) but we need some updates:
>  1. Most important: the unit tests and integration tests should be separated.
> Integration tests are more flaky and it's better to have a way to run only
> the normal unit tests.
>  2. As HDDS-1115 introduced a pom.ozone.xml, it's better to use it instead of
> the magical "-am -pl hadoop-ozone-dist" trick.
>  3. To make it possible to run the blockade tests in containers we should use
> the -T flag with docker-compose.
>  4. checkstyle violations should be printed out to the console.
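
A minimal sketch of a script honouring the contract above (the module
selection and the grep patterns are assumptions, not the committed scripts):

{code}
#!/usr/bin/env bash
# Run one class of tests, print problems to the console,
# and exit non-zero if anything failed.
set -u
mvn -B -fn test -pl :hadoop-ozone-integration-test -am | tee /tmp/unit.out
if grep -q -e 'BUILD FAILURE' -e 'There are test failures' /tmp/unit.out; then
  exit 1
fi
exit 0
{code}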



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1735) Create separated unit and integration test executor dev-support scripts

2019-07-12 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1735:
---
   Resolution: Fixed
Fix Version/s: 0.5.0
   Status: Resolved  (was: Patch Available)

Thank you, I have committed this to trunk. If you like, I can cherry-pick it 
to the 0.4.1 branch. Please let me know. [~eyang] Thanks for the comments.

> Create separated unit and integration test executor dev-support scripts
> ---
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: Screen Shot 2019-07-02 at 3.25.33 PM.png
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper
> scripts to execute different types of testing (findbugs, rat, unit, build).
> They define in a simple way how tests should be executed, with the following
> contract:
>  * the problems should be printed out to the console
>  * in case of test failure a non-zero exit code should be used
>  
> The tests are working well (in fact I have some experiments with executing
> these scripts on k8s and argo where all the shell scripts are executed in
> parallel) but we need some updates:
>  1. Most important: the unit tests and integration tests should be separated.
> Integration tests are more flaky and it's better to have a way to run only
> the normal unit tests.
>  2. As HDDS-1115 introduced a pom.ozone.xml, it's better to use it instead of
> the magical "-am -pl hadoop-ozone-dist" trick.
>  3. To make it possible to run the blockade tests in containers we should use
> the -T flag with docker-compose.
>  4. checkstyle violations should be printed out to the console.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1735) Create separated unit and integration test executor dev-support scripts

2019-07-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1735?focusedWorklogId=276012&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-276012
 ]

ASF GitHub Bot logged work on HDDS-1735:


Author: ASF GitHub Bot
Created on: 12/Jul/19 16:25
Start Date: 12/Jul/19 16:25
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1035: HDDS-1735. 
Create separate unit and integration test executor dev-support script
URL: https://github.com/apache/hadoop/pull/1035
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 276012)
Time Spent: 2.5h  (was: 2h 20m)

> Create separated unit and integration test executor dev-support scripts
> ---
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screen Shot 2019-07-02 at 3.25.33 PM.png
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper
> scripts to execute different types of testing (findbugs, rat, unit, build).
> They define in a simple way how tests should be executed, with the following
> contract:
>  * the problems should be printed out to the console
>  * in case of test failure a non-zero exit code should be used
>  
> The tests are working well (in fact I have some experiments with executing
> these scripts on k8s and argo where all the shell scripts are executed in
> parallel) but we need some updates:
>  1. Most important: the unit tests and integration tests should be separated.
> Integration tests are more flaky and it's better to have a way to run only
> the normal unit tests.
>  2. As HDDS-1115 introduced a pom.ozone.xml, it's better to use it instead of
> the magical "-am -pl hadoop-ozone-dist" trick.
>  3. To make it possible to run the blockade tests in containers we should use
> the -T flag with docker-compose.
>  4. checkstyle violations should be printed out to the console.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1773) Add intermittent IO disk test to fault injection test

2019-07-12 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883956#comment-16883956
 ] 

Eric Yang commented on HDDS-1773:
-

{quote}I am 100% sure that it's allowed to use at build/test time. There is no 
restriction about what can you do in your memory. Restrictions applies to the 
release packages.{quote}

We can agree to disagree on the byteman issue.  ASF legal said it is OK to use 
GPL tools during build/test time, provided they leave no trace in the release 
package.  In Ozone's implementation, the release package references the GPL 
tool and runs the GPL test tool from the release package.  This misapplies the 
ASF legal guidance, and it is the reason I am not comfortable with this 
approach.

{quote}Interesting idea, and can be useful in some cases but it doesn't help us 
to easily inject random read/write slowness/failures during tests.{quote}

Sorry, I don't understand the reply.  Please define "easily inject".  Processes 
will try to commit data to a disk sector that contains holes.  Reads and writes 
of the same file have a random chance of hitting the faulty sector.  This is 
the most obvious and transparent approach to simulating intermittent disk 
failure.  What could be easier than injecting faults with no additional code?

> Add intermittent IO disk test to fault injection test
> -
>
> Key: HDDS-1773
> URL: https://issues.apache.org/jira/browse/HDDS-1773
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Eric Yang
>Priority: Major
> Attachments: HDDS-1773.001.patch
>
>
> Disk errors can also be simulated by setting the cgroup blkio rate to 0 while
> the Ozone cluster is running.
> This test will be added to the corruption test project, and it will only be
> performed if there is write access to the host cgroup to control the
> throttling of disk IO.
> Expected result:
> When a datanode becomes unresponsive due to slow IO, SCM must flag the node
> as unhealthy.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1785) OOM error in Freon due to the concurrency handling

2019-07-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1785?focusedWorklogId=276006&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-276006
 ]

ASF GitHub Bot logged work on HDDS-1785:


Author: ASF GitHub Bot
Created on: 12/Jul/19 16:04
Start Date: 12/Jul/19 16:04
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1085: HDDS-1785. OOM 
error in Freon due to the concurrency handling
URL: https://github.com/apache/hadoop/pull/1085#issuecomment-510942107
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 71 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 482 | trunk passed |
   | +1 | compile | 248 | trunk passed |
   | +1 | checkstyle | 68 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 905 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 154 | trunk passed |
   | 0 | spotbugs | 317 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 504 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 535 | the patch passed |
   | +1 | compile | 310 | the patch passed |
   | +1 | javac | 310 | the patch passed |
   | +1 | checkstyle | 79 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 724 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 153 | the patch passed |
   | +1 | findbugs | 608 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 353 | hadoop-hdds in the patch passed. |
   | -1 | unit | 198 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 36 | The patch does not generate ASF License warnings. |
   | | | 5596 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1085/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1085 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux e9107caa267d 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 190e434 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1085/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1085/2/testReport/ |
   | Max. process+thread count | 370 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/tools U: hadoop-ozone/tools |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1085/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 276006)
Time Spent: 40m  (was: 0.5h)

> OOM error in Freon due to the concurrency handling
> --
>
> Key: HDDS-1785
> URL: https://issues.apache.org/jira/browse/HDDS-1785
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> HDDS-1532 modified the concurrent framework usage of Freon 
> (RandomKeyGenerator).
> The new approach uses separate tasks (Runnable) to create the
> volumes/buckets/keys.
> Unfortunately it doesn't work very well in some cases.

[jira] [Commented] (HDDS-1554) Create disk tests for fault injection test

2019-07-12 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883920#comment-16883920
 ] 

Eric Yang commented on HDDS-1554:
-

Patch 13 fixes checkstyle and whitespace issues.

> Create disk tests for fault injection test
> --
>
> Key: HDDS-1554
> URL: https://issues.apache.org/jira/browse/HDDS-1554
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1554.001.patch, HDDS-1554.002.patch, 
> HDDS-1554.003.patch, HDDS-1554.004.patch, HDDS-1554.005.patch, 
> HDDS-1554.006.patch, HDDS-1554.007.patch, HDDS-1554.008.patch, 
> HDDS-1554.009.patch, HDDS-1554.010.patch, HDDS-1554.011.patch, 
> HDDS-1554.012.patch, HDDS-1554.013.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The current plan for the fault injection disk tests is:
>  # Scenario 1 - Read/Write test
>  ## Run docker-compose to bring up a cluster
>  ## Initialize scm and om
>  ## Upload data to Ozone cluster
>  ## Verify data is correct
>  ## Shutdown cluster
>  # Scenario 2 - Read/Only test
>  ## Repeat Scenario 1
>  ## Mount data disk as read only
>  ## Try to write data to Ozone cluster
>  ## Validate error message is correct
>  ## Shutdown cluster
>  # Scenario 3 - Corruption test
>  ## Repeat Scenario 2
>  ## Shutdown cluster
>  ## Modify data disk data
>  ## Restart cluster
>  ## Validate error message for read from corrupted data
>  ## Validate error message for write to corrupted volume
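
For Scenario 2, the shape of the test might look like the following (the
compose file name, mount point, and key names are assumptions):

{code}
# Bring up the cluster and write a baseline key.
docker-compose -f docker-compose.yaml up -d
ozone sh volume create /vol1
ozone sh bucket create /vol1/bucket1
ozone sh key put /vol1/bucket1/key1 /tmp/testfile

# Flip the data disk read-only, then verify writes are rejected.
mount -o remount,ro /data
if ozone sh key put /vol1/bucket1/key2 /tmp/testfile; then
  echo "ERROR: write unexpectedly succeeded"
else
  echo "write rejected as expected"
fi
docker-compose -f docker-compose.yaml down
{code}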



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1554) Create disk tests for fault injection test

2019-07-12 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HDDS-1554:

Attachment: HDDS-1554.013.patch

> Create disk tests for fault injection test
> --
>
> Key: HDDS-1554
> URL: https://issues.apache.org/jira/browse/HDDS-1554
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1554.001.patch, HDDS-1554.002.patch, 
> HDDS-1554.003.patch, HDDS-1554.004.patch, HDDS-1554.005.patch, 
> HDDS-1554.006.patch, HDDS-1554.007.patch, HDDS-1554.008.patch, 
> HDDS-1554.009.patch, HDDS-1554.010.patch, HDDS-1554.011.patch, 
> HDDS-1554.012.patch, HDDS-1554.013.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The current plan for the fault injection disk tests is:
>  # Scenario 1 - Read/Write test
>  ## Run docker-compose to bring up a cluster
>  ## Initialize scm and om
>  ## Upload data to Ozone cluster
>  ## Verify data is correct
>  ## Shutdown cluster
>  # Scenario 2 - Read/Only test
>  ## Repeat Scenario 1
>  ## Mount data disk as read only
>  ## Try to write data to Ozone cluster
>  ## Validate error message is correct
>  ## Shutdown cluster
>  # Scenario 3 - Corruption test
>  ## Repeat Scenario 2
>  ## Shutdown cluster
>  ## Modify data disk data
>  ## Restart cluster
>  ## Validate error message for read from corrupted data
>  ## Validate error message for write to corrupted volume



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1201) Reporting Corruptions in Containers to SCM

2019-07-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1201?focusedWorklogId=275999&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275999
 ]

ASF GitHub Bot logged work on HDDS-1201:


Author: ASF GitHub Bot
Created on: 12/Jul/19 15:14
Start Date: 12/Jul/19 15:14
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #1032: [HDDS-1201] Reporting 
corrupted containers info to SCM
URL: https://github.com/apache/hadoop/pull/1032#issuecomment-510924348
 
 
   Oh, no worries at all. It's a problem with the build system if these minor 
problems are not clearly visible immediately on the PR.
   
   I just tried to share how it can be avoided, but I am also thinking of 
documenting it better on the wiki.
   
   (ps: I am also experimenting with helper scripts such as 
`./hadoop-ozone/dev-support/checks/checkstyle.sh` to make it easier to run the 
checks locally. For me it helps a lot, but the checkstyle report will be more 
usable after HDDS-1735...) 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275999)
Time Spent: 2h 10m  (was: 2h)

> Reporting Corruptions in Containers to SCM
> --
>
> Key: HDDS-1201
> URL: https://issues.apache.org/jira/browse/HDDS-1201
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode, SCM
>Reporter: Supratim Deka
>Assignee: Hrishikesh Gadre
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Add protocol message and handling to report container corruptions to the SCM.
> Also add basic recovery handling in SCM.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14547) DirectoryWithQuotaFeature.quota costs additional memory even the storage type quota is not set.

2019-07-12 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883891#comment-16883891
 ] 

Erik Krogen commented on HDFS-14547:


Besides fixing up the checkstyle and Java 7 compilation issues, you don't need 
to declare {{throws ConstEnumException}} for {{ModifyAction}} as it is a 
{{RuntimeException}}. This will make all of your anonymous classes a little 
more succinct.

Other than that, LGTM.

> DirectoryWithQuotaFeature.quota costs additional memory even the storage type 
> quota is not set.
> ---
>
> Key: HDFS-14547
> URL: https://issues.apache.org/jira/browse/HDFS-14547
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14547-branch-2.9.001.patch, HDFS-14547-design, 
> HDFS-14547-patch003-Test Report.pdf, HDFS-14547.001.patch, 
> HDFS-14547.002.patch, HDFS-14547.003.patch, HDFS-14547.004.patch, 
> HDFS-14547.005.patch, HDFS-14547.006.patch, HDFS-14547.007.patch
>
>
> Our XiaoMi HDFS is considering upgrading from 2.6 to 3.1. We notice the
> storage type quota 'tsCounts' is instantiated with
> new EnumCounters<StorageType>(StorageType.class), so it will cost a long[5]
> even if we don't have any storage type quota on this inode (only space quota
> or name quota).
> In our cluster we have many dirs with quota and the NameNode's memory is
> under pressure, so the additional cost will be a problem.
>  See DirectoryWithQuotaFeature.Builder().
>  
> {code:java}
> class DirectoryWithQuotaFeature$Builder {
>   public Builder() {
>     this.quota = new QuotaCounts.Builder().nameSpace(DEFAULT_NAMESPACE_QUOTA).
>         storageSpace(DEFAULT_STORAGE_SPACE_QUOTA).
>         typeSpaces(DEFAULT_STORAGE_SPACE_QUOTA).build(); // set default value -1.
>     this.usage = new QuotaCounts.Builder().nameSpace(1).build();
>   }
>   public Builder typeSpaces(long val) { // set default value.
>     this.tsCounts.reset(val);
>     return this;
>   }
> }
> class QuotaCounts$Builder {
>   public Builder() {
>     this.nsSsCounts = new EnumCounters<Quota>(Quota.class);
>     this.tsCounts = new EnumCounters<StorageType>(StorageType.class);
>   }
> }
> class EnumCounters<E extends Enum<E>> {
>   public EnumCounters(final Class<E> enumClass) {
>     final E[] enumConstants = enumClass.getEnumConstants();
>     Preconditions.checkNotNull(enumConstants);
>     this.enumClass = enumClass;
>     this.counters = new long[enumConstants.length]; // new a long array here.
>   }
> }
> {code}
> Related to HDFS-14542.
>  
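
Reading between the lines of the review above, the fix likely shares one
immutable default counter instead of allocating a long[5] per directory. A
hedged sketch of that idea (the class shape and names are assumptions based on
the ConstEnumException mentioned in the review, not the committed code):

{code:java}
// Sketch: one shared, immutable "all defaults" counters instance;
// mutation throws an unchecked exception, forcing callers to swap in
// a real EnumCounters before setting a storage type quota.
class ConstEnumException extends RuntimeException {
  ConstEnumException(String msg) { super(msg); }
}

final class ConstEnumCounters {
  static final long DEFAULT_QUOTA = -1;
  static final ConstEnumCounters INSTANCE = new ConstEnumCounters(5);

  private final long[] counters;

  private ConstEnumCounters(int n) {
    counters = new long[n];
    java.util.Arrays.fill(counters, DEFAULT_QUOTA);
  }

  long get(int i) { return counters[i]; }

  void set(int i, long v) {
    throw new ConstEnumException("modification is not allowed");
  }
}
{code}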



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1782) Add an option to MiniOzoneChaosCluster to read files multiple times.

2019-07-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1782?focusedWorklogId=275993&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275993
 ]

ASF GitHub Bot logged work on HDDS-1782:


Author: ASF GitHub Bot
Created on: 12/Jul/19 14:57
Start Date: 12/Jul/19 14:57
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #1076: HDDS-1782. 
Add an option to MiniOzoneChaosCluster to read files multiple times. 
Contributed by Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/1076#discussion_r303020363
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/chaos/TestProbability.java
 ##
 @@ -0,0 +1,39 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.chaos;
+
+import org.apache.commons.lang3.RandomUtils;
+
+/**
+ * Class to keep track of test probability.
+ */
+public class TestProbability {
+  private int pct;
+
+  private TestProbability(int pct) {
+    this.pct = pct;
+  }
+
+  public boolean isTrue() {
+    return (RandomUtils.nextInt() * pct / 100) == 1;
 
 Review comment:
   Thanks.
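
For reference, a percentage gate without the overflow and skewed distribution
in the quoted isTrue() could look like this (a sketch, not necessarily what
the PR merged):

{code:java}
import org.apache.commons.lang3.RandomUtils;

final class Probability {
  private Probability() { }

  // True roughly pct% of the time: nextInt(0, 100) is uniform on [0, 100).
  static boolean isTrue(int pct) {
    return RandomUtils.nextInt(0, 100) < pct;
  }
}
{code}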
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275993)
Time Spent: 1h  (was: 50m)

> Add an option to MiniOzoneChaosCluster to read files multiple times.
> 
>
> Key: HDDS-1782
> URL: https://issues.apache.org/jira/browse/HDDS-1782
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Right now MiniOzoneChaosCluster writes a file, reads it, and deletes it
> immediately. This jira proposes to add an option to read the file multiple
> times in MiniOzoneChaosCluster.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1767) ContainerStateMachine should have its own executors for executing applyTransaction calls

2019-07-12 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-1767:
--
Labels: pull-request-available  (was: )

> ContainerStateMachine should have its own executors for executing 
> applyTransaction calls
> 
>
> Key: HDDS-1767
> URL: https://issues.apache.org/jira/browse/HDDS-1767
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: pull-request-available
>
> Currently ContainerStateMachine uses the executors provided by
> XceiverServerRatis for executing applyTransaction calls. This results in two
> or more ContainerStateMachines sharing the same set of executors, so delay or
> load in one ContainerStateMachine would adversely affect the performance of
> the other state machines. It is better to have a separate set of executors
> for each ContainerStateMachine.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1785) OOM error in Freon due to the concurrency handling

2019-07-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1785?focusedWorklogId=275989&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275989
 ]

ASF GitHub Bot logged work on HDDS-1785:


Author: ASF GitHub Bot
Created on: 12/Jul/19 14:52
Start Date: 12/Jul/19 14:52
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1085: HDDS-1785. OOM 
error in Freon due to the concurrency handling
URL: https://github.com/apache/hadoop/pull/1085#issuecomment-510916323
 
 
   @elek @iamcaoxudong please review
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275989)
Time Spent: 0.5h  (was: 20m)

> OOM error in Freon due to the concurrency handling
> --
>
> Key: HDDS-1785
> URL: https://issues.apache.org/jira/browse/HDDS-1785
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> HDDS-1532 modified the concurrent framework usage of Freon
> (RandomKeyGenerator).
> The new approach uses separate tasks (Runnable) to create the
> volumes/buckets/keys.
> Unfortunately it doesn't work very well in some cases.
>  # When Freon starts, it creates an executor with a fixed number of threads
> (10)
>  # The first loop submits numOfVolumes (10) VolumeProcessor tasks to the
> executor
>  # The 10 threads start to execute the 10 VolumeProcessor tasks
>  # Each VolumeProcessor task creates numOfBuckets (1000) BucketProcessor
> tasks; all together 10 * 1000 = 10 000 tasks are submitted to the executor.
>  # The 10 threads start to execute the first 10 BucketProcessor tasks, which
> start to create the KeyProcessor tasks: 500 000 * 10 tasks are submitted.
>  # At this point in time no keys have been generated yet, but the next 10
> BucketProcessor tasks start to execute.
>  # To execute the first key creation we would have to process all the
> BucketProcessor tasks, which means that all the key creation tasks (10 * 1000
> * 500 000) are created and added to the executor
>  # This requires a huge amount of time and memory
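
One common remedy for this kind of task explosion is to block producers until
capacity frees up; a semaphore-bounded submitter is a minimal sketch of the
idea (not necessarily the approach taken in the patch, and the pool and bound
sizes are assumptions):

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

// Sketch: cap in-flight work so producers cannot enqueue millions of
// Runnables ahead of the consumers.
class BoundedSubmitter {
  private final ExecutorService pool = Executors.newFixedThreadPool(10);
  private final Semaphore permits = new Semaphore(1000); // queued + running

  void submit(Runnable task) throws InterruptedException {
    permits.acquire(); // blocks the producer when the bound is reached
    pool.execute(() -> {
      try {
        task.run();
      } finally {
        permits.release(); // free a slot for the next submission
      }
    });
  }
}
{code}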



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1794) Fix checkstyle errors introduced by HDDS-1201

2019-07-12 Thread Hrishikesh Gadre (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre resolved HDDS-1794.

Resolution: Duplicate

> Fix checkstyle errors introduced by HDDS-1201
> -
>
> Key: HDDS-1794
> URL: https://issues.apache.org/jira/browse/HDDS-1794
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
>Priority: Trivial
>
> The fix for HDDS-1201 introduced a few checkstyle errors. This Jira is to fix
> these issues.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1201) Reporting Corruptions in Containers to SCM

2019-07-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1201?focusedWorklogId=275976&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275976
 ]

ASF GitHub Bot logged work on HDDS-1201:


Author: ASF GitHub Bot
Created on: 12/Jul/19 14:26
Start Date: 12/Jul/19 14:26
Worklog Time Spent: 10m 
  Work Description: hgadre commented on issue #1032: [HDDS-1201] Reporting 
corrupted containers info to SCM
URL: https://github.com/apache/hadoop/pull/1032#issuecomment-510906972
 
 
   @elek sorry about that. I have filed HDDS-1794 to fix this.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275976)
Time Spent: 2h  (was: 1h 50m)

> Reporting Corruptions in Containers to SCM
> --
>
> Key: HDDS-1201
> URL: https://issues.apache.org/jira/browse/HDDS-1201
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode, SCM
>Reporter: Supratim Deka
>Assignee: Hrishikesh Gadre
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Add protocol message and handling to report container corruptions to the SCM.
> Also add basic recovery handling in SCM.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1794) Fix checkstyle errors introduced by HDDS-1201

2019-07-12 Thread Hrishikesh Gadre (JIRA)
Hrishikesh Gadre created HDDS-1794:
--

 Summary: Fix checkstyle errors introduced by HDDS-1201
 Key: HDDS-1794
 URL: https://issues.apache.org/jira/browse/HDDS-1794
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Hrishikesh Gadre
Assignee: Hrishikesh Gadre


The fix for HDDS-1201 introduced a few checkstyle errors. This Jira is to fix 
these issues.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1785) OOM error in Freon due to the concurrency handling

2019-07-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1785?focusedWorklogId=275936&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275936
 ]

ASF GitHub Bot logged work on HDDS-1785:


Author: ASF GitHub Bot
Created on: 12/Jul/19 13:53
Start Date: 12/Jul/19 13:53
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1085: HDDS-1785. OOM 
error in Freon due to the concurrency handling
URL: https://github.com/apache/hadoop/pull/1085#issuecomment-510894889
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 456 | trunk passed |
   | +1 | compile | 241 | trunk passed |
   | +1 | checkstyle | 65 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 803 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 173 | trunk passed |
   | 0 | spotbugs | 303 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 595 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 427 | the patch passed |
   | +1 | compile | 271 | the patch passed |
   | +1 | javac | 271 | the patch passed |
   | +1 | checkstyle | 81 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 657 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 149 | the patch passed |
   | -1 | findbugs | 321 | hadoop-ozone generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 277 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1730 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 50 | The patch does not generate ASF License warnings. |
   | | | 6789 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-ozone |
   |  |  Exception is caught when Exception is not thrown in 
org.apache.hadoop.ozone.freon.RandomKeyGenerator.createKey(long)  At 
RandomKeyGenerator.java:is not thrown in 
org.apache.hadoop.ozone.freon.RandomKeyGenerator.createKey(long)  At 
RandomKeyGenerator.java:[line 728] |
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1085/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1085 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 0f7d4659da4b 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f9fab9f |
   | Default Java | 1.8.0_212 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1085/1/artifact/out/new-findbugs-hadoop-ozone.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1085/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1085/1/testReport/ |
   | Max. process+thread count | 5295 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/tools U: hadoop-ozone/tools |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1085/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

[jira] [Work logged] (HDDS-1782) Add an option to MiniOzoneChaosCluster to read files multiple times.

2019-07-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1782?focusedWorklogId=275917&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275917
 ]

ASF GitHub Bot logged work on HDDS-1782:


Author: ASF GitHub Bot
Created on: 12/Jul/19 13:32
Start Date: 12/Jul/19 13:32
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1076: HDDS-1782. Add 
an option to MiniOzoneChaosCluster to read files multiple times. Contributed by 
Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/1076#issuecomment-510887805
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 483 | trunk passed |
   | +1 | compile | 273 | trunk passed |
   | +1 | checkstyle | 84 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 790 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 156 | trunk passed |
   | 0 | spotbugs | 330 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 527 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 459 | the patch passed |
   | +1 | compile | 271 | the patch passed |
   | +1 | javac | 271 | the patch passed |
   | -0 | checkstyle | 49 | hadoop-ozone: The patch generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 689 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 156 | the patch passed |
   | +1 | findbugs | 534 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 309 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1687 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 6862 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1076/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1076 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs 
compile javac javadoc mvninstall shadedclient findbugs checkstyle |
   | uname | Linux 0856e1e9aa5a 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f9fab9f |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1076/2/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1076/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1076/2/testReport/ |
   | Max. process+thread count | 5263 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/integration-test U: 
hadoop-ozone/integration-test |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1076/2/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275917)
Time Spent: 50m  (was: 40m)

> Add an option to MiniOzoneChaosCluster to read files multiple times.
> 
>
> 

[jira] [Commented] (HDFS-14593) RBF: Implement deletion feature for expired records in State Store

2019-07-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883807#comment-16883807
 ] 

Hadoop QA commented on HDFS-14593:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 17s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 1 new + 2 unchanged - 0 fixed = 3 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 24m  5s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup |
|   | hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14593 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12974511/HDFS-14593.010.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 64324f4fa832 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b15ef7d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27217/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
| unit | 

[jira] [Commented] (HDFS-14458) Report pmem stats to namenode

2019-07-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883799#comment-16883799
 ] 

Hadoop QA commented on HDFS-14458:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
49s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}109m 15s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}163m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14458 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12974500/HDFS-14458.005.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 08d15b8f5d91 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f9fab9f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27215/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27215/testReport/ |
| Max. process+thread count | 3930 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDDS-1791) Update network-tests/src/test/blockade/README.md file

2019-07-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883795#comment-16883795
 ] 

Hudson commented on HDDS-1791:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16903 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16903/])
HDDS-1791. Update network-tests/src/test/blockade/README.md file (elek: rev 
7b8177ba0fe1a5a0b322af75e77547baac761865)
* (edit) 
hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/README.md


> Update network-tests/src/test/blockade/README.md file
> -
>
> Key: HDDS-1791
> URL: https://issues.apache.org/jira/browse/HDDS-1791
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {{hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/README.md}}
>  has to be updated after HDDS-1778.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1790) Fix checkstyle issues in TestDataScrubber

2019-07-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16883796#comment-16883796
 ] 

Hudson commented on HDDS-1790:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16903 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16903/])
HDDS-1790. Fix checkstyle issues in TestDataScrubber (elek: rev 
190e4349d77e7ae0601ff81a70c7569c72833ee3)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/dn/scrubber/TestDataScrubber.java


> Fix checkstyle issues in TestDataScrubber
> -
>
> Key: HDDS-1790
> URL: https://issues.apache.org/jira/browse/HDDS-1790
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> There are 4 Checkstyle issues in TestDataScrubber that have to be fixed
> {noformat}
> [ERROR] 
> src/test/java/org/apache/hadoop/ozone/dn/scrubber/TestDataScrubber.java:[157] 
> (sizes) LineLength: Line is longer than 80 characters (found 81).
> [ERROR] 
> src/test/java/org/apache/hadoop/ozone/dn/scrubber/TestDataScrubber.java:[161] 
> (sizes) LineLength: Line is longer than 80 characters (found 82).
> [ERROR] 
> src/test/java/org/apache/hadoop/ozone/dn/scrubber/TestDataScrubber.java:[167] 
> (sizes) LineLength: Line is longer than 80 characters (found 85).
> [ERROR] 
> src/test/java/org/apache/hadoop/ozone/dn/scrubber/TestDataScrubber.java:[187] 
> (sizes) LineLength: Line is longer than 80 characters (found 104).
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1791) Update network-tests/src/test/blockade/README.md file

2019-07-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1791?focusedWorklogId=275895=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275895
 ]

ASF GitHub Bot logged work on HDDS-1791:


Author: ASF GitHub Bot
Created on: 12/Jul/19 12:59
Start Date: 12/Jul/19 12:59
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1083: HDDS-1791. 
Update network-tests/src/test/blockade/README.md file
URL: https://github.com/apache/hadoop/pull/1083#issuecomment-510877178
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 62 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 11 | Maven dependency ordering for branch |
   | +1 | mvninstall | 506 | trunk passed |
   | +1 | compile | 250 | trunk passed |
   | +1 | checkstyle | 65 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 788 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 144 | trunk passed |
   | 0 | spotbugs | 322 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 514 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | +1 | mvninstall | 446 | the patch passed |
   | +1 | compile | 247 | the patch passed |
   | +1 | javac | 247 | the patch passed |
   | +1 | checkstyle | 32 | The patch passed checkstyle in hadoop-hdds |
   | +1 | checkstyle | 36 | hadoop-ozone: The patch generated 0 new + 0 
unchanged - 4 fixed = 0 total (was 4) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 648 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 154 | the patch passed |
   | +1 | findbugs | 544 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 410 | hadoop-hdds in the patch passed. |
   | -1 | unit | 3885 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 8960 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1083/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1083 |
   | Optional Tests | dupname asflicense mvnsite unit compile javac javadoc 
mvninstall shadedclient findbugs checkstyle |
   | uname | Linux 630f37ba5827 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f9fab9f |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1083/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1083/1/testReport/ |
   | Max. process+thread count | 4240 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/fault-injection-test/network-tests 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1083/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275895)
Time Spent: 0.5h  (was: 20m)

> Update network-tests/src/test/blockade/README.md file
> 

[jira] [Updated] (HDDS-1791) Update network-tests/src/test/blockade/README.md file

2019-07-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1791:
---
Fix Version/s: 0.4.1

> Update network-tests/src/test/blockade/README.md file
> -
>
> Key: HDDS-1791
> URL: https://issues.apache.org/jira/browse/HDDS-1791
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/README.md}}
>  has to be updated after HDDS-1778.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1791) Update network-tests/src/test/blockade/README.md file

2019-07-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1791:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Update network-tests/src/test/blockade/README.md file
> -
>
> Key: HDDS-1791
> URL: https://issues.apache.org/jira/browse/HDDS-1791
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/README.md}}
>  has to be updated after HDDS-1778.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1790) Fix checkstyle issues in TestDataScrubber

2019-07-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1790:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Fix checkstyle issues in TestDataScrubber
> -
>
> Key: HDDS-1790
> URL: https://issues.apache.org/jira/browse/HDDS-1790
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> There are 4 Checkstyle issues in TestDataScrubber that have to be fixed
> {noformat}
> [ERROR] 
> src/test/java/org/apache/hadoop/ozone/dn/scrubber/TestDataScrubber.java:[157] 
> (sizes) LineLength: Line is longer than 80 characters (found 81).
> [ERROR] 
> src/test/java/org/apache/hadoop/ozone/dn/scrubber/TestDataScrubber.java:[161] 
> (sizes) LineLength: Line is longer than 80 characters (found 82).
> [ERROR] 
> src/test/java/org/apache/hadoop/ozone/dn/scrubber/TestDataScrubber.java:[167] 
> (sizes) LineLength: Line is longer than 80 characters (found 85).
> [ERROR] 
> src/test/java/org/apache/hadoop/ozone/dn/scrubber/TestDataScrubber.java:[187] 
> (sizes) LineLength: Line is longer than 80 characters (found 104).
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1790) Fix checkstyle issues in TestDataScrubber

2019-07-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1790:
---
Fix Version/s: 0.5.0

> Fix checkstyle issues in TestDataScrubber
> -
>
> Key: HDDS-1790
> URL: https://issues.apache.org/jira/browse/HDDS-1790
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> There are 4 Checkstyle issues in TestDataScrubber that have to be fixed
> {noformat}
> [ERROR] 
> src/test/java/org/apache/hadoop/ozone/dn/scrubber/TestDataScrubber.java:[157] 
> (sizes) LineLength: Line is longer than 80 characters (found 81).
> [ERROR] 
> src/test/java/org/apache/hadoop/ozone/dn/scrubber/TestDataScrubber.java:[161] 
> (sizes) LineLength: Line is longer than 80 characters (found 82).
> [ERROR] 
> src/test/java/org/apache/hadoop/ozone/dn/scrubber/TestDataScrubber.java:[167] 
> (sizes) LineLength: Line is longer than 80 characters (found 85).
> [ERROR] 
> src/test/java/org/apache/hadoop/ozone/dn/scrubber/TestDataScrubber.java:[187] 
> (sizes) LineLength: Line is longer than 80 characters (found 104).
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1790) Fix checkstyle issues in TestDataScrubber

2019-07-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1790?focusedWorklogId=275885=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275885
 ]

ASF GitHub Bot logged work on HDDS-1790:


Author: ASF GitHub Bot
Created on: 12/Jul/19 12:50
Start Date: 12/Jul/19 12:50
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1082: HDDS-1790. Fix 
checkstyle issues in TestDataScrubber.
URL: https://github.com/apache/hadoop/pull/1082
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275885)
Time Spent: 0.5h  (was: 20m)

> Fix checkstyle issues in TestDataScrubber
> -
>
> Key: HDDS-1790
> URL: https://issues.apache.org/jira/browse/HDDS-1790
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> There are 4 Checkstyle issues in TestDataScrubber that have to be fixed
> {noformat}
> [ERROR] 
> src/test/java/org/apache/hadoop/ozone/dn/scrubber/TestDataScrubber.java:[157] 
> (sizes) LineLength: Line is longer than 80 characters (found 81).
> [ERROR] 
> src/test/java/org/apache/hadoop/ozone/dn/scrubber/TestDataScrubber.java:[161] 
> (sizes) LineLength: Line is longer than 80 characters (found 82).
> [ERROR] 
> src/test/java/org/apache/hadoop/ozone/dn/scrubber/TestDataScrubber.java:[167] 
> (sizes) LineLength: Line is longer than 80 characters (found 85).
> [ERROR] 
> src/test/java/org/apache/hadoop/ozone/dn/scrubber/TestDataScrubber.java:[187] 
> (sizes) LineLength: Line is longer than 80 characters (found 104).
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1791) Update network-tests/src/test/blockade/README.md file

2019-07-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1791?focusedWorklogId=275875=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275875
 ]

ASF GitHub Bot logged work on HDDS-1791:


Author: ASF GitHub Bot
Created on: 12/Jul/19 12:41
Start Date: 12/Jul/19 12:41
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1083: HDDS-1791. Update 
network-tests/src/test/blockade/README.md file
URL: https://github.com/apache/hadoop/pull/1083
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275875)
Time Spent: 20m  (was: 10m)

> Update network-tests/src/test/blockade/README.md file
> -
>
> Key: HDDS-1791
> URL: https://issues.apache.org/jira/browse/HDDS-1791
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/README.md}}
>  has to be updated after HDDS-1778.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1735) Create separated unit and integration test executor dev-support scripts

2019-07-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1735?focusedWorklogId=275870=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275870
 ]

ASF GitHub Bot logged work on HDDS-1735:


Author: ASF GitHub Bot
Created on: 12/Jul/19 12:36
Start Date: 12/Jul/19 12:36
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #1035: HDDS-1735. Create 
separate unit and integration test executor dev-support script
URL: https://github.com/apache/hadoop/pull/1035#issuecomment-510870453
 
 
   > @elek I was trying to merge, but seems like we have some conflicts. 
Perhaps due to the fact that I merged a patch from nanda, also there is an 
author check warning.
   
   Yes, I rebased it on top of trunk.
   
   The author check is a false positive: one of the scripts greps for "@author" 
tags, so it necessarily contains the "@author" string itself. I changed it to 
use string concatenation (@a + uthor) to make Yetus happy.
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275870)
Time Spent: 2h 20m  (was: 2h 10m)

> Create separated unit and integration test executor dev-support scripts
> ---
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screen Shot 2019-07-02 at 3.25.33 PM.png
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> hadoop-ozone/dev-support/checks directory contains multiple helper scripts to 
> execute different types of testing (findbugs, rat, unit, build).
> They easily define how tests should be executed, with the following contract:
>  * The problems should be printed out to the console
>  * in case of test failure a non zero exit code should be used
>  
> The tests are working well (in fact I have some experiments with executing 
> these scripts on k8s and argo where all the shell scripts are executed 
> parallel) but we need some update:
>  1. Most important: the unit tests and integration tests can be separated. 
> Integration tests are more flaky and it's better to have a way to run only 
> the normal unit tests
>  2. As HDDS-1115 introduced a pom.ozone.xml it's better to use them instead 
> of the magical "-am -pl hadoop-ozone-dist" trick
>  3. To make it possible to run blockade tests in containers we should use the 
> -T flag with docker-compose
>  4. checkstyle violations are printed out to the console



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1492) Generated chunk size name too long.

2019-07-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1492?focusedWorklogId=275863=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275863
 ]

ASF GitHub Bot logged work on HDDS-1492:


Author: ASF GitHub Bot
Created on: 12/Jul/19 12:27
Start Date: 12/Jul/19 12:27
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1084: HDDS-1492. 
Generated chunk size name too long.
URL: https://github.com/apache/hadoop/pull/1084#issuecomment-510867630
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 36 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 6 | Maven dependency ordering for branch |
   | -1 | mvninstall | 9 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 9 | hadoop-ozone in trunk failed. |
   | -1 | compile | 8 | hadoop-hdds in trunk failed. |
   | +1 | checkstyle | 75 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 832 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 165 | trunk passed |
   | 0 | spotbugs | 309 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 510 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 75 | Maven dependency ordering for patch |
   | +1 | mvninstall | 431 | the patch passed |
   | +1 | compile | 262 | the patch passed |
   | -1 | javac | 94 | hadoop-hdds generated 14 new + 0 unchanged - 0 fixed = 
14 total (was 0) |
   | -0 | checkstyle | 32 | hadoop-hdds: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | whitespace | 0 | The patch has 116 line(s) that end in whitespace. 
Use git apply --whitespace=fix <<patch_file>>. Refer 
https://git-scm.com/docs/git-apply |
   | -1 | whitespace | 1 | The patch 1800 line(s) with tabs. |
   | +1 | shadedclient | 642 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 148 | the patch passed |
   | -1 | findbugs | 79 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 276 | hadoop-hdds in the patch passed. |
   | -1 | unit | 53 | hadoop-ozone in the patch failed. |
   | -1 | asflicense | 35 | The patch generated 1 ASF License warnings. |
   | | | 4282 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1084/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1084 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 17acb20d2ff5 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f9fab9f |
   | Default Java | 1.8.0_212 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1084/1/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1084/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1084/1/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1084/1/artifact/out/diff-compile-javac-hadoop-hdds.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1084/1/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1084/1/artifact/out/whitespace-eol.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1084/1/artifact/out/whitespace-tabs.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1084/1/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1084/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1084/1/testReport/ |
   | asflicense | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1084/1/artifact/out/patch-asflicense-problems.txt
 |
   | Max. process+thread count | 476 (vs. ulimit of 5500) |
   | modules | C: 

[jira] [Updated] (HDDS-1785) OOM error in Freon due to the concurrency handling

2019-07-12 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1785:

Status: Patch Available  (was: In Progress)

> OOM error in Freon due to the concurrency handling
> --
>
> Key: HDDS-1785
> URL: https://issues.apache.org/jira/browse/HDDS-1785
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> HDDS-1532 modified the concurrent framework usage of Freon 
> (RandomKeyGenerator).
> The new approach uses separate tasks (Runnable) to create the 
> volumes/buckets/keys.
> Unfortunately it doesn't work very well in some cases.
>  # When Freon starts it creates an executor with a fixed number of threads (10)
>  # The first loop submits numOfVolumes (10) VolumeProcessor tasks to the 
> executor
>  # The 10 threads start to execute the 10 VolumeProcessor tasks
>  # Each VolumeProcessor task creates numOfBuckets (1000) BucketProcessor 
> tasks. All together 10 000 tasks are submitted to the executor.
>  # The 10 threads start to execute the first 10 BucketProcessor tasks, and 
> they start to create the KeyProcessor tasks: 500 000 * 10 tasks are submitted.
>  # At this point in time no keys have been generated, but the next 10 
> BucketProcessor tasks start to execute.
>  # To execute the first key creation we have to process all the 
> BucketProcessor tasks, which means that all the key creation tasks (10 * 1000 
> * 500 000) are created and added to the executor
>  # Which requires a huge amount of time and memory
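
For illustration only, a minimal sketch of the fan-out pattern described above 
(hypothetical code, not Freon's actual implementation): every submitted task 
enqueues more tasks on the same fixed-size executor, so the whole cross product 
of key tasks must be resident in the queue before the first key is created.

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FanOutSketch {
  public static void main(String[] args) {
    // Hypothetical counts matching the description above:
    // 10 volumes, 1000 buckets per volume, 500 000 keys per bucket.
    ExecutorService executor = Executors.newFixedThreadPool(10);
    for (int v = 0; v < 10; v++) {
      executor.submit(() -> {
        for (int b = 0; b < 1000; b++) {
          executor.submit(() -> {
            for (int k = 0; k < 500_000; k++) {
              // Running this as written exhausts the heap: the FIFO queue
              // holds all 10 * 1000 * 500 000 key Runnables before the
              // first one is ever executed.
              executor.submit(() -> { /* create one key */ });
            }
          });
        }
      });
    }
    executor.shutdown();
  }
}
{code}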



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14593) RBF: Implement deletion feature for expired records in State Store

2019-07-12 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16883746#comment-16883746
 ] 

Takanobu Asanuma commented on HDFS-14593:
-

{quote}I understand that now as we remove this does not allow it?
{quote}
Right. You can confirm that 
{{TestStateStoreRouterState#testRouterStateExpiredAndDeletion}} fails with 
Collections.singleton.

Uploaded 010.patch addressing your comment.
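
For context, a minimal sketch of why a set returned by Collections.singleton 
cannot back the deletion path (illustrative code, not the patch itself): the 
returned set is immutable, so any removal attempt throws.

{code:java}
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class SingletonSketch {
  public static void main(String[] args) {
    Set<String> expired = Collections.singleton("router-1");
    // expired.remove("router-1") would throw UnsupportedOperationException,
    // because Collections.singleton returns an immutable set.

    // A mutable copy supports the removal step:
    Set<String> mutable = new HashSet<>(expired);
    mutable.remove("router-1");
    System.out.println(mutable.isEmpty()); // true
  }
}
{code}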

> RBF: Implement deletion feature for expired records in State Store
> --
>
> Key: HDFS-14593
> URL: https://issues.apache.org/jira/browse/HDFS-14593
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-14593.001.patch, HDFS-14593.002.patch, 
> HDFS-14593.003.patch, HDFS-14593.004.patch, HDFS-14593.005.patch, 
> HDFS-14593.006.patch, HDFS-14593.007.patch, HDFS-14593.008.patch, 
> HDFS-14593.009.patch, HDFS-14593.010.patch
>
>
> Currently, any router seems to exist in the Router Information eternally.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1586) Allow Ozone RPC client to read with topology awareness

2019-07-12 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16883742#comment-16883742
 ] 

Elek, Marton commented on HDDS-1586:


The smoketest of the new compose files introduced by this patch seems to be 
broken: HDDS-1793

> Allow Ozone RPC client to read with topology awareness
> --
>
> Key: HDDS-1586
> URL: https://issues.apache.org/jira/browse/HDDS-1586
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> The idea is to leverage the node location from the block locations and prefer 
> reading from closer block replicas.
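
As a rough illustration of the idea (all names here are assumed, not the 
actual Ozone client code): sort the replicas by network distance from the 
client before reading, so node-local and rack-local copies are tried first.

{code:java}
import java.util.Comparator;
import java.util.List;

public class TopologySketch {
  interface DatanodeDetails { }

  // Hypothetical stand-in for a network-topology distance computation.
  interface Topology {
    int getDistance(DatanodeDetails a, DatanodeDetails b);
  }

  static void sortByDistance(Topology topology, DatanodeDetails client,
      List<DatanodeDetails> replicas) {
    // Smaller distance sorts first: node-local, then rack-local, then remote.
    replicas.sort(
        Comparator.comparingInt(dn -> topology.getDistance(client, dn)));
  }
}
{code}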



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1793) Acceptance test of ozone-topology cluster is failing

2019-07-12 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1793:
--

 Summary: Acceptance test of ozone-topology cluster is failing
 Key: HDDS-1793
 URL: https://issues.apache.org/jira/browse/HDDS-1793
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Elek, Marton


Since HDDS-1586 the smoketests of the ozone-topology compose file are broken:
{code:java}
Output:  
/tmp/smoketest/ozone-topology/result/robot-ozone-topology-ozone-topology-basic-scm.xml
must specify at least one container source
Stopping datanode_2 ... 
Stopping datanode_3 ... 
Stopping datanode_4 ... 
Stopping scm... 
Stopping om ... 
Stopping datanode_1 ... 

Stopping datanode_2 ... done

Stopping datanode_4 ... done

Stopping datanode_1 ... done

Stopping datanode_3 ... done

Stopping scm... done

Stopping om ... done
Removing datanode_2 ... 
Removing datanode_3 ... 
Removing datanode_4 ... 
Removing scm... 
Removing om ... 
Removing datanode_1 ... 

Removing datanode_1 ... done

Removing om ... done

Removing datanode_3 ... done

Removing datanode_4 ... done

Removing datanode_2 ... done

Removing scm... done
Removing network ozone-topology_net
[ ERROR ] Reading XML source 
'/var/jenkins_home/workspace/ozone/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone-topology/result/robot-*.xml'
 failed: No such file or directory

Try --help for usage information.
ERROR: Test execution of 
/var/jenkins_home/workspace/ozone/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone-topology
 is FAILED{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14593) RBF: Implement deletion feature for expired records in State Store

2019-07-12 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-14593:

Attachment: HDFS-14593.010.patch

> RBF: Implement deletion feature for expired records in State Store
> --
>
> Key: HDFS-14593
> URL: https://issues.apache.org/jira/browse/HDFS-14593
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-14593.001.patch, HDFS-14593.002.patch, 
> HDFS-14593.003.patch, HDFS-14593.004.patch, HDFS-14593.005.patch, 
> HDFS-14593.006.patch, HDFS-14593.007.patch, HDFS-14593.008.patch, 
> HDFS-14593.009.patch, HDFS-14593.010.patch
>
>
> Currently, any router seems to exist in the Router Information eternally.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1785) OOM error in Freon due to the concurrency handling

2019-07-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1785?focusedWorklogId=275849=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275849
 ]

ASF GitHub Bot logged work on HDDS-1785:


Author: ASF GitHub Bot
Created on: 12/Jul/19 11:59
Start Date: 12/Jul/19 11:59
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #1085: HDDS-1785. 
OOM error in Freon due to the concurrency handling
URL: https://github.com/apache/hadoop/pull/1085
 
 
   ## What changes were proposed in this pull request?
   
   Change concurrency in Freon `RandomKeyGenerator`:
   
* create a worker for each thread
* let each worker create volumes, buckets and keys, without limiting 
"inner" objects to specific "outer" ones (eg. create key for any bucket)
   
   Workers coordinate the items they create using "global" counters.
   
   https://issues.apache.org/jira/browse/HDDS-1785
   
   ## How was this patch tested?
   
   Tested with various number of volumes/buckets/threads.
   
   ```
   $ ozone freon rk --numOfVolumes 1 --numOfBuckets 100 --numOfKeys 5 
--numOfThreads 1 --replicationType=RATIS --factor=THREE
   ...
   Number of Volumes created: 1
   Number of Buckets created: 100
   Number of Keys added: 500
   Ratis replication factor: THREE
   Ratis replication type: RATIS
   Average Time spent in volume creation: 00:00:00,100
   Average Time spent in bucket creation: 00:00:00,304
   Average Time spent in key creation: 00:00:01,556
   Average Time spent in key write: 00:00:53,509
   Total bytes written: 512
   Total Execution time: 00:01:01,537
   ```
   
   ```
   $ ozone freon rk --numOfVolumes 1 --numOfBuckets 100 --numOfKeys 5 
--numOfThreads 50 --replicationType=RATIS --factor=THREE
   ...
   Number of Volumes created: 1
   Number of Buckets created: 100
   Number of Keys added: 500
   Ratis replication factor: THREE
   Ratis replication type: RATIS
   Average Time spent in volume creation: 00:00:00,003
   Average Time spent in bucket creation: 00:00:00,229
   Average Time spent in key creation: 00:00:00,273
   Average Time spent in key write: 00:00:10,375
   Total bytes written: 512
   Total Execution time: 00:00:16,872
   ```
   
   ```
   $ ozone freon rk --numOfVolumes 10 --numOfBuckets 10 --numOfKeys 500 
--numOfThreads 50 --replicationType=RATIS --factor=THREE
   ...
   Number of Volumes created: 10
   Number of Buckets created: 100
   Number of Keys added: 50000
   Ratis replication factor: THREE
   Ratis replication type: RATIS
   Average Time spent in volume creation: 00:00:00,052
   Average Time spent in bucket creation: 00:00:00,240
   Average Time spent in key creation: 00:00:30,742
   Average Time spent in key write: 00:10:04,146
   Total bytes written: 51200
   Total Execution time: 00:10:42,463
   ```
   
   ```
   $ ozone freon rk --numOfVolumes 100 --numOfBuckets 100 --numOfKeys 2 
--numOfThreads 50 --replicationType=RATIS --factor=THREE
   ...
   Number of Volumes created: 100
   Number of Buckets created: 10000
   Number of Keys added: 20000
   Ratis replication factor: THREE
   Ratis replication type: RATIS
   Average Time spent in volume creation: 00:00:00,266
   Average Time spent in bucket creation: 00:00:06,388
   Average Time spent in key creation: 00:00:09,324
   Average Time spent in key write: 00:03:44,925
   Total bytes written: 20480
   Total Execution time: 00:04:11,735
   ```
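   
   For illustration, a minimal sketch (assumed names, not the actual patch) of 
the worker-per-thread pattern with shared global counters described above: 
each worker claims the next global key index instead of queueing per-bucket 
task objects.
   
   ```java
   import java.util.concurrent.atomic.AtomicLong;
   
   public class WorkerSketch {
     // Hypothetical totals: 10 volumes * 100 buckets * 500 keys per bucket.
     private static final long KEYS_PER_BUCKET = 500;
     private static final long TOTAL_KEYS = 10L * 100 * KEYS_PER_BUCKET;
     private static final AtomicLong keyCounter = new AtomicLong();
   
     public static void main(String[] args) throws InterruptedException {
       Thread[] workers = new Thread[10];
       for (int i = 0; i < workers.length; i++) {
         workers[i] = new Thread(() -> {
           long next;
           // Claim the next global key index until all keys are created;
           // nothing piles up in an executor queue.
           while ((next = keyCounter.getAndIncrement()) < TOTAL_KEYS) {
             createKey(next / KEYS_PER_BUCKET, next % KEYS_PER_BUCKET);
           }
         });
         workers[i].start();
       }
       for (Thread w : workers) {
         w.join();
       }
     }
   
     private static void createKey(long bucket, long keyInBucket) {
       // placeholder for the actual client call
     }
   }
   ```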
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275849)
Time Spent: 10m
Remaining Estimate: 0h

> OOM error in Freon due to the concurrency handling
> --
>
> Key: HDDS-1785
> URL: https://issues.apache.org/jira/browse/HDDS-1785
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> HDDS-1532 modified the concurrent framework usage of Freon 
> (RandomKeyGenerator).
> The new approach uses separate tasks (Runnable) to create the 
> volumes/buckets/keys.
> Unfortunately it doesn't work very well in some cases.
>  # When Freon starts it creates an executor with a fixed number of threads (10)
>  # The first loop submits numOfVolumes (10) VolumeProcessor tasks to the 
> executor
>  # The 10 threads start to execute the 10 VolumeProcessor tasks
>  # Each VolumeProcessor task creates numOfBuckets 

[jira] [Updated] (HDDS-1785) OOM error in Freon due to the concurrency handling

2019-07-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1785:
-
Labels: pull-request-available  (was: )

> OOM error in Freon due to the concurrency handling
> --
>
> Key: HDDS-1785
> URL: https://issues.apache.org/jira/browse/HDDS-1785
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
>
> HDDS-1532 modified the concurrent framework usage of Freon 
> (RandomKeyGenerator).
> The new approach uses separate tasks (Runnable) to create the 
> volumes/buckets/keys.
> Unfortunately it doesn't work very well in some cases.
>  # When Freon starts it creates an executor with a fixed number of threads (10)
>  # The first loop submits numOfVolumes (10) VolumeProcessor tasks to the 
> executor
>  # The 10 threads start to execute the 10 VolumeProcessor tasks
>  # Each VolumeProcessor task creates numOfBuckets (1000) BucketProcessor 
> tasks. All together 10 000 tasks are submitted to the executor.
>  # The 10 threads start to execute the first 10 BucketProcessor tasks, and 
> they start to create the KeyProcessor tasks: 500 000 * 10 tasks are submitted.
>  # At this point in time no keys have been generated, but the next 10 
> BucketProcessor tasks start to execute.
>  # To execute the first key creation we have to process all the 
> BucketProcessor tasks, which means that all the key creation tasks (10 * 1000 
> * 500 000) are created and added to the executor
>  # Which requires a huge amount of time and memory



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1779) TestWatchForCommit tests are flaky

2019-07-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1779?focusedWorklogId=275842=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275842
 ]

ASF GitHub Bot logged work on HDDS-1779:


Author: ASF GitHub Bot
Created on: 12/Jul/19 11:50
Start Date: 12/Jul/19 11:50
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #1071: HDDS-1779. 
TestWatchForCommit tests are flaky.
URL: https://github.com/apache/hadoop/pull/1071#discussion_r302947780
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestWatchForCommit.java
 ##
 @@ -343,61 +349,24 @@ public void testWatchForCommitForRetryfailure() throws 
Exception {
 cluster.shutdownHddsDatanode(pipeline.getNodes().get(1));
 // again write data with more than max buffer limit. This wi
 try {
-  // just watch for a lo index which in not updated in the commitInfo Map
-  xceiverClient.watchForCommit(index + 1, 2);
+  // just watch for a log index which in not updated in the commitInfo Map
+  // as well as there is no logIndex generate in Ratis.
+  // The basic idea here is just to test if its throws an exception.
+  xceiverClient
+  .watchForCommit(index + new Random().nextInt(100) + 10, 2);
 
 Review comment:
   The idea here is to run the test with a unique number each time, so that 
any possible hacks/errors get caught.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275842)
Time Spent: 1h  (was: 50m)

> TestWatchForCommit tests are flaky
> --
>
> Key: HDDS-1779
> URL: https://issues.apache.org/jira/browse/HDDS-1779
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The tests have become flaky because once nodes are shut down in the Ratis 
> pipeline, a watch request can either be received at a server and fail with 
> NotReplicatedException, or fail with StatusRuntimeExceptions from grpc; both 
> need to be accounted for in the tests. Other than that, HDDS-1384 also causes 
> a bind exception to be thrown intermittently, which in turn shuts down the 
> MiniOzoneCluster. To overcome this, the test class has been refactored as 
> well.
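
Illustratively, a test that must accept either failure mode can walk the cause 
chain. A hedged sketch (exception types as described above, packages omitted; 
the actual patch may differ):

{code:java}
// The test catches the exception from watchForCommit and asserts that
// isExpectedWatchFailure(e) returns true.
static boolean isExpectedWatchFailure(Throwable e) {
  for (Throwable t = e; t != null; t = t.getCause()) {
    String name = t.getClass().getSimpleName();
    if (name.equals("NotReplicatedException")
        || name.equals("StatusRuntimeException")) {
      return true;
    }
  }
  return false;
}
{code}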



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1779) TestWatchForCommit tests are flaky

2019-07-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1779?focusedWorklogId=275841=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275841
 ]

ASF GitHub Bot logged work on HDDS-1779:


Author: ASF GitHub Bot
Created on: 12/Jul/19 11:49
Start Date: 12/Jul/19 11:49
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #1071: HDDS-1779. 
TestWatchForCommit tests are flaky.
URL: https://github.com/apache/hadoop/pull/1071#discussion_r302947487
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestWatchForCommit.java
 ##
 @@ -303,10 +305,14 @@ public void testWatchForCommitWithSmallerTimeoutValue() 
throws Exception {
 cluster.shutdownHddsDatanode(pipeline.getNodes().get(0));
 cluster.shutdownHddsDatanode(pipeline.getNodes().get(1));
 try {
-  // just watch for a lo index which in not updated in the commitInfo Map
-  xceiverClient.watchForCommit(index + 1, 3000);
+  // just watch for a log index which in not updated in the commitInfo Map
+  // as well as there is no logIndex generate in Ratis.
+  // The basic idea here is just to test if its throws an exception.
+  xceiverClient
+  .watchForCommit(index + new Random().nextInt(100) + 10, 3000);
   Assert.fail("expected exception not thrown");
 } catch (Exception e) {
+  System.out.println("exception " + e);
 
 Review comment:
   Addressed in the latest commit.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275841)
Time Spent: 50m  (was: 40m)

> TestWatchForCommit tests are flaky
> --
>
> Key: HDDS-1779
> URL: https://issues.apache.org/jira/browse/HDDS-1779
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The tests have become flaky because once nodes are shut down in the Ratis 
> pipeline, a watch request can either be received at a server and fail with 
> NotReplicatedException, or fail with StatusRuntimeExceptions from grpc; both 
> need to be accounted for in the tests. Other than that, HDDS-1384 also causes 
> a bind exception to be thrown intermittently, which in turn shuts down the 
> MiniOzoneCluster. To overcome this, the test class has been refactored as 
> well.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1782) Add an option to MiniOzoneChaosCluster to read files multiple times.

2019-07-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1782?focusedWorklogId=275837=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275837
 ]

ASF GitHub Bot logged work on HDDS-1782:


Author: ASF GitHub Bot
Created on: 12/Jul/19 11:38
Start Date: 12/Jul/19 11:38
Worklog Time Spent: 10m 
  Work Description: mukul1987 commented on pull request #1076: HDDS-1782. 
Add an option to MiniOzoneChaosCluster to read files multiple times. 
Contributed by Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/1076#discussion_r302944648
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/chaos/TestProbability.java
 ##
 @@ -0,0 +1,39 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.chaos;
+
+import org.apache.commons.lang3.RandomUtils;
+
+/**
+ * Class to keep track of test probability.
+ */
+public class TestProbability {
+  private int pct;
+
+  private TestProbability(int pct) {
+this.pct = pct;
+  }
+
+  public boolean isTrue() {
+return (RandomUtils.nextInt() * pct / 100) == 1;
 
 Review comment:
   Thanks for the review @adoroszlai, this is fixed as part of the update to 
this pull request.
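   
   For reference, one straightforward way to express a percentage check with 
commons-lang3 (a sketch of this kind of fix; the exact change in the PR may 
differ):
   
   ```java
   import org.apache.commons.lang3.RandomUtils;
   
   public class Probability {
     private final int pct;
   
     public Probability(int pct) {
       this.pct = pct;
     }
   
     public boolean isTrue() {
       // nextInt(0, 100) is uniform over [0, 100), so this returns
       // true with probability pct/100.
       return RandomUtils.nextInt(0, 100) < pct;
     }
   }
   ```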
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275837)
Time Spent: 40m  (was: 0.5h)

> Add an option to MiniOzoneChaosCluster to read files multiple times.
> 
>
> Key: HDDS-1782
> URL: https://issues.apache.org/jira/browse/HDDS-1782
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Right now MiniOzoneChaosCluster writes a file, reads it and deletes it 
> immediately. This jira proposes to add an option to read the file multiple 
> times in MiniOzoneChaosCluster.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1753) Datanode unable to find chunk while replication data using ratis.

2019-07-12 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16883727#comment-16883727
 ] 

Shashikant Banerjee commented on HDDS-1753:
---

The issue here is that, while data is still waiting to be replicated from the 
leader to the followers, a key delete can remove a block in a closed container 
on the leader. When a follower then asks the leader for the chunk data, the 
request fails because the chunk file no longer exists on the leader.

The solution being proposed here is as follows:

Whenever a datanode receives a delete command from SCM, it should first check 
the minimum replicated index across all the servers in the pipeline. 
ContainerStateMachine will also track the close-container log index for each 
container. Now, if the minimum replicated index >= the close-container index 
on the leader, the delete operation will be queued over Ratis on the leader 
(and the same will be ignored on the followers), so the delete happens through 
Ratis. If the close-container index has not been replicated, the delete 
transaction is never enqueued over Ratis and is simply ignored; SCM already 
has a retry policy in place to retry the same delete.

If the Ratis pipeline does not exist, delete will work as is.
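
Illustratively, a hedged sketch of the proposed gate (all names here are 
hypothetical; the actual patch may differ):

{code:java}
// All names hypothetical; a sketch of the proposed check, not the patch.
class DeleteGateSketch {
  interface RatisServer {
    long getMinReplicatedIndex(String pipelineId);
    void submitDeleteTransaction(long containerId);
  }

  interface StateMachine {
    long getCloseContainerIndex(long containerId);
  }

  private final RatisServer ratisServer;
  private final StateMachine stateMachine;
  private final String pipelineId;

  DeleteGateSketch(RatisServer r, StateMachine s, String p) {
    this.ratisServer = r;
    this.stateMachine = s;
    this.pipelineId = p;
  }

  void handleDeleteFromScm(long containerId) {
    long minReplicated = ratisServer.getMinReplicatedIndex(pipelineId);
    long closeIndex = stateMachine.getCloseContainerIndex(containerId);
    if (minReplicated >= closeIndex) {
      // Every replica has applied the container close, so the data is fully
      // replicated; ordering the delete through Ratis keeps leader and
      // followers consistent.
      ratisServer.submitDeleteTransaction(containerId);
    }
    // Otherwise drop the command; SCM's retry policy re-sends the delete.
  }
}
{code}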

> Datanode unable to find chunk while replication data using ratis.
> -
>
> Key: HDDS-1753
> URL: https://issues.apache.org/jira/browse/HDDS-1753
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: MiniOzoneChaosCluster
>
> Leader datanode is unable to read chunk from the datanode while replicating 
> data from leader to follower.
> Please note that deletion of keys is also happening while the data is being 
> replicated.
> {code}
> 2019-07-02 19:39:22,604 INFO  impl.RaftServerImpl 
> (RaftServerImpl.java:checkInconsistentAppendEntries(972)) - 
> 5ac88709-a3a2-4c8f-91de-5e54b617f05e: inconsistency entries. 
> Reply:76a3eb0f-d7cd-477b-8973-db1
> 014feb398<-5ac88709-a3a2-4c8f-91de-5e54b617f05e#70:FAIL,INCONSISTENCY,nextIndex:9771,term:2,followerCommit:9782
> 2019-07-02 19:39:22,605 ERROR impl.ChunkManagerImpl 
> (ChunkUtils.java:readData(161)) - Unable to find the chunk file. chunk info : 
> ChunkInfo{chunkName='76ec669ae2cb6e10dd9f08c0789c5fdf_stream_a2850dce-def3
> -4d64-93d8-fa2ebafee933_chunk_1, offset=0, len=2048}
> 2019-07-02 19:39:22,605 INFO  impl.RaftServerImpl 
> (RaftServerImpl.java:checkInconsistentAppendEntries(990)) - 
> 5ac88709-a3a2-4c8f-91de-5e54b617f05e: Failed appendEntries as latest snapshot 
> (9770) already h
> as the append entries (first index: 1)
> 2019-07-02 19:39:22,605 INFO  impl.RaftServerImpl 
> (RaftServerImpl.java:checkInconsistentAppendEntries(972)) - 
> 5ac88709-a3a2-4c8f-91de-5e54b617f05e: inconsistency entries. 
> Reply:76a3eb0f-d7cd-477b-8973-db1
> 014feb398<-5ac88709-a3a2-4c8f-91de-5e54b617f05e#71:FAIL,INCONSISTENCY,nextIndex:9771,term:2,followerCommit:9782
> 2019-07-02 19:39:22,605 INFO  keyvalue.KeyValueHandler 
> (ContainerUtils.java:logAndReturnError(146)) - Operation: ReadChunk : Trace 
> ID: 4216d461a4679e17:4216d461a4679e17:0:0 : Message: Unable to find the c
> hunk file. chunk info 
> ChunkInfo{chunkName='76ec669ae2cb6e10dd9f08c0789c5fdf_stream_a2850dce-def3-4d64-93d8-fa2ebafee933_chunk_1,
>  offset=0, len=2048} : Result: UNABLE_TO_FIND_CHUNK
> 2019-07-02 19:39:22,605 INFO  impl.RaftServerImpl 
> (RaftServerImpl.java:checkInconsistentAppendEntries(990)) - 
> 5ac88709-a3a2-4c8f-91de-5e54b617f05e: Failed appendEntries as latest snapshot 
> (9770) already h
> as the append entries (first index: 2)
> 2019-07-02 19:39:22,606 INFO  impl.RaftServerImpl 
> (RaftServerImpl.java:checkInconsistentAppendEntries(972)) - 
> 5ac88709-a3a2-4c8f-91de-5e54b617f05e: inconsistency entries. 
> Reply:76a3eb0f-d7cd-477b-8973-db1
> 014feb398<-5ac88709-a3a2-4c8f-91de-5e54b617f05e#72:FAIL,INCONSISTENCY,nextIndex:9771,term:2,followerCommit:9782
> 19:39:22.606 [pool-195-thread-19] ERROR DNAudit - user=null | ip=null | 
> op=READ_CHUNK {blockData=conID: 3 locID: 102372189549953034 bcsId: 0} | 
> ret=FAILURE
> java.lang.Exception: Unable to find the chunk file. chunk info 
> ChunkInfo{chunkName='76ec669ae2cb6e10dd9f08c0789c5fdf_stream_a2850dce-def3-4d64-93d8-fa2ebafee933_chunk_1,
>  offset=0, len=2048}
> at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatchRequest(HddsDispatcher.java:320)
>  ~[hadoop-hdds-container-service-0.5.0-SNAPSHOT.jar:?]
> at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:148)
>  ~[hadoop-hdds-container-service-0.5.0-SNAPSHOT.jar:?]
> at 
> 

[jira] [Commented] (HDFS-14357) Update the relevant docs for HDFS cache on SCM support

2019-07-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16883722#comment-16883722
 ] 

Hadoop QA commented on HDFS-14357:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
29m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14357 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12974505/HDFS-14357.004.patch |
| Optional Tests |  dupname  asflicense  mvnsite  |
| uname | Linux 5d0e999e2164 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f9fab9f |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 413 (vs. ulimit of 10000) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27216/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Update the relevant docs for HDFS cache on SCM support
> --
>
> Key: HDFS-14357
> URL: https://issues.apache.org/jira/browse/HDFS-14357
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14357.000.patch, HDFS-14357.001.patch, 
> HDFS-14357.002.patch, HDFS-14357.003.patch, HDFS-14357.004.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1790) Fix checkstyle issues in TestDataScrubber

2019-07-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1790?focusedWorklogId=275833=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275833
 ]

ASF GitHub Bot logged work on HDDS-1790:


Author: ASF GitHub Bot
Created on: 12/Jul/19 11:25
Start Date: 12/Jul/19 11:25
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1082: HDDS-1790. Fix 
checkstyle issues in TestDataScrubber.
URL: https://github.com/apache/hadoop/pull/1082#issuecomment-510851545
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 40 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 498 | trunk passed |
   | +1 | compile | 263 | trunk passed |
   | +1 | checkstyle | 78 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 882 | branch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 150 | trunk passed |
   | 0 | spotbugs | 198 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | -1 | findbugs | 26 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 435 | the patch passed |
   | +1 | compile | 233 | the patch passed |
   | +1 | javac | 233 | the patch passed |
   | +1 | checkstyle | 33 | The patch passed checkstyle in hadoop-hdds |
   | +1 | checkstyle | 34 | hadoop-ozone: The patch generated 0 new + 0 unchanged - 4 fixed = 0 total (was 4) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 611 | patch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 140 | the patch passed |
   | +1 | findbugs | 504 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 273 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1763 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 6328 | |


   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |


   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1082/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1082 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux bc04395a2c6e 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9119ed0 |
   | Default Java | 1.8.0_212 |
   | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1082/1/artifact/out/branch-findbugs-hadoop-ozone.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1082/1/artifact/out/patch-unit-hadoop-ozone.txt |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1082/1/testReport/ |
   | Max. process+thread count | 5387 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/integration-test U: hadoop-ozone/integration-test |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1082/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275833)
Time Spent: 20m  

[jira] [Resolved] (HDDS-1759) TestWatchForCommit crashes

2019-07-12 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar resolved HDDS-1759.
---
Resolution: Duplicate

> TestWatchForCommit crashes
> --
>
> Key: HDDS-1759
> URL: https://issues.apache.org/jira/browse/HDDS-1759
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: test
>Reporter: Nanda kumar
>Priority: Major
>
> {{org.apache.hadoop.ozone.client.rpc.TestWatchForCommit}} is crashing with 
> the following exception trace.
> {noformat}
> [ERROR] Crashed tests:
> [ERROR] org.apache.hadoop.ozone.client.rpc.TestWatchForCommit
> [ERROR] org.apache.maven.surefire.booter.SurefireBooterForkException: 
> ExecutionException The forked VM terminated without properly saying goodbye. 
> VM crash or System.exit called?
> [ERROR] Command was /bin/sh -c cd 
> /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/integration-test && 
> /Library/Java/JavaVirtualMachines/jdk1.8.0_152.jdk/Contents/Home/jre/bin/java 
> -Xmx2048m -XX:+HeapDumpOnOutOfMemoryError -jar 
> /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/integration-test/target/surefire/surefirebooter6824244130326461346.jar
>  
> /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/integration-test/target/surefire
>  2019-07-03T10-47-23_862-jvmRun1 surefire1503013258446082728tmp 
> surefire_07547129263746053478tmp
> [ERROR] Error occurred in starting fork, check output in log
> [ERROR] Process Exit Code: 1
> [ERROR] Crashed tests:
> [ERROR] org.apache.hadoop.ozone.client.rpc.TestWatchForCommit
> [ERROR]   at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:511)
> [ERROR]   at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:458)
> [ERROR]   at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:299)
> [ERROR]   at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:247)
> [ERROR]   at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1149)
> [ERROR]   at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:991)
> [ERROR]   at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:837)
> [ERROR]   at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
> [ERROR]   at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
> [ERROR]   at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:154)
> [ERROR]   at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:146)
> [ERROR]   at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:117)
> [ERROR]   at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:81)
> [ERROR]   at 
> org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
> [ERROR]   at 
> org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
> [ERROR]   at 
> org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:309)
> [ERROR]   at 
> org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:194)
> [ERROR]   at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:107)
> [ERROR]   at org.apache.maven.cli.MavenCli.execute(MavenCli.java:955)
> [ERROR]   at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:290)
> [ERROR]   at org.apache.maven.cli.MavenCli.main(MavenCli.java:194)
> [ERROR]   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> [ERROR]   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> [ERROR]   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> [ERROR]   at java.lang.reflect.Method.invoke(Method.java:498)
> [ERROR]   at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
> [ERROR]   at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
> [ERROR]   at 
> org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
> [ERROR]   at 
> org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
> [ERROR] Caused by: 
> org.apache.maven.surefire.booter.SurefireBooterForkException: The forked VM 
> terminated without properly saying goodbye. VM crash or System.exit called?
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HDDS-1492) Generated chunk size name too long.

2019-07-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1492:
-
Labels: pull-request-available  (was: )

> Generated chunk size name too long.
> ---
>
> Key: HDDS-1492
> URL: https://issues.apache.org/jira/browse/HDDS-1492
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Shashikant Banerjee
>Priority: Critical
>  Labels: pull-request-available
>
> The following exception is seen in SCM logs intermittently.
> {code}
> java.lang.RuntimeException: file name 
> 'chunks/2a54b2a153f4a9c5da5f44e2c6f97c60_stream_9c6ac565-e2d4-469c-bd5c-47922a35e798_chunk_10.tmp.2.23115'
>  is too long ( > 100 bytes)
> {code}
> We may have to limit the name of the chunk to 100 bytes.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1492) Generated chunk size name too long.

2019-07-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1492?focusedWorklogId=275828&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275828
 ]

ASF GitHub Bot logged work on HDDS-1492:


Author: ASF GitHub Bot
Created on: 12/Jul/19 11:14
Start Date: 12/Jul/19 11:14
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #1084: HDDS-1492. 
Generated chunk size name too long.
URL: https://github.com/apache/hadoop/pull/1084
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275828)
Time Spent: 10m
Remaining Estimate: 0h

> Generated chunk size name too long.
> ---
>
> Key: HDDS-1492
> URL: https://issues.apache.org/jira/browse/HDDS-1492
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Shashikant Banerjee
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The following exception is seen in SCM logs intermittently.
> {code}
> java.lang.RuntimeException: file name 
> 'chunks/2a54b2a153f4a9c5da5f44e2c6f97c60_stream_9c6ac565-e2d4-469c-bd5c-47922a35e798_chunk_10.tmp.2.23115'
>  is too long ( > 100 bytes)
> {code}
> We may have to limit the name of the chunk to 100 bytes.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


