[jira] [Commented] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir

2019-08-13 Thread Siddharth Wagle (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906895#comment-16906895
 ] 

Siddharth Wagle commented on HDFS-2470:
---

Hi [~eyang], thanks for catching the oversight; I am fixing #1 from your comment.
I am not sure about #2: why can we not fall back to a fixed default of 700?
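A minimal sketch of that fallback idea, using plain java.nio rather than the
Hadoop-internal helpers (the class name and the fixed 700 default here are
illustrative, not the patch itself):

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public final class StorageDirPermissions {

  // Fixed fallback of 700: read/write/execute for the owner only.
  private static final Set<PosixFilePermission> DEFAULT_PERMS =
      PosixFilePermissions.fromString("rwx------");

  // Set permissions on a storage dir if they differ from the expected value.
  static void enforce(String dir) throws IOException {
    Path path = Paths.get(dir);
    if (!Files.getPosixFilePermissions(path).equals(DEFAULT_PERMS)) {
      Files.setPosixFilePermissions(path, DEFAULT_PERMS);
    }
  }
}
{code}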

> NN should automatically set permissions on dfs.namenode.*.dir
> -
>
> Key: HDFS-2470
> URL: https://issues.apache.org/jira/browse/HDFS-2470
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Aaron T. Myers
>Assignee: Siddharth Wagle
>Priority: Major
> Attachments: HDFS-2470.01.patch, HDFS-2470.02.patch, 
> HDFS-2470.03.patch, HDFS-2470.04.patch, HDFS-2470.05.patch, HDFS-2470.06.patch
>
>
> Much as the DN currently sets the correct permissions for the 
> dfs.datanode.data.dir, the NN should do the same for the 
> dfs.namenode.(name|edit).dir.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1947) fix naming issue for ScmBlockLocationTestingClient

2019-08-13 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906891#comment-16906891
 ] 

Anu Engineer edited comment on HDDS-1947 at 8/14/19 5:40 AM:
-

[~starphin] I am going to commit this patch. However, the patch is named 
"HDFS-14489.patch". I am making a comment here so that other committers or 
anyone reading this patch will not be confused. The content of the patch 
seems to address the issue described in the JIRA. Hence I will commit this 
patch as-is, but with the right Jira ID.

 

Ideally, this patch should have been called "HDDS-1947.001.patch".

 


was (Author: anu):
[~starphin] I am going to commit this patch. However, the patch is named 
"HDFS-14489.patch". I am making a comment here so that other committers or 
anyone reading this patch will not be confused. The content of the patch 
seems to address the issue described in the JIRA. Hence I will commit this 
patch as-is, but with the right Jira ID.

 

> fix naming issue for ScmBlockLocationTestingClient
> --
>
> Key: HDDS-1947
> URL: https://issues.apache.org/jira/browse/HDDS-1947
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: star
>Assignee: star
>Priority: Major
> Attachments: HDFS-14489.patch
>
>
> The class 'ScmBlockLocationTestIngClient' is not named in proper CamelCase 
> form. Rename it to ScmBlockLocationTestingClient.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1947) fix naming issue for ScmBlockLocationTestingClient

2019-08-13 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906891#comment-16906891
 ] 

Anu Engineer commented on HDDS-1947:


[~starphin] I am going to commit this patch. However, the patch is named 
"HDFS-14489.patch". I am making a comment here so that other committers or 
anyone reading this patch will not be confused. The content of the patch 
seems to address the issue described in the JIRA. Hence I will commit this 
patch as-is, but with the right Jira ID.

 

> fix naming issue for ScmBlockLocationTestingClient
> --
>
> Key: HDDS-1947
> URL: https://issues.apache.org/jira/browse/HDDS-1947
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: star
>Assignee: star
>Priority: Major
> Attachments: HDFS-14489.patch
>
>
> The class 'ScmBlockLocationTestIngClient' is not named in proper CamelCase 
> form. Rename it to ScmBlockLocationTestingClient.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1928) Cannot run ozone-recon compose due to syntax error

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1928?focusedWorklogId=294467&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294467
 ]

ASF GitHub Bot logged work on HDDS-1928:


Author: ASF GitHub Bot
Created on: 14/Aug/19 05:36
Start Date: 14/Aug/19 05:36
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1249: HDDS-1928. Cannot 
run ozone-recon compose due to syntax error
URL: https://github.com/apache/hadoop/pull/1249#issuecomment-521108125
 
 
   Thanks @vivekratnavel, @avijayanhwx for the review, @elek for the idea to 
add `test.sh`, and @anuengineer for committing it.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294467)
Time Spent: 1h 50m  (was: 1h 40m)

> Cannot run ozone-recon compose due to syntax error
> --
>
> Key: HDDS-1928
> URL: https://issues.apache.org/jira/browse/HDDS-1928
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> {noformat}
> $ cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone-recon
> $ docker-compose up -d --scale datanode=3
> ERROR: yaml.scanner.ScannerError: mapping values are not allowed here
>   in "./docker-compose.yaml", line 20, column 33
> {noformat}
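For context, this scanner error class usually means a plain scalar contains
": " where the parser then expects a nested mapping. A generic illustration of
the problem and the usual fix (not the actual compose file content):

{code}
# Broken: the second ": " makes the YAML parser expect a nested mapping
description: recon: storage container manager UI
# Fixed: quote the value so the inner colon is treated literally
description: "recon: storage container manager UI"
{code}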



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1832) Improve logging for PipelineActions handling in SCM and datanode

2019-08-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906889#comment-16906889
 ] 

Hudson commented on HDDS-1832:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17112 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17112/])
HDDS-1832 : Improve logging for PipelineActions handling in SCM and (aengineer: 
rev fc229b6490a152036b6424c7c0ac5c3df9525e57)
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineActionHandler.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java


> Improve logging for PipelineActions handling in SCM and datanode
> 
>
> Key: HDDS-1832
> URL: https://issues.apache.org/jira/browse/HDDS-1832
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> XceiverServerRatis should log the reason while sending the PipelineAction to 
> the datanode.
> Also on the PipelineActionHandler should also log the detailed reason for the 
> action.
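A minimal sketch of the requested logging improvement, using SLF4J
parameterized logging; the class and method names below are illustrative, not
the committed patch:

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class PipelineActionLogging {

  private static final Logger LOG =
      LoggerFactory.getLogger(PipelineActionLogging.class);

  // Include the action type and the detailed reason in the message,
  // not just the pipeline ID.
  static void onPipelineAction(String pipelineId, String action, String reason) {
    LOG.info("Received pipeline action {} for pipeline {}; reason: {}",
        action, pipelineId, reason);
  }
}
{code}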



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1916) Only contract tests are run in ozonefs module

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1916?focusedWorklogId=294464&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294464
 ]

ASF GitHub Bot logged work on HDDS-1916:


Author: ASF GitHub Bot
Created on: 14/Aug/19 05:34
Start Date: 14/Aug/19 05:34
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1235: HDDS-1916. Only 
contract tests are run in ozonefs module
URL: https://github.com/apache/hadoop/pull/1235#issuecomment-521107829
 
 
   Thanks @bharatviswa504 for the review, and @anuengineer for committing it.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294464)
Time Spent: 1.5h  (was: 1h 20m)

> Only contract tests are run in ozonefs module
> -
>
> Key: HDDS-1916
> URL: https://issues.apache.org/jira/browse/HDDS-1916
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.3.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> {{hadoop-ozone-filesystem}} has 6 test classes that are not being run:
> {code}
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestFilteredClassLoader.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSInputStream.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystem.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystemWithMocks.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFsRenameDir.java
> {code}
> {code:title=https://raw.githubusercontent.com/elek/ozone-ci/master/byscane/byscane-nightly-vxsck/integration/output.log}
> [INFO] ---
> [INFO]  T E S T S
> [INFO] ---
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDelete
> [INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.956 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDelete
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractMkdir
> [INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.528 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractMkdir
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractSeek
> [INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 42.245 s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractSeek
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractOpen
> [INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.996 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractOpen
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRename
> [INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.816 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRename
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDistCp
> [INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 59.418 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDistCp
> [INFO] Running 
> org.apache.hadoop.fs.ozone.contract.ITestOzoneContractGetFileStatus
> [INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 35.042 s - in 
> org.apache.hadoop.fs.ozone.contract.ITestOzoneContractGetFileStatus
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractCreate
> [WARNING] Tests run: 11, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 
> 35.144 s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractCreate
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRootDir
> [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.986 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRootDir
> [INFO] 
> [INFO] Results:
> [INFO] 
> [WARNING] Tests run: 92, Failures: 0, Errors: 0, Skipped: 2
> {code}
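One common cause of this symptom, sketched as a hypothetical pom.xml fragment
(the actual root cause in this module may differ): if the surefire
configuration only includes the contract ITest pattern, plain Test* classes
are silently skipped.

{code}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <includes>
      <!-- Including only ITest* would skip the six Test* classes above -->
      <include>**/Test*.java</include>
      <include>**/ITest*.java</include>
    </includes>
  </configuration>
</plugin>
{code}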



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org

[jira] [Commented] (HDDS-1956) Aged IO Thread exits on first read

2019-08-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906887#comment-16906887
 ] 

Hudson commented on HDDS-1956:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17112 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17112/])
HDDS-1956. Aged IO Thread exits on first read (aengineer: rev 
78b714af9c0ef4cd1b6219eee884a43eb66d1574)
* (edit) hadoop-ozone/integration-test/src/test/resources/log4j.properties
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneLoadGenerator.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneChaosCluster.java


> Aged IO Thread exits on first read
> --
>
> Key: HDDS-1956
> URL: https://issues.apache.org/jira/browse/HDDS-1956
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Aged IO Thread in {{TestMiniChaosOzoneCluster}} exits on first read due to 
> exception:
> {code}
> 2019-08-12 22:55:37,799 [pool-245-thread-1] INFO  
> ozone.MiniOzoneLoadGenerator 
> (MiniOzoneLoadGenerator.java:startAgedFilesLoad(194)) - AGED LOADGEN: Started 
> Aged IO Thread:2139.
> ...
> 2019-08-12 22:55:47,147 [pool-245-thread-1] ERROR 
> ozone.MiniOzoneLoadGenerator 
> (MiniOzoneLoadGenerator.java:startAgedFilesLoad(213)) - AGED LOADGEN: 0 
> Exiting due to exception
> java.lang.ArrayIndexOutOfBoundsException: 1
>   at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.readData(MiniOzoneLoadGenerator.java:151)
>   at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.startAgedFilesLoad(MiniOzoneLoadGenerator.java:209)
>   at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.lambda$startIO$1(MiniOzoneLoadGenerator.java:235)
> 2019-08-12 22:55:47,149 [pool-245-thread-1] INFO  
> ozone.MiniOzoneLoadGenerator 
> (MiniOzoneLoadGenerator.java:startAgedFilesLoad(219)) - Terminating IO 
> thread:2139.
> {code}
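The failure mode itself is generic: an array index derived from parsed input
is dereferenced without a bounds check. A hedged illustration of the pattern
(not the actual readData code):

{code}
// Hypothetical: parts may have fewer than two elements, so parts[1]
// throws ArrayIndexOutOfBoundsException for inputs without a '-'.
String[] parts = keyName.split("-");
String suffix = (parts.length > 1) ? parts[1] : keyName;  // guard first
{code}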



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1950) S3 MPU part-list call fails if there are no parts

2019-08-13 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1950:
---
Target Version/s: 0.5.0

> S3 MPU part-list call fails if there are no parts
> -
>
> Key: HDDS-1950
> URL: https://issues.apache.org/jira/browse/HDDS-1950
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> If an S3 multipart upload is created but no part has been uploaded, the part 
> list can't be retrieved because the call throws HTTP 500:
> Create an MPU:
> {code}
> aws s3api --endpoint http://localhost: create-multipart-upload 
> --bucket=docker --key=testkeu 
> {
> "Bucket": "docker",
> "Key": "testkeu",
> "UploadId": "85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234"
> }
> {code}
> List the parts:
> {code}
> aws s3api --endpoint http://localhost: list-parts  --bucket=docker 
> --key=testkeu 
> --upload-id=85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234
> {code}
> It throws an exception on the server side because, in 
> KeyManagerImpl.listParts, the ReplicationType is retrieved from the first 
> part:
> {code}
> HddsProtos.ReplicationType replicationType =
> partKeyInfoMap.firstEntry().getValue().getPartKeyInfo().getType();
> {code}
> That first part is not yet available in this use case.
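A minimal sketch of one possible guard, assuming a fallback accessor on the
multipart key info (getReplicationType() below is an assumption, not the
committed fix):

{code}
HddsProtos.ReplicationType replicationType;
if (partKeyInfoMap.isEmpty()) {
  // No part uploaded yet: fall back to the multipart key's own type.
  replicationType = multipartKeyInfo.getReplicationType();
} else {
  replicationType =
      partKeyInfoMap.firstEntry().getValue().getPartKeyInfo().getType();
}
{code}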



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1915) Remove hadoop script from ozone distribution

2019-08-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906888#comment-16906888
 ] 

Hudson commented on HDDS-1915:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17112 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17112/])
HDDS-1915. Remove hadoop script from ozone distribution (aengineer: rev 
15545c8bf1318e936fe2251bc2ef7522a36af7cd)
* (edit) hadoop-ozone/dist/dev-support/bin/dist-layout-stitching


> Remove hadoop script from ozone distribution
> 
>
> Key: HDDS-1915
> URL: https://issues.apache.org/jira/browse/HDDS-1915
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The /bin/hadoop script is included in the ozone distribution even though we 
> have a dedicated /bin/ozone.
> [~arp] reported that it can be confusing; for example, "hadoop classpath" 
> returns a bad classpath ("ozone classpath" should be used 
> instead).
> To avoid such confusion I suggest removing the hadoop script from the 
> distribution, as the ozone script already provides all the functionality.
> It also helps us to reduce the dependencies between hadoop 3.2-SNAPSHOT and 
> ozone, as we currently use the snapshot hadoop script.
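For reference, the replacement invocation looks like this (the artifact name
is illustrative; exact arguments depend on the release):

{code}
$ bin/ozone classpath hadoop-ozone-tools
{code}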



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1105) Add mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone Manager.

2019-08-13 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1105:
---
Target Version/s: 0.5.0

> Add mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone 
> Manager.
> 
>
> Key: HDDS-1105
> URL: https://issues.apache.org/jira/browse/HDDS-1105
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> *Some context*
> The FSCK server will periodically invoke this OM API, passing in the most 
> recent sequence number of its own RocksDB instance. The OM will use the 
> RocksDB getUpdatesSince() API to answer this query. Since the getUpdatesSince 
> API only works against the RocksDB WAL, we have to configure the OM RocksDB WAL 
> (https://github.com/facebook/rocksdb/wiki/Write-Ahead-Log) with a sufficient 
> max size to make this API useful. If the OM cannot get all transactions since 
> the given sequence number (due to WAL flushing), it can error out. In that 
> case the FSCK server can fall back to getting the entire checkpoint snapshot 
> implemented in HDDS-1085.
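A minimal sketch of the delta-fetch idea against the RocksDB Java API,
assuming direct access to the DB handle (the real design wraps this behind an
OM API):

{code}
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.TransactionLogIterator;

static void applyUpdatesSince(RocksDB db, long lastSeq) throws RocksDBException {
  // getUpdatesSince() reads from the WAL, hence the max-size requirement above.
  try (TransactionLogIterator iter = db.getUpdatesSince(lastSeq)) {
    while (iter.isValid()) {
      TransactionLogIterator.BatchResult batch = iter.getBatch();
      // batch.writeBatch() holds all updates at batch.sequenceNumber();
      // apply them to the local (Recon-side) copy here.
      iter.next();
    }
  }
}
{code}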



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1956) Aged IO Thread exits on first read

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1956?focusedWorklogId=294463&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294463
 ]

ASF GitHub Bot logged work on HDDS-1956:


Author: ASF GitHub Bot
Created on: 14/Aug/19 05:33
Start Date: 14/Aug/19 05:33
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1287: HDDS-1956. Aged IO 
Thread exits on first read
URL: https://github.com/apache/hadoop/pull/1287#issuecomment-521107553
 
 
   Thanks @mukul1987 for the review, and @anuengineer for committing it.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294463)
Time Spent: 2h 40m  (was: 2.5h)

> Aged IO Thread exits on first read
> --
>
> Key: HDDS-1956
> URL: https://issues.apache.org/jira/browse/HDDS-1956
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Aged IO Thread in {{TestMiniChaosOzoneCluster}} exits on first read due to 
> exception:
> {code}
> 2019-08-12 22:55:37,799 [pool-245-thread-1] INFO  
> ozone.MiniOzoneLoadGenerator 
> (MiniOzoneLoadGenerator.java:startAgedFilesLoad(194)) - AGED LOADGEN: Started 
> Aged IO Thread:2139.
> ...
> 2019-08-12 22:55:47,147 [pool-245-thread-1] ERROR 
> ozone.MiniOzoneLoadGenerator 
> (MiniOzoneLoadGenerator.java:startAgedFilesLoad(213)) - AGED LOADGEN: 0 
> Exiting due to exception
> java.lang.ArrayIndexOutOfBoundsException: 1
>   at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.readData(MiniOzoneLoadGenerator.java:151)
>   at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.startAgedFilesLoad(MiniOzoneLoadGenerator.java:209)
>   at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.lambda$startIO$1(MiniOzoneLoadGenerator.java:235)
> 2019-08-12 22:55:47,149 [pool-245-thread-1] INFO  
> ozone.MiniOzoneLoadGenerator 
> (MiniOzoneLoadGenerator.java:startAgedFilesLoad(219)) - Terminating IO 
> thread:2139.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-13 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-1366.

   Resolution: Fixed
Fix Version/s: 0.5.0

[~shwetayakkali] Thanks for the contribution and [~arp] thanks for the commit. I am 
just resolving this JIRA. Please reopen if needed.

> Add ability in Recon to track the number of small files in an Ozone cluster.
> 
>
> Key: HDDS-1366
> URL: https://issues.apache.org/jira/browse/HDDS-1366
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Shweta
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 12h
>  Remaining Estimate: 0h
>
> Ozone users may want to track the number of small files they have in their 
> cluster and where they are present. Recon can help them with this information 
> by iterating over the OM Key Table and dividing the keys into different 
> buckets based on data size.
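A minimal sketch of the bucketing idea (the bucket bounds and the key-size
source are illustrative only):

{code}
// Count keys per size range while scanning the OM key table.
long[] upperBounds = {1L << 10, 1L << 20, 1L << 30, Long.MAX_VALUE};
long[] counts = new long[upperBounds.length];
for (long dataSize : keySizes) {  // keySizes stands in for the table scan
  for (int i = 0; i < upperBounds.length; i++) {
    if (dataSize <= upperBounds[i]) {
      counts[i]++;
      break;
    }
  }
}
{code}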



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1105) Add mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone Manager.

2019-08-13 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1105:

Status: Patch Available  (was: In Progress)

> Add mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone 
> Manager.
> 
>
> Key: HDDS-1105
> URL: https://issues.apache.org/jira/browse/HDDS-1105
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> *Some context*
> The FSCK server will periodically invoke this OM API, passing in the most 
> recent sequence number of its own RocksDB instance. The OM will use the 
> RocksDB getUpdatesSince() API to answer this query. Since the getUpdatesSince 
> API only works against the RocksDB WAL, we have to configure the OM RocksDB WAL 
> (https://github.com/facebook/rocksdb/wiki/Write-Ahead-Log) with a sufficient 
> max size to make this API useful. If the OM cannot get all transactions since 
> the given sequence number (due to WAL flushing), it can error out. In that 
> case the FSCK server can fall back to getting the entire checkpoint snapshot 
> implemented in HDDS-1085.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1832) Improve logging for PipelineActions handling in SCM and datanode

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1832?focusedWorklogId=294458&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294458
 ]

ASF GitHub Bot logged work on HDDS-1832:


Author: ASF GitHub Bot
Created on: 14/Aug/19 05:24
Start Date: 14/Aug/19 05:24
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1217: HDDS-1832 
: Improve logging for PipelineActions handling in SCM and datanode.
URL: https://github.com/apache/hadoop/pull/1217
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294458)
Time Spent: 2.5h  (was: 2h 20m)

> Improve logging for PipelineActions handling in SCM and datanode
> 
>
> Key: HDDS-1832
> URL: https://issues.apache.org/jira/browse/HDDS-1832
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> XceiverServerRatis should log the reason while sending the PipelineAction to 
> the datanode.
> Also on the PipelineActionHandler should also log the detailed reason for the 
> action.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1915) Remove hadoop script from ozone distribution

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1915?focusedWorklogId=294454&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294454
 ]

ASF GitHub Bot logged work on HDDS-1915:


Author: ASF GitHub Bot
Created on: 14/Aug/19 05:23
Start Date: 14/Aug/19 05:23
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1233: HDDS-1915. 
Remove hadoop script from ozone distribution
URL: https://github.com/apache/hadoop/pull/1233
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294454)
Time Spent: 1h 10m  (was: 1h)

> Remove hadoop script from ozone distribution
> 
>
> Key: HDDS-1915
> URL: https://issues.apache.org/jira/browse/HDDS-1915
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The /bin/hadoop script is included in the ozone distribution even though we 
> have a dedicated /bin/ozone.
> [~arp] reported that it can be confusing; for example, "hadoop classpath" 
> returns a bad classpath ("ozone classpath" should be used 
> instead).
> To avoid such confusion I suggest removing the hadoop script from the 
> distribution, as the ozone script already provides all the functionality.
> It also helps us to reduce the dependencies between hadoop 3.2-SNAPSHOT and 
> ozone, as we currently use the snapshot hadoop script.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1915) Remove hadoop script from ozone distribution

2019-08-13 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1915:
---
   Resolution: Fixed
Fix Version/s: 0.5.0
   Status: Resolved  (was: Patch Available)

Committed to the trunk.

> Remove hadoop script from ozone distribution
> 
>
> Key: HDDS-1915
> URL: https://issues.apache.org/jira/browse/HDDS-1915
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The /bin/hadoop script is included in the ozone distribution even though we 
> have a dedicated /bin/ozone.
> [~arp] reported that it can be confusing; for example, "hadoop classpath" 
> returns a bad classpath ("ozone classpath" should be used 
> instead).
> To avoid such confusion I suggest removing the hadoop script from the 
> distribution, as the ozone script already provides all the functionality.
> It also helps us to reduce the dependencies between hadoop 3.2-SNAPSHOT and 
> ozone, as we currently use the snapshot hadoop script.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1915) Remove hadoop script from ozone distribution

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1915?focusedWorklogId=294453&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294453
 ]

ASF GitHub Bot logged work on HDDS-1915:


Author: ASF GitHub Bot
Created on: 14/Aug/19 05:23
Start Date: 14/Aug/19 05:23
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1233: HDDS-1915. Remove 
hadoop script from ozone distribution
URL: https://github.com/apache/hadoop/pull/1233#issuecomment-521105651
 
 
   @elek  Thanks for the contribution. @bharatviswa504  Thanks for the review. 
I have committed this to the trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294453)
Time Spent: 1h  (was: 50m)

> Remove hadoop script from ozone distribution
> 
>
> Key: HDDS-1915
> URL: https://issues.apache.org/jira/browse/HDDS-1915
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The /bin/hadoop script is included in the ozone distribution even though we 
> have a dedicated /bin/ozone.
> [~arp] reported that it can be confusing; for example, "hadoop classpath" 
> returns a bad classpath ("ozone classpath" should be used 
> instead).
> To avoid such confusion I suggest removing the hadoop script from the 
> distribution, as the ozone script already provides all the functionality.
> It also helps us to reduce the dependencies between hadoop 3.2-SNAPSHOT and 
> ozone, as we currently use the snapshot hadoop script.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1768) Audit xxxAcl methods in OzoneManager

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1768?focusedWorklogId=294445&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294445
 ]

ASF GitHub Bot logged work on HDDS-1768:


Author: ASF GitHub Bot
Created on: 14/Aug/19 05:05
Start Date: 14/Aug/19 05:05
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on pull request #1204: 
HDDS-1768. Audit xxxAcl methods in OzoneManager
URL: https://github.com/apache/hadoop/pull/1204#discussion_r313704753
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientForAclAuditLog.java
 ##
 @@ -0,0 +1,268 @@
+package org.apache.hadoop.ozone.client.rpc;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import 
org.apache.hadoop.hdds.scm.protocolPB.StorageContainerLocationProtocolClientSideTranslatorPB;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.audit.AuditEventStatus;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.client.ObjectStore;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneClientFactory;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.ozone.security.acl.OzoneObjInfo;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.FixMethodOrder;
+import org.junit.Test;
+import org.junit.runners.MethodSorters;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.UUID;
+
+import static org.apache.hadoop.ozone.OzoneAcl.AclScope.ACCESS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_AUTHORIZER_CLASS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_AUTHORIZER_CLASS_NATIVE;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_ENABLED;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ADMINISTRATORS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ADMINISTRATORS_WILDCARD;
+import static 
org.apache.hadoop.ozone.security.acl.OzoneObj.ResourceType.VOLUME;
+import static org.apache.hadoop.ozone.security.acl.OzoneObj.StoreType.OZONE;
+import static org.junit.Assert.assertTrue;
+
+/**
+ * This class is to test audit logs for xxxACL APIs of Ozone Client.
+ */
+@FixMethodOrder(MethodSorters.NAME_ASCENDING)
+public class TestOzoneRpcClientForAclAuditLog {
+
+  static final Logger LOG =
+  LoggerFactory.getLogger(TestOzoneRpcClientForAclAuditLog.class);
+  private static UserGroupInformation ugi;
+  private static final OzoneAcl USER_ACL =
+  new OzoneAcl(IAccessAuthorizer.ACLIdentityType.USER,
+  "johndoe", IAccessAuthorizer.ACLType.ALL, ACCESS);
+  private static final OzoneAcl USER_ACL_2 =
+  new OzoneAcl(IAccessAuthorizer.ACLIdentityType.USER,
+  "jane", IAccessAuthorizer.ACLType.ALL, ACCESS);
+  private static List<OzoneAcl> aclListToAdd = new ArrayList<>();
+  private static MiniOzoneCluster cluster = null;
+  private static OzoneClient ozClient = null;
+  private static ObjectStore store = null;
+  private static StorageContainerLocationProtocolClientSideTranslatorPB
+  storageContainerLocationClient;
+  private static String scmId = UUID.randomUUID().toString();
+
+
+  /**
+   * Create a MiniOzoneCluster for testing.
+   *
+   * Ozone is made active by setting OZONE_ENABLED = true
+   *
+   * @throws IOException
+   */
+  @BeforeClass
+  public static void init() throws Exception {
+System.setProperty("log4j.configurationFile", "log4j2.properties");
+ugi = UserGroupInformation.getCurrentUser();
+OzoneConfiguration conf = new OzoneConfiguration();
+conf.setBoolean(OZONE_ACL_ENABLED, true);
+conf.set(OZONE_ADMINISTRATORS, OZONE_ADMINISTRATORS_WILDCARD);
+conf.set(OZONE_ACL_AUTHORIZER_CLASS,
+OZONE_ACL_AUTHORIZER_CLASS_NATIVE);
+startCluster(conf);
+aclListToAdd.add(USER_ACL);
+aclListToAdd.add(USER_ACL_2);
+  }
+
+  private   /**
+   * Create a MiniOzoneCluster for testing.
+   * @param conf Configurations to start the cluster.
+   * @throws Exception
+   */
+  static void startCluster(OzoneConfiguration conf) throws Exception {
+cluster = MiniOzoneCluster.newBuilder(conf)
+.setNumDatanodes(3)
+.setScmId(scmId)
+.build();
+cluster.waitForClusterToBeReady();
+ozClient = OzoneClientFactory.getRpcClient(conf);
+store = ozClient.getObjectStore();
+

[jira] [Work logged] (HDDS-1768) Audit xxxAcl methods in OzoneManager

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1768?focusedWorklogId=294443&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294443
 ]

ASF GitHub Bot logged work on HDDS-1768:


Author: ASF GitHub Bot
Created on: 14/Aug/19 05:04
Start Date: 14/Aug/19 05:04
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on pull request #1204: 
HDDS-1768. Audit xxxAcl methods in OzoneManager
URL: https://github.com/apache/hadoop/pull/1204#discussion_r313704727
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientForAclAuditLog.java
 ##
 @@ -0,0 +1,268 @@
+package org.apache.hadoop.ozone.client.rpc;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import 
org.apache.hadoop.hdds.scm.protocolPB.StorageContainerLocationProtocolClientSideTranslatorPB;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.audit.AuditEventStatus;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.client.ObjectStore;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneClientFactory;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.ozone.security.acl.OzoneObjInfo;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.FixMethodOrder;
+import org.junit.Test;
+import org.junit.runners.MethodSorters;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.UUID;
+
+import static org.apache.hadoop.ozone.OzoneAcl.AclScope.ACCESS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_AUTHORIZER_CLASS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_AUTHORIZER_CLASS_NATIVE;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_ENABLED;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ADMINISTRATORS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ADMINISTRATORS_WILDCARD;
+import static 
org.apache.hadoop.ozone.security.acl.OzoneObj.ResourceType.VOLUME;
+import static org.apache.hadoop.ozone.security.acl.OzoneObj.StoreType.OZONE;
+import static org.junit.Assert.assertTrue;
+
+/**
+ * This class is to test audit logs for xxxACL APIs of Ozone Client.
+ */
+@FixMethodOrder(MethodSorters.NAME_ASCENDING)
+public class TestOzoneRpcClientForAclAuditLog {
+
+  static final Logger LOG =
+  LoggerFactory.getLogger(TestOzoneRpcClientForAclAuditLog.class);
+  private static UserGroupInformation ugi;
+  private static final OzoneAcl USER_ACL =
+  new OzoneAcl(IAccessAuthorizer.ACLIdentityType.USER,
+  "johndoe", IAccessAuthorizer.ACLType.ALL, ACCESS);
+  private static final OzoneAcl USER_ACL_2 =
+  new OzoneAcl(IAccessAuthorizer.ACLIdentityType.USER,
+  "jane", IAccessAuthorizer.ACLType.ALL, ACCESS);
+  private static List<OzoneAcl> aclListToAdd = new ArrayList<>();
+  private static MiniOzoneCluster cluster = null;
+  private static OzoneClient ozClient = null;
+  private static ObjectStore store = null;
+  private static StorageContainerLocationProtocolClientSideTranslatorPB
+  storageContainerLocationClient;
+  private static String scmId = UUID.randomUUID().toString();
+
+
+  /**
+   * Create a MiniOzoneCluster for testing.
+   *
+   * Ozone is made active by setting OZONE_ENABLED = true
+   *
+   * @throws IOException
+   */
+  @BeforeClass
+  public static void init() throws Exception {
+System.setProperty("log4j.configurationFile", "log4j2.properties");
+ugi = UserGroupInformation.getCurrentUser();
+OzoneConfiguration conf = new OzoneConfiguration();
+conf.setBoolean(OZONE_ACL_ENABLED, true);
+conf.set(OZONE_ADMINISTRATORS, OZONE_ADMINISTRATORS_WILDCARD);
+conf.set(OZONE_ACL_AUTHORIZER_CLASS,
+OZONE_ACL_AUTHORIZER_CLASS_NATIVE);
+startCluster(conf);
+aclListToAdd.add(USER_ACL);
+aclListToAdd.add(USER_ACL_2);
+  }
+
+  private   /**
 
 Review comment:
   done
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294443)
Time Spent: 3.5h  (was: 3h 20m)

> Audit xxxAcl 

[jira] [Work logged] (HDDS-1768) Audit xxxAcl methods in OzoneManager

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1768?focusedWorklogId=29&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-29
 ]

ASF GitHub Bot logged work on HDDS-1768:


Author: ASF GitHub Bot
Created on: 14/Aug/19 05:04
Start Date: 14/Aug/19 05:04
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on pull request #1204: 
HDDS-1768. Audit xxxAcl methods in OzoneManager
URL: https://github.com/apache/hadoop/pull/1204#discussion_r313704740
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientForAclAuditLog.java
 ##
 @@ -0,0 +1,268 @@
+package org.apache.hadoop.ozone.client.rpc;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import 
org.apache.hadoop.hdds.scm.protocolPB.StorageContainerLocationProtocolClientSideTranslatorPB;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.audit.AuditEventStatus;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.client.ObjectStore;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneClientFactory;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.ozone.security.acl.OzoneObjInfo;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.FixMethodOrder;
+import org.junit.Test;
+import org.junit.runners.MethodSorters;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.UUID;
+
+import static org.apache.hadoop.ozone.OzoneAcl.AclScope.ACCESS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_AUTHORIZER_CLASS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_AUTHORIZER_CLASS_NATIVE;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_ENABLED;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ADMINISTRATORS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ADMINISTRATORS_WILDCARD;
+import static 
org.apache.hadoop.ozone.security.acl.OzoneObj.ResourceType.VOLUME;
+import static org.apache.hadoop.ozone.security.acl.OzoneObj.StoreType.OZONE;
+import static org.junit.Assert.assertTrue;
+
+/**
+ * This class is to test audit logs for xxxACL APIs of Ozone Client.
+ */
+@FixMethodOrder(MethodSorters.NAME_ASCENDING)
+public class TestOzoneRpcClientForAclAuditLog {
+
+  static final Logger LOG =
+  LoggerFactory.getLogger(TestOzoneRpcClientForAclAuditLog.class);
+  private static UserGroupInformation ugi;
+  private static final OzoneAcl USER_ACL =
+  new OzoneAcl(IAccessAuthorizer.ACLIdentityType.USER,
+  "johndoe", IAccessAuthorizer.ACLType.ALL, ACCESS);
+  private static final OzoneAcl USER_ACL_2 =
+  new OzoneAcl(IAccessAuthorizer.ACLIdentityType.USER,
+  "jane", IAccessAuthorizer.ACLType.ALL, ACCESS);
+  private static List<OzoneAcl> aclListToAdd = new ArrayList<>();
+  private static MiniOzoneCluster cluster = null;
+  private static OzoneClient ozClient = null;
+  private static ObjectStore store = null;
+  private static StorageContainerLocationProtocolClientSideTranslatorPB
+  storageContainerLocationClient;
+  private static String scmId = UUID.randomUUID().toString();
+
+
+  /**
+   * Create a MiniOzoneCluster for testing.
+   *
+   * Ozone is made active by setting OZONE_ENABLED = true
+   *
+   * @throws IOException
+   */
+  @BeforeClass
+  public static void init() throws Exception {
+System.setProperty("log4j.configurationFile", "log4j2.properties");
+ugi = UserGroupInformation.getCurrentUser();
+OzoneConfiguration conf = new OzoneConfiguration();
+conf.setBoolean(OZONE_ACL_ENABLED, true);
+conf.set(OZONE_ADMINISTRATORS, OZONE_ADMINISTRATORS_WILDCARD);
+conf.set(OZONE_ACL_AUTHORIZER_CLASS,
+OZONE_ACL_AUTHORIZER_CLASS_NATIVE);
+startCluster(conf);
+aclListToAdd.add(USER_ACL);
+aclListToAdd.add(USER_ACL_2);
+  }
+
+  private   /**
+   * Create a MiniOzoneCluster for testing.
+   * @param conf Configurations to start the cluster.
+   * @throws Exception
+   */
+  static void startCluster(OzoneConfiguration conf) throws Exception {
+cluster = MiniOzoneCluster.newBuilder(conf)
+.setNumDatanodes(3)
+.setScmId(scmId)
+.build();
+cluster.waitForClusterToBeReady();
+ozClient = OzoneClientFactory.getRpcClient(conf);
+store = ozClient.getObjectStore();
+

[jira] [Work logged] (HDDS-1768) Audit xxxAcl methods in OzoneManager

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1768?focusedWorklogId=294441&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294441
 ]

ASF GitHub Bot logged work on HDDS-1768:


Author: ASF GitHub Bot
Created on: 14/Aug/19 05:03
Start Date: 14/Aug/19 05:03
Worklog Time Spent: 10m 
  Work Description: dchitlangia commented on pull request #1204: HDDS-1768. 
Audit xxxAcl methods in OzoneManager
URL: https://github.com/apache/hadoop/pull/1204#discussion_r313704529
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientForAclAuditLog.java
 ##
 @@ -0,0 +1,268 @@
+package org.apache.hadoop.ozone.client.rpc;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import 
org.apache.hadoop.hdds.scm.protocolPB.StorageContainerLocationProtocolClientSideTranslatorPB;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.audit.AuditEventStatus;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.client.ObjectStore;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneClientFactory;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.ozone.security.acl.OzoneObjInfo;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.FixMethodOrder;
+import org.junit.Test;
+import org.junit.runners.MethodSorters;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.UUID;
+
+import static org.apache.hadoop.ozone.OzoneAcl.AclScope.ACCESS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_AUTHORIZER_CLASS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_AUTHORIZER_CLASS_NATIVE;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_ENABLED;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ADMINISTRATORS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ADMINISTRATORS_WILDCARD;
+import static 
org.apache.hadoop.ozone.security.acl.OzoneObj.ResourceType.VOLUME;
+import static org.apache.hadoop.ozone.security.acl.OzoneObj.StoreType.OZONE;
+import static org.junit.Assert.assertTrue;
+
+/**
+ * This class is to test audit logs for xxxACL APIs of Ozone Client.
+ */
+@FixMethodOrder(MethodSorters.NAME_ASCENDING)
+public class TestOzoneRpcClientForAclAuditLog {
+
+  static final Logger LOG =
+  LoggerFactory.getLogger(TestOzoneRpcClientForAclAuditLog.class);
+  private static UserGroupInformation ugi;
+  private static final OzoneAcl USER_ACL =
+  new OzoneAcl(IAccessAuthorizer.ACLIdentityType.USER,
+  "johndoe", IAccessAuthorizer.ACLType.ALL, ACCESS);
+  private static final OzoneAcl USER_ACL_2 =
+  new OzoneAcl(IAccessAuthorizer.ACLIdentityType.USER,
+  "jane", IAccessAuthorizer.ACLType.ALL, ACCESS);
+  private static List<OzoneAcl> aclListToAdd = new ArrayList<>();
+  private static MiniOzoneCluster cluster = null;
+  private static OzoneClient ozClient = null;
+  private static ObjectStore store = null;
+  private static StorageContainerLocationProtocolClientSideTranslatorPB
+  storageContainerLocationClient;
+  private static String scmId = UUID.randomUUID().toString();
+
+
+  /**
+   * Create a MiniOzoneCluster for testing.
+   *
+   * Ozone is made active by setting OZONE_ENABLED = true
+   *
+   * @throws IOException
+   */
+  @BeforeClass
+  public static void init() throws Exception {
+System.setProperty("log4j.configurationFile", "log4j2.properties");
+ugi = UserGroupInformation.getCurrentUser();
+OzoneConfiguration conf = new OzoneConfiguration();
+conf.setBoolean(OZONE_ACL_ENABLED, true);
+conf.set(OZONE_ADMINISTRATORS, OZONE_ADMINISTRATORS_WILDCARD);
+conf.set(OZONE_ACL_AUTHORIZER_CLASS,
+OZONE_ACL_AUTHORIZER_CLASS_NATIVE);
+startCluster(conf);
+aclListToAdd.add(USER_ACL);
+aclListToAdd.add(USER_ACL_2);
+  }
+
+  private   /**
 
 Review comment:
   Done.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294441)
Time Spent: 3h 20m  (was: 3h 10m)

> Audit xxxAcl 

[jira] [Work logged] (HDDS-1768) Audit xxxAcl methods in OzoneManager

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1768?focusedWorklogId=294440&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294440
 ]

ASF GitHub Bot logged work on HDDS-1768:


Author: ASF GitHub Bot
Created on: 14/Aug/19 05:03
Start Date: 14/Aug/19 05:03
Worklog Time Spent: 10m 
  Work Description: dchitlangia commented on pull request #1204: HDDS-1768. 
Audit xxxAcl methods in OzoneManager
URL: https://github.com/apache/hadoop/pull/1204#discussion_r313704529
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientForAclAuditLog.java
 ##
 @@ -0,0 +1,268 @@
+package org.apache.hadoop.ozone.client.rpc;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import 
org.apache.hadoop.hdds.scm.protocolPB.StorageContainerLocationProtocolClientSideTranslatorPB;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.audit.AuditEventStatus;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.client.ObjectStore;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneClientFactory;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.ozone.security.acl.OzoneObjInfo;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.FixMethodOrder;
+import org.junit.Test;
+import org.junit.runners.MethodSorters;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.UUID;
+
+import static org.apache.hadoop.ozone.OzoneAcl.AclScope.ACCESS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_AUTHORIZER_CLASS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_AUTHORIZER_CLASS_NATIVE;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_ENABLED;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ADMINISTRATORS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ADMINISTRATORS_WILDCARD;
+import static 
org.apache.hadoop.ozone.security.acl.OzoneObj.ResourceType.VOLUME;
+import static org.apache.hadoop.ozone.security.acl.OzoneObj.StoreType.OZONE;
+import static org.junit.Assert.assertTrue;
+
+/**
+ * This class is to test audit logs for xxxACL APIs of Ozone Client.
+ */
+@FixMethodOrder(MethodSorters.NAME_ASCENDING)
+public class TestOzoneRpcClientForAclAuditLog {
+
+  static final Logger LOG =
+  LoggerFactory.getLogger(TestOzoneRpcClientForAclAuditLog.class);
+  private static UserGroupInformation ugi;
+  private static final OzoneAcl USER_ACL =
+  new OzoneAcl(IAccessAuthorizer.ACLIdentityType.USER,
+  "johndoe", IAccessAuthorizer.ACLType.ALL, ACCESS);
+  private static final OzoneAcl USER_ACL_2 =
+  new OzoneAcl(IAccessAuthorizer.ACLIdentityType.USER,
+  "jane", IAccessAuthorizer.ACLType.ALL, ACCESS);
+  private static List<OzoneAcl> aclListToAdd = new ArrayList<>();
+  private static MiniOzoneCluster cluster = null;
+  private static OzoneClient ozClient = null;
+  private static ObjectStore store = null;
+  private static StorageContainerLocationProtocolClientSideTranslatorPB
+  storageContainerLocationClient;
+  private static String scmId = UUID.randomUUID().toString();
+
+
+  /**
+   * Create a MiniOzoneCluster for testing.
+   *
+   * Ozone is made active by setting OZONE_ENABLED = true
+   *
+   * @throws IOException
+   */
+  @BeforeClass
+  public static void init() throws Exception {
+System.setProperty("log4j.configurationFile", "log4j2.properties");
+ugi = UserGroupInformation.getCurrentUser();
+OzoneConfiguration conf = new OzoneConfiguration();
+conf.setBoolean(OZONE_ACL_ENABLED, true);
+conf.set(OZONE_ADMINISTRATORS, OZONE_ADMINISTRATORS_WILDCARD);
+conf.set(OZONE_ACL_AUTHORIZER_CLASS,
+OZONE_ACL_AUTHORIZER_CLASS_NATIVE);
+startCluster(conf);
+aclListToAdd.add(USER_ACL);
+aclListToAdd.add(USER_ACL_2);
+  }
+
+  private   /**
 
 Review comment:
   Done.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294440)
Time Spent: 3h 10m  (was: 3h)

> Audit xxxAcl methods in 

[jira] [Commented] (HDDS-1956) Aged IO Thread exits on first read

2019-08-13 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16906877#comment-16906877
 ] 

Anu Engineer commented on HDDS-1956:


[~msingh] Thanks for the review. [~adoroszlai] Thanks for the fix. I have 
committed this to the trunk.

> Aged IO Thread exits on first read
> --
>
> Key: HDDS-1956
> URL: https://issues.apache.org/jira/browse/HDDS-1956
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Aged IO Thread in {{TestMiniChaosOzoneCluster}} exits on first read due to an 
> exception:
> {code}
> 2019-08-12 22:55:37,799 [pool-245-thread-1] INFO  
> ozone.MiniOzoneLoadGenerator 
> (MiniOzoneLoadGenerator.java:startAgedFilesLoad(194)) - AGED LOADGEN: Started 
> Aged IO Thread:2139.
> ...
> 2019-08-12 22:55:47,147 [pool-245-thread-1] ERROR 
> ozone.MiniOzoneLoadGenerator 
> (MiniOzoneLoadGenerator.java:startAgedFilesLoad(213)) - AGED LOADGEN: 0 
> Exiting due to exception
> java.lang.ArrayIndexOutOfBoundsException: 1
>   at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.readData(MiniOzoneLoadGenerator.java:151)
>   at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.startAgedFilesLoad(MiniOzoneLoadGenerator.java:209)
>   at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.lambda$startIO$1(MiniOzoneLoadGenerator.java:235)
> 2019-08-12 22:55:47,149 [pool-245-thread-1] INFO  
> ozone.MiniOzoneLoadGenerator 
> (MiniOzoneLoadGenerator.java:startAgedFilesLoad(219)) - Terminating IO 
> thread:2139.
> {code}
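
A minimal sketch of the failure pattern the trace suggests (names and types are illustrative, not the actual MiniOzoneLoadGenerator code): an index derived from one quantity, such as a chunk counter, is used against a buffer list sized by another, so the second read walks off the end. Guarding the index is the hedged fix shown here:
{code:java}
import java.nio.ByteBuffer;
import java.util.List;

final class ChunkedReader {
  // Buffers were created when the key was written; their count need not
  // match the chunk count the reader computes for itself.
  private final List<ByteBuffer> buffers;

  ChunkedReader(List<ByteBuffer> buffers) {
    this.buffers = buffers;
  }

  byte[] read(int chunkIndex, int chunkSize) {
    // Without this check, chunkIndex = 1 against a single-buffer list fails
    // with the "ArrayIndexOutOfBoundsException: 1" seen in the log above.
    if (chunkIndex >= buffers.size()) {
      throw new IllegalArgumentException("chunk " + chunkIndex
          + " out of range; only " + buffers.size() + " buffers were written");
    }
    ByteBuffer buf = buffers.get(chunkIndex).duplicate();
    byte[] out = new byte[Math.min(chunkSize, buf.remaining())];
    buf.get(out);
    return out;
  }
}
{code}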



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1956) Aged IO Thread exits on first read

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1956?focusedWorklogId=294438=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294438
 ]

ASF GitHub Bot logged work on HDDS-1956:


Author: ASF GitHub Bot
Created on: 14/Aug/19 04:58
Start Date: 14/Aug/19 04:58
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1287: HDDS-1956. 
Aged IO Thread exits on first read
URL: https://github.com/apache/hadoop/pull/1287
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294438)
Time Spent: 2.5h  (was: 2h 20m)

> Aged IO Thread exits on first read
> --
>
> Key: HDDS-1956
> URL: https://issues.apache.org/jira/browse/HDDS-1956
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Aged IO Thread in {{TestMiniChaosOzoneCluster}} exits on first read due to an 
> exception:
> {code}
> 2019-08-12 22:55:37,799 [pool-245-thread-1] INFO  
> ozone.MiniOzoneLoadGenerator 
> (MiniOzoneLoadGenerator.java:startAgedFilesLoad(194)) - AGED LOADGEN: Started 
> Aged IO Thread:2139.
> ...
> 2019-08-12 22:55:47,147 [pool-245-thread-1] ERROR 
> ozone.MiniOzoneLoadGenerator 
> (MiniOzoneLoadGenerator.java:startAgedFilesLoad(213)) - AGED LOADGEN: 0 
> Exiting due to exception
> java.lang.ArrayIndexOutOfBoundsException: 1
>   at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.readData(MiniOzoneLoadGenerator.java:151)
>   at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.startAgedFilesLoad(MiniOzoneLoadGenerator.java:209)
>   at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.lambda$startIO$1(MiniOzoneLoadGenerator.java:235)
> 2019-08-12 22:55:47,149 [pool-245-thread-1] INFO  
> ozone.MiniOzoneLoadGenerator 
> (MiniOzoneLoadGenerator.java:startAgedFilesLoad(219)) - Terminating IO 
> thread:2139.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1956) Aged IO Thread exits on first read

2019-08-13 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1956:
---
   Resolution: Fixed
Fix Version/s: 0.5.0
   Status: Resolved  (was: Patch Available)

> Aged IO Thread exits on first read
> --
>
> Key: HDDS-1956
> URL: https://issues.apache.org/jira/browse/HDDS-1956
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Aged IO Thread in {{TestMiniChaosOzoneCluster}} exits on first read due to an 
> exception:
> {code}
> 2019-08-12 22:55:37,799 [pool-245-thread-1] INFO  
> ozone.MiniOzoneLoadGenerator 
> (MiniOzoneLoadGenerator.java:startAgedFilesLoad(194)) - AGED LOADGEN: Started 
> Aged IO Thread:2139.
> ...
> 2019-08-12 22:55:47,147 [pool-245-thread-1] ERROR 
> ozone.MiniOzoneLoadGenerator 
> (MiniOzoneLoadGenerator.java:startAgedFilesLoad(213)) - AGED LOADGEN: 0 
> Exiting due to exception
> java.lang.ArrayIndexOutOfBoundsException: 1
>   at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.readData(MiniOzoneLoadGenerator.java:151)
>   at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.startAgedFilesLoad(MiniOzoneLoadGenerator.java:209)
>   at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.lambda$startIO$1(MiniOzoneLoadGenerator.java:235)
> 2019-08-12 22:55:47,149 [pool-245-thread-1] INFO  
> ozone.MiniOzoneLoadGenerator 
> (MiniOzoneLoadGenerator.java:startAgedFilesLoad(219)) - Terminating IO 
> thread:2139.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14423) Percent (%) and plus (+) characters no longer work in WebHDFS

2019-08-13 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16906872#comment-16906872
 ] 

Masatake Iwasaki commented on HDFS-14423:
-

Yeah, I'm working on the backport. I will attach a branch-2 patch later.

> Percent (%) and plus (+) characters no longer work in WebHDFS
> -
>
> Key: HDFS-14423
> URL: https://issues.apache.org/jira/browse/HDFS-14423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0, 3.1.2
> Environment: Ubuntu 16.04, but I believe this is irrelevant.
>Reporter: Jing Wang
>Assignee: Masatake Iwasaki
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14423.001.patch, HDFS-14423.002.patch, 
> HDFS-14423.003.patch, HDFS-14423.004.patch
>
>
> The following commands with percent (%) no longer work starting with version 
> 3.1:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/%
> $ hadoop/bin/hdfs dfs -cat webhdfs://localhost/%
> cat: URLDecoder: Incomplete trailing escape (%) pattern
> {code}
> Also, plus (+) characters get turned into spaces when doing DN operations:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/a+b
> $ hadoop/bin/hdfs dfs -mkdir webhdfs://localhost/c+d
> $ hadoop/bin/hdfs dfs -ls /
> Found 4 items
> -rw-r--r--   1 jing supergroup  0 2019-04-12 11:20 /a b
> drwxr-xr-x   - jing supergroup  0 2019-04-12 11:21 /c+d
> {code}
> I can confirm that these commands work correctly on 2.9 and 3.0. Also, the 
> usual hdfs:// client works as expected.
> I suspect a relation with HDFS-13176 or HDFS-13582, but I'm not sure what the 
> right fix is. Note that Hive uses % to escape special characters in partition 
> values, so banning % might not be a good option. For example, Hive will 
> create paths like {{table_name/partition_key=%2F}} when 
> {{partition_key='/'}}.
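
For reference, the mangling matches the JDK's form-decoding rules rather than anything HDFS-specific; a minimal plain-JDK demonstration (not the WebHDFS code path itself):
{code:java}
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;

public class DecodeDemo {
  public static void main(String[] args) throws UnsupportedEncodingException {
    // application/x-www-form-urlencoded decoding turns '+' into a space,
    // which explains /a+b becoming "/a b" ...
    System.out.println(URLDecoder.decode("a+b", "UTF-8")); // prints "a b"
    // ... and a bare '%' is rejected, matching the cat error above.
    try {
      URLDecoder.decode("%", "UTF-8");
    } catch (IllegalArgumentException e) {
      // "URLDecoder: Incomplete trailing escape (%) pattern"
      System.out.println(e.getMessage());
    }
  }
}
{code}
So path components must be percent-decoded without form semantics; applying form-style decoding to the whole path is the likely culprit.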



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14728) RBF: GetDatanodeReport causes a large GC pressure on the NameNodes

2019-08-13 Thread xuzq (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16906852#comment-16906852
 ] 

xuzq edited comment on HDFS-14728 at 8/14/19 4:14 AM:
--

Thanks [~elgoiri] for the comment. I have attached [^HDFS-14728-trunk-003.patch], 
please review.
{quote}By the way, we should cover what this exception is and have tests if so.
{quote}
It may throw IOException, RuntimeException, and Error.

I'm sorry, but I don't know which cases in the current code could cause 
RuntimeException or Error. :(


was (Author: xuzq_zander):
Thanks [~elgoiri] for the comment. I have attached [^HDFS-14728-trunk-002.patch], 
please review.
{quote}By the way, we should cover what this exception is and have tests if so.
{quote}
It may throw IOException, RuntimeException, and Error.

I'm sorry, but I don't know which cases in the current code could cause 
RuntimeException or Error. :(

> RBF: GetDatanodeReport causes a large GC pressure on the NameNodes
> --
>
> Key: HDFS-14728
> URL: https://issues.apache.org/jira/browse/HDFS-14728
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14728-trunk-001.patch, HDFS-14728-trunk-002.patch, 
> HDFS-14728-trunk-003.patch
>
>
> When a cluster contains millions of DNs, *GetDatanodeReport* is pretty 
> expensive, and it puts heavy GC pressure on the NameNode.
> When multiple NSs share those millions of DNs through federation and the 
> router listens to all the NSs, the problem gets worse: all the NSs go into 
> GC at the same time.
> RBF should cache the datanode report information and have an option to 
> disable the cache.
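
A hedged sketch of the caching idea (illustrative only, not the attached patch): the router keeps the last report and refreshes it at most once per TTL, and a TTL of zero effectively disables the cache:
{code:java}
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

final class DatanodeReportCache<T> {

  interface Loader<T> {
    T load() throws Exception; // e.g. one expensive getDatanodeReport RPC
  }

  private static final class Entry<T> {
    final T value;
    final long loadedAtNanos;
    Entry(T value, long loadedAtNanos) {
      this.value = value;
      this.loadedAtNanos = loadedAtNanos;
    }
  }

  private final long ttlNanos;
  private final AtomicReference<Entry<T>> ref = new AtomicReference<>();

  DatanodeReportCache(long ttlMillis) {
    this.ttlNanos = TimeUnit.MILLISECONDS.toNanos(ttlMillis);
  }

  T get(Loader<T> loader) throws Exception {
    Entry<T> e = ref.get();
    long now = System.nanoTime();
    if (e != null && ttlNanos > 0 && now - e.loadedAtNanos < ttlNanos) {
      return e.value; // serve the cached report, sparing the NameNodes
    }
    T fresh = loader.load(); // a single call instead of one per client request
    ref.set(new Entry<>(fresh, now));
    return fresh;
  }
}
{code}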



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14728) RBF: GetDatanodeReport causes a large GC pressure on the NameNodes

2019-08-13 Thread xuzq (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuzq updated HDFS-14728:

Attachment: HDFS-14728-trunk-003.patch

> RBF: GetDatanodeReport causes a large GC pressure on the NameNodes
> --
>
> Key: HDFS-14728
> URL: https://issues.apache.org/jira/browse/HDFS-14728
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14728-trunk-001.patch, HDFS-14728-trunk-002.patch, 
> HDFS-14728-trunk-003.patch
>
>
> When a cluster contains millions of DNs, *GetDatanodeReport* is pretty 
> expensive, and it puts heavy GC pressure on the NameNode.
> When multiple NSs share those millions of DNs through federation and the 
> router listens to all the NSs, the problem gets worse: all the NSs go into 
> GC at the same time.
> RBF should cache the datanode report information and have an option to 
> disable the cache.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1920) Place ozone.om.address config key default value in ozone-site.xml

2019-08-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16906865#comment-16906865
 ] 

Hudson commented on HDDS-1920:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17111 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17111/])
HDDS-1920. Place ozone.om.address config key default value in (aengineer: rev 
bf457797f607f3aeeb2292e63f440cb13e15a2d9)
* (edit) hadoop-hdds/common/src/main/resources/ozone-default.xml


> Place ozone.om.address config key default value in ozone-site.xml
> -
>
> Key: HDDS-1920
> URL: https://issues.apache.org/jira/browse/HDDS-1920
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {code:xml}
> <property>
>   <name>ozone.om.address</name>
> -  <value/>
> +  <value>0.0.0.0:9862</value>
>   <tag>OM, REQUIRED</tag>
>   <description>
>     The address of the Ozone OM service. This allows clients to discover
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1768) Audit xxxAcl methods in OzoneManager

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1768?focusedWorklogId=294428=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294428
 ]

ASF GitHub Bot logged work on HDDS-1768:


Author: ASF GitHub Bot
Created on: 14/Aug/19 04:07
Start Date: 14/Aug/19 04:07
Worklog Time Spent: 10m 
  Work Description: dchitlangia commented on pull request #1204: HDDS-1768. 
Audit xxxAcl methods in OzoneManager
URL: https://github.com/apache/hadoop/pull/1204#discussion_r313697013
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -2999,23 +3016,36 @@ public OmKeyInfo lookupFile(OmKeyArgs args) throws 
IOException {
*/
   @Override
   public boolean addAcl(OzoneObj obj, OzoneAcl acl) throws IOException {
-if(isAclEnabled) {
-  checkAcls(obj.getResourceType(), obj.getStoreType(), ACLType.WRITE_ACL,
-  obj.getVolumeName(), obj.getBucketName(), obj.getKeyName());
-}
-// TODO: Audit ACL operation.
-switch (obj.getResourceType()) {
-case VOLUME:
-  return volumeManager.addAcl(obj, acl);
-case BUCKET:
-  return bucketManager.addAcl(obj, acl);
-case KEY:
-  return keyManager.addAcl(obj, acl);
-case PREFIX:
-  return prefixManager.addAcl(obj, acl);
-default:
-  throw new OMException("Unexpected resource type: " +
-  obj.getResourceType(), INVALID_REQUEST);
+boolean auditSuccess = true;
+
+try{
+  if(isAclEnabled) {
+checkAcls(obj.getResourceType(), obj.getStoreType(), ACLType.WRITE_ACL,
+obj.getVolumeName(), obj.getBucketName(), obj.getKeyName());
+  }
+  switch (obj.getResourceType()) {
+  case VOLUME:
+return volumeManager.addAcl(obj, acl);
+  case BUCKET:
+return bucketManager.addAcl(obj, acl);
+  case KEY:
+return keyManager.addAcl(obj, acl);
+  case PREFIX:
+return prefixManager.addAcl(obj, acl);
+  default:
+throw new OMException("Unexpected resource type: " +
+obj.getResourceType(), INVALID_REQUEST);
+  }
+} catch(Exception ex) {
+  auditSuccess = false;
+  auditAcl(obj, Arrays.asList(acl), OMAction.ADD_ACL,
 
 Review comment:
   @bharatviswa504 I think we can skip this one, as throughout this class we are 
following this approach of using auditSuccess, mostly for code 
readability/correctness.
   I think we can discuss with @anuengineer on this. If he is on board with 
this change, then we can change it across OM, SCM, and DN for the audit log. 
Since that will be a big enough change, we can do it in a separate jira. Does 
that sound good?
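
For illustration, the flag-free shape being discussed might look like the sketch below. Here {{dispatchAddAcl}} stands in for the switch over resource types, and the {{auditAcl}} overload taking an exception is an assumed helper, not necessarily the real signature:
{code:java}
// Hedged sketch: audit the failure inside catch and the success on the
// normal return path, so no auditSuccess flag is needed.
@Override
public boolean addAcl(OzoneObj obj, OzoneAcl acl) throws IOException {
  try {
    if (isAclEnabled) {
      checkAcls(obj.getResourceType(), obj.getStoreType(), ACLType.WRITE_ACL,
          obj.getVolumeName(), obj.getBucketName(), obj.getKeyName());
    }
    boolean result = dispatchAddAcl(obj, acl); // assumed helper
    auditAcl(obj, Arrays.asList(acl), OMAction.ADD_ACL, null); // success
    return result;
  } catch (Exception ex) {
    auditAcl(obj, Arrays.asList(acl), OMAction.ADD_ACL, ex);   // failure
    throw ex; // precise rethrow keeps the declared IOException contract
  }
}
{code}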
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294428)
Time Spent: 3h  (was: 2h 50m)

> Audit xxxAcl methods in OzoneManager
> 
>
> Key: HDDS-1768
> URL: https://issues.apache.org/jira/browse/HDDS-1768
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Audit permission failures from authorizer



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6980) TestWebHdfsFileSystemContract fails in trunk

2019-08-13 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-6980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-6980:

Resolution: Cannot Reproduce
Status: Resolved  (was: Patch Available)

This issue has become stale. Closing.

> TestWebHdfsFileSystemContract fails in trunk
> 
>
> Key: HDFS-6980
> URL: https://issues.apache.org/jira/browse/HDFS-6980
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Tsuyoshi Ozawa
>Priority: Major
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6980.1-2.patch, HDFS-6980.1.patch
>
>
> Many tests in TestWebHdfsFileSystemContract fail with a "too many open files" 
> error.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14722) RBF: GetMountPointStatus should return mountTable information when getFileInfoAll throw IOException

2019-08-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16906858#comment-16906858
 ] 

Hadoop QA commented on HDFS-14722:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 39s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup |
|   | hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken |
|   | hadoop.hdfs.server.federation.router.TestRouterMountTableForBug |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14722 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977546/HDFS-14722-trunk-bug-discuss.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d66e60d0ff57 4.4.0-157-generic #185-Ubuntu SMP Tue Jul 23 
09:17:01 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e6d240d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27501/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27501/testReport/ |
| Max. process+thread count | 1617 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 

[jira] [Commented] (HDFS-14728) RBF: GetDatanodeReport causes a large GC pressure on the NameNodes

2019-08-13 Thread xuzq (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16906852#comment-16906852
 ] 

xuzq commented on HDFS-14728:
-

Thanks [~elgoiri] for the comment. I have attached [^HDFS-14728-trunk-002.patch], 
please review.
{quote}By the way, we should cover what this exception is and have tests if so.
{quote}
It may throw IOException, RuntimeException, and Error.

I'm sorry, but I don't know which cases in the current code could cause 
RuntimeException or Error. :(

> RBF: GetDatanodeReport causes a large GC pressure on the NameNodes
> --
>
> Key: HDFS-14728
> URL: https://issues.apache.org/jira/browse/HDFS-14728
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14728-trunk-001.patch, HDFS-14728-trunk-002.patch
>
>
> When a cluster contains millions of DNs, *GetDatanodeReport* is pretty 
> expensive, and it puts heavy GC pressure on the NameNode.
> When multiple NSs share those millions of DNs through federation and the 
> router listens to all the NSs, the problem gets worse: all the NSs go into 
> GC at the same time.
> RBF should cache the datanode report information and have an option to 
> disable the cache.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14713) RBF: RouterAdmin supports refreshRouterArgs command but not on display

2019-08-13 Thread wangzhaohui (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16906851#comment-16906851
 ] 

wangzhaohui commented on HDFS-14713:


Sorry, my bad. Uploaded v004, please review. Thanks, [~ayushtkn] [~elgoiri].

> RBF: RouterAdmin supports refreshRouterArgs command but not on display
> --
>
> Key: HDFS-14713
> URL: https://issues.apache.org/jira/browse/HDFS-14713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Major
> Attachments: HDFS-14713-000.patch, HDFS-14713-001.patch, 
> HDFS-14713-002.patch, HDFS-14713-003.patch, HDFS-14713-004.patch, after.png, 
> before.png
>
>
> When the cmd command is null, the refreshRouterArgs command is not displayed, 
> because one value is missing from the String[] commands:
> {code:java}
> //
> if (cmd == null) {
>   String[] commands =
>   {"-add", "-update", "-rm", "-ls", "-getDestination",
>   "-setQuota", "-clrQuota",
>   "-safemode", "-nameservice", "-getDisabledNameservices",
>   "-refresh"};
>   StringBuilder usage = new StringBuilder();
>   usage.append("Usage: hdfs dfsrouteradmin :\n");
>   for (int i = 0; i < commands.length; i++) {
> usage.append(getUsage(commands[i]));
> if (i + 1 < commands.length) {
>   usage.append("\n");
> }
>   }
>   
> }
> {code}
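
The patch presumably closes that gap by adding the missing entry; a hedged sketch of the corrected array, assuming the absent flag is {{-refreshRouterArgs}} per the issue title:
{code:java}
// Include the missing entry so the null-cmd usage listing covers it too.
String[] commands =
    {"-add", "-update", "-rm", "-ls", "-getDestination",
        "-setQuota", "-clrQuota",
        "-safemode", "-nameservice", "-getDisabledNameservices",
        "-refresh", "-refreshRouterArgs"};
{code}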



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14728) RBF: GetDatanodeReport causes a large GC pressure on the NameNodes

2019-08-13 Thread xuzq (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuzq updated HDFS-14728:

Attachment: HDFS-14728-trunk-002.patch

> RBF: GetDatanodeReport causes a large GC pressure on the NameNodes
> --
>
> Key: HDFS-14728
> URL: https://issues.apache.org/jira/browse/HDFS-14728
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14728-trunk-001.patch, HDFS-14728-trunk-002.patch
>
>
> When a cluster contains millions of DNs, *GetDatanodeReport* is pretty 
> expensive, and it puts heavy GC pressure on the NameNode.
> When multiple NSs share those millions of DNs through federation and the 
> router listens to all the NSs, the problem gets worse: all the NSs go into 
> GC at the same time.
> RBF should cache the datanode report information and have an option to 
> disable the cache.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14713) RBF: RouterAdmin supports refreshRouterArgs command but not on display

2019-08-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16906843#comment-16906843
 ] 

Hadoop QA commented on HDFS-14713:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 38s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken |
|   | hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14713 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977544/HDFS-14713-004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e01d5a1d43de 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e6d240d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27500/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27500/testReport/ |
| Max. process+thread count | 1617 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 

[jira] [Updated] (HDDS-1920) Place ozone.om.address config key default value in ozone-site.xml

2019-08-13 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1920:
---
   Resolution: Fixed
Fix Version/s: 0.5.0
   Status: Resolved  (was: Patch Available)

Thank you for your contribution. [~arp] and [~bharatviswa], thanks for the 
reviews.

> Place ozone.om.address config key default value in ozone-site.xml
> -
>
> Key: HDDS-1920
> URL: https://issues.apache.org/jira/browse/HDDS-1920
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {code:xml}
> <property>
>   <name>ozone.om.address</name>
> -  <value/>
> +  <value>0.0.0.0:9862</value>
>   <tag>OM, REQUIRED</tag>
>   <description>
>     The address of the Ozone OM service. This allows clients to discover
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1920) Place ozone.om.address config key default value in ozone-site.xml

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1920?focusedWorklogId=294394=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294394
 ]

ASF GitHub Bot logged work on HDDS-1920:


Author: ASF GitHub Bot
Created on: 14/Aug/19 03:30
Start Date: 14/Aug/19 03:30
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1237: HDDS-1920. 
Place ozone.om.address config key default value in ozone-site.xml
URL: https://github.com/apache/hadoop/pull/1237
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294394)
Time Spent: 1h  (was: 50m)

> Place ozone.om.address config key default value in ozone-site.xml
> -
>
> Key: HDDS-1920
> URL: https://issues.apache.org/jira/browse/HDDS-1920
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {code:xml}
> <property>
>   <name>ozone.om.address</name>
> -  <value/>
> +  <value>0.0.0.0:9862</value>
>   <tag>OM, REQUIRED</tag>
>   <description>
>     The address of the Ozone OM service. This allows clients to discover
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1871) Remove anti-affinity rules from k8s minkube example

2019-08-13 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16906830#comment-16906830
 ] 

Anu Engineer commented on HDDS-1871:


I am getting the following error:
{noformat}
 > kubectl get pod
NAME READY STATUS  RESTARTS   AGE
datanode-0   0/1   ImagePullBackOff0  20m
om-0 0/1   ErrImagePull0  20m
s3g-00/1   ImagePullBackOff0  20m
scm-00/1   Init:ErrImagePull   0  20m
{noformat}

Any idea what I am doing wrong?

Here is my env:

* minikube v1.3.1 on Darwin 10.14.5
* Kubernetes 1.15.2
* Docker 17.06.0-ce



> Remove anti-affinity rules from k8s minkube example
> ---
>
> Key: HDDS-1871
> URL: https://issues.apache.org/jira/browse/HDDS-1871
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: kubernetes
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> HDDS-1646 introduced real persistence for the k8s example deployment files, 
> which means we need anti-affinity scheduling rules: even if we use a 
> statefulset instead of a daemonset, we would like to start one datanode per 
> real node.
> With minikube we have only one node, so the scheduling rule should be 
> removed to allow at least 3 datanodes on the same physical node.
> How to test:
> {code}
>  mvn clean install -DskipTests -f pom.ozone.xml
> cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/kubernetes/examples/minikube
> minikube start
> kubectl apply -f .
> kc get pod
> {code}
> You should see 3 datanode instances.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14722) RBF: GetMountPointStatus should return mountTable information when getFileInfoAll throw IOException

2019-08-13 Thread xuzq (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16906814#comment-16906814
 ] 

xuzq commented on HDFS-14722:
-

I am so sorry, maybe I didn't express it clearly.

I have attached [^HDFS-14722-trunk-bug-discuss.patch]; please confirm these 
bugs with the UT. Thanks.

> RBF: GetMountPointStatus should return mountTable information when 
> getFileInfoAll throw IOException
> ---
>
> Key: HDFS-14722
> URL: https://issues.apache.org/jira/browse/HDFS-14722
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14722-trunk-001.patch, HDFS-14722-trunk-002.patch, 
> HDFS-14722-trunk-003.patch, HDFS-14722-trunk-bug-discuss.patch
>
>
> When getFileInfoAll throws an IOException, we should return the mountTable 
> information instead of the superuser information.
> The current code looks like:
> {code:java}
> // RouterClientProtocol#getMountPointStatus
> try {
>   String mName = name.startsWith("/") ? name : "/" + name;
>   MountTableResolver mountTable = (MountTableResolver) subclusterResolver;
>   MountTable entry = mountTable.getMountPoint(mName);
>   if (entry != null) {
> RemoteMethod method = new RemoteMethod("getFileInfo",
> new Class[] {String.class}, new RemoteParam());
> HdfsFileStatus fInfo = getFileInfoAll(
> entry.getDestinations(), method, mountStatusTimeOut);
> if (fInfo != null) {
>   permission = fInfo.getPermission();
>   owner = fInfo.getOwner();
>   group = fInfo.getGroup();
>   childrenNum = fInfo.getChildrenNum();
> } else {
>   permission = entry.getMode();
>   owner = entry.getOwnerName();
>   group = entry.getGroupName();
> }
>   }
> } catch (IOException e) {
>   LOG.error("Cannot get mount point: {}", e.getMessage());
> }
> {code}
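
A hedged sketch of the proposed behavior (reusing the variables from the snippet above; not necessarily the attached patch): catch the IOException around the remote lookup only, so the mount table fallback still runs when the lookup fails:
{code:java}
HdfsFileStatus fInfo = null;
try {
  fInfo = getFileInfoAll(
      entry.getDestinations(), method, mountStatusTimeOut);
} catch (IOException e) {
  // Lookup failed; fall through to the mount table information below.
  LOG.error("Cannot get mount point: {}", e.getMessage());
}
if (fInfo != null) {
  permission = fInfo.getPermission();
  owner = fInfo.getOwner();
  group = fInfo.getGroup();
  childrenNum = fInfo.getChildrenNum();
} else {
  // IOException or missing destination: use the mount table entry's own
  // attributes instead of the superuser defaults.
  permission = entry.getMode();
  owner = entry.getOwnerName();
  group = entry.getGroupName();
}
{code}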



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14722) RBF: GetMountPointStatus should return mountTable information when getFileInfoAll throw IOException

2019-08-13 Thread xuzq (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuzq updated HDFS-14722:

Attachment: HDFS-14722-trunk-bug-discuss.patch

> RBF: GetMountPointStatus should return mountTable information when 
> getFileInfoAll throw IOException
> ---
>
> Key: HDFS-14722
> URL: https://issues.apache.org/jira/browse/HDFS-14722
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14722-trunk-001.patch, HDFS-14722-trunk-002.patch, 
> HDFS-14722-trunk-003.patch, HDFS-14722-trunk-bug-discuss.patch
>
>
> When getFileInfoAll throws an IOException, we should return the mountTable 
> information instead of the superuser information.
> The current code looks like:
> {code:java}
> // RouterClientProtocol#getMountPointStatus
> try {
>   String mName = name.startsWith("/") ? name : "/" + name;
>   MountTableResolver mountTable = (MountTableResolver) subclusterResolver;
>   MountTable entry = mountTable.getMountPoint(mName);
>   if (entry != null) {
> RemoteMethod method = new RemoteMethod("getFileInfo",
> new Class[] {String.class}, new RemoteParam());
> HdfsFileStatus fInfo = getFileInfoAll(
> entry.getDestinations(), method, mountStatusTimeOut);
> if (fInfo != null) {
>   permission = fInfo.getPermission();
>   owner = fInfo.getOwner();
>   group = fInfo.getGroup();
>   childrenNum = fInfo.getChildrenNum();
> } else {
>   permission = entry.getMode();
>   owner = entry.getOwnerName();
>   group = entry.getGroupName();
> }
>   }
> } catch (IOException e) {
>   LOG.error("Cannot get mount point: {}", e.getMessage());
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1768) Audit xxxAcl methods in OzoneManager

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1768?focusedWorklogId=294338=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294338
 ]

ASF GitHub Bot logged work on HDDS-1768:


Author: ASF GitHub Bot
Created on: 14/Aug/19 02:01
Start Date: 14/Aug/19 02:01
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1204: 
HDDS-1768. Audit xxxAcl methods in OzoneManager
URL: https://github.com/apache/hadoop/pull/1204#discussion_r313679227
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientForAclAuditLog.java
 ##
 @@ -0,0 +1,268 @@
+package org.apache.hadoop.ozone.client.rpc;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import 
org.apache.hadoop.hdds.scm.protocolPB.StorageContainerLocationProtocolClientSideTranslatorPB;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.audit.AuditEventStatus;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.client.ObjectStore;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneClientFactory;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.ozone.security.acl.OzoneObjInfo;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.FixMethodOrder;
+import org.junit.Test;
+import org.junit.runners.MethodSorters;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.UUID;
+
+import static org.apache.hadoop.ozone.OzoneAcl.AclScope.ACCESS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_AUTHORIZER_CLASS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_AUTHORIZER_CLASS_NATIVE;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_ENABLED;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ADMINISTRATORS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ADMINISTRATORS_WILDCARD;
+import static 
org.apache.hadoop.ozone.security.acl.OzoneObj.ResourceType.VOLUME;
+import static org.apache.hadoop.ozone.security.acl.OzoneObj.StoreType.OZONE;
+import static org.junit.Assert.assertTrue;
+
+/**
+ * This class is to test audit logs for xxxACL APIs of Ozone Client.
+ */
+@FixMethodOrder(MethodSorters.NAME_ASCENDING)
+public class TestOzoneRpcClientForAclAuditLog {
+
+  static final Logger LOG =
+  LoggerFactory.getLogger(TestOzoneRpcClientForAclAuditLog.class);
+  private static UserGroupInformation ugi;
+  private static final OzoneAcl USER_ACL =
+  new OzoneAcl(IAccessAuthorizer.ACLIdentityType.USER,
+  "johndoe", IAccessAuthorizer.ACLType.ALL, ACCESS);
+  private static final OzoneAcl USER_ACL_2 =
+  new OzoneAcl(IAccessAuthorizer.ACLIdentityType.USER,
+  "jane", IAccessAuthorizer.ACLType.ALL, ACCESS);
+  private static List aclListToAdd = new ArrayList<>();
+  private static MiniOzoneCluster cluster = null;
+  private static OzoneClient ozClient = null;
+  private static ObjectStore store = null;
+  private static StorageContainerLocationProtocolClientSideTranslatorPB
+  storageContainerLocationClient;
+  private static String scmId = UUID.randomUUID().toString();
+
+
+  /**
+   * Create a MiniOzoneCluster for testing.
+   *
+   * Ozone is made active by setting OZONE_ENABLED = true
+   *
+   * @throws IOException
+   */
+  @BeforeClass
+  public static void init() throws Exception {
+System.setProperty("log4j.configurationFile", "log4j2.properties");
+ugi = UserGroupInformation.getCurrentUser();
+OzoneConfiguration conf = new OzoneConfiguration();
+conf.setBoolean(OZONE_ACL_ENABLED, true);
+conf.set(OZONE_ADMINISTRATORS, OZONE_ADMINISTRATORS_WILDCARD);
+conf.set(OZONE_ACL_AUTHORIZER_CLASS,
+OZONE_ACL_AUTHORIZER_CLASS_NATIVE);
+startCluster(conf);
+aclListToAdd.add(USER_ACL);
+aclListToAdd.add(USER_ACL_2);
+  }
+
+  private   /**
+   * Create a MiniOzoneCluster for testing.
+   * @param conf Configurations to start the cluster.
+   * @throws Exception
+   */
+  static void startCluster(OzoneConfiguration conf) throws Exception {
+cluster = MiniOzoneCluster.newBuilder(conf)
+.setNumDatanodes(3)
+.setScmId(scmId)
+.build();
+cluster.waitForClusterToBeReady();
+ozClient = OzoneClientFactory.getRpcClient(conf);
+store = ozClient.getObjectStore();
+

[jira] [Work logged] (HDDS-1768) Audit xxxAcl methods in OzoneManager

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1768?focusedWorklogId=294334=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294334
 ]

ASF GitHub Bot logged work on HDDS-1768:


Author: ASF GitHub Bot
Created on: 14/Aug/19 01:56
Start Date: 14/Aug/19 01:56
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1204: 
HDDS-1768. Audit xxxAcl methods in OzoneManager
URL: https://github.com/apache/hadoop/pull/1204#discussion_r313678380
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientForAclAuditLog.java
 ##
 @@ -0,0 +1,268 @@
+package org.apache.hadoop.ozone.client.rpc;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import 
org.apache.hadoop.hdds.scm.protocolPB.StorageContainerLocationProtocolClientSideTranslatorPB;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.audit.AuditEventStatus;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.client.ObjectStore;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneClientFactory;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.ozone.security.acl.OzoneObjInfo;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.FixMethodOrder;
+import org.junit.Test;
+import org.junit.runners.MethodSorters;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.UUID;
+
+import static org.apache.hadoop.ozone.OzoneAcl.AclScope.ACCESS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_AUTHORIZER_CLASS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_AUTHORIZER_CLASS_NATIVE;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_ENABLED;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ADMINISTRATORS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ADMINISTRATORS_WILDCARD;
+import static 
org.apache.hadoop.ozone.security.acl.OzoneObj.ResourceType.VOLUME;
+import static org.apache.hadoop.ozone.security.acl.OzoneObj.StoreType.OZONE;
+import static org.junit.Assert.assertTrue;
+
+/**
+ * This class is to test audit logs for xxxACL APIs of Ozone Client.
+ */
+@FixMethodOrder(MethodSorters.NAME_ASCENDING)
+public class TestOzoneRpcClientForAclAuditLog {
+
+  static final Logger LOG =
+  LoggerFactory.getLogger(TestOzoneRpcClientForAclAuditLog.class);
+  private static UserGroupInformation ugi;
+  private static final OzoneAcl USER_ACL =
+  new OzoneAcl(IAccessAuthorizer.ACLIdentityType.USER,
+  "johndoe", IAccessAuthorizer.ACLType.ALL, ACCESS);
+  private static final OzoneAcl USER_ACL_2 =
+  new OzoneAcl(IAccessAuthorizer.ACLIdentityType.USER,
+  "jane", IAccessAuthorizer.ACLType.ALL, ACCESS);
+  private static List aclListToAdd = new ArrayList<>();
+  private static MiniOzoneCluster cluster = null;
+  private static OzoneClient ozClient = null;
+  private static ObjectStore store = null;
+  private static StorageContainerLocationProtocolClientSideTranslatorPB
+  storageContainerLocationClient;
+  private static String scmId = UUID.randomUUID().toString();
+
+
+  /**
+   * Create a MiniOzoneCluster for testing.
+   *
+   * Ozone is made active by setting OZONE_ENABLED = true
+   *
+   * @throws IOException
+   */
+  @BeforeClass
+  public static void init() throws Exception {
+System.setProperty("log4j.configurationFile", "log4j2.properties");
+ugi = UserGroupInformation.getCurrentUser();
+OzoneConfiguration conf = new OzoneConfiguration();
+conf.setBoolean(OZONE_ACL_ENABLED, true);
+conf.set(OZONE_ADMINISTRATORS, OZONE_ADMINISTRATORS_WILDCARD);
+conf.set(OZONE_ACL_AUTHORIZER_CLASS,
+OZONE_ACL_AUTHORIZER_CLASS_NATIVE);
+startCluster(conf);
+aclListToAdd.add(USER_ACL);
+aclListToAdd.add(USER_ACL_2);
+  }
+
+  private   /**
+   * Create a MiniOzoneCluster for testing.
+   * @param conf Configurations to start the cluster.
+   * @throws Exception
+   */
+  static void startCluster(OzoneConfiguration conf) throws Exception {
+cluster = MiniOzoneCluster.newBuilder(conf)
+.setNumDatanodes(3)
+.setScmId(scmId)
+.build();
+cluster.waitForClusterToBeReady();
+ozClient = OzoneClientFactory.getRpcClient(conf);
+store = ozClient.getObjectStore();
+

[jira] [Updated] (HDFS-14713) RBF: RouterAdmin supports refreshRouterArgs command but not on display

2019-08-13 Thread wangzhaohui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangzhaohui updated HDFS-14713:
---
Attachment: HDFS-14713-004.patch

> RBF: RouterAdmin supports refreshRouterArgs command but not on display
> --
>
> Key: HDFS-14713
> URL: https://issues.apache.org/jira/browse/HDFS-14713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Major
> Attachments: HDFS-14713-000.patch, HDFS-14713-001.patch, 
> HDFS-14713-002.patch, HDFS-14713-003.patch, HDFS-14713-004.patch, after.png, 
> before.png
>
>
> When the cmd command is null, the refreshRouterArgs command is not displayed, 
> because one value is missing from the String[] commands:
> {code:java}
> //
> if (cmd == null) {
>   String[] commands =
>   {"-add", "-update", "-rm", "-ls", "-getDestination",
>   "-setQuota", "-clrQuota",
>   "-safemode", "-nameservice", "-getDisabledNameservices",
>   "-refresh"};
>   StringBuilder usage = new StringBuilder();
>   usage.append("Usage: hdfs dfsrouteradmin :\n");
>   for (int i = 0; i < commands.length; i++) {
> usage.append(getUsage(commands[i]));
> if (i + 1 < commands.length) {
>   usage.append("\n");
> }
>   }
>   
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1768) Audit xxxAcl methods in OzoneManager

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1768?focusedWorklogId=294328=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294328
 ]

ASF GitHub Bot logged work on HDDS-1768:


Author: ASF GitHub Bot
Created on: 14/Aug/19 01:55
Start Date: 14/Aug/19 01:55
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1204: 
HDDS-1768. Audit xxxAcl methods in OzoneManager
URL: https://github.com/apache/hadoop/pull/1204#discussion_r313678178
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientForAclAuditLog.java
 ##
 @@ -0,0 +1,268 @@
+package org.apache.hadoop.ozone.client.rpc;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import 
org.apache.hadoop.hdds.scm.protocolPB.StorageContainerLocationProtocolClientSideTranslatorPB;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.audit.AuditEventStatus;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.client.ObjectStore;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneClientFactory;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.ozone.security.acl.OzoneObjInfo;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.FixMethodOrder;
+import org.junit.Test;
+import org.junit.runners.MethodSorters;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.UUID;
+
+import static org.apache.hadoop.ozone.OzoneAcl.AclScope.ACCESS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_AUTHORIZER_CLASS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_AUTHORIZER_CLASS_NATIVE;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_ENABLED;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ADMINISTRATORS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ADMINISTRATORS_WILDCARD;
+import static 
org.apache.hadoop.ozone.security.acl.OzoneObj.ResourceType.VOLUME;
+import static org.apache.hadoop.ozone.security.acl.OzoneObj.StoreType.OZONE;
+import static org.junit.Assert.assertTrue;
+
+/**
+ * This class is to test audit logs for xxxACL APIs of Ozone Client.
+ */
+@FixMethodOrder(MethodSorters.NAME_ASCENDING)
+public class TestOzoneRpcClientForAclAuditLog {
+
+  static final Logger LOG =
+  LoggerFactory.getLogger(TestOzoneRpcClientForAclAuditLog.class);
+  private static UserGroupInformation ugi;
+  private static final OzoneAcl USER_ACL =
+  new OzoneAcl(IAccessAuthorizer.ACLIdentityType.USER,
+  "johndoe", IAccessAuthorizer.ACLType.ALL, ACCESS);
+  private static final OzoneAcl USER_ACL_2 =
+  new OzoneAcl(IAccessAuthorizer.ACLIdentityType.USER,
+  "jane", IAccessAuthorizer.ACLType.ALL, ACCESS);
+  private static List aclListToAdd = new ArrayList<>();
+  private static MiniOzoneCluster cluster = null;
+  private static OzoneClient ozClient = null;
+  private static ObjectStore store = null;
+  private static StorageContainerLocationProtocolClientSideTranslatorPB
+  storageContainerLocationClient;
+  private static String scmId = UUID.randomUUID().toString();
+
+
+  /**
+   * Create a MiniOzoneCluster for testing.
+   *
+   * Ozone is made active by setting OZONE_ENABLED = true
+   *
+   * @throws IOException
+   */
+  @BeforeClass
+  public static void init() throws Exception {
+System.setProperty("log4j.configurationFile", "log4j2.properties");
+ugi = UserGroupInformation.getCurrentUser();
+OzoneConfiguration conf = new OzoneConfiguration();
+conf.setBoolean(OZONE_ACL_ENABLED, true);
+conf.set(OZONE_ADMINISTRATORS, OZONE_ADMINISTRATORS_WILDCARD);
+conf.set(OZONE_ACL_AUTHORIZER_CLASS,
+OZONE_ACL_AUTHORIZER_CLASS_NATIVE);
+startCluster(conf);
+aclListToAdd.add(USER_ACL);
+aclListToAdd.add(USER_ACL_2);
+  }
+
+  private   /**
 
 Review comment:
   Indentation issue: the Javadoc comment ends up wedged between the `private` modifier and the rest of the declaration.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294328)
Time Spent: 

[jira] [Work logged] (HDDS-1768) Audit xxxAcl methods in OzoneManager

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1768?focusedWorklogId=294324&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294324
 ]

ASF GitHub Bot logged work on HDDS-1768:


Author: ASF GitHub Bot
Created on: 14/Aug/19 01:48
Start Date: 14/Aug/19 01:48
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1204: 
HDDS-1768. Audit xxxAcl methods in OzoneManager
URL: https://github.com/apache/hadoop/pull/1204#discussion_r313677222
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -2999,23 +3016,36 @@ public OmKeyInfo lookupFile(OmKeyArgs args) throws 
IOException {
*/
   @Override
   public boolean addAcl(OzoneObj obj, OzoneAcl acl) throws IOException {
-if(isAclEnabled) {
-  checkAcls(obj.getResourceType(), obj.getStoreType(), ACLType.WRITE_ACL,
-  obj.getVolumeName(), obj.getBucketName(), obj.getKeyName());
-}
-// TODO: Audit ACL operation.
-switch (obj.getResourceType()) {
-case VOLUME:
-  return volumeManager.addAcl(obj, acl);
-case BUCKET:
-  return bucketManager.addAcl(obj, acl);
-case KEY:
-  return keyManager.addAcl(obj, acl);
-case PREFIX:
-  return prefixManager.addAcl(obj, acl);
-default:
-  throw new OMException("Unexpected resource type: " +
-  obj.getResourceType(), INVALID_REQUEST);
+boolean auditSuccess = true;
+
+try{
+  if(isAclEnabled) {
+checkAcls(obj.getResourceType(), obj.getStoreType(), ACLType.WRITE_ACL,
+obj.getVolumeName(), obj.getBucketName(), obj.getKeyName());
+  }
+  switch (obj.getResourceType()) {
+  case VOLUME:
+return volumeManager.addAcl(obj, acl);
+  case BUCKET:
+return bucketManager.addAcl(obj, acl);
+  case KEY:
+return keyManager.addAcl(obj, acl);
+  case PREFIX:
+return prefixManager.addAcl(obj, acl);
+  default:
+throw new OMException("Unexpected resource type: " +
+obj.getResourceType(), INVALID_REQUEST);
+  }
+} catch(Exception ex) {
+  auditSuccess = false;
+  auditAcl(obj, Arrays.asList(acl), OMAction.ADD_ACL,
 
 Review comment:
   Minor comment: no need for the auditSuccess flag; we can use the exception 
value to decide whether the call succeeded.
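
For illustration, a minimal sketch of this suggestion (the dispatchAddAcl helper is hypothetical shorthand for the switch block above, and the auditAcl signature is assumed from the truncated diff; this is not the committed patch):

{code:java}
@Override
public boolean addAcl(OzoneObj obj, OzoneAcl acl) throws IOException {
  try {
    if (isAclEnabled) {
      checkAcls(obj.getResourceType(), obj.getStoreType(), ACLType.WRITE_ACL,
          obj.getVolumeName(), obj.getBucketName(), obj.getKeyName());
    }
    boolean added = dispatchAddAcl(obj, acl); // hypothetical: the switch above
    // Success path: audit with a null exception.
    auditAcl(obj, Arrays.asList(acl), OMAction.ADD_ACL, null);
    return added;
  } catch (Exception ex) {
    // Failure path: the non-null exception marks the audit entry as FAILURE,
    // so no boolean flag is needed. Java 7+ precise rethrow keeps the
    // declared `throws IOException` valid here.
    auditAcl(obj, Arrays.asList(acl), OMAction.ADD_ACL, ex);
    throw ex;
  }
}
{code}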
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294324)
Time Spent: 2h 20m  (was: 2h 10m)

> Audit xxxAcl methods in OzoneManager
> 
>
> Key: HDDS-1768
> URL: https://issues.apache.org/jira/browse/HDDS-1768
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Audit permission failures from authorizer



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14609) RBF: Security should use common AuthenticationFilter

2019-08-13 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906787#comment-16906787
 ] 

Takanobu Asanuma commented on HDFS-14609:
-

Thanks for the comments and the investigation, Chen Zhang and CR Hota. I'm 
sorry that I couldn't work on this recently due to other tasks.

This is the old revision of HDFS-13891; the tests that fail now were still 
passing at that stage. Comparing that branch with trunk may help locate the 
problem.
[https://github.com/tasanuma/hadoop-private]

> RBF: Security should use common AuthenticationFilter
> 
>
> Key: HDFS-14609
> URL: https://issues.apache.org/jira/browse/HDFS-14609
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: CR Hota
>Assignee: Chen Zhang
>Priority: Major
>
> We worked on router based federation security as part of HDFS-13532. We kept 
> it compatible with the way namenode works. However with HADOOP-16314 and 
> HDFS-16354 in trunk, auth filters seems to have been changed causing tests to 
> fail.
> Changes are needed appropriately in RBF, mainly fixing broken tests.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14423) Percent (%) and plus (+) characters no longer work in WebHDFS

2019-08-13 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906775#comment-16906775
 ] 

Akira Ajisaka commented on HDFS-14423:
--

Would you backport this to branch-2?

> Percent (%) and plus (+) characters no longer work in WebHDFS
> -
>
> Key: HDFS-14423
> URL: https://issues.apache.org/jira/browse/HDFS-14423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0, 3.1.2
> Environment: Ubuntu 16.04, but I believe this is irrelevant.
>Reporter: Jing Wang
>Assignee: Masatake Iwasaki
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14423.001.patch, HDFS-14423.002.patch, 
> HDFS-14423.003.patch, HDFS-14423.004.patch
>
>
> The following commands with percent (%) no longer work starting with version 
> 3.1:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/%
> $ hadoop/bin/hdfs dfs -cat webhdfs://localhost/%
> cat: URLDecoder: Incomplete trailing escape (%) pattern
> {code}
> Also, plus (+) characters get turned into spaces when doing DN operations:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/a+b
> $ hadoop/bin/hdfs dfs -mkdir webhdfs://localhost/c+d
> $ hadoop/bin/hdfs dfs -ls /
> Found 4 items
> -rw-r--r--   1 jing supergroup  0 2019-04-12 11:20 /a b
> drwxr-xr-x   - jing supergroup  0 2019-04-12 11:21 /c+d
> {code}
> I can confirm that these commands work correctly on 2.9 and 3.0. Also, the 
> usual hdfs:// client works as expected.
> I suspect a relation with HDFS-13176 or HDFS-13582, but I'm not sure what the 
> right fix is. Note that Hive uses % to escape special characters in partition 
> values, so banning % might not be a good option. For example, Hive will 
> create paths like {{table_name/partition_key=%2F}} when 
> {{partition_key='/'}}.
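
For reference, the decoding behavior in the report is reproducible with plain java.net.URLDecoder — a standalone illustration, not code from any of the attached patches:

{code:java}
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;

public class WebHdfsDecodeDemo {
  public static void main(String[] args) throws Exception {
    // '+' is application/x-www-form-urlencoded notation for a space, so
    // form-decoding a literal '+' in a path corrupts it:
    System.out.println(URLDecoder.decode("/c+d", StandardCharsets.UTF_8.name()));
    // prints "/c d"

    // A bare '%' is an incomplete escape and raises the error seen above:
    URLDecoder.decode("/%", StandardCharsets.UTF_8.name());
    // java.lang.IllegalArgumentException:
    //   URLDecoder: Incomplete trailing escape (%) pattern
  }
}
{code}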



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14423) Percent (%) and plus (+) characters no longer work in WebHDFS

2019-08-13 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-14423:
-
Fix Version/s: 3.1.3
   3.2.1
   3.3.0

> Percent (%) and plus (+) characters no longer work in WebHDFS
> -
>
> Key: HDFS-14423
> URL: https://issues.apache.org/jira/browse/HDFS-14423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0, 3.1.2
> Environment: Ubuntu 16.04, but I believe this is irrelevant.
>Reporter: Jing Wang
>Assignee: Masatake Iwasaki
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14423.001.patch, HDFS-14423.002.patch, 
> HDFS-14423.003.patch, HDFS-14423.004.patch
>
>
> The following commands with percent (%) no longer work starting with version 
> 3.1:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/%
> $ hadoop/bin/hdfs dfs -cat webhdfs://localhost/%
> cat: URLDecoder: Incomplete trailing escape (%) pattern
> {code}
> Also, plus (+) characters get turned into spaces when doing DN operations:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/a+b
> $ hadoop/bin/hdfs dfs -mkdir webhdfs://localhost/c+d
> $ hadoop/bin/hdfs dfs -ls /
> Found 4 items
> -rw-r--r--   1 jing supergroup  0 2019-04-12 11:20 /a b
> drwxr-xr-x   - jing supergroup  0 2019-04-12 11:21 /c+d
> {code}
> I can confirm that these commands work correctly on 2.9 and 3.0. Also, the 
> usual hdfs:// client works as expected.
> I suspect a relation with HDFS-13176 or HDFS-13582, but I'm not sure what the 
> right fix is. Note that Hive uses % to escape special characters in partition 
> values, so banning % might not be a good option. For example, Hive will 
> create paths like {{table_name/partition_key=%2F}} when 
> {{partition_key='/'}}.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14423) Percent (%) and plus (+) characters no longer work in WebHDFS

2019-08-13 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906773#comment-16906773
 ] 

Akira Ajisaka commented on HDFS-14423:
--

Thank you, [~iwasakims]!

> Percent (%) and plus (+) characters no longer work in WebHDFS
> -
>
> Key: HDFS-14423
> URL: https://issues.apache.org/jira/browse/HDFS-14423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0, 3.1.2
> Environment: Ubuntu 16.04, but I believe this is irrelevant.
>Reporter: Jing Wang
>Assignee: Masatake Iwasaki
>Priority: Major
> Attachments: HDFS-14423.001.patch, HDFS-14423.002.patch, 
> HDFS-14423.003.patch, HDFS-14423.004.patch
>
>
> The following commands with percent (%) no longer work starting with version 
> 3.1:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/%
> $ hadoop/bin/hdfs dfs -cat webhdfs://localhost/%
> cat: URLDecoder: Incomplete trailing escape (%) pattern
> {code}
> Also, plus (+) characters get turned into spaces when doing DN operations:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/a+b
> $ hadoop/bin/hdfs dfs -mkdir webhdfs://localhost/c+d
> $ hadoop/bin/hdfs dfs -ls /
> Found 4 items
> -rw-r--r--   1 jing supergroup  0 2019-04-12 11:20 /a b
> drwxr-xr-x   - jing supergroup  0 2019-04-12 11:21 /c+d
> {code}
> I can confirm that these commands work correctly on 2.9 and 3.0. Also, the 
> usual hdfs:// client works as expected.
> I suspect a relation with HDFS-13176 or HDFS-13582, but I'm not sure what the 
> right fix is. Note that Hive uses % to escape special characters in partition 
> values, so banning % might not be a good option. For example, Hive will 
> create paths like {{table_name/partition_key=%2F}} when 
> {{partition_key='/'}}.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14717) Junit not found in hadoop-dynamometer-infra

2019-08-13 Thread kevin su (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906770#comment-16906770
 ] 

kevin su commented on HDFS-14717:
-

[~xkrogen] [~smeng] Thanks for your help

> Junit not found in hadoop-dynamometer-infra
> ---
>
> Key: HDFS-14717
> URL: https://issues.apache.org/jira/browse/HDFS-14717
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: kevin su
>Assignee: kevin su
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14717.001.patch, HDFS-14717.002.patch, 
> HDFS-14717.003.patch
>
>
> {code:java}
> $ hadoop jar hadoop-dynamometer-infra-3.3.0-SNAPSHOT.jar 
> org.apache.hadoop.tools.dynamometer.Client
> {code}
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: org/junit/Assert
>  at org.apache.hadoop.tools.dynamometer.Client.main(Client.java:256)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
>  at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
> Caused by: java.lang.ClassNotFoundException: org.junit.Assert
>  at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>  ... 7 more{code}
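
A hedged reading of the trace: Client.main (line 256) references org.junit.Assert directly, so JUnit has to be on the runtime classpath when the tool is launched with {{hadoop jar}}. The usual remedies are putting JUnit on the runtime classpath (for example by declaring it as a non-test-scope dependency of the module) or removing the test-only assertion from production code.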



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14491) More Clarity on Namenode UI Around Blocks and Replicas

2019-08-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906766#comment-16906766
 ] 

Hudson commented on HDFS-14491:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17110 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17110/])
HDFS-14491. More Clarity on Namenode UI Around Blocks and Replicas. (weichiu: 
rev c13ec7ab666fc4878174a7cd952ca93941ae7c05)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html


> More Clarity on Namenode UI Around Blocks and Replicas
> --
>
> Key: HDFS-14491
> URL: https://issues.apache.org/jira/browse/HDFS-14491
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Alan Jackoway
>Assignee: Siyao Meng
>Priority: Minor
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14491.001.patch
>
>
> I recently deleted more than 1/3 of the files in my HDFS installation. During 
> the process of the delete, I noticed that the NameNode UI near the top has a 
> line like this:
> {quote}44,031,342 files and directories, 38,988,775 blocks = 83,020,117 total 
> filesystem object(s).
> {quote}
> Then lower down had a line like this:
> {quote}Number of Blocks Pending Deletion 4000
> {quote}
> That made it appear that I was deleting more blocks than exist in the 
> cluster. When that number was below the total number of blocks, I briefly 
> believed I had deleted the entire cluster. In reality, the second number 
> includes replicas, while the first does not.
> The UI should be clarified to indicate where "Blocks" includes replicas and 
> where it doesn't. This may also have an impact on the under-replicated count.
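
For intuition (an illustrative assumption of the default replication factor of 3): deleting files containing N blocks queues roughly 3N replica deletions, so once more than a third of the blocks are deleted, the pending-deletion counter can legitimately exceed the top-line block count — exactly the confusing situation described above.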



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1659) Define the process to add proposal/design docs to the Ozone subproject

2019-08-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906765#comment-16906765
 ] 

Hudson commented on HDDS-1659:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17110 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17110/])
HDDS-1659. Define the process to add proposal/design docs to the Ozone 
(aengineer: rev 50a22b66c0292d37984460991a680d9d3e8c862c)
* (add) hadoop-hdds/docs/content/design/ozone-enhancement-proposals.md


> Define the process to add proposal/design docs to the Ozone subproject
> --
>
> Key: HDDS-1659
> URL: https://issues.apache.org/jira/browse/HDDS-1659
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> We think that it would be more effective to collect all the design docs in 
> one place and make it easier for the community to review them.
> We propose to follow an approach where the proposals are committed to the 
> hadoop-hdds/docs project and the review can be the same as a review of a PR



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1928) Cannot run ozone-recon compose due to syntax error

2019-08-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906767#comment-16906767
 ] 

Hudson commented on HDDS-1928:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17110 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17110/])
HDDS-1928. Cannot run ozone-recon compose due to syntax error (aengineer: rev 
e6d240dc91004c468533b523358849a2611ed757)
* (add) hadoop-ozone/dist/src/main/compose/ozone-recon/test.sh
* (edit) hadoop-ozone/dist/src/main/compose/ozone-recon/docker-compose.yaml


> Cannot run ozone-recon compose due to syntax error
> --
>
> Key: HDDS-1928
> URL: https://issues.apache.org/jira/browse/HDDS-1928
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> {noformat}
> $ cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone-recon
> $ docker-compose up -d --scale datanode=3
> ERROR: yaml.scanner.ScannerError: mapping values are not allowed here
>   in "./docker-compose.yaml", line 20, column 33
> {noformat}
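
For context, a hedged note on the error itself: yaml.scanner.ScannerError "mapping values are not allowed here" generally means the parser found a ": " at a spot where starting a nested mapping is illegal — most often an unquoted scalar value that itself contains a colon. Quoting the value (or reindenting it onto its own mapping line) at the reported position, here line 20 column 33 of docker-compose.yaml, is the usual fix.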



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1928) Cannot run ozone-recon compose due to syntax error

2019-08-13 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1928:
---
   Resolution: Fixed
Fix Version/s: 0.5.0
   0.4.1
   Status: Resolved  (was: Patch Available)

> Cannot run ozone-recon compose due to syntax error
> --
>
> Key: HDDS-1928
> URL: https://issues.apache.org/jira/browse/HDDS-1928
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> {noformat}
> $ cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone-recon
> $ docker-compose up -d --scale datanode=3
> ERROR: yaml.scanner.ScannerError: mapping values are not allowed here
>   in "./docker-compose.yaml", line 20, column 33
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1928) Cannot run ozone-recon compose due to syntax error

2019-08-13 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906761#comment-16906761
 ] 

Anu Engineer commented on HDDS-1928:


I have committed this to trunk and ozone-0.4.1. [~adoroszlai] Thanks for your 
contribution. [~vivekratnavel] Thanks for the review.

 
{noformat}
3 datanodes are up and registered to the scm
==
ozone-recon-basic :: Smoketest ozone cluster startup
==
Check webui static resources | PASS |
--
Start freon testing | FAIL |
255 != 0
--{noformat}
 

There was a failure on my laptop, but this fix is correct and needed. Hence I 
have committed this. FYI, [~nandakumar131]

> Cannot run ozone-recon compose due to syntax error
> --
>
> Key: HDDS-1928
> URL: https://issues.apache.org/jira/browse/HDDS-1928
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> {noformat}
> $ cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone-recon
> $ docker-compose up -d --scale datanode=3
> ERROR: yaml.scanner.ScannerError: mapping values are not allowed here
>   in "./docker-compose.yaml", line 20, column 33
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1928) Cannot run ozone-recon compose due to syntax error

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1928?focusedWorklogId=294306&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294306
 ]

ASF GitHub Bot logged work on HDDS-1928:


Author: ASF GitHub Bot
Created on: 14/Aug/19 00:41
Start Date: 14/Aug/19 00:41
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1249: HDDS-1928. 
Cannot run ozone-recon compose due to syntax error
URL: https://github.com/apache/hadoop/pull/1249
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294306)
Time Spent: 1h 40m  (was: 1.5h)

> Cannot run ozone-recon compose due to syntax error
> --
>
> Key: HDDS-1928
> URL: https://issues.apache.org/jira/browse/HDDS-1928
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> {noformat}
> $ cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone-recon
> $ docker-compose up -d --scale datanode=3
> ERROR: yaml.scanner.ScannerError: mapping values are not allowed here
>   in "./docker-compose.yaml", line 20, column 33
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14491) More Clarity on Namenode UI Around Blocks and Replicas

2019-08-13 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14491:
---
   Resolution: Fixed
Fix Version/s: 3.1.3
   3.2.1
   3.3.0
   Status: Resolved  (was: Patch Available)

Pushed patch to trunk, branch-3.2 and branch-3.1.
Thanks [~smeng]!

> More Clarity on Namenode UI Around Blocks and Replicas
> --
>
> Key: HDFS-14491
> URL: https://issues.apache.org/jira/browse/HDFS-14491
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Alan Jackoway
>Assignee: Siyao Meng
>Priority: Minor
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14491.001.patch
>
>
> I recently deleted more than 1/3 of the files in my HDFS installation. During 
> the process of the delete, I noticed that the NameNode UI near the top has a 
> line like this:
> {quote}44,031,342 files and directories, 38,988,775 blocks = 83,020,117 total 
> filesystem object(s).
> {quote}
> Then lower down had a line like this:
> {quote}Number of Blocks Pending Deletion 4000
> {quote}
> That made it appear that I was deleting more blocks than exist in the 
> cluster. When that number was below the total number of blocks, I briefly 
> believed I had deleted the entire cluster. In reality, the second number 
> includes replicas, while the first does not.
> The UI should be clarified to indicate where "Blocks" includes replicas and 
> where it doesn't. This may also have an impact on the under-replicated count.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14491) More Clarity on Namenode UI Around Blocks and Replicas

2019-08-13 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906752#comment-16906752
 ] 

Wei-Chiu Chuang commented on HDFS-14491:


+1 gotcha. Makes sense to me now.

> More Clarity on Namenode UI Around Blocks and Replicas
> --
>
> Key: HDFS-14491
> URL: https://issues.apache.org/jira/browse/HDFS-14491
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Alan Jackoway
>Assignee: Siyao Meng
>Priority: Minor
> Attachments: HDFS-14491.001.patch
>
>
> I recently deleted more than 1/3 of the files in my HDFS installation. During 
> the process of the delete, I noticed that the NameNode UI near the top has a 
> line like this:
> {quote}44,031,342 files and directories, 38,988,775 blocks = 83,020,117 total 
> filesystem object(s).
> {quote}
> Then lower down had a line like this:
> {quote}Number of Blocks Pending Deletion 4000
> {quote}
> That made it appear that I was deleting more blocks than exist in the 
> cluster. When that number was below the total number of blocks, I briefly 
> believed I had deleted the entire cluster. In reality, the second number 
> includes replicas, while the first does not.
> The UI should be clarified to indicate where "Blocks" includes replicas and 
> where it doesn't. This may also have an impact on the under-replicated count.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1659) Define the process to add proposal/design docs to the Ozone subproject

2019-08-13 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1659:
---
   Resolution: Fixed
Fix Version/s: 0.5.0
   Status: Resolved  (was: Patch Available)

It has been open for long enough for the community to comment. I have merged 
this now.

> Define the process to add proposal/design docs to the Ozone subproject
> --
>
> Key: HDDS-1659
> URL: https://issues.apache.org/jira/browse/HDDS-1659
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> We think that it would be more effective to collect all the design docs in 
> one place and make it easier for the community to review them.
> We propose to follow an approach where the proposals are committed to the 
> hadoop-hdds/docs project and the review can be the same as a review of a PR



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14625) Make DefaultAuditLogger class in FSnamesystem to Abstract

2019-08-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906748#comment-16906748
 ] 

Hudson commented on HDFS-14625:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17109 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17109/])
HDFS-14625. Make DefaultAuditLogger class in FSnamesystem to Abstract. 
(weichiu: rev 633b7c1cfecde6166899449efae6326ee03cd8c4)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystem.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/DefaultAuditLogger.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLogAtDebug.java


> Make DefaultAuditLogger class in FSnamesystem to Abstract 
> --
>
> Key: HDFS-14625
> URL: https://issues.apache.org/jira/browse/HDFS-14625
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14625 (1).patch, HDFS-14625(2).patch, 
> HDFS-14625.003.patch, HDFS-14625.004.patch, HDFS-14625.patch
>
>
> As per +HDFS-13270+ (Audit logger for Router), we can make the 
> DefaultAuditLogger in FSNamesystem abstract and common.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1916) Only contract tests are run in ozonefs module

2019-08-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906749#comment-16906749
 ] 

Hudson commented on HDDS-1916:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17109 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17109/])
HDDS-1916. Only contract tests are run in ozonefs module (aengineer: rev 
9691117099d7914c6297b0e4ea3852341775fb15)
* (edit) hadoop-ozone/ozonefs/pom.xml


> Only contract tests are run in ozonefs module
> -
>
> Key: HDDS-1916
> URL: https://issues.apache.org/jira/browse/HDDS-1916
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.3.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> {{hadoop-ozone-filesystem}} has 6 test classes that are not being run:
> {code}
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestFilteredClassLoader.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSInputStream.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystem.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystemWithMocks.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFsRenameDir.java
> {code}
> {code:title=https://raw.githubusercontent.com/elek/ozone-ci/master/byscane/byscane-nightly-vxsck/integration/output.log}
> [INFO] ---
> [INFO]  T E S T S
> [INFO] ---
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDelete
> [INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.956 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDelete
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractMkdir
> [INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.528 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractMkdir
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractSeek
> [INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 42.245 s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractSeek
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractOpen
> [INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.996 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractOpen
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRename
> [INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.816 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRename
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDistCp
> [INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 59.418 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDistCp
> [INFO] Running 
> org.apache.hadoop.fs.ozone.contract.ITestOzoneContractGetFileStatus
> [INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 35.042 s - in 
> org.apache.hadoop.fs.ozone.contract.ITestOzoneContractGetFileStatus
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractCreate
> [WARNING] Tests run: 11, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 
> 35.144 s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractCreate
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRootDir
> [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.986 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRootDir
> [INFO] 
> [INFO] Results:
> [INFO] 
> [WARNING] Tests run: 92, Failures: 0, Errors: 0, Skipped: 2
> {code}
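
A plausible reading, given that the fix touches only hadoop-ozone/ozonefs/pom.xml: the module's test-plugin configuration apparently matched only the ITestOzoneContract* classes, so the six plain Test* classes listed above never ran; adjusting the include patterns in the pom lets the standard surefire Test* matching pick them up again. This is an inference from the changed file, not a quote from the patch.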



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1659) Define the process to add proposal/design docs to the Ozone subproject

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1659?focusedWorklogId=294300&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294300
 ]

ASF GitHub Bot logged work on HDDS-1659:


Author: ASF GitHub Bot
Created on: 14/Aug/19 00:10
Start Date: 14/Aug/19 00:10
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #950: HDDS-1659. 
Define the process to add proposal/design docs to the Ozone subproject
URL: https://github.com/apache/hadoop/pull/950
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294300)
Time Spent: 3.5h  (was: 3h 20m)

> Define the process to add proposal/design docs to the Ozone subproject
> --
>
> Key: HDDS-1659
> URL: https://issues.apache.org/jira/browse/HDDS-1659
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> We think that it would be more effective to collect all the design docs in 
> one place and make it easier for the community to review them.
> We propose to follow an approach where the proposals are committed to the 
> hadoop-hdds/docs project and the review can be the same as a review of a PR



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14595) HDFS-11848 breaks API compatibility

2019-08-13 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906747#comment-16906747
 ] 

Wei-Chiu Chuang commented on HDFS-14595:


+1

> HDFS-11848 breaks API compatibility
> ---
>
> Key: HDFS-14595
> URL: https://issues.apache.org/jira/browse/HDFS-14595
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Wei-Chiu Chuang
>Assignee: Siyao Meng
>Priority: Blocker
> Attachments: HDFS-14595.001.patch, HDFS-14595.002.patch, 
> HDFS-14595.003.patch, HDFS-14595.004.patch, HDFS-14595.005.patch, 
> HDFS-14595.006.patch, hadoop_ 36e1870eab904d5a6f12ecfb1fdb52ca08d95ac5 to 
> b241194d56f97ee372cbec7062bcf155bc3df662 compatibility report.htm
>
>
> Our internal tool caught an API compatibility issue with HDFS-11848.
> HDFS-11848 adds an additional parameter to 
> DistributedFileSystem.listOpenFiles(), but it doesn't keep the existing API.
> This can cause issues when upgrading from Hadoop 2.9.0/2.8.3/3.0.0 to 
> 3.0.1/3.1.0 and above.
> Suggest:
> (1) Add back the old API (which was added in HDFS-10480), and mark it 
> deprecated.
> (2) Update release doc to enforce running API compatibility check for each 
> releases.
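
A minimal sketch of suggestion (1) inside DistributedFileSystem (signatures assumed from the HDFS-10480/HDFS-11848 history and imports elided; not the committed patch):

{code:java}
/**
 * The pre-HDFS-11848 API, restored for compatibility.
 * @deprecated use {@link #listOpenFiles(EnumSet)} instead.
 */
@Deprecated
public RemoteIterator<OpenFileEntry> listOpenFiles() throws IOException {
  // Delegate to the newer overload with the widest filter so the old
  // behavior (list every open file) is preserved for existing callers.
  return listOpenFiles(EnumSet.of(OpenFilesType.ALL_OPEN_FILES));
}
{code}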



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1916) Only contract tests are run in ozonefs module

2019-08-13 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1916:
---
   Resolution: Fixed
Fix Version/s: 0.5.0
   0.4.1
   Status: Resolved  (was: Patch Available)

Committed to both trunk and ozone-0.4.1. Thanks for the contribution.

> Only contract tests are run in ozonefs module
> -
>
> Key: HDDS-1916
> URL: https://issues.apache.org/jira/browse/HDDS-1916
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.3.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> {{hadoop-ozone-filesystem}} has 6 test classes that are not being run:
> {code}
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestFilteredClassLoader.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSInputStream.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystem.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystemWithMocks.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFsRenameDir.java
> {code}
> {code:title=https://raw.githubusercontent.com/elek/ozone-ci/master/byscane/byscane-nightly-vxsck/integration/output.log}
> [INFO] ---
> [INFO]  T E S T S
> [INFO] ---
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDelete
> [INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.956 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDelete
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractMkdir
> [INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.528 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractMkdir
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractSeek
> [INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 42.245 s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractSeek
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractOpen
> [INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.996 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractOpen
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRename
> [INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.816 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRename
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDistCp
> [INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 59.418 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDistCp
> [INFO] Running 
> org.apache.hadoop.fs.ozone.contract.ITestOzoneContractGetFileStatus
> [INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 35.042 s - in 
> org.apache.hadoop.fs.ozone.contract.ITestOzoneContractGetFileStatus
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractCreate
> [WARNING] Tests run: 11, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 
> 35.144 s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractCreate
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRootDir
> [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.986 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRootDir
> [INFO] 
> [INFO] Results:
> [INFO] 
> [WARNING] Tests run: 92, Failures: 0, Errors: 0, Skipped: 2
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1916) Only contract tests are run in ozonefs module

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1916?focusedWorklogId=294291&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294291
 ]

ASF GitHub Bot logged work on HDDS-1916:


Author: ASF GitHub Bot
Created on: 14/Aug/19 00:00
Start Date: 14/Aug/19 00:00
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1235: HDDS-1916. 
Only contract tests are run in ozonefs module
URL: https://github.com/apache/hadoop/pull/1235
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294291)
Time Spent: 1h 20m  (was: 1h 10m)

> Only contract tests are run in ozonefs module
> -
>
> Key: HDDS-1916
> URL: https://issues.apache.org/jira/browse/HDDS-1916
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.3.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> {{hadoop-ozone-filesystem}} has 6 test classes that are not being run:
> {code}
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestFilteredClassLoader.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSInputStream.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystem.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystemWithMocks.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFsRenameDir.java
> {code}
> {code:title=https://raw.githubusercontent.com/elek/ozone-ci/master/byscane/byscane-nightly-vxsck/integration/output.log}
> [INFO] ---
> [INFO]  T E S T S
> [INFO] ---
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDelete
> [INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.956 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDelete
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractMkdir
> [INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.528 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractMkdir
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractSeek
> [INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 42.245 s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractSeek
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractOpen
> [INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.996 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractOpen
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRename
> [INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.816 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRename
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDistCp
> [INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 59.418 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDistCp
> [INFO] Running 
> org.apache.hadoop.fs.ozone.contract.ITestOzoneContractGetFileStatus
> [INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 35.042 s - in 
> org.apache.hadoop.fs.ozone.contract.ITestOzoneContractGetFileStatus
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractCreate
> [WARNING] Tests run: 11, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 
> 35.144 s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractCreate
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRootDir
> [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.986 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRootDir
> [INFO] 
> [INFO] Results:
> [INFO] 
> [WARNING] Tests run: 92, Failures: 0, Errors: 0, Skipped: 2
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1916) Only contract tests are run in ozonefs module

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1916?focusedWorklogId=294290&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294290
 ]

ASF GitHub Bot logged work on HDDS-1916:


Author: ASF GitHub Bot
Created on: 14/Aug/19 00:00
Start Date: 14/Aug/19 00:00
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1235: HDDS-1916. Only 
contract tests are run in ozonefs module
URL: https://github.com/apache/hadoop/pull/1235#issuecomment-521052637
 
 
   @adoroszlai  Thanks for the contribution. @bharatviswa504  Thanks for the 
review. I have committed this to the trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294290)
Time Spent: 1h 10m  (was: 1h)

> Only contract tests are run in ozonefs module
> -
>
> Key: HDDS-1916
> URL: https://issues.apache.org/jira/browse/HDDS-1916
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.3.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> {{hadoop-ozone-filesystem}} has 6 test classes that are not being run:
> {code}
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestFilteredClassLoader.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSInputStream.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystem.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystemWithMocks.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFsRenameDir.java
> {code}
> {code:title=https://raw.githubusercontent.com/elek/ozone-ci/master/byscane/byscane-nightly-vxsck/integration/output.log}
> [INFO] ---
> [INFO]  T E S T S
> [INFO] ---
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDelete
> [INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.956 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDelete
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractMkdir
> [INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.528 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractMkdir
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractSeek
> [INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 42.245 s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractSeek
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractOpen
> [INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.996 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractOpen
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRename
> [INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.816 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRename
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDistCp
> [INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 59.418 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDistCp
> [INFO] Running 
> org.apache.hadoop.fs.ozone.contract.ITestOzoneContractGetFileStatus
> [INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 35.042 s - in 
> org.apache.hadoop.fs.ozone.contract.ITestOzoneContractGetFileStatus
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractCreate
> [WARNING] Tests run: 11, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 
> 35.144 s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractCreate
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRootDir
> [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.986 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRootDir
> [INFO] 
> [INFO] Results:
> [INFO] 
> [WARNING] Tests run: 92, Failures: 0, Errors: 0, Skipped: 2
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: 

[jira] [Updated] (HDFS-14625) Make DefaultAuditLogger class in FSnamesystem to Abstract

2019-08-13 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14625:
---
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Thanks [~hemanthboyina]! Pushed 004 patch to trunk.
I'm not sure if this should go into lower branches. Please let me know.

> Make DefaultAuditLogger class in FSnamesystem to Abstract 
> --
>
> Key: HDFS-14625
> URL: https://issues.apache.org/jira/browse/HDFS-14625
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14625 (1).patch, HDFS-14625(2).patch, 
> HDFS-14625.003.patch, HDFS-14625.004.patch, HDFS-14625.patch
>
>
> As per +HDFS-13270+ (Audit logger for Router), we can make the 
> DefaultAuditLogger in FSNamesystem abstract and common.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14423) Percent (%) and plus (+) characters no longer work in WebHDFS

2019-08-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906737#comment-16906737
 ] 

Hudson commented on HDFS-14423:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17108 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17108/])
HDFS-14423. Percent (%) and plus (+) characters no longer work in (iwasakims: 
rev da0006fe0473e353ee2d489156248a01aa982dfd)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java


> Percent (%) and plus (+) characters no longer work in WebHDFS
> -
>
> Key: HDFS-14423
> URL: https://issues.apache.org/jira/browse/HDFS-14423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0, 3.1.2
> Environment: Ubuntu 16.04, but I believe this is irrelevant.
>Reporter: Jing Wang
>Assignee: Masatake Iwasaki
>Priority: Major
> Attachments: HDFS-14423.001.patch, HDFS-14423.002.patch, 
> HDFS-14423.003.patch, HDFS-14423.004.patch
>
>
> The following commands with percent (%) no longer work starting with version 
> 3.1:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/%
> $ hadoop/bin/hdfs dfs -cat webhdfs://localhost/%
> cat: URLDecoder: Incomplete trailing escape (%) pattern
> {code}
> Also, plus (+) characters get turned into spaces when doing DN operations:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/a+b
> $ hadoop/bin/hdfs dfs -mkdir webhdfs://localhost/c+d
> $ hadoop/bin/hdfs dfs -ls /
> Found 4 items
> -rw-r--r--   1 jing supergroup  0 2019-04-12 11:20 /a b
> drwxr-xr-x   - jing supergroup  0 2019-04-12 11:21 /c+d
> {code}
> I can confirm that these commands work correctly on 2.9 and 3.0. Also, the 
> usual hdfs:// client works as expected.
> I suspect a relation with HDFS-13176 or HDFS-13582, but I'm not sure what the 
> right fix is. Note that Hive uses % to escape special characters in partition 
> values, so banning % might not be a good option. For example, Hive will 
> create paths like {{table_name/partition_key=%2F}} when 
> {{partition_key='/'}}.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-14665) HttpFS: LISTSTATUS response is missing HDFS-specific fields

2019-08-13 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-14665.

   Resolution: Fixed
Fix Version/s: 3.2.1
   3.3.0

Pushed the PR to trunk and branch-3.2.
[~smeng] the patch doesn't apply cleanly to branch-3.1. If you want, please 
attach a branch-3.1 patch.

Thanks [~smeng]!

> HttpFS: LISTSTATUS response is missing HDFS-specific fields
> ---
>
> Key: HDFS-14665
> URL: https://issues.apache.org/jira/browse/HDFS-14665
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.3.0, 3.2.1
>
>
> WebHDFS:
> {code:java}
> GET /webhdfs/v1/tmp/?op=LISTSTATUS=hdfs HTTP/1.1
> {code}
> {code}
> {
>   "FileStatuses": {
> "FileStatus": [
> ...
>   {
> "accessTime": 0,
> "blockSize": 0,
> "childrenNum": 0,
> "fileId": 16395,
> "group": "hadoop",
> "length": 0,
> "modificationTime": 1563893395614,
> "owner": "mapred",
> "pathSuffix": "logs",
> "permission": "1777",
> "replication": 0,
> "storagePolicy": 0,
> "type": "DIRECTORY"
>   }
> ]
>   }
> }
> {code}
> HttpFS:
> {code:java}
> GET /webhdfs/v1/tmp/?op=LISTSTATUS&user.name=hdfs HTTP/1.1
> {code}
> {code}
> {
>   "FileStatuses": {
> "FileStatus": [
> ...
>   {
> "pathSuffix": "logs",
> "type": "DIRECTORY",
> "length": 0,
> "owner": "mapred",
> "group": "hadoop",
> "permission": "1777",
> "accessTime": 0,
> "modificationTime": 1563893395614,
> "blockSize": 0,
> "replication": 0
>   }
> ]
>   }
> }
> {code}
> You can see the same LISTSTATUS request to HttpFS is missing 3 fields:
> {code}
> "childrenNum" (should only be none 0 for directories)
> "fileId"
> "storagePolicy"
> {code}
> The same applies to LISTSTATUS_BATCH, which might be using the same 
> underlying calls to compose the response.
> Root cause:
> [toJsonInner|https://github.com/apache/hadoop/blob/17e8cf501b384af93726e4f2e6f5e28c6e3a8f65/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java#L120]
>  didn't serialize the HDFS-specific keys from FileStatus.
> Also may file another Jira to align the order of the keys in the responses.
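As a minimal sketch of the shape such a fix can take, assuming the status object reaching {{toJsonInner}} is really an {{HdfsFileStatus}} (whose getters below do exist), while the {{json}} map is assumed from context:

{code:java}
import java.util.Map;
import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;

public final class HdfsFieldSketch {
  // Copy the three missing HDFS-specific fields into the JSON map,
  // but only when the status actually carries them.
  static void addHdfsFields(Map<String, Object> json, Object status) {
    if (status instanceof HdfsFileStatus) {
      HdfsFileStatus hdfs = (HdfsFileStatus) status;
      json.put("childrenNum", hdfs.getChildrenNum());
      json.put("fileId", hdfs.getFileId());
      json.put("storagePolicy", hdfs.getStoragePolicy());
    }
  }
}
{code}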



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1917) TestOzoneRpcClientAbstract is failing

2019-08-13 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1917:
---
   Resolution: Fixed
Fix Version/s: 0.4.1
   0.5.0
   Status: Resolved  (was: Patch Available)

Thanks for your contribution. I have committed this to trunk and ozone-0.4.1.

> TestOzoneRpcClientAbstract is failing
> -
>
> Key: HDDS-1917
> URL: https://issues.apache.org/jira/browse/HDDS-1917
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0, 0.4.1
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> TestOzoneRpcClientAbstract is failing with the below error
> {noformat}
> [ERROR] 
> testNativeAclsForKey(org.apache.hadoop.ozone.client.rpc.TestSecureOzoneRpcClient)
>   Time elapsed: 0.113 s  <<< FAILURE!
> java.lang.AssertionError: READ_ACL should exist in current 
> acls:group:jenkins:a[ACCESS]
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.validateOzoneAccessAcl(TestOzoneRpcClientAbstract.java:2466)
>   at 
> org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.testNativeAclsForKey(TestOzoneRpcClientAbstract.java:2300)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> {noformat}
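One plausible reading of the failure, modeled here with a purely hypothetical {{Acl}} class (Ozone's real {{OzoneAcl}} type is not used below): an entry granting "a" (ALL) must be treated as implying READ, which a naive explicit-READ check misses.

{code:java}
import java.util.Arrays;
import java.util.List;

public class AclCheckDemo {
  // Hypothetical stand-in for OzoneAcl, for illustration only.
  static class Acl {
    final String type, name, rights; // e.g. "group", "jenkins", "a" (= ALL)
    Acl(String type, String name, String rights) {
      this.type = type; this.name = name; this.rights = rights;
    }
  }

  public static void main(String[] args) {
    List<Acl> acls = Arrays.asList(new Acl("group", "jenkins", "a"));
    // Naive check: only an explicit READ ("r") grant passes, so ALL fails.
    boolean naive = acls.stream().anyMatch(a -> a.rights.contains("r"));
    // Corrected check: ALL ("a") implies READ as well.
    boolean fixed = acls.stream()
        .anyMatch(a -> a.rights.contains("r") || a.rights.contains("a"));
    System.out.println(naive + " / " + fixed); // false / true
  }
}
{code}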



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1917) TestOzoneRpcClientAbstract is failing

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1917?focusedWorklogId=294281&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294281
 ]

ASF GitHub Bot logged work on HDDS-1917:


Author: ASF GitHub Bot
Created on: 13/Aug/19 23:38
Start Date: 13/Aug/19 23:38
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1234: HDDS-1917. 
TestOzoneRpcClientAbstract is failing.
URL: https://github.com/apache/hadoop/pull/1234
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294281)
Time Spent: 2h 20m  (was: 2h 10m)

> TestOzoneRpcClientAbstract is failing
> -
>
> Key: HDDS-1917
> URL: https://issues.apache.org/jira/browse/HDDS-1917
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> TestOzoneRpcClientAbstract is failing with the below error
> {noformat}
> [ERROR] 
> testNativeAclsForKey(org.apache.hadoop.ozone.client.rpc.TestSecureOzoneRpcClient)
>   Time elapsed: 0.113 s  <<< FAILURE!
> java.lang.AssertionError: READ_ACL should exist in current 
> acls:group:jenkins:a[ACCESS]
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.validateOzoneAccessAcl(TestOzoneRpcClientAbstract.java:2466)
>   at 
> org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.testNativeAclsForKey(TestOzoneRpcClientAbstract.java:2300)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14665) HttpFS: LISTSTATUS response is missing HDFS-specific fields

2019-08-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906713#comment-16906713
 ] 

Hudson commented on HDFS-14665:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17107 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17107/])
HDFS-14665. HttpFS: LISTSTATUS response is missing HDFS-specific fields 
(weichiu: rev 6ae8bc3a4a07c6b4e7060362b749be8c7afe0560)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java


> HttpFS: LISTSTATUS response is missing HDFS-specific fields
> ---
>
> Key: HDFS-14665
> URL: https://issues.apache.org/jira/browse/HDFS-14665
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> WebHDFS:
> {code:java}
> GET /webhdfs/v1/tmp/?op=LISTSTATUS&user.name=hdfs HTTP/1.1
> {code}
> {code}
> {
>   "FileStatuses": {
> "FileStatus": [
> ...
>   {
> "accessTime": 0,
> "blockSize": 0,
> "childrenNum": 0,
> "fileId": 16395,
> "group": "hadoop",
> "length": 0,
> "modificationTime": 1563893395614,
> "owner": "mapred",
> "pathSuffix": "logs",
> "permission": "1777",
> "replication": 0,
> "storagePolicy": 0,
> "type": "DIRECTORY"
>   }
> ]
>   }
> }
> {code}
> HttpFS:
> {code:java}
> GET /webhdfs/v1/tmp/?op=LISTSTATUS&user.name=hdfs HTTP/1.1
> {code}
> {code}
> {
>   "FileStatuses": {
> "FileStatus": [
> ...
>   {
> "pathSuffix": "logs",
> "type": "DIRECTORY",
> "length": 0,
> "owner": "mapred",
> "group": "hadoop",
> "permission": "1777",
> "accessTime": 0,
> "modificationTime": 1563893395614,
> "blockSize": 0,
> "replication": 0
>   }
> ]
>   }
> }
> {code}
> You can see the same LISTSTATUS request to HttpFS is missing 3 fields:
> {code}
> "childrenNum" (should only be none 0 for directories)
> "fileId"
> "storagePolicy"
> {code}
> The same applies to LISTSTATUS_BATCH, which might be using the same 
> underlying calls to compose the response.
> Root cause:
> [toJsonInner|https://github.com/apache/hadoop/blob/17e8cf501b384af93726e4f2e6f5e28c6e3a8f65/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java#L120]
>  didn't serialize the HDFS-specific keys from FileStatus.
> Also may file another Jira to align the order of the keys in the responses.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1917) TestOzoneRpcClientAbstract is failing

2019-08-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906712#comment-16906712
 ] 

Hudson commented on HDDS-1917:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17107 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17107/])
HDDS-1917. TestOzoneRpcClientAbstract is failing. (aengineer: rev 
3cff73aff47695f6a48a36878191409f050f)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java


> TestOzoneRpcClientAbstract is failing
> -
>
> Key: HDDS-1917
> URL: https://issues.apache.org/jira/browse/HDDS-1917
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> TestOzoneRpcClientAbstract is failing with the below error
> {noformat}
> [ERROR] 
> testNativeAclsForKey(org.apache.hadoop.ozone.client.rpc.TestSecureOzoneRpcClient)
>   Time elapsed: 0.113 s  <<< FAILURE!
> java.lang.AssertionError: READ_ACL should exist in current 
> acls:group:jenkins:a[ACCESS]
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.validateOzoneAccessAcl(TestOzoneRpcClientAbstract.java:2466)
>   at 
> org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.testNativeAclsForKey(TestOzoneRpcClientAbstract.java:2300)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1961) TestStorageContainerManager#testScmProcessDatanodeHeartbeat is flaky

2019-08-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906711#comment-16906711
 ] 

Hudson commented on HDDS-1961:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17107 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17107/])
HDDS-1961. TestStorageContainerManager#testScmProcessDatanodeHeartbeat 
(aengineer: rev cb390dff87a86eae22c432576be90d39f84a6ee8)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManager.java


> TestStorageContainerManager#testScmProcessDatanodeHeartbeat is flaky
> 
>
> Key: HDDS-1961
> URL: https://issues.apache.org/jira/browse/HDDS-1961
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> TestStorageContainerManager#testScmProcessDatanodeHeartbeat is flaky
> {noformat}
> [ERROR] 
> testScmProcessDatanodeHeartbeat(org.apache.hadoop.ozone.TestStorageContainerManager)
>   Time elapsed: 25.057 s  <<< FAILURE!
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.ozone.TestStorageContainerManager.testScmProcessDatanodeHeartbeat(TestStorageContainerManager.java:531)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}
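A common way to de-flake this kind of timing-dependent assertion is to poll until the condition holds instead of asserting a single snapshot. A sketch using Hadoop's own test utility; the counter is a stand-in for whatever the real test reads from SCM:

{code:java}
import java.util.concurrent.TimeoutException;
import java.util.concurrent.atomic.AtomicInteger;
import org.apache.hadoop.test.GenericTestUtils;

public class DeflakeSketch {
  // Stand-in for the heartbeat count the real test observes.
  static final AtomicInteger heartbeatsSeen = new AtomicInteger();

  static void awaitAtLeast(int expected)
      throws TimeoutException, InterruptedException {
    // Re-check every 100 ms; fail only if 10 s pass without success.
    GenericTestUtils.waitFor(() -> heartbeatsSeen.get() >= expected,
        100, 10_000);
  }
}
{code}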



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14423) Percent (%) and plus (+) characters no longer work in WebHDFS

2019-08-13 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906708#comment-16906708
 ] 

Masatake Iwasaki commented on HDFS-14423:
-

I'm committing this.

> Percent (%) and plus (+) characters no longer work in WebHDFS
> -
>
> Key: HDFS-14423
> URL: https://issues.apache.org/jira/browse/HDFS-14423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0, 3.1.2
> Environment: Ubuntu 16.04, but I believe this is irrelevant.
>Reporter: Jing Wang
>Assignee: Masatake Iwasaki
>Priority: Major
> Attachments: HDFS-14423.001.patch, HDFS-14423.002.patch, 
> HDFS-14423.003.patch, HDFS-14423.004.patch
>
>
> The following commands with percent (%) no longer work starting with version 
> 3.1:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/%
> $ hadoop/bin/hdfs dfs -cat webhdfs://localhost/%
> cat: URLDecoder: Incomplete trailing escape (%) pattern
> {code}
> Also, plus (+) characters get turned into spaces when doing DN operations:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/a+b
> $ hadoop/bin/hdfs dfs -mkdir webhdfs://localhost/c+d
> $ hadoop/bin/hdfs dfs -ls /
> Found 4 items
> -rw-r--r--   1 jing supergroup  0 2019-04-12 11:20 /a b
> drwxr-xr-x   - jing supergroup  0 2019-04-12 11:21 /c+d
> {code}
> I can confirm that these commands work correctly on 2.9 and 3.0. Also, the 
> usual hdfs:// client works as expected.
> I suspect a relation with HDFS-13176 or HDFS-13582, but I'm not sure what the 
> right fix is. Note that Hive uses % to escape special characters in partition 
> values, so banning % might not be a good option. For example, Hive will 
> create paths like {{table_name/partition_key=%2F}} when 
> {{partition_key='/'}}.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1961) TestStorageContainerManager#testScmProcessDatanodeHeartbeat is flaky

2019-08-13 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1961:
---
   Resolution: Fixed
Fix Version/s: 0.5.0
   0.4.1
   Status: Resolved  (was: Patch Available)

[~nandakumar131] Committed to trunk and ozone-0.4.1

> TestStorageContainerManager#testScmProcessDatanodeHeartbeat is flaky
> 
>
> Key: HDDS-1961
> URL: https://issues.apache.org/jira/browse/HDDS-1961
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> TestStorageContainerManager#testScmProcessDatanodeHeartbeat is flaky
> {noformat}
> [ERROR] 
> testScmProcessDatanodeHeartbeat(org.apache.hadoop.ozone.TestStorageContainerManager)
>   Time elapsed: 25.057 s  <<< FAILURE!
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.ozone.TestStorageContainerManager.testScmProcessDatanodeHeartbeat(TestStorageContainerManager.java:531)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1961) TestStorageContainerManager#testScmProcessDatanodeHeartbeat is flaky

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1961?focusedWorklogId=294277&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294277
 ]

ASF GitHub Bot logged work on HDDS-1961:


Author: ASF GitHub Bot
Created on: 13/Aug/19 23:06
Start Date: 13/Aug/19 23:06
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1288: HDDS-1961. 
TestStorageContainerManager#testScmProcessDatanodeHeartbeat is flaky.
URL: https://github.com/apache/hadoop/pull/1288#issuecomment-521042143
 
 
   Committed to trunk and 0.4.1. Thanks for fixing this.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294277)
Time Spent: 0.5h  (was: 20m)

> TestStorageContainerManager#testScmProcessDatanodeHeartbeat is flaky
> 
>
> Key: HDDS-1961
> URL: https://issues.apache.org/jira/browse/HDDS-1961
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> TestStorageContainerManager#testScmProcessDatanodeHeartbeat is flaky
> {noformat}
> [ERROR] 
> testScmProcessDatanodeHeartbeat(org.apache.hadoop.ozone.TestStorageContainerManager)
>   Time elapsed: 25.057 s  <<< FAILURE!
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.ozone.TestStorageContainerManager.testScmProcessDatanodeHeartbeat(TestStorageContainerManager.java:531)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1961) TestStorageContainerManager#testScmProcessDatanodeHeartbeat is flaky

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1961?focusedWorklogId=294278&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294278
 ]

ASF GitHub Bot logged work on HDDS-1961:


Author: ASF GitHub Bot
Created on: 13/Aug/19 23:06
Start Date: 13/Aug/19 23:06
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1288: HDDS-1961. 
TestStorageContainerManager#testScmProcessDatanodeHeartbeat is flaky.
URL: https://github.com/apache/hadoop/pull/1288
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294278)
Time Spent: 40m  (was: 0.5h)

> TestStorageContainerManager#testScmProcessDatanodeHeartbeat is flaky
> 
>
> Key: HDDS-1961
> URL: https://issues.apache.org/jira/browse/HDDS-1961
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> TestStorageContainerManager#testScmProcessDatanodeHeartbeat is flaky
> {noformat}
> [ERROR] 
> testScmProcessDatanodeHeartbeat(org.apache.hadoop.ozone.TestStorageContainerManager)
>   Time elapsed: 25.057 s  <<< FAILURE!
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.ozone.TestStorageContainerManager.testScmProcessDatanodeHeartbeat(TestStorageContainerManager.java:531)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1891) Ozone fs shell command should work with default port when port number is not specified

2019-08-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906693#comment-16906693
 ] 

Hudson commented on HDDS-1891:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17106 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17106/])
HDDS-1891. Ozone fs shell command should work with default port when 
(aengineer: rev 68c818415aedf672e35b8ecd9dfd0cb33c43a91e)
* (edit) 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java
* (edit) 
hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystemWithMocks.java


> Ozone fs shell command should work with default port when port number is not 
> specified
> --
>
> Key: HDDS-1891
> URL: https://issues.apache.org/jira/browse/HDDS-1891
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> {code:bash|title=Without port number -> Error}
> $ ozone fs -ls o3fs://bucket.volume.localhost/
> -ls: Ozone file system url should be either one of the two forms: 
> o3fs://bucket.volume/key  OR o3fs://bucket.volume.om-host.example.com:5678/key
> ...
> {code}
> {code:bash|title=With port number -> Success}
> $ ozone fs -ls o3fs://bucket.volume.localhost:9862/
> Found 1 items
> -rw-rw-rw-   1 hadoop hadoop   1485 2019-08-01 21:14 
> o3fs://bucket.volume.localhost:9862/README.txt
> {code}
> We expect the first command to attempt port 9862 by default.
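The fallback itself hinges on {{java.net.URI}} reporting -1 when the authority carries no port. A minimal sketch; 9862 is taken from the example above rather than from Ozone's config constants:

{code:java}
import java.net.URI;

public class DefaultPortSketch {
  static final int ASSUMED_OM_DEFAULT_PORT = 9862; // from the example above

  static int resolveOmPort(URI o3fsUri) {
    // getPort() is -1 for "o3fs://bucket.volume.localhost/",
    // so fall back instead of rejecting the URL outright.
    int port = o3fsUri.getPort();
    return port == -1 ? ASSUMED_OM_DEFAULT_PORT : port;
  }

  public static void main(String[] args) {
    System.out.println(resolveOmPort(
        URI.create("o3fs://bucket.volume.localhost/")));      // 9862
    System.out.println(resolveOmPort(
        URI.create("o3fs://bucket.volume.localhost:5678/"))); // 5678
  }
}
{code}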



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1891) Ozone fs shell command should work with default port when port number is not specified

2019-08-13 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906689#comment-16906689
 ] 

Anu Engineer commented on HDDS-1891:


[~nandakumar131] Just saw that you have tagged this for 0.4.1, so I am committing it into that branch now too.


> Ozone fs shell command should work with default port when port number is not 
> specified
> --
>
> Key: HDDS-1891
> URL: https://issues.apache.org/jira/browse/HDDS-1891
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> {code:bash|title=Without port number -> Error}
> $ ozone fs -ls o3fs://bucket.volume.localhost/
> -ls: Ozone file system url should be either one of the two forms: 
> o3fs://bucket.volume/key  OR o3fs://bucket.volume.om-host.example.com:5678/key
> ...
> {code}
> {code:bash|title=With port number -> Success}
> $ ozone fs -ls o3fs://bucket.volume.localhost:9862/
> Found 1 items
> -rw-rw-rw-   1 hadoop hadoop   1485 2019-08-01 21:14 
> o3fs://bucket.volume.localhost:9862/README.txt
> {code}
> We expect the first command to attempt port 9862 by default.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1891) Ozone fs shell command should work with default port when port number is not specified

2019-08-13 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1891:
---
Fix Version/s: 0.4.1

> Ozone fs shell command should work with default port when port number is not 
> specified
> --
>
> Key: HDDS-1891
> URL: https://issues.apache.org/jira/browse/HDDS-1891
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> {code:bash|title=Without port number -> Error}
> $ ozone fs -ls o3fs://bucket.volume.localhost/
> -ls: Ozone file system url should be either one of the two forms: 
> o3fs://bucket.volume/key  OR o3fs://bucket.volume.om-host.example.com:5678/key
> ...
> {code}
> {code:bash|title=With port number -> Success}
> $ ozone fs -ls o3fs://bucket.volume.localhost:9862/
> Found 1 items
> -rw-rw-rw-   1 hadoop hadoop   1485 2019-08-01 21:14 
> o3fs://bucket.volume.localhost:9862/README.txt
> {code}
> We expect the first command to attempt port 9862 by default.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1891) Ozone fs shell command should work with default port when port number is not specified

2019-08-13 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-1891.

   Resolution: Fixed
Fix Version/s: 0.5.0

I have committed this patch to the trunk branch. [~nandakumar131] This is 
tagged for 0.4.1; please let me know if you would like this to be committed 
into ozone-0.4.1.

> Ozone fs shell command should work with default port when port number is not 
> specified
> --
>
> Key: HDDS-1891
> URL: https://issues.apache.org/jira/browse/HDDS-1891
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> {code:bash|title=Without port number -> Error}
> $ ozone fs -ls o3fs://bucket.volume.localhost/
> -ls: Ozone file system url should be either one of the two forms: 
> o3fs://bucket.volume/key  OR o3fs://bucket.volume.om-host.example.com:5678/key
> ...
> {code}
> {code:bash|title=With port number -> Success}
> $ ozone fs -ls o3fs://bucket.volume.localhost:9862/
> Found 1 items
> -rw-rw-rw-   1 hadoop hadoop   1485 2019-08-01 21:14 
> o3fs://bucket.volume.localhost:9862/README.txt
> {code}
> We expect the first command to attempt port 9862 by default.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1891) Ozone fs shell command should work with default port when port number is not specified

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1891?focusedWorklogId=294259&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294259
 ]

ASF GitHub Bot logged work on HDDS-1891:


Author: ASF GitHub Bot
Created on: 13/Aug/19 22:44
Start Date: 13/Aug/19 22:44
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1218: HDDS-1891. Ozone 
fs shell command should work with default port when port number is not specified
URL: https://github.com/apache/hadoop/pull/1218#issuecomment-521036865
 
 
   @smengcl  Thanks for the contribution. @bharatviswa504  Thanks for the 
review, I have committed this to the trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294259)
Time Spent: 3.5h  (was: 3h 20m)

> Ozone fs shell command should work with default port when port number is not 
> specified
> --
>
> Key: HDDS-1891
> URL: https://issues.apache.org/jira/browse/HDDS-1891
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> {code:bash|title=Without port number -> Error}
> $ ozone fs -ls o3fs://bucket.volume.localhost/
> -ls: Ozone file system url should be either one of the two forms: 
> o3fs://bucket.volume/key  OR o3fs://bucket.volume.om-host.example.com:5678/key
> ...
> {code}
> {code:bash|title=With port number -> Success}
> $ ozone fs -ls o3fs://bucket.volume.localhost:9862/
> Found 1 items
> -rw-rw-rw-   1 hadoop hadoop   1485 2019-08-01 21:14 
> o3fs://bucket.volume.localhost:9862/README.txt
> {code}
> We expect the first command to attempt port 9862 by default.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1891) Ozone fs shell command should work with default port when port number is not specified

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1891?focusedWorklogId=294260&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294260
 ]

ASF GitHub Bot logged work on HDDS-1891:


Author: ASF GitHub Bot
Created on: 13/Aug/19 22:44
Start Date: 13/Aug/19 22:44
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1218: HDDS-1891. 
Ozone fs shell command should work with default port when port number is not 
specified
URL: https://github.com/apache/hadoop/pull/1218
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294260)
Time Spent: 3h 40m  (was: 3.5h)

> Ozone fs shell command should work with default port when port number is not 
> specified
> --
>
> Key: HDDS-1891
> URL: https://issues.apache.org/jira/browse/HDDS-1891
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> {code:bash|title=Without port number -> Error}
> $ ozone fs -ls o3fs://bucket.volume.localhost/
> -ls: Ozone file system url should be either one of the two forms: 
> o3fs://bucket.volume/key  OR o3fs://bucket.volume.om-host.example.com:5678/key
> ...
> {code}
> {code:bash|title=With port number -> Success}
> $ ozone fs -ls o3fs://bucket.volume.localhost:9862/
> Found 1 items
> -rw-rw-rw-   1 hadoop hadoop   1485 2019-08-01 21:14 
> o3fs://bucket.volume.localhost:9862/README.txt
> {code}
> We expect the first command to attempt port 9862 by default.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1956) Aged IO Thread exits on first read

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1956?focusedWorklogId=294254&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294254
 ]

ASF GitHub Bot logged work on HDDS-1956:


Author: ASF GitHub Bot
Created on: 13/Aug/19 22:39
Start Date: 13/Aug/19 22:39
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on issue #1287: HDDS-1956. Aged 
IO Thread exits on first read
URL: https://github.com/apache/hadoop/pull/1287#issuecomment-521035462
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294254)
Time Spent: 2h 20m  (was: 2h 10m)

> Aged IO Thread exits on first read
> --
>
> Key: HDDS-1956
> URL: https://issues.apache.org/jira/browse/HDDS-1956
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Aged IO Thread in {{TestMiniChaosOzoneCluster}} exits on first read due to 
> exception:
> {code}
> 2019-08-12 22:55:37,799 [pool-245-thread-1] INFO  
> ozone.MiniOzoneLoadGenerator 
> (MiniOzoneLoadGenerator.java:startAgedFilesLoad(194)) - AGED LOADGEN: Started 
> Aged IO Thread:2139.
> ...
> 2019-08-12 22:55:47,147 [pool-245-thread-1] ERROR 
> ozone.MiniOzoneLoadGenerator 
> (MiniOzoneLoadGenerator.java:startAgedFilesLoad(213)) - AGED LOADGEN: 0 
> Exiting due to exception
> java.lang.ArrayIndexOutOfBoundsException: 1
>   at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.readData(MiniOzoneLoadGenerator.java:151)
>   at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.startAgedFilesLoad(MiniOzoneLoadGenerator.java:209)
>   at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.lambda$startIO$1(MiniOzoneLoadGenerator.java:235)
> 2019-08-12 22:55:47,149 [pool-245-thread-1] INFO  
> ozone.MiniOzoneLoadGenerator 
> (MiniOzoneLoadGenerator.java:startAgedFilesLoad(219)) - Terminating IO 
> thread:2139.
> {code}
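The trace does not show what was indexed, so purely as a generic illustration of how an {{ArrayIndexOutOfBoundsException: 1}} arises and is guarded against (this is not the actual MiniOzoneLoadGenerator code):

{code:java}
public class IndexOneDemo {
  public static void main(String[] args) {
    String[] parts = "name-without-second-field".split("_");
    // parts.length == 1 here, so an unguarded parts[1] would throw
    // java.lang.ArrayIndexOutOfBoundsException: 1; check length first.
    String second = parts.length > 1 ? parts[1] : "<missing>";
    System.out.println(second);
  }
}
{code}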



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1928) Cannot run ozone-recon compose due to syntax error

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1928?focusedWorklogId=294253&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294253
 ]

ASF GitHub Bot logged work on HDDS-1928:


Author: ASF GitHub Bot
Created on: 13/Aug/19 22:37
Start Date: 13/Aug/19 22:37
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on issue #1249: HDDS-1928. Cannot 
run ozone-recon compose due to syntax error
URL: https://github.com/apache/hadoop/pull/1249#issuecomment-521035043
 
 
   LGTM +1
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294253)
Time Spent: 1.5h  (was: 1h 20m)

> Cannot run ozone-recon compose due to syntax error
> --
>
> Key: HDDS-1928
> URL: https://issues.apache.org/jira/browse/HDDS-1928
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> {noformat}
> $ cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone-recon
> $ docker-compose up -d --scale datanode=3
> ERROR: yaml.scanner.ScannerError: mapping values are not allowed here
>   in "./docker-compose.yaml", line 20, column 33
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1488) Scm cli command to start/stop replication manager

2019-08-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906677#comment-16906677
 ] 

Hudson commented on HDDS-1488:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17105 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17105/])
HDDS-1488. Scm cli command to start/stop replication manager. (aengineer: rev 
69b74e90167041f561bfcccf5a4e46ea208c467e)
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/StorageContainerLocationProtocol.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/SCMAction.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocolPB/StorageContainerLocationProtocolClientSideTranslatorPB.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
* (add) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/ReplicationManagerStopSubcommand.java
* (edit) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/SCMCLI.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/ContainerOperationClient.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/protocolPB/StorageContainerLocationProtocolServerSideTranslatorPB.java
* (add) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/ReplicationManagerCommands.java
* (add) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/ReplicationManagerStatusSubcommand.java
* (edit) 
hadoop-hdds/common/src/main/proto/StorageContainerLocationProtocol.proto
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestReplicationManager.java
* (add) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/ReplicationManagerStartSubcommand.java


> Scm cli command to start/stop replication manager
> -
>
> Key: HDDS-1488
> URL: https://issues.apache.org/jira/browse/HDDS-1488
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> It would be nice to have an scmcli command to start/stop the ReplicationManager 
> thread running in SCM
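The added {{ReplicationManager*Subcommand}} classes in the commit above are picocli subcommands; a hedged sketch of that shape (the printed action is a stand-in, the real subcommand goes through {{ScmClient}} over RPC, and the picocli 4.x {{execute}} entry point is an assumption):

{code:java}
import picocli.CommandLine;
import picocli.CommandLine.Command;

@Command(name = "start", description = "Start ReplicationManager")
public class ReplicationManagerStartSketch implements Runnable {
  @Override
  public void run() {
    // Stand-in for the real RPC that asks SCM to start the thread.
    System.out.println("Starting ReplicationManager...");
  }

  public static void main(String[] args) {
    // picocli 4.x entry point; older versions use CommandLine.run instead.
    new CommandLine(new ReplicationManagerStartSketch()).execute(args);
  }
}
{code}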



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1488) Scm cli command to start/stop replication manager

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1488?focusedWorklogId=294246&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294246
 ]

ASF GitHub Bot logged work on HDDS-1488:


Author: ASF GitHub Bot
Created on: 13/Aug/19 22:26
Start Date: 13/Aug/19 22:26
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1221: HDDS-1488. 
Scm cli command to start/stop replication manager.
URL: https://github.com/apache/hadoop/pull/1221
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294246)
Time Spent: 3h 20m  (was: 3h 10m)

> Scm cli command to start/stop replication manager
> -
>
> Key: HDDS-1488
> URL: https://issues.apache.org/jira/browse/HDDS-1488
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> It would be nice to have an scmcli command to start/stop the ReplicationManager 
> thread running in SCM



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1768) Audit xxxAcl methods in OzoneManager

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1768?focusedWorklogId=294224&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294224
 ]

ASF GitHub Bot logged work on HDDS-1768:


Author: ASF GitHub Bot
Created on: 13/Aug/19 22:13
Start Date: 13/Aug/19 22:13
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on issue #1204: HDDS-1768. 
Audit xxxAcl methods in OzoneManager
URL: https://github.com/apache/hadoop/pull/1204#issuecomment-521029139
 
 
   @bharatviswa504 Thanks for reviewing. Updated PR to address review comments.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294224)
Time Spent: 2h 10m  (was: 2h)

> Audit xxxAcl methods in OzoneManager
> 
>
> Key: HDDS-1768
> URL: https://issues.apache.org/jira/browse/HDDS-1768
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Audit permission failures from authorizer



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1488) Scm cli command to start/stop replication manager

2019-08-13 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1488:
---
   Resolution: Fixed
Fix Version/s: 0.5.0
   0.4.1
   Status: Resolved  (was: Patch Available)

[~nandakumar131] Thanks for the contribution. [~bharatviswa] Thanks for the 
review.

> Scm cli command to start/stop replication manager
> -
>
> Key: HDDS-1488
> URL: https://issues.apache.org/jira/browse/HDDS-1488
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> It would be nice to have an scmcli command to start/stop the ReplicationManager 
> thread running in SCM



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1886) Use ArrayList#clear to address audit failure scenario

2019-08-13 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906650#comment-16906650
 ] 

Dinesh Chitlangia commented on HDDS-1886:
-

[~adoroszlai] thanks for reviewing, [~anu] thanks for commit.

> Use ArrayList#clear to address audit failure scenario
> -
>
> Key: HDDS-1886
> URL: https://issues.apache.org/jira/browse/HDDS-1886
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: test
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> TestOzoneAuditLogger makes use of ArrayList#remove to clear the log file in 
> between test runs.
> When writing tests in the future for more failure scenarios, the tests will fail 
> if the log entry has multi-line stack trace in audit logs.
> This jira aims to use ArrayList#clear to make the test future proof.
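The difference is easy to see in isolation: {{remove(0)}} drops only the first line, so a multi-line stack trace leaves stale entries behind for the next verification, while {{clear()}} empties everything:

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ClearVsRemoveDemo {
  public static void main(String[] args) {
    List<String> lines = new ArrayList<>(Arrays.asList(
        "ERROR ... ret=FAILURE",               // first line of audit entry
        "java.lang.Exception: boom",           // stack trace, line 1
        "\tat Example.main(Example.java:1)")); // stack trace, line 2

    lines.remove(0);
    System.out.println(lines.size()); // 2 -> stale lines survive

    lines.clear();
    System.out.println(lines.size()); // 0 -> truly emptied
  }
}
{code}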



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1886) Use ArrayList#clear to address audit failure scenario

2019-08-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906649#comment-16906649
 ] 

Hudson commented on HDDS-1886:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17104 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17104/])
HDDS-1886. Use ArrayList#clear to address audit failure scenario (aengineer: 
rev 689a80d3ce310c3b617537550a529b9a1dc80f4b)
* (edit) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/ozone/audit/TestOzoneAuditLogger.java


> Use ArrayList#clear to address audit failure scenario
> -
>
> Key: HDDS-1886
> URL: https://issues.apache.org/jira/browse/HDDS-1886
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: test
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> TestOzoneAuditLogger makes use of ArrayList#remove to clear the log file in 
> between test runs.
> When writing tests in the future for more failure scenarios, the tests will fail 
> if the log entry has multi-line stack trace in audit logs.
> This jira aims to use ArrayList#clear to make the test future proof.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1886) Use ArrayList#clear to address audit failure scenario

2019-08-13 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1886:
---
   Resolution: Fixed
Fix Version/s: 0.5.0
   Status: Resolved  (was: Patch Available)

> Use ArrayList#clear to address audit failure scenario
> -
>
> Key: HDDS-1886
> URL: https://issues.apache.org/jira/browse/HDDS-1886
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: test
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> TestOzoneAuditLogger makes use of ArrayList#remove to clear the log file in 
> between test runs.
> When writing tests in the future for more failure scenarios, the tests will fail 
> if the log entry has multi-line stack trace in audit logs.
> This jira aims to use ArrayList#clear to make the test future proof.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1886) Use ArrayList#clear to address audit failure scenario

2019-08-13 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906644#comment-16906644
 ] 

Anu Engineer commented on HDDS-1886:


I have committed this patch to the trunk.  Thanks for the contribution.

> Use ArrayList#clear to address audit failure scenario
> -
>
> Key: HDDS-1886
> URL: https://issues.apache.org/jira/browse/HDDS-1886
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: test
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> TestOzoneAuditLogger makes use of ArrayList#remove to clear the log file in 
> between test runs.
> When writing tests in the future for more failure scenarios, the tests will fail 
> if the log entry has multi-line stack trace in audit logs.
> This jira aims to use ArrayList#clear to make the test future proof.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1886) Use ArrayList#clear to address audit failure scenario

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1886?focusedWorklogId=294216&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294216
 ]

ASF GitHub Bot logged work on HDDS-1886:


Author: ASF GitHub Bot
Created on: 13/Aug/19 21:47
Start Date: 13/Aug/19 21:47
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1205: HDDS-1886. 
Use ArrayList#clear to address audit failure scenario
URL: https://github.com/apache/hadoop/pull/1205
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294216)
Time Spent: 2h 10m  (was: 2h)

> Use ArrayList#clear to address audit failure scenario
> -
>
> Key: HDDS-1886
> URL: https://issues.apache.org/jira/browse/HDDS-1886
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: test
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> TestOzoneAuditLogger makes use of ArrayList#remove to clear the log file in 
> between test runs.
> When writing tests in the future for more failure scenarios, the tests will fail 
> if the log entry has multi-line stack trace in audit logs.
> This jira aims to use ArrayList#clear to make the test future proof.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1886) Use ArrayList#clear to address audit failure scenario

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1886?focusedWorklogId=294214&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294214
 ]

ASF GitHub Bot logged work on HDDS-1886:


Author: ASF GitHub Bot
Created on: 13/Aug/19 21:46
Start Date: 13/Aug/19 21:46
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1205: HDDS-1886. 
Use ArrayList#clear to address audit failure scenario
URL: https://github.com/apache/hadoop/pull/1205#discussion_r313627412
 
 

 ##
 File path: 
hadoop-hdds/common/src/test/java/org/apache/hadoop/ozone/audit/TestOzoneAuditLogger.java
 ##
 @@ -153,7 +153,7 @@ private void verifyLog(String expected) throws IOException 
{
 assertTrue(lines.size() != 0);
 assertTrue(expected.equalsIgnoreCase(lines.get(0)));
 //empty the file
-lines.remove(0);
+lines.clear();
 
 Review comment:
   @adoroszlai  Thanks for the review. I will commit this now.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294214)
Time Spent: 2h  (was: 1h 50m)

> Use ArrayList#clear to address audit failure scenario
> -
>
> Key: HDDS-1886
> URL: https://issues.apache.org/jira/browse/HDDS-1886
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: test
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> TestOzoneAuditLogger makes use of ArrayList#remove to clear the log file in 
> between test runs.
> When writing tests in the future for more failure scenarios, the tests will fail 
> if the log entry has multi-line stack trace in audit logs.
> This jira aims to use ArrayList#clear to make the test future proof.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1722) Use the bindings in ReconSchemaGenerationModule to create Recon SQL tables on startup

2019-08-13 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1722:

Description: 
Currently, table creation is done for each schema definition one by one. 

Set up sqlite DB and create Recon SQL tables.
cc [~vivekratnavel], [~swagle]

  was:
Set up sqlite DB and create Recon SQL tables.

cc [~vivekratnavel], [~swagle]


> Use the bindings in ReconSchemaGenerationModule to create Recon SQL tables on 
> startup
> -
>
> Key: HDDS-1722
> URL: https://issues.apache.org/jira/browse/HDDS-1722
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Shweta
>Priority: Major
>
> Currently, table creation is done for each schema definition one by one. 
> Set up sqlite DB and create Recon SQL tables.
> cc [~vivekratnavel], [~swagle]
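For reference, a minimal sketch of what per-definition table creation through 
jOOQ on an embedded sqlite DB looks like; the JDBC URL, table, and column 
names below are hypothetical placeholders, not the actual Recon schema:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import org.jooq.DSLContext;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;
import org.jooq.impl.SQLDataType;

public class ReconSchemaSketch {
  public static void main(String[] args) throws Exception {
    // Open (or create) the embedded sqlite database file.
    try (Connection conn =
        DriverManager.getConnection("jdbc:sqlite:recon.db")) {
      DSLContext ctx = DSL.using(conn, SQLDialect.SQLITE);

      // One CREATE TABLE per schema definition; on startup the module
      // bindings would run a statement like this for every registered
      // definition instead of invoking each one by hand.
      ctx.createTableIfNotExists("cluster_growth_daily")
          .column("timestamp", SQLDataType.BIGINT)
          .column("datanode_count", SQLDataType.INTEGER)
          .execute();
    }
  }
}
{code}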



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1722) Use the bindings in ReconSchemaGenerationModule to create Recon SQL tables on startup

2019-08-13 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1722:

Description: 
Set up sqlite DB and create Recon SQL tables.

cc [~vivekratnavel], [~swagle]

  was:
Set up sqlite DB.
Invoke jooq to create tables.

cc [~vivekratnavel], [~swagle]


> Use the bindings in ReconSchemaGenerationModule to create Recon SQL tables on 
> startup
> -
>
> Key: HDDS-1722
> URL: https://issues.apache.org/jira/browse/HDDS-1722
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Shweta
>Priority: Major
>
> Set up sqlite DB and create Recon SQL tables.
> cc [~vivekratnavel], [~swagle]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1722) Use the bindings in ReconSchemaGenerationModule to create Recon SQL tables on startup

2019-08-13 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1722:

Summary: Use the bindings in ReconSchemaGenerationModule to create Recon 
SQL tables on startup  (was: Add --init option in Ozone Recon to setup sqlite 
DB for creating its aggregate tables.)

> Use the bindings in ReconSchemaGenerationModule to create Recon SQL tables on 
> startup
> -
>
> Key: HDDS-1722
> URL: https://issues.apache.org/jira/browse/HDDS-1722
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Shweta
>Priority: Major
>
> Set up sqlite DB.
> Invoke jooq to create tables.
> cc [~vivekratnavel], [~swagle]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



  1   2   3   >