[jira] [Work logged] (HDDS-2022) Add additional freon tests

2019-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2022?focusedWorklogId=312508&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312508
 ]

ASF GitHub Bot logged work on HDDS-2022:


Author: ASF GitHub Bot
Created on: 14/Sep/19 05:55
Start Date: 14/Sep/19 05:55
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1341: HDDS-2022. Add 
additional freon tests
URL: https://github.com/apache/hadoop/pull/1341#issuecomment-531452566
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 153 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 35 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-ozone in trunk failed. |
   | -0 | checkstyle | 37 | The patch fails to run checkstyle in hadoop-ozone |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 990 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 100 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 187 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 28 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 35 | hadoop-ozone in the patch failed. |
   | -1 | compile | 25 | hadoop-ozone in the patch failed. |
   | -1 | javac | 25 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 38 | The patch fails to run checkstyle in hadoop-ozone |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 821 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 91 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 28 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 198 | hadoop-hdds in the patch failed. |
   | -1 | unit | 30 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 3684 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.container.keyvalue.TestKeyValueContainer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1341 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux e83069eba372 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9a931b8 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/10/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/10/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/10/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1341/out/maven-branch-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/10/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/10/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/10/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/10/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/10/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/10/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1341/out/maven-patch-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/10/artifact/out/patch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 

[jira] [Updated] (HDDS-2110) Arbitrary file can be downloaded with the help of ProfilerServlet

2019-09-13 Thread Nanda kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-2110:
--
Target Version/s: 0.4.1

> Arbitrary file can be downloaded with the help of ProfilerServlet
> -
>
> Key: HDDS-2110
> URL: https://issues.apache.org/jira/browse/HDDS-2110
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Native
>Reporter: Aayush
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Line 324 in the file 
> [ProfileServlet.java|https://github.com/apache/hadoop/blob/217bdbd940a96986df3b96899b43caae2b5a9ed2/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/ProfileServlet.java]
>  is prone to an arbitrary file download:
> {code:java}
> protected void doGetDownload(String fileName, final HttpServletRequest req,
>    final HttpServletResponse resp) throws IOException {
> File requestedFile = 
> ProfileServlet.OUTPUT_DIR.resolve(fileName).toAbsolutePath().toFile();{code}
> The String fileName is used directly as the requested file.
>  
> It is called at line 180 with the HTTP request parameter passed in directly:
> {code:java}
> if (req.getParameter("file") != null) {  
> doGetDownload(req.getParameter("file"), req, resp);  
> return;
> }
> {code}
>  
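A minimal sketch of one way to restrict the download to the output directory (the mitigation discussed on the pull request); the resolveSafely helper below is an illustrative assumption, not the actual patch:

{code:java}
import java.io.File;
import java.io.IOException;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SafeDownloadSketch {

  // Sketch only: resolve the user-supplied name against the output directory
  // and reject any result that escapes it (e.g. "../../etc/passwd").
  static File resolveSafely(Path outputDir, String fileName)
      throws IOException {
    Path base = outputDir.toAbsolutePath().normalize();
    Path requested = base.resolve(fileName).normalize();
    if (!requested.startsWith(base)) {
      throw new IOException("Invalid file name: " + fileName);
    }
    return requested.toFile();
  }

  public static void main(String[] args) throws IOException {
    Path out = Paths.get("/tmp/prof-output");
    System.out.println(resolveSafely(out, "async-prof-1.svg")); // accepted
    System.out.println(resolveSafely(out, "../../etc/passwd")); // throws
  }
}
{code}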



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2111) XSS fragments can be injected to the S3g landing page

2019-09-13 Thread Nanda kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-2111:
--
Target Version/s: 0.4.1

> XSS fragments can be injected to the S3g landing page  
> ---
>
> Key: HDDS-2111
> URL: https://issues.apache.org/jira/browse/HDDS-2111
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Aayush
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> VULNERABILITY DETAILS
> There is a way to bypass the anti-XSS filter for DOM XSS by exploiting 
> "window.location.href".
> Considering a typical URL:
> scheme://domain:port/path?query_string#fragment_id
> Browsers correctly encode both the "path" and the "query_string", but not 
> the "fragment_id". 
> So if the "fragment_id" is used, the vector is also not logged on the web 
> server.
> VERSION
> Chrome Version: 10.0.648.134 (Official Build 77917) beta
> REPRODUCTION CASE
> This is an index.html page:
> {code:java}
> aws s3api --endpoint 
> <script>document.write(window.location.href.replace("static/", ""))</script> 
> create-bucket --bucket=wordcount
> {code}
> The attack vector is:
> index.html?#<script>alert('XSS');</script>
> * PoC:
> For your convenience, a minimalist PoC is located at:
> http://security.onofri.org/xss_location.html?#<script>alert('XSS');</script>
> * References
> - DOM Based Cross-Site Scripting or XSS of the Third Kind - 
> http://www.webappsec.org/projects/articles/071105.shtml
> reference:- 
> https://bugs.chromium.org/p/chromium/issues/detail?id=76796



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14847) Blocks are over-replicated while EC decommissioning

2019-09-13 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929688#comment-16929688
 ] 

Ayush Saxena commented on HDFS-14847:
-

Thanx [~ferhui] for the report.
[~marvelrock] [~zhaoyim] you guys worked on similar issues. Do you want to 
take a look at this one?

> Blocks are over-replicated while EC decommissioning
> ---
>
> Key: HDFS-14847
> URL: https://issues.apache.org/jira/browse/HDFS-14847
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0, 3.0.3, 3.1.2, 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Critical
> Attachments: HDFS-14847.001.patch, HDFS-14847.002.patch
>
>
> Found that some blocks are over-replicated during EC decommissioning. 
> Messages in the log are as follows:
> {quote}
> INFO BlockStateChange: Block: blk_-9223372035714984112_363779142, Expected 
> Replicas: 9, live replicas: 8, corrupt replicas: 0, decommissioned replicas: 
> 0, decommissioning replicas: 3, maintenance replicas: 0, live entering 
> maintenance replicas: 0, excess replicas: 0, Is Open File: false, Datanodes 
> having this block: 10.254.41.34:50010 10.254.54.53:50010 10.254.28.53:50010 
> 10.254.56.55:50010 10.254.32.21:50010 10.254.33.19:50010 10.254.63.17:50010 
> 10.254.31.19:50010 10.254.35.29:50010 10.254.51.57:50010 10.254.40.58:50010 
> 10.254.69.31:50010 10.254.47.18:50010 10.254.51.18:50010 10.254.43.57:50010 
> 10.254.50.47:50010 10.254.42.37:50010 10.254.57.29:50010 10.254.67.40:50010 
> 10.254.44.16:50010 10.254.59.38:50010 10.254.53.56:50010 10.254.45.11:50010 
> 10.254.39.22:50010 10.254.30.16:50010 10.254.35.53:50010 10.254.22.30:50010 
> 10.254.26.34:50010 10.254.17.58:50010 10.254.65.53:50010 10.254.60.39:50010 
> 10.254.61.20:50010 10.254.64.23:50010 10.254.21.13:50010 10.254.37.35:50010 
> 10.254.68.30:50010 10.254.62.37:50010 10.254.25.58:50010 10.254.52.54:50010 
> 10.254.58.31:50010 10.254.49.11:50010 10.254.55.52:50010 10.254.19.19:50010 
> 10.254.36.40:50010 10.254.18.30:50010 10.254.20.39:50010 10.254.66.52:50010 
> 10.254.56.32:50010 10.254.24.55:50010 10.254.34.11:50010 10.254.29.58:50010 
> 10.254.27.40:50010 10.254.46.33:50010 10.254.23.19:50010 10.254.74.12:50010 
> 10.254.74.13:50010 10.254.41.35:50010 10.254.67.58:50010 10.254.54.11:50010 
> 10.254.68.14:50010 10.254.27.14:50010 10.254.51.29:50010 10.254.45.21:50010 
> 10.254.50.56:50010 10.254.47.31:50010 10.254.40.14:50010 10.254.65.21:50010 
> 10.254.62.22:50010 10.254.57.16:50010 10.254.36.52:50010 10.254.30.13:50010 
> 10.254.35.12:50010 10.254.69.34:50010 10.254.34.58:50010 10.254.17.50:50010 
> 10.254.63.12:50010 10.254.28.21:50010 10.254.58.30:50010 10.254.24.57:50010 
> 10.254.33.50:50010 10.254.44.52:50010 10.254.32.48:50010 10.254.43.39:50010 
> 10.254.20.37:50010 10.254.56.59:50010 10.254.22.33:50010 10.254.60.34:50010 
> 10.254.49.19:50010 10.254.52.21:50010 10.254.23.59:50010 10.254.21.16:50010 
> 10.254.42.55:50010 10.254.29.33:50010 10.254.53.17:50010 10.254.19.14:50010 
> 10.254.64.51:50010 10.254.46.20:50010 10.254.66.22:50010 10.254.18.38:50010 
> 10.254.39.17:50010 10.254.37.57:50010 10.254.31.54:50010 10.254.55.33:50010 
> 10.254.25.17:50010 10.254.61.33:50010 10.254.26.40:50010 10.254.59.23:50010 
> 10.254.59.35:50010 10.254.66.48:50010 10.254.41.15:50010 10.254.54.31:50010 
> 10.254.61.50:50010 10.254.62.31:50010 10.254.17.56:50010 10.254.29.18:50010 
> 10.254.45.16:50010 10.254.63.48:50010 10.254.22.34:50010 10.254.37.51:50010 
> 10.254.65.49:50010 10.254.58.21:50010 10.254.42.12:50010 10.254.55.17:50010 
> 10.254.27.13:50010 10.254.57.17:50010 10.254.67.18:50010 10.254.31.31:50010 
> 10.254.28.12:50010 10.254.36.12:50010 10.254.21.59:50010 10.254.30.30:50010 
> 10.254.26.50:50010 10.254.40.40:50010 10.254.32.17:50010 10.254.47.55:50010 
> 10.254.60.55:50010 10.254.49.33:50010 10.254.68.47:50010 10.254.39.21:50010 
> 10.254.56.14:50010 10.254.33.54:50010 10.254.69.57:50010 10.254.43.50:50010 
> 10.254.50.13:50010 10.254.25.49:50010 10.254.18.20:50010 10.254.52.23:50010 
> 10.254.19.11:50010 10.254.20.21:50010 10.254.74.16:50010 10.254.64.55:50010 
> 10.254.24.48:50010 10.254.46.29:50010 10.254.51.12:50010 10.254.23.56:50010 
> 10.254.44.59:50010 10.254.53.58:50010 10.254.34.38:50010 10.254.35.37:50010 
> 10.254.35.16:50010 10.254.36.23:50010 10.254.41.47:50010 10.254.54.12:50010 
> 10.254.20.59:50010 , Current Datanode: 10.254.56.55:50010, Is current 
> datanode decommissioning: true, Is current datanode entering maintenance: 
> false
> {quote}
> Decommissions hang for a long time.
> Digging into the code, we find that there is a problem in 
> ErasureCodingWork.java.
> For example, there are 2 nodes (dn0, dn1) in decommission and an EC block 
> group on the 2 nodes. After creating an ErasureCodingWork to reconstruct, 
> it will 

[jira] [Commented] (HDFS-14799) Do Not Call Map containsKey In Conjunction with get

2019-09-13 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929685#comment-16929685
 ] 

Ayush Saxena commented on HDFS-14799:
-

Committed to trunk. Thanx [~hemanthboyina] and [~belugabehr].

> Do Not Call Map containsKey In Conjunction with get
> ---
>
> Key: HDFS-14799
> URL: https://issues.apache.org/jira/browse/HDFS-14799
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: hemanthboyina
>Priority: Minor
>  Labels: newbie, noob
> Attachments: HDFS-14799.001.patch
>
>
> {code:java|title=InvalidateBlocks.java}
>   private final Map<DatanodeInfo, LightWeightHashSet<Block>>
>   nodeToBlocks = new HashMap<>();
>   private final Map<DatanodeInfo, LightWeightHashSet<Block>>
>   nodeToECBlocks = new HashMap<>();
> ...
>   private LightWeightHashSet<Block> getBlocksSet(final DatanodeInfo dn) {
> if (nodeToBlocks.containsKey(dn)) {
>   return nodeToBlocks.get(dn);
> }
> return null;
>   }
>   private LightWeightHashSet<Block> getECBlocksSet(final DatanodeInfo dn) {
> if (nodeToECBlocks.containsKey(dn)) {
>   return nodeToECBlocks.get(dn);
> }
> return null;
>   }
> {code}
> There is no need to check for {{containsKey}} here since a call to {{get}} 
> will already return 'null' if the key is not there.  This just adds overhead 
> of having to dive into the Map twice to get the value.
> {code}
>   private LightWeightHashSet<Block> getECBlocksSet(final DatanodeInfo dn) {
> return nodeToECBlocks.get(dn);
>   }
> {code}
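As an aside, a self-contained sketch of the single-lookup alternatives mentioned above; the example map and names are illustrative only, not part of the patch:

{code:java}
import java.util.HashMap;
import java.util.Map;

public class SingleLookupExample {
  public static void main(String[] args) {
    Map<String, Integer> counts = new HashMap<>();

    // get() alone already returns null when the key is absent,
    // so a preceding containsKey() check is redundant.
    Integer maybe = counts.get("a");              // null

    // getOrDefault() provides a non-null fallback in one lookup.
    int n = counts.getOrDefault("a", 0);          // 0

    // computeIfAbsent() inserts and returns the value in one lookup.
    int m = counts.computeIfAbsent("a", k -> 0);  // 0, and "a" is now present

    System.out.println(maybe + " " + n + " " + m);
  }
}
{code}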



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14799) Do Not Call Map containsKey In Conjunction with get

2019-09-13 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929686#comment-16929686
 ] 

Hudson commented on HDFS-14799:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17299 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17299/])
HDFS-14799. Do Not Call Map containsKey In Conjunction with get. (ayushsaxena: 
rev e04b8a46c3088d13bf010f2959062e1440332bcc)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/InvalidateBlocks.java


> Do Not Call Map containsKey In Conjunction with get
> ---
>
> Key: HDFS-14799
> URL: https://issues.apache.org/jira/browse/HDFS-14799
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: hemanthboyina
>Priority: Minor
>  Labels: newbie, noob
> Fix For: 3.3.0
>
> Attachments: HDFS-14799.001.patch
>
>
> {code:java|title=InvalidateBlocks.java}
>   private final Map<DatanodeInfo, LightWeightHashSet<Block>>
>   nodeToBlocks = new HashMap<>();
>   private final Map<DatanodeInfo, LightWeightHashSet<Block>>
>   nodeToECBlocks = new HashMap<>();
> ...
>   private LightWeightHashSet<Block> getBlocksSet(final DatanodeInfo dn) {
> if (nodeToBlocks.containsKey(dn)) {
>   return nodeToBlocks.get(dn);
> }
> return null;
>   }
>   private LightWeightHashSet<Block> getECBlocksSet(final DatanodeInfo dn) {
> if (nodeToECBlocks.containsKey(dn)) {
>   return nodeToECBlocks.get(dn);
> }
> return null;
>   }
> {code}
> There is no need to check for {{containsKey}} here since a call to {{get}} 
> will already return 'null' if the key is not there.  This just adds overhead 
> of having to dive into the Map twice to get the value.
> {code}
>   private LightWeightHashSet<Block> getECBlocksSet(final DatanodeInfo dn) {
> return nodeToECBlocks.get(dn);
>   }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14799) Do Not Call Map containsKey In Conjunction with get

2019-09-13 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14799:

Fix Version/s: 3.3.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Do Not Call Map containsKey In Conjunction with get
> ---
>
> Key: HDFS-14799
> URL: https://issues.apache.org/jira/browse/HDFS-14799
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: hemanthboyina
>Priority: Minor
>  Labels: newbie, noob
> Fix For: 3.3.0
>
> Attachments: HDFS-14799.001.patch
>
>
> {code:java|title=InvalidateBlocks.java}
>   private final Map<DatanodeInfo, LightWeightHashSet<Block>>
>   nodeToBlocks = new HashMap<>();
>   private final Map<DatanodeInfo, LightWeightHashSet<Block>>
>   nodeToECBlocks = new HashMap<>();
> ...
>   private LightWeightHashSet<Block> getBlocksSet(final DatanodeInfo dn) {
> if (nodeToBlocks.containsKey(dn)) {
>   return nodeToBlocks.get(dn);
> }
> return null;
>   }
>   private LightWeightHashSet<Block> getECBlocksSet(final DatanodeInfo dn) {
> if (nodeToECBlocks.containsKey(dn)) {
>   return nodeToECBlocks.get(dn);
> }
> return null;
>   }
> {code}
> There is no need to check for {{containsKey}} here since a call to {{get}} 
> will already return 'null' if the key is not there.  This just adds overhead 
> of having to dive into the Map twice to get the value.
> {code}
>   private LightWeightHashSet<Block> getECBlocksSet(final DatanodeInfo dn) {
> return nodeToECBlocks.get(dn);
>   }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2129) Using dist profile fails with pom.ozone.xml as parent pom

2019-09-13 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDDS-2129:
--

Assignee: Elek, Marton

> Using dist profile fails with pom.ozone.xml as parent pom
> -
>
> Key: HDDS-2129
> URL: https://issues.apache.org/jira/browse/HDDS-2129
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Elek, Marton
>Priority: Major
>
> The build fails with the {{dist}} profile. Details in a comment below.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2119) Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle validation

2019-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2119?focusedWorklogId=312505&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312505
 ]

ASF GitHub Bot logged work on HDDS-2119:


Author: ASF GitHub Bot
Created on: 14/Sep/19 05:20
Start Date: 14/Sep/19 05:20
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #1435: HDDS-2119. Use 
checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle 
validation.
URL: https://github.com/apache/hadoop/pull/1435#issuecomment-531450551
 
 
   Thanks @nandakumar131 for the patch. Unfortunately it seems to fail for me 
when I test it with checkstyle.sh.
   
   Until now checkstyle.sh did a `mvn checkstyle:check` without compilation 
(which made it very fast). But now you need to have the build-tools compiled 
and installed to do a full checkstyle check.
   
   I don't know what the good solution is here:
   
1. We can use file references instead of artifact references to define the 
checkstyle rule files
2. Or we can add the required build-tools install step to checkstyle.sh
   
   I liked that it was very fast until now, therefore I would prefer the first 
option, but there could be other options...
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 312505)
Time Spent: 1h 50m  (was: 1h 40m)

> Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle 
> validation
> 
>
> Key: HDDS-2119
> URL: https://issues.apache.org/jira/browse/HDDS-2119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> After HDDS-2106, hdds/ozone no longer relies on the hadoop parent pom, so we 
> have to use a separate checkstyle.xml and suppressions.xml in the hdds/ozone 
> projects for checkstyle validation.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2110) Arbitrary file can be downloaded with the help of ProfilerServlet

2019-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2110?focusedWorklogId=312503&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312503
 ]

ASF GitHub Bot logged work on HDDS-2110:


Author: ASF GitHub Bot
Created on: 14/Sep/19 05:09
Start Date: 14/Sep/19 05:09
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1448: HDDS-2110. 
Arbitrary file can be downloaded with the help of ProfilerServlet
URL: https://github.com/apache/hadoop/pull/1448#issuecomment-531449913
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 72 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 31 | hadoop-ozone in trunk failed. |
   | -1 | compile | 19 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 64 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 902 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 14 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 15 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 160 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 23 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 30 | hadoop-ozone in the patch failed. |
   | -1 | compile | 22 | hadoop-ozone in the patch failed. |
   | -1 | javac | 22 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 51 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 1 | The patch has no whitespace issues. |
   | +1 | shadedclient | 721 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 13 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 14 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 164 | hadoop-hdds generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0) |
   | -1 | findbugs | 22 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 152 | hadoop-hdds in the patch failed. |
   | -1 | unit | 24 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 2927 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-hdds |
   |  |  Absolute path traversal in 
org.apache.hadoop.hdds.server.ProfileServlet.doGet(HttpServletRequest, 
HttpServletResponse)  At ProfileServlet.java:HttpServletResponse)  At 
ProfileServlet.java:[line 181] |
   | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.container.keyvalue.TestKeyValueContainer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1448/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1448 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux c3afa8e01794 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6a9f7ca |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1448/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1448/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1448/1/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1448/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1448/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1448/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1448/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1448/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javadoc | 

[jira] [Work logged] (HDDS-1982) Extend SCMNodeManager to support decommission and maintenance states

2019-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1982?focusedWorklogId=312504&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312504
 ]

ASF GitHub Bot logged work on HDDS-1982:


Author: ASF GitHub Bot
Created on: 14/Sep/19 05:09
Start Date: 14/Sep/19 05:09
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1344: HDDS-1982 
Extend SCMNodeManager to support decommission and maintenance states
URL: https://github.com/apache/hadoop/pull/1344#discussion_r324412721
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/states/NodeStateMap.java
 ##
 @@ -309,4 +381,61 @@ private void checkIfNodeExist(UUID uuid) throws 
NodeNotFoundException {
   throw new NodeNotFoundException("Node UUID: " + uuid);
 }
   }
+
+  /**
+   * Create a list of datanodeInfo for all nodes matching the passed states.
+   * Passing null for one of the states acts like a wildcard for that state.
+   *
+   * @param opState
+   * @param health
+   * @return List of DatanodeInfo objects matching the passed state
+   */
+  private List<DatanodeInfo> filterNodes(
+  NodeOperationalState opState, NodeState health) {
+if (opState != null && health != null) {
 
 Review comment:
   please be aware that stream.filter-style patterns have a huge overhead 
over a normal for loop. If this code is going to be in any sort of critical 
path, it is better for the code to stay a normal for loop.
   
   Please see some fixes made by Todd Lipcon in HDFS because of this issue.
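   For context, a minimal self-contained sketch of the two shapes being 
compared; the Node class and method names are illustrative assumptions, not 
the NodeStateMap code:

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

public class FilterShapes {

  static class Node {
    private final boolean healthy;
    Node(boolean healthy) { this.healthy = healthy; }
    boolean isHealthy() { return healthy; }
  }

  // Stream pipeline: concise, but sets up stream and lambda machinery
  // on every call.
  static List<Node> filterWithStream(List<Node> nodes) {
    return nodes.stream()
        .filter(Node::isHealthy)
        .collect(Collectors.toList());
  }

  // Plain for loop: same result without the intermediate stream objects,
  // which is the shape recommended above for hot paths.
  static List<Node> filterWithLoop(List<Node> nodes) {
    List<Node> result = new ArrayList<>(nodes.size());
    for (Node n : nodes) {
      if (n.isHealthy()) {
        result.add(n);
      }
    }
    return result;
  }
}
{code}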
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 312504)
Time Spent: 6h  (was: 5h 50m)

> Extend SCMNodeManager to support decommission and maintenance states
> 
>
> Key: HDDS-1982
> URL: https://issues.apache.org/jira/browse/HDDS-1982
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h
>  Remaining Estimate: 0h
>
> Currently, within SCM a node can have the following states:
> HEALTHY
> STALE
> DEAD
> DECOMMISSIONING
> DECOMMISSIONED
> The last 2 are not currently used.
> In order to support decommissioning and maintenance mode, we need to extend 
> the set of states a node can have to include decommission and maintenance 
> states.
> It is also important to note that a node that is decommissioning or entering 
> maintenance can also be HEALTHY, STALE, or go DEAD.
> Therefore, in this Jira I propose we model a node's state with two different 
> sets of values. The first is effectively the liveness of the node, with the 
> following states. This is largely what is in place now:
> HEALTHY
> STALE
> DEAD
> The second is the node's operational state:
> IN_SERVICE
> DECOMMISSIONING
> DECOMMISSIONED
> ENTERING_MAINTENANCE
> IN_MAINTENANCE
> That means the overall number of states for a node is the cross-product of 
> the two lists above; however, it probably makes sense to keep the two states 
> separate internally, as sketched below.
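A minimal sketch of the proposed two-axis model; the enum values follow the lists above, while the NodeStatus holder class is an illustrative assumption, not the committed code:

{code:java}
public class NodeStatusSketch {

  /** Liveness of the node (largely what is in place now). */
  enum NodeState { HEALTHY, STALE, DEAD }

  /** Operational state of the node. */
  enum NodeOperationalState {
    IN_SERVICE, DECOMMISSIONING, DECOMMISSIONED,
    ENTERING_MAINTENANCE, IN_MAINTENANCE
  }

  /**
   * Keeping the two axes separate: a node carries one value from each enum,
   * covering the 3 x 5 cross-product without enumerating 15 combined states.
   */
  static final class NodeStatus {
    private final NodeState health;
    private final NodeOperationalState opState;

    NodeStatus(NodeState health, NodeOperationalState opState) {
      this.health = health;
      this.opState = opState;
    }

    NodeState getHealth() {
      return health;
    }

    NodeOperationalState getOperationalState() {
      return opState;
    }
  }
}
{code}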



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14303) check block directory logic not correct when there is only meta file, print no meaning warn log

2019-09-13 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929683#comment-16929683
 ] 

Ayush Saxena commented on HDFS-14303:
-

Thanx everyone. Have taken the addendum to branch-2.9.

> check block directory logic not correct when there is only meta file, print 
> no meaning warn log
> ---
>
> Key: HDFS-14303
> URL: https://issues.apache.org/jira/browse/HDFS-14303
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Affects Versions: 2.7.3, 3.2.0, 2.9.2, 2.8.5
> Environment: env free
>Reporter: qiang Liu
>Assignee: qiang Liu
>Priority: Minor
>  Labels: easy-fix
> Fix For: 2.10.0, 3.0.4, 3.3.0, 2.8.6, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HDFS-14303-addendnum-branch-2.01.patch, 
> HDFS-14303-addendum-01.patch, HDFS-14303-addendum-02.patch, 
> HDFS-14303-addendum-branch-3.2-04.patch, 
> HDFS-14303-addendum-branch-3.2.04.patch, HDFS-14303-branch-2.005.patch, 
> HDFS-14303-branch-2.009.patch, HDFS-14303-branch-2.010.patch, 
> HDFS-14303-branch-2.015.patch, HDFS-14303-branch-2.017.patch, 
> HDFS-14303-branch-2.7.001.patch, HDFS-14303-branch-2.7.004.patch, 
> HDFS-14303-branch-2.7.006.patch, HDFS-14303-branch-2.9.011.patch, 
> HDFS-14303-branch-2.9.012.patch, HDFS-14303-branch-2.9.013.patch, 
> HDFS-14303-branch-2.addendum.02.patch, 
> HDFS-14303-branch-3.2-addendum-04.patch, 
> HDFS-14303-branch-3.2.addendum.03.patch, HDFS-14303-trunk.014.patch, 
> HDFS-14303-trunk.015.patch, HDFS-14303-trunk.016.patch, 
> HDFS-14303-trunk.016.path, HDFS-14303.branch-3.2.017.patch
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> The check block directory logic is not correct when there is only a meta 
> file; it prints a meaningless warn log, e.g.:
>  WARN DirectoryScanner:? - Block: 1101939874 has to be upgraded to block 
> ID-based layout. Actual block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68,
>  expected block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68/subdir68



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14303) check block directory logic not correct when there is only meta file, print no meaning warn log

2019-09-13 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14303:

Target Version/s: 2.9.2, 3.2.0  (was: 3.2.0, 2.9.2)
  Resolution: Fixed
  Status: Resolved  (was: Patch Available)

> check block directory logic not correct when there is only meta file, print 
> no meaning warn log
> ---
>
> Key: HDFS-14303
> URL: https://issues.apache.org/jira/browse/HDFS-14303
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Affects Versions: 2.7.3, 3.2.0, 2.9.2, 2.8.5
> Environment: env free
>Reporter: qiang Liu
>Assignee: qiang Liu
>Priority: Minor
>  Labels: easy-fix
> Fix For: 2.10.0, 3.0.4, 3.3.0, 2.8.6, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HDFS-14303-addendnum-branch-2.01.patch, 
> HDFS-14303-addendum-01.patch, HDFS-14303-addendum-02.patch, 
> HDFS-14303-addendum-branch-3.2-04.patch, 
> HDFS-14303-addendum-branch-3.2.04.patch, HDFS-14303-branch-2.005.patch, 
> HDFS-14303-branch-2.009.patch, HDFS-14303-branch-2.010.patch, 
> HDFS-14303-branch-2.015.patch, HDFS-14303-branch-2.017.patch, 
> HDFS-14303-branch-2.7.001.patch, HDFS-14303-branch-2.7.004.patch, 
> HDFS-14303-branch-2.7.006.patch, HDFS-14303-branch-2.9.011.patch, 
> HDFS-14303-branch-2.9.012.patch, HDFS-14303-branch-2.9.013.patch, 
> HDFS-14303-branch-2.addendum.02.patch, 
> HDFS-14303-branch-3.2-addendum-04.patch, 
> HDFS-14303-branch-3.2.addendum.03.patch, HDFS-14303-trunk.014.patch, 
> HDFS-14303-trunk.015.patch, HDFS-14303-trunk.016.patch, 
> HDFS-14303-trunk.016.path, HDFS-14303.branch-3.2.017.patch
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> The check block directory logic is not correct when there is only a meta 
> file; it prints a meaningless warn log, e.g.:
>  WARN DirectoryScanner:? - Block: 1101939874 has to be upgraded to block 
> ID-based layout. Actual block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68,
>  expected block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68/subdir68



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2078) Get/Renew DelegationToken NPE after HDDS-1909

2019-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2078?focusedWorklogId=312500&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312500
 ]

ASF GitHub Bot logged work on HDDS-2078:


Author: ASF GitHub Bot
Created on: 14/Sep/19 04:53
Start Date: 14/Sep/19 04:53
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1444: HDDS-2078. 
Get/Renew DelegationToken NPE after HDDS-1909. Contributed…
URL: https://github.com/apache/hadoop/pull/1444#issuecomment-531449080
 
 
   @xiaoyuyao I think all these failures are due to the trunk issues. If you 
can retest, it might be a good idea.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 312500)
Time Spent: 0.5h  (was: 20m)

> Get/Renew DelegationToken NPE after HDDS-1909
> -
>
> Key: HDDS-2078
> URL: https://issues.apache.org/jira/browse/HDDS-2078
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> [https://github.com/elek/ozone-ci/blob/master/pr/pr-hdds-1909-plfbr/integration/hadoop-ozone/integration-test/org.apache.hadoop.ozone.TestSecureOzoneCluster.txt]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2078) Get/Renew DelegationToken NPE after HDDS-1909

2019-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2078?focusedWorklogId=312501&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312501
 ]

ASF GitHub Bot logged work on HDDS-2078:


Author: ASF GitHub Bot
Created on: 14/Sep/19 04:53
Start Date: 14/Sep/19 04:53
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1444: HDDS-2078. 
Get/Renew DelegationToken NPE after HDDS-1909. Contributed…
URL: https://github.com/apache/hadoop/pull/1444#issuecomment-531449098
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 312501)
Time Spent: 40m  (was: 0.5h)

> Get/Renew DelegationToken NPE after HDDS-1909
> -
>
> Key: HDDS-2078
> URL: https://issues.apache.org/jira/browse/HDDS-2078
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> [https://github.com/elek/ozone-ci/blob/master/pr/pr-hdds-1909-plfbr/integration/hadoop-ozone/integration-test/org.apache.hadoop.ozone.TestSecureOzoneCluster.txt]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2110) Arbitrary file can be downloaded with the help of ProfilerServlet

2019-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2110?focusedWorklogId=312499&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312499
 ]

ASF GitHub Bot logged work on HDDS-2110:


Author: ASF GitHub Bot
Created on: 14/Sep/19 04:51
Start Date: 14/Sep/19 04:51
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1448: HDDS-2110. 
Arbitrary file can be downloaded with the help of ProfilerServlet
URL: https://github.com/apache/hadoop/pull/1448#issuecomment-531448967
 
 
   +1, pending Jenkins. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 312499)
Time Spent: 20m  (was: 10m)

> Arbitrary file can be downloaded with the help of ProfilerServlet
> -
>
> Key: HDDS-2110
> URL: https://issues.apache.org/jira/browse/HDDS-2110
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Native
>Reporter: Aayush
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Line 324 in the file 
> [ProfileServlet.java|https://github.com/apache/hadoop/blob/217bdbd940a96986df3b96899b43caae2b5a9ed2/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/ProfileServlet.java]
>  is prone to an arbitrary file download:
> {code:java}
> protected void doGetDownload(String fileName, final HttpServletRequest req,
>    final HttpServletResponse resp) throws IOException {
> File requestedFile = 
> ProfileServlet.OUTPUT_DIR.resolve(fileName).toAbsolutePath().toFile();{code}
> The String fileName is used directly as the requested file.
>  
> It is called at line 180 with the HTTP request parameter passed in directly:
> {code:java}
> if (req.getParameter("file") != null) {  
> doGetDownload(req.getParameter("file"), req, resp);  
> return;
> }
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14844) Make buffer of BlockReaderRemote#newBlockReader#BufferedOutputStream configurable

2019-09-13 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929679#comment-16929679
 ] 

Hadoop QA commented on HDFS-14844:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
51s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 51s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}175m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.TestMultipleNNPortQOP |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.TestRollingUpgrade |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:39e82acc485 |
| JIRA Issue | HDFS-14844 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12980316/HDFS-14844.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  

[jira] [Work logged] (HDDS-2022) Add additional freon tests

2019-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2022?focusedWorklogId=312498&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312498
 ]

ASF GitHub Bot logged work on HDDS-2022:


Author: ASF GitHub Bot
Created on: 14/Sep/19 04:40
Start Date: 14/Sep/19 04:40
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1341: HDDS-2022. Add 
additional freon tests
URL: https://github.com/apache/hadoop/pull/1341#discussion_r324412003
 
 

 ##
 File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/SameKeyReader.java
 ##
 @@ -0,0 +1,109 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.ozone.freon;
+
+import java.io.InputStream;
+import java.security.MessageDigest;
+import java.util.concurrent.Callable;
+
+import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.client.OzoneBucket;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneClientFactory;
+
+import com.codahale.metrics.Timer;
+import org.apache.commons.io.IOUtils;
+import picocli.CommandLine.Command;
+import picocli.CommandLine.Option;
+
+/**
+ * Data generator tool to test OM performance.
+ */
+@Command(name = "ocokr",
+aliases = "ozone-client-one-key-reader",
+description = "Read the same key from multiple threads.",
+versionProvider = HddsVersionProvider.class,
+mixinStandardHelpOptions = true,
+showDefaultValues = true)
+public class SameKeyReader extends BaseFreonGenerator
+implements Callable<Void> {
+
+  @Option(names = {"-v", "--volume"},
+  description = "Name of the bucket which contains the test data. Will be"
+  + " created if missing.",
+  defaultValue = "vol1")
+  private String volumeName;
+
+  @Option(names = {"-b", "--bucket"},
+  description = "Name of the bucket which contains the test data. Will be"
+  + " created if missing.",
+  defaultValue = "bucket1")
+  private String bucketName;
+
+  @Option(names = {"-k", "--key"},
+  description = "Name of the key read from multiple threads",
+  defaultValue = "bucket1")
 
 Review comment:
   Oh, shame on me. I didn't notice that it's a **key** and not a bucket. Let 
me make it required, without a default value.
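   For reference, a minimal sketch of what that follow-up could look like in 
picocli; this is an assumption about the change, not the committed code:

{code:java}
import java.util.concurrent.Callable;

import picocli.CommandLine.Command;
import picocli.CommandLine.Option;

@Command(name = "ocokr-sketch")
public class RequiredKeySketch implements Callable<Void> {

  // required = true makes picocli fail fast with a usage error when
  // -k/--key is not supplied, instead of silently using a default.
  @Option(names = {"-k", "--key"},
      required = true,
      description = "Name of the key read from multiple threads")
  private String keyName;

  @Override
  public Void call() {
    System.out.println("reading key: " + keyName);
    return null;
  }
}
{code}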
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 312498)
Time Spent: 3h 40m  (was: 3.5h)

> Add additional freon tests
> --
>
> Key: HDDS-2022
> URL: https://issues.apache.org/jira/browse/HDDS-2022
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Freon is a generic load generator tool for ozone (ozone freon) which supports 
> multiple generation patterns.
> As of now, only the random-key-generator is implemented, which uses the ozone 
> RPC client.
> It would be great to add additional tests:
>  * Test key generation via s3 interface
>  * Test key generation via the hadoop fs interface
>  * Test key reads (validation)
>  * Test OM with direct RPC calls



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14762) "Path(Path/String parent, String child)" will fail when "child" contains ":"

2019-09-13 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929678#comment-16929678
 ] 

Ayush Saxena commented on HDFS-14762:
-

What if the path just has a colon at the end, say "a:"?

> "Path(Path/String parent, String child)" will fail when "child" contains ":"
> 
>
> Key: HDFS-14762
> URL: https://issues.apache.org/jira/browse/HDFS-14762
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Shixiong Zhu
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14762.001.patch, HDFS-14762.002.patch, 
> HDFS-14762.003.patch
>
>
> When the "child" parameter contains ":", "Path(Path/String parent, String 
> child)" will throw the following exception:
> {code}
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
> path in absolute URI: ...
> {code}
> Not sure if this is a legit bug. But the following places will hit this error 
> when seeing a Path with a file name containing ":":
> https://github.com/apache/hadoop/blob/f9029c4070e8eb046b403f5cb6d0a132c5d58448/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java#L101
> https://github.com/apache/hadoop/blob/f9029c4070e8eb046b403f5cb6d0a132c5d58448/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java#L270
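A minimal, self-contained reproduction sketch of the failure described above; the exact exception message may vary by version:

{code:java}
import org.apache.hadoop.fs.Path;

public class ColonInChildRepro {
  public static void main(String[] args) {
    // Works: no colon in the child name.
    Path ok = new Path(new Path("/tmp"), "file.txt");
    System.out.println(ok);

    // Throws IllegalArgumentException (wrapping URISyntaxException:
    // "Relative path in absolute URI") because the text before the ':'
    // is parsed as a URI scheme.
    Path bad = new Path(new Path("/tmp"), "a:b.txt");
    System.out.println(bad);
  }
}
{code}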



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2125) maven-javadoc-plugin.version is missing in pom.ozone.xml

2019-09-13 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2125:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> maven-javadoc-plugin.version is missing in pom.ozone.xml
> 
>
> Key: HDDS-2125
> URL: https://issues.apache.org/jira/browse/HDDS-2125
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sammi Chen
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {{maven-javadoc-plugin.version}} is missing from {{pom.ozone.xml}}, which 
> causes a build failure.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2125) maven-javadoc-plugin.version is missing in pom.ozone.xml

2019-09-13 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929677#comment-16929677
 ] 

Hudson commented on HDDS-2125:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17298 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17298/])
HDDS-2125. maven-javadoc-plugin.version is missing in pom.ozone.xml (elek: rev 
9a931b823ed9c5a7b49f0628d0c890cb3e79a928)
* (edit) pom.ozone.xml


> maven-javadoc-plugin.version is missing in pom.ozone.xml
> 
>
> Key: HDDS-2125
> URL: https://issues.apache.org/jira/browse/HDDS-2125
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sammi Chen
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {{maven-javadoc-plugin.version}} is missing from {{pom.ozone.xml}}, which 
> causes a build failure.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2125) maven-javadoc-plugin.version is missing in pom.ozone.xml

2019-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2125?focusedWorklogId=312496&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312496
 ]

ASF GitHub Bot logged work on HDDS-2125:


Author: ASF GitHub Bot
Created on: 14/Sep/19 04:32
Start Date: 14/Sep/19 04:32
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1439: HDDS-2125. 
maven-javadoc-plugin.version is missing in pom.ozone.xml.
URL: https://github.com/apache/hadoop/pull/1439
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 312496)
Time Spent: 40m  (was: 0.5h)

> maven-javadoc-plugin.version is missing in pom.ozone.xml
> 
>
> Key: HDDS-2125
> URL: https://issues.apache.org/jira/browse/HDDS-2125
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sammi Chen
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {{maven-javadoc-plugin.version}} is missing from {{pom.ozone.xml}}, which 
> causes a build failure.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2110) Arbitrary file can be downloaded with the help of ProfilerServlet

2019-09-13 Thread Elek, Marton (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929675#comment-16929675
 ] 

Elek, Marton commented on HDDS-2110:


Thank you for reporting this, [~adeo]. I am not sure how Major this is, as 
ProfileServlet is a developer-only tool, but we can definitely restrict the 
download to the output directory. 
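
One way to restrict the download to the output directory, as suggested above, is to normalize the resolved path and reject anything that escapes OUTPUT_DIR. A minimal sketch (the directory value and error handling are illustrative, not the actual patch):

{code:java}
import java.io.File;
import java.nio.file.Path;
import java.nio.file.Paths;

class SafeDownloadSketch {
  // Illustrative stand-in for ProfileServlet.OUTPUT_DIR.
  static final Path OUTPUT_DIR =
      Paths.get("/tmp/prof-output").toAbsolutePath().normalize();

  static File resolveSafely(String fileName) {
    // Normalize to collapse any ".." segments before checking containment.
    Path requested = OUTPUT_DIR.resolve(fileName).toAbsolutePath().normalize();
    if (!requested.startsWith(OUTPUT_DIR)) {
      // e.g. fileName = "../../etc/passwd" is rejected here.
      throw new IllegalArgumentException("Invalid file name: " + fileName);
    }
    return requested.toFile();
  }
}
{code}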

> Arbitrary file can be downloaded with the help of ProfilerServlet
> -
>
> Key: HDDS-2110
> URL: https://issues.apache.org/jira/browse/HDDS-2110
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Native
>Reporter: Aayush
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Line 324 in the file 
> [ProfileServlet.java|https://github.com/apache/hadoop/blob/217bdbd940a96986df3b96899b43caae2b5a9ed2/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/ProfileServlet.java]
>  is prone to arbitrary file download:
> {code:java}
> protected void doGetDownload(String fileName, final HttpServletRequest req,
>     final HttpServletResponse resp) throws IOException {
>   File requestedFile =
>       ProfileServlet.OUTPUT_DIR.resolve(fileName).toAbsolutePath().toFile();
> {code}
> The String fileName is used directly as the requested file.
>  
> It is called at line 180 with the HTTP request parameter passed directly:
> {code:java}
> if (req.getParameter("file") != null) {
>   doGetDownload(req.getParameter("file"), req, resp);
>   return;
> }
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2110) Arbitrary file can be downloaded with the help of ProfilerServlet

2019-09-13 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2110:
---
Assignee: Elek, Marton
  Status: Patch Available  (was: Open)

> Arbitrary file can be downloaded with the help of ProfilerServlet
> -
>
> Key: HDDS-2110
> URL: https://issues.apache.org/jira/browse/HDDS-2110
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Native
>Reporter: Aayush
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Line 324 in the file 
> [ProfileServlet.java|https://github.com/apache/hadoop/blob/217bdbd940a96986df3b96899b43caae2b5a9ed2/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/ProfileServlet.java]
>  is prone to arbitrary file download:
> {code:java}
> protected void doGetDownload(String fileName, final HttpServletRequest req,
>     final HttpServletResponse resp) throws IOException {
>   File requestedFile =
>       ProfileServlet.OUTPUT_DIR.resolve(fileName).toAbsolutePath().toFile();
> {code}
> The String fileName is used directly as the requested file.
>  
> It is called at line 180 with the HTTP request parameter passed directly:
> {code:java}
> if (req.getParameter("file") != null) {
>   doGetDownload(req.getParameter("file"), req, resp);
>   return;
> }
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2110) Arbitrary file can be downloaded with the help of ProfilerServlet

2019-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2110:
-
Labels: pull-request-available  (was: )

> Arbitrary file can be downloaded with the help of ProfilerServlet
> -
>
> Key: HDDS-2110
> URL: https://issues.apache.org/jira/browse/HDDS-2110
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Native
>Reporter: Aayush
>Priority: Major
>  Labels: pull-request-available
>
> Line 324 in the file 
> [ProfileServlet.java|https://github.com/apache/hadoop/blob/217bdbd940a96986df3b96899b43caae2b5a9ed2/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/ProfileServlet.java]
>  is prone to arbitrary file download:
> {code:java}
> protected void doGetDownload(String fileName, final HttpServletRequest req,
>     final HttpServletResponse resp) throws IOException {
>   File requestedFile =
>       ProfileServlet.OUTPUT_DIR.resolve(fileName).toAbsolutePath().toFile();
> {code}
> The String fileName is used directly as the requested file.
>  
> It is called at line 180 with the HTTP request parameter passed directly:
> {code:java}
> if (req.getParameter("file") != null) {
>   doGetDownload(req.getParameter("file"), req, resp);
>   return;
> }
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2110) Arbitrary file can be downloaded with the help of ProfilerServlet

2019-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2110?focusedWorklogId=312495&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312495
 ]

ASF GitHub Bot logged work on HDDS-2110:


Author: ASF GitHub Bot
Created on: 14/Sep/19 04:19
Start Date: 14/Sep/19 04:19
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1448: HDDS-2110. 
Arbitrary file can be downloaded with the help of ProfilerServlet
URL: https://github.com/apache/hadoop/pull/1448
 
 
   Line 324 in the file 
[ProfileServlet.java|https://github.com/apache/hadoop/blob/217bdbd940a96986df3b96899b43caae2b5a9ed2/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/ProfileServlet.java]
 is prone to arbitrary file download:
   {code:java}
   protected void doGetDownload(String fileName, final HttpServletRequest req,
       final HttpServletResponse resp) throws IOException {
     File requestedFile =
         ProfileServlet.OUTPUT_DIR.resolve(fileName).toAbsolutePath().toFile();
   {code}
   The String fileName is used directly as the requested file.
   
   It is called at line 180 with the HTTP request parameter passed directly:
   {code:java}
   if (req.getParameter("file") != null) {
     doGetDownload(req.getParameter("file"), req, resp);
     return;
   }
   {code}
    
   
   See: https://issues.apache.org/jira/browse/HDDS-2110
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 312495)
Remaining Estimate: 0h
Time Spent: 10m

> Arbitrary file can be downloaded with the help of ProfilerServlet
> -
>
> Key: HDDS-2110
> URL: https://issues.apache.org/jira/browse/HDDS-2110
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Native
>Reporter: Aayush
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Line 324 in the file 
> [ProfileServlet.java|https://github.com/apache/hadoop/blob/217bdbd940a96986df3b96899b43caae2b5a9ed2/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/ProfileServlet.java]
>  is prone to arbitrary file download:
> {code:java}
> protected void doGetDownload(String fileName, final HttpServletRequest req,
>     final HttpServletResponse resp) throws IOException {
>   File requestedFile =
>       ProfileServlet.OUTPUT_DIR.resolve(fileName).toAbsolutePath().toFile();
> {code}
> The String fileName is used directly as the requested file.
>  
> It is called at line 180 with the HTTP request parameter passed directly:
> {code:java}
> if (req.getParameter("file") != null) {
>   doGetDownload(req.getParameter("file"), req, resp);
>   return;
> }
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2111) XSS fragments can be injected to the S3g landing page

2019-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2111?focusedWorklogId=312494&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312494
 ]

ASF GitHub Bot logged work on HDDS-2111:


Author: ASF GitHub Bot
Created on: 14/Sep/19 04:16
Start Date: 14/Sep/19 04:16
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1447: HDDS-2111. XSS 
fragments can be injected to the S3g landing page  
URL: https://github.com/apache/hadoop/pull/1447#issuecomment-531447108
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 32 | hadoop-ozone in trunk failed. |
   | -1 | compile | 21 | hadoop-ozone in trunk failed. |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 880 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 21 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 19 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 34 | hadoop-ozone in the patch failed. |
   | -1 | jshint | 77 | The patch generated 1392 new + 2737 unchanged - 0 fixed 
= 4129 total (was 2737) |
   | -1 | compile | 25 | hadoop-ozone in the patch failed. |
   | -1 | javac | 25 | hadoop-ozone in the patch failed. |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 654 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 18 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 18 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 134 | hadoop-hdds in the patch failed. |
   | -1 | unit | 28 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 2408 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.container.keyvalue.TestKeyValueContainer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1447/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1447 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient jshint |
   | uname | Linux f1154b87d514 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6a9f7ca |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1447/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1447/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1447/1/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1447/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1447/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | jshint | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1447/1/artifact/out/diff-patch-jshint.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1447/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1447/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1447/1/artifact/out/patch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1447/1/artifact/out/patch-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1447/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1447/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1447/1/testReport/ |
   | Max. process+thread count | 399 (vs. ulimit of 5500) |
   | modules | 

[jira] [Updated] (HDDS-2110) Arbitrary file can be downloaded with the help of ProfilerServlet

2019-09-13 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2110:
---
Summary: Arbitrary file can be downloaded with the help of ProfilerServlet  
(was: Arbitrary File Download)

> Arbitrary file can be downloaded with the help of ProfilerServlet
> -
>
> Key: HDDS-2110
> URL: https://issues.apache.org/jira/browse/HDDS-2110
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Native
>Reporter: Aayush
>Priority: Major
>
> Line 324 in the file 
> [ProfileServlet.java|https://github.com/apache/hadoop/blob/217bdbd940a96986df3b96899b43caae2b5a9ed2/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/ProfileServlet.java]
>  is prone to arbitrary file download:
> {code:java}
> protected void doGetDownload(String fileName, final HttpServletRequest req,
>     final HttpServletResponse resp) throws IOException {
>   File requestedFile =
>       ProfileServlet.OUTPUT_DIR.resolve(fileName).toAbsolutePath().toFile();
> {code}
> The String fileName is used directly as the requested file.
>  
> It is called at line 180 with the HTTP request parameter passed directly:
> {code:java}
> if (req.getParameter("file") != null) {
>   doGetDownload(req.getParameter("file"), req, resp);
>   return;
> }
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=312492&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312492
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 14/Sep/19 04:10
Start Date: 14/Sep/19 04:10
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1277: HDDS-1054. List 
Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#issuecomment-531446769
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 1208 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 77 | Maven dependency ordering for branch |
   | -1 | mvninstall | 32 | hadoop-ozone in trunk failed. |
   | -1 | compile | 19 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 100 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 937 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 13 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 15 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 159 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 23 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | -1 | mvninstall | 31 | hadoop-ozone in the patch failed. |
   | -1 | compile | 22 | hadoop-ozone in the patch failed. |
   | -1 | cc | 22 | hadoop-ozone in the patch failed. |
   | -1 | javac | 22 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 24 | hadoop-hdds: The patch generated 5 new + 9 
unchanged - 1 fixed = 14 total (was 10) |
   | -0 | checkstyle | 71 | hadoop-ozone: The patch generated 373 new + 2410 
unchanged - 15 fixed = 2783 total (was 2425) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 1 | The patch has no whitespace issues. |
   | +1 | shadedclient | 726 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 14 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 15 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 23 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 163 | hadoop-hdds in the patch failed. |
   | -1 | unit | 25 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 4264 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.container.keyvalue.TestKeyValueContainer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/11/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1277 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 86494a8d4639 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6a9f7ca |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/11/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/11/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/11/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/11/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/11/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/11/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/11/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | cc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/11/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/11/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 

[jira] [Updated] (HDDS-2111) XSS fragments can be injected to the S3g landing page

2019-09-13 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2111:
---
Status: Patch Available  (was: Open)

> XSS fragments can be injected to the S3g landing page  
> ---
>
> Key: HDDS-2111
> URL: https://issues.apache.org/jira/browse/HDDS-2111
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Aayush
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> VULNERABILITY DETAILS
> There is a way to bypass the anti-XSS filter for DOM XSS by exploiting 
> "window.location.href".
> Considering a typical URL:
> scheme://domain:port/path?query_string#fragment_id
> Browsers correctly encode both "path" and "query_string", but not the 
> "fragment_id". 
> So if the "fragment_id" is used, the vector is also not logged on the web server.
> VERSION
> Chrome Version: 10.0.648.134 (Official Build 77917) beta
> REPRODUCTION CASE
> This is an index.html page:
> {code:java}
> aws s3api --endpoint 
> document.write(window.location.href.replace("static/", "")) 
> create-bucket --bucket=wordcount
> {code}
> The attack vector is:
> index.html?#alert('XSS');
> * PoC:
> For your convenience, a minimalist PoC is located on:
> http://security.onofri.org/xss_location.html?#alert('XSS');
> * References
> - DOM Based Cross-Site Scripting or XSS of the Third Kind - 
> http://www.webappsec.org/projects/articles/071105.shtml
> reference:- 
> https://bugs.chromium.org/p/chromium/issues/detail?id=76796



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2111) XSS fragments can be injected to the S3g landing page

2019-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2111:
-
Labels: pull-request-available  (was: )

> XSS fragments can be injected to the S3g landing page  
> ---
>
> Key: HDDS-2111
> URL: https://issues.apache.org/jira/browse/HDDS-2111
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Aayush
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>
> VULNERABILITY DETAILS
> There is a way to bypass the anti-XSS filter for DOM XSS by exploiting 
> "window.location.href".
> Considering a typical URL:
> scheme://domain:port/path?query_string#fragment_id
> Browsers correctly encode both "path" and "query_string", but not the 
> "fragment_id". 
> So if the "fragment_id" is used, the vector is also not logged on the web server.
> VERSION
> Chrome Version: 10.0.648.134 (Official Build 77917) beta
> REPRODUCTION CASE
> This is an index.html page:
> {code:java}
> aws s3api --endpoint 
> document.write(window.location.href.replace("static/", "")) 
> create-bucket --bucket=wordcount
> {code}
> The attack vector is:
> index.html?#alert('XSS');
> * PoC:
> For your convenience, a minimalist PoC is located on:
> http://security.onofri.org/xss_location.html?#alert('XSS');
> * References
> - DOM Based Cross-Site Scripting or XSS of the Third Kind - 
> http://www.webappsec.org/projects/articles/071105.shtml
> reference:- 
> https://bugs.chromium.org/p/chromium/issues/detail?id=76796



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2111) XSS fragments can be injected to the S3g landing page

2019-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2111?focusedWorklogId=312485&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312485
 ]

ASF GitHub Bot logged work on HDDS-2111:


Author: ASF GitHub Bot
Created on: 14/Sep/19 03:34
Start Date: 14/Sep/19 03:34
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1447: HDDS-2111. XSS 
fragments can be injected to the S3g landing page  
URL: https://github.com/apache/hadoop/pull/1447
 
 
   VULNERABILITY DETAILS
   There is a way to bypass the anti-XSS filter for DOM XSS by exploiting 
"window.location.href".
   
   Considering a typical URL:
   
   scheme://domain:port/path?query_string#fragment_id
   
   Browsers correctly encode both "path" and "query_string", but not the 
"fragment_id". 
   
   So if the "fragment_id" is used, the vector is also not logged on the web server.
   
   VERSION
   Chrome Version: 10.0.648.134 (Official Build 77917) beta
   
   REPRODUCTION CASE
   This is an index.html page:
   
   
   {code:java}
   aws s3api --endpoint 
document.write(window.location.href.replace("static/", "")) 
create-bucket --bucket=wordcount
   {code}
   
   
   The attack vector is:
   index.html?#alert('XSS');
   
   * PoC:
   For your convenience, a minimalist PoC is located on:
   http://security.onofri.org/xss_location.html?#alert('XSS');
   
   * References
   - DOM Based Cross-Site Scripting or XSS of the Third Kind - 
http://www.webappsec.org/projects/articles/071105.shtml
   
   
   reference:- 
   
   https://bugs.chromium.org/p/chromium/issues/detail?id=76796
   
   See: https://issues.apache.org/jira/browse/HDDS-2111
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 312485)
Remaining Estimate: 0h
Time Spent: 10m

> XSS fragments can be injected to the S3g landing page  
> ---
>
> Key: HDDS-2111
> URL: https://issues.apache.org/jira/browse/HDDS-2111
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Aayush
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> VULNERABILITY DETAILS
> There is a way to bypass the anti-XSS filter for DOM XSS by exploiting 
> "window.location.href".
> Considering a typical URL:
> scheme://domain:port/path?query_string#fragment_id
> Browsers correctly encode both "path" and "query_string", but not the 
> "fragment_id". 
> So if the "fragment_id" is used, the vector is also not logged on the web server.
> VERSION
> Chrome Version: 10.0.648.134 (Official Build 77917) beta
> REPRODUCTION CASE
> This is an index.html page:
> {code:java}
> aws s3api --endpoint 
> document.write(window.location.href.replace("static/", "")) 
> create-bucket --bucket=wordcount
> {code}
> The attack vector is:
> index.html?#alert('XSS');
> * PoC:
> For your convenience, a minimalist PoC is located on:
> http://security.onofri.org/xss_location.html?#alert('XSS');
> * References
> - DOM Based Cross-Site Scripting or XSS of the Third Kind - 
> http://www.webappsec.org/projects/articles/071105.shtml
> reference:- 
> https://bugs.chromium.org/p/chromium/issues/detail?id=76796



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2111) XSS fragments can be injected to the S3g landing page

2019-09-13 Thread Elek, Marton (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929660#comment-16929660
 ] 

Elek, Marton commented on HDDS-2111:


Thanks [~adeo] for reporting it. I uploaded a PR. It is fixed in two ways (using just 
window.location.pathname plus setting a safer Content-Security-Policy).
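
A hedged sketch of the two-part fix described above: on the landing page, window.location.pathname (unlike href) never contains the unencoded fragment, and a restrictive Content-Security-Policy header blocks injected inline scripts as a second layer. The filter below is illustrative; the actual policy string in the patch may differ.

{code:java}
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class CspFilter implements Filter {
  @Override
  public void init(FilterConfig filterConfig) {
  }

  @Override
  public void doFilter(ServletRequest req, ServletResponse resp,
      FilterChain chain) throws IOException, ServletException {
    // Forbid inline scripts entirely, so an injected fragment cannot execute.
    ((HttpServletResponse) resp).setHeader("Content-Security-Policy",
        "default-src 'self'");
    chain.doFilter(req, resp);
  }

  @Override
  public void destroy() {
  }
}
{code}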

> XSS fragments can be injected to the S3g landing page  
> ---
>
> Key: HDDS-2111
> URL: https://issues.apache.org/jira/browse/HDDS-2111
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Aayush
>Assignee: Elek, Marton
>Priority: Major
>
> VULNERABILITY DETAILS
> There is a way to bypass the anti-XSS filter for DOM XSS by exploiting 
> "window.location.href".
> Considering a typical URL:
> scheme://domain:port/path?query_string#fragment_id
> Browsers correctly encode both "path" and "query_string", but not the 
> "fragment_id". 
> So if the "fragment_id" is used, the vector is also not logged on the web server.
> VERSION
> Chrome Version: 10.0.648.134 (Official Build 77917) beta
> REPRODUCTION CASE
> This is an index.html page:
> {code:java}
> aws s3api --endpoint 
> document.write(window.location.href.replace("static/", "")) 
> create-bucket --bucket=wordcount
> {code}
> The attack vector is:
> index.html?#alert('XSS');
> * PoC:
> For your convenience, a minimalist PoC is located on:
> http://security.onofri.org/xss_location.html?#alert('XSS');
> * References
> - DOM Based Cross-Site Scripting or XSS of the Third Kind - 
> http://www.webappsec.org/projects/articles/071105.shtml
> reference:- 
> https://bugs.chromium.org/p/chromium/issues/detail?id=76796



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2111) XSS fragments can be injected to the S3g landing page

2019-09-13 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2111:
---
Summary: XSS fragments can be injected to the S3g landing page  (was: DOM XSS)

> XSS fragments can be injected to the S3g landing page  
> ---
>
> Key: HDDS-2111
> URL: https://issues.apache.org/jira/browse/HDDS-2111
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Aayush
>Assignee: Elek, Marton
>Priority: Major
>
> VULNERABILITY DETAILS
> There is a way to bypass the anti-XSS filter for DOM XSS by exploiting 
> "window.location.href".
> Considering a typical URL:
> scheme://domain:port/path?query_string#fragment_id
> Browsers correctly encode both "path" and "query_string", but not the 
> "fragment_id". 
> So if the "fragment_id" is used, the vector is also not logged on the web server.
> VERSION
> Chrome Version: 10.0.648.134 (Official Build 77917) beta
> REPRODUCTION CASE
> This is an index.html page:
> {code:java}
> aws s3api --endpoint 
> document.write(window.location.href.replace("static/", "")) 
> create-bucket --bucket=wordcount
> {code}
> The attack vector is:
> index.html?#alert('XSS');
> * PoC:
> For your convenience, a minimalist PoC is located on:
> http://security.onofri.org/xss_location.html?#alert('XSS');
> * References
> - DOM Based Cross-Site Scripting or XSS of the Third Kind - 
> http://www.webappsec.org/projects/articles/071105.shtml
> reference:- 
> https://bugs.chromium.org/p/chromium/issues/detail?id=76796



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2111) DOM XSS

2019-09-13 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDDS-2111:
--

Assignee: Elek, Marton

> DOM XSS
> ---
>
> Key: HDDS-2111
> URL: https://issues.apache.org/jira/browse/HDDS-2111
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Aayush
>Assignee: Elek, Marton
>Priority: Major
>
> VULNERABILITY DETAILS
> There is a way to bypass the anti-XSS filter for DOM XSS by exploiting 
> "window.location.href".
> Considering a typical URL:
> scheme://domain:port/path?query_string#fragment_id
> Browsers correctly encode both "path" and "query_string", but not the 
> "fragment_id". 
> So if the "fragment_id" is used, the vector is also not logged on the web server.
> VERSION
> Chrome Version: 10.0.648.134 (Official Build 77917) beta
> REPRODUCTION CASE
> This is an index.html page:
> {code:java}
> aws s3api --endpoint 
> document.write(window.location.href.replace("static/", "")) 
> create-bucket --bucket=wordcount
> {code}
> The attack vector is:
> index.html?#alert('XSS');
> * PoC:
> For your convenience, a minimalist PoC is located on:
> http://security.onofri.org/xss_location.html?#alert('XSS');
> * References
> - DOM Based Cross-Site Scripting or XSS of the Third Kind - 
> http://www.webappsec.org/projects/articles/071105.shtml
> reference:- 
> https://bugs.chromium.org/p/chromium/issues/detail?id=76796



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=312482&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312482
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 14/Sep/19 02:52
Start Date: 14/Sep/19 02:52
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1277: HDDS-1054. List 
Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#discussion_r324409675
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -1270,6 +1271,58 @@ public void abortMultipartUpload(OmKeyArgs omKeyArgs) 
throws IOException {
 
   }
 
+  @Override
+  public OmMultipartUploadList listMultipartUploads(String volumeName,
+  String bucketName, String prefix) throws OMException {
+Preconditions.checkNotNull(volumeName);
+Preconditions.checkNotNull(bucketName);
+
+metadataManager.getLock().acquireLock(BUCKET_LOCK, volumeName, bucketName);
+try {
+
+  List<String> multipartUploadKeys =
+  metadataManager
+  .getMultipartUploadKeys(volumeName, bucketName, prefix);
+
+  List<OmMultipartUpload> collect = multipartUploadKeys.stream()
+  .map(OmMultipartUpload::from)
+  .map(upload -> {
+String dbKey = metadataManager
+.getOzoneKey(upload.getVolumeName(),
+upload.getBucketName(),
+upload.getKeyName());
+try {
+  Table<String, OmKeyInfo> openKeyTable =
+  metadataManager.getOpenKeyTable();
+
+  OmKeyInfo omKeyInfo =
+  openKeyTable.get(upload.getDbKey());
 
 Review comment:
   You are 100% right. But it seems to be a bigger change. Let's do it in 
https://issues.apache.org/jira/browse/HDDS-2131
   
   (On the other hand, I added audit + metrics support because they were just a 
few lines.) 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 312482)
Time Spent: 8h 20m  (was: 8h 10m)

> List Multipart uploads in a bucket
> --
>
> Key: HDDS-1054
> URL: https://issues.apache.org/jira/browse/HDDS-1054
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 8h 20m
>  Remaining Estimate: 0h
>
> This Jira is to implement listing of in-progress multipart uploads 
> in a bucket in Ozone.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html]
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2131) Optimize replication type and creation time calculation in S3 MPU list call

2019-09-13 Thread Elek, Marton (Jira)
Elek, Marton created HDDS-2131:
--

 Summary: Optimize replication type and creation time calculation 
in S3 MPU list call
 Key: HDDS-2131
 URL: https://issues.apache.org/jira/browse/HDDS-2131
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Elek, Marton


Based on the review from [~bharatviswa]:

{code}
 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
  metadataManager.getOpenKeyTable();

  OmKeyInfo omKeyInfo =
  openKeyTable.get(upload.getDbKey());
{code}

{quote}Here we are reading openKeyTable only to get the creation time. If we 
can have this information in OmMultipartKeyInfo, we could avoid DB calls to 
openKeyTable.

To do this, we can set creationTime in OmMultipartKeyInfo during 
initiateMultipartUpload. In this way, we can get all the required information 
from the MultipartKeyInfo table.

StorageClass is also missing from the returned OmMultipartUpload, as 
listMultipartUploads shows StorageClass information. For this, we can return 
replicationType and, depending on its value, set StorageClass in the 
listMultipartUploads response.
{quote}
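
A rough, self-contained sketch of the idea (class and field names are illustrative, not the real Ozone types): keep the creation time on the multipart-info record itself, so the list call never needs the second openKeyTable lookup.

{code:java}
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class MultipartInfoSketch {
  // Stand-in for the MultipartKeyInfo table entry, now carrying creationTime.
  record MultipartKeyInfo(String uploadId, long creationTimeMillis) {}

  private final Map<String, MultipartKeyInfo> multipartInfoTable =
      new ConcurrentHashMap<>();

  // initiateMultipartUpload: record the creation time up front.
  void initiate(String dbKey, String uploadId) {
    multipartInfoTable.put(dbKey,
        new MultipartKeyInfo(uploadId, Instant.now().toEpochMilli()));
  }

  // listMultipartUploads: one table holds everything the response needs.
  long creationTimeOf(String dbKey) {
    return multipartInfoTable.get(dbKey).creationTimeMillis();
  }
}
{code}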



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=312481&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312481
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 14/Sep/19 02:48
Start Date: 14/Sep/19 02:48
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #1277: HDDS-1054. List 
Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#issuecomment-531442173
 
 
   > Have few comments in place.
   > Can we open Jira's for all TODO's in this Jira for tracking purpose?
   > 
   > 1. Audit support for the new method.
   > 
   > 2. Pagination support and other query parameters support.
   > 
   > 3. If replication type will not be handled in this Jira, can you open 
Jira for this one too.
   
   Sure. Audit and replication type support are added to this patch. I created 
HDDS-2130 for the pagination support.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 312481)
Time Spent: 8h 10m  (was: 8h)

> List Multipart uploads in a bucket
> --
>
> Key: HDDS-1054
> URL: https://issues.apache.org/jira/browse/HDDS-1054
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 8h 10m
>  Remaining Estimate: 0h
>
> This Jira is to implement listing of in-progress multipart uploads 
> in a bucket in Ozone.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html]
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2130) Add pagination support to the S3 ListMPU call

2019-09-13 Thread Elek, Marton (Jira)
Elek, Marton created HDDS-2130:
--

 Summary: Add pagination support to the S3 ListMPU call
 Key: HDDS-2130
 URL: https://issues.apache.org/jira/browse/HDDS-2130
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Elek, Marton


HDDS-1054 introduced a simple implementation for the AWS S3 
ListMultipartUploads REST call.

However, pagination support (key-marker, max-uploads, upload-id-marker...) 
is missing. We should implement it in this jira, as sketched below.
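
For illustration, a hedged sketch of what key-marker/max-uploads style pagination over an ordered key scan could look like (purely a sketch under assumed types, not the eventual Ozone implementation):

{code:java}
import java.util.List;
import java.util.stream.Collectors;

class MpuPaginationSketch {
  // Assumes keys arrive sorted, as from an ordered table scan.
  static List<String> page(List<String> sortedUploadKeys,
      String keyMarker, int maxUploads) {
    return sortedUploadKeys.stream()
        // Start strictly after the marker, mirroring S3 ListMultipartUploads.
        .filter(k -> keyMarker == null || k.compareTo(keyMarker) > 0)
        .limit(maxUploads)
        .collect(Collectors.toList());
  }
}
{code}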



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13019) dfs put with -f to dir with existing file in dest should return 0, not -1

2019-09-13 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929652#comment-16929652
 ] 

hemanthboyina commented on HDFS-13019:
--

[~ayushtkn], is this issue a valid one?

> dfs put with -f to dir with existing file in dest should return 0, not -1
> -
>
> Key: HDFS-13019
> URL: https://issues.apache.org/jira/browse/HDFS-13019
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.3
>Reporter: BRYAN T VOLD
>Assignee: Bharat Viswanadham
>Priority: Major
>
> When doing an hdfs dfs -put <src> <dest> and there are existing 
> files, the return code will be -1, which is expected.  
> When you do an hdfs dfs -put -f <src> <dest> (force), the error code 
> still comes back as -1, which is unexpected.  
> If you use hdfs dfs -copyFromLocal with the same directories as above, 
> -copyFromLocal still gives the error, which is expected, and when you pass -f 
> to this version of the command, the error code is 0, which I think is the 
> correct behavior; I think hdfs dfs -put should match this.  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=312478&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312478
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 14/Sep/19 02:39
Start Date: 14/Sep/19 02:39
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1277: HDDS-1054. List 
Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#discussion_r324409356
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -826,6 +828,28 @@ private VolumeList getVolumesByUser(String userNameKey)
 return count;
   }
 
+  @Override
+  public List<String> getMultipartUploadKeys(
+  String volumeName, String bucketName, String prefix) throws IOException {
+List<String> response = new ArrayList<>();
+TableIterator<String, ? extends KeyValue<String, OmMultipartKeyInfo>>
+iterator = getMultipartInfoTable().iterator();
+
+String prefixKey =
+OmMultipartUpload.getDbKey(volumeName, bucketName, prefix);
+iterator.seek(prefixKey);
+
+while (iterator.hasNext()) {
 
 Review comment:
   Can you please help me understand how it should be done? What about if an 
MPU is finished and deleted? How is it cached? I think I can't return 
(cached values + db values) because of the deletions.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 312478)
Time Spent: 8h  (was: 7h 50m)

> List Multipart uploads in a bucket
> --
>
> Key: HDDS-1054
> URL: https://issues.apache.org/jira/browse/HDDS-1054
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 8h
>  Remaining Estimate: 0h
>
> This Jira is to implement listing of in-progress multipart uploads 
> in a bucket in Ozone.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html]
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=312476&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312476
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 14/Sep/19 02:26
Start Date: 14/Sep/19 02:26
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1277: HDDS-1054. List 
Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#discussion_r324409058
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMMetadataManager.java
 ##
 @@ -327,4 +327,11 @@ String getMultipartKey(String volume, String bucket, 
String key, String
*/
long countEstimatedRowsInTable(Table table)
   throws IOException;
+
+  /**
+   * Return the existing upload keys, which include volumeName, bucketName,
+   * keyName and uploadId.
 
 Review comment:
   Thanks.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 312476)
Time Spent: 7h 50m  (was: 7h 40m)

> List Multipart uploads in a bucket
> --
>
> Key: HDDS-1054
> URL: https://issues.apache.org/jira/browse/HDDS-1054
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 7h 50m
>  Remaining Estimate: 0h
>
> This Jira is to implement listing of in-progress multipart uploads 
> in a bucket in Ozone.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html]
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14567) If kms-acls fails to load, it will never be reloaded

2019-09-13 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929649#comment-16929649
 ] 

hemanthboyina commented on HDFS-14567:
--

[~jojochuang], can you have a look at the patch?

> If kms-acls fails to load, it will never be reloaded
> ---
>
> Key: HDFS-14567
> URL: https://issues.apache.org/jira/browse/HDFS-14567
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14567.001.patch, HDFS-14567.002.patch, 
> HDFS-14567.patch
>
>
> Scenario: we are generating kms-acls through an automation tool. Though the 
> generation of kms-acls is not completed, the system will detect a 
> modification of kms-acls and will try to load it.
> Before getting the configuration we modify the last reload time, as the code 
> below shows:
> {code:java}
> private Configuration loadACLsFromFile() {
> LOG.debug("Loading ACLs file");
> lastReload = System.currentTimeMillis();
> Configuration conf = KMSConfiguration.getACLsConf();
> // triggering the resource loading.
> conf.get(Type.CREATE.getAclConfigKey());
> return conf;
> }{code}
> If the kms-acls file is written within the next 100ms, the changes will not be 
> loaded, as the condition "newer = f.lastModified() - time > 100" is never met, 
> because we modified the last reload time before getting the configuration.
> {code:java}
> public static boolean isACLsFileNewer(long time) {
> boolean newer = false;
> String confDir = System.getProperty(KMS_CONFIG_DIR);
> if (confDir != null) {
> Path confPath = new Path(confDir);
> if (!confPath.isUriPathAbsolute()) {
> throw new RuntimeException("System property '" + KMS_CONFIG_DIR +
> "' must be an absolute path: " + confDir);
> }
> File f = new File(confDir, KMS_ACLS_XML);
> LOG.trace("Checking file {}, modification time is {}, last reload time is"
> + " {}", f.getPath(), f.lastModified(), time);
> // at least 100ms newer than time, we do this to ensure the file
> // has been properly closed/flushed
> newer = f.lastModified() - time > 100;
> }
> return newer;
> } {code}
>  
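
A hedged sketch of one possible fix (illustrative, not necessarily the attached patch): track the modification time of the file that was actually loaded instead of a wall-clock reload time, so a write that lands during loading is still detected as newer on the next check.

{code:java}
import java.io.File;

class AclReloadSketch {
  private volatile long loadedMtime;

  void loadAclsFromFile(File aclsFile) {
    // Remember the mtime we are about to load; a write that happens while we
    // read bumps lastModified() past this value and triggers a reload later.
    loadedMtime = aclsFile.lastModified();
    // ... load and parse the configuration here ...
  }

  boolean isAclsFileNewer(File aclsFile) {
    return aclsFile.lastModified() > loadedMtime;
  }
}
{code}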



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14495) RBF: Duplicate FederationRPCMetrics

2019-09-13 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929648#comment-16929648
 ] 

hemanthboyina commented on HDFS-14495:
--

Hi [~aajisaka],

you mean in HDFS-12335 they have populated FederationRPCMetrics in two ways:

 * FederationRPCMetrics via {{@Metrics}} and {{@Metric}} annotations
 * FederationRPCMetrics via registering FederationRPCMBean

and we have to remove the duplication?

> RBF: Duplicate FederationRPCMetrics
> ---
>
> Key: HDFS-14495
> URL: https://issues.apache.org/jira/browse/HDFS-14495
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: metrics
>Reporter: Akira Ajisaka
>Priority: Major
>
> There are two FederationRPCMetrics displayed in the Web UI (http://<hostname>:<port>/jmx) and most of the metrics are the same.
> * FederationRPCMetrics via {{@Metrics}} and {{@Metric}} annotations
> * FederationRPCMetrics via registering FederationRPCMBean
> Can we remove {{@Metrics}} and {{@Metric}} annotations to remove duplication?



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8720) Minimally replicated blocks counting from fsck is misleading

2019-09-13 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-8720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929646#comment-16929646
 ] 

hemanthboyina commented on HDFS-8720:
-

[~aajisaka], can you have a look at the patch?

> Minimally replicated blocks counting from fsck is misleading
> 
>
> Key: HDFS-8720
> URL: https://issues.apache.org/jira/browse/HDFS-8720
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Walter Su
>Assignee: hemanthboyina
>Priority: Minor
> Attachments: HDFS-8720.002.patch, HDFS-8720.01.patch
>
>
> {noformat}
>  Total blocks (validated):  1 (avg. block size 17087 B)
>  Minimally replicated blocks:   1 (100.0 %)
>  Over-replicated blocks:0 (0.0 %)
>  Under-replicated blocks:   0 (0.0 %)
>  Mis-replicated blocks: 0 (0.0 %)
>  Default replication factor:3
>  Average block replication: 3.0
>  Missing blocks:0
>  Corrupt blocks:0
>  Missing replicas:  0 (0.0 %)
>  Number of data-nodes:  3
>  Number of racks:   1
> {noformat}
> "Minimally replicated blocks" actually means "*at least* Minimally replicated 
> blocks" here.
> I want to know how many blocks are in danger, whose number of replicas is 
> *equal* to {{minReplication}}. I can't get it from fsck.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14762) "Path(Path/String parent, String child)" will fail when "child" contains ":"

2019-09-13 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929645#comment-16929645
 ] 

hemanthboyina commented on HDFS-14762:
--

No [~ayushtkn], that doesn't work;
we need to update the scheme to

> "Path(Path/String parent, String child)" will fail when "child" contains ":"
> 
>
> Key: HDFS-14762
> URL: https://issues.apache.org/jira/browse/HDFS-14762
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Shixiong Zhu
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14762.001.patch, HDFS-14762.002.patch, 
> HDFS-14762.003.patch
>
>
> When the "child" parameter contains ":", "Path(Path/String parent, String 
> child)" will throw the following exception:
> {code}
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
> path in absolute URI: ...
> {code}
> Not sure if this is a legit bug. But the following places will hit this error 
> when seeing a Path with a file name containing ":":
> https://github.com/apache/hadoop/blob/f9029c4070e8eb046b403f5cb6d0a132c5d58448/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java#L101
> https://github.com/apache/hadoop/blob/f9029c4070e8eb046b403f5cb6d0a132c5d58448/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java#L270
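
A minimal standalone reproduction of the failure (illustrative; uses only org.apache.hadoop.fs.Path):

{code:java}
import org.apache.hadoop.fs.Path;

public class PathColonRepro {
  public static void main(String[] args) {
    // Fine: no ':' in the child component.
    System.out.println(new Path(new Path("/data"), "part-00000"));

    // Throws IllegalArgumentException: "java.net.URISyntaxException:
    // Relative path in absolute URI", because the child is parsed as a URI
    // and "file:name" looks like a scheme plus a relative path.
    System.out.println(new Path(new Path("/data"), "file:name"));
  }
}
{code}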



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14844) Make buffer of BlockReaderRemote#newBlockReader#BufferedOutputStream configurable

2019-09-13 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929638#comment-16929638
 ] 

Lisheng Sun commented on HDFS-14844:


Thanks [~elgoiri] for your thorough review. I updated the patch and uploaded the 
v004 patch.

> Make buffer of BlockReaderRemote#newBlockReader#BufferedOutputStream  
> configurable
> --
>
> Key: HDFS-14844
> URL: https://issues.apache.org/jira/browse/HDFS-14844
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-14844.001.patch, HDFS-14844.002.patch, 
> HDFS-14844.003.patch, HDFS-14844.004.patch
>
>
> See HDFS-14820 for details.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14844) Make buffer of BlockReaderRemote#newBlockReader#BufferedOutputStream configurable

2019-09-13 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14844:
---
Attachment: HDFS-14844.004.patch

> Make buffer of BlockReaderRemote#newBlockReader#BufferedOutputStream  
> configurable
> --
>
> Key: HDFS-14844
> URL: https://issues.apache.org/jira/browse/HDFS-14844
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-14844.001.patch, HDFS-14844.002.patch, 
> HDFS-14844.003.patch, HDFS-14844.004.patch
>
>
> See HDFS-14820 for details.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14795) Add Throttler for writing block

2019-09-13 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929634#comment-16929634
 ] 

Lisheng Sun commented on HDFS-14795:


I confirmed all failed UTs pass locally, so they are unrelated to this 
patch. Thanks a lot, [~elgoiri].

> Add Throttler for writing block
> ---
>
> Key: HDFS-14795
> URL: https://issues.apache.org/jira/browse/HDFS-14795
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-14795.001.patch, HDFS-14795.002.patch, 
> HDFS-14795.003.patch, HDFS-14795.004.patch, HDFS-14795.005.patch, 
> HDFS-14795.006.patch, HDFS-14795.007.patch, HDFS-14795.008.patch, 
> HDFS-14795.009.patch, HDFS-14795.010.patch, HDFS-14795.011.patch, 
> HDFS-14795.012.patch
>
>
> DataXceiver#writeBlock
> {code:java}
> blockReceiver.receiveBlock(mirrorOut, mirrorIn, replyOut,
> mirrorAddr, null, targets, false);
> {code}
> As the above code shows, DataXceiver#writeBlock doesn't throttle.
>  I think it is necessary to throttle block writes, by adding a throttler 
> in the PIPELINE_SETUP_APPEND_RECOVERY or 
> PIPELINE_SETUP_STREAMING_RECOVERY stage.
> The default throttler value is still null.
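
A hedged sketch of the direction (DataTransferThrottler already exists in org.apache.hadoop.hdfs.util; the configuration key name below is an assumption for illustration):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.util.DataTransferThrottler;

class WriteThrottlerSketch {
  // Hypothetical key name, for illustration only.
  static final String WRITE_BANDWIDTH_KEY =
      "dfs.datanode.data.write.bandwidthPerSec";

  // Returns null when unconfigured, preserving today's unthrottled default;
  // the result would be passed to receiveBlock() instead of the null above.
  static DataTransferThrottler createWriteThrottler(Configuration conf) {
    long bandwidthPerSec = conf.getLong(WRITE_BANDWIDTH_KEY, 0);
    return bandwidthPerSec > 0
        ? new DataTransferThrottler(bandwidthPerSec)
        : null;
  }
}
{code}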



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2111) DOM XSS

2019-09-13 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2111:
---
Description: 
VULNERABILITY DETAILS
There is a way to bypass the anti-XSS filter for DOM XSS by exploiting 
"window.location.href".

Considering a typical URL:

scheme://domain:port/path?query_string#fragment_id

Browsers correctly encode both the "path" and the "query_string", but not the 
"fragment_id". 

So if the "fragment_id" is used, the attack vector is also not logged on the web server.

VERSION
Chrome Version: 10.0.648.134 (Official Build 77917) beta

REPRODUCTION CASE
This is an index.html page:


{code:java}
aws s3api --endpoint 
document.write(window.location.href.replace("static/", "")) 
create-bucket --bucket=wordcount
{code}


The attack vector is:
index.html?#alert('XSS');

* PoC:
For your convenience, a minimalist PoC is located on:
http://security.onofri.org/xss_location.html?#alert('XSS');

* References
- DOM Based Cross-Site Scripting or XSS of the Third Kind - 
http://www.webappsec.org/projects/articles/071105.shtml


Reference:

https://bugs.chromium.org/p/chromium/issues/detail?id=76796

  was:
VULNERABILITY DETAILS
There is a way to bypass the anti-XSS filter for DOM XSS by exploiting 
"window.location.href".

Considering a typical URL:

scheme://domain:port/path?query_string#fragment_id

Browsers correctly encode both the "path" and the "query_string", but not the 
"fragment_id". 

So if the "fragment_id" is used, the attack vector is also not logged on the web server.

VERSION
Chrome Version: 10.0.648.134 (Official Build 77917) beta

REPRODUCTION CASE
This is an index.html page:


{code:java}
aws s3api --endpoint 
document.write(window.location.href.replace("static/", "")) 
create-bucket --bucket=wordcount
{code}


The attack vector is:
index.html?#alert('XSS');

* PoC:
For your convenience, a minimalist PoC is located on:
http://security.onofri.org/xss_location.html?#alert('XSS');

* References
- DOM Based Cross-Site Scripting or XSS of the Third Kind - 
http://www.webappsec.org/projects/articles/071105.shtml


Reference:

https://bugs.chromium.org/p/chromium/issues/detail?id=76796


> DOM XSS
> ---
>
> Key: HDDS-2111
> URL: https://issues.apache.org/jira/browse/HDDS-2111
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Aayush
>Priority: Major
>
> VULNERABILITY DETAILS
> There is a way to bypass the anti-XSS filter for DOM XSS by exploiting 
> "window.location.href".
> Considering a typical URL:
> scheme://domain:port/path?query_string#fragment_id
> Browsers correctly encode both the "path" and the "query_string", but not the 
> "fragment_id". 
> So if the "fragment_id" is used, the attack vector is also not logged on the web server.
> VERSION
> Chrome Version: 10.0.648.134 (Official Build 77917) beta
> REPRODUCTION CASE
> This is an index.html page:
> {code:java}
> aws s3api --endpoint 
> document.write(window.location.href.replace("static/", "")) 
> create-bucket --bucket=wordcount
> {code}
> The attack vector is:
> index.html?#alert('XSS');
> * PoC:
> For your convenience, a minimalist PoC is located on:
> http://security.onofri.org/xss_location.html?#alert('XSS');
> * References
> - DOM Based Cross-Site Scripting or XSS of the Third Kind - 
> http://www.webappsec.org/projects/articles/071105.shtml
> Reference:
> https://bugs.chromium.org/p/chromium/issues/detail?id=76796



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14303) Check block directory logic is not correct when there is only a meta file; prints a meaningless warn log

2019-09-13 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929630#comment-16929630
 ] 

Hadoop QA commented on HDFS-14303:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
46s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} branch-3.2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}102m 50s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}162m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
|   | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.hdfs.server.diskbalancer.TestDiskBalancer |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:63396beab41 |
| JIRA Issue | HDFS-14303 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12980309/HDFS-14303-branch-3.2-addendum-04.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c0d99ba2827d 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-3.2 / d39ebbf |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27872/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27872/testReport/ |
| Max. 

[jira] [Work logged] (HDDS-2128) Make ozone sh command work with OM HA service ids

2019-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2128?focusedWorklogId=312444&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312444
 ]

ASF GitHub Bot logged work on HDDS-2128:


Author: ASF GitHub Bot
Created on: 13/Sep/19 23:41
Start Date: 13/Sep/19 23:41
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1445: HDDS-2128. Make 
ozone sh command work with OM HA service ids
URL: https://github.com/apache/hadoop/pull/1445#issuecomment-531422743
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 40 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 31 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 51 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 814 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 18 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 18 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 146 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 26 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 33 | hadoop-ozone in the patch failed. |
   | -1 | compile | 25 | hadoop-ozone in the patch failed. |
   | -1 | javac | 25 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 56 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 650 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 17 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 18 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 27 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 127 | hadoop-hdds in the patch failed. |
   | -1 | unit | 28 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 2735 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.keyvalue.TestKeyValueContainer 
|
   |   | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1445/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1445 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 60fc3e546740 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6a9f7ca |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1445/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1445/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1445/1/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1445/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1445/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1445/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1445/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1445/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1445/1/artifact/out/patch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1445/1/artifact/out/patch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1445/1/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 

[jira] [Commented] (HDDS-1868) Ozone pipelines should be marked as ready only after the leader election is complete

2019-09-13 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929605#comment-16929605
 ] 

Hadoop QA commented on HDDS-1868:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 11s{color} 
| {color:red} HDDS-1868 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-1868 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12980313/HDDS-1868.03.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2780/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |


This message was automatically generated.



> Ozone pipelines should be marked as ready only after the leader election is 
> complete
> 
>
> Key: HDDS-1868
> URL: https://issues.apache.org/jira/browse/HDDS-1868
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1868.01.patch, HDDS-1868.02.patch, 
> HDDS-1868.03.patch
>
>
> On restart, Ozone pipelines start in the allocated state and are moved into the 
> open state after all the pipeline members have reported in. However, this can 
> potentially lead to an issue where the pipeline is still not ready to accept any 
> incoming IO operations.
> The pipelines should be marked as ready only after the leader election is 
> complete and the leader is ready to accept incoming IO.
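
For readers following along, a hedged sketch of what the proposed behaviour could look like on the SCM side; all type and method names below are assumptions for illustration, not the actual patch:

{code:java}
// Sketch: keep a pipeline in ALLOCATED until a datanode's pipeline report
// indicates that a Ratis leader has been elected for it, and only then move
// the pipeline to OPEN so it can accept incoming IO.
void onPipelineReport(PipelineReport report) throws IOException {
  Pipeline pipeline = stateManager.getPipeline(report.getPipelineID());
  if (pipeline.getPipelineState() == Pipeline.PipelineState.ALLOCATED
      && report.hasLeaderID()) {
    stateManager.updatePipelineState(
        report.getPipelineID(), Pipeline.PipelineState.OPEN);
  }
}
{code}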



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1868) Ozone pipelines should be marked as ready only after the leader election is complete

2019-09-13 Thread Siddharth Wagle (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDDS-1868:
--
Attachment: (was: HDDS-1868.03.patch)

> Ozone pipelines should be marked as ready only after the leader election is 
> complete
> 
>
> Key: HDDS-1868
> URL: https://issues.apache.org/jira/browse/HDDS-1868
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1868.01.patch, HDDS-1868.02.patch, 
> HDDS-1868.03.patch
>
>
> On restart, Ozone pipelines start in the allocated state and are moved into the 
> open state after all the pipeline members have reported in. However, this can 
> potentially lead to an issue where the pipeline is still not ready to accept any 
> incoming IO operations.
> The pipelines should be marked as ready only after the leader election is 
> complete and the leader is ready to accept incoming IO.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1868) Ozone pipelines should be marked as ready only after the leader election is complete

2019-09-13 Thread Siddharth Wagle (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDDS-1868:
--
Attachment: HDDS-1868.03.patch

> Ozone pipelines should be marked as ready only after the leader election is 
> complete
> 
>
> Key: HDDS-1868
> URL: https://issues.apache.org/jira/browse/HDDS-1868
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1868.01.patch, HDDS-1868.02.patch, 
> HDDS-1868.03.patch
>
>
> On restart, Ozone pipelines start in the allocated state and are moved into the 
> open state after all the pipeline members have reported in. However, this can 
> potentially lead to an issue where the pipeline is still not ready to accept any 
> incoming IO operations.
> The pipelines should be marked as ready only after the leader election is 
> complete and the leader is ready to accept incoming IO.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1868) Ozone pipelines should be marked as ready only after the leader election is complete

2019-09-13 Thread Siddharth Wagle (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929599#comment-16929599
 ] 

Siddharth Wagle commented on HDDS-1868:
---

Hi [~ljain]/[~msingh], can you take a look? The patch needs RATIS-678 to be 
committed, so I will add a unit test on Monday, but the Ratis snapshot needs to 
be updated first.

> Ozone pipelines should be marked as ready only after the leader election is 
> complete
> 
>
> Key: HDDS-1868
> URL: https://issues.apache.org/jira/browse/HDDS-1868
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1868.01.patch, HDDS-1868.02.patch, 
> HDDS-1868.03.patch
>
>
> On restart, Ozone pipelines start in the allocated state and are moved into the 
> open state after all the pipeline members have reported in. However, this can 
> potentially lead to an issue where the pipeline is still not ready to accept any 
> incoming IO operations.
> The pipelines should be marked as ready only after the leader election is 
> complete and the leader is ready to accept incoming IO.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1868) Ozone pipelines should be marked as ready only after the leader election is complete

2019-09-13 Thread Siddharth Wagle (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDDS-1868:
--
Attachment: HDDS-1868.03.patch

> Ozone pipelines should be marked as ready only after the leader election is 
> complete
> 
>
> Key: HDDS-1868
> URL: https://issues.apache.org/jira/browse/HDDS-1868
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1868.01.patch, HDDS-1868.02.patch, 
> HDDS-1868.03.patch
>
>
> On restart, Ozone pipelines start in the allocated state and are moved into the 
> open state after all the pipeline members have reported in. However, this can 
> potentially lead to an issue where the pipeline is still not ready to accept any 
> incoming IO operations.
> The pipelines should be marked as ready only after the leader election is 
> complete and the leader is ready to accept incoming IO.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2114) Rename does not preserve non-explicitly created interim directories

2019-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2114?focusedWorklogId=312424&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312424
 ]

ASF GitHub Bot logged work on HDDS-2114:


Author: ASF GitHub Bot
Created on: 13/Sep/19 22:58
Start Date: 13/Sep/19 22:58
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1440: HDDS-2114: 
Rename does not preserve non-explicitly created interim directories
URL: https://github.com/apache/hadoop/pull/1440#discussion_r324392898
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java
 ##
 @@ -374,7 +374,11 @@ public boolean rename(Path src, Path dst) throws 
IOException {
   }
 }
 RenameIterator iterator = new RenameIterator(src, dst);
-return iterator.iterate();
+boolean result = iterator.iterate();
+if (result) {
+  createFakeParentDirectory(src);
 
 Review comment:
   should we createFakeParentDirectory for dst as well, since it is the rename result? 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 312424)
Time Spent: 40m  (was: 0.5h)

> Rename does not preserve non-explicitly created interim directories
> ---
>
> Key: HDDS-2114
> URL: https://issues.apache.org/jira/browse/HDDS-2114
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Istvan Fajth
>Assignee: Lokesh Jain
>Priority: Critical
>  Labels: pull-request-available
> Attachments: demonstrative_test.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> I am attaching a patch that adds a test demonstrating the problem.
> The scenario comes from the way Hive implements ACID transactions with the 
> ORC table format, but the test is reduced to the simplest possible code that 
> reproduces the issue.
> The scenario:
>  * Given a 3-level directory structure, where the top-level directory was 
> explicitly created and the interim directory was implicitly created (for 
> example either by creating a file with create("/top/interim/file") or by 
> creating a directory with mkdirs("top/interim/dir"))
>  * When the leaf is moved out of the implicitly created directory, making 
> that directory empty
>  * Then a FileNotFoundException is thrown when getFileStatus or listStatus is 
> called on the interim directory.
> The expected behaviour:
> after the directory becomes empty, it should still be part of the file 
> system; moreover, an empty FileStatus array should be returned when 
> listStatus is called on it, and a valid FileStatus object should be 
> returned when getFileStatus is called on it.
> As this issue is present with Hive, and as this is how a FileSystem is 
> expected to work, this seems to be at least a critical issue; please 
> feel free to change the priority if needed.
> Also please note that if the interim directory is explicitly created with 
> mkdirs("top/interim") before creating the leaf, the issue does not 
> appear.
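
The description above translates almost directly into a self-contained reproduction against the FileSystem API (paths taken from the description; for Ozone the default filesystem would be an o3fs:// URI):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RenameInterimDirRepro {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    fs.mkdirs(new Path("/top"));                       // explicitly created top level
    fs.create(new Path("/top/interim/file")).close();  // implicitly created interim dir
    fs.rename(new Path("/top/interim/file"), new Path("/top/file"));
    // Expected: a valid FileStatus and an empty listing.
    // Reported on OzoneFileSystem: FileNotFoundException.
    fs.getFileStatus(new Path("/top/interim"));
    fs.listStatus(new Path("/top/interim"));
  }
}
{code}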



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2114) Rename does not preserve non-explicitly created interim directories

2019-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2114?focusedWorklogId=312420&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312420
 ]

ASF GitHub Bot logged work on HDDS-2114:


Author: ASF GitHub Bot
Created on: 13/Sep/19 22:55
Start Date: 13/Sep/19 22:55
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1440: HDDS-2114: 
Rename does not preserve non-explicitly created interim directories
URL: https://github.com/apache/hadoop/pull/1440#discussion_r324392898
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java
 ##
 @@ -374,7 +374,11 @@ public boolean rename(Path src, Path dst) throws 
IOException {
   }
 }
 RenameIterator iterator = new RenameIterator(src, dst);
-return iterator.iterate();
+boolean result = iterator.iterate();
+if (result) {
+  createFakeParentDirectory(src);
 
 Review comment:
   should we createFakeParentDirectory for dst as well, since it is the rename result? 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 312420)
Time Spent: 0.5h  (was: 20m)

> Rename does not preserve non-explicitly created interim directories
> ---
>
> Key: HDDS-2114
> URL: https://issues.apache.org/jira/browse/HDDS-2114
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Istvan Fajth
>Assignee: Lokesh Jain
>Priority: Critical
>  Labels: pull-request-available
> Attachments: demonstrative_test.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> I am attaching a patch that adds a test demonstrating the problem.
> The scenario comes from the way Hive implements ACID transactions with the 
> ORC table format, but the test is reduced to the simplest possible code that 
> reproduces the issue.
> The scenario:
>  * Given a 3-level directory structure, where the top-level directory was 
> explicitly created and the interim directory was implicitly created (for 
> example either by creating a file with create("/top/interim/file") or by 
> creating a directory with mkdirs("top/interim/dir"))
>  * When the leaf is moved out of the implicitly created directory, making 
> that directory empty
>  * Then a FileNotFoundException is thrown when getFileStatus or listStatus is 
> called on the interim directory.
> The expected behaviour:
> after the directory becomes empty, it should still be part of the file 
> system; moreover, an empty FileStatus array should be returned when 
> listStatus is called on it, and a valid FileStatus object should be 
> returned when getFileStatus is called on it.
> As this issue is present with Hive, and as this is how a FileSystem is 
> expected to work, this seems to be at least a critical issue; please 
> feel free to change the priority if needed.
> Also please note that if the interim directory is explicitly created with 
> mkdirs("top/interim") before creating the leaf, the issue does not 
> appear.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14762) "Path(Path/String parent, String child)" will fail when "child" contains ":"

2019-09-13 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929575#comment-16929575
 ] 

Ayush Saxena commented on HDFS-14762:
-

Would just having this work?

{code:java}
// add "./" in front of Linux relative paths so that a path containing
// a colon e.q. "a:b" will not be interpreted as scheme "a".
if (!WINDOWS && path.length() > 0 && path.charAt(0) != '/') {
  path = "./" + path;
}
{code}


> "Path(Path/String parent, String child)" will fail when "child" contains ":"
> 
>
> Key: HDFS-14762
> URL: https://issues.apache.org/jira/browse/HDFS-14762
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Shixiong Zhu
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14762.001.patch, HDFS-14762.002.patch, 
> HDFS-14762.003.patch
>
>
> When the "child" parameter contains ":", "Path(Path/String parent, String 
> child)" will throw the following exception:
> {code}
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
> path in absolute URI: ...
> {code}
> Not sure if this is a legit bug. But the following places will hit this error 
> when seeing a Path with a file name containing ":":
> https://github.com/apache/hadoop/blob/f9029c4070e8eb046b403f5cb6d0a132c5d58448/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java#L101
> https://github.com/apache/hadoop/blob/f9029c4070e8eb046b403f5cb6d0a132c5d58448/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java#L270
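
A minimal illustration of the failure described above:

{code:java}
import org.apache.hadoop.fs.Path;

public class ColonPathExample {
  public static void main(String[] args) {
    Path parent = new Path("/user/test");
    // Throws IllegalArgumentException caused by URISyntaxException
    // ("Relative path in absolute URI"), because "a:" is parsed as a URI scheme.
    Path child = new Path(parent, "a:b");
    System.out.println(child);
  }
}
{code}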



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2128) Make ozone sh command work with OM HA service ids

2019-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2128?focusedWorklogId=312407&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312407
 ]

ASF GitHub Bot logged work on HDDS-2128:


Author: ASF GitHub Bot
Created on: 13/Sep/19 22:35
Start Date: 13/Sep/19 22:35
Worklog Time Spent: 10m 
  Work Description: smengcl commented on pull request #1445: HDDS-2128. 
Make ozone sh command work with OM HA service ids
URL: https://github.com/apache/hadoop/pull/1445
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 312407)
Remaining Estimate: 0h
Time Spent: 10m

> Make ozone sh command work with OM HA service ids
> -
>
> Key: HDDS-2128
> URL: https://issues.apache.org/jira/browse/HDDS-2128
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Now that HDDS-2007 is committed, I can use some common helper functions to 
> make this work.
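
For context, a hedged sketch of what using a service id from the shell path could look like on the client side; the getRpcClient overload and the service id value are assumptions based on the HDDS-2007 work referenced above, not the final API:

{code:java}
import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.ozone.client.OzoneClient;
import org.apache.hadoop.ozone.client.OzoneClientFactory;

public class OmHaClientSketch {
  public static void main(String[] args) throws Exception {
    // "omServiceId1" would be an OM HA service id defined in ozone-site.xml;
    // the failover proxy provider resolves it to the actual OM hosts.
    OzoneConfiguration conf = new OzoneConfiguration();
    OzoneClient client = OzoneClientFactory.getRpcClient("omServiceId1", conf);
    client.getObjectStore().createVolume("vol1");
    client.close();
  }
}
{code}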



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2128) Make ozone sh command work with OM HA service ids

2019-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2128:
-
Labels: pull-request-available  (was: )

> Make ozone sh command work with OM HA service ids
> -
>
> Key: HDDS-2128
> URL: https://issues.apache.org/jira/browse/HDDS-2128
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>
> Now that HDDS-2007 is committed, I can use some common helper functions to 
> make this work.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2078) Get/Renew DelegationToken NPE after HDDS-1909

2019-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2078?focusedWorklogId=312405&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312405
 ]

ASF GitHub Bot logged work on HDDS-2078:


Author: ASF GitHub Bot
Created on: 13/Sep/19 22:31
Start Date: 13/Sep/19 22:31
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1444: HDDS-2078. 
Get/Renew DelegationToken NPE after HDDS-1909. Contributed…
URL: https://github.com/apache/hadoop/pull/1444#issuecomment-531410808
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 1117 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 30 | hadoop-ozone in trunk failed. |
   | -1 | compile | 23 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 48 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 808 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 17 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 18 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 148 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 25 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 32 | hadoop-ozone in the patch failed. |
   | -1 | compile | 25 | hadoop-ozone in the patch failed. |
   | -1 | javac | 25 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 55 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 665 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 18 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 18 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 27 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 130 | hadoop-hdds in the patch failed. |
   | -1 | unit | 29 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 3815 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.keyvalue.TestKeyValueContainer 
|
   |   | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1444/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1444 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 67e4ed679fe3 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6a9f7ca |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1444/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1444/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1444/1/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1444/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1444/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1444/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1444/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1444/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1444/1/artifact/out/patch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1444/1/artifact/out/patch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1444/1/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 

[jira] [Commented] (HDFS-14836) FileIoProvider should not increase FileIoErrors metric in datanode volume metric

2019-09-13 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929569#comment-16929569
 ] 

Wei-Chiu Chuang commented on HDFS-14836:


bq. So why not keep consistency with HDFS-2054?
Ok. Sounds good to me.

> FileIoProvider should not increase FileIoErrors metric in datanode volume 
> metric
> 
>
> Key: HDFS-14836
> URL: https://issues.apache.org/jira/browse/HDFS-14836
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Aiphago
>Assignee: Aiphago
>Priority: Minor
> Attachments: HDFS-14836.patch
>
>
> I found that the FileIoErrors metric increases in 
> BlockSender.sendPacket() when fileIoProvider.transferToSocketFully() is used. But 
> in https://issues.apache.org/jira/browse/HDFS-2054, exceptions such as 
> "Broken pipe" and "Connection reset" are ignored.
> So should we filter these exceptions when fileIoProvider increases the FileIoErrors count?
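
A hedged sketch of the filtering being suggested; the helper and the call site below are illustrative assumptions, mirroring the exception handling referenced from HDFS-2054:

{code:java}
// Hypothetical helper: treat client disconnects as benign rather than disk faults.
private static boolean isClientDisconnect(IOException e) {
  String msg = e.getMessage();
  return msg != null
      && (msg.startsWith("Broken pipe") || msg.startsWith("Connection reset"));
}

// At the transfer call site (sketch): only bump FileIoErrors for real faults.
// } catch (IOException e) {
//   if (!isClientDisconnect(e)) {
//     onFailure(volume, begin);  // increments the FileIoErrors volume metric
//   }
//   throw e;
// }
{code}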



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2129) Using dist profile fails with pom.ozone.xml as parent pom

2019-09-13 Thread Arpit Agarwal (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929567#comment-16929567
 ] 

Arpit Agarwal commented on HDDS-2129:
-

The initial error is the following:
{code}
[ERROR] [ERROR] Some problems were encountered while processing the POMs:
[ERROR] 'build.plugins.plugin.version' for 
org.apache.maven.plugins:maven-javadoc-plugin must be a valid version but is 
'${maven-javadoc-plugin.version}'. @ line 1604, column 20
 @
[ERROR] The build could not read 1 project -> [Help 1]
[ERROR]
[ERROR]   The project org.apache.hadoop:hadoop-main-ozone:0.5.0-SNAPSHOT 
(/Users/agarwal/src/hadoop/pom.ozone.xml) has 1 error
[ERROR] 'build.plugins.plugin.version' for 
org.apache.maven.plugins:maven-javadoc-plugin must be a valid version but is 
'${maven-javadoc-plugin.version}'. @ line 1604, column 20
[ERROR]
{code}

After fixing this error by adding {{maven-javadoc-plugin.version}} to 
pom.ozone.xml, the build fails later with:

{code}
[ERROR] 
/Users/agarwal/src/hadoop/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/protocolPB/ScmBlockLocationProtocolServerSideTranslatorPB.java:79:
 warning: no @param for metrics
[ERROR]   public ScmBlockLocationProtocolServerSideTranslatorPB(
[ERROR]  ^
[ERROR] 
/Users/agarwal/src/hadoop/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/protocolPB/ScmBlockLocationProtocolServerSideTranslatorPB.java:79:
 warning: no @throws for java.io.IOException
{code}

My guess is we need to configure some exclusions for javadoc warnings.
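
Alongside exclusions, another option would be to add the missing tags; a hedged sketch for the constructor named in the output (the parameter list is assumed for illustration; only the metrics parameter and IOException are confirmed by the warnings above):

{code:java}
/**
 * Sketch only: documenting every parameter and the declared exception
 * satisfies the doclint checks quoted above.
 *
 * @param impl the server-side protocol implementation (assumed parameter)
 * @param metrics metrics for the protocol messages
 * @throws IOException if the translator cannot be initialized
 */
public ScmBlockLocationProtocolServerSideTranslatorPB(
    ScmBlockLocationProtocol impl,
    ProtocolMessageMetrics metrics) throws IOException {
  this.impl = impl;        // fields assumed for the sketch
  this.metrics = metrics;
}
{code}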

> Using dist profile fails with pom.ozone.xml as parent pom
> -
>
> Key: HDDS-2129
> URL: https://issues.apache.org/jira/browse/HDDS-2129
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Priority: Major
>
> The build fails with the {{dist}} profile. Details in a comment below.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14303) Check block directory logic is not correct when there is only a meta file; prints a meaningless warn log

2019-09-13 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14303:

Attachment: HDFS-14303-branch-3.2-addendum-04.patch

> Check block directory logic is not correct when there is only a meta file; 
> prints a meaningless warn log
> ---
>
> Key: HDFS-14303
> URL: https://issues.apache.org/jira/browse/HDFS-14303
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Affects Versions: 2.7.3, 3.2.0, 2.9.2, 2.8.5
> Environment: env free
>Reporter: qiang Liu
>Assignee: qiang Liu
>Priority: Minor
>  Labels: easy-fix
> Fix For: 2.10.0, 3.0.4, 3.3.0, 2.8.6, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HDFS-14303-addendnum-branch-2.01.patch, 
> HDFS-14303-addendum-01.patch, HDFS-14303-addendum-02.patch, 
> HDFS-14303-addendum-branch-3.2-04.patch, 
> HDFS-14303-addendum-branch-3.2.04.patch, HDFS-14303-branch-2.005.patch, 
> HDFS-14303-branch-2.009.patch, HDFS-14303-branch-2.010.patch, 
> HDFS-14303-branch-2.015.patch, HDFS-14303-branch-2.017.patch, 
> HDFS-14303-branch-2.7.001.patch, HDFS-14303-branch-2.7.004.patch, 
> HDFS-14303-branch-2.7.006.patch, HDFS-14303-branch-2.9.011.patch, 
> HDFS-14303-branch-2.9.012.patch, HDFS-14303-branch-2.9.013.patch, 
> HDFS-14303-branch-2.addendum.02.patch, 
> HDFS-14303-branch-3.2-addendum-04.patch, 
> HDFS-14303-branch-3.2.addendum.03.patch, HDFS-14303-trunk.014.patch, 
> HDFS-14303-trunk.015.patch, HDFS-14303-trunk.016.patch, 
> HDFS-14303-trunk.016.path, HDFS-14303.branch-3.2.017.patch
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> The check-block-directory logic is not correct when there is only a meta file; 
> it prints a meaningless warn log, e.g.:
>  WARN DirectoryScanner:? - Block: 1101939874 has to be upgraded to block 
> ID-based layout. Actual block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68,
>  expected block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68/subdir68
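
A hedged sketch of the direction the fix implies; the variable names are hypothetical, and DatanodeUtil#idToBlockDir is the existing helper that computes the ID-based directory:

{code:java}
// Sketch: judge the layout against whichever file actually exists, so a
// meta-only block does not trigger the misleading upgrade warning with a
// nonsensical "actual block file path".
File file = (blockFile != null) ? blockFile : metaFile;
File expectedDir = DatanodeUtil.idToBlockDir(finalizedDir, blockId);
if (file != null && !expectedDir.equals(file.getParentFile())) {
  LOG.warn("Block: {} has to be upgraded to block ID-based layout. "
      + "Actual file path: {}, expected block file path: {}",
      blockId, file.getParentFile(), expectedDir);
}
{code}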



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14303) Check block directory logic is not correct when there is only a meta file; prints a meaningless warn log

2019-09-13 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14303:

Attachment: (was: HDFS-14303-branch-3.2-addendum-04.patch)

> Check block directory logic is not correct when there is only a meta file; 
> prints a meaningless warn log
> ---
>
> Key: HDFS-14303
> URL: https://issues.apache.org/jira/browse/HDFS-14303
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Affects Versions: 2.7.3, 3.2.0, 2.9.2, 2.8.5
> Environment: env free
>Reporter: qiang Liu
>Assignee: qiang Liu
>Priority: Minor
>  Labels: easy-fix
> Fix For: 2.10.0, 3.0.4, 3.3.0, 2.8.6, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HDFS-14303-addendnum-branch-2.01.patch, 
> HDFS-14303-addendum-01.patch, HDFS-14303-addendum-02.patch, 
> HDFS-14303-addendum-branch-3.2-04.patch, 
> HDFS-14303-addendum-branch-3.2.04.patch, HDFS-14303-branch-2.005.patch, 
> HDFS-14303-branch-2.009.patch, HDFS-14303-branch-2.010.patch, 
> HDFS-14303-branch-2.015.patch, HDFS-14303-branch-2.017.patch, 
> HDFS-14303-branch-2.7.001.patch, HDFS-14303-branch-2.7.004.patch, 
> HDFS-14303-branch-2.7.006.patch, HDFS-14303-branch-2.9.011.patch, 
> HDFS-14303-branch-2.9.012.patch, HDFS-14303-branch-2.9.013.patch, 
> HDFS-14303-branch-2.addendum.02.patch, 
> HDFS-14303-branch-3.2-addendum-04.patch, 
> HDFS-14303-branch-3.2.addendum.03.patch, HDFS-14303-trunk.014.patch, 
> HDFS-14303-trunk.015.patch, HDFS-14303-trunk.016.patch, 
> HDFS-14303-trunk.016.path, HDFS-14303.branch-3.2.017.patch
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> The check-block-directory logic is not correct when there is only a meta file; 
> it prints a meaningless warn log, e.g.:
>  WARN DirectoryScanner:? - Block: 1101939874 has to be upgraded to block 
> ID-based layout. Actual block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68,
>  expected block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68/subdir68



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2129) Using dist profile fails with pom.ozone.xml as parent pom

2019-09-13 Thread Arpit Agarwal (Jira)
Arpit Agarwal created HDDS-2129:
---

 Summary: Using dist profile fails with pom.ozone.xml as parent pom
 Key: HDDS-2129
 URL: https://issues.apache.org/jira/browse/HDDS-2129
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Arpit Agarwal


The build fails with the {{dist}} profile. Details in a comment below.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14303) Check block directory logic is not correct when there is only a meta file; prints a meaningless warn log

2019-09-13 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14303:

Attachment: HDFS-14303-branch-3.2-addendum-04.patch

> Check block directory logic is not correct when there is only a meta file; 
> prints a meaningless warn log
> ---
>
> Key: HDFS-14303
> URL: https://issues.apache.org/jira/browse/HDFS-14303
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Affects Versions: 2.7.3, 3.2.0, 2.9.2, 2.8.5
> Environment: env free
>Reporter: qiang Liu
>Assignee: qiang Liu
>Priority: Minor
>  Labels: easy-fix
> Fix For: 2.10.0, 3.0.4, 3.3.0, 2.8.6, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HDFS-14303-addendnum-branch-2.01.patch, 
> HDFS-14303-addendum-01.patch, HDFS-14303-addendum-02.patch, 
> HDFS-14303-addendum-branch-3.2-04.patch, 
> HDFS-14303-addendum-branch-3.2.04.patch, HDFS-14303-branch-2.005.patch, 
> HDFS-14303-branch-2.009.patch, HDFS-14303-branch-2.010.patch, 
> HDFS-14303-branch-2.015.patch, HDFS-14303-branch-2.017.patch, 
> HDFS-14303-branch-2.7.001.patch, HDFS-14303-branch-2.7.004.patch, 
> HDFS-14303-branch-2.7.006.patch, HDFS-14303-branch-2.9.011.patch, 
> HDFS-14303-branch-2.9.012.patch, HDFS-14303-branch-2.9.013.patch, 
> HDFS-14303-branch-2.addendum.02.patch, 
> HDFS-14303-branch-3.2-addendum-04.patch, 
> HDFS-14303-branch-3.2.addendum.03.patch, HDFS-14303-trunk.014.patch, 
> HDFS-14303-trunk.015.patch, HDFS-14303-trunk.016.patch, 
> HDFS-14303-trunk.016.path, HDFS-14303.branch-3.2.017.patch
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> The check-block-directory logic is not correct when there is only a meta file; 
> it prints a meaningless warn log, e.g.:
>  WARN DirectoryScanner:? - Block: 1101939874 has to be upgraded to block 
> ID-based layout. Actual block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68,
>  expected block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68/subdir68



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2095) Submit MR job to YARN failed, error message is "Provider org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer not found"

2019-09-13 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-2095:
-
Fix Version/s: 0.4.1
   Resolution: Duplicate
   Status: Resolved  (was: Patch Available)

Thanks [~Huachao] for reporting the issue and providing the patch. A similar 
fix was merged last week.

> Submit MR job to YARN failed, error message is "Provider 
> org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer not found"
> ---
>
> Key: HDDS-2095
> URL: https://issues.apache.org/jira/browse/HDDS-2095
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.4.1
>Reporter: luhuachao
>Priority: Major
>  Labels: kerberos
> Fix For: 0.4.1
>
> Attachments: HDDS-2095.001.patch
>
>
> Below is the submit command:
> {code:java}
> hadoop jar hadoop-mapreduce-client-jobclient-3.2.0-tests.jar  nnbench 
> -Dfs.defaultFS=o3fs://buc.volume-test  -maps 3   -bytesToWrite 1 
> -numberOfFiles 1000  -blockSize 16  -operation create_write
> {code}
> The client fails with the following message:
> {code:java}
> 19/09/06 15:26:52 INFO mapreduce.JobSubmitter: Cleaning up the staging area 
> /user/hdfs/.staging/job_1567754782562_000119/09/06 15:26:52 INFO 
> mapreduce.JobSubmitter: Cleaning up the staging area 
> /user/hdfs/.staging/job_1567754782562_0001java.io.IOException: 
> org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit 
> application_1567754782562_0001 to YARN : 
> org.apache.hadoop.security.token.TokenRenewer: Provider 
> org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer not found at 
> org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:345) at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:254)
>  at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570) at 
> org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
>  at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567) at 
> org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:576) at 
> org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:571) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
>  at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:571) 
> at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:562) at 
> org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:873) at 
> org.apache.hadoop.hdfs.NNBench.runTests(NNBench.java:487) at 
> org.apache.hadoop.hdfs.NNBench.run(NNBench.java:604) at 
> org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) at 
> org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) at 
> org.apache.hadoop.hdfs.NNBench.main(NNBench.java:579) at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
>  at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144) at 
> org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:144) at 
> org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:152) at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.apache.hadoop.util.RunJar.run(RunJar.java:308) at 
> org.apache.hadoop.util.RunJar.main(RunJar.java:222)Caused by: 
> org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit 
> application_1567754782562_0001 to YARN : 
> org.apache.hadoop.security.token.TokenRenewer: Provider 
> org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer not found at 
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:304)
>  at 
> org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:299)
>  at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:330) ... 34 
> more
> {code}
> The log in the ResourceManager:
> {code:java}
> 2019-09-06 15:26:51,836 

[jira] [Created] (HDDS-2128) Make ozone sh command work with OM HA service ids

2019-09-13 Thread Siyao Meng (Jira)
Siyao Meng created HDDS-2128:


 Summary: Make ozone sh command work with OM HA service ids
 Key: HDDS-2128
 URL: https://issues.apache.org/jira/browse/HDDS-2128
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Siyao Meng
Assignee: Siyao Meng


Now that HDDS-2007 is committed, I can use some common helper functions to make 
this work.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2104) Refactor OMFailoverProxyProvider#loadOMClientConfigs

2019-09-13 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-2104:
-
Parent: HDDS-505
Issue Type: Sub-task  (was: Improvement)

> Refactor OMFailoverProxyProvider#loadOMClientConfigs
> 
>
> Key: HDDS-2104
> URL: https://issues.apache.org/jira/browse/HDDS-2104
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Ref: https://github.com/apache/hadoop/pull/1360#discussion_r321586979
> Now that we have decided to use client-side configuration for OM HA, some logic in 
> OMFailoverProxyProvider#loadOMClientConfigs becomes redundant.
> The work will begin after HDDS-2007 is committed.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2105) Merge OzoneClientFactory#getRpcClient functions

2019-09-13 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-2105:
-
Parent: HDDS-505
Issue Type: Sub-task  (was: Improvement)

> Merge OzoneClientFactory#getRpcClient functions
> ---
>
> Key: HDDS-2105
> URL: https://issues.apache.org/jira/browse/HDDS-2105
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Ref: https://github.com/apache/hadoop/pull/1360#discussion_r321585214
> There will be 5 overloaded OzoneClientFactory#getRpcClient functions (once 
> HDDS-2007 is committed). They contain some redundant logic and unnecessarily 
> increase the number of code paths.
> Goal: Merge those functions into one or two.
> Work will begin after HDDS-2007 is committed.
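
The usual shape of such a merge is one canonical method that does all the work, 
with the remaining overloads reduced to thin delegates. A hedged sketch (names 
and signatures are assumptions, not the actual OzoneClientFactory API):

{code:java}
// Sketch only; hypothetical signatures, not the real factory API.
public static OzoneClient getRpcClient(String omServiceId, Configuration conf)
    throws IOException {
  // the single canonical path: validation, OM address resolution and
  // RpcClient construction all live here
  return createRpcClient(omServiceId, conf);  // hypothetical helper
}

public static OzoneClient getRpcClient(Configuration conf) throws IOException {
  // thin delegate: null service id means non-HA / default resolution
  return getRpcClient(null, conf);
}
{code}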



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14303) check block directory logic not correct when there is only meta file, print no meaning warn log

2019-09-13 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929557#comment-16929557
 ] 

Hadoop QA commented on HDFS-14303:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HDFS-14303 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-14303 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12980305/HDFS-14303-addendum-branch-3.2.04.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27871/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> check block directory logic not correct when there is only meta file, print 
> no meaning warn log
> ---
>
> Key: HDFS-14303
> URL: https://issues.apache.org/jira/browse/HDFS-14303
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Affects Versions: 2.7.3, 3.2.0, 2.9.2, 2.8.5
> Environment: env free
>Reporter: qiang Liu
>Assignee: qiang Liu
>Priority: Minor
>  Labels: easy-fix
> Fix For: 2.10.0, 3.0.4, 3.3.0, 2.8.6, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HDFS-14303-addendnum-branch-2.01.patch, 
> HDFS-14303-addendum-01.patch, HDFS-14303-addendum-02.patch, 
> HDFS-14303-addendum-branch-3.2-04.patch, 
> HDFS-14303-addendum-branch-3.2.04.patch, HDFS-14303-branch-2.005.patch, 
> HDFS-14303-branch-2.009.patch, HDFS-14303-branch-2.010.patch, 
> HDFS-14303-branch-2.015.patch, HDFS-14303-branch-2.017.patch, 
> HDFS-14303-branch-2.7.001.patch, HDFS-14303-branch-2.7.004.patch, 
> HDFS-14303-branch-2.7.006.patch, HDFS-14303-branch-2.9.011.patch, 
> HDFS-14303-branch-2.9.012.patch, HDFS-14303-branch-2.9.013.patch, 
> HDFS-14303-branch-2.addendum.02.patch, 
> HDFS-14303-branch-3.2.addendum.03.patch, HDFS-14303-trunk.014.patch, 
> HDFS-14303-trunk.015.patch, HDFS-14303-trunk.016.patch, 
> HDFS-14303-trunk.016.path, HDFS-14303.branch-3.2.017.patch
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> The check-block-directory logic is not correct when there is only a meta 
> file; it prints a meaningless warn log, e.g.:
>  WARN DirectoryScanner:? - Block: 1101939874 has to be upgraded to block 
> ID-based layout. Actual block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68,
>  expected block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68/subdir68
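
In other words, the warning only makes sense when a block file actually exists 
in the wrong layout location; with just a .meta file there is nothing to 
upgrade. A minimal sketch of that kind of guard (assumed for illustration, not 
the actual patch; the accessor is hypothetical):

{code:java}
// Sketch only: skip the layout-upgrade warning when just the meta file exists.
File blockFile = scanInfo.getBlockFile();   // hypothetical accessor
if (blockFile == null || !blockFile.exists()) {
  return;  // only a .meta file is present; the warning would be meaningless
}
{code}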



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14303) check block directory logic not correct when there is only meta file, print no meaning warn log

2019-09-13 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14303:

Attachment: HDFS-14303-addendum-branch-3.2.04.patch

> check block directory logic not correct when there is only meta file, print 
> no meaning warn log
> ---
>
> Key: HDFS-14303
> URL: https://issues.apache.org/jira/browse/HDFS-14303
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Affects Versions: 2.7.3, 3.2.0, 2.9.2, 2.8.5
> Environment: env free
>Reporter: qiang Liu
>Assignee: qiang Liu
>Priority: Minor
>  Labels: easy-fix
> Fix For: 2.10.0, 3.0.4, 3.3.0, 2.8.6, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HDFS-14303-addendnum-branch-2.01.patch, 
> HDFS-14303-addendum-01.patch, HDFS-14303-addendum-02.patch, 
> HDFS-14303-addendum-branch-3.2-04.patch, 
> HDFS-14303-addendum-branch-3.2.04.patch, HDFS-14303-branch-2.005.patch, 
> HDFS-14303-branch-2.009.patch, HDFS-14303-branch-2.010.patch, 
> HDFS-14303-branch-2.015.patch, HDFS-14303-branch-2.017.patch, 
> HDFS-14303-branch-2.7.001.patch, HDFS-14303-branch-2.7.004.patch, 
> HDFS-14303-branch-2.7.006.patch, HDFS-14303-branch-2.9.011.patch, 
> HDFS-14303-branch-2.9.012.patch, HDFS-14303-branch-2.9.013.patch, 
> HDFS-14303-branch-2.addendum.02.patch, 
> HDFS-14303-branch-3.2.addendum.03.patch, HDFS-14303-trunk.014.patch, 
> HDFS-14303-trunk.015.patch, HDFS-14303-trunk.016.patch, 
> HDFS-14303-trunk.016.path, HDFS-14303.branch-3.2.017.patch
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> The check-block-directory logic is not correct when there is only a meta 
> file; it prints a meaningless warn log, e.g.:
>  WARN DirectoryScanner:? - Block: 1101939874 has to be upgraded to block 
> ID-based layout. Actual block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68,
>  expected block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68/subdir68



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14303) check block directory logic not correct when there is only meta file, print no meaning warn log

2019-09-13 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929553#comment-16929553
 ] 

Hadoop QA commented on HDFS-14303:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HDFS-14303 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-14303 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12980304/HDFS-14303-addendum-branch-3.2-04.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27870/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> check block directory logic not correct when there is only meta file, print 
> no meaning warn log
> ---
>
> Key: HDFS-14303
> URL: https://issues.apache.org/jira/browse/HDFS-14303
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Affects Versions: 2.7.3, 3.2.0, 2.9.2, 2.8.5
> Environment: env free
>Reporter: qiang Liu
>Assignee: qiang Liu
>Priority: Minor
>  Labels: easy-fix
> Fix For: 2.10.0, 3.0.4, 3.3.0, 2.8.6, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HDFS-14303-addendnum-branch-2.01.patch, 
> HDFS-14303-addendum-01.patch, HDFS-14303-addendum-02.patch, 
> HDFS-14303-addendum-branch-3.2-04.patch, HDFS-14303-branch-2.005.patch, 
> HDFS-14303-branch-2.009.patch, HDFS-14303-branch-2.010.patch, 
> HDFS-14303-branch-2.015.patch, HDFS-14303-branch-2.017.patch, 
> HDFS-14303-branch-2.7.001.patch, HDFS-14303-branch-2.7.004.patch, 
> HDFS-14303-branch-2.7.006.patch, HDFS-14303-branch-2.9.011.patch, 
> HDFS-14303-branch-2.9.012.patch, HDFS-14303-branch-2.9.013.patch, 
> HDFS-14303-branch-2.addendum.02.patch, 
> HDFS-14303-branch-3.2.addendum.03.patch, HDFS-14303-trunk.014.patch, 
> HDFS-14303-trunk.015.patch, HDFS-14303-trunk.016.patch, 
> HDFS-14303-trunk.016.path, HDFS-14303.branch-3.2.017.patch
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> The check-block-directory logic is not correct when there is only a meta 
> file; it prints a meaningless warn log, e.g.:
>  WARN DirectoryScanner:? - Block: 1101939874 has to be upgraded to block 
> ID-based layout. Actual block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68,
>  expected block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68/subdir68



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14303) check block directory logic not correct when there is only meta file, print no meaning warn log

2019-09-13 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14303:

Attachment: HDFS-14303-addendum-branch-3.2-04.patch

> check block directory logic not correct when there is only meta file, print 
> no meaning warn log
> ---
>
> Key: HDFS-14303
> URL: https://issues.apache.org/jira/browse/HDFS-14303
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Affects Versions: 2.7.3, 3.2.0, 2.9.2, 2.8.5
> Environment: env free
>Reporter: qiang Liu
>Assignee: qiang Liu
>Priority: Minor
>  Labels: easy-fix
> Fix For: 2.10.0, 3.0.4, 3.3.0, 2.8.6, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HDFS-14303-addendnum-branch-2.01.patch, 
> HDFS-14303-addendum-01.patch, HDFS-14303-addendum-02.patch, 
> HDFS-14303-addendum-branch-3.2-04.patch, HDFS-14303-branch-2.005.patch, 
> HDFS-14303-branch-2.009.patch, HDFS-14303-branch-2.010.patch, 
> HDFS-14303-branch-2.015.patch, HDFS-14303-branch-2.017.patch, 
> HDFS-14303-branch-2.7.001.patch, HDFS-14303-branch-2.7.004.patch, 
> HDFS-14303-branch-2.7.006.patch, HDFS-14303-branch-2.9.011.patch, 
> HDFS-14303-branch-2.9.012.patch, HDFS-14303-branch-2.9.013.patch, 
> HDFS-14303-branch-2.addendum.02.patch, 
> HDFS-14303-branch-3.2.addendum.03.patch, HDFS-14303-trunk.014.patch, 
> HDFS-14303-trunk.015.patch, HDFS-14303-trunk.016.patch, 
> HDFS-14303-trunk.016.path, HDFS-14303.branch-3.2.017.patch
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> The check-block-directory logic is not correct when there is only a meta 
> file; it prints a meaningless warn log, e.g.:
>  WARN DirectoryScanner:? - Block: 1101939874 has to be upgraded to block 
> ID-based layout. Actual block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68,
>  expected block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68/subdir68



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2078) Get/Renew DelegationToken NPE after HDDS-1909

2019-09-13 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-2078:
-
Status: Patch Available  (was: Open)

> Get/Renew DelegationToken NPE after HDDS-1909
> -
>
> Key: HDDS-2078
> URL: https://issues.apache.org/jira/browse/HDDS-2078
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> [https://github.com/elek/ozone-ci/blob/master/pr/pr-hdds-1909-plfbr/integration/hadoop-ozone/integration-test/org.apache.hadoop.ozone.TestSecureOzoneCluster.txt]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2078) Get/Renew DelegationToken NPE after HDDS-1909

2019-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2078:
-
Labels: pull-request-available  (was: )

> Get/Renew DelegationToken NPE after HDDS-1909
> -
>
> Key: HDDS-2078
> URL: https://issues.apache.org/jira/browse/HDDS-2078
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>
> [https://github.com/elek/ozone-ci/blob/master/pr/pr-hdds-1909-plfbr/integration/hadoop-ozone/integration-test/org.apache.hadoop.ozone.TestSecureOzoneCluster.txt]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2078) Get/Renew DelegationToken NPE after HDDS-1909

2019-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2078?focusedWorklogId=312352=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312352
 ]

ASF GitHub Bot logged work on HDDS-2078:


Author: ASF GitHub Bot
Created on: 13/Sep/19 21:26
Start Date: 13/Sep/19 21:26
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1444: HDDS-2078. 
Get/Renew DelegationToken NPE after HDDS-1909. Contributed…
URL: https://github.com/apache/hadoop/pull/1444
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 312352)
Remaining Estimate: 0h
Time Spent: 10m

> Get/Renew DelegationToken NPE after HDDS-1909
> -
>
> Key: HDDS-2078
> URL: https://issues.apache.org/jira/browse/HDDS-2078
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> [https://github.com/elek/ozone-ci/blob/master/pr/pr-hdds-1909-plfbr/integration/hadoop-ozone/integration-test/org.apache.hadoop.ozone.TestSecureOzoneCluster.txt]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1529) BlockInputStream: Avoid buffer copy if the whole chunk is being read

2019-09-13 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan reassigned HDDS-1529:
---

Assignee: Aravindan Vijayan  (was: Hrishikesh Gadre)

> BlockInputStream: Avoid buffer copy if the whole chunk is being read
> 
>
> Key: HDDS-1529
> URL: https://issues.apache.org/jira/browse/HDDS-1529
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Aravindan Vijayan
>Priority: Major
>
> Currently, BlockInputStream reads chunk data from DNs and puts it in a local 
> buffer and then copies the data to clients buffer. This is required for 
> partial chunk reads where extra chunk data than requested might have to be 
> read so that checksum verification can be done. But if the whole chunk is 
> being read, we can copy the data directly into client buffer and avoid double 
> buffer copies.
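
A hedged sketch of the idea (helper names are assumptions, not the actual 
BlockInputStream code):

{code:java}
// Sketch only: choose direct vs. buffered copy per read.
if (off == 0 && len == chunkLen) {
  // whole chunk requested: verify the checksum over the client's buffer
  // directly and skip the intermediate copy
  readChunkInto(clientBuffer);                      // hypothetical helper
} else {
  // partial read: fetch the checksum-aligned superset into a scratch
  // buffer, verify, then copy only the requested slice
  ByteBuffer scratch = readAlignedRange(off, len);  // hypothetical helper
  clientBuffer.put(scratch);
}
{code}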



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2078) Get/Renew DelegationToken NPE after HDDS-1909

2019-09-13 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDDS-2078:


Assignee: Xiaoyu Yao

> Get/Renew DelegationToken NPE after HDDS-1909
> -
>
> Key: HDDS-2078
> URL: https://issues.apache.org/jira/browse/HDDS-2078
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Xiaoyu Yao
>Priority: Major
>
> [https://github.com/elek/ozone-ci/blob/master/pr/pr-hdds-1909-plfbr/integration/hadoop-ozone/integration-test/org.apache.hadoop.ozone.TestSecureOzoneCluster.txt]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2078) Get/Renew DelegationToken NPE after HDDS-1909

2019-09-13 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-2078:
-
Summary: Get/Renew DelegationToken NPE after HDDS-1909  (was: Fix 
TestSecureOzoneCluster)

> Get/Renew DelegationToken NPE after HDDS-1909
> -
>
> Key: HDDS-2078
> URL: https://issues.apache.org/jira/browse/HDDS-2078
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Major
>
> [https://github.com/elek/ozone-ci/blob/master/pr/pr-hdds-1909-plfbr/integration/hadoop-ozone/integration-test/org.apache.hadoop.ozone.TestSecureOzoneCluster.txt]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14303) check block directory logic not correct when there is only meta file, print no meaning warn log

2019-09-13 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929530#comment-16929530
 ] 

Ayush Saxena commented on HDFS-14303:
-

The branch-3.2 patch seems to have updated the wrong test, I guess. The test 
fails in the build.
[~iamgd67] [~hexiaoqiao] can you help check here so we can close this out?

> check block directory logic not correct when there is only meta file, print 
> no meaning warn log
> ---
>
> Key: HDFS-14303
> URL: https://issues.apache.org/jira/browse/HDFS-14303
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Affects Versions: 2.7.3, 3.2.0, 2.9.2, 2.8.5
> Environment: env free
>Reporter: qiang Liu
>Assignee: qiang Liu
>Priority: Minor
>  Labels: easy-fix
> Fix For: 2.10.0, 3.0.4, 3.3.0, 2.8.6, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HDFS-14303-addendnum-branch-2.01.patch, 
> HDFS-14303-addendum-01.patch, HDFS-14303-addendum-02.patch, 
> HDFS-14303-branch-2.005.patch, HDFS-14303-branch-2.009.patch, 
> HDFS-14303-branch-2.010.patch, HDFS-14303-branch-2.015.patch, 
> HDFS-14303-branch-2.017.patch, HDFS-14303-branch-2.7.001.patch, 
> HDFS-14303-branch-2.7.004.patch, HDFS-14303-branch-2.7.006.patch, 
> HDFS-14303-branch-2.9.011.patch, HDFS-14303-branch-2.9.012.patch, 
> HDFS-14303-branch-2.9.013.patch, HDFS-14303-branch-2.addendum.02.patch, 
> HDFS-14303-branch-3.2.addendum.03.patch, HDFS-14303-trunk.014.patch, 
> HDFS-14303-trunk.015.patch, HDFS-14303-trunk.016.patch, 
> HDFS-14303-trunk.016.path, HDFS-14303.branch-3.2.017.patch
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> The check-block-directory logic is not correct when there is only a meta 
> file; it prints a meaningless warn log, e.g.:
>  WARN DirectoryScanner:? - Block: 1101939874 has to be upgraded to block 
> ID-based layout. Actual block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68,
>  expected block file path: 
> /data14/hadoop/data/current/BP-1461038173-10.8.48.152-1481686842620/current/finalized/subdir174/subdir68/subdir68



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14844) Make buffer of BlockReaderRemote#newBlockReader#BufferedOutputStream configurable

2019-09-13 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929495#comment-16929495
 ] 

Íñigo Goiri commented on HDFS-14844:


The description is much clearer now.
Minor tune-up:
The output stream buffer size of a DFSClient remote read. The default buffer 
value is 8KB. The buffer includes only some request parameters, which are: 
block, blockToken, clientName, startOffset, len, verifyChecksum, 
cachingStrategy. It is recommended to adjust the value according to the 
workload, which can reduce unnecessary memory usage
and the frequency of garbage collection. A value of 512 might be reasonable.

As we are changing this anyway, let's also go with the minor format change:
{code}
int bufferSize = configuration.getInt(
    DFS_CLIENT_BLOCK_READER_REMOTE_BUFFER_SIZE_KEY,
    DFS_CLIENT_BLOCK_READER_REMOTE_BUFFER_SIZE_DEFAULT);
{code}
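
For context, bufferSize ends up sizing the BufferedOutputStream that wraps the 
DataNode socket stream when the read request is written; roughly (a sketch 
assuming the surrounding newBlockReader code, where peer is the connected 
Peer):

{code}
DataOutputStream out = new DataOutputStream(
    new BufferedOutputStream(peer.getOutputStream(), bufferSize));
{code}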


> Make buffer of BlockReaderRemote#newBlockReader#BufferedOutputStream  
> configurable
> --
>
> Key: HDFS-14844
> URL: https://issues.apache.org/jira/browse/HDFS-14844
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-14844.001.patch, HDFS-14844.002.patch, 
> HDFS-14844.003.patch
>
>
> details for HDFS-14820



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2124) Random next links

2019-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2124?focusedWorklogId=312298=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312298
 ]

ASF GitHub Bot logged work on HDDS-2124:


Author: ASF GitHub Bot
Created on: 13/Sep/19 19:51
Start Date: 13/Sep/19 19:51
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1443: HDDS-2124. 
Random next links
URL: https://github.com/apache/hadoop/pull/1443#issuecomment-531369389
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 1159 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 34 | hadoop-ozone in trunk failed. |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 912 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 34 | hadoop-ozone in the patch failed. |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 680 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 3033 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1443/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1443 |
   | Optional Tests | dupname asflicense mvnsite |
   | uname | Linux 659f01becfde 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6a9f7ca |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1443/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1443/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | Max. process+thread count | 446 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/docs U: hadoop-hdds/docs |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1443/1/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 312298)
Time Spent: 0.5h  (was: 20m)

> Random next links 
> --
>
> Key: HDDS-2124
> URL: https://issues.apache.org/jira/browse/HDDS-2124
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> _Next>>_ links at the bottom of some documentation pages seem to be out of 
> order.
>  * _Simple Single Ozone_ ("easy start") should link to one of the 
> intermediate level pages, but has no _Next_ link
>  * _Building From Sources_ (ninja) should be the last (no _Next_ link), but 
> points to _Minikube_ (intermediate)
>  * _Pseudo-cluster_ (intermediate) should point to the ninja level, but leads 
> to _Simple Single Ozone_ (easy start)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14848) ACE for Non Existing Paths

2019-09-13 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929484#comment-16929484
 ] 

Ayush Saxena commented on HDFS-14848:
-

Thanx [~kihwal] for the pointers here.
Just wondering: as of now, too, we put "(not a directory)" in the message of 
the ACE, which could also serve the same purpose if someone tries being hacky; 
anyway, they will get to know in one go that the parent isn't a file.
Do you suggest any alternative? For the ones with access, getting an ACE is 
misleading: if they aren't checking the text and trace in it, it drives the 
search in a completely different direction as to why access was denied.

> ACE for Non Existing Paths
> --
>
> Key: HDFS-14848
> URL: https://issues.apache.org/jira/browse/HDFS-14848
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Akshay Agarwal
>Assignee: Ayush Saxena
>Priority: Major
>
> Access control exception for several operations in some cases when the path 
> doesn't exist.
> For eg: SetStoragePolicy, getStoragePolicy, getBlockLocations and many other.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14833) RBF: Router Update Doesn't Sync Quota

2019-09-13 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929467#comment-16929467
 ] 

Ayush Saxena commented on HDFS-14833:
-

Thanx [~elgoiri] for the review.
bq.  Should we originally set entry to request.getEntry() and then change it if 
MountTableResolver?
I don't think that would work, because we conclude by comparing; if we did 
that, we would end up comparing the same entries. The method is passed the 
update request and the old entry; if we changed from the old entry to the 
request entry, the check would not work, since the entries being compared 
would be the same.

I know these many if checks are confusing to decode; it took me hours to 
understand (not sure whether I understood correctly). I started out 
considering just a location check, but while writing the UT I realized these 
issues: the sync itself is not working now.

I will try to explain all the changes; correct me if I'm wrong (a condensed 
sketch follows below):

* *Extracting the mount entry first*: In the present logic, we used to execute 
the update command and only then check whether the quota had changed; the 
comparison was between the updateRequest entry and the mountTable entry (after 
it had been updated), so the values were always the same and the check 
concluded there was no update. To overcome this, I pulled the logic that 
fetches the mountTable entry up before calling update, so the comparison is 
between the old entry and the new one.
* *Adding a location check*: Earlier, only a change in quota used to trigger a 
quota sync, but if the locations are changed, this should trigger a sync on 
the new locations too. Say an entry /mnt was pointing to /dst with quota 10; 
if we change the destination to /dst1, then /dst1 should also be set with 
quota 10. In such a case the update entry quota and the existing entry quota 
stay the same, so I added this check.
* *Loading the cache in synchronize quota*: While testing, I realized that 
when we synchronize the quota, if the cache is not loaded the request goes to 
the old path. I added this to make sure it triggers on the new locations.

Let me know if I missed something. :)
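
A condensed sketch of the resulting flow (hypothetical names, for illustration 
only, not the actual Router code):

{code:java}
// Sketch: fetch the old entry BEFORE applying the update, then compare.
MountTable oldEntry = getMountTableEntry(src);   // hypothetical helper
updateMountTableEntry(request);                  // apply the update
MountTable newEntry = request.getEntry();
boolean quotaChanged =
    !oldEntry.getQuota().equals(newEntry.getQuota());
boolean locationsChanged =
    !oldEntry.getDestinations().equals(newEntry.getDestinations());
if (quotaChanged || locationsChanged) {
  loadCache(true);             // refresh so the sync hits the new locations
  synchronizeQuota(newEntry);  // hypothetical helper
}
{code}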


> RBF: Router Update Doesn't Sync Quota
> -
>
> Key: HDFS-14833
> URL: https://issues.apache.org/jira/browse/HDFS-14833
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14833-01.patch, HDFS-14833-02.patch
>
>
> HDFS-14777 Added a check to prevent RPC call, It checks whether in the 
> present state whether quota is changing. 
> But ignores the part that if the locations are changed. if the location is 
> changed the new destination should be synchronized with the mount entry 
> quota. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2007) Make ozone fs shell command work with OM HA service ids

2019-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2007?focusedWorklogId=312280=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312280
 ]

ASF GitHub Bot logged work on HDDS-2007:


Author: ASF GitHub Bot
Created on: 13/Sep/19 19:14
Start Date: 13/Sep/19 19:14
Worklog Time Spent: 10m 
  Work Description: smengcl commented on issue #1360: HDDS-2007. Make ozone 
fs shell command work with OM HA service ids
URL: https://github.com/apache/hadoop/pull/1360#issuecomment-531358343
 
 
   Thanks for committing @bharatviswa504 !
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 312280)
Time Spent: 7h  (was: 6h 50m)

> Make ozone fs shell command work with OM HA service ids
> ---
>
> Key: HDDS-2007
> URL: https://issues.apache.org/jira/browse/HDDS-2007
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Client
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 7h
>  Remaining Estimate: 0h
>
> Build an HDFS HA-like nameservice for OM HA so that the Ozone client can access 
> an Ozone HA cluster with ease.
> The majority of the work is already done in HDDS-972. But the problem is that 
> the client would crash if more than one service id 
> (ozone.om.service.ids) is configured in ozone-site.xml. This needs to be 
> addressed on the client side.
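
For reference, the failing configuration is simply two ids under the same key; 
an illustrative ozone-site.xml fragment (the service id names are made up):

{code:xml}
<property>
  <name>ozone.om.service.ids</name>
  <value>omservice1,omservice2</value>
</property>
{code}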



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2127) Detailed Tools doc not reachable

2019-09-13 Thread Doroszlai, Attila (Jira)
Doroszlai, Attila created HDDS-2127:
---

 Summary: Detailed Tools doc not reachable
 Key: HDDS-2127
 URL: https://issues.apache.org/jira/browse/HDDS-2127
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: documentation
Affects Versions: 0.4.1
Reporter: Doroszlai, Attila


There are two doc pages for tools:
 * docs/beyond/tools.html
 * docs/tools.html

The latter is more detailed (has subpages for several tools), but it is not 
reachable (even indirectly) from the start page.  Not sure if this is 
intentional.

On a related note, it has two "Testing tools" sub-pages. One of them is empty 
and should be removed.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2124) Random next links

2019-09-13 Thread Doroszlai, Attila (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-2124:

Status: Patch Available  (was: In Progress)

> Random next links 
> --
>
> Key: HDDS-2124
> URL: https://issues.apache.org/jira/browse/HDDS-2124
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> _Next>>_ links at the bottom of some documentation pages seem to be out of 
> order.
>  * _Simple Single Ozone_ ("easy start") should link to one of the 
> intermediate level pages, but has no _Next_ link
>  * _Building From Sources_ (ninja) should be the last (no _Next_ link), but 
> points to _Minikube_ (intermediate)
>  * _Pseudo-cluster_ (intermediate) should point to the ninja level, but leads 
> to _Simple Single Ozone_ (easy start)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2124) Random next links

2019-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2124?focusedWorklogId=312273=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312273
 ]

ASF GitHub Bot logged work on HDDS-2124:


Author: ASF GitHub Bot
Created on: 13/Sep/19 19:00
Start Date: 13/Sep/19 19:00
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1443: HDDS-2124. Random 
next links
URL: https://github.com/apache/hadoop/pull/1443#issuecomment-531354016
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 312273)
Time Spent: 20m  (was: 10m)

> Random next links 
> --
>
> Key: HDDS-2124
> URL: https://issues.apache.org/jira/browse/HDDS-2124
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> _Next>>_ links at the bottom of some documentation pages seem to be out of 
> order.
>  * _Simple Single Ozone_ ("easy start") should link to one of the 
> intermediate level pages, but has no _Next_ link
>  * _Building From Sources_ (ninja) should be the last (no _Next_ link), but 
> points to _Minikube_ (intermediate)
>  * _Pseudo-cluster_ (intermediate) should point to the ninja level, but leads 
> to _Simple Single Ozone_ (easy start)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2124) Random next links

2019-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2124:
-
Labels: pull-request-available  (was: )

> Random next links 
> --
>
> Key: HDDS-2124
> URL: https://issues.apache.org/jira/browse/HDDS-2124
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
>
> _Next>>_ links at the bottom of some documentation pages seem to be out of 
> order.
>  * _Simple Single Ozone_ ("easy start") should link to one of the 
> intermediate level pages, but has no _Next_ link
>  * _Building From Sources_ (ninja) should be the last (no _Next_ link), but 
> points to _Minikube_ (intermediate)
>  * _Pseudo-cluster_ (intermediate) should point to the ninja level, but leads 
> to _Simple Single Ozone_ (easy start)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2124) Random next links

2019-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2124?focusedWorklogId=312272=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312272
 ]

ASF GitHub Bot logged work on HDDS-2124:


Author: ASF GitHub Bot
Created on: 13/Sep/19 19:00
Start Date: 13/Sep/19 19:00
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #1443: HDDS-2124. 
Random next links
URL: https://github.com/apache/hadoop/pull/1443
 
 
   ## What changes were proposed in this pull request?
   
   Assign weights to pages to ensure correct order.
   
   https://issues.apache.org/jira/browse/HDDS-2124
   
   ## How was this patch tested?
   
   Generated docs, verified order.
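
Presumably the docs pages are ordered by Hugo front matter; a hypothetical 
example of the kind of change (the values below are made up):

{code}
---
title: Pseudo-cluster
weight: 24
---
{code}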
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 312272)
Remaining Estimate: 0h
Time Spent: 10m

> Random next links 
> --
>
> Key: HDDS-2124
> URL: https://issues.apache.org/jira/browse/HDDS-2124
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> _Next>>_ links at the bottom of some documentation pages seem to be out of 
> order.
>  * _Simple Single Ozone_ ("easy start") should link to one of the 
> intermediate level pages, but has no _Next_ link
>  * _Building From Sources_ (ninja) should be the last (no _Next_ link), but 
> points to _Minikube_ (intermediate)
>  * _Pseudo-cluster_ (intermediate) should point to the ninja level, but leads 
> to _Simple Single Ozone_ (easy start)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14844) Make buffer of BlockReaderRemote#newBlockReader#BufferedOutputStream configurable

2019-09-13 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929458#comment-16929458
 ] 

Hadoop QA commented on HDFS-14844:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
47s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 96m 
45s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}168m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:39e82acc485 |
| JIRA Issue | HDFS-14844 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12980284/HDFS-14844.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 21d2f9129b7b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | 

[jira] [Updated] (HDDS-2057) Incorrect Default OM Port in Ozone FS URI Error Message

2019-09-13 Thread Arpit Agarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-2057:

Fix Version/s: (was: 0.5.0)

> Incorrect Default OM Port in Ozone FS URI Error Message
> ---
>
> Key: HDDS-2057
> URL: https://issues.apache.org/jira/browse/HDDS-2057
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> The error message displayed from BasicOzoneFilesystem.initialize specifies 
> 5678 as the OM port. This is not the default port.
> "Ozone file system URL " +
>  "should be one of the following formats: " +
>  "o3fs://bucket.volume/key OR " +
>  "o3fs://bucket.volume.om-host.example.com/key OR " +
>  "o3fs://bucket.volume.om-host.example.com:5678/key";
>  
> This should be fixed to pull the default value from the configuration 
> parameter, instead of a hard-coded value.
>  
>  
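
The direction the description asks for, sketched (the key name and default 
below are assumptions, not the actual patch):

{code:java}
// Sketch only: derive the example port from configuration instead of
// hard-coding 5678. The key and default are assumed for illustration.
String omAddress = conf.get("ozone.om.address", "0.0.0.0:9862");
int omPort = NetUtils.createSocketAddr(omAddress, 9862).getPort();
String example = "o3fs://bucket.volume.om-host.example.com:" + omPort + "/key";
{code}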



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2057) Incorrect Default OM Port in Ozone FS URI Error Message

2019-09-13 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929447#comment-16929447
 ] 

Hudson commented on HDDS-2057:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17297 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17297/])
Revert "HDDS-2057. Incorrect Default OM Port in Ozone FS URI Error (arp: rev 
6a9f7caef47c0ccacf778134d33e0c7547017323)
* (edit) 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java


> Incorrect Default OM Port in Ozone FS URI Error Message
> ---
>
> Key: HDDS-2057
> URL: https://issues.apache.org/jira/browse/HDDS-2057
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> The error message displayed from BasicOzoneFilesystem.initialize specifies 
> 5678 as the OM port. This is not the default port.
> "Ozone file system URL " +
>  "should be one of the following formats: " +
>  "o3fs://bucket.volume/key OR " +
>  "o3fs://bucket.volume.om-host.example.com/key OR " +
>  "o3fs://bucket.volume.om-host.example.com:5678/key";
>  
> This should be fixed to pull the default value from the configuration 
> parameter, instead of a hard-coded value.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2057) Incorrect Default OM Port in Ozone FS URI Error Message

2019-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2057?focusedWorklogId=312252=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312252
 ]

ASF GitHub Bot logged work on HDDS-2057:


Author: ASF GitHub Bot
Created on: 13/Sep/19 18:42
Start Date: 13/Sep/19 18:42
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1377: HDDS-2057. 
Incorrect Default OM Port in Ozone FS URI Error Message. Contributed by 
Supratim Deka
URL: https://github.com/apache/hadoop/pull/1377#issuecomment-531347971
 
 
   I think we need to revert this. Due to some classpath loading magic, we 
cannot use the OzoneConfiguration class in BasicOzoneFileSystem.java (that is 
the reason the acceptance tests are failing with the hadoop2 tests). We saw a 
similar error during HDDS-2007; it was resolved by moving the logic to 
BasicOzoneClientAdapterImpl.
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 312252)
Time Spent: 2h 40m  (was: 2.5h)

> Incorrect Default OM Port in Ozone FS URI Error Message
> ---
>
> Key: HDDS-2057
> URL: https://issues.apache.org/jira/browse/HDDS-2057
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> The error message displayed from BasicOzoneFilesystem.initialize specifies 
> 5678 as the OM port. This is not the default port.
> "Ozone file system URL " +
>  "should be one of the following formats: " +
>  "o3fs://bucket.volume/key OR " +
>  "o3fs://bucket.volume.om-host.example.com/key OR " +
>  "o3fs://bucket.volume.om-host.example.com:5678/key";
>  
> This should be fixed to pull the default value from the configuration 
> parameter, instead of a hard-coded value.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDDS-2057) Incorrect Default OM Port in Ozone FS URI Error Message

2019-09-13 Thread Arpit Agarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HDDS-2057:
-

Reverted based on discussion with [~bharatviswa].

Bharat, can you comment with the details?

> Incorrect Default OM Port in Ozone FS URI Error Message
> ---
>
> Key: HDDS-2057
> URL: https://issues.apache.org/jira/browse/HDDS-2057
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> The error message displayed from BasicOzoneFilesystem.initialize specifies 
> 5678 as the OM port. This is not the default port.
> "Ozone file system URL " +
>  "should be one of the following formats: " +
>  "o3fs://bucket.volume/key OR " +
>  "o3fs://bucket.volume.om-host.example.com/key OR " +
>  "o3fs://bucket.volume.om-host.example.com:5678/key";
>  
> This should be fixed to pull the default value from the configuration 
> parameter, instead of a hard-coded value.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2057) Incorrect Default OM Port in Ozone FS URI Error Message

2019-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2057?focusedWorklogId=312251=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312251
 ]

ASF GitHub Bot logged work on HDDS-2057:


Author: ASF GitHub Bot
Created on: 13/Sep/19 18:41
Start Date: 13/Sep/19 18:41
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1377: HDDS-2057. 
Incorrect Default OM Port in Ozone FS URI Error Message. Contributed by 
Supratim Deka
URL: https://github.com/apache/hadoop/pull/1377#issuecomment-531347971
 
 
   I think we need to revert this. Due to some classpath loading magic, we 
cannot use the OzoneConfiguration class in BasicOzoneFileSystem.java. We saw a 
similar error during HDDS-2007; it was resolved by moving the logic to 
BasicOzoneClientAdapterImpl.
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 312251)
Time Spent: 2.5h  (was: 2h 20m)

> Incorrect Default OM Port in Ozone FS URI Error Message
> ---
>
> Key: HDDS-2057
> URL: https://issues.apache.org/jira/browse/HDDS-2057
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> The error message displayed from BasicOzoneFilesystem.initialize specifies 
> 5678 as the OM port. This is not the default port.
> "Ozone file system URL " +
>  "should be one of the following formats: " +
>  "o3fs://bucket.volume/key OR " +
>  "o3fs://bucket.volume.om-host.example.com/key OR " +
>  "o3fs://bucket.volume.om-host.example.com:5678/key";
>  
> This should be fixed to pull the default value from the configuration 
> parameter, instead of a hard-coded value.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


