[GitHub] [hadoop] bgaborg opened a new pull request #1302: HADOOP-16138. hadoop fs mkdir / of nonexistent abfs container raises NPE

2019-08-15 Thread GitBox
bgaborg opened a new pull request #1302: HADOOP-16138. hadoop fs mkdir / of 
nonexistent abfs container raises NPE
URL: https://github.com/apache/hadoop/pull/1302
 
 
   Change-Id: I2f637865c871e400b95fe7ddaa24bf99fa192023


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] Hexiaoqiao commented on a change in pull request #1028: HDFS-14617 - Improve fsimage load time by writing sub-sections to the fsimage index

2019-08-15 Thread GitBox
Hexiaoqiao commented on a change in pull request #1028: HDFS-14617 - Improve 
fsimage load time by writing sub-sections to the fsimage index
URL: https://github.com/apache/hadoop/pull/1028#discussion_r314201460
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
 ##
 @@ -217,33 +273,151 @@ void loadINodeDirectorySection(InputStream in) throws 
IOException {
 INodeDirectory p = dir.getInode(e.getParent()).asDirectory();
 for (long id : e.getChildrenList()) {
   INode child = dir.getInode(id);
-  addToParent(p, child);
+  if (addToParent(p, child)) {
+if (child.isFile()) {
+  inodeList.add(child);
+}
+if (inodeList.size() >= 1000) {
+  addToCacheAndBlockMap(inodeList);
+  inodeList.clear();
+}
+  }
+
 }
+
 for (int refId : e.getRefChildrenList()) {
   INodeReference ref = refList.get(refId);
-  addToParent(p, ref);
+  if (addToParent(p, ref)) {
+if (ref.isFile()) {
+  inodeList.add(ref);
+}
+if (inodeList.size() >= 1000) {
+  addToCacheAndBlockMap(inodeList);
+  inodeList.clear();
+}
+  }
 }
   }
+  addToCacheAndBlockMap(inodeList);
+}
+
+private void addToCacheAndBlockMap(ArrayList<INode> inodeList) {
+  try {
+cacheNameMapLock.lock();
+for (INode i : inodeList) {
+  dir.cacheName(i);
+}
+  } finally {
+cacheNameMapLock.unlock();
+  }
+
+  try {
+blockMapLock.lock();
+for (INode i : inodeList) {
+  updateBlocksMap(i.asFile(), fsn.getBlockManager());
+}
+  } finally {
+blockMapLock.unlock();
+  }
 }
 
 void loadINodeSection(InputStream in, StartupProgress prog,
 Step currentStep) throws IOException {
-  INodeSection s = INodeSection.parseDelimitedFrom(in);
-  fsn.dir.resetLastInodeId(s.getLastInodeId());
-  long numInodes = s.getNumInodes();
-  LOG.info("Loading " + numInodes + " INodes.");
-  prog.setTotal(Phase.LOADING_FSIMAGE, currentStep, numInodes);
+  loadINodeSectionHeader(in, prog, currentStep);
   Counter counter = prog.getCounter(Phase.LOADING_FSIMAGE, currentStep);
-  for (int i = 0; i < numInodes; ++i) {
+  int totalLoaded = loadINodesInSection(in, counter);
+  LOG.info("Successfully loaded {} inodes", totalLoaded);
+}
+
+private int loadINodesInSection(InputStream in, Counter counter)
+throws IOException {
+  // As the input stream is a LimitInputStream, the reading will stop when
+  // EOF is encountered at the end of the stream.
+  int cntr = 0;
+  while (true) {
 INodeSection.INode p = INodeSection.INode.parseDelimitedFrom(in);
+if (p == null) {
+  break;
+}
 if (p.getId() == INodeId.ROOT_INODE_ID) {
-  loadRootINode(p);
+  synchronized(this) {
 
 Review comment:
   Thanks for pinging, it is OK for me. I agree it will not bring any 
performance overhead.
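   The batch-and-flush approach discussed above can be sketched in isolation as
follows. This is a minimal standalone sketch, not Hadoop's code: the names
(`processAll`, `flush`) and the integer payload are illustrative stand-ins for
the inode list and the `cacheName()`/`updateBlocksMap()` work done under the lock.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

public class BatchFlush {
    private static final int BATCH_SIZE = 1000;
    private static final ReentrantLock lock = new ReentrantLock(true); // fair, as in the patch

    // Process n items, taking the lock once per full batch instead of once
    // per item; returns the total number of items flushed.
    static int processAll(int n) {
        List<Integer> batch = new ArrayList<>();
        int flushed = 0;
        for (int i = 0; i < n; i++) {
            batch.add(i);
            if (batch.size() >= BATCH_SIZE) {
                flushed += flush(batch);
            }
        }
        flushed += flush(batch); // flush the final partial batch
        return flushed;
    }

    private static int flush(List<Integer> batch) {
        lock.lock();
        try {
            int size = batch.size(); // stand-in for the per-item cache/map updates
            batch.clear();
            return size;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        System.out.println(processAll(2500)); // prints 2500
    }
}
```

   The point of the pattern is lock amortization: contention drops because each
thread holds the lock once per 1000 items rather than once per item.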





[jira] [Commented] (HADOOP-15958) Revisiting LICENSE and NOTICE files

2019-08-15 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908032#comment-16908032
 ] 

Masatake Iwasaki commented on HADOOP-15958:
---

[~aajisaka] thanks for working on this. I'm reviewing this. Could you fix the 
conflict of 007 with current trunk?

> Revisiting LICENSE and NOTICE files
> ---
>
> Key: HADOOP-15958
> URL: https://issues.apache.org/jira/browse/HADOOP-15958
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Attachments: HADOOP-15958-002.patch, HADOOP-15958-003.patch, 
> HADOOP-15958-004.patch, HADOOP-15958-wip.001.patch, HADOOP-15958.005.patch, 
> HADOOP-15958.006.patch, HADOOP-15958.007.patch
>
>
> Originally reported from [~jmclean]:
> * NOTICE file incorrectly lists copyrights that shouldn't be there and 
> mentions licenses such as MIT, BSD, and public domain that should be 
> mentioned in LICENSE only.
> * It's better to have a separate LICENSE and NOTICE for the source and binary 
> releases.
> http://www.apache.org/dev/licensing-howto.html



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)




[GitHub] [hadoop] hadoop-yetus commented on issue #1302: HADOOP-16138. hadoop fs mkdir / of nonexistent abfs container raises NPE

2019-08-15 Thread GitBox
hadoop-yetus commented on issue #1302: HADOOP-16138. hadoop fs mkdir / of 
nonexistent abfs container raises NPE
URL: https://github.com/apache/hadoop/pull/1302#issuecomment-521650898
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1075 | trunk passed |
   | +1 | compile | 23 | trunk passed |
   | +1 | checkstyle | 17 | trunk passed |
   | +1 | mvnsite | 27 | trunk passed |
   | +1 | shadedclient | 650 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 18 | trunk passed |
   | 0 | spotbugs | 47 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 45 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 27 | the patch passed |
   | +1 | compile | 19 | the patch passed |
   | +1 | javac | 19 | the patch passed |
   | +1 | checkstyle | 13 | the patch passed |
   | +1 | mvnsite | 22 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 692 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 17 | the patch passed |
   | +1 | findbugs | 49 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 76 | hadoop-azure in the patch passed. |
   | +1 | asflicense | 25 | The patch does not generate ASF License warnings. |
   | | | 2912 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1302/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1302 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 7bfa4def05c0 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3468164 |
   | Default Java | 1.8.0_222 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1302/1/testReport/ |
   | Max. process+thread count | 412 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1302/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Comment Edited] (HADOOP-16416) mark DynamoDBMetadataStore.deleteTrackingValueMap as final

2019-08-15 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908006#comment-16908006
 ] 

Gabor Bota edited comment on HADOOP-16416 at 8/15/19 11:28 AM:
---

You forgot to add final to the v003 patch. 


was (Author: gabor.bota):
+1 on patch v003, even if checkstyle has its own issues with this - I think 
{{final static}} should be uppercase.

> mark DynamoDBMetadataStore.deleteTrackingValueMap as final
> --
>
> Key: HADOOP-16416
> URL: https://issues.apache.org/jira/browse/HADOOP-16416
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: kevin su
>Priority: Trivial
> Attachments: HADOOP-16416.001.patch, HADOOP-16416.002.patch, 
> HADOOP-16416.003.patch
>
>
> S3Guard's {{DynamoDBMetadataStore.deleteTrackingValueMap}} field is static 
> and can/should be marked as final; its name changed to upper case to match 
> the coding conventions.






[jira] [Commented] (HADOOP-16416) mark DynamoDBMetadataStore.deleteTrackingValueMap as final

2019-08-15 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908006#comment-16908006
 ] 

Gabor Bota commented on HADOOP-16416:
-

+1 on patch v003, even if checkstyle has its own issues with this - I think 
{{final static}} should be uppercase.

> mark DynamoDBMetadataStore.deleteTrackingValueMap as final
> --
>
> Key: HADOOP-16416
> URL: https://issues.apache.org/jira/browse/HADOOP-16416
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: kevin su
>Priority: Trivial
> Attachments: HADOOP-16416.001.patch, HADOOP-16416.002.patch, 
> HADOOP-16416.003.patch
>
>
> S3Guard's {{DynamoDBMetadataStore.deleteTrackingValueMap}} field is static 
> and can/should be marked as final; its name changed to upper case to match 
> the coding conventions.






[GitHub] [hadoop] adamantal commented on issue #1122: YARN-9679. Regular code cleanup in TestResourcePluginManager

2019-08-15 Thread GitBox
adamantal commented on issue #1122: YARN-9679. Regular code cleanup in 
TestResourcePluginManager
URL: https://github.com/apache/hadoop/pull/1122#issuecomment-521627325
 
 
   Resolved conflicts, and force-pushed - ready to review (pending on jenkins, 
but I got a +1 previously).





[jira] [Updated] (HADOOP-16138) hadoop fs mkdir / of nonexistent abfs container raises NPE

2019-08-15 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-16138:

Status: Patch Available  (was: In Progress)

> hadoop fs mkdir / of nonexistent abfs container raises NPE
> --
>
> Key: HADOOP-16138
> URL: https://issues.apache.org/jira/browse/HADOOP-16138
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Minor
>
> If you try to do a mkdir on the root of a nonexistent container, you get an 
> NPE
> {code}
> hadoop fs -mkdir  abfs://contain...@abfswales1.dfs.core.windows.net/  
> {code}






[jira] [Commented] (HADOOP-16138) hadoop fs mkdir / of nonexistent abfs container raises NPE

2019-08-15 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908062#comment-16908062
 ] 

Gabor Bota commented on HADOOP-16138:
-

So I've created a test for it (new test class for testing CLI with ABFS).

The test was not failing. The output is:
{{mkdir: 
`abfs://nonexistent-3ab66e98-66dc-4e7b-879d-16e323bd2...@mycontainer.dfs.core.windows.net/':
 File exists}}

Next, I tried to run from a {{dist}}, where the output was the same. So I guess 
this got fixed, but we could add the test I've created for this - so it's ready 
for review!



> hadoop fs mkdir / of nonexistent abfs container raises NPE
> --
>
> Key: HADOOP-16138
> URL: https://issues.apache.org/jira/browse/HADOOP-16138
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Minor
>
> If you try to do a mkdir on the root of a nonexistent container, you get an 
> NPE
> {code}
> hadoop fs -mkdir  abfs://contain...@abfswales1.dfs.core.windows.net/  
> {code}






[GitHub] [hadoop] mackrorysd commented on issue #1302: HADOOP-16138. hadoop fs mkdir / of nonexistent abfs container raises NPE

2019-08-15 Thread GitBox
mackrorysd commented on issue #1302: HADOOP-16138. hadoop fs mkdir / of 
nonexistent abfs container raises NPE
URL: https://github.com/apache/hadoop/pull/1302#issuecomment-521662657
 
 
   I only see test changes. Is there another change to actually fix the error 
and fail with a more graceful error message?





[jira] [Commented] (HADOOP-16504) Increase ipc.server.listen.queue.size default from 128 to 256

2019-08-15 Thread Lisheng Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908004#comment-16908004
 ] 

Lisheng Sun commented on HADOOP-16504:
--

hi [~jojochuang], no one has commented for a few days and this patch should be 
fine. Should we commit it? Thank you.

> Increase ipc.server.listen.queue.size default from 128 to 256
> -
>
> Key: HADOOP-16504
> URL: https://issues.apache.org/jira/browse/HADOOP-16504
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16504.000.patch, HADOOP-16504.001.patch
>
>
> Because the ipc.server.listen.queue.size default value is too small, TCP's 
> ListenDrop counter grows along with the RPC request volume.
>  The upper limit of the system's half-open (SYN) queue is 65536, and the 
> maximum length of the fully-established (accept) queue is 1024.
> {code:java}
> [work@c3-hadoop-srv-talos27 ~]$ cat /proc/sys/net/ipv4/tcp_max_syn_backlog
> 65536
> [work@c3-hadoop-srv-talos27 ~]$ cat /proc/sys/net/core/somaxconn
> 1024
> {code}
> I think this default value should be adjusted.
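For context, the value of {{ipc.server.listen.queue.size}} is passed as the
backlog when the IPC server binds its socket, and the kernel caps the effective
accept-queue length at {{net.core.somaxconn}}. A minimal sketch with a plain
Java {{ServerSocket}} (the class name here is illustrative, not Hadoop's):

```java
import java.net.ServerSocket;

public class BacklogDemo {
    // The accept-queue length the kernel actually uses is
    // min(backlog, net.core.somaxconn), so raising
    // ipc.server.listen.queue.size beyond somaxconn has no effect unless
    // the kernel limit is raised as well.
    static int bindWithBacklog(int backlog) throws Exception {
        try (ServerSocket ss = new ServerSocket(0, backlog)) {
            return ss.getLocalPort(); // an ephemeral port > 0 once bound
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(bindWithBacklog(256) > 0); // prints true
    }
}
```

With somaxconn at 1024 on the reported host, a default of 256 is still well
under the kernel cap, so the proposed increase takes effect without sysctl
changes.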






[GitHub] [hadoop] hadoop-yetus commented on issue #1302: HADOOP-16138. hadoop fs mkdir / of nonexistent abfs container raises NPE

2019-08-15 Thread GitBox
hadoop-yetus commented on issue #1302: HADOOP-16138. hadoop fs mkdir / of 
nonexistent abfs container raises NPE
URL: https://github.com/apache/hadoop/pull/1302#issuecomment-521656984
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 145 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1452 | trunk passed |
   | +1 | compile | 26 | trunk passed |
   | +1 | checkstyle | 24 | trunk passed |
   | +1 | mvnsite | 35 | trunk passed |
   | +1 | shadedclient | 776 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 27 | trunk passed |
   | 0 | spotbugs | 66 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 61 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 29 | the patch passed |
   | +1 | compile | 23 | the patch passed |
   | +1 | javac | 23 | the patch passed |
   | +1 | checkstyle | 17 | the patch passed |
   | +1 | mvnsite | 26 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 819 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 23 | the patch passed |
   | +1 | findbugs | 56 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 79 | hadoop-azure in the patch passed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 3743 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1302/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1302 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 05030a4e72a4 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3468164 |
   | Default Java | 1.8.0_222 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1302/2/testReport/ |
   | Max. process+thread count | 412 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1302/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] ChenSammi commented on a change in pull request #1286: Add filter to scmcli listPipelines.

2019-08-15 Thread GitBox
ChenSammi commented on a change in pull request #1286: Add filter to scmcli 
listPipelines.
URL: https://github.com/apache/hadoop/pull/1286#discussion_r314282539
 
 

 ##
 File path: 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java
 ##
 @@ -38,11 +38,34 @@
   @CommandLine.ParentCommand
   private SCMCLI parent;
 
+  @CommandLine.Option( names = {"-ffc", "--filterByFactor"},
+  description = "Filter listed pipelines by Factor(ONE/one)", 
defaultValue = "",
+  required = false)
+  private String factor;
+
+  @CommandLine.Option( names = {"-fst", "--filterByState"},
+  description = "Filter listed pipelines by State(OPEN/CLOSE)", 
defaultValue = "",
+  required = false)
+  private String state;
+
+
   @Override
   public Void call() throws Exception {
 try (ScmClient scmClient = parent.createScmClient()) {
-  scmClient.listPipelines().forEach(System.out::println);
+  if (isNullOrEmpty(factor) && isNullOrEmpty(state)) {
+System.out.println("No filter. List all.");
 
 Review comment:
   This output is not necessary. 





[GitHub] [hadoop] ChenSammi commented on a change in pull request #1286: Add filter to scmcli listPipelines.

2019-08-15 Thread GitBox
ChenSammi commented on a change in pull request #1286: Add filter to scmcli 
listPipelines.
URL: https://github.com/apache/hadoop/pull/1286#discussion_r314282318
 
 

 ##
 File path: 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java
 ##
 @@ -38,11 +38,34 @@
   @CommandLine.ParentCommand
   private SCMCLI parent;
 
+  @CommandLine.Option( names = {"-ffc", "--filterByFactor"},
+  description = "Filter listed pipelines by Factor(ONE/one)", 
defaultValue = "",
+  required = false)
+  private String factor;
+
+  @CommandLine.Option( names = {"-fst", "--filterByState"},
+  description = "Filter listed pipelines by State(OPEN/CLOSE)", 
defaultValue = "",
+  required = false)
+  private String state;
+
+
   @Override
   public Void call() throws Exception {
 try (ScmClient scmClient = parent.createScmClient()) {
-  scmClient.listPipelines().forEach(System.out::println);
+  if (isNullOrEmpty(factor) && isNullOrEmpty(state)) {
 
 Review comment:
   You can use Strings.isNullOrEmpty here. 
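   For reference, Guava's `Strings.isNullOrEmpty` performs exactly this check;
a dependency-free equivalent looks like:

```java
public class NullOrEmpty {
    // Equivalent of com.google.common.base.Strings.isNullOrEmpty:
    // true when the string is null or has zero length.
    static boolean isNullOrEmpty(String s) {
        return s == null || s.isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(isNullOrEmpty(null));   // prints true
        System.out.println(isNullOrEmpty(""));     // prints true
        System.out.println(isNullOrEmpty("OPEN")); // prints false
    }
}
```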





[jira] [Commented] (HADOOP-16351) Change ":" to ApplicationConstants.CLASS_PATH_SEPARATOR

2019-08-15 Thread kevin su (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907958#comment-16907958
 ] 

kevin su commented on HADOOP-16351:
---

Sorry for the late reply, I just set up my Windows env for Hadoop.  

If we don't use _*ApplicationConstants.CLASS_PATH_SEPARATOR*_, the path will be 
%PATH%:%JAVA_CLASSPATH% on Windows.

Windows uses ";" instead of ":" to separate classpath entries.
{code:java}
StringBuilder classPathEnv = new StringBuilder(Environment.CLASSPATH.$$())
 .append(ApplicationConstants.CLASS_PATH_SEPARATOR).append("./*");
 for (String c : conf.getStrings(
 YarnConfiguration.YARN_APPLICATION_CLASSPATH,
 YarnConfiguration.DEFAULT_YARN_CROSS_PLATFORM_APPLICATION_CLASSPATH)) {
 classPathEnv.append(ApplicationConstants.CLASS_PATH_SEPARATOR)
 .append(c.trim());
 }
 classPathEnv.append(ApplicationConstants.CLASS_PATH_SEPARATOR).append(
 "./log4j.properties");

// add the runtime classpath needed for tests to work
 if (conf.getBoolean(YarnConfiguration.IS_MINI_YARN_CLUSTER, false)) {
 classPathEnv.append(':')
 .append(System.getProperty("java.class.path"));
 }

env.put("CLASSPATH", classPathEnv.toString());
{code}
The code above shows how the classpath is built for the ApplicationMaster.
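A portable way to see the platform difference is {{java.io.File.pathSeparator}},
which is ":" on Unix and ";" on Windows. A minimal sketch (class and method
names here are illustrative):

```java
import java.io.File;

public class ClasspathSeparator {
    // Join classpath entries with an explicit separator. Using
    // File.pathSeparator (or ApplicationConstants.CLASS_PATH_SEPARATOR in
    // YARN cross-platform submissions) instead of a hard-coded ':' keeps
    // the result valid on Windows, where ':' also appears in drive
    // letters such as C:\.
    static String buildClasspath(String separator, String... entries) {
        return String.join(separator, entries);
    }

    public static void main(String[] args) {
        System.out.println(buildClasspath(":", "./*", "./log4j.properties"));
        System.out.println(buildClasspath(File.pathSeparator, "./*", "./log4j.properties"));
    }
}
```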

> Change ":" to ApplicationConstants.CLASS_PATH_SEPARATOR
> ---
>
> Key: HADOOP-16351
> URL: https://issues.apache.org/jira/browse/HADOOP-16351
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common
>Affects Versions: 3.1.2
>Reporter: kevin su
>Assignee: kevin su
>Priority: Trivial
> Fix For: 3.1.2
>
> Attachments: HADOOP-16351.01.patch
>
>
> under distributedshell/Clients.java 
> We should change ":" to ApplicationConstants.CLASS_PATH_SEPARATOR, so it 
> could also support Windows client
> {code}
> // add the runtime classpath needed for tests to work
> if (conf.getBoolean(YarnConfiguration.IS_MINI_YARN_CLUSTER, false)) {
>   classPathEnv.append(':')
>   .append(System.getProperty("java.class.path"));
> }
> {code}
> {code}
> // add the runtime classpath needed for tests to work
> if (conf.getBoolean(YarnConfiguration.IS_MINI_YARN_CLUSTER, false)) {
>   classPathEnv.append(ApplicationConstants.CLASS_PATH_SEPARATOR)
>   .append(System.getProperty("java.class.path"));
> }
> {code}






[GitHub] [hadoop] alexkingli opened a new pull request #1300: update updateMapIncr to avoid loading information from the OS every time

2019-08-15 Thread GitBox
alexkingli opened a new pull request #1300: update updateMapIncr to avoid 
loading information from the OS every time
URL: https://github.com/apache/hadoop/pull/1300
 
 
   If the user group does not exist, updateMapIncr will be invoked many times. 
Since it is a synchronized method, this really affects performance.
   
   
   
   
   
   





[GitHub] [hadoop] Hexiaoqiao commented on a change in pull request #1028: HDFS-14617 - Improve fsimage load time by writing sub-sections to the fsimage index

2019-08-15 Thread GitBox
Hexiaoqiao commented on a change in pull request #1028: HDFS-14617 - Improve 
fsimage load time by writing sub-sections to the fsimage index
URL: https://github.com/apache/hadoop/pull/1028#discussion_r314210188
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
 ##
 @@ -197,16 +203,66 @@ public static void updateBlocksMap(INodeFile file, 
BlockManager bm) {
 private final FSDirectory dir;
 private final FSNamesystem fsn;
 private final FSImageFormatProtobuf.Loader parent;
+private ReentrantLock cacheNameMapLock;
+private ReentrantLock blockMapLock;
 
 Loader(FSNamesystem fsn, final FSImageFormatProtobuf.Loader parent) {
   this.fsn = fsn;
   this.dir = fsn.dir;
   this.parent = parent;
+  cacheNameMapLock = new ReentrantLock(true);
+  blockMapLock = new ReentrantLock(true);
+}
+
+void loadINodeDirectorySectionInParallel(ExecutorService service,
+ArrayList<FileSummary.Section> sections, String compressionCodec)
+throws IOException {
+  LOG.info("Loading the INodeDirectory section in parallel with {} sub-" +
+  "sections", sections.size());
+  CountDownLatch latch = new CountDownLatch(sections.size());
+  final CopyOnWriteArrayList<IOException> exceptions =
+  new CopyOnWriteArrayList<>();
+  for (FileSummary.Section s : sections) {
+service.submit(new Runnable() {
+  public void run() {
+InputStream ins = null;
+try {
+  ins = parent.getInputStreamForSection(s,
+  compressionCodec);
+  loadINodeDirectorySection(ins);
+} catch (Exception e) {
+  LOG.error("An exception occurred loading INodeDirectories in " +
+  "parallel", e);
+  exceptions.add(new IOException(e));
+} finally {
+  latch.countDown();
+  try {
+ins.close();
+  } catch (IOException ioe) {
+LOG.warn("Failed to close the input stream, ignoring", ioe);
+  }
+}
+  }
+});
+  }
+  try {
+latch.await();
+  } catch (InterruptedException e) {
+LOG.error("Interrupted waiting for countdown latch", e);
+throw new IOException(e);
+  }
+  if (exceptions.size() != 0) {
+LOG.error("{} exceptions occurred loading INodeDirectories",
+exceptions.size());
+throw exceptions.get(0);
+  }
+  LOG.info("Completed loading all INodeDirectory sub-sections");
 }
 
 void loadINodeDirectorySection(InputStream in) throws IOException {
  final List<INodeReference> refList = parent.getLoaderContext()
   .getRefList();
+  ArrayList<INode> inodeList = new ArrayList<>();
 
 Review comment:
   @jojochuang Thanks for your feedback; it is true that performance comes 
first. I am just concerned about the memory overhead, since the cache hit 
ratio decreases with the batched caching approach. Of course it is not a 
serious issue.
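   The submit-and-await error handling in `loadINodeDirectorySectionInParallel`
above follows a common pattern: count every task completion on a latch, collect
worker exceptions in a thread-safe list, and rethrow afterwards. A minimal
standalone sketch, with stand-in tasks instead of the real section loaders:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelSections {
    // Submit every task, await all completions (success or failure) via a
    // CountDownLatch, and return how many tasks failed.
    static int runAll(ExecutorService service, List<Runnable> tasks)
            throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(tasks.size());
        CopyOnWriteArrayList<Exception> exceptions = new CopyOnWriteArrayList<>();
        for (Runnable task : tasks) {
            service.submit(() -> {
                try {
                    task.run();
                } catch (Exception e) {
                    exceptions.add(e);
                } finally {
                    latch.countDown(); // always count down, even on failure
                }
            });
        }
        latch.await(); // wait for every task before inspecting exceptions
        return exceptions.size();
    }

    public static void main(String[] args) throws Exception {
        ExecutorService service = Executors.newFixedThreadPool(2);
        int failures = runAll(service, List.of(
                () -> { },
                () -> { throw new RuntimeException("bad sub-section"); }));
        service.shutdown();
        System.out.println(failures); // prints 1
    }
}
```

   Counting down in `finally` is the key detail: a task that throws must still
release the latch, or `await()` would block forever.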





[GitHub] [hadoop] pingsutw closed pull request #1299: HDDS-1959. Decrement purge interval for Ratis logs

2019-08-15 Thread GitBox
pingsutw closed pull request #1299: HDDS-1959. Decrement purge interval for 
Ratis logs
URL: https://github.com/apache/hadoop/pull/1299
 
 
   





[GitHub] [hadoop] sodonnel commented on a change in pull request #1028: HDFS-14617 - Improve fsimage load time by writing sub-sections to the fsimage index

2019-08-15 Thread GitBox
sodonnel commented on a change in pull request #1028: HDFS-14617 - Improve 
fsimage load time by writing sub-sections to the fsimage index
URL: https://github.com/apache/hadoop/pull/1028#discussion_r314259665
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
 ##
 @@ -197,16 +203,66 @@ public static void updateBlocksMap(INodeFile file, 
BlockManager bm) {
 private final FSDirectory dir;
 private final FSNamesystem fsn;
 private final FSImageFormatProtobuf.Loader parent;
+private ReentrantLock cacheNameMapLock;
+private ReentrantLock blockMapLock;
 
 Loader(FSNamesystem fsn, final FSImageFormatProtobuf.Loader parent) {
   this.fsn = fsn;
   this.dir = fsn.dir;
   this.parent = parent;
+  cacheNameMapLock = new ReentrantLock(true);
+  blockMapLock = new ReentrantLock(true);
+}
+
+void loadINodeDirectorySectionInParallel(ExecutorService service,
+ArrayList<FileSummary.Section> sections, String compressionCodec)
+throws IOException {
+  LOG.info("Loading the INodeDirectory section in parallel with {} sub-" +
+  "sections", sections.size());
+  CountDownLatch latch = new CountDownLatch(sections.size());
+  final CopyOnWriteArrayList<IOException> exceptions =
+  new CopyOnWriteArrayList<>();
+  for (FileSummary.Section s : sections) {
+service.submit(new Runnable() {
+  public void run() {
+InputStream ins = null;
+try {
+  ins = parent.getInputStreamForSection(s,
+  compressionCodec);
+  loadINodeDirectorySection(ins);
+} catch (Exception e) {
+  LOG.error("An exception occurred loading INodeDirectories in " +
+  "parallel", e);
+  exceptions.add(new IOException(e));
+} finally {
+  latch.countDown();
+  try {
+ins.close();
+  } catch (IOException ioe) {
+LOG.warn("Failed to close the input stream, ignoring", ioe);
+  }
+}
+  }
+});
+  }
+  try {
+latch.await();
+  } catch (InterruptedException e) {
+LOG.error("Interrupted waiting for countdown latch", e);
+throw new IOException(e);
+  }
+  if (exceptions.size() != 0) {
+LOG.error("{} exceptions occurred loading INodeDirectories",
+exceptions.size());
+throw exceptions.get(0);
+  }
+  LOG.info("Completed loading all INodeDirectory sub-sections");
 }
 
 void loadINodeDirectorySection(InputStream in) throws IOException {
   final List<INodeReference> refList = parent.getLoaderContext()
   .getRefList();
+  ArrayList<INode> inodeList = new ArrayList<>();
 
 Review comment:
   I missed this comment earlier. I have changed the 1000 to a constant. 
   
   As for the cache in use, I have no concerns. It's a filename cache into 
which all the file names are loaded at startup time, so the same string can be 
re-used when multiple files share a name. It is not an LRU cache or anything 
like that, but a hashmap of filenames used to reduce the overall heap used in 
the namenode. Loading it in batches like this will not affect its usefulness.
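The parallel-load pattern in the diff above — fan each sub-section out to an executor, collect failures in a thread-safe list, and gate on a CountDownLatch so one bad section cannot hang the load — can be sketched in isolation like this. The section names and `loadSection()` are placeholders, and where the patch wraps failures into IOException this sketch throws unchecked to stay self-contained:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;

class ParallelSectionLoader {
    // Submit every sub-section to the pool, always count the latch down
    // (even on failure), and rethrow the first collected failure once all
    // tasks have finished.
    static int loadInParallel(ExecutorService pool, List<String> sections) {
        CountDownLatch latch = new CountDownLatch(sections.size());
        CopyOnWriteArrayList<Exception> exceptions = new CopyOnWriteArrayList<>();
        for (String section : sections) {
            pool.submit(() -> {
                try {
                    loadSection(section);   // stand-in for loadINodeDirectorySection()
                } catch (Exception e) {
                    exceptions.add(e);      // record, but still count down below
                } finally {
                    latch.countDown();      // never let one failure hang the latch
                }
            });
        }
        try {
            latch.await();                  // wait for every sub-section
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("interrupted waiting for sub-sections", e);
        }
        if (!exceptions.isEmpty()) {
            throw new IllegalStateException(exceptions.get(0));
        }
        return sections.size();
    }

    // Hypothetical per-section work; fails for names starting with "bad".
    static void loadSection(String name) {
        if (name.startsWith("bad")) {
            throw new IllegalStateException("corrupt sub-section: " + name);
        }
    }
}
```

Counting the latch down in `finally` is the key detail the review dwells on: without it, a failed task would leave `await()` blocked forever.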





[GitHub] [hadoop] pingsutw opened a new pull request #1301: HDDS-1959. Decrement purge interval for Ratis logs in datanode

2019-08-15 Thread GitBox
pingsutw opened a new pull request #1301: HDDS-1959. Decrement purge interval 
for Ratis logs in datanode
URL: https://github.com/apache/hadoop/pull/1301
 
 
   





[GitHub] [hadoop] Hexiaoqiao commented on issue #1028: HDFS-14617 - Improve fsimage load time by writing sub-sections to the fsimage index

2019-08-15 Thread GitBox
Hexiaoqiao commented on issue #1028: HDFS-14617 - Improve fsimage load time by 
writing sub-sections to the fsimage index
URL: https://github.com/apache/hadoop/pull/1028#issuecomment-521548200
 
 
   @sodonnel for the OIV tools, I got a different result based on branch-2.7, 
IIRC. Unfortunately I did not dig into it deeply at the time. I would like to 
test it again later and report back if I meet any exceptions.
   
   > Two future improvements we could do in a new Jiras, are:
   > Make the ReverseXML processor write out the sub-section headers so it 
creates a parallel enabled image (if the relevant settings are enabled)
   > Investigate allowing OIV to process the image in parallel if it has the 
sub-sections in the index and parallel is enabled.
   
   +1.  Thanks @sodonnel 





[jira] [Commented] (HADOOP-16416) mark DynamoDBMetadataStore.deleteTrackingValueMap as final

2019-08-15 Thread kevin su (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908324#comment-16908324
 ] 

kevin su commented on HADOOP-16416:
---

[~gabor.bota] thanks, I updated the patch.

> mark DynamoDBMetadataStore.deleteTrackingValueMap as final
> --
>
> Key: HADOOP-16416
> URL: https://issues.apache.org/jira/browse/HADOOP-16416
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: kevin su
>Priority: Trivial
> Attachments: HADOOP-16416.001.patch, HADOOP-16416.002.patch, 
> HADOOP-16416.003.patch, HADOOP-16416.004.patch
>
>
> S3Guard's {{DynamoDBMetadataStore.deleteTrackingValueMap}} field is static 
> and can/should be marked as final; its name changed to upper case to match 
> the coding conventions.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16517) Allow optional mutual TLS in HttpServer2

2019-08-15 Thread Kihwal Lee (JIRA)
Kihwal Lee created HADOOP-16517:
---

 Summary: Allow optional mutual TLS in HttpServer2
 Key: HADOOP-16517
 URL: https://issues.apache.org/jira/browse/HADOOP-16517
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Kihwal Lee


Currently the webservice can enforce mTLS by setting 
"dfs.client.https.need-auth" on the server side. (The config name is 
misleading, as it is actually a server-side config. It has been deprecated 
from the client config.) A hadoop client can talk to an mTLS-enforced web 
service by setting "hadoop.ssl.require.client.cert" with a proper ssl config.

We have seen use cases where mTLS needs to be enabled optionally, for only 
those clients who supply their cert. In a mixed environment like this, 
individual services may still enforce mTLS for a subset of endpoints by 
checking the existence of the x509 cert in the request.
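With optional ("want" rather than "need") client auth on the connector, the TLS handshake succeeds whether or not the client presents a certificate; an endpoint that requires mTLS then just checks whether a chain arrived. A minimal sketch of that per-request check, using the standard Servlet attribute name and a plain Map standing in for the HttpServletRequest to keep it self-contained:

```java
import java.security.cert.X509Certificate;
import java.util.Map;

class OptionalMtls {
    // Standard Servlet request attribute under which the container exposes
    // the client's certificate chain when TLS client auth was negotiated.
    static final String CERT_ATTR = "javax.servlet.request.X509Certificate";

    // Returns true only when a non-empty certificate chain was presented;
    // endpoints that enforce mTLS reject the request when this is false.
    static boolean hasClientCert(Map<String, Object> requestAttributes) {
        Object chain = requestAttributes.get(CERT_ATTR);
        return chain instanceof X509Certificate[]
            && ((X509Certificate[]) chain).length > 0;
    }
}
```

This is only a sketch of the endpoint-side check described above, not the HttpServer2 patch itself; the connector-level "want client auth" setting is what makes the cert optional in the first place.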

 






[GitHub] [hadoop] anuengineer closed pull request #1268: HDDS-1910. Cannot build hadoop-hdds-config from scratch in IDEA

2019-08-15 Thread GitBox
anuengineer closed pull request #1268: HDDS-1910. Cannot build 
hadoop-hdds-config from scratch in IDEA
URL: https://github.com/apache/hadoop/pull/1268
 
 
   





[GitHub] [hadoop] adoroszlai commented on issue #1283: HDDS-1954. StackOverflowError in OzoneClientInvocationHandler

2019-08-15 Thread GitBox
adoroszlai commented on issue #1283: HDDS-1954. StackOverflowError in 
OzoneClientInvocationHandler
URL: https://github.com/apache/hadoop/pull/1283#issuecomment-521740704
 
 
   Thanks @anuengineer for committing it.





[GitHub] [hadoop] anuengineer commented on issue #1263: HDDS-1927. Consolidate add/remove Acl into OzoneAclUtil class. Contri…

2019-08-15 Thread GitBox
anuengineer commented on issue #1263: HDDS-1927. Consolidate add/remove Acl 
into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263#issuecomment-521741875
 
 
   I am +1 on these changes, and thanks for fixing the Findbugs issue. Can you 
please rebase and post a new patch? I will go ahead and commit it then. 
@bharatviswa504 
   





[GitHub] [hadoop] bharatviswa504 commented on issue #1204: HDDS-1768. Audit xxxAcl methods in OzoneManager

2019-08-15 Thread GitBox
bharatviswa504 commented on issue #1204: HDDS-1768. Audit xxxAcl methods in 
OzoneManager
URL: https://github.com/apache/hadoop/pull/1204#issuecomment-521688910
 
 
   Thank You @dineshchitlangia for the contribution.
   I will commit this to the trunk.





[jira] [Updated] (HADOOP-16416) mark DynamoDBMetadataStore.deleteTrackingValueMap as final

2019-08-15 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su updated HADOOP-16416:
--
Attachment: HADOOP-16416.004.patch

> mark DynamoDBMetadataStore.deleteTrackingValueMap as final
> --
>
> Key: HADOOP-16416
> URL: https://issues.apache.org/jira/browse/HADOOP-16416
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: kevin su
>Priority: Trivial
> Attachments: HADOOP-16416.001.patch, HADOOP-16416.002.patch, 
> HADOOP-16416.003.patch, HADOOP-16416.004.patch
>
>
> S3Guard's {{DynamoDBMetadataStore.deleteTrackingValueMap}} field is static 
> and can/should be marked as final; its name changed to upper case to match 
> the coding conventions.






[jira] [Commented] (HADOOP-16416) mark DynamoDBMetadataStore.deleteTrackingValueMap as final

2019-08-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908380#comment-16908380
 ] 

Hadoop QA commented on HADOOP-16416:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
0s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
17s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HADOOP-16416 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977722/HADOOP-16416.004.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux bb03ca3e85dc 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c801f7a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16482/testReport/ |
| Max. process+thread count | 307 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16482/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> mark DynamoDBMetadataStore.deleteTrackingValueMap as final
> 

[GitHub] [hadoop] dineshchitlangia commented on issue #1204: HDDS-1768. Audit xxxAcl methods in OzoneManager

2019-08-15 Thread GitBox
dineshchitlangia commented on issue #1204: HDDS-1768. Audit xxxAcl methods in 
OzoneManager
URL: https://github.com/apache/hadoop/pull/1204#issuecomment-521686023
 
 
   @bharatviswa504 , @anuengineer  - Verified the failures are unrelated to the 
test.





[GitHub] [hadoop] bharatviswa504 merged pull request #1204: HDDS-1768. Audit xxxAcl methods in OzoneManager

2019-08-15 Thread GitBox
bharatviswa504 merged pull request #1204: HDDS-1768. Audit xxxAcl methods in 
OzoneManager
URL: https://github.com/apache/hadoop/pull/1204
 
 
   





[GitHub] [hadoop] LeonGao91 commented on issue #1252: HDFS-14678. Allow triggerBlockReport to a specific namenode.

2019-08-15 Thread GitBox
LeonGao91 commented on issue #1252: HDFS-14678. Allow triggerBlockReport to a 
specific namenode.
URL: https://github.com/apache/hadoop/pull/1252#issuecomment-521737233
 
 
   > I gave it a quick review and from that quick look it looks good. Thanks. 
I'll try to look at it harder, especially the test code.
   
   Thanks @jojochuang !





[GitHub] [hadoop] anuengineer commented on issue #1268: HDDS-1910. Cannot build hadoop-hdds-config from scratch in IDEA

2019-08-15 Thread GitBox
anuengineer commented on issue #1268: HDDS-1910. Cannot build 
hadoop-hdds-config from scratch in IDEA
URL: https://github.com/apache/hadoop/pull/1268#issuecomment-521740262
 
 
   Thank you for the contribution. I have committed this to the trunk.





[GitHub] [hadoop] szilard-nemeth commented on issue #1122: YARN-9679. Regular code cleanup in TestResourcePluginManager

2019-08-15 Thread GitBox
szilard-nemeth commented on issue #1122: YARN-9679. Regular code cleanup in 
TestResourcePluginManager
URL: https://github.com/apache/hadoop/pull/1122#issuecomment-521684744
 
 
   Hi @adamantal !
   Thanks for this patch, +1
   Committing to trunk!





[GitHub] [hadoop] szilard-nemeth merged pull request #1122: YARN-9679. Regular code cleanup in TestResourcePluginManager

2019-08-15 Thread GitBox
szilard-nemeth merged pull request #1122: YARN-9679. Regular code cleanup in 
TestResourcePluginManager
URL: https://github.com/apache/hadoop/pull/1122
 
 
   





[jira] [Commented] (HADOOP-16138) hadoop fs mkdir / of nonexistent abfs container raises NPE

2019-08-15 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908308#comment-16908308
 ] 

Gabor Bota commented on HADOOP-16138:
-

Based on offline discussion with [~mackrorysd] we agreed that this output is 
not what we would like to see.
Something like "{{The container does not exist: [nameofthecontainer].}}" is way 
better than what we currently have.
I'll update my PR accordingly.

> hadoop fs mkdir / of nonexistent abfs container raises NPE
> --
>
> Key: HADOOP-16138
> URL: https://issues.apache.org/jira/browse/HADOOP-16138
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Minor
>
> If you try to do a mkdir on the root of a nonexistent container, you get an 
> NPE
> {code}
> hadoop fs -mkdir  abfs://contain...@abfswales1.dfs.core.windows.net/  
> {code}






[jira] [Updated] (HADOOP-16138) hadoop fs mkdir / of nonexistent abfs container raises NPE

2019-08-15 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-16138:

Status: In Progress  (was: Patch Available)

> hadoop fs mkdir / of nonexistent abfs container raises NPE
> --
>
> Key: HADOOP-16138
> URL: https://issues.apache.org/jira/browse/HADOOP-16138
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Minor
>
> If you try to do a mkdir on the root of a nonexistent container, you get an 
> NPE
> {code}
> hadoop fs -mkdir  abfs://contain...@abfswales1.dfs.core.windows.net/  
> {code}






[GitHub] [hadoop] anuengineer closed pull request #1283: HDDS-1954. StackOverflowError in OzoneClientInvocationHandler

2019-08-15 Thread GitBox
anuengineer closed pull request #1283: HDDS-1954. StackOverflowError in 
OzoneClientInvocationHandler
URL: https://github.com/apache/hadoop/pull/1283
 
 
   





[GitHub] [hadoop] anuengineer commented on issue #1283: HDDS-1954. StackOverflowError in OzoneClientInvocationHandler

2019-08-15 Thread GitBox
anuengineer commented on issue #1283: HDDS-1954. StackOverflowError in 
OzoneClientInvocationHandler
URL: https://github.com/apache/hadoop/pull/1283#issuecomment-521737548
 
 
   @avijayanhwx and @dineshchitlangia  Thanks for the reviews. @adoroszlai  
Thanks for the contribution. I have committed this patch to the trunk branch.





[jira] [Commented] (HADOOP-16517) Allow optional mutual TLS in HttpServer2

2019-08-15 Thread Kihwal Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908445#comment-16908445
 ] 

Kihwal Lee commented on HADOOP-16517:
-

YARN's WebAppUtils#loadSslConfiguration() does not support this, so will need 
to be modified as well.

> Allow optional mutual TLS in HttpServer2
> 
>
> Key: HADOOP-16517
> URL: https://issues.apache.org/jira/browse/HADOOP-16517
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-16517.patch
>
>
> Currently the webservice can enforce mTLS by setting 
> "dfs.client.https.need-auth" on the server side. (The config name is 
> misleading, as it is actually a server-side config. It has been deprecated 
> from the client config.) A hadoop client can talk to an mTLS-enforced web 
> service by setting "hadoop.ssl.require.client.cert" with a proper ssl config.
> We have seen use cases where mTLS needs to be enabled optionally, for only 
> those clients who supply their cert. In a mixed environment like this, 
> individual services may still enforce mTLS for a subset of endpoints by 
> checking the existence of the x509 cert in the request.
>  






[GitHub] [hadoop] avijayanhwx commented on issue #1303: HDDS-1903 : Use dynamic ports for SCM in TestSCMClientProtocolServer …

2019-08-15 Thread GitBox
avijayanhwx commented on issue #1303: HDDS-1903 : Use dynamic ports for SCM in 
TestSCMClientProtocolServer …
URL: https://github.com/apache/hadoop/pull/1303#issuecomment-521791717
 
 
   @nandakumar131 / @mukul1987 Can you please review this change?





[GitHub] [hadoop] avijayanhwx commented on issue #1303: HDDS-1903 : Use dynamic ports for SCM in TestSCMClientProtocolServer …

2019-08-15 Thread GitBox
avijayanhwx commented on issue #1303: HDDS-1903 : Use dynamic ports for SCM in 
TestSCMClientProtocolServer …
URL: https://github.com/apache/hadoop/pull/1303#issuecomment-521791743
 
 
   /label ozone





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1266: HDDS-1948. S3 MPU can't be created with octet-stream content-type

2019-08-15 Thread GitBox
bharatviswa504 commented on a change in pull request #1266: HDDS-1948. S3 MPU 
can't be created with octet-stream content-type 
URL: https://github.com/apache/hadoop/pull/1266#discussion_r314537125
 
 

 ##
 File path: 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/HeaderPreprocessor.java
 ##
 @@ -17,39 +17,58 @@
  */
 package org.apache.hadoop.ozone.s3;
 
+import javax.annotation.Priority;
 import javax.ws.rs.container.ContainerRequestContext;
 import javax.ws.rs.container.ContainerRequestFilter;
 import javax.ws.rs.container.PreMatching;
 import javax.ws.rs.core.MediaType;
+import javax.ws.rs.core.MultivaluedMap;
 import javax.ws.rs.ext.Provider;
 import java.io.IOException;
 
 /**
  * Filter to adjust request headers for compatible reasons.
+ *
+ * It should be executed AFTER signature check (VirtualHostStyleFilter).
  */
-
 @Provider
 @PreMatching
+@Priority(150)
 
 Review comment:
   Is this priority setting done to perform this preprocessing after 
VirtualHostStyleFilter processing? (Is there any reason to do it in this 
order?) Can we define these priorities as constants and use them instead of 
hardcoded values?





[jira] [Assigned] (HADOOP-15784) Tracing in Hadoop failed because of Unknown protocol: org.apache.hadoop.tracing.TraceAdminPB.TraceAdminService

2019-08-15 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HADOOP-15784:


Assignee: yunfei liu

> Tracing in Hadoop failed because of  Unknown protocol: 
> org.apache.hadoop.tracing.TraceAdminPB.TraceAdminService
> ---
>
> Key: HADOOP-15784
> URL: https://issues.apache.org/jira/browse/HADOOP-15784
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tracing
>Affects Versions: 2.8.2
>Reporter: yunfei liu
>Assignee: yunfei liu
>Priority: Major
> Attachments: HADOOP-15784-1.patch
>
>
> I'm trying to use tracing feature in Hadoop according to this document 
> [Enabling Dapper-like Tracing in 
> Hadoop|https://hadoop.apache.org/docs/r2.8.5/hadoop-project-dist/hadoop-common/Tracing.html]
> when executing command 
> {code:java}
> hadoop trace -list -host localhost:8020
> {code}
>  it failed with exception 
> {code:java}
> Exception in thread "main" 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
>  Unknown protocol: org.apache.hadoop.tracing.TraceAdminPB.TraceAdminService
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1495)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1441)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1351)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:235)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
>   at com.sun.proxy.$Proxy8.listSpanReceivers(Unknown Source)
>   at 
> org.apache.hadoop.tracing.TraceAdminProtocolTranslatorPB.listSpanReceivers(TraceAdminProtocolTranslatorPB.java:59)
>   at 
> org.apache.hadoop.tracing.TraceAdmin.listSpanReceivers(TraceAdmin.java:73)
>   at org.apache.hadoop.tracing.TraceAdmin.run(TraceAdmin.java:192)
>   at org.apache.hadoop.tracing.TraceAdmin.main(TraceAdmin.java:210)
> {code}
> after examining the source code, I changed the protocolName in 
> TraceAdminProtocolPB.java from 
> org.apache.hadoop.tracing.TraceAdminPB.TraceAdminService to 
> org.apache.hadoop.tracing.TraceAdminProtocol, and the tracing function 
> finally works well
> {code:java}
> [hdfs@yunfeil ~]$ hadoop trace -list -host localhost:8020
> ID  CLASS 
> 1   org.apache.htrace.impl.ZipkinSpanReceiver 
> {code}
> the code change is in the attached patch 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)




[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1305: HDDS-1938. Change omPort parameter type from String to int in BasicOzoneFileSystem#createAdapter

2019-08-15 Thread GitBox
bharatviswa504 commented on a change in pull request #1305: HDDS-1938. Change 
omPort parameter type from String to int in BasicOzoneFileSystem#createAdapter
URL: https://github.com/apache/hadoop/pull/1305#discussion_r314546738
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java
 ##
 @@ -113,7 +113,7 @@ public void initialize(URI name, Configuration conf) 
throws IOException {
 String remaining = matcher.groupCount() == 3 ? matcher.group(3) : null;
 
 String omHost = null;
-String omPort = String.valueOf(-1);
+int omPort = -1;
 
 Review comment:
   Minor Nit: We don't need to set it to -1.
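The change above replaces a stringly-typed port with an `int` and keeps -1 as the "not specified" sentinel. A minimal, self-contained sketch of that parsing pattern (the `parsePort` helper and the authority strings are illustrative, not the actual BasicOzoneFileSystem code):

```java
public class OmAuthorityParseSketch {
    // Hypothetical helper: extract the port from a "host[:port]" authority,
    // returning -1 when no port is present (mirrors the int sentinel above).
    public static int parsePort(String authority) {
        int idx = authority.lastIndexOf(':');
        return idx >= 0 ? Integer.parseInt(authority.substring(idx + 1)) : -1;
    }

    public static void main(String[] args) {
        System.out.println(parsePort("om-host.example.com:9862")); // 9862
        System.out.println(parsePort("om-host.example.com"));      // -1
    }
}
```

A caller can then fall back to the configured default whenever `parsePort` returns -1.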





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1306: HDDS-1971. Update document for HDDS-1891: Ozone fs shell command should work with default port when port number is not spe

2019-08-15 Thread GitBox
bharatviswa504 commented on a change in pull request #1306: HDDS-1971. Update 
document for HDDS-1891: Ozone fs shell command should work with default port 
when port number is not specified
URL: https://github.com/apache/hadoop/pull/1306#discussion_r314547870
 
 

 ##
 File path: hadoop-hdds/docs/content/interface/OzoneFS.md
 ##
 @@ -77,13 +77,39 @@ Or put command etc. In other words, all programs like 
Hive, Spark, and Distcp wi
 Please note that any keys created/deleted in the bucket using methods apart 
from OzoneFileSystem will show up as directories and files in the Ozone File 
System.
 
 Note: Bucket and volume names are not allowed to have a period in them.
-Moreover, the filesystem URI can take a fully qualified form with the OM host 
and port as a part of the path following the volume name.
-For example,
+Moreover, the filesystem URI can take a fully qualified form with the OM host 
and an optional port as a part of the path following the volume name.
+For example, you can specify both host and port:
 
 {{< highlight bash>}}
 hdfs dfs -ls o3fs://bucket.volume.om-host.example.com:5678/key
 {{< /highlight >}}
 
+When the port number is not specified, it will be retrieved from config key 
`ozone.om.address`.
+For example, we have `ozone.om.address` configured as following in 
`ozone-site.xml`:
+
+{{< highlight xml >}}
+  <property>
+    <name>ozone.om.address</name>
+    <value>0.0.0.0:6789</value>
+  </property>
+{{< /highlight >}}
+
+When we run command:
+
+{{< highlight bash>}}
+hdfs dfs -ls o3fs://bucket.volume.om-host.example.com/key
+{{< /highlight >}}
+
+The above command is essentially equivalent to:
+
+{{< highlight bash>}}
+hdfs dfs -ls o3fs://bucket.volume.om-host.example.com:6789/key
+{{< /highlight >}}
+
+Note only the port number in the config is relevant. The host name in config 
`ozone.om.address` is ignored in this case.
 
 Review comment:
   Can we reword this as:
   Note: In the above case, port number from the config is used, whereas 
hostname in the config `ozone.om.address` is ignored.





[GitHub] [hadoop] hadoop-yetus commented on issue #1130: HDDS-1827. Load Snapshot info when OM Ratis server starts.

2019-08-15 Thread GitBox
hadoop-yetus commented on issue #1130: HDDS-1827. Load Snapshot info when OM 
Ratis server starts.
URL: https://github.com/apache/hadoop/pull/1130#issuecomment-521802948
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 71 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for branch |
   | +1 | mvninstall | 596 | trunk passed |
   | +1 | compile | 359 | trunk passed |
   | +1 | checkstyle | 62 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 793 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 149 | trunk passed |
   | 0 | spotbugs | 416 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 608 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 17 | Maven dependency ordering for patch |
   | +1 | mvninstall | 533 | the patch passed |
   | +1 | compile | 348 | the patch passed |
   | +1 | javac | 348 | the patch passed |
   | -0 | checkstyle | 35 | hadoop-ozone: The patch generated 7 new + 0 
unchanged - 0 fixed = 7 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 676 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 88 | hadoop-ozone generated 1 new + 20 unchanged - 0 fixed 
= 21 total (was 20) |
   | -1 | findbugs | 450 | hadoop-ozone generated 4 new + 0 unchanged - 0 fixed 
= 4 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 412 | hadoop-hdds in the patch passed. |
   | -1 | unit | 3714 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 9439 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-ozone |
   |  |  Found reliance on default encoding in 
org.apache.hadoop.ozone.om.ratis.OMRatisSnapshotInfo.loadRatisSnapshotIndex():in
 org.apache.hadoop.ozone.om.ratis.OMRatisSnapshotInfo.loadRatisSnapshotIndex(): 
new java.io.FileReader(File)  At OMRatisSnapshotInfo.java:[line 82] |
   |  |  Found reliance on default encoding in 
org.apache.hadoop.ozone.om.ratis.OMRatisSnapshotInfo.saveRatisSnapshotToDisk(long):in
 
org.apache.hadoop.ozone.om.ratis.OMRatisSnapshotInfo.saveRatisSnapshotToDisk(long):
 new java.io.FileWriter(File)  At OMRatisSnapshotInfo.java:[line 104] |
   |  |  Dereference of the result of readLine() without nullcheck in 
org.apache.hadoop.ozone.om.ratis.OMRatisSnapshotInfo.loadRatisSnapshotIndex()  
At OMRatisSnapshotInfo.java:readLine() without nullcheck in 
org.apache.hadoop.ozone.om.ratis.OMRatisSnapshotInfo.loadRatisSnapshotIndex()  
At OMRatisSnapshotInfo.java:[line 88] |
   |  |  Dereference of the result of readLine() without nullcheck in 
org.apache.hadoop.ozone.om.ratis.OMRatisSnapshotInfo.loadRatisSnapshotIndex()  
At OMRatisSnapshotInfo.java:readLine() without nullcheck in 
org.apache.hadoop.ozone.om.ratis.OMRatisSnapshotInfo.loadRatisSnapshotIndex()  
At OMRatisSnapshotInfo.java:[line 87] |
   | Failed junit tests | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientForAclAuditLog |
   |   | hadoop.ozone.TestContainerStateMachineIdempotency |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1130/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1130 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 120842c5ec46 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 77d102c |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1130/2/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1130/2/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | findbugs | 

[GitHub] [hadoop] bharatviswa504 opened a new pull request #1304: HDDS-1972. Provide example ha proxy with multiple s3 servers back end.

2019-08-15 Thread GitBox
bharatviswa504 opened a new pull request #1304: HDDS-1972. Provide example ha 
proxy with multiple s3 servers back end.
URL: https://github.com/apache/hadoop/pull/1304
 
 
   Tested it on my laptop:
   ```
   $ docker-compose up
   Creating network "ozones3-haproxy_default" with the default driver
   Creating ozones3-haproxy_s3-proxy_1 ... done
   Creating ozones3-haproxy_s3g3_1 ... done
   Creating ozones3-haproxy_scm_1  ... done
   Creating ozones3-haproxy_s3g1_1 ... done
   Creating ozones3-haproxy_om_1   ... done
   Creating ozones3-haproxy_s3g2_1 ... done
   Creating ozones3-haproxy_datanode_1 ... done
   Attaching to ozones3-haproxy_s3-proxy_1, ozones3-haproxy_om_1, 
ozones3-haproxy_datanode_1, ozones3-haproxy_s3g3_1, ozones3-haproxy_s3g1_1, 
ozones3-haproxy_scm_1, ozones3-haproxy_s3g2_1
   s3-proxy_1  | [NOTICE] 226/212303 (1) : New worker #1 (6) forked
   ```
   
   ```
   $ aws s3api --endpoint http://localhost:8081 create-bucket --bucket b12346
   {
    "Location": "http://localhost:8081/b12346"
   }
   HW13865:ozones3-haproxy bviswanadham$ aws s3api --endpoint 
http://localhost:8081 create-bucket --bucket b1234
   {
    "Location": "http://localhost:8081/b1234"
   }
   HW13865:ozones3-haproxy bviswanadham$ aws s3api --endpoint 
http://localhost:8081 create-bucket --bucket b123
   {
    "Location": "http://localhost:8081/b123"
   }
   HW13865:ozones3-haproxy bviswanadham$ aws s3api --endpoint 
http://localhost:8081 list-buckets
   {
   "Buckets": [
   {
   "CreationDate": "2019-08-15T21:23:49.643Z", 
   "Name": "b123"
   }, 
   {
   "CreationDate": "2019-08-15T21:23:45.330Z", 
   "Name": "b1234"
   }, 
   {
   "CreationDate": "2019-08-15T21:23:42.629Z", 
   "Name": "b12346"
   }
   ]
   }
   ```
   
   docker logs:
   
   ```
   s3g1_1  | 2019-08-15 21:23:42 INFO  BucketEndpoint:206 - Location is 
/b12346
   s3g2_1  | 2019-08-15 21:23:45 INFO  BucketEndpoint:206 - Location is 
/b1234
   s3g3_1  | 2019-08-15 21:23:49 INFO  BucketEndpoint:206 - Location is 
/b123
   ```
   





[GitHub] [hadoop] smengcl opened a new pull request #1305: HDDS-1938. Change omPort parameter type from String to int in BasicOzoneFileSystem#createAdapter

2019-08-15 Thread GitBox
smengcl opened a new pull request #1305: HDDS-1938. Change omPort parameter 
type from String to int in BasicOzoneFileSystem#createAdapter
URL: https://github.com/apache/hadoop/pull/1305
 
 
   





[GitHub] [hadoop] smengcl commented on issue #1305: HDDS-1938. Change omPort parameter type from String to int in BasicOzoneFileSystem#createAdapter

2019-08-15 Thread GitBox
smengcl commented on issue #1305: HDDS-1938. Change omPort parameter type from 
String to int in BasicOzoneFileSystem#createAdapter
URL: https://github.com/apache/hadoop/pull/1305#issuecomment-521804697
 
 
   /label ozone





[GitHub] [hadoop] smengcl opened a new pull request #1306: HDDS-1971. Update document for HDDS-1891: Ozone fs shell command should work with default port when port number is not specified

2019-08-15 Thread GitBox
smengcl opened a new pull request #1306: HDDS-1971. Update document for 
HDDS-1891: Ozone fs shell command should work with default port when port 
number is not specified
URL: https://github.com/apache/hadoop/pull/1306
 
 
   





[jira] [Resolved] (HADOOP-16504) Increase ipc.server.listen.queue.size default from 128 to 256

2019-08-15 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HADOOP-16504.
--
   Resolution: Fixed
Fix Version/s: 3.3.0

Thanks [~leosun08]!

> Increase ipc.server.listen.queue.size default from 128 to 256
> -
>
> Key: HADOOP-16504
> URL: https://issues.apache.org/jira/browse/HADOOP-16504
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16504.000.patch, HADOOP-16504.001.patch
>
>
> Because the ipc.server.listen.queue.size default value is too small, TCP's 
> ListenDrop counter grows as the RPC request volume becomes large.
>  The upper limit of the system's half-open (SYN) queue is 65536 and the maximum 
> length of the fully established connection queue is 1024.
> {code:java}
> [work@c3-hadoop-srv-talos27 ~]$ cat /proc/sys/net/ipv4/tcp_max_syn_backlog
> 65536
> [work@c3-hadoop-srv-talos27 ~]$ cat /proc/sys/net/core/somaxconn
> 1024
> {code}
> I think this default value should be adjusted.
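For context on why even 256 is a modest request: in Java the backlog passed to `ServerSocket` is only a hint, and on Linux the kernel caps it at `net.core.somaxconn` (1024 in the example above). A minimal sketch, using an ephemeral port and an illustrative backlog value:

```java
import java.net.ServerSocket;

public class BacklogSketch {
    public static void main(String[] args) throws Exception {
        // Request a listen backlog of 256; the effective queue length is
        // min(256, net.core.somaxconn) on Linux, so raising the Hadoop
        // default only helps if somaxconn allows it.
        try (ServerSocket ss = new ServerSocket(0, 256)) {
            System.out.println("listening on port " + ss.getLocalPort());
        }
    }
}
```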






[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1279: HDDS-1942. Support copy during S3 multipart upload part creation

2019-08-15 Thread GitBox
bharatviswa504 commented on a change in pull request #1279: HDDS-1942. Support 
copy during S3 multipart upload part creation
URL: https://github.com/apache/hadoop/pull/1279#discussion_r314534545
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/s3/MultipartUpload.robot
 ##
 @@ -200,3 +200,26 @@ Test Multipart Upload with the simplified aws s3 cp API
 Execute AWSS3Clicp s3://${BUCKET}/mpyawscli 
/tmp/part1.result
 Execute AWSS3Clirm s3://${BUCKET}/mpyawscli
 Compare files   /tmp/part1
/tmp/part1.result
+
+Test Multipart Upload Put With Copy
+Run Keyword Create Random file  5
+${result} = Execute AWSS3APICli put-object --bucket ${BUCKET} 
--key copytest/source --body /tmp/part1
+
+
+${result} = Execute AWSS3APICli create-multipart-upload 
--bucket ${BUCKET} --key copytest/destination
+
+${uploadID} =   Execute and checkrc  echo '${result}' | jq -r 
'.UploadId'0
+Should contain   ${result}   ${BUCKET}
+Should contain   ${result}   UploadId
+
+${result} = Execute AWSS3APICli  upload-part-copy --bucket 
${BUCKET} --key copytest/destination --upload-id ${uploadID} --part-number 1 
--copy-source ${BUCKET}/copytest/source
+Should contain   ${result}   ${BUCKET}
 
 Review comment:
   Can we add a test for part copy with a byte-range also?
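For reference when writing such a test: HTTP Range headers like `bytes=0-1048575` are inclusive on both ends, which matters when computing the copy length. A self-contained sketch of that semantics in plain Java (the `copyRange` helper is hypothetical, not Ozone code):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class RangeCopySketch {
    // Hypothetical helper: copy bytes [start, endInclusive] from in to out.
    // HTTP Range headers ("bytes=2-5") are inclusive on both ends, so the
    // number of bytes to copy is endInclusive - start + 1.
    public static void copyRange(InputStream in, OutputStream out,
                                 long start, long endInclusive) throws IOException {
        long skipped = 0;
        while (skipped < start) {
            long n = in.skip(start - skipped);
            if (n <= 0) break;
            skipped += n;
        }
        long remaining = endInclusive - start + 1;
        byte[] buf = new byte[8192];
        int n;
        while (remaining > 0
                && (n = in.read(buf, 0, (int) Math.min(buf.length, remaining))) != -1) {
            out.write(buf, 0, n);
            remaining -= n;
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] src = "0123456789".getBytes("US-ASCII");
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        copyRange(new ByteArrayInputStream(src), out, 2, 5);
        System.out.println(out.toString("US-ASCII")); // prints "2345"
    }
}
```

An off-by-one here silently truncates the copied part, which is exactly what a byte-range smoketest would catch.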





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1279: HDDS-1942. Support copy during S3 multipart upload part creation

2019-08-15 Thread GitBox
bharatviswa504 commented on a change in pull request #1279: HDDS-1942. Support 
copy during S3 multipart upload part creation
URL: https://github.com/apache/hadoop/pull/1279#discussion_r314534295
 
 

 ##
 File path: 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/ObjectEndpoint.java
 ##
 @@ -553,12 +555,45 @@ private Response createMultipartKey(String bucket, 
String key, long length,
   OzoneBucket ozoneBucket = getBucket(bucket);
   OzoneOutputStream ozoneOutputStream = ozoneBucket.createMultipartKey(
   key, length, partNumber, uploadID);
-  IOUtils.copy(body, ozoneOutputStream);
+
+  String copyHeader = headers.getHeaderString(COPY_SOURCE_HEADER);
+  if (copyHeader != null) {
+Pair<String, String> result = parseSourceHeader(copyHeader);
+
+String sourceBucket = result.getLeft();
+String sourceKey = result.getRight();
+
+try (OzoneInputStream sourceObject =
+getBucket(sourceBucket).readKey(sourceKey)) {
+
+  String range =
+  headers.getHeaderString(COPY_SOURCE_HEADER_RANGE);
+  if (range != null) {
+RangeHeader rangeHeader =
+RangeHeaderParserUtil.parseRangeHeader(range, 0);
+IOUtils.copyLarge(sourceObject, ozoneOutputStream,
+rangeHeader.getStartOffset(),
+rangeHeader.getEndOffset() - rangeHeader.getStartOffset());
+
+  } else {
+IOUtils.copy(sourceObject, ozoneOutputStream);
+  }
+}
+
+  } else {
+IOUtils.copy(body, ozoneOutputStream);
+  }
   ozoneOutputStream.close();
   OmMultipartCommitUploadPartInfo omMultipartCommitUploadPartInfo =
   ozoneOutputStream.getCommitUploadPartInfo();
-  return Response.status(Status.OK).header("ETag",
-  omMultipartCommitUploadPartInfo.getPartName()).build();
+  String eTag = omMultipartCommitUploadPartInfo.getPartName();
+
+  if (copyHeader != null) {
+return Response.ok(new CopyPartResult(eTag)).build();
 
 Review comment:
   CopyPartResult, per the documentation, has 2 fields: ETag and LastModified. 
Here we are not setting LastModified.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1279: HDDS-1942. Support copy during S3 multipart upload part creation

2019-08-15 Thread GitBox
bharatviswa504 commented on a change in pull request #1279: HDDS-1942. Support 
copy during S3 multipart upload part creation
URL: https://github.com/apache/hadoop/pull/1279#discussion_r314534545
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/s3/MultipartUpload.robot
 ##
 @@ -200,3 +200,26 @@ Test Multipart Upload with the simplified aws s3 cp API
 Execute AWSS3Clicp s3://${BUCKET}/mpyawscli 
/tmp/part1.result
 Execute AWSS3Clirm s3://${BUCKET}/mpyawscli
 Compare files   /tmp/part1
/tmp/part1.result
+
+Test Multipart Upload Put With Copy
+Run Keyword Create Random file  5
+${result} = Execute AWSS3APICli put-object --bucket ${BUCKET} 
--key copytest/source --body /tmp/part1
+
+
+${result} = Execute AWSS3APICli create-multipart-upload 
--bucket ${BUCKET} --key copytest/destination
+
+${uploadID} =   Execute and checkrc  echo '${result}' | jq -r 
'.UploadId'0
+Should contain   ${result}   ${BUCKET}
+Should contain   ${result}   UploadId
+
+${result} = Execute AWSS3APICli  upload-part-copy --bucket 
${BUCKET} --key copytest/destination --upload-id ${uploadID} --part-number 1 
--copy-source ${BUCKET}/copytest/source
+Should contain   ${result}   ${BUCKET}
 
 Review comment:
   Can we add an example of part copy with a byte-range also?





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1266: HDDS-1948. S3 MPU can't be created with octet-stream content-type

2019-08-15 Thread GitBox
bharatviswa504 commented on a change in pull request #1266: HDDS-1948. S3 MPU 
can't be created with octet-stream content-type 
URL: https://github.com/apache/hadoop/pull/1266#discussion_r314537125
 
 

 ##
 File path: 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/HeaderPreprocessor.java
 ##
 @@ -17,39 +17,58 @@
  */
 package org.apache.hadoop.ozone.s3;
 
+import javax.annotation.Priority;
 import javax.ws.rs.container.ContainerRequestContext;
 import javax.ws.rs.container.ContainerRequestFilter;
 import javax.ws.rs.container.PreMatching;
 import javax.ws.rs.core.MediaType;
+import javax.ws.rs.core.MultivaluedMap;
 import javax.ws.rs.ext.Provider;
 import java.io.IOException;
 
 /**
  * Filter to adjust request headers for compatible reasons.
+ *
+ * It should be executed AFTER signature check (VirtualHostStyleFilter).
  */
-
 @Provider
 @PreMatching
+@Priority(150)
 
 Review comment:
   Is this done to perform this preprocessing after VirtualHostStyleFilter 
processing?
   Can we define these constants and use them instead of hardcoded values?
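A minimal sketch of the constants-based approach suggested here (the class and constant names, and the value 100 for VirtualHostStyleFilter, are hypothetical, not the actual Ozone code):

```java
public class FilterPrioritySketch {
    // In JAX-RS, request filters with lower @Priority values run earlier,
    // so 150 places HeaderPreprocessor after a filter registered at 100.
    public static final int VIRTUAL_HOST_STYLE_FILTER_PRIORITY = 100;
    public static final int HEADER_PREPROCESSOR_PRIORITY = 150;

    public static void main(String[] args) {
        System.out.println(
            HEADER_PREPROCESSOR_PRIORITY > VIRTUAL_HOST_STYLE_FILTER_PRIORITY); // true
    }
}
```

The annotation would then read `@Priority(FilterPrioritySketch.HEADER_PREPROCESSOR_PRIORITY)` instead of a bare `@Priority(150)`, making the intended ordering self-documenting.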





[jira] [Commented] (HADOOP-16517) Allow optional mutual TLS in HttpServer2

2019-08-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908550#comment-16908550
 ] 

Hadoop QA commented on HADOOP-16517:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 22s{color} | {color:orange} root: The patch generated 2 new + 547 unchanged 
- 0 fixed = 549 total (was 547) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
35s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}103m 27s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}215m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
|   | hadoop.hdfs.TestMultipleNNPortQOP |
|   | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HADOOP-16517 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977726/HADOOP-16517.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  

[GitHub] [hadoop] hadoop-yetus commented on issue #1304: HDDS-1972. Provide example ha proxy with multiple s3 servers back end.

2019-08-15 Thread GitBox
hadoop-yetus commented on issue #1304: HDDS-1972. Provide example ha proxy with 
multiple s3 servers back end.
URL: https://github.com/apache/hadoop/pull/1304#issuecomment-521835741
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | yamllint | 1 | yamllint was not available. |
   | 0 | shelldocs | 1 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 630 | trunk passed |
   | +1 | compile | 370 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 791 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 169 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 562 | the patch passed |
   | +1 | compile | 380 | the patch passed |
   | +1 | javac | 380 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 689 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 165 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 286 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1832 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 6185 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1304/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1304 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient yamllint shellcheck shelldocs |
   | uname | Linux b004b49d3141 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 77d102c |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1304/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1304/1/testReport/ |
   | Max. process+thread count | 4308 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1304/1/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15760) Include Apache Commons Collections4

2019-08-15 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HADOOP-15760:


Assignee: David Mollitor

> Include Apache Commons Collections4
> ---
>
> Key: HADOOP-15760
> URL: https://issues.apache.org/jira/browse/HADOOP-15760
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.10.0, 3.0.3
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HADOOP-15760.1.patch
>
>
> Please allow for use of Apache Commons Collections 4 library with the end 
> goal of migrating from Apache Commons Collections 3.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)




[GitHub] [hadoop] hadoop-yetus commented on issue #1252: HDFS-14678. Allow triggerBlockReport to a specific namenode.

2019-08-15 Thread GitBox
hadoop-yetus commented on issue #1252: HDFS-14678. Allow triggerBlockReport to 
a specific namenode.
URL: https://github.com/apache/hadoop/pull/1252#issuecomment-521842024
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 120 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1207 | trunk passed |
   | +1 | compile | 236 | trunk passed |
   | +1 | checkstyle | 62 | trunk passed |
   | +1 | mvnsite | 121 | trunk passed |
   | +1 | shadedclient | 889 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 87 | trunk passed |
   | 0 | spotbugs | 178 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 301 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 10 | Maven dependency ordering for patch |
   | +1 | mvninstall | 109 | the patch passed |
   | +1 | compile | 219 | the patch passed |
   | +1 | cc | 219 | the patch passed |
   | +1 | javac | 219 | the patch passed |
   | -0 | checkstyle | 54 | hadoop-hdfs-project: The patch generated 19 new + 
337 unchanged - 2 fixed = 356 total (was 339) |
   | +1 | mvnsite | 105 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 794 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 76 | the patch passed |
   | +1 | findbugs | 305 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 115 | hadoop-hdfs-client in the patch passed. |
   | -1 | unit | 8003 | hadoop-hdfs in the patch failed. |
   | +1 | asflicense | 35 | The patch does not generate ASF License warnings. |
   | | | 12878 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.namenode.TestCommitBlockWithInvalidGenStamp |
   |   | hadoop.hdfs.server.namenode.TestFSImage |
   |   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
   |   | hadoop.hdfs.server.blockmanagement.TestNodeCount |
   |   | hadoop.hdfs.TestDecommission |
   |   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
   |   | hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant |
   |   | hadoop.hdfs.server.namenode.TestCheckpoint |
   |   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1252/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1252 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 6e267a98ed82 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 77d102c |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1252/4/artifact/out/diff-checkstyle-hadoop-hdfs-project.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1252/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1252/4/testReport/ |
   | Max. process+thread count | 3635 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client 
hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1252/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1306: HDDS-1971. Update document for HDDS-1891: Ozone fs shell command should work with default port when port number is not spe

2019-08-15 Thread GitBox
bharatviswa504 commented on a change in pull request #1306: HDDS-1971. Update 
document for HDDS-1891: Ozone fs shell command should work with default port 
when port number is not specified
URL: https://github.com/apache/hadoop/pull/1306#discussion_r314548268
 
 

 ##
 File path: hadoop-hdds/docs/content/interface/OzoneFS.md
 ##
 @@ -77,13 +77,39 @@ Or put command etc. In other words, all programs like 
Hive, Spark, and Distcp wi
 Please note that any keys created/deleted in the bucket using methods apart 
from OzoneFileSystem will show up as directories and files in the Ozone File 
System.
 
 Note: Bucket and volume names are not allowed to have a period in them.
-Moreover, the filesystem URI can take a fully qualified form with the OM host 
and port as a part of the path following the volume name.
-For example,
+Moreover, the filesystem URI can take a fully qualified form with the OM host 
and an optional port as a part of the path following the volume name.
+For example, you can specify both host and port:
 
 {{< highlight bash>}}
 hdfs dfs -ls o3fs://bucket.volume.om-host.example.com:5678/key
 {{< /highlight >}}
 
+When the port number is not specified, it will be retrieved from config key 
`ozone.om.address`.
 
 Review comment:
   When the port number is not specified, it will be retrieved from the config key 
`ozone.om.address` if defined, or fall back to the default port 9862.
   
   With this, we can eliminate the last lines, 109-111.
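The fallback order discussed in this review (explicit URI port, then the port in `ozone.om.address`, then the default 9862) can be sketched as follows. This is a hypothetical helper for illustration, not the actual Ozone adapter code; the method and class names are made up, and `uriPort == -1` stands for "no port in the URI" as in `java.net.URI`.

```java
// Hypothetical sketch of the port-resolution order described in the review:
// 1) explicit port in the o3fs URI, 2) port from ozone.om.address, 3) 9862.
public class OmPortResolver {
    static final int DEFAULT_OM_PORT = 9862;

    static int resolvePort(int uriPort, String configuredAddress) {
        if (uriPort != -1) {
            // The o3fs URI carried an explicit port, e.g. om-host:5678.
            return uriPort;
        }
        if (configuredAddress != null && configuredAddress.contains(":")) {
            // Fall back to the port in the configured ozone.om.address value.
            return Integer.parseInt(
                configuredAddress.substring(configuredAddress.indexOf(':') + 1));
        }
        // Nothing specified anywhere: use the default OM port.
        return DEFAULT_OM_PORT;
    }

    public static void main(String[] args) {
        System.out.println(resolvePort(-1, null)); // no URI port, no config
    }
}
```

With such a helper, the documentation's three cases (port in URI, `ozone.om.address` defined, neither) map to the three branches directly.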





[GitHub] [hadoop] hadoop-yetus commented on issue #1303: HDDS-1903 : Use dynamic ports for SCM in TestSCMClientProtocolServer …

2019-08-15 Thread GitBox
hadoop-yetus commented on issue #1303: HDDS-1903 : Use dynamic ports for SCM in 
TestSCMClientProtocolServer …
URL: https://github.com/apache/hadoop/pull/1303#issuecomment-521801636
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 776 | trunk passed |
   | +1 | compile | 849 | trunk passed |
   | +1 | checkstyle | 200 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1446 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 170 | trunk passed |
   | 0 | spotbugs | 434 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 635 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 569 | the patch passed |
   | +1 | compile | 358 | the patch passed |
   | +1 | javac | 358 | the patch passed |
   | +1 | checkstyle | 66 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 622 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 162 | the patch passed |
   | +1 | findbugs | 644 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 287 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1926 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 8834 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.block.TestBlockManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1303/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1303 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux c7e6625255d4 4.4.0-157-generic #185-Ubuntu SMP Tue Jul 23 
09:17:01 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 77d102c |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1303/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1303/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1303/1/testReport/ |
   | Max. process+thread count | 5174 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1303/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16504) Increase ipc.server.listen.queue.size default from 128 to 256

2019-08-15 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908519#comment-16908519
 ] 

Wei-Chiu Chuang commented on HADOOP-16504:
--

+1 committing this now.

> Increase ipc.server.listen.queue.size default from 128 to 256
> -
>
> Key: HADOOP-16504
> URL: https://issues.apache.org/jira/browse/HADOOP-16504
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16504.000.patch, HADOOP-16504.001.patch
>
>
> Because the default value of ipc.server.listen.queue.size is too small, TCP's 
> ListenDrop counter grows along with the volume of RPC requests.
>  The upper limit of the system's half-open (SYN) queue is 65536, while the 
> maximum size of the fully-established connection queue is only 1024.
> {code:java}
> [work@c3-hadoop-srv-talos27 ~]$ cat /proc/sys/net/ipv4/tcp_max_syn_backlog
> 65536
> [work@c3-hadoop-srv-talos27 ~]$ cat /proc/sys/net/core/somaxconn
> 1024
> {code}
> I think this default value should be adjusted.
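The sizing argument above can be sketched in a few lines: the kernel silently caps any `listen()` backlog at `net.core.somaxconn`, so the effective accept queue is the minimum of the requested backlog and `somaxconn`. The class below is an illustrative sketch with hypothetical example values, not Hadoop code; read the real limits from `/proc/sys/net/core/somaxconn` on a live host.

```java
// Sketch: the kernel caps a server's listen() backlog at net.core.somaxconn,
// so raising ipc.server.listen.queue.size beyond somaxconn has no effect.
public class ListenBacklog {
    static int effectiveBacklog(int requestedBacklog, int somaxconn) {
        // The accept queue the kernel actually uses.
        return Math.min(requestedBacklog, somaxconn);
    }

    public static void main(String[] args) {
        // Example values from the JIRA: proposed default 256, somaxconn 1024.
        System.out.println(effectiveBacklog(256, 1024)); // prints 256
    }
}
```

So with `somaxconn` at 1024, raising the Hadoop default from 128 to 256 still takes full effect; only values above 1024 would be clamped.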






[jira] [Commented] (HADOOP-16504) Increase ipc.server.listen.queue.size default from 128 to 256

2019-08-15 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908544#comment-16908544
 ] 

Hudson commented on HADOOP-16504:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17134 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17134/])
HADOOP-16504. Increase ipc.server.listen.queue.size default from 128 to 
(weichiu: rev 5882cf94ea5094626eb86ff7ac7f8cd32aacb139)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml


> Increase ipc.server.listen.queue.size default from 128 to 256
> -
>
> Key: HADOOP-16504
> URL: https://issues.apache.org/jira/browse/HADOOP-16504
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16504.000.patch, HADOOP-16504.001.patch
>
>
> Because the default value of ipc.server.listen.queue.size is too small, TCP's 
> ListenDrop counter grows along with the volume of RPC requests.
>  The upper limit of the system's half-open (SYN) queue is 65536, while the 
> maximum size of the fully-established connection queue is only 1024.
> {code:java}
> [work@c3-hadoop-srv-talos27 ~]$ cat /proc/sys/net/ipv4/tcp_max_syn_backlog
> 65536
> [work@c3-hadoop-srv-talos27 ~]$ cat /proc/sys/net/core/somaxconn
> 1024
> {code}
> I think this default value should be adjusted.






[GitHub] [hadoop] jojochuang commented on issue #1028: HDFS-14617 - Improve fsimage load time by writing sub-sections to the fsimage index

2019-08-15 Thread GitBox
jojochuang commented on issue #1028: HDFS-14617 - Improve fsimage load time by 
writing sub-sections to the fsimage index
URL: https://github.com/apache/hadoop/pull/1028#issuecomment-521838800
 
 
   +1 from me. I've reviewed it several times and I think this is good. Will 
let it sit for a few days for other folks to comment on.





[GitHub] [hadoop] bharatviswa504 commented on issue #1305: HDDS-1938. Change omPort parameter type from String to int in BasicOzoneFileSystem#createAdapter

2019-08-15 Thread GitBox
bharatviswa504 commented on issue #1305: HDDS-1938. Change omPort parameter 
type from String to int in BasicOzoneFileSystem#createAdapter
URL: https://github.com/apache/hadoop/pull/1305#issuecomment-521842348
 
 
   @smengcl 
   FYI: after posting a PR, could you post a comment "/label ozone" to trigger 
the Ozone CI?





[GitHub] [hadoop] avijayanhwx opened a new pull request #1303: HDDS-1903 : Use dynamic ports for SCM in TestSCMClientProtocolServer …

2019-08-15 Thread GitBox
avijayanhwx opened a new pull request #1303: HDDS-1903 : Use dynamic ports for 
SCM in TestSCMClientProtocolServer …
URL: https://github.com/apache/hadoop/pull/1303
 
 
   …and TestSCMSecurityProtocolServer.
   
   
   Add dynamic ports for a couple of unit tests failing due to the following 
error.
   
   _java.net.BindException: Problem binding to [0.0.0.0:9961] 
java.net.BindException: Address already in use; For more details see:  
http://wiki.apache.org/hadoop/BindException
   at_ 





[jira] [Updated] (HADOOP-16517) Allow optional mutual TLS in HttpServer2

2019-08-15 Thread Kihwal Lee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-16517:

Status: Patch Available  (was: Open)

The patch has no unit test.

> Allow optional mutual TLS in HttpServer2
> 
>
> Key: HADOOP-16517
> URL: https://issues.apache.org/jira/browse/HADOOP-16517
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-16517.patch
>
>
> Currently the web service can enforce mTLS by setting 
> "dfs.client.https.need-auth" on the server side. (The config name is 
> misleading, as it is actually a server-side config; it has been deprecated 
> from the client config.) A Hadoop client can talk to an mTLS-enforced web 
> service by setting "hadoop.ssl.require.client.cert" with a proper SSL config.
> We have seen a use case where mTLS needs to be enabled optionally, only for 
> those clients who supply their certs. In a mixed environment like this, 
> individual services may still enforce mTLS for a subset of endpoints by 
> checking for the existence of an x509 cert in the request.
>  






[jira] [Updated] (HADOOP-16517) Allow optional mutual TLS in HttpServer2

2019-08-15 Thread Kihwal Lee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-16517:

Attachment: HADOOP-16517.patch

> Allow optional mutual TLS in HttpServer2
> 
>
> Key: HADOOP-16517
> URL: https://issues.apache.org/jira/browse/HADOOP-16517
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-16517.patch
>
>
> Currently the web service can enforce mTLS by setting 
> "dfs.client.https.need-auth" on the server side. (The config name is 
> misleading, as it is actually a server-side config; it has been deprecated 
> from the client config.) A Hadoop client can talk to an mTLS-enforced web 
> service by setting "hadoop.ssl.require.client.cert" with a proper SSL config.
> We have seen a use case where mTLS needs to be enabled optionally, only for 
> those clients who supply their certs. In a mixed environment like this, 
> individual services may still enforce mTLS for a subset of endpoints by 
> checking for the existence of an x509 cert in the request.
>  






[GitHub] [hadoop] dineshchitlangia commented on issue #1204: HDDS-1768. Audit xxxAcl methods in OzoneManager

2019-08-15 Thread GitBox
dineshchitlangia commented on issue #1204: HDDS-1768. Audit xxxAcl methods in 
OzoneManager
URL: https://github.com/apache/hadoop/pull/1204#issuecomment-521773241
 
 
   @bharatviswa504  thanks for review & commit.





[GitHub] [hadoop] jojochuang commented on issue #1252: HDFS-14678. Allow triggerBlockReport to a specific namenode.

2019-08-15 Thread GitBox
jojochuang commented on issue #1252: HDFS-14678. Allow triggerBlockReport to a 
specific namenode.
URL: https://github.com/apache/hadoop/pull/1252#issuecomment-521792278
 
 
   +1





[GitHub] [hadoop] hanishakoneru commented on a change in pull request #1130: HDDS-1827. Load Snapshot info when OM Ratis server starts.

2019-08-15 Thread GitBox
hanishakoneru commented on a change in pull request #1130: HDDS-1827. Load 
Snapshot info when OM Ratis server starts.
URL: https://github.com/apache/hadoop/pull/1130#discussion_r314449239
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerRatisServer.java
 ##
 @@ -126,6 +130,24 @@ public void testStartOMRatisServer() throws Exception {
 LifeCycle.State.RUNNING, omRatisServer.getServerState());
   }
 
+  @Test
+  public void testLoadSnapshotInfoOnStart() throws Exception {
+// Stop the Ratis server and manually update the snapshotInfo.
+long oldSnapshotIndex = ozoneManager.saveRatisSnapshot();
+ozoneManager.getSnapshotInfo().saveRatisSnapshotToDisk(oldSnapshotIndex);
+omRatisServer.stop();
+long newSnapshotIndex = oldSnapshotIndex + 100;
+ozoneManager.getSnapshotInfo().saveRatisSnapshotToDisk(newSnapshotIndex);
+
+// Start new Ratis server. It should pick up and load the new SnapshotInfo
+omRatisServer = OzoneManagerRatisServer.newOMRatisServer(conf, 
ozoneManager,
+omNodeDetails, Collections.emptyList());
+omRatisServer.start();
 
 Review comment:
   omRatisServer is stopped when shutdown() is executed.





[GitHub] [hadoop] hadoop-yetus commented on issue #1259: HDDS-1105 : Add mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone Manager

2019-08-15 Thread GitBox
hadoop-yetus commented on issue #1259: HDDS-1105 : Add mechanism in Recon to 
obtain DB snapshot 'delta' updates from Ozone Manager
URL: https://github.com/apache/hadoop/pull/1259#issuecomment-521779783
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 9 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for branch |
   | +1 | mvninstall | 597 | trunk passed |
   | +1 | compile | 352 | trunk passed |
   | +1 | checkstyle | 66 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 855 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 152 | trunk passed |
   | 0 | spotbugs | 416 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 606 | trunk passed |
   | -0 | patch | 454 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for patch |
   | +1 | mvninstall | 534 | the patch passed |
   | +1 | compile | 358 | the patch passed |
   | +1 | javac | 358 | the patch passed |
   | +1 | checkstyle | 65 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 670 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 86 | hadoop-ozone generated 6 new + 20 unchanged - 0 fixed 
= 26 total (was 20) |
   | +1 | findbugs | 619 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 296 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1796 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 37 | The patch does not generate ASF License warnings. |
   | | | 7423 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1259/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1259 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 12a6161526fb 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 46d6191 |
   | Default Java | 1.8.0_222 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1259/6/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1259/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1259/6/testReport/ |
   | Max. process+thread count | 4448 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/ozone-manager hadoop-ozone/ozone-recon U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1259/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1303: HDDS-1903 : Use dynamic ports for SCM in TestSCMClientProtocolServer …

2019-08-15 Thread GitBox
hadoop-yetus commented on issue #1303: HDDS-1903 : Use dynamic ports for SCM in 
TestSCMClientProtocolServer …
URL: https://github.com/apache/hadoop/pull/1303#issuecomment-521797647
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 46 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 616 | trunk passed |
   | +1 | compile | 351 | trunk passed |
   | +1 | checkstyle | 67 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 865 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 153 | trunk passed |
   | 0 | spotbugs | 410 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 600 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 548 | the patch passed |
   | +1 | compile | 359 | the patch passed |
   | +1 | javac | 359 | the patch passed |
   | +1 | checkstyle | 69 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 670 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 150 | the patch passed |
   | +1 | findbugs | 627 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 276 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1673 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 7271 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1303/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1303 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 64f3394b2dd2 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 77d102c |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1303/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1303/2/testReport/ |
   | Max. process+thread count | 5412 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1303/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Assigned] (HADOOP-16517) Allow optional mutual TLS in HttpServer2

2019-08-15 Thread Kihwal Lee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee reassigned HADOOP-16517:
---

Assignee: Kihwal Lee

> Allow optional mutual TLS in HttpServer2
> 
>
> Key: HADOOP-16517
> URL: https://issues.apache.org/jira/browse/HADOOP-16517
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
>
> Currently the webservice can enforce mTLS by setting 
> "dfs.client.https.need-auth" on the server side. (The config name is 
> misleading, as it is actually server-side config. It has been deprecated from 
> the client config)  A hadoop client can talk to mTLS enforced web service by 
> setting "hadoop.ssl.require.client.cert" with proper ssl config.
> We have seen a use case where mTLS needs to be enabled optionally, only for 
> those clients who supply their cert. In a mixed environment like this, 
> individual services may still enforce mTLS for a subset of endpoints by 
> checking for the existence of an x509 cert in the request.
>  
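The per-endpoint check described above can be sketched as follows. This is a hypothetical illustration, not the HADOOP-16517 patch: the nested `Request` interface is a stand-in for the servlet request, and the only detail assumed from the real API is the standard attribute key `javax.servlet.request.X509Certificate`, under which containers expose the client certificate chain.

```java
import java.security.cert.X509Certificate;

public class MutualTlsCheck {

    /** Stand-in for the servlet request (illustrative, not the real API). */
    interface Request {
        Object getAttribute(String name);
    }

    /**
     * Returns true if the caller presented a client certificate.
     * Containers attach the verified chain under this standard attribute
     * key when the TLS handshake included a client cert.
     */
    static boolean hasClientCert(Request request) {
        Object attr = request.getAttribute("javax.servlet.request.X509Certificate");
        return attr instanceof X509Certificate[]
            && ((X509Certificate[]) attr).length > 0;
    }

    public static void main(String[] args) {
        // A request with no certificate attribute: treated as anonymous.
        Request anonymous = name -> null;
        System.out.println(hasClientCert(anonymous)); // prints false
    }
}
```

An endpoint that requires mTLS would reject the request (e.g. with HTTP 403) when this check fails, while optional-mTLS endpoints would simply proceed.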



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)




[GitHub] [hadoop] adoroszlai commented on a change in pull request #1255: HDDS-1935. Improve the visibility with Ozone Insight tool

2019-08-15 Thread GitBox
adoroszlai commented on a change in pull request #1255: HDDS-1935. Improve the 
visibility with Ozone Insight tool
URL: https://github.com/apache/hadoop/pull/1255#discussion_r314484032
 
 

 ##
 File path: 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/List.java
 ##
 @@ -0,0 +1,38 @@
+package org.apache.hadoop.ozone.insight;
+
+import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+
+import picocli.CommandLine;
+
+import java.util.Map;
+import java.util.concurrent.Callable;
+
+/**
+ * Subcommand to list of the available insight points.
+ */
+@CommandLine.Command(
+name = "list",
+description = "Show available insight points.",
+mixinStandardHelpOptions = true,
+versionProvider = HddsVersionProvider.class)
+public class List extends BaseInsightSubcommand implements Callable<Void> {
+
+  @CommandLine.Parameters(defaultValue = "")
+  private String selection;
 
 Review comment:
   Do you plan to use this parameter, e.g. to filter the list of available 
insight points?





[GitHub] [hadoop] adoroszlai commented on a change in pull request #1255: HDDS-1935. Improve the visibility with Ozone Insight tool

2019-08-15 Thread GitBox
adoroszlai commented on a change in pull request #1255: HDDS-1935. Improve the 
visibility with Ozone Insight tool
URL: https://github.com/apache/hadoop/pull/1255#discussion_r314290947
 
 

 ##
 File path: 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/om/KeyManagerInsight.java
 ##
 @@ -0,0 +1,61 @@
+package org.apache.hadoop.ozone.insight.om;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.hadoop.ozone.insight.BaseInsightPoint;
+import org.apache.hadoop.ozone.insight.Component.Type;
+import org.apache.hadoop.ozone.insight.LoggerSource;
+import org.apache.hadoop.ozone.insight.MetricDisplay;
+import org.apache.hadoop.ozone.insight.MetricGroupDisplay;
+import org.apache.hadoop.ozone.om.KeyManagerImpl;
+
+/**
+ * Insight implementation for the key management related operations.
+ */
+public class KeyManagerInsight extends BaseInsightPoint {
+
+  @Override
+  public List<MetricGroupDisplay> getMetrics() {
+List<MetricGroupDisplay> display = new ArrayList<>();
+
+MetricGroupDisplay state =
+new MetricGroupDisplay(Type.OM, "Key related metrics");
+state
+.addMetrics(new MetricDisplay("Number of keys", 
"om_metrics_num_keys"));
+state.addMetrics(new MetricDisplay("Number of key operations",
+"om_metrics_num_key_ops"));
+
+display.add(state);
+
+MetricGroupDisplay key =
+new MetricGroupDisplay(Type.OM, "Key operation stats");
+for (String operation : new String[] {"allocate", "commit", "lookup",
+"list", "delete"}) {
+  key.addMetrics(new MetricDisplay(
+  "Number of key " + operation + "s (failure + success)",
+  "om_metrics_num_key_" + operation));
+  key.addMetrics(
+  new MetricDisplay("Number of failed key " + operation + "s",
+  "om_metrics_num_key_" + operation + "_fails"));
+}
+display.add(key);
+
+return display;
+  }
+
+  @Override
+  public List<LoggerSource> getRelatedLoggers(boolean verbose) {
+List<LoggerSource> loggers = new ArrayList<>();
+loggers.add(
+new LoggerSource(Type.SCM, KeyManagerImpl.class,
 
 Review comment:
   I think it should be:
   
   ```suggestion
   new LoggerSource(Type.OM, KeyManagerImpl.class,
   ```





[GitHub] [hadoop] adoroszlai commented on a change in pull request #1255: HDDS-1935. Improve the visibility with Ozone Insight tool

2019-08-15 Thread GitBox
adoroszlai commented on a change in pull request #1255: HDDS-1935. Improve the 
visibility with Ozone Insight tool
URL: https://github.com/apache/hadoop/pull/1255#discussion_r314483408
 
 

 ##
 File path: 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/ConfigurationSubCommand.java
 ##
 @@ -0,0 +1,70 @@
+package org.apache.hadoop.ozone.insight;
+
+import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.conf.Config;
+import org.apache.hadoop.hdds.conf.ConfigGroup;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+
+import picocli.CommandLine;
+
+import java.lang.reflect.Method;
+import java.util.concurrent.Callable;
+
+/**
+ * Subcommand to show configuration values/documentation.
+ */
+@CommandLine.Command(
+name = "config",
+description = "Show configuration for a specific subcomponents",
+mixinStandardHelpOptions = true,
+versionProvider = HddsVersionProvider.class)
+public class ConfigurationSubCommand extends BaseInsightSubcommand
+implements Callable<Void> {
+
+  @CommandLine.Parameters(defaultValue = "")
+  private String selection;
+
+  @Override
+  public Void call() throws Exception {
+InsightPoint insight =
+getInsight(getInsightCommand().createOzoneConfiguration(), selection);
+System.out.println(
+"Configuration for `" + selection + "` (" + insight.getDescription()
++ ")");
+System.out.println();
+for (Class clazz : insight.getConfigurationClasses()) {
+  showConfig(clazz);
+
+}
+return null;
+  }
+
+  private void showConfig(Class clazz) {
+OzoneConfiguration conf = new OzoneConfiguration();
+conf.addResource("http://localhost:9876/conf");
 
 Review comment:
   Is this SCM-specific?





[GitHub] [hadoop] adoroszlai commented on a change in pull request #1255: HDDS-1935. Improve the visibility with Ozone Insight tool

2019-08-15 Thread GitBox
adoroszlai commented on a change in pull request #1255: HDDS-1935. Improve the 
visibility with Ozone Insight tool
URL: https://github.com/apache/hadoop/pull/1255#discussion_r314274453
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/protocolPB/ScmBlockLocationProtocolServerSideTranslatorPB.java
 ##
 @@ -97,15 +96,45 @@ public SCMBlockLocationResponse send(RpcController 
controller,
   SCMBlockLocationRequest request) throws ServiceException {
 String traceId = request.getTraceID();
 
+if (LOG.isTraceEnabled()) {
+  LOG.trace("BlockLocationProtocol {} request is received: {}",
+  request.getCmdType().toString(),
+  request.toString().replaceAll("\n", "n"));
+
+} else if (LOG.isDebugEnabled()) {
+  LOG.debug("BlockLocationProtocol {} request is received",
+  request.getCmdType().toString());
+}
+
+protocolMessageMetrics.increment(request.getCmdType());
+
+try (Scope scope = TracingUtil
+.importAndCreateScope(
+"ScmBlockLocationProtocol." + request.getCmdType(),
+request.getTraceID())) {
+  SCMBlockLocationResponse response =
+  processMessage(request, traceId);
+
+  if (LOG.isTraceEnabled()) {
+LOG.trace(
+"BlockLocationProtocol {} request is processed. Response: "
++ "{}",
+request.getCmdType().toString(),
+request.toString().replaceAll("\n", "n"));
 
 Review comment:
   ```suggestion
   response.toString().replaceAll("\n", "n"));
   ```
   
   Although response is already logged in `processMessage()`.





[GitHub] [hadoop] adoroszlai commented on a change in pull request #1255: HDDS-1935. Improve the visibility with Ozone Insight tool

2019-08-15 Thread GitBox
adoroszlai commented on a change in pull request #1255: HDDS-1935. Improve the 
visibility with Ozone Insight tool
URL: https://github.com/apache/hadoop/pull/1255#discussion_r314292807
 
 

 ##
 File path: 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/LogSubcommand.java
 ##
 @@ -0,0 +1,142 @@
+package org.apache.hadoop.ozone.insight;
+
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStreamReader;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.Callable;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+import java.util.stream.Collectors;
+
+import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+
+import org.apache.http.HttpResponse;
+import org.apache.http.client.HttpClient;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.impl.client.HttpClientBuilder;
+import picocli.CommandLine;
+
+/**
+ * Subcommand to display log.
+ */
+@CommandLine.Command(
+name = "log",
+aliases = "logs",
+description = "Show log4j events related to the insight point",
+mixinStandardHelpOptions = true,
+versionProvider = HddsVersionProvider.class)
+public class LogSubcommand extends BaseInsightSubcommand
+implements Callable<Void> {
+
+  @CommandLine.Parameters(description = "Name of the insight point (use list "
+  + "to check the available options)")
+  private String insightName;
+
+  @CommandLine.Option(names = "-v", description = "Enable verbose mode to "
+  + "show more information / detailed message")
+  private boolean verbose;
+
+  @CommandLine.Parameters(defaultValue = "")
+  private String selection;
 
 Review comment:
   It seems only one of `insightName` and `selection` is needed.
   
* `insightName` is not used currently, but has description
* `selection` is not documented





[GitHub] [hadoop] adoroszlai commented on a change in pull request #1255: HDDS-1935. Improve the visibility with Ozone Insight tool

2019-08-15 Thread GitBox
adoroszlai commented on a change in pull request #1255: HDDS-1935. Improve the 
visibility with Ozone Insight tool
URL: https://github.com/apache/hadoop/pull/1255#discussion_r314293659
 
 

 ##
 File path: 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/MetricsSubCommand.java
 ##
 @@ -0,0 +1,114 @@
+package org.apache.hadoop.ozone.insight;
+
+import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+
+import org.apache.http.HttpResponse;
+import org.apache.http.client.HttpClient;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.impl.client.HttpClientBuilder;
+import picocli.CommandLine;
+
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStreamReader;
+import java.nio.charset.StandardCharsets;
+import java.util.List;
+import java.util.*;
+import java.util.Map.Entry;
+import java.util.concurrent.Callable;
+import java.util.stream.Collectors;
+
+/**
+ * Command line interface to show metrics for a specific component.
+ */
+@CommandLine.Command(
+name = "metrics",
+aliases = "metric",
+description = "Show available metrics.",
+mixinStandardHelpOptions = true,
+versionProvider = HddsVersionProvider.class)
+public class MetricsSubCommand extends BaseInsightSubcommand
+implements Callable<Void> {
+
+  @CommandLine.Parameters(defaultValue = "")
 
 Review comment:
   `defaultValue = ""` prevents help from being shown for incomplete command 
(`ozone insight metrics`).  Instead it gives `No such component` error.
   
   Same for `ConfigurationSubCommand`.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1303: HDDS-1903 : Use dynamic ports for SCM in TestSCMClientProtocolServer …

2019-08-15 Thread GitBox
bharatviswa504 commented on a change in pull request #1303: HDDS-1903 : Use 
dynamic ports for SCM in TestSCMClientProtocolServer …
URL: https://github.com/apache/hadoop/pull/1303#discussion_r314549215
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMSecurityProtocolServer.java
 ##
 @@ -32,10 +39,19 @@
 
   @Rule
   public Timeout timeout = new Timeout(1000 * 20);
+  private static int scmRpcSecurePort;
+
+  @BeforeClass
+  public static void setupClass() throws Exception {
+scmRpcSecurePort = new ServerSocket(0).getLocalPort();
+  }
 
   @Before
   public void setUp() throws Exception {
 config = new OzoneConfiguration();
+config.set(OZONE_SCM_SECURITY_SERVICE_ADDRESS_KEY,
+StringUtils.join(OZONE_SCM_SECURITY_SERVICE_BIND_HOST_DEFAULT,
+":", String.valueOf(scmRpcSecurePort)));
 
 Review comment:
   Here we can directly specify OZONE_SCM_SECURITY_SERVICE_BIND_HOST_DEFAULT:0
   Is there any reason for doing it this way?
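For context on the pattern being questioned: `new ServerSocket(0)` asks the OS for a free ephemeral port, but the probe socket must be closed, and another process can take the port between the probe and the later bind — which is why binding directly to `BIND_HOST:0`, as suggested above, avoids the race. A minimal sketch of the probe with the socket closed via try-with-resources (class and method names are illustrative):

```java
import java.io.IOException;
import java.net.ServerSocket;

public class FreePortProbe {

    /**
     * Port 0 tells the OS to pick any free ephemeral port; closing the
     * probe socket releases it so the caller can (usually) rebind it.
     * Note the inherent race: the port is only reserved while the probe
     * socket is open.
     */
    static int getFreePort() throws IOException {
        try (ServerSocket probe = new ServerSocket(0)) {
            return probe.getLocalPort();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("picked port " + getFreePort());
    }
}
```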





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1303: HDDS-1903 : Use dynamic ports for SCM in TestSCMClientProtocolServer …

2019-08-15 Thread GitBox
bharatviswa504 commented on a change in pull request #1303: HDDS-1903 : Use 
dynamic ports for SCM in TestSCMClientProtocolServer …
URL: https://github.com/apache/hadoop/pull/1303#discussion_r314549278
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMClientProtocolServer.java
 ##
 @@ -46,7 +52,11 @@
 
   @Before
   public void setUp() throws Exception {
+int port = new ServerSocket(0).getLocalPort();
 config = new OzoneConfiguration();
+config.set(OZONE_SCM_CLIENT_ADDRESS_KEY,
+StringUtils.join(OZONE_SCM_CLIENT_BIND_HOST_DEFAULT, ":",
 
 Review comment:
   Same here too.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1278: HDDS-1950. S3 MPU part-list call fails if there are no parts

2019-08-15 Thread GitBox
bharatviswa504 commented on a change in pull request #1278: HDDS-1950. S3 MPU 
part-list call fails if there are no parts
URL: https://github.com/apache/hadoop/pull/1278#discussion_r314549567
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -1329,8 +1329,16 @@ public OmMultipartUploadListParts listParts(String 
volumeName,
 multipartKeyInfo.getPartKeyInfoMap();
 Iterator> partKeyInfoMapIterator =
 partKeyInfoMap.entrySet().iterator();
-HddsProtos.ReplicationType replicationType =
-partKeyInfoMap.firstEntry().getValue().getPartKeyInfo().getType();
+
+OmKeyInfo omKeyInfo =
+metadataManager.getOpenKeyTable().get(multipartKey);
+
+if (omKeyInfo == null) {
+  throw new IllegalStateException(
+  "Open key is missing for multipart upload " + multipartKey);
+}
+
+HddsProtos.ReplicationType replicationType = omKeyInfo.getType();
 
 Review comment:
   Yes, agreed. If we do it this way, we can save one DB read.





[jira] [Commented] (HADOOP-15760) Include Apache Commons Collections4

2019-08-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908594#comment-16908594
 ] 

Hadoop QA commented on HADOOP-15760:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
30m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
12s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-15760 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940024/HADOOP-15760.1.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  |
| uname | Linux ef683d634bc8 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5882cf9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16485/testReport/ |
| Max. process+thread count | 413 (vs. ulimit of 1) |
| modules | C: hadoop-project U: hadoop-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16485/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Include Apache Commons Collections4
> ---
>
> Key: HADOOP-15760
> URL: https://issues.apache.org/jira/browse/HADOOP-15760
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.10.0, 3.0.3
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HADOOP-15760.1.patch
>
>
> Please allow for use of Apache Commons Collections 4 

[jira] [Commented] (HADOOP-15784) Tracing in Hadoop failed because of Unknown protocol: org.apache.hadoop.tracing.TraceAdminPB.TraceAdminService

2019-08-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908618#comment-16908618
 ] 

Hadoop QA commented on HADOOP-15784:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
54s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 59s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 53s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 47s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}108m 38s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-15784 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12941147/HADOOP-15784-1.patch |
| Optional Tests | dupname  asflicense  compile  javac  javadoc  mvninstall  mvnsite  unit  shadedclient  findbugs  checkstyle |
| uname | Linux 5fda01d3eade 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5882cf9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16484/testReport/ |
| Max. process+thread count | 1391 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16484/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Tracing in Hadoop failed because of  

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1303: HDDS-1903 : Use dynamic ports for SCM in TestSCMClientProtocolServer …

2019-08-15 Thread GitBox
bharatviswa504 commented on a change in pull request #1303: HDDS-1903 : Use 
dynamic ports for SCM in TestSCMClientProtocolServer …
URL: https://github.com/apache/hadoop/pull/1303#discussion_r314549215
 
 

 ##
 File path: hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMSecurityProtocolServer.java
 ##
 @@ -32,10 +39,19 @@
 
   @Rule
   public Timeout timeout = new Timeout(1000 * 20);
+  private static int scmRpcSecurePort;
+
+  @BeforeClass
+  public static void setupClass() throws Exception {
+    scmRpcSecurePort = new ServerSocket(0).getLocalPort();
+  }
 
   @Before
   public void setUp() throws Exception {
     config = new OzoneConfiguration();
+    config.set(OZONE_SCM_SECURITY_SERVICE_ADDRESS_KEY,
+        StringUtils.join(OZONE_SCM_SECURITY_SERVICE_BIND_HOST_DEFAULT,
+            ":", String.valueOf(scmRpcSecurePort)));
 
 Review comment:
   Here we can directly specify OZONE_SCM_SECURITY_SERVICE_BIND_HOST_DEFAULT:0;
   is there any reason for doing it this way?
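For readers following the thread, the difference between the patch's approach (grab a free port with `ServerSocket(0)` up front in `@BeforeClass`) and the reviewer's suggestion (configure the bind address with port 0 and let the OS pick) can be sketched as below. This is a minimal, self-contained illustration; it does not touch the Ozone configuration keys, and the class and method names are made up for the example:

```java
import java.io.IOException;
import java.net.ServerSocket;

public class EphemeralPortSketch {

    // Patch's approach: open a throwaway socket on port 0, record the port
    // the OS handed out, close it, and configure the server with that port.
    // Note the small race window: another process could grab the port
    // between close() here and the server's own bind() later.
    static int preAllocateFreePort() throws IOException {
        try (ServerSocket probe = new ServerSocket(0)) {
            return probe.getLocalPort();
        }
    }

    // Reviewer's suggestion in spirit: bind to port 0 directly, so the OS
    // assigns a free port atomically; query it after binding. No race,
    // because the socket that asked for the port is the one that holds it.
    static int bindDirectlyToAnyPort() throws IOException {
        try (ServerSocket server = new ServerSocket(0)) {
            return server.getLocalPort();
        }
    }

    public static void main(String[] args) throws IOException {
        int p1 = preAllocateFreePort();
        int p2 = bindDirectlyToAnyPort();
        System.out.println(p1 > 0 && p1 <= 65535);  // true
        System.out.println(p2 > 0 && p2 <= 65535);  // true
    }
}
```

With the real SCM server, the "bind-host:0" form would avoid the pre-allocation race entirely, which is presumably what the reviewer is asking about.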


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] timmylicheng commented on a change in pull request #1286: HDDS-1894. Add filter to scmcli listPipelines.

2019-08-15 Thread GitBox
timmylicheng commented on a change in pull request #1286: HDDS-1894. Add filter 
to scmcli listPipelines.
URL: https://github.com/apache/hadoop/pull/1286#discussion_r314565982
 
 

 ##
 File path: hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java
 ##
 @@ -38,11 +38,34 @@
   @CommandLine.ParentCommand
   private SCMCLI parent;
 
+  @CommandLine.Option( names = {"-ffc", "--filterByFactor"},
+      description = "Filter listed pipelines by Factor(ONE/one)",
+      defaultValue = "", required = false)
+  private String factor;
+
+  @CommandLine.Option( names = {"-fst", "--filterByState"},
+      description = "Filter listed pipelines by State(OPEN/CLOSE)",
+      defaultValue = "", required = false)
+  private String state;
+
+
   @Override
   public Void call() throws Exception {
     try (ScmClient scmClient = parent.createScmClient()) {
-      scmClient.listPipelines().forEach(System.out::println);
+      if (isNullOrEmpty(factor) && isNullOrEmpty(state)) {
 
 Review comment:
   My method removes spaces :)
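The filtering behavior under discussion (an empty filter lists everything; non-empty values are matched after trimming spaces, per the "removes spaces" remark above) can be sketched like this. `Pipeline` here is a hypothetical stand-in for the real type, modeling only the two attributes the CLI filters on:

```java
import java.util.List;
import java.util.stream.Collectors;

public class PipelineFilterSketch {

    // Minimal stand-in for the real Pipeline type.
    static final class Pipeline {
        final String factor;
        final String state;
        Pipeline(String factor, String state) {
            this.factor = factor;
            this.state = state;
        }
    }

    // Trim before checking, so values like " ONE " still count,
    // matching the space-stripping behavior discussed in the review.
    static boolean isNullOrEmpty(String s) {
        return s == null || s.trim().isEmpty();
    }

    // Empty filter = no constraint; otherwise match case-insensitively
    // on the trimmed value, so "ONE" and "one" both work.
    static List<Pipeline> filter(List<Pipeline> pipelines,
                                 String factor, String state) {
        return pipelines.stream()
            .filter(p -> isNullOrEmpty(factor)
                || p.factor.equalsIgnoreCase(factor.trim()))
            .filter(p -> isNullOrEmpty(state)
                || p.state.equalsIgnoreCase(state.trim()))
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Pipeline> all = List.of(
            new Pipeline("ONE", "OPEN"),
            new Pipeline("THREE", "CLOSED"));
        System.out.println(filter(all, "", "").size());       // 2 (no filter)
        System.out.println(filter(all, " one ", "").size());  // 1
    }
}
```

This mirrors the branch structure in the patch: the `isNullOrEmpty` fast path lists everything, and each non-empty filter narrows the result independently.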





[GitHub] [hadoop] timmylicheng commented on a change in pull request #1286: HDDS-1894. Add filter to scmcli listPipelines.

2019-08-15 Thread GitBox
timmylicheng commented on a change in pull request #1286: HDDS-1894. Add filter 
to scmcli listPipelines.
URL: https://github.com/apache/hadoop/pull/1286#discussion_r314565875
 
 

 ##
 File path: hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java
 ##
 @@ -38,11 +38,34 @@
   @CommandLine.ParentCommand
   private SCMCLI parent;
 
+  @CommandLine.Option( names = {"-ffc", "--filterByFactor"},
+      description = "Filter listed pipelines by Factor(ONE/one)",
+      defaultValue = "", required = false)
+  private String factor;
+
+  @CommandLine.Option( names = {"-fst", "--filterByState"},
+      description = "Filter listed pipelines by State(OPEN/CLOSE)",
+      defaultValue = "", required = false)
+  private String state;
+
+
   @Override
   public Void call() throws Exception {
     try (ScmClient scmClient = parent.createScmClient()) {
-      scmClient.listPipelines().forEach(System.out::println);
+      if (isNullOrEmpty(factor) && isNullOrEmpty(state)) {
+        System.out.println("No filter. List all.");
 
 Review comment:
   Removed





[jira] [Commented] (HADOOP-8738) junit JAR is showing up in the distro

2019-08-15 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908698#comment-16908698
 ] 

Masatake Iwasaki commented on HADOOP-8738:
--

YARN seems not to have junit as a runtime dependency today, but I can still see 
the junit jar under hadoop-tools because hadoop-dynamometer-infra has a 
compile-scope dependency on junit.
{noformat}
$ tar ztvf hadoop-dist/target/hadoop-3.3.0-SNAPSHOT.tar.gz | grep junit
-rw-r--r-- iwasakims/docker 11366 2019-08-15 07:38 
hadoop-3.3.0-SNAPSHOT/licenses-binary/LICENSE-junit.txt
-rw-r--r-- iwasakims/docker 314932 2019-08-16 10:53 
hadoop-3.3.0-SNAPSHOT/share/hadoop/tools/lib/junit-4.12.jar
{noformat}
It might be harmless for now, because the hadoop-tools jars and their 
dependencies are not on the classpath unless HADOOP_OPTIONAL_TOOLS is set. In 
addition, dynamometer does not appear to use the scripts under 
libexec/shellprofile.d/.
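The fix the thread points toward is a scope change in the module's pom: a compile-scope junit dependency gets bundled into the distribution tarball, while a test-scope one does not. A hedged sketch of what the declaration would look like (the module and version are taken from the tar listing above, not verified against the source tree):

```xml
<!-- In the hadoop-dynamometer-infra pom: declaring junit with test scope
     keeps junit-4.12.jar out of share/hadoop/tools/lib/ in the distro. -->
<dependency>
  <groupId>junit</groupId>
  <artifactId>junit</artifactId>
  <version>4.12</version>
  <scope>test</scope>
</dependency>
```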

> junit JAR is showing up in the distro
> -
>
> Key: HADOOP-8738
> URL: https://issues.apache.org/jira/browse/HADOOP-8738
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.2-alpha, 3.0.0-alpha1
>Reporter: Alejandro Abdelnur
>Priority: Major
> Attachments: HADOOP-8738.patch
>
>
> It seems that with the move of the YARN module to trunk/ level, the test 
> scope on junit got lost. This makes the junit JAR show up in the TAR.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)




[jira] [Comment Edited] (HADOOP-16438) Introduce a config to control SSL Channel mode in Azure DataLake Store Gen1

2019-08-15 Thread Sneha Vijayarajan (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16906870#comment-16906870
 ] 

Sneha Vijayarajan edited comment on HADOOP-16438 at 8/16/19 5:01 AM:
-

PR raised : [{color:#0066cc}[https://github.com/apache/hadoop/pull/1290]{color}]

Results of Contract tests on Gen1 account on East US2 location.

[INFO] Results:
 [INFO]
 [ERROR] Failures:
 [ERROR]   
TestAdlFileSystemContractLive>FileSystemContractBaseTest.testMkdirsWithUmask:266
 expected:<461> but was:<456>
 [INFO]
 [ERROR] Tests run: 882, Failures: 1, Errors: 0, Skipped: 3

 

The failure in testcase testMkdirsWithUmask is not related to the changes made 
(the test was run without the changes and the same testcase still failed). 
It will be tracked in a separate JIRA.


was (Author: snvijaya):
PR raised : [{color:#0066cc}https://github.com/apache/hadoop/pull/1290{color}]

Results of Contract tests on Gen1 account on East US2 location.

[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR]   
TestAdlFileSystemContractLive>FileSystemContractBaseTest.testMkdirsWithUmask:266
 expected:<461> but was:<456>
[INFO]
[ERROR] Tests run: 882, Failures: 1, Errors: 0, Skipped: 3

 

The Failure in Testcase testMkdirsWithUmask is not related to the changes made. 
Will track it in a separate JIRA.

> Introduce a config to control SSL Channel mode in Azure DataLake Store Gen1
> ---
>
> Key: HADOOP-16438
> URL: https://issues.apache.org/jira/browse/HADOOP-16438
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs/adl
>Affects Versions: 2.9.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Minor
>
> Currently there is no user control over the SSL channel mode used for 
> server connections: the client tries SSLChannelMode.OpenSSL and falls back 
> to SSLChannelMode.Default_JSE when any issue occurs. 
> A new config is needed to toggle the choice if problems are observed with 
> OpenSSL. 
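A minimal sketch of the toggle this JIRA asks for, assuming a hypothetical config key and reusing the mode names from the description above (the real patch may name both differently):

```java
public class SslChannelModeToggle {

    // Mode names taken from the JIRA description; Default means
    // "keep current behavior: try OpenSSL, fall back to Default_JSE".
    enum SSLChannelMode { OpenSSL, Default_JSE, Default }

    // Hypothetical config key, for illustration only.
    static final String SSL_CHANNEL_MODE_KEY = "adl.ssl.channel.mode";

    // Map a configured string onto a channel mode, falling back to
    // Default when the value is absent or unrecognized rather than failing.
    static SSLChannelMode fromConfig(String configuredValue) {
        if (configuredValue == null || configuredValue.trim().isEmpty()) {
            return SSLChannelMode.Default;
        }
        try {
            return SSLChannelMode.valueOf(configuredValue.trim());
        } catch (IllegalArgumentException e) {
            return SSLChannelMode.Default;
        }
    }

    public static void main(String[] args) {
        System.out.println(fromConfig("OpenSSL"));      // OpenSSL
        System.out.println(fromConfig("Default_JSE"));  // Default_JSE
        System.out.println(fromConfig("bogus"));        // Default
    }
}
```

The fall-back-on-bad-input choice mirrors the existing behavior described in the issue: misconfiguration degrades gracefully instead of breaking connections.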


