[GitHub] hadoop-yetus commented on issue #538: HDFS-14318:dn cannot be recognized and must be restarted to recognize the Repaired disk

2019-02-28 Thread GitBox
hadoop-yetus commented on issue #538: HDFS-14318:dn cannot be recognized and 
must be restarted to recognize the Repaired disk
URL: https://github.com/apache/hadoop/pull/538#issuecomment-468568741
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------|
   | 0 | reexec | 25 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1056 | trunk passed |
   | +1 | compile | 60 | trunk passed |
   | +1 | checkstyle | 51 | trunk passed |
   | +1 | mvnsite | 73 | trunk passed |
   | +1 | shadedclient | 761 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 126 | trunk passed |
   | +1 | javadoc | 50 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 59 | the patch passed |
   | +1 | compile | 56 | the patch passed |
   | +1 | javac | 56 | the patch passed |
   | -0 | checkstyle | 49 | hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 8 new + 154 unchanged - 0 fixed = 162 total (was 154) |
   | +1 | mvnsite | 59 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 706 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | findbugs | 139 | hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | javadoc | 49 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 4735 | hadoop-hdfs in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 8128 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
   |  |  Possible doublecheck on 
org.apache.hadoop.hdfs.server.datanode.DataNode.checkDiskThread in 
org.apache.hadoop.hdfs.server.datanode.DataNode.startCheckDiskThread()  At 
DataNode.java:org.apache.hadoop.hdfs.server.datanode.DataNode.startCheckDiskThread()
  At DataNode.java:[lines 2161-2163] |
   | Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
   |   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
   |   | hadoop.hdfs.TestErasureCodingExerciseAPIs |
   |   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
   |   | hadoop.hdfs.TestUnsetAndChangeDirectoryEcPolicy |
   |   | hadoop.hdfs.server.datanode.TestDataNodeLifeline |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
   |   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.TestHdfsAdmin |
   |   | hadoop.hdfs.TestReconstructStripedFile |
   |   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.tools.TestJMXGet |
   |   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.web.TestWebHdfsTimeouts |
   |   | hadoop.hdfs.tools.TestDFSAdmin |
   |   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-538/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/538 |
   | JIRA Issue | HDFS-14318 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux e04acdee8463 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / eae3db9 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-538/1/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-538/1/artifact/out/new-findbugs-hadoop-hdfs-project_hadoop-hdfs.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-538/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-538/1/testReport/ |
   | Max. process+thread count | 4363 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 

[GitHub] xiaoyuyao commented on a change in pull request #526: HDDS-1183. Override getDelegationToken API for OzoneFileSystem. Contr…

2019-02-28 Thread GitBox
xiaoyuyao commented on a change in pull request #526: HDDS-1183. Override 
getDelegationToken API for OzoneFileSystem. Contr…
URL: https://github.com/apache/hadoop/pull/526#discussion_r261487247
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzoneFileSystem.java
 ##
 @@ -669,6 +676,12 @@ public Path getWorkingDirectory() {
 return workingDir;
   }
 
+  @Override
+  public Token getDelegationToken(String renewer) throws IOException {
 +    return securityEnabled? adapter.getDelegationToken(renewer) :
 +        super.getDelegationToken(renewer);
 
 Review comment:
   This will be added after HDDS-134.
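
   A hedged, self-contained sketch of the pattern in the excerpt above; `TokenDelegatingFs`, `TokenSource`, and the field names are stand-ins for illustration, not the real OzoneFileSystem/OzoneClientAdapter members:

```java
// Hedged stand-in for the excerpt above -- not the real OzoneFileSystem code.
import java.io.IOException;

import org.apache.hadoop.fs.RawLocalFileSystem;
import org.apache.hadoop.security.token.Token;

public class TokenDelegatingFs extends RawLocalFileSystem {

  /** Stand-in for the Ozone client adapter that can mint delegation tokens. */
  public interface TokenSource {
    Token<?> getDelegationToken(String renewer) throws IOException;
  }

  private final boolean securityEnabled;
  private final TokenSource adapter;

  public TokenDelegatingFs(boolean securityEnabled, TokenSource adapter) {
    this.securityEnabled = securityEnabled;
    this.adapter = adapter;
  }

  @Override
  public Token<?> getDelegationToken(String renewer) throws IOException {
    // Same shape as the diff: ask the adapter only when security is enabled,
    // otherwise defer to the default FileSystem behaviour (a null token).
    return securityEnabled ? adapter.getDelegationToken(renewer)
        : super.getDelegationToken(renewer);
  }
}
```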


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] bharatviswa504 merged pull request #528: HDDS-1182. Pipeline Rule where atleast one datanode is reported in the pipeline.

2019-02-28 Thread GitBox
bharatviswa504 merged pull request #528: HDDS-1182. Pipeline Rule where atleast 
one datanode is reported in the pipeline.
URL: https://github.com/apache/hadoop/pull/528
 
 
   





[GitHub] bharatviswa504 commented on issue #528: HDDS-1182. Pipeline Rule where atleast one datanode is reported in the pipeline.

2019-02-28 Thread GitBox
bharatviswa504 commented on issue #528: HDDS-1182. Pipeline Rule where atleast 
one datanode is reported in the pipeline.
URL: https://github.com/apache/hadoop/pull/528#issuecomment-468549636
 
 
   Thank you @arp7 for the review.
   The test failures are not related to the patch.
   I ran the tests locally and they passed.
   I will commit this shortly.
   





[jira] [Commented] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x

2019-02-28 Thread Yuming Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781307#comment-16781307
 ] 

Yuming Wang commented on HADOOP-16152:
--

It conflicts when running tests. I [set Jetty to 
9.3.24.v20180605|https://github.com/wangyum/spark/blob/5075a4231a5a46254ff393c30fba02f76cb4ddbf/pom.xml#L2844]
 to work around this issue.
{code:java}
$ git clone https://github.com/wangyum/spark.git
$ cd spark && git checkout DNR-HADOOP-16152
$ build/sbt  "yarn/testOnly"  -Phadoop-3.1 -Pyarn
{code}
{noformat}
[info] YarnShuffleAuthSuite:
[info] org.apache.spark.deploy.yarn.YarnShuffleAuthSuite *** ABORTED *** (146 
milliseconds)
[info]   org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
java.lang.NoSuchMethodError: 
org.eclipse.jetty.server.session.SessionHandler.getSessionManager()Lorg/eclipse/jetty/server/SessionManager;
[info]   at 
org.apache.hadoop.yarn.server.MiniYARNCluster.startResourceManager(MiniYARNCluster.java:373)
[info]   at 
org.apache.hadoop.yarn.server.MiniYARNCluster.access$300(MiniYARNCluster.java:128)
[info]   at 
org.apache.hadoop.yarn.server.MiniYARNCluster$ResourceManagerWrapper.serviceStart(MiniYARNCluster.java:503)
[info]   at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
[info]   at 
org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
[info]   at 
org.apache.hadoop.yarn.server.MiniYARNCluster.serviceStart(MiniYARNCluster.java:322)
[info]   at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
[info]   at 
org.apache.spark.deploy.yarn.BaseYarnClusterSuite.beforeAll(BaseYarnClusterSuite.scala:86)
[info]   at 
org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:212)
[info]   at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
[info]   at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
[info]   at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:53)
[info]   at 
org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:314)
[info]   at 
org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:507)
[info]   at sbt.ForkMain$Run$2.call(ForkMain.java:296)
[info]   at sbt.ForkMain$Run$2.call(ForkMain.java:286)
[info]   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info]   at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[info]   at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[info]   at java.lang.Thread.run(Thread.java:748)
[info]   Cause: java.lang.NoSuchMethodError: 
org.eclipse.jetty.server.session.SessionHandler.getSessionManager()Lorg/eclipse/jetty/server/SessionManager;
[info]   at 
org.apache.hadoop.http.HttpServer2.initializeWebServer(HttpServer2.java:577)
[info]   at org.apache.hadoop.http.HttpServer2.(HttpServer2.java:558)
[info]   at org.apache.hadoop.http.HttpServer2.(HttpServer2.java:119)
[info]   at 
org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:433)
[info]   at 
org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:341)
[info]   at 
org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:432)
[info]   at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:1226)
[info]   at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1335)
[info]   at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
[info]   at 
org.apache.hadoop.yarn.server.MiniYARNCluster.startResourceManager(MiniYARNCluster.java:365)
[info]   at 
org.apache.hadoop.yarn.server.MiniYARNCluster.access$300(MiniYARNCluster.java:128)
[info]   at 
org.apache.hadoop.yarn.server.MiniYARNCluster$ResourceManagerWrapper.serviceStart(MiniYARNCluster.java:503)
[info]   at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
[info]   at 
org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
[info]   at 
org.apache.hadoop.yarn.server.MiniYARNCluster.serviceStart(MiniYARNCluster.java:322)
[info]   at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
[info]   at 
org.apache.spark.deploy.yarn.BaseYarnClusterSuite.beforeAll(BaseYarnClusterSuite.scala:86)
[info]   at 
org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:212)
[info]   at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
[info]   at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
[info]   at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:53)
[info]   at 
org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:314)
[info]   at 
org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:507)
[info]   at sbt.ForkMain$Run$2.call(ForkMain.java:296)
[info]   at sbt.ForkMain$Run$2.call(ForkMain.java:286)

[GitHub] hadoop-yetus commented on issue #537: HDDS-1136 : Add metric counters to capture the RocksDB checkpointing statistics.

2019-02-28 Thread GitBox
hadoop-yetus commented on issue #537: HDDS-1136 : Add metric counters to 
capture the RocksDB checkpointing statistics.
URL: https://github.com/apache/hadoop/pull/537#issuecomment-468548037
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------|
   | 0 | reexec | 25 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for branch |
   | +1 | mvninstall | 974 | trunk passed |
   | +1 | compile | 991 | trunk passed |
   | +1 | checkstyle | 199 | trunk passed |
   | +1 | mvnsite | 131 | trunk passed |
   | +1 | shadedclient | 1024 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 124 | trunk passed |
   | +1 | javadoc | 92 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | -1 | mvninstall | 23 | integration-test in the patch failed. |
   | +1 | compile | 991 | the patch passed |
   | +1 | javac | 991 | the patch passed |
   | +1 | checkstyle | 183 | the patch passed |
   | +1 | mvnsite | 110 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 667 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 141 | the patch passed |
   | -1 | javadoc | 23 | common in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 81 | common in the patch failed. |
   | -1 | unit | 1012 | integration-test in the patch failed. |
   | +1 | unit | 47 | ozone-manager in the patch passed. |
   | +1 | asflicense | 36 | The patch does not generate ASF License warnings. |
   | | | 6949 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
   |   | hadoop.ozone.scm.pipeline.TestPipelineManagerMXBean |
   |   | hadoop.ozone.ozShell.TestOzoneShell |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.scm.TestSCMNodeManagerMXBean |
   |   | hadoop.ozone.scm.node.TestQueryNode |
   |   | hadoop.ozone.scm.TestXceiverClientMetrics |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-537/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/537 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 7cfa2d420110 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / eae3db9 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-537/1/artifact/out/patch-mvninstall-hadoop-ozone_integration-test.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-537/1/artifact/out/patch-javadoc-hadoop-hdds_common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-537/1/artifact/out/patch-unit-hadoop-hdds_common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-537/1/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-537/1/testReport/ |
   | Max. process+thread count | 3239 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/integration-test 
hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-537/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16119) KMS on Hadoop RPC Engine

2019-02-28 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781284#comment-16781284
 ] 

He Xiaoqiao commented on HADOOP-16119:
--

Thanks [~jojochuang], this is interesting work. I have been running KMS to 
support massive column encryption for a long time, so the KMS performance 
improvement is very appealing to me, and I would like to join and contribute to 
this work.

> KMS on Hadoop RPC Engine
> 
>
> Key: HADOOP-16119
> URL: https://issues.apache.org/jira/browse/HADOOP-16119
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Jonathan Eagles
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: Design doc_ KMS v2.pdf
>
>
> Per discussion on common-dev and text copied here for ease of reference.
> https://lists.apache.org/thread.html/0e2eeaf07b013f17fad6d362393f53d52041828feec53dcddff04808@%3Ccommon-dev.hadoop.apache.org%3E
> {noformat}
> Thanks all for the inputs,
> To offer additional information (while Daryn is working on his stuff),
> optimizing RPC encryption opens up another possibility: migrating KMS
> service to use Hadoop RPC.
> Today's KMS uses HTTPS + REST API, much like webhdfs. It has very
> undesirable performance (a few thousand ops per second) compared to
> NameNode. Unfortunately for each NameNode namespace operation you also need
> to access KMS too.
> Migrating KMS to Hadoop RPC greatly improves its performance (if
> implemented correctly), and RPC encryption would be a prerequisite. So
> please keep that in mind when discussing the Hadoop RPC encryption
> improvements. Cloudera is very interested to help with the Hadoop RPC
> encryption project because a lot of our customers are using at-rest
> encryption, and some of them are starting to hit KMS performance limit.
> This whole "migrating KMS to Hadoop RPC" was Daryn's idea. I heard this
> idea in the meetup and I am very thrilled to see this happening because it
> is a real issue bothering some of our customers, and I suspect it is the
> right solution to address this tech debt.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hunshenshi opened a new pull request #538: HDFS-14318:dn cannot be recognized and must be restarted to recognize the Repaired disk

2019-02-28 Thread GitBox
hunshenshi opened a new pull request #538: HDFS-14318:dn cannot be recognized 
and must be restarted to recognize the Repaired disk
URL: https://github.com/apache/hadoop/pull/538
 
 
   The DataNode detects that disk a has failed. After disk a is repaired, the 
DataNode cannot recognize it and must be restarted to pick up the repaired disk.
   

   
   I made a patch so that the DataNode recognizes the repaired disk without 
restarting; a minimal sketch of the thread-start pattern involved is shown below.
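
   A hypothetical sketch only, not the actual patch; the method and field names merely echo the FindBugs entry in the Yetus report earlier in this digest. Double-checked lazy starting of the re-check thread is only safe when the guard field is volatile and the second null check happens inside the synchronized block:

```java
// Hypothetical sketch only -- not the HDFS-14318 patch itself.
public class DiskRecheckService {

  // volatile is required for double-checked lazy initialization to be safe
  private volatile Thread checkDiskThread;

  /** Start the periodic disk re-check thread at most once. */
  public void startCheckDiskThread() {
    if (checkDiskThread == null) {          // first, unsynchronized check
      synchronized (this) {
        if (checkDiskThread == null) {      // second check under the lock
          checkDiskThread = new Thread(this::recheckLoop, "DiskRecheck");
          checkDiskThread.setDaemon(true);
          checkDiskThread.start();
        }
      }
    }
  }

  private void recheckLoop() {
    while (!Thread.currentThread().isInterrupted()) {
      // Hypothetical hook: re-test previously failed volumes and add them
      // back once they are healthy again.
      recheckFailedVolumes();
      try {
        Thread.sleep(60_000L);              // re-check once a minute
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    }
  }

  private void recheckFailedVolumes() {
    // placeholder for the real volume re-scan logic
  }
}
```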





[GitHub] avijayanhwx opened a new pull request #537: HDDS-1136 : Add metric counters to capture the RocksDB checkpointing statistics.

2019-02-28 Thread GitBox
avijayanhwx opened a new pull request #537: HDDS-1136 : Add metric counters to 
capture the RocksDB checkpointing statistics.
URL: https://github.com/apache/hadoop/pull/537
 
 
   Added metric gauges for tracking DB checkpointing statistics. The OMMetrics 
class holds these gauges at any instant, and they can be pulled from the OM by 
Recon (a hedged sketch of such a gauge class is shown below).
   
   **Testing done**
   Added an integration test for the servlet method that fetches the OM DB checkpoint.
   Manually verified the patch on a single-node Ozone cluster.
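
   A hedged metrics2 sketch; the class name and gauge names are illustrative assumptions, not the actual OMMetrics fields:

```java
// Hedged sketch only -- names and gauges are assumptions, not the real OMMetrics.
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MetricsRegistry;
import org.apache.hadoop.metrics2.lib.MutableGaugeLong;

@Metrics(about = "OM DB checkpoint metrics", context = "dfs")
public class CheckpointMetrics {

  private final MetricsRegistry registry = new MetricsRegistry("CheckpointMetrics");

  // Gauges hold the values observed for the most recent checkpoint operation.
  private final MutableGaugeLong lastCheckpointCreationTimeTaken =
      registry.newGauge("LastCheckpointCreationTimeTaken",
          "Time taken to create the last RocksDB checkpoint (ms)", 0L);
  private final MutableGaugeLong lastCheckpointTarballSize =
      registry.newGauge("LastCheckpointTarballSize",
          "Size of the last checkpoint tarball streamed out (bytes)", 0L);

  /** Record the stats of the checkpoint that was just served. */
  public void setLastCheckpoint(long creationTimeMs, long tarballSizeBytes) {
    lastCheckpointCreationTimeTaken.set(creationTimeMs);
    lastCheckpointTarballSize.set(tarballSizeBytes);
  }
}
```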





[jira] [Commented] (HADOOP-16119) KMS on Hadoop RPC Engine

2019-02-28 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781242#comment-16781242
 ] 

Wei-Chiu Chuang commented on HADOOP-16119:
--

Thanks [~fabbri]! I was at a conference this week. Will go ahead with the 
implementation.

> KMS on Hadoop RPC Engine
> 
>
> Key: HADOOP-16119
> URL: https://issues.apache.org/jira/browse/HADOOP-16119
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Jonathan Eagles
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: Design doc_ KMS v2.pdf
>
>
> Per discussion on common-dev and text copied here for ease of reference.
> https://lists.apache.org/thread.html/0e2eeaf07b013f17fad6d362393f53d52041828feec53dcddff04808@%3Ccommon-dev.hadoop.apache.org%3E
> {noformat}
> Thanks all for the inputs,
> To offer additional information (while Daryn is working on his stuff),
> optimizing RPC encryption opens up another possibility: migrating KMS
> service to use Hadoop RPC.
> Today's KMS uses HTTPS + REST API, much like webhdfs. It has very
> undesirable performance (a few thousand ops per second) compared to
> NameNode. Unfortunately for each NameNode namespace operation you also need
> to access KMS too.
> Migrating KMS to Hadoop RPC greatly improves its performance (if
> implemented correctly), and RPC encryption would be a prerequisite. So
> please keep that in mind when discussing the Hadoop RPC encryption
> improvements. Cloudera is very interested to help with the Hadoop RPC
> encryption project because a lot of our customers are using at-rest
> encryption, and some of them are starting to hit KMS performance limit.
> This whole "migrating KMS to Hadoop RPC" was Daryn's idea. I heard this
> idea in the meetup and I am very thrilled to see this happening because it
> is a real issue bothering some of our customers, and I suspect it is the
> right solution to address this tech debt.
> {noformat}






[GitHub] avijayanhwx commented on issue #536: HDDS-1136 : Add metric counters to capture the RocksDB checkpointing statistics.

2019-02-28 Thread GitBox
avijayanhwx commented on issue #536: HDDS-1136 : Add metric counters to capture 
the RocksDB checkpointing statistics.
URL: https://github.com/apache/hadoop/pull/536#issuecomment-468528391
 
 
   Closing this and opening a new one with a single commit. 





[GitHub] avijayanhwx closed pull request #536: HDDS-1136 : Add metric counters to capture the RocksDB checkpointing statistics.

2019-02-28 Thread GitBox
avijayanhwx closed pull request #536: HDDS-1136 : Add metric counters to 
capture the RocksDB checkpointing statistics.
URL: https://github.com/apache/hadoop/pull/536
 
 
   





[GitHub] hadoop-yetus commented on issue #528: HDDS-1182. Pipeline Rule where atleast one datanode is reported in the pipeline.

2019-02-28 Thread GitBox
hadoop-yetus commented on issue #528: HDDS-1182. Pipeline Rule where atleast 
one datanode is reported in the pipeline.
URL: https://github.com/apache/hadoop/pull/528#issuecomment-468526872
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for branch |
   | +1 | mvninstall | 989 | trunk passed |
   | +1 | compile | 76 | trunk passed |
   | +1 | checkstyle | 31 | trunk passed |
   | +1 | mvnsite | 73 | trunk passed |
   | +1 | shadedclient | 735 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 100 | trunk passed |
   | +1 | javadoc | 52 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 9 | Maven dependency ordering for patch |
   | +1 | mvninstall | 65 | the patch passed |
   | +1 | compile | 65 | the patch passed |
   | +1 | javac | 65 | the patch passed |
   | +1 | checkstyle | 21 | the patch passed |
   | +1 | mvnsite | 58 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 663 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 113 | the patch passed |
   | +1 | javadoc | 48 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 68 | common in the patch failed. |
   | +1 | unit | 123 | server-scm in the patch passed. |
   | +1 | asflicense | 23 | The patch does not generate ASF License warnings. |
   | | | 3401 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-528/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/528 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
   | uname | Linux de32c33a8a2a 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / eae3db9 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-528/3/artifact/out/patch-unit-hadoop-hdds_common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-528/3/testReport/ |
   | Max. process+thread count | 537 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/server-scm U: hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-528/3/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x

2019-02-28 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781233#comment-16781233
 ] 

Wei-Chiu Chuang commented on HADOOP-16152:
--

Hi [~yumwang], thanks for reporting the issue.

We isolated the classpath in Hadoop 3, so downstream applications shouldn't have 
to worry about Jetty version conflicts. Or are you aware of any API 
incompatibilities between Jetty 9.3 (which the latest Hadoop depends on) and 9.4?

> Upgrade Eclipse Jetty version to 9.4.x
> --
>
> Key: HADOOP-16152
> URL: https://issues.apache.org/jira/browse/HADOOP-16152
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Yuming Wang
>Priority: Major
>
> Some big data projects have been upgraded Jetty to 9.4.x, which causes some 
> compatibility issues.
> Spark: 
> https://github.com/apache/spark/blob/5a92b5a47cdfaea96a9aeedaf80969d825a382f2/pom.xml#L141
> Calcite: https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87






[GitHub] hadoop-yetus commented on issue #527: HDDS-1093. Configuration tab in OM/SCM ui is not displaying the correct values

2019-02-28 Thread GitBox
hadoop-yetus commented on issue #527: HDDS-1093. Configuration tab in OM/SCM ui 
is not displaying the correct values
URL: https://github.com/apache/hadoop/pull/527#issuecomment-468522015
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1075 | trunk passed |
   | +1 | compile | 75 | trunk passed |
   | +1 | checkstyle | 30 | trunk passed |
   | +1 | mvnsite | 71 | trunk passed |
   | +1 | shadedclient | 786 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 100 | trunk passed |
   | +1 | javadoc | 60 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 11 | Maven dependency ordering for patch |
   | +1 | mvninstall | 66 | the patch passed |
   | -1 | jshint | 76 | The patch generated 294 new + 1942 unchanged - 1053 
fixed = 2236 total (was 2995) |
   | +1 | compile | 65 | the patch passed |
   | +1 | javac | 65 | the patch passed |
   | -0 | checkstyle | 24 | hadoop-hdds: The patch generated 5 new + 0 
unchanged - 0 fixed = 5 total (was 0) |
   | +1 | mvnsite | 54 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 772 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 115 | the patch passed |
   | +1 | javadoc | 56 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 93 | common in the patch failed. |
   | +1 | unit | 33 | framework in the patch passed. |
   | +1 | asflicense | 27 | The patch does not generate ASF License warnings. |
   | | | 3713 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/527 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  jshint  |
   | uname | Linux c15708c89542 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / eae3db9 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | jshint | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/2/artifact/out/diff-patch-jshint.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/2/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/2/artifact/out/patch-unit-hadoop-hdds_common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/2/testReport/ |
   | Max. process+thread count | 306 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/framework U: hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] avijayanhwx commented on issue #536: HDDS-1136 : Add metric counters to capture the RocksDB checkpointing statistics.

2019-02-28 Thread GitBox
avijayanhwx commented on issue #536: HDDS-1136 : Add metric counters to capture 
the RocksDB checkpointing statistics.
URL: https://github.com/apache/hadoop/pull/536#issuecomment-468518541
 
 
   I am not sure why Jenkins is not able to apply my patch to trunk. I pulled 
from Apache Hadoop trunk and merged it into my branch. I also verified that 
https://github.com/apache/hadoop/pull/536.patch applies cleanly to Apache 
Hadoop trunk. @elek Is it possible Jenkins is trying to apply individual 
commits rather than the squashed patch? 





[jira] [Updated] (HADOOP-16052) Remove Subversion and Forrest from Dockerfile

2019-02-28 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16052:
---
Summary: Remove Subversion and Forrest from Dockerfile  (was: Remove 
Forrest from Dockerfile)

> Remove Subversion and Forrest from Dockerfile
> -
>
> Key: HADOOP-16052
> URL: https://issues.apache.org/jira/browse/HADOOP-16052
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Priority: Minor
>  Labels: newbie
>
> After HADOOP-14163, Apache Hadoop website is generated by hugo. Forrest can 
> be removed.






[GitHub] hadoop-yetus commented on issue #536: HDDS-1136 : Add metric counters to capture the RocksDB checkpointing statistics.

2019-02-28 Thread GitBox
hadoop-yetus commented on issue #536: HDDS-1136 : Add metric counters to 
capture the RocksDB checkpointing statistics.
URL: https://github.com/apache/hadoop/pull/536#issuecomment-468516380
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 7 | https://github.com/apache/hadoop/pull/536 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/536 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-536/3/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] hadoop-yetus commented on issue #536: HDDS-1136 : Add metric counters to capture the RocksDB checkpointing statistics.

2019-02-28 Thread GitBox
hadoop-yetus commented on issue #536: HDDS-1136 : Add metric counters to 
capture the RocksDB checkpointing statistics.
URL: https://github.com/apache/hadoop/pull/536#issuecomment-468515363
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 6 | https://github.com/apache/hadoop/pull/536 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/536 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-536/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] hadoop-yetus commented on issue #536: HDDS-1136 : Add metric counters to capture the RocksDB checkpointing statistics.

2019-02-28 Thread GitBox
hadoop-yetus commented on issue #536: HDDS-1136 : Add metric counters to 
capture the RocksDB checkpointing statistics.
URL: https://github.com/apache/hadoop/pull/536#issuecomment-468514322
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------|
   | 0 | reexec | 23 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 68 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1104 | trunk passed |
   | +1 | compile | 945 | trunk passed |
   | +1 | checkstyle | 271 | trunk passed |
   | +1 | mvnsite | 154 | trunk passed |
   | +1 | shadedclient | 1198 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 127 | trunk passed |
   | +1 | javadoc | 97 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 20 | Maven dependency ordering for patch |
   | -1 | mvninstall | 23 | integration-test in the patch failed. |
   | +1 | compile | 920 | the patch passed |
   | +1 | javac | 920 | the patch passed |
   | +1 | checkstyle | 207 | the patch passed |
   | -1 | mvnsite | 33 | integration-test in the patch failed. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 716 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 135 | the patch passed |
   | +1 | javadoc | 97 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 83 | common in the patch failed. |
   | -1 | unit | 35 | integration-test in the patch failed. |
   | +1 | unit | 45 | ozone-manager in the patch passed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 6326 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-536/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/536 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 773ed8ce5243 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0d61fac |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-536/1/artifact/out/patch-mvninstall-hadoop-ozone_integration-test.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-536/1/artifact/out/patch-mvnsite-hadoop-ozone_integration-test.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-536/1/artifact/out/patch-unit-hadoop-hdds_common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-536/1/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-536/1/testReport/ |
   | Max. process+thread count | 305 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/integration-test 
hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-536/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16137) hadoop version - fairscheduler-statedump.log (No such file or directory)

2019-02-28 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781206#comment-16781206
 ] 

Akira Ajisaka commented on HADOOP-16137:


This issue is a duplicate of YARN-9308.

Workaround: Remove the following settings from log4j.properties.
{noformat}
# Fair scheduler requests log on state dump
log4j.logger.org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.statedump=DEBUG,FSLOGGER
log4j.additivity.org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.statedump=false
log4j.appender.FSLOGGER=org.apache.log4j.RollingFileAppender
log4j.appender.FSLOGGER.File=${hadoop.log.dir}/fairscheduler-statedump.log
log4j.appender.FSLOGGER.layout=org.apache.log4j.PatternLayout
log4j.appender.FSLOGGER.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.appender.FSLOGGER.MaxFileSize=${hadoop.log.maxfilesize}
log4j.appender.FSLOGGER.MaxBackupIndex=${hadoop.log.maxbackupindex}
{noformat}

> hadoop version - fairscheduler-statedump.log (No such file or directory)
> 
>
> Key: HADOOP-16137
> URL: https://issues.apache.org/jira/browse/HADOOP-16137
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Massoud Maboudi
>Priority: Major
>
> I tried to install hadoop-3.2.0 on linux mint. Everything is going fine. Also 
> java 11.0.2 is installed like this:
> {code:java}
> $ java -version java version "11.0.2" 2018-10-16 LTS Java(TM) SE Runtime 
> Environment 18.9 (build 11.0.2+7-LTS) Java HotSpot(TM) 64-Bit Server VM 18.9 
> (build 11.0.2+7-LTS, mixed mode){code}
> when I use this command {{hadoop version}}, I get this error:
> {code:java}
> $ hadoop version log4j:ERROR setFile(null,true) call failed. 
> java.io.FileNotFoundException: 
> /usr/local/hadoop-3.2.0/logs/fairscheduler-statedump.log (No such file or 
> directory) at java.base/java.io.FileOutputStream.open0(Native Method) at 
> java.base/java.io.FileOutputStream.open(FileOutputStream.java:298) at 
> java.base/java.io.FileOutputStream.(FileOutputStream.java:237) at 
> java.base/java.io.FileOutputStream.(FileOutputStream.java:158) at 
> org.apache.log4j.FileAppender.setFile(FileAppender.java:294) at 
> org.apache.log4j.RollingFileAppender.setFile(RollingFileAppender.java:207) at 
> org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165) at 
> org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307) at 
> org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172) 
> at 
> org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104) 
> at 
> org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:842)
>  at 
> org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
>  at 
> org.apache.log4j.PropertyConfigurator.parseCatsAndRenderers(PropertyConfigurator.java:672)
>  at 
> org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:516)
>  at 
> org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
>  at 
> org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
>  at org.apache.log4j.LogManager.(LogManager.java:127) at 
> org.slf4j.impl.Log4jLoggerFactory.(Log4jLoggerFactory.java:66) at 
> org.slf4j.impl.StaticLoggerBinder.(StaticLoggerBinder.java:72) at 
> org.slf4j.impl.StaticLoggerBinder.(StaticLoggerBinder.java:45) at 
> org.slf4j.LoggerFactory.bind(LoggerFactory.java:150) at 
> org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124) at 
> org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:412) at 
> org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357) at 
> org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383) at 
> org.apache.hadoop.util.VersionInfo.(VersionInfo.java:37) Hadoop 3.2.0 
> Source code repository https://github.com/apache/hadoop.git -r 
> e97acb3bd8f3befd27418996fa5d4b50bf2e17bf Compiled by sunilg on 
> 2019-01-08T06:08Z Compiled with protoc 2.5.0 From source with checksum 
> d3f0795ed0d9dc378e2c785d3668f39 This command was run using 
> /usr/local/hadoop-3.2.0/share/hadoop/common/hadoop-common-3.2.0.jar{code}
> It seems hadoop is properly installed but something is wrong with {{log4j}}. 
> May I ask you to help me to solve this error?






[GitHub] aw-was-here commented on issue #535: HADOOP-16109. Parquet reading S3AFileSystem causes EOF

2019-02-28 Thread GitBox
aw-was-here commented on issue #535: HADOOP-16109. Parquet reading 
S3AFileSystem causes EOF
URL: https://github.com/apache/hadoop/pull/535#issuecomment-468512504
 
 
   > hey, @aw-was-here , yetus is bouncing all my PRs
   
   "#535 does not apply to s3/HADOOP-16109-parquet-eof-s3a-seek"
   
   Yetus knows when your parent repo is a branch. 





[jira] [Updated] (HADOOP-15984) Update jersey from 1.19 to 2.x

2019-02-28 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15984:
---
Priority: Critical  (was: Major)

> Update jersey from 1.19 to 2.x
> --
>
> Key: HADOOP-15984
> URL: https://issues.apache.org/jira/browse/HADOOP-15984
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Priority: Critical
>
> jersey-json 1.19 depends on Jackson 1.9.2. Let's upgrade.






[jira] [Commented] (HADOOP-15984) Update jersey from 1.19 to 2.x

2019-02-28 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781180#comment-16781180
 ] 

Akira Ajisaka commented on HADOOP-15984:


I'm trying this and found that this issue is very difficult. Jersey 1.x and 2.x 
cannot co-exist, so we need to rewrite all the Jersey-related code at once. 
https://stackoverflow.com/questions/45187624/java-lang-nosuchmethoderror-javax-ws-rs-core-application-getpropertiesljava-u

> Update jersey from 1.19 to 2.x
> --
>
> Key: HADOOP-15984
> URL: https://issues.apache.org/jira/browse/HADOOP-15984
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Priority: Major
>
> jersey-json 1.19 depends on Jackson 1.9.2. Let's upgrade.






[jira] [Commented] (HADOOP-16131) Support reencrypt in KMS Benchmark

2019-02-28 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781176#comment-16781176
 ] 

Hadoop QA commented on HADOOP-16131:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
1m 47s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
9s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16131 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960688/HADOOP-16131.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b96100d9e54c 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0d61fac |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16003/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-kms U: 
hadoop-common-project/hadoop-kms |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16003/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Support reencrypt  in KMS Benchmark
> ---
>
> Key: HADOOP-16131
> URL: https://issues.apache.org/jira/browse/HADOOP-16131

[jira] [Comment Edited] (HADOOP-16109) Parquet reading S3AFileSystem causes EOF

2019-02-28 Thread Matt Foley (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781137#comment-16781137
 ] 

Matt Foley edited comment on HADOOP-16109 at 3/1/19 12:49 AM:
--

Yes, I'm thinking at 
[https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L264]
 we need

{{&& diff < forwardSeekLimit;}} instead of {{<=}}

What do you think?

The big question I have is, some of the text description talks about "reading 
past the already active readahead range", i.e., past 
{{remainingInCurrentRequest}}, as being a problem, but it seems to me that 
should be okay; the problem documented so far is *seeking* past 
{{remainingInCurrentRequest}} (specifically to exactly the end of 
CurrentRequest, which is incorrectly guarded by the above L264 inequality) and 
then not doing a stream close, which causes the problem.  Do you know if, say, 
seeking to a few bytes before the end of CurrentRequest, then reading past it 
(when the S3 file does indeed have more to read), also causes an EOF, or does 
the stream machinery handle that case correctly?

I'm putting together a test platform so I can answer such questions myself, but 
it will take me a few hours; I haven't worked in s3a before.
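
As a compressed illustration of the boundary condition under discussion (a sketch only, using the names from this comment, not the actual code at S3AInputStream.java#L264):

{code:java}
/**
 * Sketch only -- not the actual S3AInputStream code. diff is the forward
 * distance of a seek; forwardSeekLimit and remainingInCurrentRequest are the
 * values referred to above. With the current "diff <= forwardSeekLimit" guard
 * a seek landing exactly on the boundary is still treated as a cheap in-stream
 * skip; the strict "<" proposed above would force a close/reopen instead.
 */
static boolean cheapForwardSkip(long diff, long forwardSeekLimit,
    long remainingInCurrentRequest) {
  return diff > 0
      && diff < forwardSeekLimit               // proposed: strict "<" rather than "<="
      && diff <= remainingInCurrentRequest;    // stay inside the already-open request
}
{code}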

 


was (Author: mattf):
Yes, I'm thinking at 
[https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L264]
 we need

{{ && diff < forwardSeekLimit; }} instead of {{ <= }}

What do you think?

The big question I have is, some of the text description talks about "reading 
past the already active readahead range", i.e., past 
{{remainingInCurrentRequest}}, as being a problem, but it seems to me that 
should be okay; the problem documented so far is *seeking* past 
{{remainingInCurrentRequest}} (specifically to exactly the end of 
CurrentRequest, which is incorrectly guarded by the above L264 inequality) and 
then not doing a stream close, which causes the problem.  Do you know if, say, 
seeking to a few bytes before the end of CurrentRequest, then reading past it 
(when the S3 file does indeed have more to read), also causes an EOF, or does 
the stream machinery handle that case correctly?

I'm putting together a test platform so I can answer such questions myself, but 
it will take me a few hours; I haven't worked in s3a before.

 

> Parquet reading S3AFileSystem causes EOF
> 
>
> Key: HADOOP-16109
> URL: https://issues.apache.org/jira/browse/HADOOP-16109
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.2, 2.8.5, 3.3.0, 3.1.2
>Reporter: Dave Christianson
>Assignee: Steve Loughran
>Priority: Blocker
>
> When using S3AFileSystem to read Parquet files a specific set of 
> circumstances causes an  EOFException that is not thrown when reading the 
> same file from local disk
> Note this has only been observed under specific circumstances:
>   - when the reader is doing a projection (will cause it to do a seek 
> backwards and put the filesystem into random mode)
>  - when the file is larger than the readahead buffer size
>  - when the seek behavior of the Parquet reader causes the reader to seek 
> towards the end of the current input stream without reopening, such that the 
> next read on the currently open stream will read past the end of the 
> currently open stream.
> Exception from Parquet reader is as follows:
> {code}
> Caused by: java.io.EOFException: Reached the end of stream with 51 bytes left 
> to read
>  at 
> org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:104)
>  at 
> org.apache.parquet.io.DelegatingSeekableInputStream.readFullyHeapBuffer(DelegatingSeekableInputStream.java:127)
>  at 
> org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:91)
>  at 
> org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:1174)
>  at 
> org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:805)
>  at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:127)
>  at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:222)
>  at 
> org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:207)
>  at 
> org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormatBase.fetchNext(HadoopInputFormatBase.java:206)
>  at 
> org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormatBase.reachedEnd(HadoopInputFormatBase.java:199)
>  at 
> org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:190)
>  at 

[GitHub] vivekratnavel commented on a change in pull request #527: HDDS-1093. Configuration tab in OM/SCM ui is not displaying the correct values

2019-02-28 Thread GitBox
vivekratnavel commented on a change in pull request #527: HDDS-1093. 
Configuration tab in OM/SCM ui is not displaying the correct values
URL: https://github.com/apache/hadoop/pull/527#discussion_r261449198
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/OzoneConfiguration.java
 ##
 @@ -161,4 +163,31 @@ public static void activate() {
 Configuration.addDefaultResource("ozone-default.xml");
 Configuration.addDefaultResource("ozone-site.xml");
   }
+
+  /**
+   * The super class method getAllPropertiesByTag
 
 Review comment:
   Yes @elek 





[jira] [Commented] (HADOOP-16150) checksumFS doesn't wrap concat(): concatenated files don't have checksums

2019-02-28 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781144#comment-16781144
 ] 

Eric Yang commented on HADOOP-16150:


+1

> checksumFS doesn't wrap concat(): concatenated files don't have checksums
> -
>
> Key: HADOOP-16150
> URL: https://issues.apache.org/jira/browse/HADOOP-16150
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Followon from HADOOP-16107. FilterFS passes through the concat operation, and 
> checksum FS doesn't override that call -so files created through concat *do 
> not have checksums*.
> If people are using a checksummed fs directly with the expectations that they 
> will, that expectation is not being met. 
> What to do?
> * fail always?
> * fail if checksums are enabled?
> * try and implement the concat operation from raw local up at the checksum 
> level
> append() just gives up always; doing the same for concat would be the 
> simplest. Again, brings us back to "need a way to see if an FS supports a 
> feature before invocation", here checksum fs would reject append and concat
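
As a rough sketch of the "fail always" option above (illustrative only, not a committed fix), a ChecksumFileSystem-style wrapper could simply reject the call:

{code:java}
// Sketch of the "fail always" option; hypothetical, not the actual fix.
@Override
public void concat(final Path trg, final Path[] psrcs) throws IOException {
  throw new UnsupportedOperationException(
      "concat is not supported by " + getClass().getSimpleName()
          + " because the concatenated file would have no checksum");
}
{code}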






[GitHub] bharatviswa504 merged pull request #524: HDDS-1187. Healthy pipeline Chill Mode rule to consider only pipelines with replication factor three.

2019-02-28 Thread GitBox
bharatviswa504 merged pull request #524: HDDS-1187.  Healthy pipeline Chill 
Mode rule to consider only pipelines with replication factor three.
URL: https://github.com/apache/hadoop/pull/524
 
 
   





[GitHub] bharatviswa504 commented on issue #524: HDDS-1187. Healthy pipeline Chill Mode rule to consider only pipelines with replication factor three.

2019-02-28 Thread GitBox
bharatviswa504 commented on issue #524: HDDS-1187.  Healthy pipeline Chill Mode 
rule to consider only pipelines with replication factor three.
URL: https://github.com/apache/hadoop/pull/524#issuecomment-468501068
 
 
   Thank you @arp7 for the review.
   I will commit this shortly.





[GitHub] anuengineer commented on issue #529: HDDS-1191. Replace Ozone Rest client with S3 client in smoketests and docs

2019-02-28 Thread GitBox
anuengineer commented on issue #529: HDDS-1191. Replace Ozone Rest client with 
S3 client in smoketests and docs
URL: https://github.com/apache/hadoop/pull/529#issuecomment-468498681
 
 
   :+1:  I will commit this soon.





[jira] [Comment Edited] (HADOOP-16109) Parquet reading S3AFileSystem causes EOF

2019-02-28 Thread Matt Foley (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781137#comment-16781137
 ] 

Matt Foley edited comment on HADOOP-16109 at 3/1/19 12:50 AM:
--

Yes, I'm thinking at 
[https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L264]
 we need

{{&& diff < forwardSeekLimit;  // instead of <=}}

What do you think?

The big question I have is, some of the text description talks about "reading 
past the already active readahead range", i.e., past 
{{remainingInCurrentRequest}}, as being a problem, but it seems to me that 
should be okay; the problem documented so far is *seeking* past 
{{remainingInCurrentRequest}} (specifically to exactly the end of 
CurrentRequest, which is incorrectly guarded by the above L264 inequality) and 
then not doing a stream close, which causes the problem.  Do you know if, say, 
seeking to a few bytes before the end of CurrentRequest, then reading past it 
(when the S3 file does indeed have more to read), also causes an EOF, or does 
the stream machinery handle that case correctly?

I'm putting together a test platform so I can answer such questions myself, but 
it will take me a few hours; I haven't worked in s3a before.

 


was (Author: mattf):
Yes, I'm thinking at 
[https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L264]
 we need

{{&& diff < forwardSeekLimit;}} instead of {{<=}}

What do you think?

The big question I have is, some of the text description talks about "reading 
past the already active readahead range", i.e., past 
{{remainingInCurrentRequest}}, as being a problem, but it seems to me that 
should be okay; the problem documented so far is *seeking* past 
{{remainingInCurrentRequest}} (specifically to exactly the end of 
CurrentRequest, which is incorrectly guarded by the above L264 inequality) and 
then not doing a stream close, which causes the problem.  Do you know if, say, 
seeking to a few bytes before the end of CurrentRequest, then reading past it 
(when the S3 file does indeed have more to read), also causes an EOF, or does 
the stream machinery handle that case correctly?

I'm putting together a test platform so I can answer such questions myself, but 
it will take me a few hours; I haven't worked in s3a before.

 

> Parquet reading S3AFileSystem causes EOF
> 
>
> Key: HADOOP-16109
> URL: https://issues.apache.org/jira/browse/HADOOP-16109
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.2, 2.8.5, 3.3.0, 3.1.2
>Reporter: Dave Christianson
>Assignee: Steve Loughran
>Priority: Blocker
>
> When using S3AFileSystem to read Parquet files a specific set of 
> circumstances causes an  EOFException that is not thrown when reading the 
> same file from local disk
> Note this has only been observed under specific circumstances:
>   - when the reader is doing a projection (will cause it to do a seek 
> backwards and put the filesystem into random mode)
>  - when the file is larger than the readahead buffer size
>  - when the seek behavior of the Parquet reader causes the reader to seek 
> towards the end of the current input stream without reopening, such that the 
> next read on the currently open stream will read past the end of the 
> currently open stream.
> Exception from Parquet reader is as follows:
> {code}
> Caused by: java.io.EOFException: Reached the end of stream with 51 bytes left 
> to read
>  at 
> org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:104)
>  at 
> org.apache.parquet.io.DelegatingSeekableInputStream.readFullyHeapBuffer(DelegatingSeekableInputStream.java:127)
>  at 
> org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:91)
>  at 
> org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:1174)
>  at 
> org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:805)
>  at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:127)
>  at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:222)
>  at 
> org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:207)
>  at 
> org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormatBase.fetchNext(HadoopInputFormatBase.java:206)
>  at 
> org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormatBase.reachedEnd(HadoopInputFormatBase.java:199)
>  at 
> org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:190)
>  at 

[jira] [Comment Edited] (HADOOP-16109) Parquet reading S3AFileSystem causes EOF

2019-02-28 Thread Matt Foley (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781137#comment-16781137
 ] 

Matt Foley edited comment on HADOOP-16109 at 3/1/19 12:48 AM:
--

Yes, I'm thinking at 
[https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L264]
 we need

{{ && diff < forwardSeekLimit; }} instead of {{ <= }}

What do you think?

The big question I have is, some of the text description talks about "reading 
past the already active readahead range", i.e., past 
{{remainingInCurrentRequest}}, as being a problem, but it seems to me that 
should be okay; the problem documented so far is *seeking* past 
{{remainingInCurrentRequest}} (specifically to exactly the end of 
CurrentRequest, which is incorrectly guarded by the above L264 inequality) and 
then not doing a stream close, which causes the problem.  Do you know if, say, 
seeking to a few bytes before the end of CurrentRequest, then reading past it 
(when the S3 file does indeed have more to read), also causes an EOF, or does 
the stream machinery handle that case correctly?

I'm putting together a test platform so I can answer such questions myself, but 
it will take me a few hours; I haven't worked in s3a before.

 


was (Author: mattf):
Yes, I'm thinking at 
[https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L264]
 we need

`&& diff < forwardSeekLimit;` instead of `<=`

What do you think?

The big question I have is, some of the text description talks about "reading 
past the already active readahead range", i.e., past 
`remainingInCurrentRequest`, as being a problem, but it seems to me that should 
be okay; the problem documented so far is *seeking* past 
`remainingInCurrentRequest` (specifically to exactly the end of CurrentRequest, 
which is incorrectly guarded by the above L264 inequality) and then not doing a 
stream close, which causes the problem.  Do you know if, say, seeking to a few 
bytes before the end of CurrentRequest, then reading past it (when the S3 file 
does indeed have more to read), also causes an EOF, or does the stream 
machinery handle that case correctly?

I'm putting together a test platform so I can answer such questions myself, but 
it will take me a few hours; I haven't worked in s3a before.

 

> Parquet reading S3AFileSystem causes EOF
> 
>
> Key: HADOOP-16109
> URL: https://issues.apache.org/jira/browse/HADOOP-16109
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.2, 2.8.5, 3.3.0, 3.1.2
>Reporter: Dave Christianson
>Assignee: Steve Loughran
>Priority: Blocker
>
> When using S3AFileSystem to read Parquet files a specific set of 
> circumstances causes an  EOFException that is not thrown when reading the 
> same file from local disk
> Note this has only been observed under specific circumstances:
>   - when the reader is doing a projection (will cause it to do a seek 
> backwards and put the filesystem into random mode)
>  - when the file is larger than the readahead buffer size
>  - when the seek behavior of the Parquet reader causes the reader to seek 
> towards the end of the current input stream without reopening, such that the 
> next read on the currently open stream will read past the end of the 
> currently open stream.
> Exception from Parquet reader is as follows:
> {code}
> Caused by: java.io.EOFException: Reached the end of stream with 51 bytes left 
> to read
>  at 
> org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:104)
>  at 
> org.apache.parquet.io.DelegatingSeekableInputStream.readFullyHeapBuffer(DelegatingSeekableInputStream.java:127)
>  at 
> org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:91)
>  at 
> org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:1174)
>  at 
> org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:805)
>  at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:127)
>  at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:222)
>  at 
> org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:207)
>  at 
> org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormatBase.fetchNext(HadoopInputFormatBase.java:206)
>  at 
> org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormatBase.reachedEnd(HadoopInputFormatBase.java:199)
>  at 
> org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:190)
>  at 

[jira] [Commented] (HADOOP-16109) Parquet reading S3AFileSystem causes EOF

2019-02-28 Thread Matt Foley (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781137#comment-16781137
 ] 

Matt Foley commented on HADOOP-16109:
-

Yes, I'm thinking at 
[https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L264]
 we need

`&& diff < forwardSeekLimit;` instead of `<=`

What do you think?

The big question I have is, some of the text description talks about "reading 
past the already active readahead range", i.e., past 
`remainingInCurrentRequest`, as being a problem, but it seems to me that should 
be okay; the problem documented so far is *seeking* past 
`remainingInCurrentRequest` (specifically to exactly the end of CurrentRequest, 
which is incorrectly guarded by the above L264 inequality) and then not doing a 
stream close, which causes the problem.  Do you know if, say, seeking to a few 
bytes before the end of CurrentRequest, then reading past it (when the S3 file 
does indeed have more to read), also causes an EOF, or does the stream 
machinery handle that case correctly?

I'm putting together a test platform so I can answer such questions myself, but 
it will take me a few hours; I haven't worked in s3a before.

 

> Parquet reading S3AFileSystem causes EOF
> 
>
> Key: HADOOP-16109
> URL: https://issues.apache.org/jira/browse/HADOOP-16109
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.2, 2.8.5, 3.3.0, 3.1.2
>Reporter: Dave Christianson
>Assignee: Steve Loughran
>Priority: Blocker
>
> When using S3AFileSystem to read Parquet files a specific set of 
> circumstances causes an  EOFException that is not thrown when reading the 
> same file from local disk
> Note this has only been observed under specific circumstances:
>   - when the reader is doing a projection (will cause it to do a seek 
> backwards and put the filesystem into random mode)
>  - when the file is larger than the readahead buffer size
>  - when the seek behavior of the Parquet reader causes the reader to seek 
> towards the end of the current input stream without reopening, such that the 
> next read on the currently open stream will read past the end of the 
> currently open stream.
> Exception from Parquet reader is as follows:
> {code}
> Caused by: java.io.EOFException: Reached the end of stream with 51 bytes left 
> to read
>  at 
> org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:104)
>  at 
> org.apache.parquet.io.DelegatingSeekableInputStream.readFullyHeapBuffer(DelegatingSeekableInputStream.java:127)
>  at 
> org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:91)
>  at 
> org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:1174)
>  at 
> org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:805)
>  at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:127)
>  at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:222)
>  at 
> org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:207)
>  at 
> org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormatBase.fetchNext(HadoopInputFormatBase.java:206)
>  at 
> org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormatBase.reachedEnd(HadoopInputFormatBase.java:199)
>  at 
> org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:190)
>  at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
>  at java.lang.Thread.run(Thread.java:748)
> {code}
> The following example program generate the same root behavior (sans finding a 
> Parquet file that happens to trigger this condition) by purposely reading 
> past the already active readahead range on any file >= 1029 bytes in size.. 
> {code:java}
> final Configuration conf = new Configuration();
> conf.set("fs.s3a.readahead.range", "1K");
> conf.set("fs.s3a.experimental.input.fadvise", "random");
> final FileSystem fs = FileSystem.get(path.toUri(), conf);
> // forward seek reading across readahead boundary
> try (FSDataInputStream in = fs.open(path)) {
> final byte[] temp = new byte[5];
> in.readByte();
> in.readFully(1023, temp); // <-- works
> }
> // forward seek reading from end of readahead boundary
> try (FSDataInputStream in = fs.open(path)) {
>  final byte[] temp = new byte[5];
>  in.readByte();
>  in.readFully(1024, temp); // <-- throws EOFException
> }
> {code}
>  




[jira] [Updated] (HADOOP-16131) Support reencrypt in KMS Benchmark

2019-02-28 Thread George Huang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

George Huang updated HADOOP-16131:
--
Status: Patch Available  (was: Open)

> Support reencrypt  in KMS Benchmark
> ---
>
> Key: HADOOP-16131
> URL: https://issues.apache.org/jira/browse/HADOOP-16131
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: kms
>Affects Versions: 3.3.0
>Reporter: Wei-Chiu Chuang
>Assignee: George Huang
>Priority: Major
> Attachments: HADOOP-16131.001.patch
>
>
> It would be nice to support KMS reencrypt related operations -- reencrypt, 
> invalidateCache, rollNewVersion.
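
For reference, a minimal sketch of the operations such a benchmark would exercise (the class and loop below are illustrative only, not the attached patch; the method names follow the public KeyProviderCryptoExtension API):

{code:java}
import java.io.IOException;
import java.security.GeneralSecurityException;
import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;
import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.EncryptedKeyVersion;

/** Illustrative sketch only. */
class ReencryptBenchmarkSketch {
  static void run(KeyProviderCryptoExtension kp, String keyName, int iterations)
      throws IOException, GeneralSecurityException {
    EncryptedKeyVersion ekv = kp.generateEncryptedKey(keyName);
    for (int i = 0; i < iterations; i++) {
      ekv = kp.reencryptEncryptedKey(ekv);  // re-encrypt the EDEK with the latest key version
    }
    kp.invalidateCache(keyName);            // drop cached key material for this key on the KMS
    kp.rollNewVersion(keyName);             // roll the encryption key to a new version
  }
}
{code}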






[jira] [Updated] (HADOOP-16131) Support reencrypt in KMS Benchmark

2019-02-28 Thread George Huang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

George Huang updated HADOOP-16131:
--
Attachment: HADOOP-16131.001.patch

> Support reencrypt  in KMS Benchmark
> ---
>
> Key: HADOOP-16131
> URL: https://issues.apache.org/jira/browse/HADOOP-16131
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: kms
>Affects Versions: 3.3.0
>Reporter: Wei-Chiu Chuang
>Assignee: George Huang
>Priority: Major
> Attachments: HADOOP-16131.001.patch
>
>
> It would be nice to support KMS reencrypt related operations -- reencrypt, 
> invalidateCache, rollNewVersion.






[GitHub] avijayanhwx commented on issue #536: HDDS-1136 : Add metric counters to capture the RocksDB checkpointing statistics.

2019-02-28 Thread GitBox
avijayanhwx commented on issue #536: HDDS-1136 : Add metric counters to capture 
the RocksDB checkpointing statistics.
URL: https://github.com/apache/hadoop/pull/536#issuecomment-468486162
 
 
   cc @anuengineer @bharatviswa504 @arp7 





[GitHub] avijayanhwx opened a new pull request #536: HDDS-1136 : Add metric counters to capture the RocksDB checkpointing statistics.

2019-02-28 Thread GitBox
avijayanhwx opened a new pull request #536: HDDS-1136 : Add metric counters to 
capture the RocksDB checkpointing statistics.
URL: https://github.com/apache/hadoop/pull/536
 
 
   Added metric gauges for tracking DB checkpointing statistics. The OMMetrics class will hold these gauges at any instant, and they can be pulled from the OM by Recon.
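
   As a rough illustration of the kind of gauge described here (the class and field names below are hypothetical, not the actual OMMetrics members):

```java
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableGaugeLong;

// Hypothetical sketch only; the real OMMetrics fields may differ.
@Metrics(about = "OM DB checkpoint metrics", context = "ozone")
class DbCheckpointMetricsSketch {
  @Metric("Time taken to create the last RocksDB checkpoint (ms)")
  private MutableGaugeLong lastCheckpointCreationTimeTaken;

  @Metric("Time taken to stream the last checkpoint to the caller (ms)")
  private MutableGaugeLong lastCheckpointStreamingTimeTaken;

  void setLastCheckpointCreationTimeTaken(long millis) {
    lastCheckpointCreationTimeTaken.set(millis);
  }

  void setLastCheckpointStreamingTimeTaken(long millis) {
    lastCheckpointStreamingTimeTaken.set(millis);
  }
}
```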





[jira] [Commented] (HADOOP-16132) Support multipart download in S3AFileSystem

2019-02-28 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781099#comment-16781099
 ] 

Hadoop QA commented on HADOOP-16132:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 6 
new + 5 unchanged - 0 fixed = 11 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
31s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
24s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16132 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960677/HADOOP-16132.003.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 91be6a956f76 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0d61fac |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16002/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16002/artifact/out/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16002/testReport/ |
| asflicense | 

[jira] [Commented] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2019-02-28 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781097#comment-16781097
 ] 

Hadoop QA commented on HADOOP-15920:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  5m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
12s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
5s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
26s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
53s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
25s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} branch-3.2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 21s{color} | {color:orange} root: The patch generated 3 new + 10 unchanged - 
0 fixed = 13 total (was 10) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
28s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
28s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}119m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:63396be |
| JIRA Issue | HADOOP-15920 |
| GITHUB PR | https://github.com/apache/hadoop/pull/433 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1844348427f8 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-3.2 / 3f3548b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Updated] (HADOOP-16132) Support multipart download in S3AFileSystem

2019-02-28 Thread Justin Uang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Uang updated HADOOP-16132:
-
Attachment: HADOOP-16132.003.patch
Status: Patch Available  (was: Open)

> Support multipart download in S3AFileSystem
> ---
>
> Key: HADOOP-16132
> URL: https://issues.apache.org/jira/browse/HADOOP-16132
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Justin Uang
>Priority: Major
> Attachments: HADOOP-16132.001.patch, HADOOP-16132.002.patch, 
> HADOOP-16132.003.patch, seek-logs-parquet.txt
>
>
> I noticed that I get 150MB/s when I use the AWS CLI
> {code:java}
> aws s3 cp s3:/// - > /dev/null{code}
> vs 50MB/s when I use the S3AFileSystem
> {code:java}
> hadoop fs -cat s3:/// > /dev/null{code}
> Looking into the AWS CLI code, it looks like the 
> [download|https://github.com/boto/s3transfer/blob/ca0b708ea8a6a1213c6e21ca5a856e184f824334/s3transfer/download.py]
>  logic is quite clever. It downloads the next couple parts in parallel using 
> range requests, and then buffers them in memory in order to reorder them and 
> expose a single contiguous stream. I translated the logic to Java and 
> modified the S3AFileSystem to do similar things, and am able to achieve 
> 150MB/s download speeds as well. It is mostly done but I have some things to 
> clean up first. The PR is here: 
> https://github.com/palantir/hadoop/pull/47/files
> It would be great to get some other eyes on it to see what we need to do to 
> get it merged.
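
As a minimal sketch of the idea described above (parallel ranged reads reassembled in order; illustrative only, not the code in the attached patches or the linked PR):

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.BiFunction;

class ParallelRangeReadSketch {
  /**
   * Reads [0, length) in partSize chunks. The readRange argument is a stand-in
   * for a ranged GET of bytes [start, start + size); it is hypothetical, not an
   * actual S3A API.
   */
  static byte[] readAllParallel(BiFunction<Long, Integer, byte[]> readRange,
      long length, int partSize, int threads) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    try {
      List<Future<byte[]>> parts = new ArrayList<>();
      for (long off = 0; off < length; off += partSize) {
        final long start = off;
        final int size = (int) Math.min(partSize, length - off);
        parts.add(pool.submit(() -> readRange.apply(start, size)));
      }
      // Reassemble in submission order so the caller sees one contiguous stream.
      byte[] out = new byte[(int) length];
      int pos = 0;
      for (Future<byte[]> part : parts) {
        byte[] chunk = part.get();
        System.arraycopy(chunk, 0, out, pos, chunk.length);
        pos += chunk.length;
      }
      return out;
    } finally {
      pool.shutdown();
    }
  }
}
{code}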






[jira] [Updated] (HADOOP-16132) Support multipart download in S3AFileSystem

2019-02-28 Thread Justin Uang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Uang updated HADOOP-16132:
-
Status: Open  (was: Patch Available)

> Support multipart download in S3AFileSystem
> ---
>
> Key: HADOOP-16132
> URL: https://issues.apache.org/jira/browse/HADOOP-16132
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Justin Uang
>Priority: Major
> Attachments: HADOOP-16132.001.patch, HADOOP-16132.002.patch, 
> HADOOP-16132.003.patch, seek-logs-parquet.txt
>
>
> I noticed that I get 150MB/s when I use the AWS CLI
> {code:java}
> aws s3 cp s3:/// - > /dev/null{code}
> vs 50MB/s when I use the S3AFileSystem
> {code:java}
> hadoop fs -cat s3:/// > /dev/null{code}
> Looking into the AWS CLI code, it looks like the 
> [download|https://github.com/boto/s3transfer/blob/ca0b708ea8a6a1213c6e21ca5a856e184f824334/s3transfer/download.py]
>  logic is quite clever. It downloads the next couple parts in parallel using 
> range requests, and then buffers them in memory in order to reorder them and 
> expose a single contiguous stream. I translated the logic to Java and 
> modified the S3AFileSystem to do similar things, and am able to achieve 
> 150MB/s download speeds as well. It is mostly done but I have some things to 
> clean up first. The PR is here: 
> https://github.com/palantir/hadoop/pull/47/files
> It would be great to get some other eyes on it to see what we need to do to 
> get it merged.






[GitHub] ajayydv commented on a change in pull request #526: HDDS-1183. Override getDelegationToken API for OzoneFileSystem. Contr…

2019-02-28 Thread GitBox
ajayydv commented on a change in pull request #526: HDDS-1183. Override 
getDelegationToken API for OzoneFileSystem. Contr…
URL: https://github.com/apache/hadoop/pull/526#discussion_r261023663
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzoneFileSystem.java
 ##
 @@ -669,6 +676,12 @@ public Path getWorkingDirectory() {
 return workingDir;
   }
 
+  @Override
+  public Token getDelegationToken(String renewer) throws IOException {
+return securityEnabled? adapter.getDelegationToken(renewer) :
+super.getDelegationToken(renewer);
 
 Review comment:
   If Ozone security as well as Hadoop security is turned on, then we should fetch the DT from both.
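
   As a rough illustration of this suggestion (hypothetical; the actual change may look different), the override could collect tokens from both the Ozone adapter and the Hadoop-side default path:

```java
// Hypothetical sketch only: gather delegation tokens from both security layers.
@Override
public Token<?>[] addDelegationTokens(String renewer, Credentials credentials)
    throws IOException {
  List<Token<?>> tokens = new ArrayList<>();
  if (securityEnabled) {
    Token<?> ozoneToken = adapter.getDelegationToken(renewer);
    if (ozoneToken != null) {
      credentials.addToken(ozoneToken.getService(), ozoneToken);
      tokens.add(ozoneToken);
    }
  }
  Token<?> hadoopToken = super.getDelegationToken(renewer);  // Hadoop-side token, if any
  if (hadoopToken != null) {
    credentials.addToken(hadoopToken.getService(), hadoopToken);
    tokens.add(hadoopToken);
  }
  return tokens.toArray(new Token<?>[0]);
}
```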





[GitHub] ajayydv commented on a change in pull request #526: HDDS-1183. Override getDelegationToken API for OzoneFileSystem. Contr…

2019-02-28 Thread GitBox
ajayydv commented on a change in pull request #526: HDDS-1183. Override 
getDelegationToken API for OzoneFileSystem. Contr…
URL: https://github.com/apache/hadoop/pull/526#discussion_r261024027
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzoneFileSystem.java
 ##
 @@ -669,6 +676,12 @@ public Path getWorkingDirectory() {
 return workingDir;
   }
 
+  @Override
+  public Token getDelegationToken(String renewer) throws IOException {
+return securityEnabled? adapter.getDelegationToken(renewer) :
+super.getDelegationToken(renewer);
 
 Review comment:
   Shall we add a unit test or robot test?





[jira] [Commented] (HADOOP-16132) Support multipart download in S3AFileSystem

2019-02-28 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780955#comment-16780955
 ] 

Hadoop QA commented on HADOOP-16132:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 18s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 9 
new + 5 unchanged - 0 fixed = 14 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 58s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
55s{color} | {color:red} hadoop-tools/hadoop-aws generated 1 new + 0 unchanged 
- 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
39s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
25s{color} | {color:red} The patch generated 9 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-tools/hadoop-aws |
|  |  org.apache.hadoop.fs.s3a.multipart.MultipartDownloader$1.run() may fail 
to close stream  At MultipartDownloader.java:stream  At 
MultipartDownloader.java:[line 88] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16132 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960666/HADOOP-16132.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 84dd1c36e719 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0d61fac |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16000/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
| findbugs | 

[GitHub] bharatviswa504 commented on issue #523: HDDS-623. On SCM UI, Node Manager info is empty

2019-02-28 Thread GitBox
bharatviswa504 commented on issue #523: HDDS-623. On SCM UI, Node Manager info 
is empty
URL: https://github.com/apache/hadoop/pull/523#issuecomment-468449859
 
 
   +1 LGTM. (Not a frontend guy, but since this is not adding any code, I took a look at it.)
   
   One more thing I have observed: the "Block Manager: Open containers" value is also broken, because the code currently has a TODO for it in BlockManagerImpl.java.
   
   This also needs to be fixed.
   ```
@Override
public int getOpenContainersNo() {
  return 0;
  // TODO : FIX ME : The open container being a single number does not make
  // sense.
  // We have to get open containers by Replication Type and Replication
  // factor. Hence returning 0 for now.
  // containers.get(HddsProtos.LifeCycleState.OPEN).size();
}
   ```
   
   





[jira] [Commented] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2019-02-28 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780945#comment-16780945
 ] 

Steve Loughran commented on HADOOP-15920:
-

AWS S3 ireland; tests happy

> get patch for S3a nextReadPos(), through Yetus
> --
>
> Key: HADOOP-15920
> URL: https://issues.apache.org/jira/browse/HADOOP-15920
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-001.diff, HADOOP-15870-002.patch, 
> HADOOP-15870-003.patch, HADOOP-15870-004.patch, HADOOP-15870-005.patch, 
> HADOOP-15870-008.patch, HADOOP-15920-06.patch, HADOOP-15920-07.patch
>
>







[jira] [Commented] (HADOOP-16132) Support multipart download in S3AFileSystem

2019-02-28 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780932#comment-16780932
 ] 

Hadoop QA commented on HADOOP-16132:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 9 
new + 5 unchanged - 0 fixed = 14 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
47s{color} | {color:red} hadoop-tools/hadoop-aws generated 1 new + 0 unchanged 
- 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
35s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
36s{color} | {color:red} The patch generated 9 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-tools/hadoop-aws |
|  | org.apache.hadoop.fs.s3a.multipart.MultipartDownloader$1.run() may fail to close stream  At MultipartDownloader.java:stream  At MultipartDownloader.java:[line 88] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16132 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12960661/HADOOP-16132.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux a64a54e7024e 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0d61fac |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/15998/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt |
| findbugs | 

[jira] [Updated] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2019-02-28 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15920:

Status: Patch Available  (was: Open)

patch 008: style fixup. AWS tests in progress. 

> get patch for S3a nextReadPos(), through Yetus
> --
>
> Key: HADOOP-15920
> URL: https://issues.apache.org/jira/browse/HADOOP-15920
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-001.diff, HADOOP-15870-002.patch, 
> HADOOP-15870-003.patch, HADOOP-15870-004.patch, HADOOP-15870-005.patch, 
> HADOOP-15870-008.patch, HADOOP-15920-06.patch, HADOOP-15920-07.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2019-02-28 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15920:

Status: Open  (was: Patch Available)

> get patch for S3a nextReadPos(), through Yetus
> --
>
> Key: HADOOP-15920
> URL: https://issues.apache.org/jira/browse/HADOOP-15920
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-001.diff, HADOOP-15870-002.patch, 
> HADOOP-15870-003.patch, HADOOP-15870-004.patch, HADOOP-15870-005.patch, 
> HADOOP-15870-008.patch, HADOOP-15920-06.patch, HADOOP-15920-07.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2019-02-28 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15920:

Attachment: HADOOP-15870-008.patch

> get patch for S3a nextReadPos(), through Yetus
> --
>
> Key: HADOOP-15920
> URL: https://issues.apache.org/jira/browse/HADOOP-15920
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-001.diff, HADOOP-15870-002.patch, 
> HADOOP-15870-003.patch, HADOOP-15870-004.patch, HADOOP-15870-005.patch, 
> HADOOP-15870-008.patch, HADOOP-15920-06.patch, HADOOP-15920-07.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] bharatviswa504 commented on issue #502: HDDS-919. Enable prometheus endpoints for Ozone datanodes

2019-02-28 Thread GitBox
bharatviswa504 commented on issue #502: HDDS-919. Enable prometheus endpoints 
for Ozone datanodes
URL: https://github.com/apache/hadoop/pull/502#issuecomment-468442678
 
 
   Hi @elek 
   When I was about to commit, I noticed test failures.
   Also, in MiniOzoneClusterImpl, configureHddsDatanodes() needs to set this port address to 0; when multiple datanodes start on localhost, the HTTP server will otherwise fail to start (a sketch of that change follows the stack trace below).
   
   I think this patch needs some more work; see the error below.
   
   
   
   ```
   2019-02-28 20:08:24,593 INFO  hdfs.DFSUtil (DFSUtil.java:httpServerTemplateForNNAndJN(1641)) - Starting Web-server for hddsDatanode at: http://0.0.0.0:9882
   2019-02-28 20:08:24,594 ERROR ozone.HddsDatanodeService (HddsDatanodeService.java:start(189)) - HttpServer failed to start.
   java.io.FileNotFoundException: webapps/hddsDatanode not found in CLASSPATH
       at org.apache.hadoop.http.HttpServer2.getWebAppsPath(HttpServer2.java:1070)
       at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:536)
       at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:119)
       at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:433)
       at org.apache.hadoop.hdds.server.BaseHttpServer.<init>(BaseHttpServer.java:90)
       at org.apache.hadoop.ozone.HddsDatanodeHttpServer.<init>(HddsDatanodeHttpServer.java:34)
       at org.apache.hadoop.ozone.HddsDatanodeService.start(HddsDatanodeService.java:186)
       at org.apache.hadoop.ozone.MiniOzoneClusterImpl.lambda$startHddsDatanodes$2(MiniOzoneClusterImpl.java:367)
       at java.util.ArrayList.forEach(ArrayList.java:1257)
       at org.apache.hadoop.ozone.MiniOzoneClusterImpl.startHddsDatanodes(MiniOzoneClusterImpl.java:367)
       at org.apache.hadoop.ozone.om.TestScmChillMode.init(TestScmChillMode.java:99)
       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
       at java.lang.reflect.Method.invoke(Method.java:498)
       at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
       at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
       at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
       at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
       at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
       at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
   ```
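   
   For illustration only, a minimal sketch of the change suggested above: binding the per-datanode HTTP server to an ephemeral port so several datanodes can share localhost in MiniOzoneClusterImpl. The config key name and the helper are assumptions for the example, not the actual patch.
   
   ```java
   import org.apache.hadoop.conf.Configuration;
   
   public class MiniClusterHttpPortSketch {
     // Hypothetical key name for the HDDS datanode HTTP endpoint; the real
     // constant lives in the Ozone configuration classes and may differ.
     private static final String HDDS_DATANODE_HTTP_ADDRESS =
         "hdds.datanode.http-address";
   
     /**
      * Bind the datanode HTTP server to port 0 (ephemeral) so that multiple
      * datanodes started by a mini cluster on the same host do not collide.
      */
     static void configureHddsDatanodeHttp(Configuration dnConf) {
       dnConf.set(HDDS_DATANODE_HTTP_ADDRESS, "127.0.0.1:0");
     }
   }
   ```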


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16087) Backport HADOOP-15549 to branch-3.0

2019-02-28 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16087:
-
Fix Version/s: (was: 3.1.3)
   3.0.2

> Backport HADOOP-15549 to branch-3.0
> ---
>
> Key: HADOOP-16087
> URL: https://issues.apache.org/jira/browse/HADOOP-16087
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 3.0.2
>Reporter: Yuming Wang
>Assignee: Todd Lipcon
>Priority: Major
> Fix For: 3.0.2
>
> Attachments: HADOOP-16087-branch-3.0-001.patch, 
> HADOOP-16087-branch-3.0-002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16086) Backport HADOOP-15549 to branch-3.1

2019-02-28 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16086:
-
Fix Version/s: (was: 3.0.2)
   3.1.3

> Backport HADOOP-15549 to branch-3.1
> ---
>
> Key: HADOOP-16086
> URL: https://issues.apache.org/jira/browse/HADOOP-16086
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 3.0.2
>Reporter: Yuming Wang
>Assignee: Todd Lipcon
>Priority: Major
> Fix For: 3.1.3
>
> Attachments: HADOOP-16086-branch-3.1-001.patch, 
> HADOOP-16086-branch-3.1-002.patch
>
>
> Backport HADOOP-15549 to branch-3.1 to fix IllegalArgumentException:
> {noformat}
> 02:44:34.707 ERROR org.apache.hadoop.hive.ql.exec.Task: Job Submission failed 
> with exception 'java.io.IOException(Cannot initialize Cluster. Please check 
> your configuration for mapreduce.framework.name and the correspond server 
> addresses.)'
> java.io.IOException: Cannot initialize Cluster. Please check your 
> configuration for mapreduce.framework.name and the correspond server 
> addresses.
>   at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:116)
>   at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:109)
>   at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:102)
>   at org.apache.hadoop.mapred.JobClient.init(JobClient.java:475)
>   at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:454)
>   at 
> org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:369)
>   at 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:151)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2183)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1839)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1526)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$runHive$1(HiveClientImpl.scala:730)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:283)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:221)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:220)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:266)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.runHive(HiveClientImpl.scala:719)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.runSqlHive(HiveClientImpl.scala:709)
>   at 
> org.apache.spark.sql.hive.StatisticsSuite.createNonPartitionedTable(StatisticsSuite.scala:719)
>   at 
> org.apache.spark.sql.hive.StatisticsSuite.$anonfun$testAlterTableProperties$2(StatisticsSuite.scala:822)
>   at 
> org.apache.spark.sql.test.SQLTestUtilsBase.withTable(SQLTestUtils.scala:284)
>   at 
> org.apache.spark.sql.test.SQLTestUtilsBase.withTable$(SQLTestUtils.scala:283)
>   at 
> org.apache.spark.sql.StatisticsCollectionTestBase.withTable(StatisticsCollectionTestBase.scala:40)
>   at 
> org.apache.spark.sql.hive.StatisticsSuite.$anonfun$testAlterTableProperties$1(StatisticsSuite.scala:821)
>   at 
> org.apache.spark.sql.hive.StatisticsSuite.$anonfun$testAlterTableProperties$1$adapted(StatisticsSuite.scala:820)
>   at scala.collection.immutable.List.foreach(List.scala:392)
>   at 
> org.apache.spark.sql.hive.StatisticsSuite.testAlterTableProperties(StatisticsSuite.scala:820)
>   at 
> org.apache.spark.sql.hive.StatisticsSuite.$anonfun$new$70(StatisticsSuite.scala:851)
>   at 
> scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
>   at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
>   at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
>   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
>   at org.scalatest.Transformer.apply(Transformer.scala:22)
>   at org.scalatest.Transformer.apply(Transformer.scala:20)
>   at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
>   at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:104)
>   at 
> org.scalatest.FunSuiteLike.invokeWithFixture$1(FunSuiteLike.scala:184)
>   at org.scalatest.FunSuiteLike.$anonfun$runTest$1(FunSuiteLike.scala:196)
>   at org.scalatest.SuperEngine.runTestImpl(Engine.scala:289)
>   at org.scalatest.FunSuiteLike.runTest(FunSuiteLike.scala:196)
>   at 

[jira] [Updated] (HADOOP-16132) Support multipart download in S3AFileSystem

2019-02-28 Thread Justin Uang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Uang updated HADOOP-16132:
-
Status: Open  (was: Patch Available)

> Support multipart download in S3AFileSystem
> ---
>
> Key: HADOOP-16132
> URL: https://issues.apache.org/jira/browse/HADOOP-16132
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Justin Uang
>Priority: Major
> Attachments: HADOOP-16132.001.patch, HADOOP-16132.002.patch, 
> seek-logs-parquet.txt
>
>
> I noticed that I get 150MB/s when I use the AWS CLI
> {code:java}
> aws s3 cp s3:/// - > /dev/null{code}
> vs 50MB/s when I use the S3AFileSystem
> {code:java}
> hadoop fs -cat s3:/// > /dev/null{code}
> Looking into the AWS CLI code, it looks like the 
> [download|https://github.com/boto/s3transfer/blob/ca0b708ea8a6a1213c6e21ca5a856e184f824334/s3transfer/download.py]
>  logic is quite clever. It downloads the next couple parts in parallel using 
> range requests, and then buffers them in memory in order to reorder them and 
> expose a single contiguous stream. I translated the logic to Java and 
> modified the S3AFileSystem to do similar things, and am able to achieve 
> 150MB/s download speeds as well. It is mostly done but I have some things to 
> clean up first. The PR is here: 
> https://github.com/palantir/hadoop/pull/47/files
> It would be great to get some other eyes on it to see what we need to do to 
> get it merged.
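
Below is a minimal, illustrative sketch of the parallel ranged-GET idea described above (not the attached patch): parts are fetched concurrently and then exposed in order as one contiguous stream. The {{RangeFetcher}} interface stands in for the S3 "GET with Range" call and is an assumption for the example; the real change also bounds how many parts are buffered in memory.

{code:java}
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.SequenceInputStream;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class RangedDownloadSketch {

  /** Assumed abstraction over a ranged GET; not part of the actual patch. */
  public interface RangeFetcher {
    byte[] fetch(long start, long length) throws IOException;
  }

  /** Fetches fixed-size parts in parallel and returns them as one stream. */
  public static InputStream open(RangeFetcher fetcher, long fileLength,
      long partSize, int parallelism) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(parallelism);
    try {
      List<Future<byte[]>> parts = new ArrayList<>();
      for (long off = 0; off < fileLength; off += partSize) {
        final long start = off;
        final long len = Math.min(partSize, fileLength - off);
        parts.add(pool.submit(() -> fetcher.fetch(start, len)));
      }
      // Reassemble the parts in their original order.
      List<InputStream> streams = new ArrayList<>();
      for (Future<byte[]> part : parts) {
        streams.add(new ByteArrayInputStream(part.get()));
      }
      return new SequenceInputStream(Collections.enumeration(streams));
    } finally {
      pool.shutdown();
    }
  }
}
{code}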



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16132) Support multipart download in S3AFileSystem

2019-02-28 Thread Justin Uang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Uang updated HADOOP-16132:
-
Attachment: HADOOP-16132.002.patch
Status: Patch Available  (was: Open)

> Support multipart download in S3AFileSystem
> ---
>
> Key: HADOOP-16132
> URL: https://issues.apache.org/jira/browse/HADOOP-16132
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Justin Uang
>Priority: Major
> Attachments: HADOOP-16132.001.patch, HADOOP-16132.002.patch, 
> seek-logs-parquet.txt
>
>
> I noticed that I get 150MB/s when I use the AWS CLI
> {code:java}
> aws s3 cp s3:/// - > /dev/null{code}
> vs 50MB/s when I use the S3AFileSystem
> {code:java}
> hadoop fs -cat s3:/// > /dev/null{code}
> Looking into the AWS CLI code, it looks like the 
> [download|https://github.com/boto/s3transfer/blob/ca0b708ea8a6a1213c6e21ca5a856e184f824334/s3transfer/download.py]
>  logic is quite clever. It downloads the next couple parts in parallel using 
> range requests, and then buffers them in memory in order to reorder them and 
> expose a single contiguous stream. I translated the logic to Java and 
> modified the S3AFileSystem to do similar things, and am able to achieve 
> 150MB/s download speeds as well. It is mostly done but I have some things to 
> clean up first. The PR is here: 
> https://github.com/palantir/hadoop/pull/47/files
> It would be great to get some other eyes on it to see what we need to do to 
> get it merged.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16132) Support multipart download in S3AFileSystem

2019-02-28 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780899#comment-16780899
 ] 

Hadoop QA commented on HADOOP-16132:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} | {color:red} HADOOP-16132 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-16132 |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/15999/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Support multipart download in S3AFileSystem
> ---
>
> Key: HADOOP-16132
> URL: https://issues.apache.org/jira/browse/HADOOP-16132
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Justin Uang
>Priority: Major
> Attachments: HADOOP-16132.001.patch, seek-logs-parquet.txt
>
>
> I noticed that I get 150MB/s when I use the AWS CLI
> {code:java}
> aws s3 cp s3:/// - > /dev/null{code}
> vs 50MB/s when I use the S3AFileSystem
> {code:java}
> hadoop fs -cat s3:/// > /dev/null{code}
> Looking into the AWS CLI code, it looks like the 
> [download|https://github.com/boto/s3transfer/blob/ca0b708ea8a6a1213c6e21ca5a856e184f824334/s3transfer/download.py]
>  logic is quite clever. It downloads the next couple parts in parallel using 
> range requests, and then buffers them in memory in order to reorder them and 
> expose a single contiguous stream. I translated the logic to Java and 
> modified the S3AFileSystem to do similar things, and am able to achieve 
> 150MB/s download speeds as well. It is mostly done but I have some things to 
> clean up first. The PR is here: 
> https://github.com/palantir/hadoop/pull/47/files
> It would be great to get some other eyes on it to see what we need to do to 
> get it merged.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16132) Support multipart download in S3AFileSystem

2019-02-28 Thread Justin Uang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780896#comment-16780896
 ] 

Justin Uang commented on HADOOP-16132:
--

[~ste...@apache.org]

The billing differences are good to know. I'm going to have to check our usage, but 
I'm pretty sure the billing difference is small for us since it costs only $0.0004 
per 1,000 requests ([https://aws.amazon.com/s3/pricing/]). I think that our main 
costs are in storage. Regarding the throttling, assuming that this is for sequential 
reads, we would only be requesting per the part size, which is 8MB, which I imagine 
is less frequent than the heavy random IO.

That's interesting about random IO. I do think that it would be hard to implement 
this for random IO, given that the cost of guessing the wrong readahead can be quite 
expensive if the blocks are that large. It's a lot easier to guess what needs to be 
read in sequential IO.

I do want to make sure I'm on the same page as you regarding what constitutes 
sequential IO. I view Parquet as mostly sequential IO because, from the perspective 
of [^seek-logs-parquet.txt], we do seek a few times for the footer (hundreds of 
bytes), but afterwards we do a straight read of several hundred MBs. Is my 
understanding the same as yours?

I also posted a patch! I'm still getting familiar with the process, but any 
feedback on how to push this forward would be great!

> Support multipart download in S3AFileSystem
> ---
>
> Key: HADOOP-16132
> URL: https://issues.apache.org/jira/browse/HADOOP-16132
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Justin Uang
>Priority: Major
> Attachments: HADOOP-16132.001.patch, seek-logs-parquet.txt
>
>
> I noticed that I get 150MB/s when I use the AWS CLI
> {code:java}
> aws s3 cp s3:/// - > /dev/null{code}
> vs 50MB/s when I use the S3AFileSystem
> {code:java}
> hadoop fs -cat s3:/// > /dev/null{code}
> Looking into the AWS CLI code, it looks like the 
> [download|https://github.com/boto/s3transfer/blob/ca0b708ea8a6a1213c6e21ca5a856e184f824334/s3transfer/download.py]
>  logic is quite clever. It downloads the next couple parts in parallel using 
> range requests, and then buffers them in memory in order to reorder them and 
> expose a single contiguous stream. I translated the logic to Java and 
> modified the S3AFileSystem to do similar things, and am able to achieve 
> 150MB/s download speeds as well. It is mostly done but I have some things to 
> clean up first. The PR is here: 
> https://github.com/palantir/hadoop/pull/47/files
> It would be great to get some other eyes on it to see what we need to do to 
> get it merged.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16132) Support multipart download in S3AFileSystem

2019-02-28 Thread Justin Uang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Uang updated HADOOP-16132:
-
Attachment: seek-logs-parquet.txt

> Support multipart download in S3AFileSystem
> ---
>
> Key: HADOOP-16132
> URL: https://issues.apache.org/jira/browse/HADOOP-16132
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Justin Uang
>Priority: Major
> Attachments: HADOOP-16132.001.patch, seek-logs-parquet.txt
>
>
> I noticed that I get 150MB/s when I use the AWS CLI
> {code:java}
> aws s3 cp s3:/// - > /dev/null{code}
> vs 50MB/s when I use the S3AFileSystem
> {code:java}
> hadoop fs -cat s3:/// > /dev/null{code}
> Looking into the AWS CLI code, it looks like the 
> [download|https://github.com/boto/s3transfer/blob/ca0b708ea8a6a1213c6e21ca5a856e184f824334/s3transfer/download.py]
>  logic is quite clever. It downloads the next couple parts in parallel using 
> range requests, and then buffers them in memory in order to reorder them and 
> expose a single contiguous stream. I translated the logic to Java and 
> modified the S3AFileSystem to do similar things, and am able to achieve 
> 150MB/s download speeds as well. It is mostly done but I have some things to 
> clean up first. The PR is here: 
> https://github.com/palantir/hadoop/pull/47/files
> It would be great to get some other eyes on it to see what we need to do to 
> get it merged.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16132) Support multipart download in S3AFileSystem

2019-02-28 Thread Justin Uang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780886#comment-16780886
 ] 

Justin Uang commented on HADOOP-16132:
--

Copying over the last comment from the github ticket since we will be 
continuing the conversation here:

[~ste...@apache.org]
{quote}BTW, one little side effect of breaking up the reads: every GET is its 
own HTTP request, so gets billed differently, and for SSE-KMS, possibly a 
separate call to AWS:KMS. Nobody quite knows about the latter, we do know that 
heavy random seek IO on a single tree in a bucket can trigger more throttling 
than you'd expect

Anyway, maybe for random IO the strategy would be to have a notion of aligned 
blocks, say 8 MB, the current block is cached as it is read in, so a backward 
seek can often work from in memory; the stream could be doing a readahead of , 
say, the next 2+ blocks in parallel & then store them in a ring of cached 
blocks ready for when they are used.

you've got me thinking now...
{quote}
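
The "aligned cached blocks" idea quoted above can be sketched as follows; all names and the block/ring sizes are assumptions for illustration, not an actual S3A design:

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Sketch only: fixed-size aligned blocks kept in a small LRU "ring" so a
 * backward seek within a recently read block is served from memory instead
 * of a new GET. Readahead would populate upcoming block indices in advance.
 */
public class AlignedBlockCacheSketch {
  private final long blockSize;
  private final Map<Long, byte[]> ring;

  public AlignedBlockCacheSketch(long blockSize, final int ringSize) {
    this.blockSize = blockSize;
    // Access-ordered LinkedHashMap acts as the ring of cached blocks.
    this.ring = new LinkedHashMap<Long, byte[]>(16, 0.75f, true) {
      @Override
      protected boolean removeEldestEntry(Map.Entry<Long, byte[]> eldest) {
        return size() > ringSize;
      }
    };
  }

  /** Index of the aligned block containing the given byte offset. */
  long blockIndex(long position) {
    return position / blockSize;
  }

  /** Returns the cached block covering this position, or null if absent. */
  byte[] lookup(long position) {
    return ring.get(blockIndex(position));
  }

  /** Stores a freshly fetched aligned block. */
  void store(long position, byte[] block) {
    ring.put(blockIndex(position), block);
  }
}
{code}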
 

> Support multipart download in S3AFileSystem
> ---
>
> Key: HADOOP-16132
> URL: https://issues.apache.org/jira/browse/HADOOP-16132
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Justin Uang
>Priority: Major
> Attachments: HADOOP-16132.001.patch
>
>
> I noticed that I get 150MB/s when I use the AWS CLI
> {code:java}
> aws s3 cp s3:/// - > /dev/null{code}
> vs 50MB/s when I use the S3AFileSystem
> {code:java}
> hadoop fs -cat s3:/// > /dev/null{code}
> Looking into the AWS CLI code, it looks like the 
> [download|https://github.com/boto/s3transfer/blob/ca0b708ea8a6a1213c6e21ca5a856e184f824334/s3transfer/download.py]
>  logic is quite clever. It downloads the next couple parts in parallel using 
> range requests, and then buffers them in memory in order to reorder them and 
> expose a single contiguous stream. I translated the logic to Java and 
> modified the S3AFileSystem to do similar things, and am able to achieve 
> 150MB/s download speeds as well. It is mostly done but I have some things to 
> clean up first. The PR is here: 
> https://github.com/palantir/hadoop/pull/47/files
> It would be great to get some other eyes on it to see what we need to do to 
> get it merged.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16132) Support multipart download in S3AFileSystem

2019-02-28 Thread Justin Uang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Uang updated HADOOP-16132:
-
Attachment: HADOOP-16132.001.patch
Status: Patch Available  (was: Open)

> Support multipart download in S3AFileSystem
> ---
>
> Key: HADOOP-16132
> URL: https://issues.apache.org/jira/browse/HADOOP-16132
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Justin Uang
>Priority: Major
> Attachments: HADOOP-16132.001.patch
>
>
> I noticed that I get 150MB/s when I use the AWS CLI
> {code:java}
> aws s3 cp s3:/// - > /dev/null{code}
> vs 50MB/s when I use the S3AFileSystem
> {code:java}
> hadoop fs -cat s3:/// > /dev/null{code}
> Looking into the AWS CLI code, it looks like the 
> [download|https://github.com/boto/s3transfer/blob/ca0b708ea8a6a1213c6e21ca5a856e184f824334/s3transfer/download.py]
>  logic is quite clever. It downloads the next couple parts in parallel using 
> range requests, and then buffers them in memory in order to reorder them and 
> expose a single contiguous stream. I translated the logic to Java and 
> modified the S3AFileSystem to do similar things, and am able to achieve 
> 150MB/s download speeds as well. It is mostly done but I have some things to 
> clean up first. The PR is here: 
> https://github.com/palantir/hadoop/pull/47/files
> It would be great to get some other eyes on it to see what we need to do to 
> get it merged.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] vivekratnavel commented on issue #527: HDDS-1093. Configuration tab in OM/SCM ui is not displaying the correct values

2019-02-28 Thread GitBox
vivekratnavel commented on issue #527: HDDS-1093. Configuration tab in OM/SCM 
ui is not displaying the correct values
URL: https://github.com/apache/hadoop/pull/527#issuecomment-468423032
 
 
   > I am just wondering if it's the same issue as 
https://issues.apache.org/jira/browse/HDDS-611
   
   Thanks @elek ! Marked it a duplicate.  


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] asfgit closed pull request #535: HADOOP-16109. Parquet reading S3AFileSystem causes EOF

2019-02-28 Thread GitBox
asfgit closed pull request #535: HADOOP-16109. Parquet reading S3AFileSystem 
causes EOF
URL: https://github.com/apache/hadoop/pull/535
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16119) KMS on Hadoop RPC Engine

2019-02-28 Thread Aaron Fabbri (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780873#comment-16780873
 ] 

Aaron Fabbri commented on HADOOP-16119:
---

Thank you for writing this up [~jojochuang]. The doc looks good.

> KMS on Hadoop RPC Engine
> 
>
> Key: HADOOP-16119
> URL: https://issues.apache.org/jira/browse/HADOOP-16119
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Jonathan Eagles
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: Design doc_ KMS v2.pdf
>
>
> Per discussion on common-dev and text copied here for ease of reference.
> https://lists.apache.org/thread.html/0e2eeaf07b013f17fad6d362393f53d52041828feec53dcddff04808@%3Ccommon-dev.hadoop.apache.org%3E
> {noformat}
> Thanks all for the inputs,
> To offer additional information (while Daryn is working on his stuff),
> optimizing RPC encryption opens up another possibility: migrating KMS
> service to use Hadoop RPC.
> Today's KMS uses HTTPS + REST API, much like webhdfs. It has very
> undesirable performance (a few thousand ops per second) compared to
> NameNode. Unfortunately for each NameNode namespace operation you also need
> to access KMS too.
> Migrating KMS to Hadoop RPC greatly improves its performance (if
> implemented correctly), and RPC encryption would be a prerequisite. So
> please keep that in mind when discussing the Hadoop RPC encryption
> improvements. Cloudera is very interested to help with the Hadoop RPC
> encryption project because a lot of our customers are using at-rest
> encryption, and some of them are starting to hit KMS performance limit.
> This whole "migrating KMS to Hadoop RPC" was Daryn's idea. I heard this
> idea in the meetup and I am very thrilled to see this happening because it
> is a real issue bothering some of our customers, and I suspect it is the
> right solution to address this tech debt.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] steveloughran commented on issue #535: HADOOP-16109. Parquet reading S3AFileSystem causes EOF

2019-02-28 Thread GitBox
steveloughran commented on issue #535: HADOOP-16109. Parquet reading 
S3AFileSystem causes EOF
URL: https://github.com/apache/hadoop/pull/535#issuecomment-468415749
 
 
   S3 testing: AWS Ireland + s3guard


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] steveloughran commented on issue #535: HADOOP-16109. Parquet reading S3AFileSystem causes EOF

2019-02-28 Thread GitBox
steveloughran commented on issue #535: HADOOP-16109. Parquet reading 
S3AFileSystem causes EOF
URL: https://github.com/apache/hadoop/pull/535#issuecomment-468415599
 
 
   hey, @aw-was-here , yetus is bouncing all my PRs


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16109) Parquet reading S3AFileSystem causes EOF

2019-02-28 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16109:

Affects Version/s: (was: 3.1.0)
   3.3.0
   2.9.2
   2.8.5
   3.1.2

> Parquet reading S3AFileSystem causes EOF
> 
>
> Key: HADOOP-16109
> URL: https://issues.apache.org/jira/browse/HADOOP-16109
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.2, 2.8.5, 3.3.0, 3.1.2
>Reporter: Dave Christianson
>Assignee: Steve Loughran
>Priority: Blocker
>
> When using S3AFileSystem to read Parquet files a specific set of 
> circumstances causes an  EOFException that is not thrown when reading the 
> same file from local disk
> Note this has only been observed under specific circumstances:
>   - when the reader is doing a projection (will cause it to do a seek 
> backwards and put the filesystem into random mode)
>  - when the file is larger than the readahead buffer size
>  - when the seek behavior of the Parquet reader causes the reader to seek 
> towards the end of the current input stream without reopening, such that the 
> next read on the currently open stream will read past the end of the 
> currently open stream.
> Exception from Parquet reader is as follows:
> {code}
> Caused by: java.io.EOFException: Reached the end of stream with 51 bytes left 
> to read
>  at 
> org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:104)
>  at 
> org.apache.parquet.io.DelegatingSeekableInputStream.readFullyHeapBuffer(DelegatingSeekableInputStream.java:127)
>  at 
> org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:91)
>  at 
> org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:1174)
>  at 
> org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:805)
>  at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:127)
>  at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:222)
>  at 
> org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:207)
>  at 
> org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormatBase.fetchNext(HadoopInputFormatBase.java:206)
>  at 
> org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormatBase.reachedEnd(HadoopInputFormatBase.java:199)
>  at 
> org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:190)
>  at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
>  at java.lang.Thread.run(Thread.java:748)
> {code}
> The following example program generates the same root behavior (sans finding a 
> Parquet file that happens to trigger this condition) by purposely reading 
> past the already active readahead range on any file >= 1029 bytes in size.
> {code:java}
> final Configuration conf = new Configuration();
> conf.set("fs.s3a.readahead.range", "1K");
> conf.set("fs.s3a.experimental.input.fadvise", "random");
> final FileSystem fs = FileSystem.get(path.toUri(), conf);
> // forward seek reading across readahead boundary
> try (FSDataInputStream in = fs.open(path)) {
> final byte[] temp = new byte[5];
> in.readByte();
> in.readFully(1023, temp); // <-- works
> }
> // forward seek reading from end of readahead boundary
> try (FSDataInputStream in = fs.open(path)) {
>  final byte[] temp = new byte[5];
>  in.readByte();
>  in.readFully(1024, temp); // <-- throws EOFException
> }
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16109) Parquet reading S3AFileSystem causes EOF

2019-02-28 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780864#comment-16780864
 ] 

Steve Loughran commented on HADOOP-16109:
-

Matt, love to see it. I just stuck my own PR up too: let's compare! And if there 
are extra tests, pull them in.

Root cause: using <= over = in the decision making about whether to skip vs 
close. The situation which triggered the failure was

* random IO mode (i.e. shorter reads)
* active read
* next read spanned the current active read but went beyond.

One line to fix, one for extra debug log, parameterized tests for regression. 
This is going to need backporting to 2.8+.
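
To make the boundary condition concrete, here is an illustration under assumed names (this is not the S3AInputStream source): a forward seek can only be served by skipping bytes on the already-open ranged request if the target is strictly inside the range; accepting the boundary value keeps the stream open for a position the request can no longer serve.

{code:java}
/**
 * Illustration only. pos is the current offset, targetPos the seek target,
 * contentRangeFinish the (exclusive) end of the currently open request.
 */
static boolean canSkipWithinOpenRequest(long pos, long targetPos,
    long contentRangeFinish) {
  // '<' is the safe comparison; allowing targetPos == contentRangeFinish
  // means the next read runs off the end of the open stream.
  return targetPos >= pos && targetPos < contentRangeFinish;
}
{code}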

> Parquet reading S3AFileSystem causes EOF
> 
>
> Key: HADOOP-16109
> URL: https://issues.apache.org/jira/browse/HADOOP-16109
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Dave Christianson
>Assignee: Steve Loughran
>Priority: Blocker
>
> When using S3AFileSystem to read Parquet files a specific set of 
> circumstances causes an  EOFException that is not thrown when reading the 
> same file from local disk
> Note this has only been observed under specific circumstances:
>   - when the reader is doing a projection (will cause it to do a seek 
> backwards and put the filesystem into random mode)
>  - when the file is larger than the readahead buffer size
>  - when the seek behavior of the Parquet reader causes the reader to seek 
> towards the end of the current input stream without reopening, such that the 
> next read on the currently open stream will read past the end of the 
> currently open stream.
> Exception from Parquet reader is as follows:
> {code}
> Caused by: java.io.EOFException: Reached the end of stream with 51 bytes left 
> to read
>  at 
> org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:104)
>  at 
> org.apache.parquet.io.DelegatingSeekableInputStream.readFullyHeapBuffer(DelegatingSeekableInputStream.java:127)
>  at 
> org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:91)
>  at 
> org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:1174)
>  at 
> org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:805)
>  at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:127)
>  at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:222)
>  at 
> org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:207)
>  at 
> org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormatBase.fetchNext(HadoopInputFormatBase.java:206)
>  at 
> org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormatBase.reachedEnd(HadoopInputFormatBase.java:199)
>  at 
> org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:190)
>  at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
>  at java.lang.Thread.run(Thread.java:748)
> {code}
> The following example program generates the same root behavior (sans finding a 
> Parquet file that happens to trigger this condition) by purposely reading 
> past the already active readahead range on any file >= 1029 bytes in size.
> {code:java}
> final Configuration conf = new Configuration();
> conf.set("fs.s3a.readahead.range", "1K");
> conf.set("fs.s3a.experimental.input.fadvise", "random");
> final FileSystem fs = FileSystem.get(path.toUri(), conf);
> // forward seek reading across readahead boundary
> try (FSDataInputStream in = fs.open(path)) {
> final byte[] temp = new byte[5];
> in.readByte();
> in.readFully(1023, temp); // <-- works
> }
> // forward seek reading from end of readahead boundary
> try (FSDataInputStream in = fs.open(path)) {
>  final byte[] temp = new byte[5];
>  in.readByte();
>  in.readFully(1024, temp); // <-- throws EOFException
> }
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on issue #535: HADOOP-16109. Parquet reading S3AFileSystem causes EOF

2019-02-28 Thread GitBox
hadoop-yetus commented on issue #535: HADOOP-16109. Parquet reading 
S3AFileSystem causes EOF
URL: https://github.com/apache/hadoop/pull/535#issuecomment-468413300
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 7 | https://github.com/apache/hadoop/pull/535 does not apply 
to s3/HADOOP-16109-parquet-eof-s3a-seek. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/535 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-535/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] steveloughran opened a new pull request #535: HADOOP-16109. Parquet reading S3AFileSystem causes EOF

2019-02-28 Thread GitBox
steveloughran opened a new pull request #535: HADOOP-16109. Parquet reading 
S3AFileSystem causes EOF
URL: https://github.com/apache/hadoop/pull/535
 
 
   HADOOP-16109. Parquet reading S3AFileSystem causes EOF
   
   Nobody gets seek right. No matter how many times they think they have.
   
   Reproducible test from:  Dave Christianson
   Fixed seek() logic: Steve Loughran


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15889) Add hadoop.token configuration parameter to load tokens

2019-02-28 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HADOOP-15889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780849#comment-16780849
 ] 

Íñigo Goiri commented on HADOOP-15889:
--

Thanks [~ajayydv]!

> Add hadoop.token configuration parameter to load tokens
> ---
>
> Key: HADOOP-15889
> URL: https://issues.apache.org/jira/browse/HADOOP-15889
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HADOOP-15889.000.patch, HADOOP-15889.001.patch, 
> HADOOP-15889.002.patch, HADOOP-15889.003.patch
>
>
> Currently, Hadoop allows passing files containing tokens.
> WebHDFS provides base64 delegation tokens that can be used directly.
> This JIRA adds the option to pass base64 tokens directly without using files.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16055) Upgrade AWS SDK to 1.11.271 in branch-2

2019-02-28 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780843#comment-16780843
 ] 

Hadoop QA commented on HADOOP-16055:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
|| || || || {color:brown} branch-2.8 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  0s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 22s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 27s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m  2s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  9m 16s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 44s{color} | {color:green} branch-2.8 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  7m 23s{color} | {color:red} root generated 8 new + 964 unchanged - 0 fixed = 972 total (was 964) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  1m  3s{color} | {color:orange} root: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  9m 21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}228m 20s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  1m 35s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}309m  4s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | root:32 |
| Failed junit tests | hadoop.fs.adl.live.TestAdlFileSystemContractLive |
|   | hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.web.TestHttpFSPorts |
|   | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.web.TestWebHdfsWithAuthenticationFilter |
|   | hadoop.hdfs.web.TestWebHdfsUrl |
|   | hadoop.hdfs.web.TestWebHDFSForHA |
|   | hadoop.hdfs.web.TestWebHdfsTokens |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
|   | org.apache.hadoop.hdfs.TestEncryptionZones |
|   | org.apache.hadoop.hdfs.TestDFSStartupVersions |
|   | org.apache.hadoop.hdfs.TestWriteRead |
|   | org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshottableDirListing |
|   | org.apache.hadoop.hdfs.TestDatanodeRegistration |
|   | org.apache.hadoop.hdfs.server.namenode.snapshot.TestGetContentSummaryWithSnapshot |
|   | org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport |
|   | org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | org.apache.hadoop.hdfs.TestAclsEndToEnd |
|   | 

[jira] [Commented] (HADOOP-16150) checksumFS doesn't wrap concat(): concatenated files don't have checksums

2019-02-28 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780841#comment-16780841
 ] 

Eric Yang commented on HADOOP-16150:


[~ste...@apache.org], in the Hadoop wiki [create a github pull 
request|https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute#HowToContribute-CreatingaGitHubpullrequest],
 it says that the precommit job will search the JIRA issue for a URL that starts 
with "https://github.com" and ends with ".patch".

I set this JIRA to Patch Available to see if the patch gets tested.

> checksumFS doesn't wrap concat(): concatenated files don't have checksums
> -
>
> Key: HADOOP-16150
> URL: https://issues.apache.org/jira/browse/HADOOP-16150
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Followon from HADOOP-16107. FilterFS passes through the concat operation, and 
> checksum FS doesn't override that call -so files created through concat *do 
> not have checksums*.
> If people are using a checksummed fs directly with the expectations that they 
> will, that expectation is not being met. 
> What to do?
> * fail always?
> * fail if checksums are enabled?
> * try and implement the concat operation from raw local up at the checksum 
> level
> append() just gives up always; doing the same for concat would be the 
> simplest. Again, brings us back to "need a way to see if an FS supports a 
> feature before invocation", here checksum fs would reject append and concat
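
As a concrete reading of the first option above ("fail always"), a minimal sketch of an override at the filter layer might look like the following; this is illustrative only and not necessarily the behaviour the fix will choose:

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FilterFileSystem;
import org.apache.hadoop.fs.Path;

/** Sketch: reject concat so no un-checksummed files can be created. */
public class ChecksummedConcatSketch extends FilterFileSystem {
  @Override
  public void concat(Path trg, Path[] psrcs) throws IOException {
    throw new UnsupportedOperationException(
        "concat is not supported: concatenated files would have no checksums");
  }
}
{code}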



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16155) S3AInputStream read(bytes[]) to not retry on read failure: pass action up

2019-02-28 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16155:
---

 Summary: S3AInputStream read(bytes[]) to not retry on read 
failure: pass action up
 Key: HADOOP-16155
 URL: https://issues.apache.org/jira/browse/HADOOP-16155
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.2.0
Reporter: Steve Loughran


The S3AInputStream reacts to a read(byte[]) failure by reopening the stream, just 
as for the single-byte read(). We shouldn't need to do that. Instead, just close 
the stream, return 0 and let the caller decide what to do.

Why so?
# it's in the contract of InputStream.read(byte[]),
# readFully() can handle the 0 in its loop,
# other apps can decide what to do.
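
For point 2, a sketch of the caller-side loop this relies on (an assumed helper, not Hadoop's actual readFully implementation): a read that returns 0 is simply retried, while a negative return still signals end of stream.

{code:java}
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class ReadFullySketch {
  static void readFully(InputStream in, byte[] buf, int off, int len)
      throws IOException {
    int done = 0;
    while (done < len) {
      int n = in.read(buf, off + done, len - done);
      if (n < 0) {
        throw new EOFException("Premature EOF after " + done + " bytes");
      }
      done += n;  // n == 0 just loops and retries the read
    }
  }
}
{code}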





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] bharatviswa504 commented on issue #502: HDDS-919. Enable prometheus endpoints for Ozone datanodes

2019-02-28 Thread GitBox
bharatviswa504 commented on issue #502: HDDS-919. Enable prometheus endpoints 
for Ozone datanodes
URL: https://github.com/apache/hadoop/pull/502#issuecomment-468397125
 
 
   Thank You @elek  for the update.
   +1 LGTM (pending jenkins).


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16150) checksumFS doesn't wrap concat(): concatenated files don't have checksums

2019-02-28 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16150:
---
Assignee: Steve Loughran
  Status: Patch Available  (was: Open)

> checksumFS doesn't wrap concat(): concatenated files don't have checksums
> -
>
> Key: HADOOP-16150
> URL: https://issues.apache.org/jira/browse/HADOOP-16150
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Followon from HADOOP-16107. FilterFS passes through the concat operation, and 
> checksum FS doesn't override that call -so files created through concat *do 
> not have checksums*.
> If people are using a checksummed fs directly with the expectations that they 
> will, that expectation is not being met. 
> What to do?
> * fail always?
> * fail if checksums are enabled?
> * try and implement the concat operation from raw local up at the checksum 
> level
> append() just gives up always; doing the same for concat would be the 
> simplest. Again, brings us back to "need a way to see if an FS supports a 
> feature before invocation", here checksum fs would reject append and concat



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] bharatviswa504 opened a new pull request #534: HDDS-1193. Refactor ContainerChillModeRule and DatanodeChillMode rule.

2019-02-28 Thread GitBox
bharatviswa504 opened a new pull request #534: HDDS-1193. Refactor 
ContainerChillModeRule and DatanodeChillMode rule.
URL: https://github.com/apache/hadoop/pull/534
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16109) Parquet reading S3AFileSystem causes EOF

2019-02-28 Thread Matt Foley (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780826#comment-16780826
 ] 

Matt Foley commented on HADOOP-16109:
-

Hi [~ste...@apache.org], one of my colleagues, Shruti Gumma, has a proposed fix 
which I'll help him post here today.

> Parquet reading S3AFileSystem causes EOF
> 
>
> Key: HADOOP-16109
> URL: https://issues.apache.org/jira/browse/HADOOP-16109
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Dave Christianson
>Assignee: Steve Loughran
>Priority: Blocker
>
> When using S3AFileSystem to read Parquet files, a specific set of 
> circumstances causes an EOFException that is not thrown when reading the 
> same file from local disk.
> Note this has only been observed under specific circumstances:
>  - when the reader is doing a projection (which causes it to seek backwards 
> and put the filesystem into random mode)
>  - when the file is larger than the readahead buffer size
>  - when the seek behavior of the Parquet reader causes it to seek towards 
> the end of the current input stream without reopening, such that the next 
> read on the currently open stream will read past its end.
> Exception from Parquet reader is as follows:
> {code}
> Caused by: java.io.EOFException: Reached the end of stream with 51 bytes left 
> to read
>  at 
> org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:104)
>  at 
> org.apache.parquet.io.DelegatingSeekableInputStream.readFullyHeapBuffer(DelegatingSeekableInputStream.java:127)
>  at 
> org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:91)
>  at 
> org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:1174)
>  at 
> org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:805)
>  at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:127)
>  at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:222)
>  at 
> org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:207)
>  at 
> org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormatBase.fetchNext(HadoopInputFormatBase.java:206)
>  at 
> org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormatBase.reachedEnd(HadoopInputFormatBase.java:199)
>  at 
> org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:190)
>  at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
>  at java.lang.Thread.run(Thread.java:748)
> {code}
> The following example program generates the same root behavior (sans finding a 
> Parquet file that happens to trigger this condition) by purposely reading 
> past the already active readahead range on any file >= 1029 bytes in size.
> {code:java}
> final Configuration conf = new Configuration();
> conf.set("fs.s3a.readahead.range", "1K");
> conf.set("fs.s3a.experimental.input.fadvise", "random");
> final FileSystem fs = FileSystem.get(path.toUri(), conf);
> // forward seek reading across readahead boundary
> try (FSDataInputStream in = fs.open(path)) {
> final byte[] temp = new byte[5];
> in.readByte();
> in.readFully(1023, temp); // <-- works
> }
> // forward seek reading from end of readahead boundary
> try (FSDataInputStream in = fs.open(path)) {
>  final byte[] temp = new byte[5];
>  in.readByte();
>  in.readFully(1024, temp); // <-- throws EOFException
> }
> {code}
>  






[jira] [Updated] (HADOOP-15889) Add hadoop.token configuration parameter to load tokens

2019-02-28 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15889:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~elgoiri] thanks the patch. Committed to trunk.

> Add hadoop.token configuration parameter to load tokens
> ---
>
> Key: HADOOP-15889
> URL: https://issues.apache.org/jira/browse/HADOOP-15889
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HADOOP-15889.000.patch, HADOOP-15889.001.patch, 
> HADOOP-15889.002.patch, HADOOP-15889.003.patch
>
>
> Currently, Hadoop allows passing files containing tokens.
> WebHDFS provides base64 delegation tokens that can be used directly.
> This JIRA adds the option to pass base64 tokens directly without using files.
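
As a rough illustration of the intent (the property name is taken from the 
issue title; whether this exact entry point is honoured once the patch lands 
is an assumption), a client holding a WebHDFS base64 delegation token might do 
something like:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TokenByConfigSketch {
  public static void main(String[] args) throws Exception {
    // args[0]: a base64-encoded delegation token, e.g. as returned by the
    // WebHDFS GETDELEGATIONTOKEN operation.
    String base64Token = args[0];

    Configuration conf = new Configuration();
    // Assumption: the new property is consulted when credentials are loaded,
    // as an alternative to pointing at a token file on disk.
    conf.set("hadoop.token", base64Token);

    try (FileSystem fs = FileSystem.get(conf)) {
      fs.listStatus(new Path("/"));
    }
  }
}
{code}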






[GitHub] hadoop-yetus commented on issue #528: HDDS-1182. Pipeline Rule where atleast one datanode is reported in the pipeline.

2019-02-28 Thread GitBox
hadoop-yetus commented on issue #528: HDDS-1182. Pipeline Rule where atleast 
one datanode is reported in the pipeline.
URL: https://github.com/apache/hadoop/pull/528#issuecomment-468388549
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 25 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 34 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1186 | trunk passed |
   | +1 | compile | 76 | trunk passed |
   | +1 | checkstyle | 29 | trunk passed |
   | +1 | mvnsite | 71 | trunk passed |
   | +1 | shadedclient | 809 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 111 | trunk passed |
   | +1 | javadoc | 58 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 11 | Maven dependency ordering for patch |
   | +1 | mvninstall | 74 | the patch passed |
   | +1 | compile | 72 | the patch passed |
   | +1 | javac | 72 | the patch passed |
   | +1 | checkstyle | 24 | the patch passed |
   | +1 | mvnsite | 62 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 793 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 140 | the patch passed |
   | +1 | javadoc | 66 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 89 | common in the patch failed. |
   | +1 | unit | 143 | server-scm in the patch passed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 3958 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-528/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/528 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
   | uname | Linux a90d96c64096 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri 
Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0feba43 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-528/2/artifact/out/patch-unit-hadoop-hdds_common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-528/2/testReport/ |
   | Max. process+thread count | 477 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/server-scm U: hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-528/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Comment Edited] (HADOOP-15889) Add hadoop.token configuration parameter to load tokens

2019-02-28 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780818#comment-16780818
 ] 

Ajay Kumar edited comment on HADOOP-15889 at 2/28/19 6:35 PM:
--

[~elgoiri] thanks for the patch. Committed to trunk.


was (Author: ajayydv):
[~elgoiri] thanks the patch. Committed to trunk.

> Add hadoop.token configuration parameter to load tokens
> ---
>
> Key: HADOOP-15889
> URL: https://issues.apache.org/jira/browse/HADOOP-15889
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HADOOP-15889.000.patch, HADOOP-15889.001.patch, 
> HADOOP-15889.002.patch, HADOOP-15889.003.patch
>
>
> Currently, Hadoop allows passing files containing tokens.
> WebHDFS provides base64 delegation tokens that can be used directly.
> This JIRA adds the option to pass base64 tokens directly without using files.






[GitHub] hadoop-yetus commented on issue #502: HDDS-919. Enable prometheus endpoints for Ozone datanodes

2019-02-28 Thread GitBox
hadoop-yetus commented on issue #502: HDDS-919. Enable prometheus endpoints for 
Ozone datanodes
URL: https://github.com/apache/hadoop/pull/502#issuecomment-468375970
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 7 | https://github.com/apache/hadoop/pull/502 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/502 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-502/4/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] elek commented on issue #502: HDDS-919. Enable prometheus endpoints for Ozone datanodes

2019-02-28 Thread GitBox
elek commented on issue #502: HDDS-919. Enable prometheus endpoints for Ozone 
datanodes
URL: https://github.com/apache/hadoop/pull/502#issuecomment-468375437
 
 
   > One minor comment: We don't need the change in 
hadoop-hdds/container-service/pom.xml.
   
   Oops, thanks. I removed it.





[GitHub] hadoop-yetus commented on issue #533: HADOOP-14630

2019-02-28 Thread GitBox
hadoop-yetus commented on issue #533: HADOOP-14630 
URL: https://github.com/apache/hadoop/pull/533#issuecomment-468366506
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 1 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 61 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1090 | trunk passed |
   | +1 | compile | 948 | trunk passed |
   | +1 | checkstyle | 216 | trunk passed |
   | +1 | mvnsite | 232 | trunk passed |
   | +1 | shadedclient | 1238 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 357 | trunk passed |
   | +1 | javadoc | 179 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 150 | the patch passed |
   | +1 | compile | 907 | the patch passed |
   | -1 | javac | 907 | root generated 1 new + 1492 unchanged - 0 fixed = 1493 
total (was 1492) |
   | -0 | checkstyle | 215 | root: The patch generated 4 new + 147 unchanged - 
3 fixed = 151 total (was 150) |
   | +1 | mvnsite | 229 | the patch passed |
   | -1 | whitespace | 0 | The patch has 5 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 743 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 421 | the patch passed |
   | +1 | javadoc | 178 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 514 | hadoop-common in the patch failed. |
   | +1 | unit | 115 | hadoop-hdfs-client in the patch passed. |
   | +1 | unit | 29 | hadoop-openstack in the patch passed. |
   | +1 | unit | 82 | hadoop-azure in the patch passed. |
   | +1 | unit | 59 | hadoop-azure-datalake in the patch passed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 7896 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.contract.localfs.TestLocalFSContractRename |
   |   | hadoop.fs.contract.rawlocal.TestRawlocalContractRename |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-533/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/533 |
   | Optional Tests |  dupname  asflicense  mvnsite  compile  javac  javadoc  
mvninstall  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 1692ad49d3cc 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri 
Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3a8118b |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-533/1/artifact/out/diff-compile-javac-root.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-533/1/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-533/1/artifact/out/whitespace-eol.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-533/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-533/1/testReport/ |
   | Max. process+thread count | 1714 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs-client hadoop-tools/hadoop-openstack 
hadoop-tools/hadoop-azure hadoop-tools/hadoop-azure-datalake U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-533/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] hadoop-yetus commented on a change in pull request #533: HADOOP-14630

2019-02-28 Thread GitBox
hadoop-yetus commented on a change in pull request #533: HADOOP-14630 
URL: https://github.com/apache/hadoop/pull/533#discussion_r261306963
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
 ##
 @@ -524,9 +524,17 @@ Create a directory and all its parents
  Preconditions
 
 
+The path must either be a directory or not exist
+ 
  if exists(FS, p) and not isDir(FS, p) :
  raise [ParentNotDirectoryException, FileAlreadyExistsException, 
IOException]
 
+No ancestor may be a file
+
+forall d = ancestors(FS, p) : 
 
 Review comment:
   whitespace:end of line
   





[GitHub] hadoop-yetus commented on a change in pull request #533: HADOOP-14630

2019-02-28 Thread GitBox
hadoop-yetus commented on a change in pull request #533: HADOOP-14630 
URL: https://github.com/apache/hadoop/pull/533#discussion_r261306985
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
 ##
 @@ -566,6 +574,12 @@ Writing to or overwriting a directory must fail.
 
 if isDir(FS, p) : raise {FileAlreadyExistsException, 
FileNotFoundException, IOException}
 
+No ancestor may be a file
+
+forall d = ancestors(FS, p) : 
 
 Review comment:
   whitespace:end of line
   





[GitHub] hadoop-yetus commented on a change in pull request #533: HADOOP-14630

2019-02-28 Thread GitBox
hadoop-yetus commented on a change in pull request #533: HADOOP-14630 
URL: https://github.com/apache/hadoop/pull/533#discussion_r261306953
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
 ##
 @@ -524,9 +524,17 @@ Create a directory and all its parents
  Preconditions
 
 
+The path must either be a directory or not exist
+ 
 
 Review comment:
   whitespace:end of line
   





[GitHub] hadoop-yetus commented on a change in pull request #533: HADOOP-14630

2019-02-28 Thread GitBox
hadoop-yetus commented on a change in pull request #533: HADOOP-14630 
URL: https://github.com/apache/hadoop/pull/533#discussion_r261306972
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
 ##
 @@ -524,9 +524,17 @@ Create a directory and all its parents
  Preconditions
 
 
+The path must either be a directory or not exist
+ 
  if exists(FS, p) and not isDir(FS, p) :
  raise [ParentNotDirectoryException, FileAlreadyExistsException, 
IOException]
 
+No ancestor may be a file
+
+forall d = ancestors(FS, p) : 
+if exists(FS, d) and not isDir(FS, d) :
+raise [ParentNotDirectoryException, FileAlreadyExistsException, 
IOException]
+
 
 Review comment:
   whitespace:end of line
   





[GitHub] hadoop-yetus commented on a change in pull request #533: HADOOP-14630

2019-02-28 Thread GitBox
hadoop-yetus commented on a change in pull request #533: HADOOP-14630 
URL: https://github.com/apache/hadoop/pull/533#discussion_r261307001
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
 ##
 @@ -566,6 +574,12 @@ Writing to or overwriting a directory must fail.
 
 if isDir(FS, p) : raise {FileAlreadyExistsException, 
FileNotFoundException, IOException}
 
+No ancestor may be a file
+
+forall d = ancestors(FS, p) : 
+if exists(FS, d) and not isDir(FS, d) :
+raise [ParentNotDirectoryException, FileAlreadyExistsException, 
IOException]
+  
 
 Review comment:
   whitespace:end of line
   





[jira] [Commented] (HADOOP-16109) Parquet reading S3AFileSystem causes EOF

2019-02-28 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780686#comment-16780686
 ] 

Steve Loughran commented on HADOOP-16109:
-

yeah, I can replicate this in a test
{code:java}
[ERROR] 
testReadPastReadahead(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRandomSeek)
  Time elapsed: 3.061 s  <<< ERROR!
java.io.EOFException: End of file reached before reading fully.
at 
org.apache.hadoop.fs.s3a.S3AInputStream.readFully(S3AInputStream.java:707)
at 
org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:121)
at 
org.apache.hadoop.fs.contract.s3a.ITestS3AContractSeek.testReadPastReadahead(ITestS3AContractSeek.java:79)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:745)

[INFO] 

 {code}
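
That test is essentially the example program from the issue description 
wrapped in a JUnit case; a hedged sketch of its shape (class, path and method 
names here are illustrative, not the committed ITestS3AContractSeek code):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.junit.Test;

public class ReadPastReadaheadSketch {

  // A pre-created test file of at least 1029 bytes; the bucket name is a placeholder.
  private final Path path = new Path("s3a://test-bucket/readahead-test.bin");

  @Test
  public void testReadPastReadahead() throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.s3a.readahead.range", "1K");
    conf.set("fs.s3a.experimental.input.fadvise", "random");

    FileSystem fs = FileSystem.get(path.toUri(), conf);
    byte[] buffer = new byte[5];
    try (FSDataInputStream in = fs.open(path)) {
      in.readByte();
      // Offset 1024 sits just past the active readahead range; per this
      // report the readFully raises EOFException on S3A, while the same
      // sequence succeeds against the local filesystem.
      in.readFully(1024, buffer);
    }
  }
}
{code}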

> Parquet reading S3AFileSystem causes EOF
> 
>
> Key: HADOOP-16109
> URL: https://issues.apache.org/jira/browse/HADOOP-16109
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Dave Christianson
>Assignee: Steve Loughran
>Priority: Blocker
>
> When using S3AFileSystem to read Parquet files, a specific set of 
> circumstances causes an EOFException that is not thrown when reading the 
> same file from local disk.
> Note this has only been observed under specific circumstances:
>  - when the reader is doing a projection (which causes it to seek backwards 
> and put the filesystem into random mode)
>  - when the file is larger than the readahead buffer size
>  - when the seek behavior of the Parquet reader causes it to seek towards 
> the end of the current input stream without reopening, such that the next 
> read on the currently open stream will read past its end.
> Exception from Parquet reader is as follows:
> {code}
> Caused by: java.io.EOFException: Reached the end of stream with 51 bytes left 
> to read
>  at 
> org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:104)
>  at 
> org.apache.parquet.io.DelegatingSeekableInputStream.readFullyHeapBuffer(DelegatingSeekableInputStream.java:127)
>  at 
> org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:91)
>  at 
> org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:1174)
>  at 
> org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:805)
>  at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:127)
>  at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:222)
>  at 
> org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:207)
>  at 
> org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormatBase.fetchNext(HadoopInputFormatBase.java:206)
>  at 
> org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormatBase.reachedEnd(HadoopInputFormatBase.java:199)
>  at 
> org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:190)
>  at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
>  at java.lang.Thread.run(Thread.java:748)
> {code}
> The following example program generates the same root behavior (sans finding a 
> Parquet file that happens to trigger this condition) by purposely reading 
> past the already active readahead range on any file >= 1029 bytes in size.
> {code:java}
> final Configuration conf = new Configuration();
> 

[jira] [Commented] (HADOOP-13327) Add OutputStream + Syncable to the Filesystem Specification

2019-02-28 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780626#comment-16780626
 ] 

Hadoop QA commented on HADOOP-13327:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HADOOP-13327 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-13327 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928014/HADOOP-13327-003.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15997/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Add OutputStream + Syncable to the Filesystem Specification
> ---
>
> Key: HADOOP-13327
> URL: https://issues.apache.org/jira/browse/HADOOP-13327
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-13327-002.patch, HADOOP-13327-003.patch, 
> HADOOP-13327-branch-2-001.patch
>
>
> Write down what a Filesystem output stream should do. While the core API is 
> defined in Java, that doesn't say what's expected about visibility, 
> durability, etc., and the Hadoop Syncable interface is entirely ours to define.






[GitHub] steveloughran opened a new pull request #533: HADOOP-14630

2019-02-28 Thread GitBox
steveloughran opened a new pull request #533: HADOOP-14630 
URL: https://github.com/apache/hadoop/pull/533
 
 
   HADOOP-14630. Contract Tests to verify create, mkdirs and rename under a 
file is forbidden
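
   The behaviour under test, sketched minimally (names invented for 
illustration; the actual contract tests in this PR are organised 
per-filesystem): once a path exists as a file, mkdirs() of a path beneath it 
must be rejected, per the preconditions being added to filesystem.md.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MkdirsUnderFileSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.getLocal(new Configuration());
    Path file = new Path("/tmp/contract-sketch/file");
    Path under = new Path(file, "child");

    fs.create(file, true).close();   // the parent path is now a file
    try {
      boolean created = fs.mkdirs(under);
      System.out.println(created
          ? "BUG: mkdirs under a file reported success"
          : "mkdirs returned false rather than throwing");
    } catch (IOException expected) {
      // ParentNotDirectoryException and FileAlreadyExistsException are both
      // IOExceptions, matching the precondition in the spec change.
      System.out.println("mkdirs correctly rejected: " + expected);
    }
  }
}
```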





[jira] [Commented] (HADOOP-16150) checksumFS doesn't wrap concat(): concatenated files don't have checksums

2019-02-28 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780621#comment-16780621
 ] 

Steve Loughran commented on HADOOP-16150:
-

In HADOOP-15691, ChecksumFileSystem declares that it doesn't support append or 
concat, so you can check before you use them.

> checksumFS doesn't wrap concat(): concatenated files don't have checksums
> -
>
> Key: HADOOP-16150
> URL: https://issues.apache.org/jira/browse/HADOOP-16150
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Major
>
> Follow-on from HADOOP-16107. FilterFS passes through the concat operation, and 
> checksum FS doesn't override that call, so files created through concat *do 
> not have checksums*.
> If people are using a checksummed FS directly with the expectation that 
> concatenated files will have checksums, that expectation is not being met.
> What to do?
> * fail always?
> * fail only if checksums are enabled?
> * try to implement the concat operation from raw local up at the checksum 
> level
> append() always just gives up; doing the same for concat would be the 
> simplest. Again, this brings us back to "need a way to see if an FS supports a 
> feature before invocation"; here the checksum FS would reject both append and concat.






[jira] [Commented] (HADOOP-13327) Add OutputStream + Syncable to the Filesystem Specification

2019-02-28 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780622#comment-16780622
 ] 

Hadoop QA commented on HADOOP-13327:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 10s{color} 
| {color:red} https://github.com/apache/hadoop/pull/532 does not apply to 
trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| GITHUB PR | https://github.com/apache/hadoop/pull/532 |
| JIRA Issue | HADOOP-13327 |
| Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-532/1/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |


This message was automatically generated.



> Add OutputStream + Syncable to the Filesystem Specification
> ---
>
> Key: HADOOP-13327
> URL: https://issues.apache.org/jira/browse/HADOOP-13327
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-13327-002.patch, HADOOP-13327-003.patch, 
> HADOOP-13327-branch-2-001.patch
>
>
> Write down what a Filesystem output stream should do. While the core API is 
> defined in Java, that doesn't say what's expected about visibility, 
> durability, etc., and the Hadoop Syncable interface is entirely ours to define.






[GitHub] hadoop-yetus commented on issue #532: HADOOP-13327: Add OutputStream + Syncable to the Filesystem Specification

2019-02-28 Thread GitBox
hadoop-yetus commented on issue #532: HADOOP-13327: Add OutputStream + Syncable 
to the Filesystem Specification
URL: https://github.com/apache/hadoop/pull/532#issuecomment-468313495
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 10 | https://github.com/apache/hadoop/pull/532 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/532 |
   | JIRA Issue | HADOOP-13327 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-532/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] steveloughran opened a new pull request #532: HADOOP-13327: Add OutputStream + Syncable to the Filesystem Specification

2019-02-28 Thread GitBox
steveloughran opened a new pull request #532: HADOOP-13327: Add OutputStream + 
Syncable to the Filesystem Specification
URL: https://github.com/apache/hadoop/pull/532
 
 
   HADOOP-13327: Add OutputStream + Syncable to the Filesystem Specification
   
   * defines what an output stream should do
   * And what implementations of Syncable MUST do if they declare they support 
the method.
   * Consistently declare behaviors in our streams
   * Including for some (S3ABlockOutputStream) state tracking: no operations 
once closed; if an error has occurred, future operations raise it, etc.
   * With some more utility classes in org.apache.hadoop.fs.impl to aid this
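
   For callers, the pattern the specification encourages looks roughly like 
the sketch below. This is a hedged example: it assumes the output stream 
exposes the usual StreamCapabilities probe and that "hsync" is the relevant 
capability string, neither of which this PR alone guarantees for every store.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SyncableUsageSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path path = new Path("/tmp/syncable-sketch.txt");

    try (FSDataOutputStream out = fs.create(path, true)) {
      out.write("record-1\n".getBytes("UTF-8"));
      if (out.hasCapability("hsync")) {
        out.hsync();   // data expected to be durable in the store after this
      } else {
        out.hflush();  // weaker: visible to new readers, durability not promised
      }
    }
  }
}
```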





[jira] [Commented] (HADOOP-16068) ABFS Authentication and Delegation Token plugins to optionally be bound to specific URI of the store

2019-02-28 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780587#comment-16780587
 ] 

Hudson commented on HADOOP-16068:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16093 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16093/])
HADOOP-16068. ABFS Authentication and Delegation Token plugins to (stevel: rev 
65f60e56b082faf92e1cd3daee2569d8fc669c67)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java
* (edit) hadoop-tools/hadoop-azure/src/site/markdown/index.md
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/extensions/TestDTManagerLifecycle.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/contracts/exceptions/TokenAccessProviderException.java
* (add) 
hadoop-tools/hadoop-azure/src/main/resources/META-INF/services/org.apache.hadoop.security.token.DtFetcher
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/extensions/ITestAbfsDelegationTokens.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/extensions/TestCustomOauthTokenProvider.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/AbstractAbfsIntegrationTest.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/security/AbfsDelegationTokenIdentifier.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/extensions/KerberizedAbfsCluster.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/extensions/WrappingTokenProvider.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/contracts/exceptions/AbfsRestOperationException.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/CustomTokenProviderAdapter.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/extensions/StubAbfsTokenIdentifier.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsIdentityTransformer.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/extensions/CustomDelegationTokenManager.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/extensions/ClassicDelegationTokenManager.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/security/AbfssDtFetcher.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/AzureADAuthenticator.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/security/AbfsDelegationTokenManager.java
* (add) 
hadoop-tools/hadoop-azure/src/test/resources/META-INF/services/org.apache.hadoop.security.token.TokenIdentifier
* (edit) hadoop-tools/hadoop-azure/pom.xml
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/extensions/ExtensionHelper.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/extensions/StubDelegationTokenManager.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/SharedKeyCredentials.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/security/package-info.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsIoUtils.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/security/AbfsDtFetcher.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/extensions/BoundDTExtension.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/security/AbfsTokenRenewer.java
* (edit) hadoop-tools/hadoop-azure/src/test/resources/log4j.properties
* (edit) hadoop-tools/hadoop-azure/src/site/markdown/abfs.md


> ABFS Authentication and Delegation Token plugins to optionally be bound to 
> specific URI of the store
> 
>
> Key: HADOOP-16068
> URL: https://issues.apache.org/jira/browse/HADOOP-16068
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>   

[GitHub] hadoop-yetus commented on issue #531: HADOOP-15961 Add PathCapabilities to FS and FC to complement StreamCapabilities

2019-02-28 Thread GitBox
hadoop-yetus commented on issue #531: HADOOP-15961 Add PathCapabilities to FS 
and FC to complement StreamCapabilities
URL: https://github.com/apache/hadoop/pull/531#issuecomment-468297298
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 8 | https://github.com/apache/hadoop/pull/531 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/531 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-531/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] steveloughran opened a new pull request #531: HADOOP-15961 Add PathCapabilities to FS and FC to complement StreamCapabilities

2019-02-28 Thread GitBox
steveloughran opened a new pull request #531: HADOOP-15961 Add PathCapabilities 
to FS and FC to complement StreamCapabilities
URL: https://github.com/apache/hadoop/pull/531
 
 
   Add a PathCapabilities interface to both FileSystem and FileContext to 
declare the capabilities under the path of a filesystem through both the 
FileSystem and FileContext APIs
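
   A hedged sketch of the caller-side probe this enables (the method name 
hasPathCapability and the capability key string are assumptions taken from 
the PR description, not confirmed API from this patch):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PathCapabilitiesProbeSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path path = new Path(args.length > 0 ? args[0] : "/tmp");
    FileSystem fs = path.getFileSystem(conf);

    // Ask this filesystem, for this specific path, whether append is
    // supported before attempting it (a ChecksumFileSystem would say no).
    boolean canAppend = fs.hasPathCapability(path, "fs.capability.paths.append");
    System.out.println("append supported under " + path + "? " + canAppend);
  }
}
```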





[jira] [Updated] (HADOOP-15691) Add PathCapabilities to FS and FC to complement StreamCapabilities

2019-02-28 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15691:

Description: 
Add a {{PathCapabilities}} interface to both FileSystem and FileContext to 
declare the capabilities under the path of a filesystem through both the 
FileSystem and FileContext APIs

This is needed for 
* HADOOP-14707: declare that a dest FS supports permissions
* object stores to declare that they offer PUT-in-place alongside (slow-rename)
* Anything else where the implementation semantics of an FS is so different 
caller apps would benefit from probing for the underlying semantics

I know, we want all filesystems to work *exactly* the same. But that doesn't hold, 
especially for object stores, and to use them efficiently, callers need to be 
able to ask for specific features.

  was:
Add a {{PathCapabilities}} interface to both FileSystem and FileContext to 
declare the capabilities under the path of an FS

This is needed for 
* HADOOP-14707: declare that a dest FS supports permissions
* object stores to declare that they offer PUT-in-place alongside (slow-rename)
* Anything else where the implementation semantics of an FS is so different 
caller apps would benefit from probing for the underlying semantics

I know, we want all filesystems to work *exactly* the same. But that doesn't hold, 
especially for object stores, and to use them efficiently, callers need to be 
able to ask for specific features.


> Add PathCapabilities to FS and FC to complement StreamCapabilities
> --
>
> Key: HADOOP-15691
> URL: https://issues.apache.org/jira/browse/HADOOP-15691
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15691-001.patch, HADOOP-15691-002.patch, 
> HADOOP-15691-003.patch, HADOOP-15691-004.patch
>
>
> Add a {{PathCapabilities}} interface to both FileSystem and FileContext to 
> declare the capabilities under the path of a filesystem through both the 
> FileSystem and FileContext APIs
> This is needed for 
> * HADOOP-14707: declare that a dest FS supports permissions
> * object stores to declare that they offer PUT-in-place alongside 
> (slow-rename)
> * Anything else where the implementation semantics of an FS is so different 
> caller apps would benefit from probing for the underlying semantics
> I know, we want all filesystems to work *exactly* the same. But that doesn't 
> hold, especially for object stores, and to use them efficiently, callers need 
> to be able to ask for specific features.





