[jira] [Commented] (HADOOP-16930) Add com.amazonaws.auth.profile.ProfileCredentialsProvider to hadoop-aws docs

2020-03-20 Thread Nicholas Chammas (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17063734#comment-17063734
 ] 

Nicholas Chammas commented on HADOOP-16930:
---

cc [~ste...@apache.org] - I'd be happy to work on the doc update if I've 
understood the issue correctly.

> Add com.amazonaws.auth.profile.ProfileCredentialsProvider to hadoop-aws docs
> 
>
> Key: HADOOP-16930
> URL: https://issues.apache.org/jira/browse/HADOOP-16930
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs/s3
>Reporter: Nicholas Chammas
>Priority: Minor
>
> There is a very, very useful S3A authentication method that is not currently 
> documented: {{com.amazonaws.auth.profile.ProfileCredentialsProvider}}.
> This provider lets you source your AWS credentials from a shared credentials 
> file, typically stored under {{~/.aws/credentials}}, using a [named 
> profile|https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html].
>  All you need is to set the {{AWS_PROFILE}} environment variable, and the 
> provider will get the appropriate credentials for you.
> I discovered this from my coworkers, but cannot find it in the docs for 
> hadoop-aws. I'd expect to see it at least mentioned in [this 
> section|https://hadoop.apache.org/docs/r2.9.2/hadoop-aws/tools/hadoop-aws/index.html#S3A_Authentication_methods].
>  It should probably be added to the docs for every minor release that 
> supports it, which I'd guess includes 2.8 on up.
> (This provider should probably also be added to the default list of 
> credential provider classes, but we can address that in another ticket. I can 
> say that at least in 2.9.2, it's not in the default list.)
> (This is not to be confused with 
> {{com.amazonaws.auth.InstanceProfileCredentialsProvider}}, which serves a 
> completely different purpose.)
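
A minimal sketch of the setup the issue describes (a hypothetical core-site.xml fragment; the property name is the standard S3A credential-provider key, but the surrounding wiring is illustrative, not taken from the issue):

```xml
<!-- Hypothetical core-site.xml fragment: point S3A at the AWS SDK's
     profile-based provider. The provider reads the shared credentials
     file (typically ~/.aws/credentials) and honors the AWS_PROFILE
     environment variable to pick a named profile. -->
<property>
  <name>fs.s3a.aws.credentials.provider</name>
  <value>com.amazonaws.auth.profile.ProfileCredentialsProvider</value>
</property>
```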



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16930) Add com.amazonaws.auth.profile.ProfileCredentialsProvider to hadoop-aws docs

2020-03-20 Thread Nicholas Chammas (Jira)
Nicholas Chammas created HADOOP-16930:
-

 Summary: Add com.amazonaws.auth.profile.ProfileCredentialsProvider 
to hadoop-aws docs
 Key: HADOOP-16930
 URL: https://issues.apache.org/jira/browse/HADOOP-16930
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, fs/s3
Reporter: Nicholas Chammas


There is a very, very useful S3A authentication method that is not currently 
documented: {{com.amazonaws.auth.profile.ProfileCredentialsProvider}}.

This provider lets you source your AWS credentials from a shared credentials 
file, typically stored under {{~/.aws/credentials}}, using a [named 
profile|https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html].
 All you need is to set the {{AWS_PROFILE}} environment variable, and the 
provider will get the appropriate credentials for you.

I discovered this from my coworkers, but cannot find it in the docs for 
hadoop-aws. I'd expect to see it at least mentioned in [this 
section|https://hadoop.apache.org/docs/r2.9.2/hadoop-aws/tools/hadoop-aws/index.html#S3A_Authentication_methods].
 It should probably be added to the docs for every minor release that supports 
it, which I'd guess includes 2.8 on up.

(This provider should probably also be added to the default list of credential 
provider classes, but we can address that in another ticket. I can say that at 
least in 2.9.2, it's not in the default list.)

(This is not to be confused with 
{{com.amazonaws.auth.InstanceProfileCredentialsProvider}}, which serves a 
completely different purpose.)






[jira] [Commented] (HADOOP-16912) Emit per priority rpc queue time and processing time from DecayRpcScheduler

2020-03-20 Thread Fengnan Li (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17063642#comment-17063642
 ] 

Fengnan Li commented on HADOOP-16912:
-

Thanks [~csun] for the detailed review! I have addressed all comments and 
uploaded [^HADOOP-16912.006.patch]

> Emit per priority rpc queue time and processing time from DecayRpcScheduler
> ---
>
> Key: HADOOP-16912
> URL: https://issues.apache.org/jira/browse/HADOOP-16912
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
>  Labels: metrics
> Attachments: HADOOP-16912.001.patch, HADOOP-16912.002.patch, 
> HADOOP-16912.003.patch, HADOOP-16912.004.patch, HADOOP-16912.005.patch, 
> HADOOP-16912.006.patch
>
>
> At the IPC Server level we have the overall RPC queue time and processing 
> time for the whole CallQueueManager. When using FairCallQueue, it would be 
> great to also know the per-queue/priority RPC queue time, since we often 
> need certain queues to meet a queue-time SLA for customers.
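
The per-priority accounting the issue asks for can be sketched generically; everything below (the class and method names, the fixed nanosecond samples) is illustrative only and is not DecayRpcScheduler's actual API:

```java
import java.util.concurrent.atomic.LongAdder;

/**
 * Illustrative per-priority queue-time accounting. Class and method names
 * here are hypothetical; this is not DecayRpcScheduler's actual API.
 */
public class PerPriorityQueueTime {
    private final LongAdder[] totalNanos; // summed queue time per priority
    private final LongAdder[] counts;     // number of samples per priority

    public PerPriorityQueueTime(int numLevels) {
        totalNanos = new LongAdder[numLevels];
        counts = new LongAdder[numLevels];
        for (int i = 0; i < numLevels; i++) {
            totalNanos[i] = new LongAdder();
            counts[i] = new LongAdder();
        }
    }

    /** Record one call's queue time, attributed to its priority level. */
    public void addSample(int priority, long queueNanos) {
        totalNanos[priority].add(queueNanos);
        counts[priority].increment();
    }

    /** Average queue time in milliseconds for one priority level. */
    public double avgMillis(int priority) {
        long n = counts[priority].sum();
        return n == 0 ? 0.0 : totalNanos[priority].sum() / (n * 1_000_000.0);
    }

    public static void main(String[] args) {
        PerPriorityQueueTime metrics = new PerPriorityQueueTime(4);
        metrics.addSample(0, 2_000_000); // 2 ms on priority 0
        metrics.addSample(0, 4_000_000); // 4 ms on priority 0
        System.out.println(metrics.avgMillis(0)); // prints 3.0
    }
}
```

An SLA check then reduces to comparing `avgMillis(priority)` against a threshold per queue.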






[jira] [Updated] (HADOOP-16912) Emit per priority rpc queue time and processing time from DecayRpcScheduler

2020-03-20 Thread Fengnan Li (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fengnan Li updated HADOOP-16912:

Attachment: HADOOP-16912.006.patch

> Emit per priority rpc queue time and processing time from DecayRpcScheduler
> ---
>
> Key: HADOOP-16912
> URL: https://issues.apache.org/jira/browse/HADOOP-16912
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
>  Labels: metrics
> Attachments: HADOOP-16912.001.patch, HADOOP-16912.002.patch, 
> HADOOP-16912.003.patch, HADOOP-16912.004.patch, HADOOP-16912.005.patch, 
> HADOOP-16912.006.patch
>
>
> At the IPC Server level we have the overall RPC queue time and processing 
> time for the whole CallQueueManager. When using FairCallQueue, it would be 
> great to also know the per-queue/priority RPC queue time, since we often 
> need certain queues to meet a queue-time SLA for customers.






[GitHub] [hadoop] esahekmat commented on issue #1902: HDFS-15219. fix ResponseProcessor.run to catch Throwable instead of Exception

2020-03-20 Thread GitBox
esahekmat commented on issue #1902: HDFS-15219. fix ResponseProcessor.run to 
catch Throwable instead of Exception
URL: https://github.com/apache/hadoop/pull/1902#issuecomment-601901409
 
 
   I tried, but since ResponseProcessor is a private inner class, it's almost 
impossible to mock its behavior (i.e. throw an Error in the middle of the 
run() method).
   I even tried to mock the block field so that it would throw an Error when 
setNumBytes() is called, but since it's final, and the ExtendedBlock and 
BlockToWrite classes are careful not to expose their internal objects, it is 
impossible to inject my mock in place of their Block object.
   Can you help me find a way to write a unit test for it?
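
The distinction the patch is about can be reduced to a tiny standalone sketch (nothing Hadoop-specific; it just shows why `catch (Exception)` misses Error subclasses such as AssertionError, while `catch (Throwable)` does not):

```java
public class CatchDemo {
    // Simulates code that throws an Error (not an Exception) mid-run().
    static String handle() {
        try {
            throw new AssertionError("boom"); // an Error subclass
        } catch (Exception e) {
            return "Exception handler"; // never reached: Error is not an Exception
        } catch (Throwable t) {
            return "Throwable handler"; // Errors land here
        }
    }

    public static void main(String[] args) {
        System.out.println(handle()); // prints "Throwable handler"
    }
}
```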


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[jira] [Commented] (HADOOP-16647) Support OpenSSL 1.1.1 LTS

2020-03-20 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17063556#comment-17063556
 ] 

Wei-Chiu Chuang commented on HADOOP-16647:
--

So it looks like HADOOP-16405 takes care of OpenSSL 1.1.1 in the cloud 
connectors.

For the Hadoop services (YARN, HDFS), one of my colleagues is taking a look 
at it and found that Hadoop does not currently run on OpenSSL 1.1.1. 
(CRYPTO_num_locks and the rest of the locking-callback API were removed in 
OpenSSL 1.1.0, which is why the native code below fails to resolve the 
symbol.)

It generates errors like the following:
{noformat}
20/03/20 05:20:07 ERROR random.OpensslSecureRandom: Failed to load Openssl SecureRandom
java.lang.UnsatisfiedLinkError: CRYPTO_num_locks
        at org.apache.hadoop.crypto.random.OpensslSecureRandom.initSR(Native Method)
        at org.apache.hadoop.crypto.random.OpensslSecureRandom.<init>(OpensslSecureRandom.java:57)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:348)
        at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2598)
        at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2563)
        at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2659)
        at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2685)
        at org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec.setConf(OpensslAesCtrCryptoCodec.java:59)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:77)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:137)
        at org.apache.hadoop.crypto.CryptoCodec.getInstance(CryptoCodec.java:69)
        at org.apache.hadoop.hdfs.HdfsKMSUtil.getCryptoCodec(HdfsKMSUtil.java:110)
        at org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:961)
        at org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:947)
        at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:538)
        at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:532)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:546)
        at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:473)
        at org.apache.hadoop.fs.FilterFileSystem.create(FilterFileSystem.java:195)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1133)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1113)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1002)
        at 
{noformat}

> Support OpenSSL 1.1.1 LTS
> -
>
> Key: HADOOP-16647
> URL: https://issues.apache.org/jira/browse/HADOOP-16647
> Project: Hadoop Common
>  Issue Type: Task
>  Components: security
>Reporter: Wei-Chiu Chuang
>Priority: Critical
>
> See Hadoop user mailing list 
> http://mail-archives.apache.org/mod_mbox/hadoop-user/201910.mbox/%3CCADiq6%3DweDFxHTL_7eGwDNnxVCza39y2QYQTSggfLn7mXhMLOdg%40mail.gmail.com%3E
> Hadoop 2 supports OpenSSL 1.0.2.
> Hadoop 3 supports OpenSSL 1.1.0 (HADOOP-14597) and I believe 1.0.2 too.
> Per OpenSSL blog https://www.openssl.org/policies/releasestrat.html
> * 1.1.0 is EOL 2019/09/11
> * 1.0.2 EOL 2019/12/31
> * 1.1.1 is EOL 2023/09/11 (LTS)
> Many Hadoop installations rely on the OpenSSL packages provided by Linux 
> distros, but it's not clear to me whether the distros will support 
> 1.1.0/1.0.2 beyond those dates.
> We should make sure Hadoop works with OpenSSL 1.1.1, and document the 
> supported OpenSSL versions. Filing this jira to test/document/fix bugs.






[jira] [Commented] (HADOOP-16912) Emit per priority rpc queue time and processing time from DecayRpcScheduler

2020-03-20 Thread Chao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17063539#comment-17063539
 ] 

Chao Sun commented on HADOOP-16912:
---

Thanks [~fengnanli] for the update! Looks pretty good to me, with a few 
comments (mostly nits):
 1. In {{DecayRpcSchedulerDetailedMetrics}}, can we change 
{{rpcschedulerdetailed}} in {{context}} to {{decayrpcschedulerdetailed}} as 
well?
 2. In {{DecayRpcSchedulerDetailedMetrics}}, the variable {{LOG}} can be 
private, and the methods {{getRpcQueueRates}} and {{getRpcProcessingRates}} 
can be package-private.
 3. Please add a blank line between the method description and the first 
{{@param}} line.
 4. Perhaps the test on {{MutableRatesWithAggregation#init}} can be added to 
{{TestMutableMetrics}}? This also removes the need to change 
{{getGlobalMetrics}} to public.
 5. We'll also need to document the newly added metrics in {{Metrics.md}}.


> Emit per priority rpc queue time and processing time from DecayRpcScheduler
> ---
>
> Key: HADOOP-16912
> URL: https://issues.apache.org/jira/browse/HADOOP-16912
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
>  Labels: metrics
> Attachments: HADOOP-16912.001.patch, HADOOP-16912.002.patch, 
> HADOOP-16912.003.patch, HADOOP-16912.004.patch, HADOOP-16912.005.patch
>
>
> At ipc Server level we have the overall rpc queue time and processing time 
> for the whole CallQueueManager. In the case of using FairCallQueue, it will 
> be great to know the per queue/priority level rpc queue time since many times 
> we want to keep certain queues to meet some queue time SLA for customers.






[GitHub] [hadoop] steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-03-20 Thread GitBox
steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding 
Output Stream Counters in ABFS
URL: https://github.com/apache/hadoop/pull/1899#discussion_r395754519
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemOauth.java
 ##
 @@ -143,7 +143,7 @@ public void testBlobDataReader() throws Exception {
 
 // TEST WRITE FILE
 try {
-  abfsStore.openFileForWrite(EXISTED_FILE_PATH, true);
+  abfsStore.openFileForWrite(EXISTED_FILE_PATH, fs.getFsStatistics(), 
true);
 
 Review comment:
   Add a .close(), even if the original code didn't. It's always good to 
improve a test.






[GitHub] [hadoop] steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-03-20 Thread GitBox
steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding 
Output Stream Counters in ABFS
URL: https://github.com/apache/hadoop/pull/1899#discussion_r395751202
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsOutputStream.java
 ##
 @@ -133,58 +133,60 @@ public void testAbfsOutputStreamTimeSpendOnWaitTask() 
throws IOException {
   public void testAbfsOutputStreamQueueShrink() throws IOException {
 describe("Testing Queue Shrink calls in AbfsOutputStream");
 final AzureBlobFileSystem fs = getFileSystem();
-Path TEST_PATH = new Path("AbfsOutputStreamStatsPath");
+Path queueShrinkFilePath = new Path("AbfsOutputStreamStatsPath");
 AzureBlobFileSystemStore abfss = fs.getAbfsStore();
 abfss.getAbfsConfiguration().setDisableOutputStreamFlush(false);
 FileSystem.Statistics statistics = fs.getFsStatistics();
 String testQueueShrink = "testQueue";
 
-
 AbfsOutputStream outForOneOp = null;
 
 try {
-  outForOneOp = (AbfsOutputStream) abfss.createFile(TEST_PATH, statistics,
-true,
-  FsPermission.getDefault(), FsPermission.getUMask(fs.getConf()));
+  outForOneOp =
+  (AbfsOutputStream) abfss.createFile(queueShrinkFilePath, statistics,
+  true,
+  FsPermission.getDefault(), FsPermission.getUMask(fs.getConf()));
 
   //Test for shrinking Queue zero time
-  Assert.assertEquals("Mismatch in number of queueShrink() Calls", 0,
+  assertValues("number of queueShrink() Calls", 0,
   outForOneOp.getOutputStreamStatistics().queueShrink);
 
   outForOneOp.write(testQueueShrink.getBytes());
   // Queue is shrunk 2 times when outStream is flushed
   outForOneOp.flush();
 
   //Test for shrinking Queue 2 times
-  Assert.assertEquals("Mismatch in number of queueShrink() Calls", 2,
+  assertValues("number of queueShrink() Calls", 2,
   outForOneOp.getOutputStreamStatistics().queueShrink);
 
 } finally {
-  if(outForOneOp != null){
+  if (outForOneOp != null) {
 outForOneOp.close();
   }
 }
 
 AbfsOutputStream outForLargeOps = null;
 
 try {
-  outForLargeOps = (AbfsOutputStream) abfss.createFile(TEST_PATH,
+  outForLargeOps = (AbfsOutputStream) abfss.createFile(queueShrinkFilePath,
   statistics, true,
   FsPermission.getDefault(), FsPermission.getUMask(fs.getConf()));
 
+  int largeValue = 1000;
   //QueueShrink is called 2 times in 1 flush(), hence 1000 flushes must
   // give 2000 QueueShrink calls
-  for (int i = 0; i < 1000; i++) {
+  for (int i = 0; i < largeValue; i++) {
 outForLargeOps.write(testQueueShrink.getBytes());
 //Flush is quite expensive so 1000 calls only which takes 1 min+
 outForLargeOps.flush();
 
 Review comment:
   do you have to call it so many times?






[GitHub] [hadoop] steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-03-20 Thread GitBox
steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding 
Output Stream Counters in ABFS
URL: https://github.com/apache/hadoop/pull/1899#discussion_r395753205
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsStreamStatistics.java
 ##
 @@ -0,0 +1,147 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+
+/**
+ * Test Abfs Stream.
+ */
+
+public class ITestAbfsStreamStatistics extends AbstractAbfsIntegrationTest {
+  public ITestAbfsStreamStatistics() throws Exception {
+  }
+
+  /***
+   * Testing {@code incrementReadOps()} in class {@code AbfsInputStream} and
+   * {@code incrementWriteOps()} in class {@code AbfsOutputStream}.
+   *
+   * @throws Exception
+   */
+  @Test
+  public void testAbfsStreamOps() throws Exception {
+describe("Test to see correct population of read and write operations in "
++ "Abfs");
+
+final AzureBlobFileSystem fs = getFileSystem();
+Path smallOperationsFile = new Path("testOneReadWriteOps");
+Path largeOperationsFile = new Path("testLargeReadWriteOps");
+FileSystem.Statistics statistics = fs.getFsStatistics();
+String testReadWriteOps = "test this";
+statistics.reset();
+
+//Test for zero write operation
+assertReadWriteOps("write", 0, statistics.getWriteOps());
+
+//Test for zero read operation
+assertReadWriteOps("read", 0, statistics.getReadOps());
+
+FSDataOutputStream outForOneOperation = null;
+FSDataInputStream inForOneOperation = null;
+try {
+  outForOneOperation = fs.create(smallOperationsFile);
+  statistics.reset();
+  outForOneOperation.write(testReadWriteOps.getBytes());
+
+  //Test for a single write operation
+  assertReadWriteOps("write", 1, statistics.getWriteOps());
+
+  inForOneOperation = fs.open(smallOperationsFile);
+  inForOneOperation.read(testReadWriteOps.getBytes(), 0,
+  testReadWriteOps.getBytes().length);
+
+  //Test for a single read operation
+  assertReadWriteOps("read", 1, statistics.getReadOps());
+
+} finally {
+  if (inForOneOperation != null) {
 
 Review comment:
   IOUtils.closeQuietly






[GitHub] [hadoop] steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-03-20 Thread GitBox
steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding 
Output Stream Counters in ABFS
URL: https://github.com/apache/hadoop/pull/1899#discussion_r395749601
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsOutputStream.java
 ##
 @@ -0,0 +1,278 @@
+package org.apache.hadoop.fs.azurebfs;
+
+import java.io.IOException;
+
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.services.AbfsOutputStream;
+import org.apache.hadoop.fs.permission.FsPermission;
+
+/**
+ * Test AbfsOutputStream statistics.
+ */
+public class ITestAbfsOutputStream extends AbstractAbfsIntegrationTest {
+
+  public ITestAbfsOutputStream() throws Exception {
+  }
+
+  /**
+   * Tests to check bytes Uploading in {@link AbfsOutputStream}.
+   *
+   * @throws IOException
+   */
+  @Test
+  public void testAbfsOutputStreamUploadingBytes() throws IOException {
+describe("Testing Bytes uploaded in AbfsOutputSteam");
+final AzureBlobFileSystem fs = getFileSystem();
+Path uploadBytesFilePath = new Path("AbfsOutputStreamStatsPath");
+AzureBlobFileSystemStore abfss = fs.getAbfsStore();
+FileSystem.Statistics statistics = fs.getFsStatistics();
+abfss.getAbfsConfiguration().setDisableOutputStreamFlush(false);
+String testBytesToUpload = "bytes";
+
+AbfsOutputStream outForSomeBytes = null;
+try {
+  outForSomeBytes = (AbfsOutputStream) 
abfss.createFile(uploadBytesFilePath,
+  statistics,
+  true,
+  FsPermission.getDefault(), FsPermission.getUMask(fs.getConf()));
+
+  //Test for zero bytes To upload
+  assertValues("bytes to upload", 0,
+  outForSomeBytes.getOutputStreamStatistics().bytesToUpload);
+
+  outForSomeBytes.write(testBytesToUpload.getBytes());
+  outForSomeBytes.flush();
+
+  //Test for some bytes to upload
+  assertValues("bytes to upload", testBytesToUpload.getBytes().length,
+  outForSomeBytes.getOutputStreamStatistics().bytesToUpload);
+
+  //Test for relation between bytesUploadSuccessful, bytesUploadFailed
+  // and bytesToUpload
+  assertValues("bytesUploadSuccessful equal to difference between "
+  + "bytesToUpload and bytesUploadFailed",
+  outForSomeBytes.getOutputStreamStatistics().bytesUploadSuccessful,
+  outForSomeBytes.getOutputStreamStatistics().bytesToUpload -
+  outForSomeBytes.getOutputStreamStatistics().bytesUploadFailed);
+
+} finally {
+  if (outForSomeBytes != null) {
+outForSomeBytes.close();
+  }
+}
+
+AbfsOutputStream outForLargeBytes = null;
+try {
+  outForLargeBytes =
+  (AbfsOutputStream) abfss.createFile(uploadBytesFilePath,
+  statistics
+  , true, FsPermission.getDefault(),
+  FsPermission.getUMask(fs.getConf()));
+
+  int largeValue = 10;
+  for (int i = 0; i < largeValue; i++) {
+outForLargeBytes.write(testBytesToUpload.getBytes());
+  }
+  outForLargeBytes.flush();
+
+  //Test for large bytes to upload
+  assertValues("bytes to upload",
+  largeValue * (testBytesToUpload.getBytes().length),
+  outForLargeBytes.getOutputStreamStatistics().bytesToUpload);
+
+  //Test for relation between bytesUploadSuccessful, bytesUploadFailed
+  // and bytesToUpload
+  assertValues("bytesUploadSuccessful equal to difference between "
+  + "bytesToUpload and bytesUploadFailed",
+  outForSomeBytes.getOutputStreamStatistics().bytesUploadSuccessful,
+  outForSomeBytes.getOutputStreamStatistics().bytesToUpload -
+  outForSomeBytes.getOutputStreamStatistics().bytesUploadFailed);
+
+} finally {
+  if (outForLargeBytes != null) {
+outForLargeBytes.close();
+  }
+}
+
+  }
+
+  /**
+   * Tests to check time spend on waiting for tasks to be complete on a
+   * blocking queue in {@link AbfsOutputStream}.
+   *
+   * @throws IOException
+   */
+  @Test
+  public void testAbfsOutputStreamTimeSpendOnWaitTask() throws IOException {
 
 Review comment:
   I don't see any easy way except to assert that it is > 0






[GitHub] [hadoop] steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-03-20 Thread GitBox
steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding 
Output Stream Counters in ABFS
URL: https://github.com/apache/hadoop/pull/1899#discussion_r395750400
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsOutputStream.java
 ##
 @@ -0,0 +1,278 @@
+package org.apache.hadoop.fs.azurebfs;
+
+import java.io.IOException;
+
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.services.AbfsOutputStream;
+import org.apache.hadoop.fs.permission.FsPermission;
+
+/**
+ * Test AbfsOutputStream statistics.
+ */
+public class ITestAbfsOutputStream extends AbstractAbfsIntegrationTest {
+
+  public ITestAbfsOutputStream() throws Exception {
+  }
+
+  /**
+   * Tests to check bytes Uploading in {@link AbfsOutputStream}.
+   *
+   * @throws IOException
+   */
+  @Test
+  public void testAbfsOutputStreamUploadingBytes() throws IOException {
+describe("Testing Bytes uploaded in AbfsOutputSteam");
+final AzureBlobFileSystem fs = getFileSystem();
+Path uploadBytesFilePath = new Path("AbfsOutputStreamStatsPath");
+AzureBlobFileSystemStore abfss = fs.getAbfsStore();
+FileSystem.Statistics statistics = fs.getFsStatistics();
+abfss.getAbfsConfiguration().setDisableOutputStreamFlush(false);
+String testBytesToUpload = "bytes";
+
+AbfsOutputStream outForSomeBytes = null;
+try {
+  outForSomeBytes = (AbfsOutputStream) 
abfss.createFile(uploadBytesFilePath,
+  statistics,
+  true,
+  FsPermission.getDefault(), FsPermission.getUMask(fs.getConf()));
+
+  //Test for zero bytes To upload
+  assertValues("bytes to upload", 0,
+  outForSomeBytes.getOutputStreamStatistics().bytesToUpload);
+
+  outForSomeBytes.write(testBytesToUpload.getBytes());
+  outForSomeBytes.flush();
+
+  //Test for some bytes to upload
+  assertValues("bytes to upload", testBytesToUpload.getBytes().length,
+  outForSomeBytes.getOutputStreamStatistics().bytesToUpload);
+
+  //Test for relation between bytesUploadSuccessful, bytesUploadFailed
+  // and bytesToUpload
+  assertValues("bytesUploadSuccessful equal to difference between "
+  + "bytesToUpload and bytesUploadFailed",
+  outForSomeBytes.getOutputStreamStatistics().bytesUploadSuccessful,
+  outForSomeBytes.getOutputStreamStatistics().bytesToUpload -
+  outForSomeBytes.getOutputStreamStatistics().bytesUploadFailed);
+
+} finally {
+  if (outForSomeBytes != null) {
+outForSomeBytes.close();
+  }
+}
+
+AbfsOutputStream outForLargeBytes = null;
+try {
+  outForLargeBytes =
+  (AbfsOutputStream) abfss.createFile(uploadBytesFilePath,
+  statistics
+  , true, FsPermission.getDefault(),
+  FsPermission.getUMask(fs.getConf()));
+
+  int largeValue = 10;
+  for (int i = 0; i < largeValue; i++) {
+outForLargeBytes.write(testBytesToUpload.getBytes());
+  }
+  outForLargeBytes.flush();
+
+  //Test for large bytes to upload
+  assertValues("bytes to upload",
+  largeValue * (testBytesToUpload.getBytes().length),
+  outForLargeBytes.getOutputStreamStatistics().bytesToUpload);
+
+  //Test for relation between bytesUploadSuccessful, bytesUploadFailed
+  // and bytesToUpload
+  assertValues("bytesUploadSuccessful equal to difference between "
+  + "bytesToUpload and bytesUploadFailed",
+  outForSomeBytes.getOutputStreamStatistics().bytesUploadSuccessful,
+  outForSomeBytes.getOutputStreamStatistics().bytesToUpload -
+  outForSomeBytes.getOutputStreamStatistics().bytesUploadFailed);
+
+} finally {
+  if (outForLargeBytes != null) {
+outForLargeBytes.close();
+  }
+}
+
+  }
+
+  /**
+   * Tests to check time spend on waiting for tasks to be complete on a
+   * blocking queue in {@link AbfsOutputStream}.
+   *
+   * @throws IOException
+   */
+  @Test
+  public void testAbfsOutputStreamTimeSpendOnWaitTask() throws IOException {
+describe("Testing Time Spend on Waiting for Task to be complete");
+final AzureBlobFileSystem fs = getFileSystem();
+Path timeSpendFilePath = new Path("AbfsOutputStreamStatsPath");
+AzureBlobFileSystemStore abfss = fs.getAbfsStore();
+FileSystem.Statistics statistics = fs.getFsStatistics();
+
+AbfsOutputStream out =
+(AbfsOutputStream) abfss.createFile(timeSpendFilePath,
+statistics, true,
+FsPermission.getDefault(), FsPermission.getUMask(fs.getConf()));
+
+  }
+
+  /**
+   * Tests to check the number of {@code shrinkWriteOperationQueue()}
+   * calls.
+   * After writing data, AbfsOutputStream doesn't upload the data until
+   * flushed. Hence, flush() is called after write() to test the queue
+   * shrink calls.
+   *

[GitHub] [hadoop] steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-03-20 Thread GitBox
steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding 
Output Stream Counters in ABFS
URL: https://github.com/apache/hadoop/pull/1899#discussion_r395749213
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsOutputStream.java
 ##
 @@ -0,0 +1,278 @@
+package org.apache.hadoop.fs.azurebfs;
+
+import java.io.IOException;
+
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.services.AbfsOutputStream;
+import org.apache.hadoop.fs.permission.FsPermission;
+
+/**
+ * Test AbfsOutputStream statistics.
+ */
+public class ITestAbfsOutputStream extends AbstractAbfsIntegrationTest {
+
+  public ITestAbfsOutputStream() throws Exception {
+  }
+
+  /**
+   * Tests to check bytes Uploading in {@link AbfsOutputStream}.
+   *
+   * @throws IOException
+   */
+  @Test
+  public void testAbfsOutputStreamUploadingBytes() throws IOException {
+describe("Testing Bytes uploaded in AbfsOutputSteam");
+final AzureBlobFileSystem fs = getFileSystem();
+Path uploadBytesFilePath = new Path("AbfsOutputStreamStatsPath");
+AzureBlobFileSystemStore abfss = fs.getAbfsStore();
+FileSystem.Statistics statistics = fs.getFsStatistics();
+abfss.getAbfsConfiguration().setDisableOutputStreamFlush(false);
+String testBytesToUpload = "bytes";
+
+AbfsOutputStream outForSomeBytes = null;
+try {
+  outForSomeBytes = (AbfsOutputStream) abfss.createFile(uploadBytesFilePath,
+  statistics,
+  true,
+  FsPermission.getDefault(), FsPermission.getUMask(fs.getConf()));
+
+  //Test for zero bytes To upload
+  assertValues("bytes to upload", 0,
+  outForSomeBytes.getOutputStreamStatistics().bytesToUpload);
+
+  outForSomeBytes.write(testBytesToUpload.getBytes());
+  outForSomeBytes.flush();
+
+  //Test for some bytes to upload
+  assertValues("bytes to upload", testBytesToUpload.getBytes().length,
+  outForSomeBytes.getOutputStreamStatistics().bytesToUpload);
+
+  //Test for relation between bytesUploadSuccessful, bytesUploadFailed
+  // and bytesToUpload
+  assertValues("bytesUploadSuccessful equal to difference between "
+  + "bytesToUpload and bytesUploadFailed",
+  outForSomeBytes.getOutputStreamStatistics().bytesUploadSuccessful,
+  outForSomeBytes.getOutputStreamStatistics().bytesToUpload -
+  outForSomeBytes.getOutputStreamStatistics().bytesUploadFailed);
+
+} finally {
+  if (outForSomeBytes != null) {
+outForSomeBytes.close();
+  }
+}
+
+AbfsOutputStream outForLargeBytes = null;
+try {
 
 Review comment:
   same


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-03-20 Thread GitBox
steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding 
Output Stream Counters in ABFS
URL: https://github.com/apache/hadoop/pull/1899#discussion_r395751750
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsOutputStream.java
 ##
 @@ -0,0 +1,278 @@
+  /**
+   * Tests to check time spend on waiting for tasks to be complete on a
+   * blocking queue in {@link AbfsOutputStream}.
+   *
+   * @throws IOException
+   */
+  @Test
+  public void testAbfsOutputStreamTimeSpendOnWaitTask() throws IOException {
+describe("Testing Time Spend on Waiting for Task to be complete");
+final AzureBlobFileSystem fs = getFileSystem();
+Path timeSpendFilePath = new Path("AbfsOutputStreamStatsPath");
+AzureBlobFileSystemStore abfss = fs.getAbfsStore();
+FileSystem.Statistics statistics = fs.getFsStatistics();
+
+AbfsOutputStream out =
+(AbfsOutputStream) abfss.createFile(timeSpendFilePath,
+statistics, true,
+FsPermission.getDefault(), FsPermission.getUMask(fs.getConf()));
+
+  }
+
+  /**
+   * Tests to check number of {@codes shrinkWriteOperationQueue()}
+   * calls.
+   * After writing data, AbfsOutputStream doesn't upload the data until
+   * Flushed. Hence, flush() method is called after write() to test Queue
+   * shrink calls.
+   *

[GitHub] [hadoop] steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-03-20 Thread GitBox
steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding 
Output Stream Counters in ABFS
URL: https://github.com/apache/hadoop/pull/1899#discussion_r395750967
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsOutputStream.java
 ##
 @@ -0,0 +1,278 @@
+  /**
+   * Tests to check number of {@codes shrinkWriteOperationQueue()}
+   * calls.
+   * After writing data, AbfsOutputStream doesn't upload the data until
+   * Flushed. Hence, flush() method is called after write() to test Queue
+   * shrink calls.
+   *

[GitHub] [hadoop] steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-03-20 Thread GitBox
steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding 
Output Stream Counters in ABFS
URL: https://github.com/apache/hadoop/pull/1899#discussion_r395755381
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStreamStatisticsImpl.java
 ##
 @@ -0,0 +1,102 @@
+package org.apache.hadoop.fs.azurebfs.services;
+
+/**
+ * OutputStream Statistics Implementation for Abfs.
+ * timeSpendOnTaskWait - Time spend on waiting for tasks to be complete on
+ * Blocking Queue in AbfsOutputStream.
+ *
+ * queueShrink - Number of times Blocking Queue was shrunk after writing
+ * data.
+ *
+ * WriteCurrentBufferOperations - Number of times
+ * {@codes writeCurrentBufferToService()} calls were made.
+ */
+public class AbfsOutputStreamStatisticsImpl
+implements AbfsOutputStreamStatistics {
+  public volatile long bytesToUpload;
 
 Review comment:
   let's make these private and have getters
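
The refactor being suggested could look roughly like the standalone sketch below. The field names follow the quoted patch, but the increment/getter method names are illustrative assumptions (the sketch also omits the AbfsOutputStreamStatistics interface), not the actual patch:

```java
// Sketch of the reviewer's suggestion: counters stay private, callers go
// through mutators and getters. Field names mirror the quoted patch; the
// method names here are hypothetical.
class AbfsOutputStreamStatisticsImpl {
  // volatile matches the quoted patch; note += on a volatile is not
  // atomic, so a production counter would likely use AtomicLong.
  private volatile long bytesToUpload;
  private volatile long bytesUploadSuccessful;
  private volatile long bytesUploadFailed;

  public void bytesToUpload(long bytes) { bytesToUpload += bytes; }
  public void uploadSuccessful(long bytes) { bytesUploadSuccessful += bytes; }
  public void uploadFailed(long bytes) { bytesUploadFailed += bytes; }

  public long getBytesToUpload() { return bytesToUpload; }
  public long getBytesUploadSuccessful() { return bytesUploadSuccessful; }
  public long getBytesUploadFailed() { return bytesUploadFailed; }
}
```

Hiding the fields lets the class later swap the counter representation (e.g. to AtomicLong) without touching the tests that read the values.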





[GitHub] [hadoop] steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-03-20 Thread GitBox
steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding 
Output Stream Counters in ABFS
URL: https://github.com/apache/hadoop/pull/1899#discussion_r395750011
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsOutputStream.java
 ##
 @@ -0,0 +1,278 @@
+  /**
+   * Tests to check time spend on waiting for tasks to be complete on a
+   * blocking queue in {@link AbfsOutputStream}.
+   *
+   * @throws IOException
+   */
+  @Test
+  public void testAbfsOutputStreamTimeSpendOnWaitTask() throws IOException {
+describe("Testing Time Spend on Waiting for Task to be complete");
+final AzureBlobFileSystem fs = getFileSystem();
+Path timeSpendFilePath = new Path("AbfsOutputStreamStatsPath");
+AzureBlobFileSystemStore abfss = fs.getAbfsStore();
+FileSystem.Statistics statistics = fs.getFsStatistics();
+
+AbfsOutputStream out =
 
 Review comment:
   will need to be closed
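
AbfsOutputStream is Closeable, so the fix the reviewer is asking for is to create it in a try-with-resources block, which closes the stream even if an assertion fails mid-test. A minimal self-contained demonstration of the pattern (using a plain OutputStream stand-in, since this sketch cannot depend on the ABFS classes):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

class CloseDemo {
  // Returns true if the stream was closed by the try-with-resources block.
  static boolean writeAndAutoClose() throws IOException {
    final boolean[] closed = {false};
    OutputStream out = new ByteArrayOutputStream() {
      @Override
      public void close() throws IOException {
        closed[0] = true;  // record that close() really ran
        super.close();
      }
    };
    // try-with-resources guarantees close() runs even if the body throws,
    // which is what the quoted test is missing for its AbfsOutputStream.
    try (OutputStream o = out) {
      o.write("bytes".getBytes());
    }
    return closed[0];
  }

  public static void main(String[] args) throws IOException {
    System.out.println("closed: " + writeAndAutoClose());
  }
}
```

In the quoted test this would mean wrapping the `abfss.createFile(...)` result in a `try (AbfsOutputStream out = ...)` rather than leaving the stream open at the end of the method.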




[GitHub] [hadoop] steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-03-20 Thread GitBox
steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding 
Output Stream Counters in ABFS
URL: https://github.com/apache/hadoop/pull/1899#discussion_r395754102
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsStreamStatistics.java
 ##
 @@ -0,0 +1,147 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+
+/**
+ * Test Abfs Stream.
+ */
+
+public class ITestAbfsStreamStatistics extends AbstractAbfsIntegrationTest {
+  public ITestAbfsStreamStatistics() throws Exception {
+  }
+
+  /***
+   * Testing {@code incrementReadOps()} in class {@code AbfsInputStream} and
+   * {@code incrementWriteOps()} in class {@code AbfsOutputStream}.
+   *
+   * @throws Exception
+   */
+  @Test
+  public void testAbfsStreamOps() throws Exception {
+describe("Test to see correct population of read and write operations in "
++ "Abfs");
+
+final AzureBlobFileSystem fs = getFileSystem();
+Path smallOperationsFile = new Path("testOneReadWriteOps");
+Path largeOperationsFile = new Path("testLargeReadWriteOps");
+FileSystem.Statistics statistics = fs.getFsStatistics();
+String testReadWriteOps = "test this";
+statistics.reset();
+
+//Test for zero write operation
+assertReadWriteOps("write", 0, statistics.getWriteOps());
+
+//Test for zero read operation
+assertReadWriteOps("read", 0, statistics.getReadOps());
+
+FSDataOutputStream outForOneOperation = null;
+FSDataInputStream inForOneOperation = null;
+try {
+  outForOneOperation = fs.create(smallOperationsFile);
+  statistics.reset();
+  outForOneOperation.write(testReadWriteOps.getBytes());
+
+  //Test for a single write operation
+  assertReadWriteOps("write", 1, statistics.getWriteOps());
+
+  inForOneOperation = fs.open(smallOperationsFile);
+  inForOneOperation.read(testReadWriteOps.getBytes(), 0,
+  testReadWriteOps.getBytes().length);
+
+  //Test for a single read operation
+  assertReadWriteOps("read", 1, statistics.getReadOps());
+
+} finally {
+  if (inForOneOperation != null) {
+inForOneOperation.close();
+  }
+  if (outForOneOperation != null) {
+outForOneOperation.close();
+  }
+}
+
+//Validating if content is being written in the smallOperationsFile
+Assert.assertTrue("Mismatch in content validation",
+validateContent(fs, smallOperationsFile,
+testReadWriteOps.getBytes()));
+
+FSDataOutputStream outForLargeOperations = null;
+FSDataInputStream inForLargeOperations = null;
+StringBuilder largeOperationsValidationString = new StringBuilder();
+try {
+  outForLargeOperations = fs.create(largeOperationsFile);
+  statistics.reset();
+  int largeValue = 100;
+  for (int i = 0; i < largeValue; i++) {
+outForLargeOperations.write(testReadWriteOps.getBytes());
+
+//Creating the String for content Validation
+largeOperationsValidationString.append(testReadWriteOps);
+  }
+
+  //Test for 100 write operations
+  assertReadWriteOps("write", largeValue, statistics.getWriteOps());
+
+  inForLargeOperations = fs.open(largeOperationsFile);
+  for (int i = 0; i < largeValue; i++)
+inForLargeOperations
+.read(testReadWriteOps.getBytes(), 0,
+testReadWriteOps.getBytes().length);
+
+  //Test for 100 read operations
+  assertReadWriteOps("read", largeValue, statistics.getReadOps());
+
+} finally {
+  if (inForLargeOperations != null) {
+inForLargeOperations.close();
+  }
+  if (outForLargeOperations != null) {
+outForLargeOperations.close();
+  }
+}
+
+//Validating if content is being written in largeOperationsFile
+Assert.assertTrue("Mismatch in content validation",
 
 Review comment:
  again, superfluous with validateContent

[GitHub] [hadoop] steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-03-20 Thread GitBox
steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding 
Output Stream Counters in ABFS
URL: https://github.com/apache/hadoop/pull/1899#discussion_r395753442
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsStreamStatistics.java
 ##
 @@ -0,0 +1,147 @@
+} finally {
+  if (inForOneOperation != null) {
+inForOneOperation.close();
+  }
+  if (outForOneOperation != null) {
+outForOneOperation.close();
+  }
+}
+
+//Validating if content is being written in the smallOperationsFile
+Assert.assertTrue("Mismatch in content validation",
 
 Review comment:
   once validateContent raises exceptions, you don't need to wrap in an assert
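
The point here is a common test-helper design choice: a validator that throws on mismatch can be called directly, whereas a boolean-returning one forces every caller to wrap it in `Assert.assertTrue`. A hedged, standalone sketch of the throwing style (the real `validateContent` reads the file back from the filesystem; this version just compares byte arrays):

```java
import java.util.Arrays;

class ValidateDemo {
  // Throwing validator: on mismatch it raises AssertionError itself, so
  // callers need no Assert.assertTrue wrapper. Standalone sketch only —
  // the actual helper takes a FileSystem and a Path.
  static void validateContent(byte[] expected, byte[] actual) {
    if (!Arrays.equals(expected, actual)) {
      throw new AssertionError("Mismatch in content validation");
    }
  }

  public static void main(String[] args) {
    byte[] data = "test this".getBytes();
    validateContent(data, "test this".getBytes()); // passes silently
    System.out.println("content validated");
  }
}
```

With this shape, the quoted call site collapses to a single `validateContent(...)` line, and a mismatch still fails the test with a descriptive error.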


For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-03-20 Thread GitBox
steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding 
Output Stream Counters in ABFS
URL: https://github.com/apache/hadoop/pull/1899#discussion_r395751870
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsOutputStream.java
 ##
 @@ -0,0 +1,278 @@
+  /**
+   * Tests to check time spend on waiting for tasks to be complete on a
+   * blocking queue in {@link AbfsOutputStream}.
+   *
+   * @throws IOException
+   */
+  @Test
+  public void testAbfsOutputStreamTimeSpendOnWaitTask() throws IOException {
+describe("Testing Time Spend on Waiting for Task to be complete");
+final AzureBlobFileSystem fs = getFileSystem();
+Path timeSpendFilePath = new Path("AbfsOutputStreamStatsPath");
+AzureBlobFileSystemStore abfss = fs.getAbfsStore();
+FileSystem.Statistics statistics = fs.getFsStatistics();
+
+AbfsOutputStream out =
+(AbfsOutputStream) abfss.createFile(timeSpendFilePath,
+statistics, true,
+FsPermission.getDefault(), FsPermission.getUMask(fs.getConf()));
+
+  }
+
+  /**
+   * Tests to check number of {@codes shrinkWriteOperationQueue()}
+   * calls.
+   * After writing data, AbfsOutputStream doesn't upload the data until
+   * Flushed. Hence, flush() method is called after write() to test Queue
+   * shrink calls.
+   *

[GitHub] [hadoop] steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-03-20 Thread GitBox
steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding 
Output Stream Counters in ABFS
URL: https://github.com/apache/hadoop/pull/1899#discussion_r395752947
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsStreamStatistics.java
 ##
 @@ -0,0 +1,147 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+
+/**
+ * Test Abfs Stream.
+ */
+
+public class ITestAbfsStreamStatistics extends AbstractAbfsIntegrationTest {
+  public ITestAbfsStreamStatistics() throws Exception {
 
 Review comment:
   not needed


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-03-20 Thread GitBox
steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding 
Output Stream Counters in ABFS
URL: https://github.com/apache/hadoop/pull/1899#discussion_r395752609
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsOutputStream.java
 ##
 @@ -0,0 +1,278 @@
+package org.apache.hadoop.fs.azurebfs;
+
+import java.io.IOException;
+
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.services.AbfsOutputStream;
+import org.apache.hadoop.fs.permission.FsPermission;
+
+/**
+ * Test AbfsOutputStream statistics.
+ */
+public class ITestAbfsOutputStream extends AbstractAbfsIntegrationTest {
 
 Review comment:
   This sounds like a slow test.
   
   1. Use smaller values than 1000, e.g. "10"
   2. Make the value a constant used across all tests.




[GitHub] [hadoop] steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-03-20 Thread GitBox
steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding 
Output Stream Counters in ABFS
URL: https://github.com/apache/hadoop/pull/1899#discussion_r395743922
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
 ##
 @@ -36,20 +36,25 @@
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
 
+import org.apache.hadoop.fs.FileSystem.Statistics;
 
 Review comment:
   move down to under ElasticByteBufferPool




[GitHub] [hadoop] steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-03-20 Thread GitBox
steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding 
Output Stream Counters in ABFS
URL: https://github.com/apache/hadoop/pull/1899#discussion_r395756248
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsOutputStream.java
 ##
 @@ -28,28 +26,38 @@ public ITestAbfsOutputStream() throws Exception {
   public void testAbfsOutputStreamUploadingBytes() throws IOException {
 
 Review comment:
   I can't think of any. Maybe just have a unit test that takes an AbfsOutputStreamsImpl and verifies that, when the method is called, the counter is updated.
   
   (Actually, mocking could simulate failure, ...)
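The unit-level check suggested above could be sketched as follows. This uses a trimmed, hypothetical stand-in for the PR's statistics class (so it runs without the Hadoop classpath), simulating a failed upload and verifying the counter moves:

```java
public class UploadFailureCounterDemo {

    // Hypothetical stand-in mirroring the bytesFailed(long) contract in the
    // PR, where negative values are ignored.
    static class Stats {
        long bytesUploadFailed;

        void bytesFailed(long bytes) {
            if (bytes > 0) {
                bytesUploadFailed += bytes;
            }
        }
    }

    public static void main(String[] args) {
        Stats stats = new Stats();
        stats.bytesFailed(5);   // simulated failed 5-byte upload
        stats.bytesFailed(-1);  // negative values are ignored by design
        System.out.println(stats.bytesUploadFailed);
    }
}
```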




[GitHub] [hadoop] steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-03-20 Thread GitBox
steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding 
Output Stream Counters in ABFS
URL: https://github.com/apache/hadoop/pull/1899#discussion_r395743551
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
 ##
 @@ -101,6 +101,7 @@ public synchronized int read(final byte[] b, final int 
off, final int len) throw
 int currentLen = len;
 int lastReadBytes;
 int totalReadBytes = 0;
+incrementReadOps();
 
 Review comment:
   this is the input stream; presumably it's come in from somewhere else




[GitHub] [hadoop] steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-03-20 Thread GitBox
steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding 
Output Stream Counters in ABFS
URL: https://github.com/apache/hadoop/pull/1899#discussion_r395753822
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsStreamStatistics.java
 ##
 @@ -0,0 +1,147 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+
+/**
+ * Test Abfs Stream.
+ */
+
+public class ITestAbfsStreamStatistics extends AbstractAbfsIntegrationTest {
+  public ITestAbfsStreamStatistics() throws Exception {
+  }
+
+  /***
+   * Testing {@code incrementReadOps()} in class {@code AbfsInputStream} and
+   * {@code incrementWriteOps()} in class {@code AbfsOutputStream}.
+   *
+   * @throws Exception
+   */
+  @Test
+  public void testAbfsStreamOps() throws Exception {
+describe("Test to see correct population of read and write operations in "
++ "Abfs");
+
+final AzureBlobFileSystem fs = getFileSystem();
+Path smallOperationsFile = new Path("testOneReadWriteOps");
+Path largeOperationsFile = new Path("testLargeReadWriteOps");
+FileSystem.Statistics statistics = fs.getFsStatistics();
+String testReadWriteOps = "test this";
+statistics.reset();
+
+//Test for zero write operation
+assertReadWriteOps("write", 0, statistics.getWriteOps());
+
+//Test for zero read operation
+assertReadWriteOps("read", 0, statistics.getReadOps());
+
+FSDataOutputStream outForOneOperation = null;
+FSDataInputStream inForOneOperation = null;
+try {
+  outForOneOperation = fs.create(smallOperationsFile);
+  statistics.reset();
+  outForOneOperation.write(testReadWriteOps.getBytes());
+
+  //Test for a single write operation
+  assertReadWriteOps("write", 1, statistics.getWriteOps());
+
+  inForOneOperation = fs.open(smallOperationsFile);
+  inForOneOperation.read(testReadWriteOps.getBytes(), 0,
+  testReadWriteOps.getBytes().length);
+
+  //Test for a single read operation
+  assertReadWriteOps("read", 1, statistics.getReadOps());
+
+} finally {
+  if (inForOneOperation != null) {
+inForOneOperation.close();
+  }
+  if (outForOneOperation != null) {
+outForOneOperation.close();
+  }
+}
+
+//Validating if content is being written in the smallOperationsFile
+Assert.assertTrue("Mismatch in content validation",
+validateContent(fs, smallOperationsFile,
+testReadWriteOps.getBytes()));
+
+FSDataOutputStream outForLargeOperations = null;
+FSDataInputStream inForLargeOperations = null;
+StringBuilder largeOperationsValidationString = new StringBuilder();
+try {
+  outForLargeOperations = fs.create(largeOperationsFile);
+  statistics.reset();
+  int largeValue = 100;
+  for (int i = 0; i < largeValue; i++) {
+outForLargeOperations.write(testReadWriteOps.getBytes());
+
+//Creating the String for content Validation
+largeOperationsValidationString.append(testReadWriteOps);
+  }
+
+  //Test for 100 write operations
+  assertReadWriteOps("write", largeValue, statistics.getWriteOps());
+
+  inForLargeOperations = fs.open(largeOperationsFile);
+  for (int i = 0; i < largeValue; i++)
+inForLargeOperations
+.read(testReadWriteOps.getBytes(), 0,
+testReadWriteOps.getBytes().length);
+
+  //Test for 100 read operations
+  assertReadWriteOps("read", largeValue, statistics.getReadOps());
+
+} finally {
+  if (inForLargeOperations != null) {
 
 Review comment:
   IOUtils.closeQuietly
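A minimal sketch of the pattern the reviewer points at, using a local null-safe helper with the same shape as Hadoop's quiet-close utilities (a stand-in, so the example runs without the Hadoop jars):

```java
import java.io.ByteArrayOutputStream;
import java.io.Closeable;
import java.io.IOException;

public class CloseQuietlyDemo {

    // Local stand-in for an IOUtils.closeQuietly-style helper: null-safe
    // close that swallows the IOException.
    static void closeQuietly(Closeable c) {
        if (c != null) {
            try {
                c.close();
            } catch (IOException ignored) {
                // deliberately swallowed; cleanup failures should not mask
                // the real test failure
            }
        }
    }

    public static void main(String[] args) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        closeQuietly(out);   // replaces the explicit if (x != null) { x.close(); }
        closeQuietly(null);  // safe on null, no null check at the call site
        System.out.println("closed");
    }
}
```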



[GitHub] [hadoop] steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-03-20 Thread GitBox
steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding 
Output Stream Counters in ABFS
URL: https://github.com/apache/hadoop/pull/1899#discussion_r395746300
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStreamStatisticsImpl.java
 ##
 @@ -0,0 +1,102 @@
+package org.apache.hadoop.fs.azurebfs.services;
+
+/**
+ * OutputStream Statistics Implementation for Abfs.
+ * timeSpendOnTaskWait - Time spend on waiting for tasks to be complete on
+ * Blocking Queue in AbfsOutputStream.
+ *
+ * queueShrink - Number of times Blocking Queue was shrunk after writing
+ * data.
+ *
+ * WriteCurrentBufferOperations - Number of times
+ * {@codes writeCurrentBufferToService()} calls were made.
+ */
+public class AbfsOutputStreamStatisticsImpl
+implements AbfsOutputStreamStatistics {
+  public volatile long bytesToUpload;
+  public volatile long bytesUploadSuccessful;
+  public volatile long bytesUploadFailed;
+  public volatile long timeSpendOnTaskWait;
+  public volatile long queueShrink;
+  public volatile long writeCurrentBufferOperations;
+
+  /**
+   * Number of bytes uploaded only when bytes passed are positive.
+   *
+   * @param bytes negative values are ignored
+   */
+  @Override
+  public void bytesToUpload(long bytes) {
+if (bytes > 0) {
+  bytesToUpload += bytes;
+}
+  }
+
+  @Override
+  public void bytesUploadedSuccessfully(long bytes) {
+if (bytes > 0) {
+  bytesUploadSuccessful += bytes;
+}
+  }
+
+  /**
+   * Number of bytes that weren't uploaded.
+   *
+   * @param bytes negative values are ignored
+   */
+  @Override
+  public void bytesFailed(long bytes) {
+if (bytes > 0) {
+  bytesUploadFailed += bytes;
+}
+  }
+
+  /**
+   * Time spend for waiting a task to be completed.
+   *
+   * @param startTime on calling {@link 
AbfsOutputStream#waitForTaskToComplete()}
+   * @param endTime   on method completing
+   */
+  @Override
+  public void timeSpendTaskWait(long startTime, long endTime) {
+timeSpendOnTaskWait += endTime - startTime;
+  }
+
+  /**
+   * Number of calls to {@link AbfsOutputStream#shrinkWriteOperationQueue()}.
+   */
+  @Override
+  public void queueShrinked() {
+queueShrink++;
+  }
+
+  /**
+   * Number of calls to {@link AbfsOutputStream#writeCurrentBufferToService()}.
 
 Review comment:
   see above comment about javadocs




[GitHub] [hadoop] steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-03-20 Thread GitBox
steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding 
Output Stream Counters in ABFS
URL: https://github.com/apache/hadoop/pull/1899#discussion_r395746252
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStreamStatisticsImpl.java
 ##
 @@ -0,0 +1,102 @@
+package org.apache.hadoop.fs.azurebfs.services;
+
+/**
+ * OutputStream Statistics Implementation for Abfs.
+ * timeSpendOnTaskWait - Time spend on waiting for tasks to be complete on
+ * Blocking Queue in AbfsOutputStream.
+ *
+ * queueShrink - Number of times Blocking Queue was shrunk after writing
+ * data.
+ *
+ * WriteCurrentBufferOperations - Number of times
+ * {@codes writeCurrentBufferToService()} calls were made.
+ */
+public class AbfsOutputStreamStatisticsImpl
+implements AbfsOutputStreamStatistics {
+  public volatile long bytesToUpload;
+  public volatile long bytesUploadSuccessful;
+  public volatile long bytesUploadFailed;
+  public volatile long timeSpendOnTaskWait;
+  public volatile long queueShrink;
+  public volatile long writeCurrentBufferOperations;
+
+  /**
+   * Number of bytes uploaded only when bytes passed are positive.
+   *
+   * @param bytes negative values are ignored
+   */
+  @Override
+  public void bytesToUpload(long bytes) {
+if (bytes > 0) {
+  bytesToUpload += bytes;
+}
+  }
+
+  @Override
+  public void bytesUploadedSuccessfully(long bytes) {
+if (bytes > 0) {
+  bytesUploadSuccessful += bytes;
+}
+  }
+
+  /**
+   * Number of bytes that weren't uploaded.
+   *
+   * @param bytes negative values are ignored
+   */
+  @Override
+  public void bytesFailed(long bytes) {
+if (bytes > 0) {
+  bytesUploadFailed += bytes;
+}
+  }
+
+  /**
+   * Time spend for waiting a task to be completed.
+   *
+   * @param startTime on calling {@link 
AbfsOutputStream#waitForTaskToComplete()}
+   * @param endTime   on method completing
+   */
+  @Override
+  public void timeSpendTaskWait(long startTime, long endTime) {
+timeSpendOnTaskWait += endTime - startTime;
+  }
+
+  /**
+   * Number of calls to {@link AbfsOutputStream#shrinkWriteOperationQueue()}.
 
 Review comment:
   see above comment about javadocs




[GitHub] [hadoop] steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-03-20 Thread GitBox
steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding 
Output Stream Counters in ABFS
URL: https://github.com/apache/hadoop/pull/1899#discussion_r395744474
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
 ##
 @@ -36,20 +36,25 @@
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
 
+import org.apache.hadoop.fs.FileSystem.Statistics;
 import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsRestOperationException;
 import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.AzureBlobFileSystemException;
 import org.apache.hadoop.io.ElasticByteBufferPool;
 import org.apache.hadoop.fs.FSExceptionMessages;
 import org.apache.hadoop.fs.StreamCapabilities;
 import org.apache.hadoop.fs.Syncable;
 
+import static org.apache.hadoop.io.IOUtils.LOG;
 import static org.apache.hadoop.io.IOUtils.wrapException;
 
 /**
  * The BlobFsOutputStream for Rest AbfsClient.
  */
 public class AbfsOutputStream extends OutputStream implements Syncable, 
StreamCapabilities {
+
 
 Review comment:
   add both new fields at the bottom of the other fields, e.g. line 85, and keep them together.




[GitHub] [hadoop] steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-03-20 Thread GitBox
steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding 
Output Stream Counters in ABFS
URL: https://github.com/apache/hadoop/pull/1899#discussion_r395746011
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStreamStatisticsImpl.java
 ##
 @@ -0,0 +1,102 @@
+package org.apache.hadoop.fs.azurebfs.services;
+
+/**
+ * OutputStream Statistics Implementation for Abfs.
+ * timeSpendOnTaskWait - Time spend on waiting for tasks to be complete on
+ * Blocking Queue in AbfsOutputStream.
+ *
+ * queueShrink - Number of times Blocking Queue was shrunk after writing
+ * data.
+ *
+ * WriteCurrentBufferOperations - Number of times
+ * {@codes writeCurrentBufferToService()} calls were made.
+ */
+public class AbfsOutputStreamStatisticsImpl
+implements AbfsOutputStreamStatistics {
+  public volatile long bytesToUpload;
+  public volatile long bytesUploadSuccessful;
+  public volatile long bytesUploadFailed;
+  public volatile long timeSpendOnTaskWait;
+  public volatile long queueShrink;
+  public volatile long writeCurrentBufferOperations;
+
+  /**
+   * Number of bytes uploaded only when bytes passed are positive.
+   *
+   * @param bytes negative values are ignored
+   */
+  @Override
+  public void bytesToUpload(long bytes) {
+if (bytes > 0) {
+  bytesToUpload += bytes;
+}
+  }
+
+  @Override
+  public void bytesUploadedSuccessfully(long bytes) {
+if (bytes > 0) {
+  bytesUploadSuccessful += bytes;
+}
+  }
+
+  /**
+   * Number of bytes that weren't uploaded.
+   *
+   * @param bytes negative values are ignored
+   */
+  @Override
+  public void bytesFailed(long bytes) {
+if (bytes > 0) {
+  bytesUploadFailed += bytes;
+}
+  }
+
+  /**
+   * Time spend for waiting a task to be completed.
+   *
+   * @param startTime on calling {@link 
AbfsOutputStream#waitForTaskToComplete()}
 
 Review comment:
   MUST NOT use @link to private/package-private/protected methods. Javadoc 
will fail
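The rule can be illustrated with a small sketch (hypothetical class and field names): {@code ...} renders a name as code without requiring javadoc to resolve it, so it is safe for non-public members, whereas a @link to a private method fails the javadoc build.

```java
/**
 * Sketch: javadoc on non-public members should use {@code ...} rather than
 * a link tag, since links to private/package-private targets cannot be
 * resolved and break the build.
 */
public class JavadocLinkDemo {

    /** Number of calls to {@code shrinkWriteOperationQueue()}. */
    private long queueShrink;

    private void shrinkWriteOperationQueue() {
        queueShrink++;
    }

    public static void main(String[] args) {
        JavadocLinkDemo demo = new JavadocLinkDemo();
        demo.shrinkWriteOperationQueue();
        System.out.println(demo.queueShrink);
    }
}
```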




[GitHub] [hadoop] steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-03-20 Thread GitBox
steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding 
Output Stream Counters in ABFS
URL: https://github.com/apache/hadoop/pull/1899#discussion_r395748307
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/AbstractAbfsTestWithTimeout.java
 ##
 @@ -67,4 +77,46 @@ public void nameThread() {
   protected int getTestTimeoutMillis() {
 return TEST_TIMEOUT;
   }
+
+  /**
+   * Describe a test in the logs.
+   *
+   * @param text text to print
+   * @param args arguments to format in the printing
+   */
+  protected void describe(String text, Object... args) {
+LOG.info("\n\n{}: {}\n",
+methodName.getMethodName(),
+String.format(text, args));
+  }
+
+  /**
+   * Validate Contents written on a file in Abfs.
+   *
+   * @param fsAzureBlobFileSystem
+   * @param path  Path of the file
+   * @param originalByteArray original byte array
+   * @return if content is validated true else, false
+   * @throws IOException
+   */
+  protected boolean validateContent(AzureBlobFileSystem fs, Path path,
+  byte[] originalByteArray)
+  throws IOException {
+FSDataInputStream in = fs.open(path);
+
+int pos = 0;
+int lenOfOriginalByteArray = originalByteArray.length;
+byte valueOfContentAtPos = (byte) in.read();
+
+while (valueOfContentAtPos != -1 && pos < lenOfOriginalByteArray) {
 
 Review comment:
   1. MUST use { } in all if () clauses.
   2. If there's a mismatch, use assertEquals and include the pos where the problem occurred.
   
   Imagine: "A remote test run failed -what information should be in the test 
report to begin debugging this?"
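The two points above could be sketched like this. A plain-Java stand-in replaces JUnit's assertEquals (so the example is self-contained); the key idea is that on any mismatch the failure message carries the exact byte offset, which is what a remote test report needs:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ContentValidationDemo {

    // Stand-in for JUnit's assertEquals; the message carries the context.
    static void assertEquals(String message, int expected, int actual) {
        if (expected != actual) {
            throw new AssertionError(
                message + ": expected " + expected + " but was " + actual);
        }
    }

    // Fails with the exact offset of the first divergent byte.
    static void validateContent(InputStream in, byte[] original)
            throws IOException {
        for (int pos = 0; pos < original.length; pos++) {
            assertEquals("content mismatch at offset " + pos,
                original[pos], (byte) in.read());
        }
        assertEquals("unexpected data past expected length", -1, in.read());
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "test this".getBytes();
        validateContent(new ByteArrayInputStream(data), data);
        System.out.println("validated");
    }
}
```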




[GitHub] [hadoop] steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-03-20 Thread GitBox
steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding 
Output Stream Counters in ABFS
URL: https://github.com/apache/hadoop/pull/1899#discussion_r395749763
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsOutputStream.java
 ##
 @@ -0,0 +1,278 @@
+package org.apache.hadoop.fs.azurebfs;
+
+import java.io.IOException;
+
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.services.AbfsOutputStream;
+import org.apache.hadoop.fs.permission.FsPermission;
+
+/**
+ * Test AbfsOutputStream statistics.
+ */
+public class ITestAbfsOutputStream extends AbstractAbfsIntegrationTest {
+
+  public ITestAbfsOutputStream() throws Exception {
+  }
+
+  /**
+   * Tests to check bytes Uploading in {@link AbfsOutputStream}.
+   *
+   * @throws IOException
+   */
+  @Test
+  public void testAbfsOutputStreamUploadingBytes() throws IOException {
+describe("Testing Bytes uploaded in AbfsOutputSteam");
+final AzureBlobFileSystem fs = getFileSystem();
+Path uploadBytesFilePath = new Path("AbfsOutputStreamStatsPath");
+AzureBlobFileSystemStore abfss = fs.getAbfsStore();
+FileSystem.Statistics statistics = fs.getFsStatistics();
+abfss.getAbfsConfiguration().setDisableOutputStreamFlush(false);
+String testBytesToUpload = "bytes";
+
+AbfsOutputStream outForSomeBytes = null;
+try {
+  outForSomeBytes = (AbfsOutputStream) 
abfss.createFile(uploadBytesFilePath,
+  statistics,
+  true,
+  FsPermission.getDefault(), FsPermission.getUMask(fs.getConf()));
+
+  //Test for zero bytes To upload
+  assertValues("bytes to upload", 0,
+  outForSomeBytes.getOutputStreamStatistics().bytesToUpload);
+
+  outForSomeBytes.write(testBytesToUpload.getBytes());
+  outForSomeBytes.flush();
+
+  //Test for some bytes to upload
+  assertValues("bytes to upload", testBytesToUpload.getBytes().length,
+  outForSomeBytes.getOutputStreamStatistics().bytesToUpload);
+
+  //Test for relation between bytesUploadSuccessful, bytesUploadFailed
+  // and bytesToUpload
+  assertValues("bytesUploadSuccessful equal to difference between "
+  + "bytesToUpload and bytesUploadFailed",
+  outForSomeBytes.getOutputStreamStatistics().bytesUploadSuccessful,
+  outForSomeBytes.getOutputStreamStatistics().bytesToUpload -
+  outForSomeBytes.getOutputStreamStatistics().bytesUploadFailed);
+
+} finally {
+  if (outForSomeBytes != null) {
+outForSomeBytes.close();
+  }
+}
+
+AbfsOutputStream outForLargeBytes = null;
+try {
+  outForLargeBytes =
+  (AbfsOutputStream) abfss.createFile(uploadBytesFilePath,
+  statistics
+  , true, FsPermission.getDefault(),
+  FsPermission.getUMask(fs.getConf()));
+
+  int largeValue = 10;
+  for (int i = 0; i < largeValue; i++) {
+outForLargeBytes.write(testBytesToUpload.getBytes());
+  }
+  outForLargeBytes.flush();
+
+  //Test for large bytes to upload
+  assertValues("bytes to upload",
+  largeValue * (testBytesToUpload.getBytes().length),
+  outForLargeBytes.getOutputStreamStatistics().bytesToUpload);
+
+  //Test for relation between bytesUploadSuccessful, bytesUploadFailed
+  // and bytesToUpload
+  assertValues("bytesUploadSuccessful equal to difference between "
+  + "bytesToUpload and bytesUploadFailed",
+  outForSomeBytes.getOutputStreamStatistics().bytesUploadSuccessful,
+  outForSomeBytes.getOutputStreamStatistics().bytesToUpload -
+  outForSomeBytes.getOutputStreamStatistics().bytesUploadFailed);
+
+} finally {
+  if (outForLargeBytes != null) {
+outForLargeBytes.close();
+  }
+}
+
+  }
+
+  /**
+   * Tests to check time spend on waiting for tasks to be complete on a
+   * blocking queue in {@link AbfsOutputStream}.
+   *
+   * @throws IOException
+   */
+  @Test
+  public void testAbfsOutputStreamTimeSpendOnWaitTask() throws IOException {
 
 Review comment:
   Also, "time spent"




[GitHub] [hadoop] steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-03-20 Thread GitBox
steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding 
Output Stream Counters in ABFS
URL: https://github.com/apache/hadoop/pull/1899#discussion_r395745249
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStreamStatistics.java
 ##
 @@ -0,0 +1,60 @@
+package org.apache.hadoop.fs.azurebfs.services;
+
+import org.apache.hadoop.classification.InterfaceStability;
+
+/**
+ * Interface for {@link AbfsOutputStream} statistics.
+ */
+@InterfaceStability.Unstable
+public interface AbfsOutputStreamStatistics {
+
+  /**
+   * Number of bytes to be uploaded.
+   *
+   * @param bytes number of bytes to upload
+   */
+  void bytesToUpload(long bytes);
+
+  /**
+   * Number of bytes uploaded Successfully.
+   *
+   * @param bytes number of bytes that were successfully uploaded
+   */
+  void bytesUploadedSuccessfully(long bytes);
+
+  /**
+   * Number of bytes failed to upload.
+   *
+   * @param bytes number of bytes that failed to upload
+   */
+  void bytesFailed(long bytes);
 
 Review comment:
   prefer a more detailed name like uploadFailed(long): it's recording that an upload failed, along with the number of bytes.




[GitHub] [hadoop] steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-03-20 Thread GitBox
steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding 
Output Stream Counters in ABFS
URL: https://github.com/apache/hadoop/pull/1899#discussion_r395751470
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsOutputStream.java
 ##
 @@ -0,0 +1,278 @@
+package org.apache.hadoop.fs.azurebfs;
+
+import java.io.IOException;
+
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.services.AbfsOutputStream;
+import org.apache.hadoop.fs.permission.FsPermission;
+
+/**
+ * Test AbfsOutputStream statistics.
+ */
+public class ITestAbfsOutputStream extends AbstractAbfsIntegrationTest {
+
+  public ITestAbfsOutputStream() throws Exception {
+  }
+
+  /**
+   * Tests to check bytes Uploading in {@link AbfsOutputStream}.
+   *
+   * @throws IOException
+   */
+  @Test
+  public void testAbfsOutputStreamUploadingBytes() throws IOException {
+describe("Testing Bytes uploaded in AbfsOutputSteam");
+final AzureBlobFileSystem fs = getFileSystem();
+Path uploadBytesFilePath = new Path("AbfsOutputStreamStatsPath");
+AzureBlobFileSystemStore abfss = fs.getAbfsStore();
+FileSystem.Statistics statistics = fs.getFsStatistics();
+abfss.getAbfsConfiguration().setDisableOutputStreamFlush(false);
+String testBytesToUpload = "bytes";
+
+AbfsOutputStream outForSomeBytes = null;
+try {
+  outForSomeBytes = (AbfsOutputStream) 
abfss.createFile(uploadBytesFilePath,
+  statistics,
+  true,
+  FsPermission.getDefault(), FsPermission.getUMask(fs.getConf()));
+
+  //Test for zero bytes To upload
+  assertValues("bytes to upload", 0,
+  outForSomeBytes.getOutputStreamStatistics().bytesToUpload);
+
+  outForSomeBytes.write(testBytesToUpload.getBytes());
+  outForSomeBytes.flush();
+
+  //Test for some bytes to upload
+  assertValues("bytes to upload", testBytesToUpload.getBytes().length,
+  outForSomeBytes.getOutputStreamStatistics().bytesToUpload);
+
+  //Test for relation between bytesUploadSuccessful, bytesUploadFailed
+  // and bytesToUpload
+  assertValues("bytesUploadSuccessful equal to difference between "
+  + "bytesToUpload and bytesUploadFailed",
+  outForSomeBytes.getOutputStreamStatistics().bytesUploadSuccessful,
+  outForSomeBytes.getOutputStreamStatistics().bytesToUpload -
+  outForSomeBytes.getOutputStreamStatistics().bytesUploadFailed);
+
+} finally {
+  if (outForSomeBytes != null) {
+outForSomeBytes.close();
+  }
+}
+
+AbfsOutputStream outForLargeBytes = null;
+try {
+  outForLargeBytes =
+  (AbfsOutputStream) abfss.createFile(uploadBytesFilePath,
+  statistics, true, FsPermission.getDefault(),
+  FsPermission.getUMask(fs.getConf()));
+
+  int largeValue = 10;
+  for (int i = 0; i < largeValue; i++) {
+outForLargeBytes.write(testBytesToUpload.getBytes());
+  }
+  outForLargeBytes.flush();
+
+  //Test for large bytes to upload
+  assertValues("bytes to upload",
+  largeValue * (testBytesToUpload.getBytes().length),
+  outForLargeBytes.getOutputStreamStatistics().bytesToUpload);
+
+  //Test for relation between bytesUploadSuccessful, bytesUploadFailed
+  // and bytesToUpload
+  assertValues("bytesUploadSuccessful equal to difference between "
+  + "bytesToUpload and bytesUploadFailed",
+  outForLargeBytes.getOutputStreamStatistics().bytesUploadSuccessful,
+  outForLargeBytes.getOutputStreamStatistics().bytesToUpload -
+  outForLargeBytes.getOutputStreamStatistics().bytesUploadFailed);
+
+} finally {
+  if (outForLargeBytes != null) {
+outForLargeBytes.close();
+  }
+}
+
+  }
+
+  /**
+   * Tests to check time spent waiting for tasks to complete on a
+   * blocking queue in {@link AbfsOutputStream}.
+   *
+   * @throws IOException
+   */
+  @Test
+  public void testAbfsOutputStreamTimeSpendOnWaitTask() throws IOException {
+describe("Testing time spent waiting for tasks to complete");
+final AzureBlobFileSystem fs = getFileSystem();
+Path timeSpendFilePath = new Path("AbfsOutputStreamStatsPath");
+AzureBlobFileSystemStore abfss = fs.getAbfsStore();
+FileSystem.Statistics statistics = fs.getFsStatistics();
+
+AbfsOutputStream out =
+(AbfsOutputStream) abfss.createFile(timeSpendFilePath,
+statistics, true,
+FsPermission.getDefault(), FsPermission.getUMask(fs.getConf()));
+
+  }
+
+  /**
+   * Tests to check the number of {@code shrinkWriteOperationQueue()}
+   * calls.
+   * After writing data, AbfsOutputStream doesn't upload it until
+   * flushed. Hence, flush() is called after write() to test the queue
+   * shrink calls.
+   *

[GitHub] [hadoop] steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding Output Stream Counters in ABFS

2020-03-20 Thread GitBox
steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding 
Output Stream Counters in ABFS
URL: https://github.com/apache/hadoop/pull/1899#discussion_r395749023
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsOutputStream.java
 ##
 @@ -0,0 +1,278 @@
+package org.apache.hadoop.fs.azurebfs;
+
+import java.io.IOException;
+
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.services.AbfsOutputStream;
+import org.apache.hadoop.fs.permission.FsPermission;
+
+/**
+ * Test AbfsOutputStream statistics.
+ */
+public class ITestAbfsOutputStream extends AbstractAbfsIntegrationTest {
+
+  public ITestAbfsOutputStream() throws Exception {
+  }
+
+  /**
+   * Tests to check bytes uploaded in {@link AbfsOutputStream}.
+   *
+   * @throws IOException
+   */
+  @Test
+  public void testAbfsOutputStreamUploadingBytes() throws IOException {
+describe("Testing bytes uploaded in AbfsOutputStream");
+final AzureBlobFileSystem fs = getFileSystem();
+Path uploadBytesFilePath = new Path("AbfsOutputStreamStatsPath");
+AzureBlobFileSystemStore abfss = fs.getAbfsStore();
+FileSystem.Statistics statistics = fs.getFsStatistics();
+abfss.getAbfsConfiguration().setDisableOutputStreamFlush(false);
+String testBytesToUpload = "bytes";
+
+AbfsOutputStream outForSomeBytes = null;
+try {
+  outForSomeBytes = (AbfsOutputStream) abfss.createFile(uploadBytesFilePath,
+  statistics,
+  true,
+  FsPermission.getDefault(), FsPermission.getUMask(fs.getConf()));
+
+  //Test for zero bytes To upload
+  assertValues("bytes to upload", 0,
+  outForSomeBytes.getOutputStreamStatistics().bytesToUpload);
+
+  outForSomeBytes.write(testBytesToUpload.getBytes());
+  outForSomeBytes.flush();
+
+  //Test for some bytes to upload
+  assertValues("bytes to upload", testBytesToUpload.getBytes().length,
+  outForSomeBytes.getOutputStreamStatistics().bytesToUpload);
+
+  //Test for relation between bytesUploadSuccessful, bytesUploadFailed
+  // and bytesToUpload
+  assertValues("bytesUploadSuccessful equal to difference between "
+  + "bytesToUpload and bytesUploadFailed",
+  outForSomeBytes.getOutputStreamStatistics().bytesUploadSuccessful,
+  outForSomeBytes.getOutputStreamStatistics().bytesToUpload -
+  outForSomeBytes.getOutputStreamStatistics().bytesUploadFailed);
+
+} finally {
+  if (outForSomeBytes != null) {
 
 Review comment:
   IOUtils.closeQuietly(LOG, ...), or try-with-resources
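The try-with-resources alternative suggested here can be sketched in plain Java (the stream and string names below are illustrative, not taken from the PR): the resource declared in the try header is closed automatically, which replaces the manual null check and close in the finally block.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class TryWithResourcesSketch {
  public static void main(String[] args) throws IOException {
    // The resource declared in the try header is closed automatically
    // when the block exits, even if an assertion inside it throws, so
    // no finally-block null check is needed.
    try (OutputStream out = new ByteArrayOutputStream()) {
      out.write("bytes".getBytes());
      out.flush();
    }
    System.out.println("stream closed");
  }
}
```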





[GitHub] [hadoop] steveloughran commented on a change in pull request #1881: HADOOP-16910 Adding file system counters in ABFS

2020-03-20 Thread GitBox
steveloughran commented on a change in pull request #1881: HADOOP-16910 Adding 
file system counters in ABFS
URL: https://github.com/apache/hadoop/pull/1881#discussion_r395740845
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsStreamStatistics.java
 ##
 @@ -76,12 +79,8 @@ public void testAbfsStreamOps() throws Exception {
   assertReadWriteOps("read", 1, statistics.getReadOps());
 
 } finally {
-  if (inForOneOperation != null) {
-inForOneOperation.close();
-  }
-  if (outForOneOperation != null) {
-outForOneOperation.close();
-  }
+  IOUtils.cleanupWithLogger(null, inForOneOperation,
 
 Review comment:
  You are going to need to create a logger in this test case and pass it down, 
I'm afraid
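A minimal sketch of the pattern being asked for, using a stand-in for Hadoop's `IOUtils.cleanupWithLogger` built on `java.util.logging` so the example is self-contained (the real helper lives in `org.apache.hadoop.io.IOUtils` and takes an `org.slf4j.Logger`; all names here are illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.Closeable;
import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;

public class CleanupWithLoggerSketch {
  // The test class declares a logger once and passes it down.
  private static final Logger LOG =
      Logger.getLogger(CleanupWithLoggerSketch.class.getName());

  // Stand-in for IOUtils.cleanupWithLogger: close each closeable,
  // logging (not rethrowing) any IOException, and skipping nulls.
  static void cleanupWithLogger(Logger log, Closeable... closeables) {
    for (Closeable c : closeables) {
      if (c == null) {
        continue;
      }
      try {
        c.close();
      } catch (IOException e) {
        log.log(Level.FINE, "Exception closing " + c, e);
      }
    }
  }

  public static void main(String[] args) {
    Closeable in = new ByteArrayInputStream(new byte[0]);
    // Nulls are tolerated, so the finally block in the test needs
    // no explicit null checks.
    cleanupWithLogger(LOG, in, null);
    System.out.println("cleanup done");
  }
}
```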





[GitHub] [hadoop] steveloughran commented on issue #1875: HADOOP-16794. S3A reverts KMS encryption to the bucket's default KMS …

2020-03-20 Thread GitBox
steveloughran commented on issue #1875: HADOOP-16794. S3A reverts KMS 
encryption to the bucket's default KMS …
URL: https://github.com/apache/hadoop/pull/1875#issuecomment-601779959
 
 
   I think the way to do 3.2 support properly is to pretty much backport all 
the 3.3.x stuff in order





[jira] [Commented] (HADOOP-16818) ABFS: Combine append+flush calls for blockblob & appendblob

2020-03-20 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17063417#comment-17063417
 ] 

Ayush Saxena commented on HADOOP-16818:
---

This caught my eye while checking mail: shouldn't the fix version be 3.3.0 
instead of 3.4.0? Trunk still seems to be on 3.3.0:

{code:xml}
  <artifactId>hadoop-main</artifactId>
  <version>3.3.0-SNAPSHOT</version>
{code}

Please check once. Apologies if I have messed up. :)

> ABFS:  Combine append+flush calls for blockblob & appendblob
> 
>
> Key: HADOOP-16818
> URL: https://issues.apache.org/jira/browse/HADOOP-16818
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Ishani
>Priority: Minor
> Fix For: 3.4.0
>
>
> Combine append+flush calls for blockblob & appendblob



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14918) Remove the Local Dynamo DB test option

2020-03-20 Thread Brahma Reddy Battula (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17063294#comment-17063294
 ] 

Brahma Reddy Battula commented on HADOOP-14918:
---

[~gabor.bota], it looks like this Jira is still open and the branch-2.10 patch 
needs to be merged. Can you take a look?

> Remove the Local Dynamo DB test option
> --
>
> Key: HADOOP-14918
> URL: https://issues.apache.org/jira/browse/HADOOP-14918
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-14918-001.patch, HADOOP-14918-002.patch, 
> HADOOP-14918-003.patch, HADOOP-14918-004.patch, 
> HADOOP-14918-branch-2.10.001.patch, HADOOP-14918.005.patch, 
> HADOOP-14918.006.patch
>
>
> I'm going to propose cutting out the localdynamo test option for s3guard:
> * the local DDB JAR is unmaintained and lags the SDK we work with... 
> eventually there'll be differences in the API.
> * as the local dynamo DB is unshaded, it complicates classpath setup for the 
> build. Remove it and there's no need to worry about versions of anything 
> other than the shaded AWS SDK.
> * it complicates test runs. Now we need to test against both localdynamo 
> *and* real dynamo.
> * but we can't ignore real dynamo, because that's the one which matters.
> While the local option promises to reduce test costs, really, it's just 
> adding complexity. If you are testing with s3guard, you need a real 
> table to test against. And with the exception of those people testing s3a 
> against non-AWS, consistent endpoints, everyone should be testing with 
> S3Guard.
> -Straightforward to remove.-






[jira] [Commented] (HADOOP-16818) ABFS: Combine append+flush calls for blockblob & appendblob

2020-03-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17063275#comment-17063275
 ] 

Hudson commented on HADOOP-16818:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18070 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18070/])
HADOOP-16818. ABFS: Combine append+flush calls for blockblob & (github: rev 
3612317038196ee0cb6d7204056d54b7a7ed8bf7)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/HttpQueryParams.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemE2E.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/AbfsHttpConstants.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
* (edit) hadoop-tools/hadoop-azure/src/site/markdown/abfs.md
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/FileSystemConfigurations.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsOutputStream.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java


> ABFS:  Combine append+flush calls for blockblob & appendblob
> 
>
> Key: HADOOP-16818
> URL: https://issues.apache.org/jira/browse/HADOOP-16818
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Ishani
>Priority: Minor
> Fix For: 3.4.0
>
>
> Combine append+flush calls for blockblob & appendblob






[GitHub] [hadoop] hadoop-yetus commented on issue #1903: HADOOP-16929. Added support for AArch32 for Dev Environment

2020-03-20 Thread GitBox
hadoop-yetus commented on issue #1903: HADOOP-16929. Added support for AArch32 
for Dev Environment
URL: https://github.com/apache/hadoop/pull/1903#issuecomment-601632540
 
 
   (!) A patch to the testing environment has been detected. 
   Re-executing against the patched versions to perform further tests. 
   The console is at 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1903/1/console in case 
of problems.
   





[jira] [Resolved] (HADOOP-16818) ABFS: Combine append+flush calls for blockblob & appendblob

2020-03-20 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16818.
-
Fix Version/s: 3.4.0
 Assignee: Ishani
   Resolution: Fixed

> ABFS:  Combine append+flush calls for blockblob & appendblob
> 
>
> Key: HADOOP-16818
> URL: https://issues.apache.org/jira/browse/HADOOP-16818
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Ishani
>Priority: Minor
> Fix For: 3.4.0
>
>
> Combine append+flush calls for blockblob & appendblob






[GitHub] [hadoop] steveloughran merged pull request #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob

2020-03-20 Thread GitBox
steveloughran merged pull request #1790: [HADOOP-16818] ABFS: Combine 
append+flush calls for blockblob & appendblob
URL: https://github.com/apache/hadoop/pull/1790
 
 
   





[jira] [Commented] (HADOOP-16929) ARM Compile Scripts only work for AArch64, not AArch32

2020-03-20 Thread Jira


[ 
https://issues.apache.org/jira/browse/HADOOP-16929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17063265#comment-17063265
 ] 

Maximilian Böther commented on HADOOP-16929:


I've added the fixes as a PR on GitHub.

Btw, even with the setup for AArch64 (or the fixes for AArch32), some manual 
tinkering is still needed in the build process. E.g. some components, like the 
YARN Application Catalog, blindly download x86 versions of Node, so the build 
fails. The known primitive.h issue (see HADOOP-15505) and the soft/hard-float 
issue (see HADOOP-9320) also have to be tackled manually. I did a write-up of 
what is necessary to really compile Hadoop on ARM, but it covers only the 
native files, as those were what I was interested in. For a full build without 
e.g. disabling the application catalog build, external fixes have to be 
applied (see [https://github.com/eirslett/frontend-maven-plugin/issues/884] 
for example).
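For reference, the distinction the current detection misses can be sketched with `uname -m` (the function name and messages below are illustrative, not taken from the actual build scripts): 64-bit ARM reports `aarch64`, while 32-bit ARM userlands typically report `armv7l` or `armv6l`.

```shell
#!/bin/sh
# Illustrative sketch only: classify a machine string the way an
# AArch32-aware detection script would need to.
detect_arch() {
  case "$1" in
    aarch64|arm64)  echo "64-bit ARM (AArch64)" ;;
    armv7l|armv6l)  echo "32-bit ARM (AArch32)" ;;
    *)              echo "other: $1" ;;
  esac
}

detect_arch "$(uname -m)"
```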

> ARM Compile Scripts only work for AArch64, not AArch32
> --
>
> Key: HADOOP-16929
> URL: https://issues.apache.org/jira/browse/HADOOP-16929
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Maximilian Böther
>Priority: Major
>
> The dockerfile added in HADOOP-16797 only works for AArch64, not AArch32. 
> The architecture detection also only recognizes 64-bit ARM.






[GitHub] [hadoop] MaxiBoether opened a new pull request #1903: HADOOP-16929. Added support for AArch32 for Dev Environment

2020-03-20 Thread GitBox
MaxiBoether opened a new pull request #1903: HADOOP-16929. Added support for 
AArch32 for Dev Environment
URL: https://github.com/apache/hadoop/pull/1903
 
 
   





[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1890: HADOOP-16854 Fix to prevent OutOfMemoryException and Make the threadpool and bytebuffer pool common across all AbfsOutputStream instanc

2020-03-20 Thread GitBox
hadoop-yetus removed a comment on issue #1890: HADOOP-16854 Fix to prevent 
OutOfMemoryException and Make the threadpool and bytebuffer pool common across 
all AbfsOutputStream instances
URL: https://github.com/apache/hadoop/pull/1890#issuecomment-597482104
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  19m 34s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 50s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 52s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 50s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 24s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 16s |  hadoop-tools/hadoop-azure: The 
patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m  5s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  the patch passed  |
   | -1 :x: |  findbugs  |   0m 54s |  hadoop-tools/hadoop-azure generated 1 
new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 24s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 32s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  58m 15s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-azure |
   |  |  Possible doublecheck on 
org.apache.hadoop.fs.azurebfs.services.AbfsOutputStream.threadExecutor in new 
org.apache.hadoop.fs.azurebfs.services.AbfsOutputStream(AbfsClient, String, 
long, int, boolean, boolean)  At AbfsOutputStream.java:new 
org.apache.hadoop.fs.azurebfs.services.AbfsOutputStream(AbfsClient, String, 
long, int, boolean, boolean)  At AbfsOutputStream.java:[lines 112-114] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.7 Server=19.03.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1890/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1890 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 0a6b74a2b28b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cf9cf83 |
   | Default Java | 1.8.0_242 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1890/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1890/1/artifact/out/new-findbugs-hadoop-tools_hadoop-azure.html
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1890/1/testReport/ |
   | Max. process+thread count | 415 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1890/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
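The FindBugs "possible doublecheck" warning above refers to the double-checked locking idiom; a minimal sketch of the safe form is below. The field must be volatile, otherwise another thread may observe a partially constructed object. Class, field, and pool-size choices here are illustrative, not the actual AbfsOutputStream code.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LazyExecutorHolder {
  // volatile is what makes double-checked locking safe: it guarantees
  // readers see a fully constructed executor or null, nothing in between.
  private static volatile ExecutorService threadExecutor;

  static ExecutorService getExecutor() {
    ExecutorService local = threadExecutor;
    if (local == null) {                      // first check, no lock
      synchronized (LazyExecutorHolder.class) {
        local = threadExecutor;
        if (local == null) {                  // second check, under lock
          local = Executors.newFixedThreadPool(4);
          threadExecutor = local;
        }
      }
    }
    return local;
  }

  public static void main(String[] args) {
    ExecutorService a = getExecutor();
    ExecutorService b = getExecutor();
    System.out.println(a == b);  // same instance both times
    a.shutdown();
  }
}
```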
   
   



[jira] [Created] (HADOOP-16929) ARM Compile Scripts only work for AArch64, not AArch32

2020-03-20 Thread Jira
Maximilian Böther created HADOOP-16929:
--

 Summary: ARM Compile Scripts only work for AArch64, not AArch32
 Key: HADOOP-16929
 URL: https://issues.apache.org/jira/browse/HADOOP-16929
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Maximilian Böther


The dockerfile added in HADOOP-16797 only works for AArch64, not AArch32. The 
architecture detection also only recognizes 64-bit ARM.






[jira] [Commented] (HADOOP-16927) Update hadoop-thirdparty dependency version to 1.0.0

2020-03-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17063256#comment-17063256
 ] 

Hudson commented on HADOOP-16927:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18069 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18069/])
HADOOP-16927. Update hadoop-thirdparty dependency version to 1.0.0 
(vinayakumarb: rev f02d5abacd84efb5436fa418c9192450d815c989)
* (edit) hadoop-project/pom.xml


> Update hadoop-thirdparty dependency version to 1.0.0
> 
>
> Key: HADOOP-16927
> URL: https://issues.apache.org/jira/browse/HADOOP-16927
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
>  Labels: release-blocker
> Fix For: 3.3.0
>
>
> Now that hadoop-thirdparty 1.0.0 is released, it's time to upgrade to the 
> released version in Hadoop






[jira] [Comment Edited] (HADOOP-16927) Update hadoop-thirdparty dependency version to 1.0.0

2020-03-20 Thread Vinayakumar B (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17063247#comment-17063247
 ] 

Vinayakumar B edited comment on HADOOP-16927 at 3/20/20, 9:48 AM:
--

Committed to trunk. Thanks [~ayushtkn] for the review on the PR.


was (Author: vinayrpet):
Committed to trunk.

> Update hadoop-thirdparty dependency version to 1.0.0
> 
>
> Key: HADOOP-16927
> URL: https://issues.apache.org/jira/browse/HADOOP-16927
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
>  Labels: release-blocker
> Fix For: 3.3.0
>
>
> Now that hadoop-thirdparty 1.0.0 is released, it's time to upgrade to the 
> released version in Hadoop






[jira] [Resolved] (HADOOP-16927) Update hadoop-thirdparty dependency version to 1.0.0

2020-03-20 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HADOOP-16927.

Fix Version/s: 3.3.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Committed to trunk.

> Update hadoop-thirdparty dependency version to 1.0.0
> 
>
> Key: HADOOP-16927
> URL: https://issues.apache.org/jira/browse/HADOOP-16927
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
>  Labels: release-blocker
> Fix For: 3.3.0
>
>
> Now that hadoop-thirdparty 1.0.0 is released, it's time to upgrade to the 
> released version in Hadoop






[GitHub] [hadoop] hadoop-yetus commented on issue #1900: HADOOP-16927. Update hadoop-thirdparty dependency version to 1.0.0

2020-03-20 Thread GitBox
hadoop-yetus commented on issue #1900: HADOOP-16927. Update hadoop-thirdparty 
dependency version to 1.0.0
URL: https://github.com/apache/hadoop/pull/1900#issuecomment-601612873
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m  0s |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m  4s |  https://github.com/apache/hadoop/pull/1900 
does not apply to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/1900 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1900/2/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] vinayakumarb commented on issue #1900: HADOOP-16927. Update hadoop-thirdparty dependency version to 1.0.0

2020-03-20 Thread GitBox
vinayakumarb commented on issue #1900: HADOOP-16927. Update hadoop-thirdparty 
dependency version to 1.0.0
URL: https://github.com/apache/hadoop/pull/1900#issuecomment-601612474
 
 
   Thanks @ayushtkn  for review. Merged to trunk.





[GitHub] [hadoop] asfgit merged pull request #1900: HADOOP-16927. Update hadoop-thirdparty dependency version to 1.0.0

2020-03-20 Thread GitBox
asfgit merged pull request #1900: HADOOP-16927. Update hadoop-thirdparty 
dependency version to 1.0.0
URL: https://github.com/apache/hadoop/pull/1900
 
 
   





[GitHub] [hadoop] ayushtkn commented on issue #1902: HDFS-15219. fix ResponseProcessor.run to catch Throwable instead of Exception

2020-03-20 Thread GitBox
ayushtkn commented on issue #1902: HDFS-15219. fix ResponseProcessor.run to 
catch Throwable instead of Exception
URL: https://github.com/apache/hadoop/pull/1902#issuecomment-601570002
 
 
   Can we extend a UT for the issue?

