[GitHub] [hadoop] iwasakims opened a new pull request #1797: HDFS-15077. Fix intermittent failure of TestDFSClientRetries#testLeas…

2020-01-06 Thread GitBox
iwasakims opened a new pull request #1797: HDFS-15077. Fix intermittent failure 
of TestDFSClientRetries#testLeas…
URL: https://github.com/apache/hadoop/pull/1797
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15993) Upgrade Kafka version in hadoop-kafka module

2020-01-06 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15993:
---
Status: Patch Available  (was: Open)

> Upgrade Kafka version in hadoop-kafka module
> 
>
> Key: HADOOP-15993
> URL: https://issues.apache.org/jira/browse/HADOOP-15993
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, security
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> The current version is 0.8.2.1, and it has a net.jpountz.lz4:lz4:1.2.0 
> dependency which is vulnerable. 
> (https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-4611)
> Let's upgrade.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[GitHub] [hadoop] aajisaka opened a new pull request #1796: HADOOP-15993. Upgrade Kafka to 2.4.0 in hadoop-kafka module.

2020-01-06 Thread GitBox
aajisaka opened a new pull request #1796: HADOOP-15993. Upgrade Kafka to 2.4.0 
in hadoop-kafka module.
URL: https://github.com/apache/hadoop/pull/1796
 
 
   JIRA: https://issues.apache.org/jira/browse/HADOOP-15993





[GitHub] [hadoop] snvijaya commented on issue #1712: HADOOP-16699: Add verbose TRACE logging to ABFS

2020-01-06 Thread GitBox
snvijaya commented on issue #1712: HADOOP-16699: Add verbose TRACE logging to 
ABFS
URL: https://github.com/apache/hadoop/pull/1712#issuecomment-571448646
 
 
   Hi @steveloughran, it could take a few weeks before I can get back to this 
change. Meanwhile, getting this change merged will help with any debugging and 
also serve as a starting point for the next change. 
   
   Thanks a lot for your time.
   





[jira] [Commented] (HADOOP-16756) distcp -update to S3A always overwrites due to block size mismatch

2020-01-06 Thread Daisuke Kobayashi (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17009368#comment-17009368
 ] 

Daisuke Kobayashi commented on HADOOP-16756:


Yea. Otherwise, if this is expected behavior per the design, we should 
document it. Thoughts?

> distcp -update to S3A always overwrites due to block size mismatch
> --
>
> Key: HADOOP-16756
> URL: https://issues.apache.org/jira/browse/HADOOP-16756
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, tools/distcp
>Affects Versions: 3.3.0
>Reporter: Daisuke Kobayashi
>Priority: Major
>
> Distcp over S3A always copies all source files, regardless of whether the 
> files have changed. This contradicts the statement in the doc below.
> [http://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html]
> {noformat}
> And to use -update to only copy changed files.
> {noformat}
> CopyMapper compares file length as well as block size before copying. While 
> the file length should match, the block size does not. This is apparently 
> because the returned block size from S3A is always 32MB.
> [https://github.com/apache/hadoop/blob/release-3.2.0-RC1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java#L348]
> I'd suppose we should update the documentation or make a code change.
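The skip decision described above can be sketched as follows. This is a simplified, hypothetical model of the check, not Hadoop's actual CopyMapper API: with -update, a target file is only skipped when both length and block size match, so a fixed reported block size on the S3A side forces a re-copy even for unchanged files.

```java
// Simplified, hypothetical model of distcp's -update skip check.
// Not Hadoop's actual API; for illustration only.
public class CopySkipCheck {

    // distcp skips a file only when length AND block size both match.
    static boolean canSkip(long srcLen, long srcBlockSize,
                           long dstLen, long dstBlockSize) {
        return srcLen == dstLen && srcBlockSize == dstBlockSize;
    }

    public static void main(String[] args) {
        long mb = 1024L * 1024L;
        // HDFS source with 128 MB blocks; S3A target reports a fixed 32 MB.
        boolean skip = canSkip(10 * mb, 128 * mb, 10 * mb, 32 * mb);
        // Lengths match, but the block sizes differ, so the file is re-copied.
        System.out.println("skip=" + skip);
    }
}
```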






[jira] [Assigned] (HADOOP-15993) Upgrade Kafka version in hadoop-kafka module

2020-01-06 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-15993:
--

Assignee: Akira Ajisaka

> Upgrade Kafka version in hadoop-kafka module
> 
>
> Key: HADOOP-15993
> URL: https://issues.apache.org/jira/browse/HADOOP-15993
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, security
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> The current version is 0.8.2.1, and it has a net.jpountz.lz4:lz4:1.2.0 
> dependency which is vulnerable. 
> (https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-4611)
> Let's upgrade.






[jira] [Updated] (HADOOP-16621) [pb-upgrade] spark-hive doesn't compile against hadoop trunk because of Token's marshalling

2020-01-06 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16621:
---
Priority: Critical  (was: Major)

> [pb-upgrade] spark-hive doesn't compile against hadoop trunk because of 
> Token's marshalling
> ---
>
> Key: HADOOP-16621
> URL: https://issues.apache.org/jira/browse/HADOOP-16621
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Vinayakumar B
>Priority: Critical
>
> the move to protobuf 3.x stops spark building because Token has a method 
> which returns a protobuf, and now it is returning some v3 types.
> if we want to isolate downstream code from protobuf changes, we need to move 
> that marshalling method out of Token and put it in a helper class.
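The isolation pattern suggested above can be sketched as follows. This is an illustrative sketch, not the actual Hadoop change: the class and type names (TokenProtoMarshaller, SimpleToken) are hypothetical, and plain bytes stand in for the protobuf wire type so the example stays self-contained. The point is that the downstream-visible type exposes no protobuf in its API; only the helper class touches the wire format.

```java
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of moving wire-format marshalling into a helper
// class so downstream code never sees protobuf types directly.
public class TokenProtoMarshaller {

    // Downstream-visible type: no protobuf anywhere in its API.
    static final class SimpleToken {
        final String kind;
        SimpleToken(String kind) { this.kind = kind; }
    }

    // The wire-format dependency lives only here; if protobuf is
    // upgraded, only this helper needs to change.
    static byte[] marshal(SimpleToken t) {
        return t.kind.getBytes(StandardCharsets.UTF_8);
    }

    static SimpleToken unmarshal(byte[] wire) {
        return new SimpleToken(new String(wire, StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        SimpleToken t = unmarshal(marshal(
            new SimpleToken("HDFS_DELEGATION_TOKEN")));
        System.out.println(t.kind);
    }
}
```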






[GitHub] [hadoop] aajisaka commented on a change in pull request #1640: HADOOP-16637. Fix findbugs warnings in hadoop-cos.

2020-01-06 Thread GitBox
aajisaka commented on a change in pull request #1640: HADOOP-16637. Fix 
findbugs warnings in hadoop-cos.
URL: https://github.com/apache/hadoop/pull/1640#discussion_r363584238
 
 

 ##
 File path: 
hadoop-cloud-storage-project/hadoop-cos/src/main/java/org/apache/hadoop/fs/cosn/BufferPool.java
 ##
 @@ -86,10 +85,6 @@ private File createDir(String dirPath) throws IOException {
   } else {
 LOG.debug("buffer dir: {} already exists.", dirPath);
   }
-} else {
-  throw new IOException("creating buffer dir: " + dir.getAbsolutePath()
-  + "unsuccessfully.");
-}
 
 Review comment:
   Would you fix indent?





[jira] [Commented] (HADOOP-16670) Stripping Submarine code from Hadoop codebase.

2020-01-06 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17009358#comment-17009358
 ] 

Akira Ajisaka commented on HADOOP-16670:


Hi [~tangzhankun], would you reflect Wanqiang's comment? I'm +1 if that is 
addressed.

> Stripping Submarine code from Hadoop codebase.
> --
>
> Key: HADOOP-16670
> URL: https://issues.apache.org/jira/browse/HADOOP-16670
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Zhankun Tang
>Priority: Blocker
> Attachments: HADOOP-16670-trunk.001.patch, 
> HADOOP-16670-trunk.002.patch, HADOOP-16670-trunk.003.patch, 
> HADOOP-16670-trunk.004.patch, HADOOP-16670-trunk.005.patch, 
> HADOOP-16670-trunk.006.patch
>
>
> Now that Submarine is getting out of Hadoop and has its own repo, it's time 
> to strip the Submarine code from the Hadoop codebase in Hadoop 3.3.0






[jira] [Updated] (HADOOP-16670) Stripping Submarine code from Hadoop codebase.

2020-01-06 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16670:
---
Priority: Blocker  (was: Major)

> Stripping Submarine code from Hadoop codebase.
> --
>
> Key: HADOOP-16670
> URL: https://issues.apache.org/jira/browse/HADOOP-16670
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Zhankun Tang
>Priority: Blocker
> Attachments: HADOOP-16670-trunk.001.patch, 
> HADOOP-16670-trunk.002.patch, HADOOP-16670-trunk.003.patch, 
> HADOOP-16670-trunk.004.patch, HADOOP-16670-trunk.005.patch, 
> HADOOP-16670-trunk.006.patch
>
>
> Now that Submarine is getting out of Hadoop and has its own repo, it's time 
> to strip the Submarine code from the Hadoop codebase in Hadoop 3.3.0






[GitHub] [hadoop] hadoop-yetus commented on issue #1795: HADOOP-16792: Make S3 client request timeout configurable

2020-01-06 Thread GitBox
hadoop-yetus commented on issue #1795: HADOOP-16792: Make S3 client request 
timeout configurable
URL: https://github.com/apache/hadoop/pull/1795#issuecomment-571414727
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------|
   | +0 :ok: |  reexec  |  29m 23s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  21m 30s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 49s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 59s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 56s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 26s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 18s |  hadoop-tools/hadoop-aws: The 
patch generated 1 new + 21 unchanged - 0 fixed = 22 total (was 21)  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 58s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m  2s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 16s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  89m 51s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1795/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1795 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 9a4c42ae4d48 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 59aac00 |
   | Default Java | 1.8.0_232 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1795/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1795/1/testReport/ |
   | Max. process+thread count | 420 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1795/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] ramesh0201 edited a comment on issue #1768: HADOOP-16769. LocalDirAllocator to provide diagnostics when file creation fails

2020-01-06 Thread GitBox
ramesh0201 edited a comment on issue #1768: HADOOP-16769. LocalDirAllocator to 
provide diagnostics when file creation fails
URL: https://github.com/apache/hadoop/pull/1768#issuecomment-571394894
 
 
   I will address the above change by catching the two errors and rethrowing 
them nested into one another, as part of the new pull request. Thanks!
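The nesting described here can be sketched as follows. This is an illustrative pattern, not the actual LocalDirAllocator code: the method name and messages are hypothetical. The low-level failure is attached as the cause of a higher-level, more descriptive IOException, so the diagnostics carry both.

```java
import java.io.IOException;

public class NestedDiagnostics {
    // Hypothetical sketch: rethrow the low-level failure nested inside
    // a higher-level exception that adds context (the directory name).
    static void createWorkFile(String dir) throws IOException {
        try {
            // Stand-in for the underlying file-creation failure.
            throw new IOException("No space left on device");
        } catch (IOException e) {
            throw new IOException(
                "Could not create a temp file under " + dir, e);
        }
    }

    public static void main(String[] args) {
        try {
            createWorkFile("/tmp/local-dirs");
        } catch (IOException e) {
            // Both the context and the root cause are available.
            System.out.println(e.getMessage());
            System.out.println("caused by: " + e.getCause().getMessage());
        }
    }
}
```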





[GitHub] [hadoop] ramesh0201 commented on issue #1768: HADOOP-16769. LocalDirAllocator to provide diagnostics when file creation fails

2020-01-06 Thread GitBox
ramesh0201 commented on issue #1768: HADOOP-16769. LocalDirAllocator to provide 
diagnostics when file creation fails
URL: https://github.com/apache/hadoop/pull/1768#issuecomment-571394894
 
 
   I will address the above change by catching the two errors and rethrowing 
them nested into one another. Thanks!





[GitHub] [hadoop] mustafaiman opened a new pull request #1795: HADOOP-16792: Make S3 client request timeout configurable

2020-01-06 Thread GitBox
mustafaiman opened a new pull request #1795: HADOOP-16792: Make S3 client 
request timeout configurable
URL: https://github.com/apache/hadoop/pull/1795
 
 
   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   





[GitHub] [hadoop] belugabehr commented on a change in pull request #1792: HADOOP-16790: Add Write Convenience Methods

2020-01-06 Thread GitBox
belugabehr commented on a change in pull request #1792: HADOOP-16790: Add Write 
Convenience Methods
URL: https://github.com/apache/hadoop/pull/1792#discussion_r363555279
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
 ##
 @@ -1633,4 +1638,119 @@ public static boolean compareFs(FileSystem srcFs, 
FileSystem destFs) {
 // check for ports
 return srcUri.getPort()==dstUri.getPort();
   }
+
+  /**
+   * Writes bytes to a file. This utility method opens the file for writing,
+   * creating the file if it does not exist, or overwrites an existing file. 
All
+   * bytes in the byte array are written to the file.
+   *
+   * @param fs the files system with which to create the file
+   * @param path the path to the file
+   * @param bytes the byte array with the bytes to write
+   *
+   * @return the file system
+   *
+   * @throws NullPointerException if any of the arguments are {@code null}
+   * @throws IOException if an I/O error occurs creating or writing to the file
+   */
+  public static FileSystem write(final FileSystem fs, final Path path,
+  final byte[] bytes) throws IOException {
+
+Objects.requireNonNull(path);
+Objects.requireNonNull(bytes);
+
+try (FSDataOutputStream out = fs.create(path)) {
+  out.write(bytes);
+}
+
+return fs;
+  }
+
+  /**
+   * Write lines of text to a file. Each line is a char sequence and is written
+   * to the file in sequence with each line terminated by the platform's line
+   * separator, as defined by the system property {@code
+   * line.separator}. Characters are encoded into bytes using the specified
+   * charset. This utility method opens the file for writing, creating the file
+   * if it does not exist, or overwrites an existing file.
+   *
+   * @param fs the files system with which to create the file
+   * @param path the path to the file
+   * @param lines a Collection to iterate over the char sequences
+   * @param cs the charset to use for encoding
+   *
+   * @return the file system
+   *
+   * @throws NullPointerException if any of the arguments are {@code null}
+   * @throws IOException if an I/O error occurs creating or writing to the file
+   */
+  public static FileSystem write(final FileSystem fs, final Path path,
+  final Iterable<? extends CharSequence> lines, final Charset cs)
+  throws IOException {
+
+Objects.requireNonNull(path);
+Objects.requireNonNull(lines);
+Objects.requireNonNull(cs);
+
+CharsetEncoder encoder = cs.newEncoder();
+try (FSDataOutputStream out = fs.create(path);
+BufferedWriter writer =
+new BufferedWriter(new OutputStreamWriter(out, encoder))) {
+  for (CharSequence line : lines) {
+writer.append(line);
+writer.newLine();
+  }
+}
+return fs;
+  }
+
+  /**
+   * Write a line of text to a file. Characters are encoded into bytes using 
the
+   * specified charset. This utility method opens the file for writing, 
creating
+   * the file if it does not exist, or overwrites an existing file.
+   *
+   * @param fs the files system with which to create the file
+   * @param path the path to the file
+   * @param charseq the char sequence to write to the file
+   * @param cs the charset to use for encoding
+   *
+   * @return the file system
+   *
+   * @throws NullPointerException if any of the arguments are {@code null}
+   * @throws IOException if an I/O error occurs creating or writing to the file
+   */
+  public static FileSystem write(final FileSystem fs, final Path path,
+  final CharSequence charseq, final Charset cs) throws IOException {
+
+Objects.requireNonNull(path);
+Objects.requireNonNull(charseq);
+Objects.requireNonNull(cs);
+
+CharsetEncoder encoder = cs.newEncoder();
+try (FSDataOutputStream out = fs.create(path);
+BufferedWriter writer =
+new BufferedWriter(new OutputStreamWriter(out, encoder))) {
+  writer.append(charseq);
+}
+return fs;
+  }
+
+  /**
+   * Write a line of text to a file. Characters are encoded into bytes using
+   * UTF-8. This utility method opens the file for writing, creating the file 
if
+   * it does not exist, or overwrites an existing file.
+   *
+   * @param fs the files system with which to create the file
+   * @param path the path to the file
+   * @param charseq the char sequence to write to the file
+   *
+   * @return the file system
 
 Review comment:
   Might as well.  Allows for method chaining.  For example it's common to 
write a file into the tmp directory then move it into its final destination to 
avoid writing garbage into the target directory if the write fails.
   
   ```File.write(fs, tmpPath, byte[]).rename(tmpPath, path);```
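The write-to-temp-then-rename pattern mentioned above can be sketched without Hadoop using java.nio. This is a minimal stand-in under that assumption; Hadoop's FileSystem.rename has different semantics (it returns a boolean, and its atomicity depends on the store), so this only illustrates the idea of never exposing a partially written target.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicPublish {
    // Write into a sibling temp file, then move it into place, so a
    // failed write never leaves garbage at the target path.
    static void publish(Path target, byte[] bytes) throws IOException {
        Path tmp = target.resolveSibling(target.getFileName() + ".tmp");
        Files.write(tmp, bytes);
        Files.move(tmp, target, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("publish-demo");
        Path target = dir.resolve("data.bin");
        publish(target, new byte[] {1, 2, 3});
        System.out.println(Files.size(target));
    }
}
```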



[GitHub] [hadoop] ramesh0201 commented on a change in pull request #1768: HADOOP-16769. LocalDirAllocator to provide diagnostics when file creation fails

2020-01-06 Thread GitBox
ramesh0201 commented on a change in pull request #1768: HADOOP-16769. 
LocalDirAllocator to provide diagnostics when file creation fails
URL: https://github.com/apache/hadoop/pull/1768#discussion_r363555060
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalDirAllocator.java
 ##
 @@ -532,4 +532,23 @@ public void testGetLocalPathForWriteForInvalidPaths() 
throws Exception {
 }
   }
 
+  /**
+   * Test to check the LocalDirAllocation for the less space HADOOP-16769
+   *
+   * @throws Exception
+   */
+  @Test(timeout = 3)
+  public void testGetLocalPathForWriteForLessSpace() throws Exception {
+String dir0 = buildBufferDir(ROOT, 0);
+String dir1 = buildBufferDir(ROOT, 1);
+conf.set(CONTEXT, dir0 + "," + dir1);
+try {
+  dirAllocator.getLocalPathForWrite("p1/x", 3_000_000_000_000L, conf);
 
 Review comment:
   Sorry, I missed updating the pull request. I have a new code change that 
uses Long.MAX_VALUE instead of this hardcoded number and then uses a regex to 
match the error message. I will create a new pull request.





[GitHub] [hadoop] belugabehr commented on a change in pull request #1792: HADOOP-16790: Add Write Convenience Methods

2020-01-06 Thread GitBox
belugabehr commented on a change in pull request #1792: HADOOP-16790: Add Write 
Convenience Methods
URL: https://github.com/apache/hadoop/pull/1792#discussion_r363554355
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
 ##
 @@ -1493,6 +1495,73 @@ public void testReadSymlinkWithAFileAsInput() throws 
IOException {
 file.delete();
   }
 
+  /**
+   * Test that bytes are written out correctly to the local file system.
+   */
+  @Test
+  public void testWriteBytes() throws IOException {
+setupDirs();
+
+URI uri = tmp.toURI();
+Configuration conf = new Configuration();
+FileSystem fs = FileSystem.newInstance(uri, conf);
 
 Review comment:
   This was copy & paste from other tests in this same class.  I can look at 
that though.





[GitHub] [hadoop] belugabehr commented on issue #1792: HADOOP-16790: Add Write Convenience Methods

2020-01-06 Thread GitBox
belugabehr commented on issue #1792: HADOOP-16790: Add Write Convenience Methods
URL: https://github.com/apache/hadoop/pull/1792#issuecomment-571387538
 
 
   @steveloughran Thanks for the review!
   
   > can't point to a good alternative place right now.
   
   Neither could I.
   
   > Passing in overwrite options on create is critical, or make overwrite the 
default (and tell people!)
   
   The default is to overwrite because this is not an append function and is 
the most straightforward behavior.  The behavior is already documented in the 
JavaDoc:
   
   ```
This utility method opens the file for writing, creating the file if it 
does not exist, or overwrites an existing file.
   ```
   
   > And we will need FileContext equivalent.
   
   I'm honestly not sure what that is, but can that be added as a backlog item?





[jira] [Created] (HADOOP-16792) Let s3 clients configure request timeout

2020-01-06 Thread Mustafa Iman (Jira)
Mustafa Iman created HADOOP-16792:
-

 Summary: Let s3 clients configure request timeout
 Key: HADOOP-16792
 URL: https://issues.apache.org/jira/browse/HADOOP-16792
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools
Affects Versions: 3.3.0
Reporter: Mustafa Iman


S3 does not guarantee latency. Every once in a while a request may straggle and 
drive up latency for the overall operation. In these cases, simply timing out 
the individual request is beneficial so that the client application can retry. 
The retry tends to complete faster than the original straggling request most of 
the time. Others have experienced this issue too: 
[https://arxiv.org/pdf/1911.11727.pdf] .

S3 configuration already provides a timeout facility via 
`ClientConfiguration#setTimeout`. Exposing this configuration is beneficial for 
latency-sensitive applications. The S3 client configuration is shared with the 
DynamoDB client, which is also affected by unreliable worst-case latency.
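The timeout-and-retry behavior this enables can be sketched generically. This is an illustration of the pattern, not the S3A implementation (the actual change wires the timeout into the AWS SDK client configuration rather than using a local executor): each attempt gets a deadline, a straggler is cancelled, and the call is retried.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.function.Supplier;

public class TimeoutRetry {
    // Run a call with a per-attempt timeout; on timeout, cancel the
    // straggling attempt and retry, up to a bounded number of attempts.
    static <T> T callWithTimeout(Supplier<T> call, long timeoutMs, int attempts)
            throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            for (int i = 0; i < attempts; i++) {
                Future<T> f = pool.submit(call::get);
                try {
                    return f.get(timeoutMs, TimeUnit.MILLISECONDS);
                } catch (TimeoutException e) {
                    f.cancel(true); // straggler: give up and retry
                }
            }
            throw new TimeoutException("all attempts timed out");
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        // A fast call completes well within the per-attempt deadline.
        String r = callWithTimeout(() -> "ok", 1000L, 3);
        System.out.println(r);
    }
}
```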

 

 






[jira] [Commented] (HADOOP-16727) KMS Jetty server does not startup if trust store password is null

2020-01-06 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17009261#comment-17009261
 ] 

Hadoop QA commented on HADOOP-16727:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 29m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 17m 32s{color} 
| {color:red} root generated 4 new + 1864 unchanged - 4 fixed = 1868 total (was 
1868) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 86 unchanged - 18 fixed = 86 total (was 104) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 58s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
38s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}135m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | HADOOP-16727 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12987500/HADOOP-16727.003.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5c78f658d7cb 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 819159f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16725/artifact/out/diff-compile-javac-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16725/testReport/ |
| Max. process+thread count | 1346 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16725/console |

[jira] [Commented] (HADOOP-16727) KMS Jetty server does not startup if trust store password is null

2020-01-06 Thread Hanisha Koneru (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17009208#comment-17009208
 ] 

Hanisha Koneru commented on HADOOP-16727:
-

Thank you [~weichiu]. I have retriggered a Jenkins pre-commit run, as the last 
run was a while ago. If it comes back clean, I will commit the patch.

> KMS Jetty server does not startup if trust store password is null
> -
>
> Key: HADOOP-16727
> URL: https://issues.apache.org/jira/browse/HADOOP-16727
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HADOOP-16727.003.patch, HDFS-14951.001.patch, 
> HDFS-14951.002.patch
>
>
> In HttpServer2, if the trustStore is set but the trust store password is not, 
> then we set the TrustStorePassword of SSLContextFactory to null. This results 
> in the Jetty server not starting up.
> {code:java}
> In HttpServer2#createHttpsChannelConnector(),
> if (trustStore != null) {
>   sslContextFactory.setTrustStorePath(trustStore);
>   sslContextFactory.setTrustStoreType(trustStoreType);
>   sslContextFactory.setTrustStorePassword(trustStorePassword);
> }
> {code}
> Before setting the trust store password, we should check that it is not null.
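The guard described above can be sketched as a self-contained demo. The `StubSslContextFactory` below is a hypothetical stand-in for Jetty's `SslContextFactory` (it rejects a null password the way the real factory's startup path effectively does), used only so the null-guard logic can run on its own:

```java
// Hypothetical sketch of the null guard proposed in HADOOP-16727; the stub
// class is illustrative, not the actual Jetty or HttpServer2 code.
public class TrustStoreGuardDemo {

  // Minimal stand-in for org.eclipse.jetty.util.ssl.SslContextFactory.
  static class StubSslContextFactory {
    String path;
    String type;
    String password;

    void setTrustStorePath(String p) { path = p; }
    void setTrustStoreType(String t) { type = t; }
    void setTrustStorePassword(String p) {
      if (p == null) {
        throw new IllegalArgumentException("null trust store password");
      }
      password = p;
    }
  }

  // Mirrors the shape of the HttpServer2 snippet quoted above, with the
  // proposed null check before setTrustStorePassword().
  static void configure(StubSslContextFactory factory, String trustStore,
      String trustStoreType, String trustStorePassword) {
    if (trustStore != null) {
      factory.setTrustStorePath(trustStore);
      factory.setTrustStoreType(trustStoreType);
      if (trustStorePassword != null) {
        factory.setTrustStorePassword(trustStorePassword);
      }
    }
  }

  public static void main(String[] args) {
    StubSslContextFactory factory = new StubSslContextFactory();
    // A null password no longer reaches the factory, so startup can proceed.
    configure(factory, "/etc/security/truststore.jks", "jks", null);
    System.out.println("trust store configured: " + factory.path);
  }
}
```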



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ThomasMarquardt commented on a change in pull request #1791: HADOOP-16785: wasb to raise IOE if write() invoked on a closed stream

2020-01-06 Thread GitBox
ThomasMarquardt commented on a change in pull request #1791: HADOOP-16785: wasb 
to raise IOE if write() invoked on a closed stream
URL: https://github.com/apache/hadoop/pull/1791#discussion_r363454016
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCreate.java
 ##
 @@ -104,4 +107,39 @@ public void testCreateNonRecursive2() throws Exception {
 .close();
 assertIsFile(fs, testFile);
   }
+
+  /**
+   * Attempts to use the ABFS stream after it is closed.
+   */
+  @Test
+  public void testWriteAfterClose() throws Throwable {
+final AzureBlobFileSystem fs = getFileSystem();
+Path testPath = new Path(TEST_FOLDER_PATH, TEST_CHILD_FILE);
+FSDataOutputStream out = fs.create(testPath);
+out.close();
+intercept(IOException.class, () -> out.write('a'));
+intercept(IOException.class, () -> out.write(new byte[]{'a'}));
+// hsync is not ignored on a closed stream
+// out.hsync();
 
 Review comment:
   Ok, certainly not critical so I'll resolve this comment.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ThomasMarquardt commented on a change in pull request #1791: HADOOP-16785: wasb to raise IOE if write() invoked on a closed stream

2020-01-06 Thread GitBox
ThomasMarquardt commented on a change in pull request #1791: HADOOP-16785: wasb 
to raise IOE if write() invoked on a closed stream
URL: https://github.com/apache/hadoop/pull/1791#discussion_r363453384
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCreate.java
 ##
 @@ -104,4 +107,39 @@ public void testCreateNonRecursive2() throws Exception {
 .close();
 assertIsFile(fs, testFile);
   }
+
+  /**
+   * Attempts to use the ABFS stream after it is closed.
+   */
+  @Test
+  public void testWriteAfterClose() throws Throwable {
+final AzureBlobFileSystem fs = getFileSystem();
+Path testPath = new Path(TEST_FOLDER_PATH, TEST_CHILD_FILE);
+FSDataOutputStream out = fs.create(testPath);
+out.close();
+intercept(IOException.class, () -> out.write('a'));
+intercept(IOException.class, () -> out.write(new byte[]{'a'}));
+// hsync is not ignored on a closed stream
+// out.hsync();
+out.flush();
+out.close();
+  }
+
+  /**
+   * Attempts to double close an ABFS output stream from within a
+   * FilterOutputStream.
+   * That class handles a double failure on close badly if the second
+   * exception rethrows the first.
+   */
+  @Test
+  public void testFilteredDoubleClose() throws Throwable {
 
 Review comment:
   I think you can simply create a second stream for the same path, or even 
call delete on it after writing some data with the other stream.  Basically, 
write to the same path with two streams.





[GitHub] [hadoop] steveloughran commented on issue #1668: HADOOP-16645. S3A Delegation Token extension point to use StoreContext.

2020-01-06 Thread GitBox
steveloughran commented on issue #1668: HADOOP-16645. S3A Delegation Token 
extension point to use StoreContext.
URL: https://github.com/apache/hadoop/pull/1668#issuecomment-571285652
 
 
   thanks!
   





[GitHub] [hadoop] steveloughran commented on issue #1791: HADOOP-16785: wasb to raise IOE if write() invoked on a closed stream

2020-01-06 Thread GitBox
steveloughran commented on issue #1791: HADOOP-16785: wasb to raise IOE if 
write() invoked on a closed stream
URL: https://github.com/apache/hadoop/pull/1791#issuecomment-571278717
 
 
   checkstyle
   ```
   
/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java:268:
  } else: 'else' construct must use '{}'s. [NeedBraces]
   ```





[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1791: HADOOP-16785: wasb to raise IOE if write() invoked on a closed stream

2020-01-06 Thread GitBox
hadoop-yetus removed a comment on issue #1791: HADOOP-16785: wasb to raise IOE 
if write() invoked on a closed stream
URL: https://github.com/apache/hadoop/pull/1791#issuecomment-571215129
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 11s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
4 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m 46s |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 47s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   2m 31s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  6s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 51s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  3s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m  1s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  7s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 17s |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 15s |  the patch passed  |
   | +1 :green_heart: |  javac  |  17m 15s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m 31s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m  8s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 20s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 14s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   3m 20s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m  9s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   1m 38s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 53s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 123m 15s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1791/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1791 |
   | JIRA Issue | HADOOP-16785 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 543c92b9b053 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4a76ab7 |
   | Default Java | 1.8.0_232 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1791/3/testReport/ |
   | Max. process+thread count | 1373 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-azure 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1791/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1791: HADOOP-16785: wasb to raise IOE if write() invoked on a closed stream

2020-01-06 Thread GitBox
hadoop-yetus removed a comment on issue #1791: HADOOP-16785: wasb to raise IOE 
if write() invoked on a closed stream
URL: https://github.com/apache/hadoop/pull/1791#issuecomment-571209228
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   2m 28s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
4 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m 52s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m  6s |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 40s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   2m 49s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  6s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 37s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  6s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m  0s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 12s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 18s |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 10s |  the patch passed  |
   | +1 :green_heart: |  javac  |  17m 10s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m 44s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m  4s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m  4s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 14s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   3m 26s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 19s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   1m 34s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 54s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 126m  2s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1791/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1791 |
   | JIRA Issue | HADOOP-16785 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ffaaa026f726 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4a76ab7 |
   | Default Java | 1.8.0_232 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1791/2/testReport/ |
   | Max. process+thread count | 1375 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-azure 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1791/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] steveloughran commented on a change in pull request #1791: HADOOP-16785: wasb to raise IOE if write() invoked on a closed stream

2020-01-06 Thread GitBox
steveloughran commented on a change in pull request #1791: HADOOP-16785: wasb 
to raise IOE if write() invoked on a closed stream
URL: https://github.com/apache/hadoop/pull/1791#discussion_r363445962
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCreate.java
 ##
 @@ -104,4 +107,39 @@ public void testCreateNonRecursive2() throws Exception {
 .close();
 assertIsFile(fs, testFile);
   }
+
+  /**
+   * Attempts to use the ABFS stream after it is closed.
+   */
+  @Test
+  public void testWriteAfterClose() throws Throwable {
+final AzureBlobFileSystem fs = getFileSystem();
+Path testPath = new Path(TEST_FOLDER_PATH, TEST_CHILD_FILE);
+FSDataOutputStream out = fs.create(testPath);
+out.close();
+intercept(IOException.class, () -> out.write('a'));
+intercept(IOException.class, () -> out.write(new byte[]{'a'}));
+// hsync is not ignored on a closed stream
+// out.hsync();
+out.flush();
+out.close();
+  }
+
+  /**
+   * Attempts to double close an ABFS output stream from within a
+   * FilterOutputStream.
+   * That class handles a double failure on close badly if the second
+   * exception rethrows the first.
+   */
+  @Test
+  public void testFilteredDoubleClose() throws Throwable {
 
 Review comment:
   You are right, I couldn't think how to reliably replicate it without going 
near Mockito etc.
   
   Or we could add a package-level setLastError() call? 





[GitHub] [hadoop] steveloughran commented on a change in pull request #1791: HADOOP-16785: wasb to raise IOE if write() invoked on a closed stream

2020-01-06 Thread GitBox
steveloughran commented on a change in pull request #1791: HADOOP-16785: wasb 
to raise IOE if write() invoked on a closed stream
URL: https://github.com/apache/hadoop/pull/1791#discussion_r363445286
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCreate.java
 ##
 @@ -104,4 +107,39 @@ public void testCreateNonRecursive2() throws Exception {
 .close();
 assertIsFile(fs, testFile);
   }
+
+  /**
+   * Attempts to use the ABFS stream after it is closed.
+   */
+  @Test
+  public void testWriteAfterClose() throws Throwable {
+final AzureBlobFileSystem fs = getFileSystem();
+Path testPath = new Path(TEST_FOLDER_PATH, TEST_CHILD_FILE);
+FSDataOutputStream out = fs.create(testPath);
+out.close();
+intercept(IOException.class, () -> out.write('a'));
+intercept(IOException.class, () -> out.write(new byte[]{'a'}));
+// hsync is not ignored on a closed stream
+// out.hsync();
 
 Review comment:
   no idea, it just failed for me *and I wasn't worried about it*. flush() is 
the one I care about, as I know that can get called in some places when closed 
(compression libs etc., so nothing we can fix).





[GitHub] [hadoop] steveloughran commented on a change in pull request #1791: HADOOP-16785: wasb to raise IOE if write() invoked on a closed stream

2020-01-06 Thread GitBox
steveloughran commented on a change in pull request #1791: HADOOP-16785: wasb 
to raise IOE if write() invoked on a closed stream
URL: https://github.com/apache/hadoop/pull/1791#discussion_r363444853
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
 ##
 @@ -259,7 +259,15 @@ public synchronized void close() throws IOException {
   }
 
   private synchronized void flushInternal(boolean isClose) throws IOException {
-maybeThrowLastError();
+try {
+  maybeThrowLastError();
+} catch (IOException e) {
+  if (isClose) {
+// wrap existing exception so as to avoid breaking try-with-resources
+throw new IOException("Skipping final flush and write due to " + e, e);
+  } else
+throw e;
+}
 
 Review comment:
   thought of that, but also thought it might be good to differentiate 
exceptions raised in the later bits of the flush. Keeping it in close() is 
cleaner as it puts the problem where it belongs.
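The double-failure problem being discussed can be reproduced in isolation: if close() rethrows the exact exception instance that already escaped from the body of a try-with-resources block, Java calls `addSuppressed()` with that same instance and throws IllegalArgumentException ("Self-suppression not permitted"). A minimal sketch (hypothetical stream class, not the actual ABFS code):

```java
import java.io.IOException;

public class SelfSuppressionDemo {

  // Illustrative stream: it remembers the first failure and decides how
  // close() should surface it.
  static class FailingStream implements AutoCloseable {
    private IOException lastError;

    void write() throws IOException {
      lastError = new IOException("simulated write failure");
      throw lastError;
    }

    @Override
    public void close() throws IOException {
      if (lastError != null) {
        // Rethrowing lastError itself would make try-with-resources call
        // primary.addSuppressed(primary), which throws
        // IllegalArgumentException ("Self-suppression not permitted").
        // Wrapping the stored exception avoids that.
        throw new IOException(
            "Skipping final flush and write due to " + lastError, lastError);
      }
    }
  }

  public static void main(String[] args) {
    try (FailingStream stream = new FailingStream()) {
      stream.write();
    } catch (IOException e) {
      // The write failure is primary; the wrapped close failure is suppressed.
      System.out.println("caught: " + e.getMessage());
      System.out.println("suppressed: " + e.getSuppressed().length);
    }
  }
}
```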





[GitHub] [hadoop] steveloughran commented on a change in pull request #1791: HADOOP-16785: wasb to raise IOE if write() invoked on a closed stream

2020-01-06 Thread GitBox
steveloughran commented on a change in pull request #1791: HADOOP-16785: wasb 
to raise IOE if write() invoked on a closed stream
URL: https://github.com/apache/hadoop/pull/1791#discussion_r363444377
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
 ##
 @@ -1198,6 +1201,17 @@ public void setEncodedKey(String anEncodedKey) {
 private void restoreKey() throws IOException {
   store.rename(getEncodedKey(), getKey());
 }
+
+/**
+ * Check for the stream being open.
+ * @throws IOException if the stream is closed.
+ */
+private void checkOpen() throws IOException {
 
 Review comment:
   yes, some bits of code do call flush() after it's closed, so it's safest to 
no-op it.
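The pattern under discussion — write() guarded by checkOpen(), flush() deliberately tolerant — can be sketched standalone (illustrative class, not the actual NativeAzureFileSystem code):

```java
import java.io.IOException;

// Illustrative sketch: write() is guarded by checkOpen(), while flush() is
// deliberately a no-op after close because some callers (e.g. compression
// libraries) flush streams they have already closed.
public class CheckOpenDemo {
  private boolean closed;

  private void checkOpen() throws IOException {
    if (closed) {
      throw new IOException("Stream is closed!");
    }
  }

  public void write(int b) throws IOException {
    checkOpen();  // write after close must fail
    // ... buffer the byte ...
  }

  public void flush() throws IOException {
    if (closed) {
      return;  // tolerated: no data can be pending once close() completed
    }
    // ... push buffered data ...
  }

  public void close() {
    closed = true;
  }

  public static void main(String[] args) throws IOException {
    CheckOpenDemo stream = new CheckOpenDemo();
    stream.write('a');
    stream.close();
    stream.flush();  // no exception: flush after close is a no-op
    try {
      stream.write('b');
    } catch (IOException e) {
      System.out.println("write after close: " + e.getMessage());
    }
  }
}
```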





[GitHub] [hadoop] steveloughran commented on issue #1712: HADOOP-16699: Add verbose TRACE logging to ABFS

2020-01-06 Thread GitBox
steveloughran commented on issue #1712: HADOOP-16699: Add verbose TRACE logging 
to ABFS
URL: https://github.com/apache/hadoop/pull/1712#issuecomment-571274373
 
 
   well, provided you are aware that moving to DurationInfo will essentially 
require rolling back this logging, I am happy to merge it in.
   
   Is this what everyone wants?





[GitHub] [hadoop] steveloughran closed pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in…

2020-01-06 Thread GitBox
steveloughran closed pull request #1702: HDFS-14788 Use dynamic regex filter to 
ignore copy of source files in…
URL: https://github.com/apache/hadoop/pull/1702
 
 
   





[GitHub] [hadoop] steveloughran commented on issue #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in…

2020-01-06 Thread GitBox
steveloughran commented on issue #1702: HDFS-14788 Use dynamic regex filter to 
ignore copy of source files in…
URL: https://github.com/apache/hadoop/pull/1702#issuecomment-571270584
 
 
   +1
   committed to trunk after a full distcp retest. Thanks!





[GitHub] [hadoop] ThomasMarquardt commented on a change in pull request #1791: HADOOP-16785: wasb to raise IOE if write() invoked on a closed stream

2020-01-06 Thread GitBox
ThomasMarquardt commented on a change in pull request #1791: HADOOP-16785: wasb 
to raise IOE if write() invoked on a closed stream
URL: https://github.com/apache/hadoop/pull/1791#discussion_r363435811
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCreate.java
 ##
 @@ -104,4 +107,39 @@ public void testCreateNonRecursive2() throws Exception {
 .close();
 assertIsFile(fs, testFile);
   }
+
+  /**
+   * Attempts to use the ABFS stream after it is closed.
+   */
+  @Test
+  public void testWriteAfterClose() throws Throwable {
+final AzureBlobFileSystem fs = getFileSystem();
+Path testPath = new Path(TEST_FOLDER_PATH, TEST_CHILD_FILE);
+FSDataOutputStream out = fs.create(testPath);
+out.close();
+intercept(IOException.class, () -> out.write('a'));
+intercept(IOException.class, () -> out.write(new byte[]{'a'}));
+// hsync is not ignored on a closed stream
+// out.hsync();
+out.flush();
+out.close();
+  }
+
+  /**
+   * Attempts to double close an ABFS output stream from within a
+   * FilterOutputStream.
+   * That class handles a double failure on close badly if the second
+   * exception rethrows the first.
+   */
+  @Test
+  public void testFilteredDoubleClose() throws Throwable {
 
 Review comment:
   Double close would have already been passing with these changes.  To really 
test the try-with-resources I think you need to modify the underlying blob 
after fs.create but before flush so that flush fails and sets 
AbfsOutputStream.lastError.  For example, if you write some data to the blob so 
that the position held by the AbfsOutputStream is invalid, flush may fail, as 
in the following test:
   
   ```java
   out = fs.create(path);
   out.write('a');
   // externally write data to the same path so that the next call to
   // flush will fail due to the position being off
   intercept(IOException.class, () -> out.flush());
   intercept(IOException.class, () -> out.close());
   ```





[GitHub] [hadoop] hadoop-yetus commented on issue #1635: HADOOP-16596. [pb-upgrade] Use shaded protobuf classes from hadoop-thirdparty dependency

2020-01-06 Thread GitBox
hadoop-yetus commented on issue #1635: HADOOP-16596. [pb-upgrade] Use shaded 
protobuf classes from hadoop-thirdparty dependency
URL: https://github.com/apache/hadoop/pull/1635#issuecomment-571265883
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m  0s |  Docker mode activated.  |
   | -1 :x: |  docker  |   3m 10s |  Docker failed to build 
yetus/hadoop:c44943d1fc3.  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/1635 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1635/3/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] ThomasMarquardt commented on a change in pull request #1791: HADOOP-16785: wasb to raise IOE if write() invoked on a closed stream

2020-01-06 Thread GitBox
ThomasMarquardt commented on a change in pull request #1791: HADOOP-16785: wasb 
to raise IOE if write() invoked on a closed stream
URL: https://github.com/apache/hadoop/pull/1791#discussion_r363432586
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCreate.java
 ##
 @@ -104,4 +107,39 @@ public void testCreateNonRecursive2() throws Exception {
 .close();
 assertIsFile(fs, testFile);
   }
+
+  /**
+   * Attempts to use the ABFS stream after it is closed.
+   */
+  @Test
+  public void testWriteAfterClose() throws Throwable {
+final AzureBlobFileSystem fs = getFileSystem();
+Path testPath = new Path(TEST_FOLDER_PATH, TEST_CHILD_FILE);
+FSDataOutputStream out = fs.create(testPath);
+out.close();
+intercept(IOException.class, () -> out.write('a'));
+intercept(IOException.class, () -> out.write(new byte[]{'a'}));
+// hsync is not ignored on a closed stream
+// out.hsync();
 
 Review comment:
   Why is hsync special?





[GitHub] [hadoop] ThomasMarquardt commented on a change in pull request #1791: HADOOP-16785: wasb to raise IOE if write() invoked on a closed stream

2020-01-06 Thread GitBox
ThomasMarquardt commented on a change in pull request #1791: HADOOP-16785: wasb 
to raise IOE if write() invoked on a closed stream
URL: https://github.com/apache/hadoop/pull/1791#discussion_r363431500
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
 ##
 @@ -1198,6 +1201,17 @@ public void setEncodedKey(String anEncodedKey) {
 private void restoreKey() throws IOException {
   store.rename(getEncodedKey(), getKey());
 }
+
+/**
+ * Check for the stream being open.
+ * @throws IOException if the stream is closed.
+ */
+private void checkOpen() throws IOException {
 
 Review comment:
   After thinking about this, I have changed my mind and no longer think it is 
necessary to check for closure inside flush, hflush, hsync, or hasCapabilities. 
 These are no-ops when the buffers are empty, except for hasCapabilities which 
doesn't throw (so it doesn't even make sense for hasCapabilities to check).
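    The distinction the reviewer draws — mutating calls like write() must fail after close(), while flush() of an empty buffer is a harmless no-op — can be sketched in isolation. This is a minimal, hypothetical illustration of the pattern under discussion, not the actual NativeAzureFileSystem code; the class name and the elided buffering are invented for the example.

    ```java
    import java.io.IOException;
    import java.io.OutputStream;

    /**
     * Minimal sketch of the close-check pattern: write() is guarded by
     * checkOpen(), while flush() is deliberately left unguarded because
     * flushing an empty buffer after close() is a no-op.
     */
    public class ClosedCheckStream extends OutputStream {
      private boolean closed = false;

      private void checkOpen() throws IOException {
        if (closed) {
          throw new IOException("Stream is closed");
        }
      }

      @Override
      public void write(int b) throws IOException {
        checkOpen();   // guard mutating operations only
        // ... buffer the byte (elided in this sketch) ...
      }

      @Override
      public void flush() throws IOException {
        // intentionally no checkOpen(): flush of an empty buffer is a no-op
      }

      @Override
      public void close() {
        closed = true;   // idempotent; second close() is harmless
      }

      public static void main(String[] args) throws IOException {
        ClosedCheckStream out = new ClosedCheckStream();
        out.write('a');
        out.close();
        out.flush();   // still succeeds on a closed stream
        try {
          out.write('b');
          throw new AssertionError("expected IOException after close");
        } catch (IOException expected) {
          // write after close correctly rejected
        }
      }
    }
    ```

    The same split is what the comment argues for: guard the state-changing entry points, leave the idempotent ones alone.
    
    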





[GitHub] [hadoop] ThomasMarquardt commented on a change in pull request #1791: HADOOP-16785: wasb to raise IOE if write() invoked on a closed stream

2020-01-06 Thread GitBox
ThomasMarquardt commented on a change in pull request #1791: HADOOP-16785: wasb 
to raise IOE if write() invoked on a closed stream
URL: https://github.com/apache/hadoop/pull/1791#discussion_r363422859
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
 ##
 @@ -259,7 +259,15 @@ public synchronized void close() throws IOException {
   }
 
   private synchronized void flushInternal(boolean isClose) throws IOException {
-maybeThrowLastError();
+try {
+  maybeThrowLastError();
+} catch (IOException e) {
+  if (isClose) {
+// wrap existing exception so as to avoid breaking try-with-resources
+throw new IOException("Skipping final flush and write due to " + e, e);
+  } else
+throw e;
+}
 
 Review comment:
   Instead, I think we should update AbfsOutputStream.close to wrap the 
exception, short of having the Java implementers fix this. :)
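    The wrapping idea can be sketched outside the ABFS code. The class below is hypothetical (it is not AbfsOutputStream, and recordFailure stands in for a write failure recorded earlier); it only shows why wrapping in close() keeps the root cause visible to try-with-resources callers while adding the "final flush was skipped" context.

    ```java
    import java.io.Closeable;
    import java.io.IOException;

    /**
     * Sketch: close() wraps a previously recorded failure so callers see
     * both where the error surfaced and its root cause.
     */
    public class WrappingStream implements Closeable {
      private IOException lastError;   // would be set by an earlier failed write

      void recordFailure(IOException e) {
        lastError = e;
      }

      @Override
      public void close() throws IOException {
        if (lastError != null) {
          // wrap rather than rethrow bare, so the message explains that the
          // terminal flush was skipped while the cause chain keeps the origin
          throw new IOException("Skipping final flush due to " + lastError,
              lastError);
        }
      }

      public static void main(String[] args) {
        WrappingStream s = new WrappingStream();
        s.recordFailure(new IOException("disk full"));
        try {
          s.close();
        } catch (IOException e) {
          if (!"disk full".equals(e.getCause().getMessage())) {
            throw new AssertionError("root cause was lost in wrapping");
          }
        }
      }
    }
    ```
    
    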





[GitHub] [hadoop] hadoop-yetus commented on issue #1620: HADOOP-16642. ITestDynamoDBMetadataStoreScale fails when throttled.

2020-01-06 Thread GitBox
hadoop-yetus commented on issue #1620: HADOOP-16642. 
ITestDynamoDBMetadataStoreScale fails when throttled.
URL: https://github.com/apache/hadoop/pull/1620#issuecomment-571253386
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  29m 59s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  21m  6s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 23s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 44s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m  0s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 57s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 26s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 18s |  hadoop-tools/hadoop-aws: The 
patch generated 1 new + 12 unchanged - 0 fixed = 13 total (was 12)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  1s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 10s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m  1s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 12s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 28s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  90m 10s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1620/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1620 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux aa363eafa2a2 4.15.0-70-generic #79-Ubuntu SMP Tue Nov 12 
10:36:11 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / dd2607e |
   | Default Java | 1.8.0_232 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1620/3/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1620/3/testReport/ |
   | Max. process+thread count | 427 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1620/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1707: HADOOP-16697. Tune/audit auth mode

2020-01-06 Thread GitBox
hadoop-yetus commented on issue #1707: HADOOP-16697. Tune/audit auth mode
URL: https://github.com/apache/hadoop/pull/1707#issuecomment-571248639
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 30s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
12 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m 28s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 41s |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 40s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   2m 51s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  6s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 51s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  3s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m  8s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 26s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 29s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 23s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m  9s |  the patch passed  |
   | +1 :green_heart: |  javac  |  18m  9s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 50s |  root: The patch generated 3 new 
+ 96 unchanged - 0 fixed = 99 total (was 96)  |
   | +1 :green_heart: |  mvnsite  |   2m 12s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 3 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m 14s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 59s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   3m 48s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 52s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   1m 31s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 48s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 132m 43s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/18/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1707 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml markdownlint |
   | uname | Linux 46978472041a 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 768ee22 |
   | Default Java | 1.8.0_232 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/18/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/18/artifact/out/whitespace-eol.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/18/testReport/ |
   | Max. process+thread count | 1346 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/18/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[jira] [Commented] (HADOOP-16785) Improve wasb and abfs resilience on double close() calls

2020-01-06 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17009061#comment-17009061
 ] 

Hadoop QA commented on HADOOP-16785:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
58s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 58s{color} | {color:orange} root: The patch generated 1 new + 29 unchanged - 
0 fixed = 30 total (was 29) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 56s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
32s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}124m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestFixKerberosTicketOrder |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1791/4/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/1791 |
| JIRA Issue | HADOOP-16785 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linu

[GitHub] [hadoop] hadoop-yetus commented on issue #1791: HADOOP-16785: wasb to raise IOE if write() invoked on a closed stream

2020-01-06 Thread GitBox
hadoop-yetus commented on issue #1791: HADOOP-16785: wasb to raise IOE if 
write() invoked on a closed stream
URL: https://github.com/apache/hadoop/pull/1791#issuecomment-571242097
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 12s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
4 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m 18s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 49s |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 29s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   2m 49s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  0s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 41s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 55s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 58s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  1s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 13s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 51s |  the patch passed  |
   | +1 :green_heart: |  javac  |  16m 51s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 58s |  root: The patch generated 1 new 
+ 29 unchanged - 0 fixed = 30 total (was 29)  |
   | +1 :green_heart: |  mvnsite  |   2m 13s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 13s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  2s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   3m 38s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   8m 56s |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  unit  |   1m 32s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 45s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 124m 39s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.security.TestFixKerberosTicketOrder |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1791/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1791 |
   | JIRA Issue | HADOOP-16785 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux eb4d151346fc 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 768ee22 |
   | Default Java | 1.8.0_232 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1791/4/artifact/out/diff-checkstyle-root.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1791/4/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1791/4/testReport/ |
   | Max. process+thread count | 1344 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-azure 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1791/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] ThomasMarquardt commented on a change in pull request #1791: HADOOP-16785: wasb to raise IOE if write() invoked on a closed stream

2020-01-06 Thread GitBox
ThomasMarquardt commented on a change in pull request #1791: HADOOP-16785: wasb 
to raise IOE if write() invoked on a closed stream
URL: https://github.com/apache/hadoop/pull/1791#discussion_r363405980
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
 ##
 @@ -1198,6 +1201,17 @@ public void setEncodedKey(String anEncodedKey) {
 private void restoreKey() throws IOException {
   store.rename(getEncodedKey(), getKey());
 }
+
+/**
+ * Check for the stream being open.
+ * @throws IOException if the stream is closed.
+ */
+private void checkOpen() throws IOException {
 
 Review comment:
   Also, a similar issue exists for NativeAzureFsInputStream.  I should add 
that I'm inclined not to fix this stuff, the reason being that it has been like 
this for a long time, I'm not aware of any related customer support issues, and 
there is risk in any change.  With that said, I will +1 the changes if the risk 
is low--you are fixing valid issues and thank you for that!





[GitHub] [hadoop] ThomasMarquardt commented on a change in pull request #1791: HADOOP-16785: wasb to raise IOE if write() invoked on a closed stream

2020-01-06 Thread GitBox
ThomasMarquardt commented on a change in pull request #1791: HADOOP-16785: wasb 
to raise IOE if write() invoked on a closed stream
URL: https://github.com/apache/hadoop/pull/1791#discussion_r363403741
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
 ##
 @@ -1198,6 +1201,17 @@ public void setEncodedKey(String anEncodedKey) {
 private void restoreKey() throws IOException {
   store.rename(getEncodedKey(), getKey());
 }
+
+/**
+ * Check for the stream being open.
+ * @throws IOException if the stream is closed.
+ */
+private void checkOpen() throws IOException {
 
 Review comment:
   We should check for stream closure for all the methods in my opinion, if we 
are going to check for some of them.  In particular, we should check for 
flush() which is inherited and hflush, hsync and hasCapabilities (also sync for 
branch-2).





[GitHub] [hadoop] steveloughran commented on issue #1404: HDFS-13660 Copy file till the source file length during distcp

2020-01-06 Thread GitBox
steveloughran commented on issue #1404: HDFS-13660 Copy file till the source 
file length during distcp
URL: https://github.com/apache/hadoop/pull/1404#issuecomment-571225773
 
 
   sorry, my mistake. Thought this was in and was doing some more reviews. 
Ignore that. I'm just trying to run through various outstanding PRs before the 
2020 ones come in ..





[GitHub] [hadoop] steveloughran commented on a change in pull request #1404: HDFS-13660 Copy file till the source file length during distcp

2020-01-06 Thread GitBox
steveloughran commented on a change in pull request #1404: HDFS-13660 Copy file 
till the source file length during distcp
URL: https://github.com/apache/hadoop/pull/1404#discussion_r363390848
 
 

 ##
 File path: 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/util/TestDistCpUtils.java
 ##
 @@ -63,6 +63,7 @@
 import static org.apache.hadoop.fs.permission.FsAction.READ_EXECUTE;
 import static org.apache.hadoop.fs.permission.FsAction.READ_WRITE;
 import static org.apache.hadoop.hdfs.server.namenode.AclTestHelpers.aclEntry;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
 
 Review comment:
   add a newline after. thanks





[GitHub] [hadoop] steveloughran commented on a change in pull request #1404: HDFS-13660 Copy file till the source file length during distcp

2020-01-06 Thread GitBox
steveloughran commented on a change in pull request #1404: HDFS-13660 Copy file 
till the source file length during distcp
URL: https://github.com/apache/hadoop/pull/1404#discussion_r363390684
 
 

 ##
 File path: 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/util/TestDistCpUtils.java
 ##
 @@ -1248,28 +1245,24 @@ public void testCompareFileLengthsAndChecksums() 
throws IOException {
 replFactor, srcSeed);
 DFSTestUtil.createFile(fs, dstWithChecksum1, 1024,
 replFactor, srcSeed);
-DistCpUtils.compareFileLengthsAndChecksums(fs, srcWithChecksum1,
-null, fs, dstWithChecksum1, false);
-DistCpUtils.compareFileLengthsAndChecksums(fs, srcWithChecksum1,
+DistCpUtils.compareFileLengthsAndChecksums(1024, fs, srcWithChecksum1,
 
 Review comment:
   how about we factor out all these 1024s into a single variable, including 
the one on L1246





[GitHub] [hadoop] steveloughran commented on a change in pull request #1404: HDFS-13660 Copy file till the source file length during distcp

2020-01-06 Thread GitBox
steveloughran commented on a change in pull request #1404: HDFS-13660 Copy file 
till the source file length during distcp
URL: https://github.com/apache/hadoop/pull/1404#discussion_r363390247
 
 

 ##
 File path: 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/util/TestDistCpUtils.java
 ##
 @@ -18,6 +18,7 @@
 
 package org.apache.hadoop.tools.util;
 
+import org.apache.hadoop.tools.DistCpConstants;
 
 Review comment:
   move into the other distcp imports





[GitHub] [hadoop] steveloughran commented on a change in pull request #1404: HDFS-13660 Copy file till the source file length during distcp

2020-01-06 Thread GitBox
steveloughran commented on a change in pull request #1404: HDFS-13660 Copy file 
till the source file length during distcp
URL: https://github.com/apache/hadoop/pull/1404#discussion_r363390092
 
 

 ##
 File path: 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestRetriableFileCopyCommand.java
 ##
 @@ -24,6 +24,7 @@
 import org.apache.hadoop.mapreduce.*;
 import org.apache.hadoop.tools.CopyListingFileStatus;
 import org.apache.hadoop.tools.mapred.CopyMapper.FileAction;
+import org.junit.Assert;
 
 Review comment:
   add a newline above this to separate package groups





[GitHub] [hadoop] steveloughran commented on a change in pull request #1404: HDFS-13660 Copy file till the source file length during distcp

2020-01-06 Thread GitBox
steveloughran commented on a change in pull request #1404: HDFS-13660 Copy file 
till the source file length during distcp
URL: https://github.com/apache/hadoop/pull/1404#discussion_r363389669
 
 

 ##
 File path: 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestCopyMapper.java
 ##
 @@ -444,6 +450,57 @@ private void testCopyingExistingFiles(FileSystem fs, 
CopyMapper copyMapper,
 }
   }
 
+  @Test(timeout = 4)
+  public void testCopyWhileAppend() throws Exception {
+deleteState();
+mkdirs(SOURCE_PATH + "/1");
+touchFile(SOURCE_PATH + "/1/3");
+CopyMapper copyMapper = new CopyMapper();
+StubContext stubContext = new StubContext(getConfiguration(), null, 0);
+Mapper.Context context =
+stubContext.getContext();
+copyMapper.setup(context);
+final Path path = new Path(SOURCE_PATH + "/1/3");
+int manyBytes = 1;
+appendFile(path, manyBytes);
+ScheduledExecutorService scheduledExecutorService =
+Executors.newSingleThreadScheduledExecutor();
+Runnable task = new Runnable() {
+  public void run() {
+try {
+  int maxAppendAttempts = 20;
+  int appendCount = 0;
+  while (appendCount < maxAppendAttempts) {
+appendFile(path, 1000);
+Thread.sleep(200);
+appendCount++;
+  }
+} catch (IOException | InterruptedException e) {
+LOG.error("Exception encountered ", e);
+Assert.fail("Test failed: " + e.getMessage());
 
 Review comment:
   This issue is still open: we need that stack trace. Alternatively, save the caught 
exception to a variable outside the runnable; after the run() completes, rethrow 
it in the test thread if it is non-null.
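
   The pattern described above — capturing an exception raised inside a Runnable and surfacing it in the test thread so the stack trace survives — can be sketched as follows. This is an illustrative standalone sketch, not code from the patch; the class name and the simulated failure are hypothetical stand-ins for the `appendFile()` loop.

   ```java
   import java.io.IOException;
   import java.util.concurrent.atomic.AtomicReference;

   public class CapturedFailure {
       // Shared slot the background task writes its failure into.
       static final AtomicReference<Throwable> failure = new AtomicReference<>();

       public static void main(String[] args) throws Exception {
           Runnable task = () -> {
               try {
                   // Stand-in for the appendFile() loop in the test.
                   throw new IOException("simulated append failure");
               } catch (IOException e) {
                   failure.compareAndSet(null, e); // save instead of swallowing
               }
           };
           Thread worker = new Thread(task);
           worker.start();
           worker.join();

           // Back on the main (test) thread: inspect the captured failure.
           // In a JUnit test you would rethrow it, e.g.
           // throw new AssertionError(caught), so the full trace is reported.
           Throwable caught = failure.get();
           if (caught != null) {
               System.out.println("captured: " + caught.getMessage());
           }
       }
   }
   ```

   This keeps the original exception object intact, unlike `Assert.fail(e.getMessage())`, which discards the stack trace.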





[GitHub] [hadoop] steveloughran commented on a change in pull request #1404: HDFS-13660 Copy file till the source file length during distcp

2020-01-06 Thread GitBox
steveloughran commented on a change in pull request #1404: HDFS-13660 Copy file 
till the source file length during distcp
URL: https://github.com/apache/hadoop/pull/1404#discussion_r363389171
 
 

 ##
 File path: 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestCopyMapper.java
 ##
 @@ -444,6 +449,54 @@ private void testCopyingExistingFiles(FileSystem fs, 
CopyMapper copyMapper,
 }
   }
 
+  @Test
+  public void testCopyWhileAppend() throws Exception {
+deleteState();
+mkdirs(SOURCE_PATH + "/1");
+touchFile(SOURCE_PATH + "/1/3");
+CopyMapper copyMapper = new CopyMapper();
+StubContext stubContext = new StubContext(getConfiguration(), null, 0);
+Mapper.Context context =
+stubContext.getContext();
+copyMapper.setup(context);
+final Path path = new Path(SOURCE_PATH + "/1/3");
+int manyBytes = 1;
+appendFile(path, manyBytes);
+ScheduledExecutorService scheduledExecutorService = 
Executors.newSingleThreadScheduledExecutor();
+Runnable task = new Runnable() {
+  public void run() {
+try {
+  int maxAppendAttempts = 20;
+  int appendCount = 0;
+  while (appendCount < maxAppendAttempts) {
+appendFile(path, 1000);
+Thread.sleep(200);
+appendCount++;
+  }
+} catch (IOException | InterruptedException e) {
+  e.printStackTrace();
+}
+  }
+};
+scheduledExecutorService.schedule(task, 10, TimeUnit.MILLISECONDS);
+boolean isFileMismatchErrorPresent = false;
+try {
 
 Review comment:
   OK





[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1620: HADOOP-16642. ITestDynamoDBMetadataStoreScale fails when throttled.

2020-01-06 Thread GitBox
hadoop-yetus removed a comment on issue #1620: HADOOP-16642. 
ITestDynamoDBMetadataStoreScale fails when throttled.
URL: https://github.com/apache/hadoop/pull/1620#issuecomment-546320706
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 76 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1211 | trunk passed |
   | +1 | compile | 31 | trunk passed |
   | +1 | checkstyle | 22 | trunk passed |
   | +1 | mvnsite | 35 | trunk passed |
   | +1 | shadedclient | 883 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 26 | trunk passed |
   | 0 | spotbugs | 57 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 56 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 32 | the patch passed |
   | +1 | compile | 27 | the patch passed |
   | +1 | javac | 27 | the patch passed |
   | -0 | checkstyle | 18 | hadoop-tools/hadoop-aws: The patch generated 1 new 
+ 12 unchanged - 0 fixed = 13 total (was 12) |
   | +1 | mvnsite | 30 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 894 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 24 | the patch passed |
   | +1 | findbugs | 62 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 80 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3630 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.4 Server=19.03.4 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1620/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1620 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 6ee711400b8c 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 8625265 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1620/2/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1620/2/testReport/ |
   | Max. process+thread count | 421 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1620/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1620: HADOOP-16642. ITestDynamoDBMetadataStoreScale fails when throttled.

2020-01-06 Thread GitBox
hadoop-yetus removed a comment on issue #1620: HADOOP-16642. 
ITestDynamoDBMetadataStoreScale fails when throttled.
URL: https://github.com/apache/hadoop/pull/1620#issuecomment-539632520
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 78 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1217 | trunk passed |
   | +1 | compile | 33 | trunk passed |
   | +1 | checkstyle | 24 | trunk passed |
   | +1 | mvnsite | 37 | trunk passed |
   | +1 | shadedclient | 857 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 25 | trunk passed |
   | 0 | spotbugs | 61 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 59 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 32 | the patch passed |
   | +1 | compile | 27 | the patch passed |
   | +1 | javac | 27 | the patch passed |
   | -0 | checkstyle | 18 | hadoop-tools/hadoop-aws: The patch generated 1 new 
+ 12 unchanged - 0 fixed = 13 total (was 12) |
   | +1 | mvnsite | 32 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 883 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 23 | the patch passed |
   | +1 | findbugs | 62 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 75 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3607 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1620/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1620 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux b11807aba0de 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 91320b4 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1620/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1620/1/testReport/ |
   | Max. process+thread count | 427 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1620/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] steveloughran commented on issue #1620: HADOOP-16642. ITestDynamoDBMetadataStoreScale fails when throttled.

2020-01-06 Thread GitBox
steveloughran commented on issue #1620: HADOOP-16642. 
ITestDynamoDBMetadataStoreScale fails when throttled.
URL: https://github.com/apache/hadoop/pull/1620#issuecomment-571219014
 
 
   Rebased and retesting. @bgaborg, can you take a quick look at this?





[jira] [Commented] (HADOOP-16785) Improve wasb and abfs resilience on double close() calls

2020-01-06 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17008990#comment-17008990
 ] 

Hadoop QA commented on HADOOP-16785:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m  
1s{color} | {color:blue} Used deprecated FindBugs config; considering switching 
to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
7s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
9s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
38s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
53s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1791/3/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/1791 |
| JIRA Issue | HADOOP-16785 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 543c92b9b053 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | p

[GitHub] [hadoop] steveloughran commented on a change in pull request #1794: HADOOP-15887: Add an option to avoid writing data locally in Distcp

2020-01-06 Thread GitBox
steveloughran commented on a change in pull request #1794: HADOOP-15887: Add an 
option to avoid writing data locally in Distcp
URL: https://github.com/apache/hadoop/pull/1794#discussion_r363376799
 
 

 ##
 File path: 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java
 ##
 @@ -197,14 +200,25 @@ private long copyToFile(Path targetPath, FileSystem 
targetFS,
   targetFS, targetPath);
   final long blockSize = getBlockSize(fileAttributes, source,
   targetFS, targetPath);
+  EnumSet<CreateFlag> createFlags =
+  EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE);
+  if (noLocalWrite) {
+createFlags.add(CreateFlag.NO_LOCAL_WRITE);
+  }
   FSDataOutputStream out = targetFS.create(targetPath, permission,
-  EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE),
-  copyBufferSize, repl, blockSize, context,
+  createFlags, copyBufferSize, repl, blockSize, context,
   getChecksumOpt(fileAttributes, sourceChecksum));
   outStream = new BufferedOutputStream(out);
 } else {
-  outStream = new BufferedOutputStream(targetFS.append(targetPath,
-  copyBufferSize));
+  if (targetFS instanceof DistributedFileSystem && noLocalWrite) {
+outStream = new BufferedOutputStream(
+((DistributedFileSystem) targetFS).append(targetPath,
 
 Review comment:
   you need to do an explicit cast here? Ugly. And it will mean we need 
hadoop-hdfs on the classpath, always, which object stores may not have. We'd be 
better off (carefully) pulling that HDFS method up into the FileSystem base class.
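
   One way to act on the classpath concern — a sketch of the general technique, not the patch's approach — is to probe for the HDFS class reflectively, so nothing breaks when hadoop-hdfs is absent. The file-system class names below (other than the JDK ones) are illustrative:

   ```java
   public class OptionalClasspathProbe {
       // Returns the class if present on the classpath, null otherwise —
       // the probe a tool could run before taking an HDFS-only code path.
       static Class<?> tryLoad(String name) {
           try {
               return Class.forName(name);
           } catch (ClassNotFoundException e) {
               return null;
           }
       }

       public static void main(String[] args) {
           // java.util.ArrayList stands in for DistributedFileSystem;
           // it is always present, so this takes the HDFS-specific branch.
           System.out.println(tryLoad("java.util.ArrayList") != null
                   ? "hdfs path available" : "fall back to FileSystem API");
           // A genuinely missing class takes the fallback branch.
           System.out.println(tryLoad("org.example.NoSuchFileSystem") != null
                   ? "hdfs path available" : "fall back to FileSystem API");
       }
   }
   ```

   Pulling the method up into the base class (as the review suggests) is cleaner than either the cast or the reflection probe, since callers then compile against `FileSystem` alone.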





[GitHub] [hadoop] steveloughran commented on a change in pull request #1794: HADOOP-15887: Add an option to avoid writing data locally in Distcp

2020-01-06 Thread GitBox
steveloughran commented on a change in pull request #1794: HADOOP-15887: Add an 
option to avoid writing data locally in Distcp
URL: https://github.com/apache/hadoop/pull/1794#discussion_r363378165
 
 

 ##
 File path: hadoop-tools/hadoop-distcp/src/site/markdown/DistCp.md.vm
 ##
 @@ -362,6 +362,7 @@ Command Line Options
 | `-copybuffersize ` | Size of the copy buffer to use. By 
default, `` is set to 8192B | |
 | `-xtrack ` | Save information about missing source files to the 
specified path. | This option is only valid with `-update` option. This is an 
experimental property and it cannot be used with `-atomic` option. |
 | `-direct` | Write directly to destination paths | Useful for avoiding 
potentially very expensive temporary file rename operations when the 
destination is an object store |
+| `-noLocalWrite` | Write data to target cluster with data locality disabled. 
| If this option is set, the distcp task will not write data replication to 
local datanode to avoid datanode being imbalanced. This option is suggested to 
be specified when the data to copy is very large and the DistCp job runs on the 
target cluster. |
 
 Review comment:
   suggest:
   
   Write data to an HDFS cluster with data locality disabled. | If this option 
is set, the distcp tasks will not write data blocks to their local datanodes, 
so avoiding datanodes becoming imbalanced. Recommended when the amount of data 
to copy is very large, the target cluster is HDFS and the DistCp job runs on 
that target cluster. 





[GitHub] [hadoop] steveloughran commented on a change in pull request #1794: HADOOP-15887: Add an option to avoid writing data locally in Distcp

2020-01-06 Thread GitBox
steveloughran commented on a change in pull request #1794: HADOOP-15887: Add an 
option to avoid writing data locally in Distcp
URL: https://github.com/apache/hadoop/pull/1794#discussion_r363378426
 
 

 ##
 File path: 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestIntegration.java
 ##
 @@ -511,7 +518,27 @@ public void testCleanup() {
   Assert.fail("testCleanup failed " + e.getMessage());
 }
   }
-  
+
+  @Test(timeout=10)
+  public void testNoLocalWrite() {
+try {
+  addEntries(listFile, "singlefile1/file1");
+  createFiles("singlefile1/file1", "target");
+
+  Configuration conf = new Configuration(getConf());
+  conf.set("fs.file.impl", MockFileSystem.class.getName());
+  conf.setBoolean("fs.file.impl.disable.cache", true);
+  runTest(listFile, target, false, false, false, false, true, conf);
+
+  checkResult(target, 1);
+} catch (IOException e) {
 
 Review comment:
   remove the catch, just have the test method throw IOException
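
   The suggested shape, sketched with a hypothetical body: declare the checked exception instead of catching it, so the framework reports the full stack trace on failure rather than only `e.getMessage()`.

   ```java
   import java.io.IOException;

   public class ThrowsInsteadOfCatch {
       // Hypothetical stand-in for the distcp test body:
       // addEntries(...); createFiles(...); runTest(...); checkResult(...);
       static void runTestBody() throws IOException {
       }

       // After the suggested change: no try/catch, just `throws IOException`;
       // any failure propagates with its original stack trace.
       public static void testNoLocalWrite() throws IOException {
           runTestBody();
       }

       public static void main(String[] args) throws IOException {
           testNoLocalWrite();
           System.out.println("passed");
       }
   }
   ```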





[GitHub] [hadoop] steveloughran commented on a change in pull request #1794: HADOOP-15887: Add an option to avoid writing data locally in Distcp

2020-01-06 Thread GitBox
steveloughran commented on a change in pull request #1794: HADOOP-15887: Add an 
option to avoid writing data locally in Distcp
URL: https://github.com/apache/hadoop/pull/1794#discussion_r363379090
 
 

 ##
 File path: 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/contract/AbstractContractDistCpTest.java
 ##
 @@ -650,4 +650,22 @@ private Job runDistCpDirectWrite(final Path srcDir, final 
Path destDir)
 Collections.singletonList(srcDir), destDir)
 .withDirectWrite(true)));
   }
+
+  @Test
+  public void testDistCpWithNoLocalWrite() throws Exception {
+describe("test distcp job compatibility with option: noLocalWrite");
+Path target = distCpDeepDirectoryStructure(localFS, localDir, remoteFS,
+remoteDir);
+lsR("Local to update", localFS, localDir);
+lsR("Remote before update", remoteFS, target);
+Job job = runDistCp(buildWithStandardOptions(
+new DistCpOptions.Builder(
+Collections.singletonList(localDir), target)
+.withDeleteMissing(true)
+.withSyncFolder(true)
+.withCRC(true)
+.withOverwrite(false)
+.withNoLocalWrite(true)));
+assertTrue(job.isSuccessful());
 
 Review comment:
   Add a message string to the assertion, e.g. "job failed".





[GitHub] [hadoop] steveloughran commented on a change in pull request #1794: HADOOP-15887: Add an option to avoid writing data locally in Distcp

2020-01-06 Thread GitBox
steveloughran commented on a change in pull request #1794: HADOOP-15887: Add an 
option to avoid writing data locally in Distcp
URL: https://github.com/apache/hadoop/pull/1794#discussion_r363378663
 
 

 ##
 File path: 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestOptionsParser.java
 ##
 @@ -804,4 +804,18 @@ public void testExclusionsOption() {
 "hdfs://localhost:8020/target/"});
 assertThat(options.getFiltersFile()).isEqualTo("/tmp/filters.txt");
   }
+
+  @Test
+  public void testParseNoLocalWrite() {
+DistCpOptions options = OptionsParser.parse(new String[] {
+"hdfs://localhost:8020/source/first",
+"hdfs://localhost:8020/target/"});
+Assert.assertEquals(options.shouldNoLocalWrite(), false);
 
 Review comment:
   use assertFalse





[GitHub] [hadoop] steveloughran commented on a change in pull request #1794: HADOOP-15887: Add an option to avoid writing data locally in Distcp

2020-01-06 Thread GitBox
steveloughran commented on a change in pull request #1794: HADOOP-15887: Add an 
option to avoid writing data locally in Distcp
URL: https://github.com/apache/hadoop/pull/1794#discussion_r363378804
 
 

 ##
 File path: 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestOptionsParser.java
 ##
 @@ -804,4 +804,18 @@ public void testExclusionsOption() {
 "hdfs://localhost:8020/target/"});
 assertThat(options.getFiltersFile()).isEqualTo("/tmp/filters.txt");
   }
+
+  @Test
+  public void testParseNoLocalWrite() {
+DistCpOptions options = OptionsParser.parse(new String[] {
+"hdfs://localhost:8020/source/first",
+"hdfs://localhost:8020/target/"});
+Assert.assertEquals(options.shouldNoLocalWrite(), false);
+
+options = OptionsParser.parse(new String[] {
+"-noLocalWrite",
+"hdfs://localhost:8020/source/first",
+"hdfs://localhost:8020/target/"});
+Assert.assertEquals(options.shouldNoLocalWrite(), true);
 
 Review comment:
   use assertTrue





[GitHub] [hadoop] hadoop-yetus commented on issue #1791: HADOOP-16785: wasb to raise IOE if write() invoked on a closed stream

2020-01-06 Thread GitBox
hadoop-yetus commented on issue #1791: HADOOP-16785: wasb to raise IOE if 
write() invoked on a closed stream
URL: https://github.com/apache/hadoop/pull/1791#issuecomment-571215129
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 11s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
4 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m 46s |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 47s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   2m 31s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  6s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 51s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  3s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m  1s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  7s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 17s |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 15s |  the patch passed  |
   | +1 :green_heart: |  javac  |  17m 15s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m 31s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m  8s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 20s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 14s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   3m 20s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m  9s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   1m 38s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 53s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 123m 15s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1791/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1791 |
   | JIRA Issue | HADOOP-16785 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 543c92b9b053 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4a76ab7 |
   | Default Java | 1.8.0_232 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1791/3/testReport/ |
   | Max. process+thread count | 1373 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-azure 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1791/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16621) [pb-upgrade] spark-hive doesn't compile against hadoop trunk because of Token's marshalling

2020-01-06 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17008981#comment-17008981
 ] 

Steve Loughran commented on HADOOP-16621:
-

I'm for #1, maybe with a backport of the helper methods to older versions, just 
so that you can switch to it everywhere.

And add something to the release notes, obviously. We have the "evolving" tag 
as a get-out.

Nobody is using this outside our code, AFAIK.

Something to raise on the dev lists, IMO.

> [pb-upgrade] spark-hive doesn't compile against hadoop trunk because of 
> Token's marshalling
> ---
>
> Key: HADOOP-16621
> URL: https://issues.apache.org/jira/browse/HADOOP-16621
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Vinayakumar B
>Priority: Major
>
> the move to protobuf 3.x stops spark building because Token has a method 
> which returns a protobuf, and now its returning some v3 types.
> if we want to isolate downstream code from protobuf changes, we need to move 
> that marshalling method from token and put in a helper class.
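The refactoring proposed in the description can be sketched in plain Java. All names below (FakeTokenProto, TokenProtoMarshaller) are illustrative stand-ins, not Hadoop's actual API; the point is that no generated protobuf type appears on Token's public surface, so a shaded or upgraded protobuf no longer leaks into downstream compiles:

```java
// Illustrative only: FakeTokenProto stands in for a generated protobuf
// message whose package would change under shading or an upgrade.
class FakeTokenProto {
    private final byte[] payload;
    FakeTokenProto(byte[] payload) { this.payload = payload.clone(); }
    byte[] toByteArray() { return payload.clone(); }
}

// Simplified stand-in for a Token: note that no protobuf type appears
// anywhere on its public surface.
class Token {
    private final byte[] identifier;
    Token(byte[] identifier) { this.identifier = identifier.clone(); }
    byte[] getIdentifier() { return identifier.clone(); }
}

// Marshalling lives in a separate helper, so only code that explicitly
// opts in ever touches the generated types.
final class TokenProtoMarshaller {
    private TokenProtoMarshaller() {}

    static FakeTokenProto toProto(Token token) {
        return new FakeTokenProto(token.getIdentifier());
    }

    static Token fromProto(FakeTokenProto proto) {
        return new Token(proto.toByteArray());
    }
}

public class TokenMarshallingDemo {
    public static void main(String[] args) {
        Token original = new Token(new byte[] {1, 2, 3});
        // Round-trip through the helper; Token itself never sees protobuf.
        Token copy = TokenProtoMarshaller.fromProto(
                TokenProtoMarshaller.toProto(original));
        System.out.println(copy.getIdentifier().length); // prints 3
    }
}
```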



--



[jira] [Commented] (HADOOP-16785) Improve wasb and abfs resilience on double close() calls

2020-01-06 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17008979#comment-17008979
 ] 

Hadoop QA commented on HADOOP-16785:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
6s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m  
0s{color} | {color:blue} Used deprecated FindBugs config; considering switching 
to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
19s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
34s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
54s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}126m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1791/2/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/1791 |
| JIRA Issue | HADOOP-16785 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux ffaaa026f726 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | p

[GitHub] [hadoop] hadoop-yetus commented on issue #1791: HADOOP-16785: wasb to raise IOE if write() invoked on a closed stream

2020-01-06 Thread GitBox
hadoop-yetus commented on issue #1791: HADOOP-16785: wasb to raise IOE if 
write() invoked on a closed stream
URL: https://github.com/apache/hadoop/pull/1791#issuecomment-571209228
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   2m 28s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
4 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m 52s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m  6s |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 40s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   2m 49s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  6s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 37s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  6s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m  0s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 12s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 18s |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 10s |  the patch passed  |
   | +1 :green_heart: |  javac  |  17m 10s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m 44s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m  4s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m  4s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 14s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   3m 26s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 19s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   1m 34s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 54s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 126m  2s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1791/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1791 |
   | JIRA Issue | HADOOP-16785 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ffaaa026f726 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4a76ab7 |
   | Default Java | 1.8.0_232 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1791/2/testReport/ |
   | Max. process+thread count | 1375 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-azure 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1791/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16756) distcp -update to S3A always overwrites due to block size mismatch

2020-01-06 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17008978#comment-17008978
 ] 

Steve Loughran commented on HADOOP-16756:
-

Think we should still add a -p for "preserve nothing" (-p0)? It's needed for 
hdfs -> (abfs, s3) copies.

> distcp -update to S3A always overwrites due to block size mismatch
> --
>
> Key: HADOOP-16756
> URL: https://issues.apache.org/jira/browse/HADOOP-16756
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, tools/distcp
>Affects Versions: 3.3.0
>Reporter: Daisuke Kobayashi
>Priority: Major
>
> Distcp over S3A always copies all source files no matter the files are 
> changed or not. This is opposite to the statement in the doc below.
> [http://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html]
> {noformat}
> And to use -update to only copy changed files.
> {noformat}
> CopyMapper compares file length as well as block size before copying. While 
> the file length should match, the block size does not. This is apparently 
> because the returned block size from S3A is always 32MB.
> [https://github.com/apache/hadoop/blob/release-3.2.0-RC1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java#L348]
> I'd suppose we should update the documentation or make a code change.
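The comparison at issue can be sketched as follows. `canSkip` is a simplified, hypothetical helper, not CopyMapper's actual code: comparing block sizes unconditionally defeats -update against a store like S3A that reports a fixed 32 MB value, while comparing them only when block sizes are actually preserved restores the skip behaviour:

```java
public class DistcpSkipCheck {
    /**
     * Decide whether an unchanged-looking target file can be skipped.
     * Hypothetical helper for illustration, not CopyMapper's real code.
     */
    static boolean canSkip(long srcLen, long dstLen,
                           long srcBlockSize, long dstBlockSize,
                           boolean preserveBlockSize) {
        if (srcLen != dstLen) {
            return false;  // lengths differ: file changed, must copy
        }
        // Compare block sizes only when the copy actually preserves them;
        // an object store reporting a fixed client-side value would
        // otherwise force a copy of every file.
        return !preserveBlockSize || srcBlockSize == dstBlockSize;
    }

    public static void main(String[] args) {
        long hdfsBlock = 128L * 1024 * 1024;  // 128 MB source block size
        long s3aBlock = 32L * 1024 * 1024;    // S3A's fixed reported value
        // Same length, mismatched block sizes: skippable unless the copy
        // is asked to preserve block size.
        System.out.println(canSkip(1000, 1000, hdfsBlock, s3aBlock, false)); // true
        System.out.println(canSkip(1000, 1000, hdfsBlock, s3aBlock, true));  // false
    }
}
```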



--



[GitHub] [hadoop] hadoop-yetus commented on issue #1635: HADOOP-16596. [pb-upgrade] Use shaded protobuf classes from hadoop-thirdparty dependency

2020-01-06 Thread GitBox
hadoop-yetus commented on issue #1635: HADOOP-16596. [pb-upgrade] Use shaded 
protobuf classes from hadoop-thirdparty dependency
URL: https://github.com/apache/hadoop/pull/1635#issuecomment-571201484
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  25m  7s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m 15s |  No case conflicting files 
found.  |
   | +0 :ok: |  shelldocs  |   0m 15s |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
31 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m 14s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  18m 20s |  trunk passed  |
   | +1 :green_heart: |  compile  |  16m 49s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   4m 45s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  16m 19s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  12m 39s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   7m  7s |  trunk passed  |
   | +0 :ok: |  spotbugs  |  27m 32s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +0 :ok: |  findbugs  |   0m 22s |  branch/hadoop-project no findbugs 
output file (findbugsXml.xml)  |
   | +0 :ok: |  findbugs  |   0m 29s |  
branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests
 no findbugs output file (findbugsXml.xml)  |
   | -1 :x: |  findbugs  |  27m 26s |  root in trunk has 9 extant findbugs 
warnings.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 36s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  30m 41s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 32s |  the patch passed  |
   | +1 :green_heart: |  javac  |  16m 32s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   4m 53s |  root: The patch generated 3 new 
+ 3347 unchanged - 1 fixed = 3350 total (was 3348)  |
   | +1 :green_heart: |  mvnsite  |  17m 31s |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  There were no new shellcheck 
issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m 24s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m 20s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   7m  5s |  the patch passed  |
   | +0 :ok: |  findbugs  |   0m 22s |  hadoop-project has no data from 
findbugs  |
   | +0 :ok: |  findbugs  |   0m 30s |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests has 
no data from findbugs  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 152m 20s |  root in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 17s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 460m 34s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestRedudantBlocks |
   |   | hadoop.hdfs.TestDeadNodeDetection |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
   |   | hadoop.hdfs.server.balancer.TestBalancer |
   |   | hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1635/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1635 |
   | Optional Tests | dupname asflicense shellcheck shelldocs compile javac 
javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux a27810baaadc 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b343e15 |
   | Default Java | 1.8.0_232 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1635/2/artifact/out/branch-findbugs-root-warnings.html
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1635/2/artifact/out/diff-checkstyle-root.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1635/2/artifact/out/patch-unit-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1635/2/testReport/ |
   | Max. process+thread count | 4503 (vs. ulimit of 5500) |
   | modules | C: hadoop-project hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs

[jira] [Commented] (HADOOP-16785) Improve wasb and abfs resilience on double close() calls

2020-01-06 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17008964#comment-17008964
 ] 

Steve Loughran commented on HADOOP-16785:
-

and abfs:
{code}
java.lang.IllegalArgumentException: Self-suppression not permitted
    at java.lang.Throwable.addSuppressed(Throwable.java:1072)
    at java.io.FilterOutputStream.close(FilterOutputStream.java:159)
    at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:303)
    at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:274)
{code}

This is in try-with-resources
{code}
public void close() throws IOException {
  try (OutputStream ostream = out) {
    flush();
  }
}
{code}

Hypothesis: flush() and close() are raising the same exception instance; the 
attempt to add the close() exception as suppressed on the flush() exception is 
rejected.
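That hypothesis is easy to reproduce in plain Java, since Throwable.addSuppressed rejects adding an exception to itself, which is exactly what try-with-resources attempts when the body and close() throw the same instance:

```java
public class SelfSuppressionDemo {
    public static void main(String[] args) {
        // The single stored exception that both flush() and close()
        // would rethrow.
        Exception shared = new Exception("stored failure");
        try {
            // addSuppressed(this) is what try-with-resources effectively
            // does when the body and close() throw the same instance.
            shared.addSuppressed(shared);
        } catch (IllegalArgumentException e) {
            // The JDK rejects self-suppression outright.
            System.out.println(e.getMessage()); // prints "Self-suppression not permitted"
        }
    }
}
```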

> Improve wasb and abfs resilience on double close() calls
> 
>
> Key: HADOOP-16785
> URL: https://issues.apache.org/jira/browse/HADOOP-16785
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
>
> # if you call write() after the NativeAzureFsOutputStream is closed it throws 
> an NPE ... which isn't always caught by closeQuietly code. It needs to raise 
> an IOE
> # abfs close ops can trigger failures in try-with-resources use
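A minimal sketch of the guard described in point 1 (illustrative names, not the actual NativeAzureFsOutputStream code): checking a closed flag before touching internal state turns the NPE into a catchable IOException, and makes a second close() a harmless no-op:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Illustrative wrapper, not the real NativeAzureFsOutputStream.
public class GuardedOutputStream extends OutputStream {
    private OutputStream inner;  // nulled out on close()

    GuardedOutputStream(OutputStream inner) { this.inner = inner; }

    private void checkOpen() throws IOException {
        if (inner == null) {
            // An IOE here is catchable by closeQuietly-style code,
            // unlike the NPE a nulled-out buffer produces.
            throw new IOException("Stream is closed");
        }
    }

    @Override
    public void write(int b) throws IOException {
        checkOpen();
        inner.write(b);
    }

    @Override
    public void close() throws IOException {
        if (inner != null) {
            inner.close();
            inner = null;  // makes a second close() a harmless no-op
        }
    }

    public static void main(String[] args) throws IOException {
        GuardedOutputStream out =
                new GuardedOutputStream(new ByteArrayOutputStream());
        out.write(1);
        out.close();
        out.close();  // double close: no-op, no exception
        try {
            out.write(2);
        } catch (IOException e) {
            System.out.println(e.getMessage()); // prints "Stream is closed"
        }
    }
}
```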



--



[jira] [Updated] (HADOOP-16785) Improve wasb and abfs resilience on double close() calls

2020-01-06 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16785:

Priority: Major  (was: Blocker)

> Improve wasb and abfs resilience on double close() calls
> 
>
> Key: HADOOP-16785
> URL: https://issues.apache.org/jira/browse/HADOOP-16785
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> # if you call write() after the NativeAzureFsOutputStream is closed it throws 
> an NPE ... which isn't always caught by closeQuietly code. It needs to raise 
> an IOE
> # abfs close ops can trigger failures in try-with-resources use



--



[jira] [Updated] (HADOOP-16785) Improve wasb and abfs resilience on double close() calls

2020-01-06 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16785:

Description: 
# if you call write() after the NativeAzureFsOutputStream is closed it throws 
an NPE ... which isn't always caught by closeQuietly code. It needs to raise an 
IOE
# abfs close ops can trigger failures in try-with-resources use

  was:if you call write() after the NativeAzureFsOutputStream is closed it 
throws an NPE ... which isn't always caught by closeQuietly code. It needs to 
raise an IOE


> Improve wasb and abfs resilience on double close() calls
> 
>
> Key: HADOOP-16785
> URL: https://issues.apache.org/jira/browse/HADOOP-16785
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
>
> # if you call write() after the NativeAzureFsOutputStream is closed it throws 
> an NPE ... which isn't always caught by closeQuietly code. It needs to raise 
> an IOE
> # abfs close ops can trigger failures in try-with-resources use



--



[jira] [Updated] (HADOOP-16785) Improve wasb and abfs resilience on double close() calls

2020-01-06 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16785:

Summary: Improve wasb and abfs resilience on double close() calls  (was: 
NativeAzureFsOutputStream NPEs on write() once closed)

> Improve wasb and abfs resilience on double close() calls
> 
>
> Key: HADOOP-16785
> URL: https://issues.apache.org/jira/browse/HADOOP-16785
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
>
> if you call write() after the NativeAzureFsOutputStream is closed it throws 
> an NPE ... which isn't always caught by closeQuietly code. It needs to raise 
> an IOE



--



[GitHub] [hadoop] steveloughran commented on issue #1791: HADOOP-16785: wasb to raise IOE if write() invoked on a closed stream

2020-01-06 Thread GitBox
steveloughran commented on issue #1791: HADOOP-16785: wasb to raise IOE if 
write() invoked on a closed stream
URL: https://github.com/apache/hadoop/pull/1791#issuecomment-571193435
 
 
   tested against azure wales. All good except for HADOOP-16706 and 
org.apache.hadoop.fs.azurebfs.ITestClientUrlScheme.





[jira] [Commented] (HADOOP-16789) In TestZKFailoverController, restore changes from HADOOP-11149 that were dropped by HDFS-6440

2020-01-06 Thread Jim Brennan (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17008959#comment-17008959
 ] 

Jim Brennan commented on HADOOP-16789:
--

Thanks [~vagarychen]!

 

> In TestZKFailoverController, restore changes from HADOOP-11149 that were 
> dropped by HDFS-6440
> -
>
> Key: HADOOP-16789
> URL: https://issues.apache.org/jira/browse/HADOOP-16789
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.10.0
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Minor
> Fix For: 2.10.1
>
> Attachments: HADOOP-16789-branch-2.10.001.patch
>
>
> In our automated tests, we are seeing intermittent failures in 
> TestZKFailoverController.  I have been unable to reproduce the failures 
> locally, but in examining the code, I found a difference that may explain the 
> failures.
> In trunk, HDFS-6440 (Support more than 2 NameNodes. Contributed by Jesse 
> Yates.) was checked in before HADOOP-11149 (TestZKFailoverController times 
> out), which changed the test added in HDFS-6440.
> In branch-2, the order was reversed, and the test that was added in HDFS-6440 
> does not retain the fixes from HADOOP-11149.
> Note that there was also a change from HDFS-10985 
> (o.a.h.ha.TestZKFailoverController should not use fixed time sleep before 
> assertions.) that was missed in the HDFS-6440 backport.
> My proposal is to restore the changes from HADOOP-11149.  I made this change 
> internally and it seems to have fixed the intermittent failures.



--



[GitHub] [hadoop] steveloughran commented on issue #1791: HADOOP-16785: wasb to raise IOE if write() invoked on a closed stream

2020-01-06 Thread GitBox
steveloughran commented on issue #1791: HADOOP-16785: wasb to raise IOE if 
write() invoked on a closed stream
URL: https://github.com/apache/hadoop/pull/1791#issuecomment-571175415
 
 
   
   I can't replicate that in the test, but I can see how it surfaces: it's 
in the close() of a try-with-resources block.
   
   * If a call like out.flush() raises an IOE, try-with-resources tries to 
close the stream
   * and an exception from close() is added to the first IOE via addSuppressed
   * which doesn't let you add the caught exception to itself.
   * So if the ABFS stream close() always rethrows the same exception, you get 
a failure.
   * This could happen if there had been some previous failure and 
maybeRethrowException() was throwing it - the final 
 flushInternal() call will throw the same exception, so addSuppressed will 
fail.
 
   I'm patching flushInternal() so that if it rethrows an exception in close, 
it wraps that with a covering IOE. (it still nests the inner, so there is a 
loop in the chain).
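That wrapping approach can be sketched as follows (illustrative names, not the exact ABFS code): because each call site throws a fresh IOException wrapping the stored failure, try-with-resources suppression succeeds instead of failing with self-suppression:

```java
import java.io.IOException;

// Illustrative stream whose flush() and close() both rethrow a stored
// failure; in the real stream maybeRethrow() would be conditional.
class WrappingStream implements AutoCloseable {
    private final IOException lastError = new IOException("stored failure");

    void flush() throws IOException {
        maybeRethrow();
    }

    @Override
    public void close() throws IOException {
        maybeRethrow();
    }

    private void maybeRethrow() throws IOException {
        // Wrap in a fresh IOException instead of rethrowing the same
        // instance, so two call sites never throw identical objects.
        throw new IOException("stream failed", lastError);
    }
}

public class WrapDemo {
    public static void main(String[] args) {
        try (WrappingStream s = new WrappingStream()) {
            s.flush();
        } catch (IOException e) {
            // flush()'s wrapper is primary; close()'s distinct wrapper is
            // suppressed successfully: no "Self-suppression not permitted".
            System.out.println(e.getSuppressed().length); // prints 1
        }
    }
}
```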
   





[jira] [Commented] (HADOOP-16772) Extract version numbers to head of pom.xml (addendum)

2020-01-06 Thread Tamas Penzes (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17008868#comment-17008868
 ] 

Tamas Penzes commented on HADOOP-16772:
---

[~gabor.bota] thanks for committing #1774, could you please check and commit 
#1773 too?

Thanks.

> Extract version numbers to head of pom.xml (addendum)
> -
>
> Key: HADOOP-16772
> URL: https://issues.apache.org/jira/browse/HADOOP-16772
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Tamas Penzes
>Assignee: Tamas Penzes
>Priority: Major
>
> Forgotten to extract a few version numbers; this is a follow-up ticket of 
> HADOOP-16729.



--



[jira] [Reopened] (HADOOP-16772) Extract version numbers to head of pom.xml (addendum)

2020-01-06 Thread Tamas Penzes (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Penzes reopened HADOOP-16772:
---

> Extract version numbers to head of pom.xml (addendum)
> -
>
> Key: HADOOP-16772
> URL: https://issues.apache.org/jira/browse/HADOOP-16772
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Tamas Penzes
>Assignee: Tamas Penzes
>Priority: Major
>
> Forgotten to extract a few version numbers; this is a follow-up ticket of 
> HADOOP-16729.



--



[jira] [Updated] (HADOOP-16595) [pb-upgrade] Create hadoop-thirdparty artifact to have shaded protobuf

2020-01-06 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-16595:
---
Target Version/s: thirdparty-1.0.0

> [pb-upgrade] Create hadoop-thirdparty artifact to have shaded protobuf
> --
>
> Key: HADOOP-16595
> URL: https://issues.apache.org/jira/browse/HADOOP-16595
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: hadoop-thirdparty
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
>
> Create a separate repo "hadoop-thirdparty" to have shaded dependencies.
> starting with protobuf-java:3.7.1



--



[GitHub] [hadoop] steveloughran commented on issue #1707: HADOOP-16697. Tune/audit auth mode

2020-01-06 Thread GitBox
steveloughran commented on issue #1707: HADOOP-16697. Tune/audit auth mode
URL: https://github.com/apache/hadoop/pull/1707#issuecomment-571136940
 
 
   oops, deleted last yetus review. I'll kick off a new one.





[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1707: HADOOP-16697. Tune/audit auth mode

2020-01-06 Thread GitBox
hadoop-yetus removed a comment on issue #1707: HADOOP-16697. Tune/audit auth 
mode
URL: https://github.com/apache/hadoop/pull/1707#issuecomment-570640020
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  29m 44s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  2s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
12 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m  8s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  18m 11s |  trunk passed  |
   | +1 :green_heart: |  compile  |  16m 44s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   2m 40s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 17s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 56s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 16s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m 10s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 13s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 33s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 20s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m  2s |  the patch passed  |
   | +1 :green_heart: |  javac  |  16m  2s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 41s |  root: The patch generated 3 new 
+ 96 unchanged - 0 fixed = 99 total (was 96)  |
   | +1 :green_heart: |  mvnsite  |   2m 16s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 3 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  13m 19s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  8s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   3m 35s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 18s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   1m 37s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 51s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 149m  2s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/16/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1707 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml markdownlint |
   | uname | Linux a50b0836b119 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b19d87c |
   | Default Java | 1.8.0_232 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/16/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/16/artifact/out/whitespace-eol.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/16/testReport/ |
   | Max. process+thread count | 1368 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/16/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1707: HADOOP-16697. Tune/audit auth mode

2020-01-06 Thread GitBox
hadoop-yetus removed a comment on issue #1707: HADOOP-16697. Tune/audit auth 
mode
URL: https://github.com/apache/hadoop/pull/1707#issuecomment-570636585
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 25s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
12 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m 26s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m  0s |  trunk passed  |
   | +1 :green_heart: |  compile  |  18m 36s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   2m 54s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  6s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 45s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  2s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m 11s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 23s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 31s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 25s |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m  4s |  the patch passed  |
   | +1 :green_heart: |  javac  |  17m  4s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 45s |  root: The patch generated 3 new 
+ 96 unchanged - 0 fixed = 99 total (was 96)  |
   | +1 :green_heart: |  mvnsite  |   2m  7s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 3 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  13m 57s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 56s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   3m 26s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 26s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   1m 26s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 129m 19s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/17/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1707 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml markdownlint |
   | uname | Linux 3c68b09d666a 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b19d87c |
   | Default Java | 1.8.0_232 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/17/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/17/artifact/out/whitespace-eol.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/17/testReport/ |
   | Max. process+thread count | 1347 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/17/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1707: HADOOP-16697. Tune/audit auth mode

2020-01-06 Thread GitBox
hadoop-yetus removed a comment on issue #1707: HADOOP-16697. Tune/audit auth 
mode
URL: https://github.com/apache/hadoop/pull/1707#issuecomment-570346561
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  24m 56s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
12 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m 25s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  18m 23s |  trunk passed  |
   | +1 :green_heart: |  compile  |  16m 30s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   2m 39s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 18s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 59s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 11s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m 11s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 13s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 35s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 21s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m  2s |  the patch passed  |
   | +1 :green_heart: |  javac  |  16m  2s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 46s |  root: The patch generated 5 new 
+ 95 unchanged - 0 fixed = 100 total (was 95)  |
   | +1 :green_heart: |  mvnsite  |   2m 21s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  12m 41s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 11s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   3m 30s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m  7s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   1m 25s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 54s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 143m 44s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/15/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1707 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml markdownlint |
   | uname | Linux 9aa3884cb463 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b19d87c |
   | Default Java | 1.8.0_232 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/15/artifact/out/diff-checkstyle-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/15/testReport/ |
   | Max. process+thread count | 1376 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/15/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] steveloughran commented on issue #1747: HDFS-15042 Add more tests for ByteBufferPositionedReadable.

2020-01-06 Thread GitBox
steveloughran commented on issue #1747: HDFS-15042 Add more tests for 
ByteBufferPositionedReadable.
URL: https://github.com/apache/hadoop/pull/1747#issuecomment-571136496
 
 
   hdfs failures clearly something else
   ```
   [INFO] 
   [INFO] Results:
   [INFO] 
   [ERROR] Failures: 
   [ERROR]   TestMultipleNNPortQOP.testMultipleNNPortOverwriteDownStream:177 
expected: but was:
   [ERROR]   TestRedudantBlocks.testProcessOverReplicatedAndRedudantBlock:138 
expected:<5> but was:<4>
   [ERROR] Errors: 
   [ERROR]   
TestDeadNodeDetection.testDeadNodeDetectionInBackground:105->waitForDeadNode:321
 ? Timeout
   [ERROR]   
TestUnderReplicatedBlocks.testSetRepIncWithUnderReplicatedBlocks:80 ? 
TestTimedOut
   [INFO] 
   ```
   
   will rebase and fix checkstyles anyway





[jira] [Resolved] (HADOOP-16772) Extract version numbers to head of pom.xml (addendum)

2020-01-06 Thread Gabor Bota (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota resolved HADOOP-16772.
-
Resolution: Fixed

> Extract version numbers to head of pom.xml (addendum)
> -
>
> Key: HADOOP-16772
> URL: https://issues.apache.org/jira/browse/HADOOP-16772
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Tamas Penzes
>Assignee: Tamas Penzes
>Priority: Major
>
> Forgot to extract a few version numbers; this is a follow-up ticket of 
> HADOOP-16729.



--



[jira] [Commented] (HADOOP-16772) Extract version numbers to head of pom.xml (addendum)

2020-01-06 Thread Gabor Bota (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17008832#comment-17008832
 ] 

Gabor Bota commented on HADOOP-16772:
-

+1 on PR #1774

> Extract version numbers to head of pom.xml (addendum)
> -
>
> Key: HADOOP-16772
> URL: https://issues.apache.org/jira/browse/HADOOP-16772
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Tamas Penzes
>Assignee: Tamas Penzes
>Priority: Major
>
> Forgot to extract a few version numbers; this is a follow-up ticket of 
> HADOOP-16729.



--



[GitHub] [hadoop] bgaborg merged pull request #1774: HADOOP-16772. Extract version numbers to head of pom.xml (addendum)

2020-01-06 Thread GitBox
bgaborg merged pull request #1774: HADOOP-16772. Extract version numbers to 
head of pom.xml (addendum)
URL: https://github.com/apache/hadoop/pull/1774
 
 
   





[GitHub] [hadoop] bgaborg commented on issue #1774: HADOOP-16772. Extract version numbers to head of pom.xml (addendum)

2020-01-06 Thread GitBox
bgaborg commented on issue #1774: HADOOP-16772. Extract version numbers to head 
of pom.xml (addendum)
URL: https://github.com/apache/hadoop/pull/1774#issuecomment-571132180
 
 
   +1, compile works for me, change seems ok.





[jira] [Assigned] (HADOOP-16621) [pb-upgrade] spark-hive doesn't compile against hadoop trunk because of Token's marshalling

2020-01-06 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B reassigned HADOOP-16621:
--

Assignee: Vinayakumar B

> [pb-upgrade] spark-hive doesn't compile against hadoop trunk because of 
> Token's marshalling
> ---
>
> Key: HADOOP-16621
> URL: https://issues.apache.org/jira/browse/HADOOP-16621
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Vinayakumar B
>Priority: Major
>
> The move to protobuf 3.x stops Spark building because Token has a method 
> which returns a protobuf, and it now returns some v3 types.
> If we want to isolate downstream code from protobuf changes, we need to move 
> that marshalling method out of Token and put it in a helper class.
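
The helper-class pattern described above can be sketched as follows. This is a minimal illustration, not Hadoop's actual code: the class and field names (`TokenProtoMarshaller`, `Token`, etc.) are hypothetical, and the byte-level encoding below stands in for the real protobuf `toByteArray()`/`parseFrom()` calls:

```java
// Hypothetical sketch: keep protobuf types out of the public Token API by
// confining all marshalling to a helper class. If the generated protobuf
// types change (e.g. 2.x -> 3.x), only the helper needs to change.
import java.nio.charset.StandardCharsets;

public class TokenProtoMarshaller {
    // Public API type exposes only plain Java fields; no protobuf leaks out.
    static final class Token {
        final byte[] identifier;
        final String service;
        Token(byte[] identifier, String service) {
            this.identifier = identifier;
            this.service = service;
        }
    }

    // Stand-in for TokenProto.newBuilder()...build().toByteArray().
    static byte[] marshal(Token t) {
        byte[] svc = t.service.getBytes(StandardCharsets.UTF_8);
        byte[] out = new byte[2 + t.identifier.length + svc.length];
        out[0] = (byte) t.identifier.length;
        System.arraycopy(t.identifier, 0, out, 1, t.identifier.length);
        out[1 + t.identifier.length] = (byte) svc.length;
        System.arraycopy(svc, 0, out, 2 + t.identifier.length, svc.length);
        return out;
    }

    // Stand-in for TokenProto.parseFrom(bytes).
    static Token unmarshal(byte[] wire) {
        int idLen = wire[0];
        byte[] id = new byte[idLen];
        System.arraycopy(wire, 1, id, 0, idLen);
        int svcLen = wire[1 + idLen];
        String service = new String(wire, 2 + idLen, svcLen, StandardCharsets.UTF_8);
        return new Token(id, service);
    }

    public static void main(String[] args) {
        Token t = new Token(new byte[]{1, 2, 3}, "nn1:8020");
        Token back = unmarshal(marshal(t));
        System.out.println(back.service);
        System.out.println(back.identifier.length);
    }
}
```

With this split, downstream projects such as Spark compile against `Token` alone and never see the generated protobuf types in its signatures.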



--



[jira] [Commented] (HADOOP-15887) Add an option to avoid writing data locally in Distcp

2020-01-06 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17008813#comment-17008813
 ] 

Hadoop QA commented on HADOOP-15887:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 30m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
43s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 19s{color} | {color:orange} hadoop-tools/hadoop-distcp: The patch generated 
1 new + 113 unchanged - 2 fixed = 114 total (was 115) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 
51s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 95m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1794/1/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/1794 |
| JIRA Issue | HADOOP-15887 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 7b19cd995ff9 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 4a76ab7 |
| Default Java | 1.8.0_232 |
| checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1794/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-distcp.txt
 |
|  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1794/1/testReport/ |
| Max. process+thread 

[GitHub] [hadoop] hadoop-yetus commented on issue #1794: HADOOP-15887: Add an option to avoid writing data locally in Distcp

2020-01-06 Thread GitBox
hadoop-yetus commented on issue #1794: HADOOP-15887: Add an option to avoid 
writing data locally in Distcp
URL: https://github.com/apache/hadoop/pull/1794#issuecomment-571126994
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  30m 21s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
4 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  18m  9s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 25s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  13m 33s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 43s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 41s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 21s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 19s |  hadoop-tools/hadoop-distcp: The 
patch generated 1 new + 113 unchanged - 2 fixed = 114 total (was 115)  |
   | +1 :green_heart: |  mvnsite  |   0m 24s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m  8s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   0m 45s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  12m 51s |  hadoop-distcp in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 30s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  95m 21s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1794/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1794 |
   | JIRA Issue | HADOOP-15887 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 7b19cd995ff9 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4a76ab7 |
   | Default Java | 1.8.0_232 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1794/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-distcp.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1794/1/testReport/ |
   | Max. process+thread count | 411 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1794/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16358) Add an ARM CI for Hadoop

2020-01-06 Thread Vinayakumar B (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17008748#comment-17008748
 ] 

Vinayakumar B commented on HADOOP-16358:


Thanks Everyone

> Add an ARM CI for Hadoop
> 
>
> Key: HADOOP-16358
> URL: https://issues.apache.org/jira/browse/HADOOP-16358
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Zhenyu Zheng
>Assignee: Zhenyu Zheng
>Priority: Major
> Fix For: 3.3.0
>
>
> The CI of Hadoop is currently handled by Jenkins. While the tests run on the 
> x86 architecture, the ARM architecture has not been considered. This leads to 
> a problem: we have no way to test whether a pull request would break the 
> Hadoop deployment on ARM.
> We should add a CI system that supports the ARM architecture. With it, Hadoop 
> could officially support ARM releases in the future. Here I'd like to 
> introduce OpenLab to the community. [OpenLab|https://openlabtesting.org/] is 
> an open source CI system that can test any open source software on either x86 
> or ARM, and is mainly used by GitHub projects. Some 
> [projects|https://github.com/theopenlab/openlab-zuul-jobs/blob/master/zuul.d/jobs.yaml]
>  have integrated it already, such as containerd (a graduated CNCF project; 
> the ARM build is triggered on every PR, 
> [https://github.com/containerd/containerd/pulls]), terraform, and so on.
> OpenLab uses the open source CI software 
> [Zuul|https://github.com/openstack-infra/zuul]. Zuul is used by the OpenStack 
> community as well. Integrating with OpenLab is quite easy using its GitHub 
> app. All config info is open source as well.
> If the Apache Hadoop community is interested, we can help with the 
> integration.



--



[jira] [Assigned] (HADOOP-16358) Add an ARM CI for Hadoop

2020-01-06 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B reassigned HADOOP-16358:
--

Assignee: Zhenyu Zheng

> Add an ARM CI for Hadoop
> 
>
> Key: HADOOP-16358
> URL: https://issues.apache.org/jira/browse/HADOOP-16358
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Zhenyu Zheng
>Assignee: Zhenyu Zheng
>Priority: Major
> Fix For: 3.3.0
>
>
> Now the CI of Hadoop is handled by Jenkins. While the tests run under the
> x86 architecture, the ARM architecture has not been considered. This leads to a
> problem: we have no way to test whether a pull request will break the Hadoop
> deployment on ARM.
> We should add a CI system that supports the ARM architecture. With it, Hadoop
> can officially support ARM releases in the future. Here I'd like to introduce
> OpenLab to the community. [OpenLab|https://openlabtesting.org/] is an open
> source CI system that can test any open source software on either the x86 or
> ARM architecture, and it is mainly used by GitHub projects. Some
> [projects|https://github.com/theopenlab/openlab-zuul-jobs/blob/master/zuul.d/jobs.yaml]
> have integrated it already, such as containerd (a graduated CNCF project; the
> ARM build is triggered on every PR,
> [https://github.com/containerd/containerd/pulls]), Terraform, and so on.
> OpenLab uses the open source CI software [Zuul
> |https://github.com/openstack-infra/zuul] for its CI system. Zuul is used by
> the OpenStack community as well. Integrating with OpenLab is quite easy using
> its GitHub app. All configuration info is open source as well.
> If the Apache Hadoop community is interested, we can help with the
> integration.






[jira] [Commented] (HADOOP-16791) ABFS: Have all external dependent module execution tracked with DurationInfo

2020-01-06 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17008747#comment-17008747
 ] 

Steve Loughran commented on HADOOP-16791:
-


Sounds good. We've just added some speedups to DurationInfo; we should be
backporting that.

Some of my colleagues (hello Hive team) have been expressing concern to me
about how long it takes object store filesystems to be created once their
initialize() method starts making remote HTTP calls to multiple service
endpoints. This matters because FileSystem.get() will create multiple instances
of the same FS endpoint in parallel if initialisation is still ongoing while
separate threads also call the get() method. The discussion has focused on
doing async init in S3A; ABFS could add that too. You'd have a separate thread
doing the token init etc., but then every public API call would need to
potentially block awaiting that init to complete.

+[~rbalamohan] [~gabor.bota]
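The trade-off described here (one background thread performs the expensive initialization; every public call blocks on it exactly once) can be sketched with plain java.util.concurrent primitives. This is an illustrative stand-in, not S3A/ABFS code; all class and method names are hypothetical.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class LazyInitClient {
    private final ExecutorService pool = Executors.newSingleThreadExecutor();
    private final Future<String> token; // stand-in for remote credential setup

    public LazyInitClient() {
        // Kick off the expensive remote calls without blocking the constructor.
        token = pool.submit(() -> {
            Thread.sleep(20); // simulate HTTP round trips to service endpoints
            return "session-token";
        });
    }

    public String read(String path) throws Exception {
        // Every public API call awaits init completion, then proceeds.
        String t = token.get();
        return "read " + path + " with " + t;
    }

    public void close() {
        pool.shutdown();
    }

    public static void main(String[] args) throws Exception {
        LazyInitClient c = new LazyInitClient();
        System.out.println(c.read("/data/part-0"));
        c.close();
    }
}
```

The cost is exactly what the comment notes: the Future.get() in every entry point is a potential blocking point until initialization finishes.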

> ABFS: Have all external dependent module execution tracked with DurationInfo
> 
>
> Key: HADOOP-16791
> URL: https://issues.apache.org/jira/browse/HADOOP-16791
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
> Fix For: 3.3.0
>
>
> To be able to break down the perf impact of external module executions
> within the ABFS driver, add execution time computation using DurationInfo in
> all the relevant places.
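The DurationInfo pattern is a try-with-resources timer wrapped around each external call. A minimal self-contained stand-in looks like the following (the real org.apache.hadoop.util.DurationInfo logs through SLF4J and takes a format string; this simplified version just prints):

```java
import java.time.Duration;
import java.time.Instant;

public class Timed {
    // Simplified analogue of Hadoop's DurationInfo: records elapsed
    // wall-clock time between construction and close().
    static final class DurationScope implements AutoCloseable {
        private final String name;
        private final Instant start = Instant.now();

        DurationScope(String name) {
            this.name = name;
        }

        @Override
        public void close() {
            Duration d = Duration.between(start, Instant.now());
            System.out.println(name + " took " + d.toMillis() + " ms");
        }
    }

    public static void main(String[] args) throws Exception {
        // Wrap the external dependency call so its cost is always reported,
        // even if the call throws.
        try (DurationScope scope = new DurationScope("token acquisition")) {
            Thread.sleep(10); // stand-in for the remote HTTP call
        }
    }
}
```

Because close() runs on every exit path, the duration is reported even when the wrapped call fails, which is what makes this shape convenient for instrumenting external module executions.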






[jira] [Resolved] (HADOOP-16358) Add an ARM CI for Hadoop

2020-01-06 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HADOOP-16358.

Fix Version/s: 3.3.0
   Resolution: Fixed

A Jenkins job has been created to run nightly tests on aarch64:

[https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-qbt-linux-ARM-trunk/]

> Add an ARM CI for Hadoop
> 
>
> Key: HADOOP-16358
> URL: https://issues.apache.org/jira/browse/HADOOP-16358
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Zhenyu Zheng
>Priority: Major
> Fix For: 3.3.0
>
>
> Now the CI of Hadoop is handled by Jenkins. While the tests run under the
> x86 architecture, the ARM architecture has not been considered. This leads to a
> problem: we have no way to test whether a pull request will break the Hadoop
> deployment on ARM.
> We should add a CI system that supports the ARM architecture. With it, Hadoop
> can officially support ARM releases in the future. Here I'd like to introduce
> OpenLab to the community. [OpenLab|https://openlabtesting.org/] is an open
> source CI system that can test any open source software on either the x86 or
> ARM architecture, and it is mainly used by GitHub projects. Some
> [projects|https://github.com/theopenlab/openlab-zuul-jobs/blob/master/zuul.d/jobs.yaml]
> have integrated it already, such as containerd (a graduated CNCF project; the
> ARM build is triggered on every PR,
> [https://github.com/containerd/containerd/pulls]), Terraform, and so on.
> OpenLab uses the open source CI software [Zuul
> |https://github.com/openstack-infra/zuul] for its CI system. Zuul is used by
> the OpenStack community as well. Integrating with OpenLab is quite easy using
> its GitHub app. All configuration info is open source as well.
> If the Apache Hadoop community is interested, we can help with the
> integration.






[jira] [Updated] (HADOOP-16358) Add an ARM CI for Hadoop

2020-01-06 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-16358:
---
Issue Type: Task  (was: Improvement)

> Add an ARM CI for Hadoop
> 
>
> Key: HADOOP-16358
> URL: https://issues.apache.org/jira/browse/HADOOP-16358
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Zhenyu Zheng
>Priority: Major
>
> Now the CI of Hadoop is handled by Jenkins. While the tests run under the
> x86 architecture, the ARM architecture has not been considered. This leads to a
> problem: we have no way to test whether a pull request will break the Hadoop
> deployment on ARM.
> We should add a CI system that supports the ARM architecture. With it, Hadoop
> can officially support ARM releases in the future. Here I'd like to introduce
> OpenLab to the community. [OpenLab|https://openlabtesting.org/] is an open
> source CI system that can test any open source software on either the x86 or
> ARM architecture, and it is mainly used by GitHub projects. Some
> [projects|https://github.com/theopenlab/openlab-zuul-jobs/blob/master/zuul.d/jobs.yaml]
> have integrated it already, such as containerd (a graduated CNCF project; the
> ARM build is triggered on every PR,
> [https://github.com/containerd/containerd/pulls]), Terraform, and so on.
> OpenLab uses the open source CI software [Zuul
> |https://github.com/openstack-infra/zuul] for its CI system. Zuul is used by
> the OpenStack community as well. Integrating with OpenLab is quite easy using
> its GitHub app. All configuration info is open source as well.
> If the Apache Hadoop community is interested, we can help with the
> integration.






[jira] [Updated] (HADOOP-16621) [pb-upgrade] spark-hive doesn't compile against hadoop trunk because of Token's marshalling

2020-01-06 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-16621:
---
Target Version/s: 3.3.0

> [pb-upgrade] spark-hive doesn't compile against hadoop trunk because of 
> Token's marshalling
> ---
>
> Key: HADOOP-16621
> URL: https://issues.apache.org/jira/browse/HADOOP-16621
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Major
>
> The move to protobuf 3.x stops Spark building because Token has a method
> which returns a protobuf, and now it's returning some v3 types.
> If we want to isolate downstream code from protobuf changes, we need to move
> that marshalling method out of Token and put it in a helper class.






[GitHub] [hadoop] steveloughran commented on a change in pull request #1792: HADOOP-16790: Add Write Convenience Methods

2020-01-06 Thread GitBox
steveloughran commented on a change in pull request #1792: HADOOP-16790: Add 
Write Convenience Methods
URL: https://github.com/apache/hadoop/pull/1792#discussion_r363253470
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
 ##
 @@ -1633,4 +1638,119 @@ public static boolean compareFs(FileSystem srcFs, FileSystem destFs) {
     // check for ports
     return srcUri.getPort()==dstUri.getPort();
   }
+
+  /**
+   * Writes bytes to a file. This utility method opens the file for writing,
+   * creating the file if it does not exist, or overwrites an existing file.
+   * All bytes in the byte array are written to the file.
+   *
+   * @param fs the file system with which to create the file
+   * @param path the path to the file
+   * @param bytes the byte array with the bytes to write
+   *
+   * @return the file system
+   *
+   * @throws NullPointerException if any of the arguments are {@code null}
+   * @throws IOException if an I/O error occurs creating or writing to the file
+   */
+  public static FileSystem write(final FileSystem fs, final Path path,
+      final byte[] bytes) throws IOException {
+
+    Objects.requireNonNull(path);
+    Objects.requireNonNull(bytes);
+
+    try (FSDataOutputStream out = fs.create(path)) {
+      out.write(bytes);
+    }
+
+    return fs;
+  }
+
+  /**
+   * Write lines of text to a file. Each line is a char sequence and is written
+   * to the file in sequence with each line terminated by the platform's line
+   * separator, as defined by the system property {@code line.separator}.
+   * Characters are encoded into bytes using the specified charset. This
+   * utility method opens the file for writing, creating the file if it does
+   * not exist, or overwrites an existing file.
+   *
+   * @param fs the file system with which to create the file
+   * @param path the path to the file
+   * @param lines a Collection to iterate over the char sequences
+   * @param cs the charset to use for encoding
+   *
+   * @return the file system
+   *
+   * @throws NullPointerException if any of the arguments are {@code null}
+   * @throws IOException if an I/O error occurs creating or writing to the file
+   */
+  public static FileSystem write(final FileSystem fs, final Path path,
+      final Iterable<? extends CharSequence> lines, final Charset cs)
+      throws IOException {
+
+    Objects.requireNonNull(path);
+    Objects.requireNonNull(lines);
+    Objects.requireNonNull(cs);
+
+    CharsetEncoder encoder = cs.newEncoder();
+    try (FSDataOutputStream out = fs.create(path);
+        BufferedWriter writer =
+            new BufferedWriter(new OutputStreamWriter(out, encoder))) {
+      for (CharSequence line : lines) {
+        writer.append(line);
+        writer.newLine();
+      }
+    }
+    return fs;
+  }
+
+  /**
+   * Write a line of text to a file. Characters are encoded into bytes using
+   * the specified charset. This utility method opens the file for writing,
+   * creating the file if it does not exist, or overwrites an existing file.
+   *
+   * @param fs the file system with which to create the file
+   * @param path the path to the file
+   * @param charseq the char sequence to write to the file
+   * @param cs the charset to use for encoding
+   *
+   * @return the file system
+   *
+   * @throws NullPointerException if any of the arguments are {@code null}
+   * @throws IOException if an I/O error occurs creating or writing to the file
+   */
+  public static FileSystem write(final FileSystem fs, final Path path,
 Review comment:
   add in overwrite options. We've been dealing with 404 caching in S3A, which
relies on createFile(overwrite = false). Unless you make it the default, it
must be something callers can use.
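The reviewer's point can be illustrated without a Hadoop FileSystem (which needs a cluster/config) using java.nio as a stand-in: an explicit overwrite flag lets callers choose create-new-only semantics, which fail fast instead of silently clobbering an existing file. The `write` helper below is a hypothetical sketch, not the PR's API.

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class OverwriteDemo {
    // Hypothetical helper: overwrite=true truncates an existing file,
    // overwrite=false refuses to touch one that already exists.
    static void write(Path p, byte[] bytes, boolean overwrite) throws IOException {
        if (overwrite) {
            Files.write(p, bytes); // defaults: CREATE + TRUNCATE_EXISTING
        } else {
            Files.write(p, bytes, StandardOpenOption.CREATE_NEW); // throws if present
        }
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("overwrite-demo", ".bin");
        try {
            write(p, new byte[] {1, 2}, true); // succeeds: overwrite allowed
            try {
                write(p, new byte[] {3}, false); // file exists, must fail
                throw new AssertionError("expected FileAlreadyExistsException");
            } catch (FileAlreadyExistsException expected) {
                System.out.println("refused to overwrite as requested");
            }
        } finally {
            Files.deleteIfExists(p);
        }
    }
}
```

This is the behaviour the S3A 404-caching workaround depends on: the "fail if it exists" path never issues a read-before-write probe that could poison the negative cache.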
   
   





[GitHub] [hadoop] steveloughran commented on a change in pull request #1792: HADOOP-16790: Add Write Convenience Methods

2020-01-06 Thread GitBox
steveloughran commented on a change in pull request #1792: HADOOP-16790: Add 
Write Convenience Methods
URL: https://github.com/apache/hadoop/pull/1792#discussion_r363252414
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
 ##
 @@ -1493,6 +1495,73 @@ public void testReadSymlinkWithAFileAsInput() throws IOException {
     file.delete();
   }
 
+  /**
+   * Test that bytes are written out correctly to the local file system.
+   */
+  @Test
+  public void testWriteBytes() throws IOException {
+    setupDirs();
+
+    URI uri = tmp.toURI();
+    Configuration conf = new Configuration();
+    FileSystem fs = FileSystem.newInstance(uri, conf);
 
 Review comment:
   just use FileSystem.get()





[GitHub] [hadoop] steveloughran commented on a change in pull request #1792: HADOOP-16790: Add Write Convenience Methods

2020-01-06 Thread GitBox
steveloughran commented on a change in pull request #1792: HADOOP-16790: Add 
Write Convenience Methods
URL: https://github.com/apache/hadoop/pull/1792#discussion_r363253006
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
 ##
 @@ -1633,4 +1638,119 @@ public static boolean compareFs(FileSystem srcFs, FileSystem destFs) {
     // check for ports
     return srcUri.getPort()==dstUri.getPort();
   }
+
+  /**
+   * Writes bytes to a file. This utility method opens the file for writing,
+   * creating the file if it does not exist, or overwrites an existing file.
+   * All bytes in the byte array are written to the file.
+   *
+   * @param fs the file system with which to create the file
+   * @param path the path to the file
+   * @param bytes the byte array with the bytes to write
+   *
+   * @return the file system
+   *
+   * @throws NullPointerException if any of the arguments are {@code null}
+   * @throws IOException if an I/O error occurs creating or writing to the file
+   */
+  public static FileSystem write(final FileSystem fs, final Path path,
+      final byte[] bytes) throws IOException {
+
+    Objects.requireNonNull(path);
+    Objects.requireNonNull(bytes);
+
+    try (FSDataOutputStream out = fs.create(path)) {
+      out.write(bytes);
+    }
+
+    return fs;
+  }
+
+  /**
+   * Write lines of text to a file. Each line is a char sequence and is written
+   * to the file in sequence with each line terminated by the platform's line
+   * separator, as defined by the system property {@code line.separator}.
+   * Characters are encoded into bytes using the specified charset. This
+   * utility method opens the file for writing, creating the file if it does
+   * not exist, or overwrites an existing file.
+   *
+   * @param fs the file system with which to create the file
+   * @param path the path to the file
+   * @param lines a Collection to iterate over the char sequences
+   * @param cs the charset to use for encoding
+   *
+   * @return the file system
+   *
+   * @throws NullPointerException if any of the arguments are {@code null}
+   * @throws IOException if an I/O error occurs creating or writing to the file
+   */
+  public static FileSystem write(final FileSystem fs, final Path path,
+      final Iterable<? extends CharSequence> lines, final Charset cs)
+      throws IOException {
+
+    Objects.requireNonNull(path);
+    Objects.requireNonNull(lines);
+    Objects.requireNonNull(cs);
+
+    CharsetEncoder encoder = cs.newEncoder();
+    try (FSDataOutputStream out = fs.create(path);
+        BufferedWriter writer =
+            new BufferedWriter(new OutputStreamWriter(out, encoder))) {
+      for (CharSequence line : lines) {
+        writer.append(line);
+        writer.newLine();
+      }
+    }
+    return fs;
+  }
+
+  /**
+   * Write a line of text to a file. Characters are encoded into bytes using
+   * the specified charset. This utility method opens the file for writing,
+   * creating the file if it does not exist, or overwrites an existing file.
+   *
+   * @param fs the file system with which to create the file
+   * @param path the path to the file
+   * @param charseq the char sequence to write to the file
+   * @param cs the charset to use for encoding
+   *
+   * @return the file system
+   *
+   * @throws NullPointerException if any of the arguments are {@code null}
+   * @throws IOException if an I/O error occurs creating or writing to the file
+   */
+  public static FileSystem write(final FileSystem fs, final Path path,
+      final CharSequence charseq, final Charset cs) throws IOException {
+
+    Objects.requireNonNull(path);
+    Objects.requireNonNull(charseq);
+    Objects.requireNonNull(cs);
+
+    CharsetEncoder encoder = cs.newEncoder();
+    try (FSDataOutputStream out = fs.create(path);
+        BufferedWriter writer =
+            new BufferedWriter(new OutputStreamWriter(out, encoder))) {
+      writer.append(charseq);
+    }
+    return fs;
+  }
+
+  /**
+   * Write a line of text to a file. Characters are encoded into bytes using
+   * UTF-8. This utility method opens the file for writing, creating the file
+   * if it does not exist, or overwrites an existing file.
+   *
+   * @param fs the file system with which to create the file
+   * @param path the path to the file
+   * @param charseq the char sequence to write to the file
+   *
+   * @return the file system
 Review comment:
   why?



[GitHub] [hadoop] steveloughran commented on a change in pull request #1792: HADOOP-16790: Add Write Convenience Methods

2020-01-06 Thread GitBox
steveloughran commented on a change in pull request #1792: HADOOP-16790: Add 
Write Convenience Methods
URL: https://github.com/apache/hadoop/pull/1792#discussion_r363252650
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
 ##
 @@ -1493,6 +1495,73 @@ public void testReadSymlinkWithAFileAsInput() throws IOException {
     file.delete();
   }
 
+  /**
+   * Test that bytes are written out correctly to the local file system.
+   */
+  @Test
+  public void testWriteBytes() throws IOException {
+    setupDirs();
+
+    URI uri = tmp.toURI();
+    Configuration conf = new Configuration();
+    FileSystem fs = FileSystem.newInstance(uri, conf);
+    Path testPath = new Path(new Path(uri), "writebytes.out");
+
+    byte[] write = new byte[] {0x00, 0x01, 0x02, 0x03};
+
+    FileUtil.write(fs, testPath, write);
+
+    byte[] read = FileUtils.readFileToByteArray(new File(testPath.toUri()));
+
+    assertArrayEquals(write, read);
+  }
+
+  /**
+   * Test that a Collection of Strings is written out correctly to the local
+   * file system.
+   */
+  @Test
+  public void testWriteStrings() throws IOException {
+    setupDirs();
+
+    URI uri = tmp.toURI();
+    Configuration conf = new Configuration();
+    FileSystem fs = FileSystem.newInstance(uri, conf);
+    Path testPath = new Path(new Path(uri), "writestrings.out");
+
+    Collection<String> write = Arrays.asList("over", "the", "lazy", "dog");
+
+    FileUtil.write(fs, testPath, write, StandardCharsets.UTF_8);
+
+    List<String> read =
+        FileUtils.readLines(new File(testPath.toUri()), StandardCharsets.UTF_8);
+
+    assertEquals(write, read);
 
 Review comment:
   I'd consider some round-trip tests with ContractTestUtils too, to verify
interop.
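A self-contained sketch of the suggested round-trip check follows, using java.nio in place of a Hadoop FileSystem so it runs anywhere; a contract-test version would swap in ContractTestUtils helpers and a contract-provided filesystem instead.

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.List;

public class RoundTrip {
    public static void main(String[] args) throws Exception {
        Path p = Files.createTempFile("roundtrip", ".txt");
        try {
            List<String> written = Arrays.asList("over", "the", "lazy", "dog");

            // Write the lines with one API...
            Files.write(p, written, StandardCharsets.UTF_8);

            // ...and read them back with another, verifying interop of the
            // line-separator and charset handling.
            List<String> read = Files.readAllLines(p, StandardCharsets.UTF_8);

            if (!written.equals(read)) {
                throw new AssertionError("round trip mismatch: " + read);
            }
            System.out.println("round trip ok");
        } finally {
            Files.deleteIfExists(p);
        }
    }
}
```

The value of the round trip is that it exercises both directions of the encoding contract, rather than comparing against bytes produced by the same code path being tested.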




