[GitHub] [hadoop] hadoop-yetus commented on issue #648: HDDS-1340. Add List Containers API for Recon

2019-03-26 Thread GitBox
hadoop-yetus commented on issue #648: HDDS-1340. Add List Containers API for 
Recon
URL: https://github.com/apache/hadoop/pull/648#issuecomment-476985391
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 30 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1086 | trunk passed |
   | +1 | compile | 50 | trunk passed |
   | +1 | checkstyle | 14 | trunk passed |
   | +1 | mvnsite | 24 | trunk passed |
   | +1 | shadedclient | 677 | branch has no errors when building and testing our client artifacts. |
   | +1 | findbugs | 35 | trunk passed |
   | +1 | javadoc | 22 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 30 | the patch passed |
   | +1 | compile | 22 | the patch passed |
   | +1 | javac | 22 | the patch passed |
   | -0 | checkstyle | 14 | hadoop-ozone/ozone-recon: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 23 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 732 | patch has no errors when building and testing our client artifacts. |
   | +1 | findbugs | 41 | the patch passed |
   | +1 | javadoc | 20 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 36 | ozone-recon in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 2992 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-648/2/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/648 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 21608414ace0 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / eef8cae |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-648/2/artifact/out/diff-checkstyle-hadoop-ozone_ozone-recon.txt |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-648/2/testReport/ |
   | Max. process+thread count | 441 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-recon U: hadoop-ozone/ozone-recon |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-648/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 merged pull request #614: HDDS-1264. Remove Parametrized in TestOzoneShell

2019-03-26 Thread GitBox
bharatviswa504 merged pull request #614: HDDS-1264. Remove Parametrized in 
TestOzoneShell
URL: https://github.com/apache/hadoop/pull/614
 
 
   





[GitHub] [hadoop] bharatviswa504 commented on issue #614: HDDS-1264. Remove Parametrized in TestOzoneShell

2019-03-26 Thread GitBox
bharatviswa504 commented on issue #614: HDDS-1264. Remove Parametrized in 
TestOzoneShell
URL: https://github.com/apache/hadoop/pull/614#issuecomment-476977165
 
 
   +1 LGTM.
   I will commit this shortly.





[GitHub] [hadoop] vivekratnavel commented on a change in pull request #648: HDDS-1340. Add List Containers API for Recon

2019-03-26 Thread GitBox
vivekratnavel commented on a change in pull request #648: HDDS-1340. Add List 
Containers API for Recon
URL: https://github.com/apache/hadoop/pull/648#discussion_r269404999
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/api/ContainerKeyService.java
 ##
 @@ -51,6 +51,23 @@
   @Inject
   private ReconOMMetadataManager omMetadataManager;
 
+  /**
+   * Return list of container IDs for all the containers
+   *
+   * @return {@link Response}
+   */
+  @GET
+  public Response getContainerIDList() {
+List containerIDs;
 
 Review comment:
   Initialization is redundant here since the initialized value will never be 
used. 
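
   To illustrate the point, a minimal Java sketch (the element type and the
   fetch helper are assumptions; the quoted diff does not show them):

   ```
   import java.util.ArrayList;
   import java.util.List;

   class RedundantInitSketch {
     // Hypothetical stand-in for the Recon DB lookup; not the real API.
     static List<Long> fetchContainerIDs() {
       return new ArrayList<>();
     }

     static List<Long> redundant() {
       List<Long> containerIDs = new ArrayList<>(); // this empty list is never read...
       containerIDs = fetchContainerIDs();          // ...because it is overwritten here
       return containerIDs;
     }

     static List<Long> sufficient() {
       List<Long> containerIDs;            // a bare declaration is enough:
       containerIDs = fetchContainerIDs(); // always assigned before first use
       return containerIDs;
     }
   }
   ```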





[GitHub] [hadoop] bharatviswa504 merged pull request #626: HDDS-1262. In OM HA OpenKey and initiateMultipartUpload call Should happen only leader OM.

2019-03-26 Thread GitBox
bharatviswa504 merged pull request #626: HDDS-1262. In OM HA OpenKey and 
initiateMultipartUpload call Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/626
 
 
   





[GitHub] [hadoop] vivekratnavel commented on a change in pull request #648: HDDS-1340. Add List Containers API for Recon

2019-03-26 Thread GitBox
vivekratnavel commented on a change in pull request #648: HDDS-1340. Add List 
Containers API for Recon
URL: https://github.com/apache/hadoop/pull/648#discussion_r269403451
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestContainerKeyService.java
 ##
 @@ -198,6 +198,38 @@ public void testGetKeysForContainer() throws Exception {
 assertTrue(keyMetadataList.isEmpty());
   }
 
+  @Test
+  public void testGetContainerIDList() throws Exception {
+//Take snapshot of OM DB and copy over to Recon OM DB.
+DBCheckpoint checkpoint = omMetadataManager.getStore()
 
 Review comment:
   Writes to the OM DB are moved to the setup phase in `@Before`.





[GitHub] [hadoop] toddlipcon commented on a change in pull request #597: HDFS-3246: pRead equivalent for direct read path

2019-03-26 Thread GitBox
toddlipcon commented on a change in pull request #597: HDFS-3246: pRead 
equivalent for direct read path
URL: https://github.com/apache/hadoop/pull/597#discussion_r269400852
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java
 ##
 @@ -375,7 +399,72 @@ private void decrypt(long position, byte[] buffer, int offset, int length)
   returnDecryptor(decryptor);
 }
   }
-  
+
+  /**
+   * Decrypt n bytes in buf starting at start. Output is also put into buf
+   * starting at current position. buf.position() and buf.limit() should be
+   * unchanged after decryption. It is thread-safe.
+   *
+   * 
+   *   This method decrypts the input buf chunk-by-chunk and writes the
+   *   decrypted output back into the input buf. It uses two local buffers
+   *   taken from the {@link #bufferPool} to assist in this process: one is
+   *   designated as the input buffer and it stores a single chunk of the
+   *   given buf, the other is designated as the output buffer, which stores
+   *   the output of decrypting the input buffer. Both buffers are of size
+   *   {@link #bufferSize}.
+   * 
+   *
+   * 
+   *   Decryption is done by using a {@link Decryptor} and the
+   *   {@link #decrypt(Decryptor, ByteBuffer, ByteBuffer, byte)} method. Once
+   *   the decrypted data is written into the output buffer, it is copied back
+   *   into buf. Both buffers are returned back into the pool once the entire
+   *   buf is decrypted.
+   * 
+   */
+  private void decrypt(long position, ByteBuffer buf, int n, int start)
+  throws IOException {
+ByteBuffer localInBuffer = null;
+ByteBuffer localOutBuffer = null;
+final int pos = buf.position();
+final int limit = buf.limit();
+int len = 0;
+Decryptor localDecryptor = null;
+try {
+  localInBuffer = getBuffer();
 
 Review comment:
   looking briefly through openssl code, it _seems_ like it actually supports 
in == out encryption, so maybe we could avoid both buffer copies for full 
blocks, and maybe avoid one buffer copy for the non-bytebuffer case as well by 
making inBuffer == outBuffer.
   
   Again, probably just something to file for later
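
   For concreteness, a hedged Java sketch of what that would look like (valid
   only if the codec really tolerates in == out, which is the open question
   here):

   ```
   import java.io.IOException;
   import java.nio.ByteBuffer;
   import org.apache.hadoop.crypto.Decryptor;

   class InPlaceDecryptSketch {
     // Pass the same buffer as both input and output: both staging copies
     // through the pooled buffers would disappear.
     static void decryptInPlace(Decryptor decryptor, ByteBuffer chunk)
         throws IOException {
       decryptor.decrypt(chunk, chunk);
     }
   }
   ```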





[GitHub] [hadoop] toddlipcon commented on a change in pull request #597: HDFS-3246: pRead equivalent for direct read path

2019-03-26 Thread GitBox
toddlipcon commented on a change in pull request #597: HDFS-3246: pRead 
equivalent for direct read path
URL: https://github.com/apache/hadoop/pull/597#discussion_r269402611
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/hdfs.c
 ##
 @@ -1399,6 +1463,12 @@ static int readPrepare(JNIEnv* env, hdfsFS fs, hdfsFile f,
 return 0;
 }
 
+/**
+ * If the underlying stream supports the ByteBufferReadable interface then
+ * this method will transparently use read(ByteBuffer). This can help
+ * improve performance as it avoids unnecessary copies between the kernel
+ * space, the Java process space, and the C process space.
 
 Review comment:
   Per above, the kernel->user transition is the same here; it's just avoiding 
some JVM heap copies.





[GitHub] [hadoop] toddlipcon commented on a change in pull request #597: HDFS-3246: pRead equivalent for direct read path

2019-03-26 Thread GitBox
toddlipcon commented on a change in pull request #597: HDFS-3246: pRead 
equivalent for direct read path
URL: https://github.com/apache/hadoop/pull/597#discussion_r269398123
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java
 ##
 @@ -375,7 +399,72 @@ private void decrypt(long position, byte[] buffer, int offset, int length)
   returnDecryptor(decryptor);
 }
   }
-  
+
+  /**
+   * Decrypt n bytes in buf starting at start. Output is also put into buf
+   * starting at current position. buf.position() and buf.limit() should be
+   * unchanged after decryption. It is thread-safe.
+   *
+   * 
+   *   This method decrypts the input buf chunk-by-chunk and writes the
+   *   decrypted output back into the input buf. It uses two local buffers
+   *   taken from the {@link #bufferPool} to assist in this process: one is
+   *   designated as the input buffer and it stores a single chunk of the
+   *   given buf, the other is designated as the output buffer, which stores
+   *   the output of decrypting the input buffer. Both buffers are of size
+   *   {@link #bufferSize}.
+   * 
+   *
+   * 
+   *   Decryption is done by using a {@link Decryptor} and the
+   *   {@link #decrypt(Decryptor, ByteBuffer, ByteBuffer, byte)} method. Once
+   *   the decrypted data is written into the output buffer, it is copied back
+   *   into buf. Both buffers are returned back into the pool once the entire
+   *   buf is decrypted.
+   * 
+   */
+  private void decrypt(long position, ByteBuffer buf, int n, int start)
+  throws IOException {
+ByteBuffer localInBuffer = null;
+ByteBuffer localOutBuffer = null;
+final int pos = buf.position();
 
 Review comment:
   similar suggestion to above: rename to 'bufPos' for clarity vs file positions





[GitHub] [hadoop] toddlipcon commented on a change in pull request #597: HDFS-3246: pRead equivalent for direct read path

2019-03-26 Thread GitBox
toddlipcon commented on a change in pull request #597: HDFS-3246: pRead 
equivalent for direct read path
URL: https://github.com/apache/hadoop/pull/597#discussion_r269402406
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/hdfs.c
 ##
 @@ -56,8 +56,23 @@
 
 // Bit fields for hdfsFile_internal flags
 #define HDFS_FILE_SUPPORTS_DIRECT_READ (1<<0)
+#define HDFS_FILE_SUPPORTS_DIRECT_PREAD (1<<1)
 
+/**
+ * Reads bytes using the read(ByteBuffer) API. By using Java
+ * DirectByteBuffers we can avoid copying the bytes from kernel space into
 
 Review comment:
   DirectByteBuffer avoids an extra copy into the java heap vs the C heap. It's 
still copying data out of the kernel to user space either way.
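
   A small sketch of the distinction, assuming a stream that implements
   ByteBufferReadable:

   ```
   import java.io.IOException;
   import java.nio.ByteBuffer;
   import org.apache.hadoop.fs.ByteBufferReadable;

   class DirectBufferSketch {
     static int readDirect(ByteBufferReadable in) throws IOException {
       // A direct buffer lives outside the JVM heap, so native code can fill
       // it in place, skipping the native -> Java-heap copy that a
       // byte[]-backed ByteBuffer.allocate(8192) would need. The kernel ->
       // user-space copy inside read(2) happens either way.
       ByteBuffer direct = ByteBuffer.allocateDirect(8192);
       return in.read(direct);
     }
   }
   ```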





[GitHub] [hadoop] toddlipcon commented on a change in pull request #597: HDFS-3246: pRead equivalent for direct read path

2019-03-26 Thread GitBox
toddlipcon commented on a change in pull request #597: HDFS-3246: pRead 
equivalent for direct read path
URL: https://github.com/apache/hadoop/pull/597#discussion_r269397696
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java
 ##
 @@ -375,7 +399,72 @@ private void decrypt(long position, byte[] buffer, int offset, int length)
   returnDecryptor(decryptor);
 }
   }
-  
+
+  /**
+   * Decrypt n bytes in buf starting at start. Output is also put into buf
+   * starting at current position. buf.position() and buf.limit() should be
 
 Review comment:
   This doc is confusing me a bit (haven't looked at the impl yet). It seems 
this both reads and writes to the same buf, but the read is happening from the 
'start' whereas the write is happening at 'buf.position()'? That seems somewhat 
unexpected and opens up some questions about whether the output range and input 
range can overlap.





[GitHub] [hadoop] toddlipcon commented on a change in pull request #597: HDFS-3246: pRead equivalent for direct read path

2019-03-26 Thread GitBox
toddlipcon commented on a change in pull request #597: HDFS-3246: pRead 
equivalent for direct read path
URL: https://github.com/apache/hadoop/pull/597#discussion_r269396288
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java
 ##
 @@ -341,17 +343,39 @@ public int read(long position, byte[] buffer, int offset, int length)
   "positioned read.");
 }
   }
+
+   /** Positioned read using ByteBuffers. It is thread-safe */
+  @Override
+  public int read(long position, final ByteBuffer buf)
+  throws IOException {
+checkStream();
+try {
+  int pos = buf.position();
 
 Review comment:
   nit: I think it would be good to rename this pos to 'bufPos' so it's clearer 
that it's referring to the position in the buffer and not the current position 
in the file





[GitHub] [hadoop] toddlipcon commented on a change in pull request #597: HDFS-3246: pRead equivalent for direct read path

2019-03-26 Thread GitBox
toddlipcon commented on a change in pull request #597: HDFS-3246: pRead 
equivalent for direct read path
URL: https://github.com/apache/hadoop/pull/597#discussion_r269397766
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java
 ##
 @@ -375,7 +399,72 @@ private void decrypt(long position, byte[] buffer, int offset, int length)
   returnDecryptor(decryptor);
 }
   }
-  
+
+  /**
+   * Decrypt n bytes in buf starting at start. Output is also put into buf
+   * starting at current position. buf.position() and buf.limit() should be
 
 Review comment:
   Separate nit: "should be unchanged" -> "will be unchanged" or "are not 
changed". "Should be" sounds awfully wishy-washy for a postcondition.





[GitHub] [hadoop] toddlipcon commented on a change in pull request #597: HDFS-3246: pRead equivalent for direct read path

2019-03-26 Thread GitBox
toddlipcon commented on a change in pull request #597: HDFS-3246: pRead 
equivalent for direct read path
URL: https://github.com/apache/hadoop/pull/597#discussion_r269398994
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java
 ##
 @@ -375,7 +399,72 @@ private void decrypt(long position, byte[] buffer, int offset, int length)
   returnDecryptor(decryptor);
 }
   }
-  
+
+  /**
+   * Decrypt n bytes in buf starting at start. Output is also put into buf
+   * starting at current position. buf.position() and buf.limit() should be
+   * unchanged after decryption. It is thread-safe.
+   *
+   * 
+   *   This method decrypts the input buf chunk-by-chunk and writes the
+   *   decrypted output back into the input buf. It uses two local buffers
+   *   taken from the {@link #bufferPool} to assist in this process: one is
+   *   designated as the input buffer and it stores a single chunk of the
+   *   given buf, the other is designated as the output buffer, which stores
+   *   the output of decrypting the input buffer. Both buffers are of size
+   *   {@link #bufferSize}.
+   * 
+   *
+   * 
+   *   Decryption is done by using a {@link Decryptor} and the
+   *   {@link #decrypt(Decryptor, ByteBuffer, ByteBuffer, byte)} method. Once
+   *   the decrypted data is written into the output buffer, it is copied back
+   *   into buf. Both buffers are returned back into the pool once the entire
+   *   buf is decrypted.
+   * 
+   */
+  private void decrypt(long position, ByteBuffer buf, int n, int start)
+  throws IOException {
+ByteBuffer localInBuffer = null;
+ByteBuffer localOutBuffer = null;
+final int pos = buf.position();
+final int limit = buf.limit();
+int len = 0;
 
 Review comment:
   It's quite confusing that the sense of 'len' and 'n' in this function is 
the reverse of 'length' and 'n' in the non-ByteBuffer read path.
   
   Can we consider some clearer names and rename both so that they match each 
other?
   
   Suggestion: 'len' or 'length' for the total length to be decrypted. 
'decryptedBytes' or 'doneBytes' for the number of bytes decrypted so far?
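
   A tiny sketch of the suggested naming (decryptNextChunk is a hypothetical
   stand-in for the per-chunk work):

   ```
   class NamingSketch {
     // Hypothetical helper: handles one chunk and reports how many bytes it covered.
     static int decryptNextChunk(byte[] buf, int offset) {
       return Math.min(512, buf.length - offset);
     }

     static void decryptAll(byte[] buf) {
       int length = buf.length;   // total bytes to decrypt
       int decryptedBytes = 0;    // bytes decrypted so far
       while (decryptedBytes < length) {
         decryptedBytes += decryptNextChunk(buf, decryptedBytes);
       }
     }
   }
   ```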





[GitHub] [hadoop] toddlipcon commented on a change in pull request #597: HDFS-3246: pRead equivalent for direct read path

2019-03-26 Thread GitBox
toddlipcon commented on a change in pull request #597: HDFS-3246: pRead 
equivalent for direct read path
URL: https://github.com/apache/hadoop/pull/597#discussion_r269401040
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java
 ##
 @@ -375,7 +399,72 @@ private void decrypt(long position, byte[] buffer, int offset, int length)
   returnDecryptor(decryptor);
 }
   }
-  
+
+  /**
+   * Decrypt n bytes in buf starting at start. Output is also put into buf
+   * starting at current position. buf.position() and buf.limit() should be
+   * unchanged after decryption. It is thread-safe.
+   *
+   * 
+   *   This method decrypts the input buf chunk-by-chunk and writes the
+   *   decrypted output back into the input buf. It uses two local buffers
+   *   taken from the {@link #bufferPool} to assist in this process: one is
+   *   designated as the input buffer and it stores a single chunk of the
+   *   given buf, the other is designated as the output buffer, which stores
+   *   the output of decrypting the input buffer. Both buffers are of size
+   *   {@link #bufferSize}.
+   * 
+   *
+   * 
+   *   Decryption is done by using a {@link Decryptor} and the
+   *   {@link #decrypt(Decryptor, ByteBuffer, ByteBuffer, byte)} method. Once
+   *   the decrypted data is written into the output buffer, is is copied back
+   *   into buf. Both buffers are returned back into the pool once the entire
+   *   buf is decrypted.
+   * 
+   */
+  private void decrypt(long position, ByteBuffer buf, int n, int start)
+  throws IOException {
+ByteBuffer localInBuffer = null;
+ByteBuffer localOutBuffer = null;
+final int pos = buf.position();
+final int limit = buf.limit();
 
 Review comment:
   instead of saving pos/limit here and restoring them later, would it be 
easier to duplicate() the bytebuffer? then you could easily just set the limit 
to match 'n' and not worry about it? The loop bounds might become a bit easier, 
too (while buf.remaining() > 0) etc since you no longer need to consider the 
passed-in length.
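
   A minimal sketch of the duplicate() idea (decryptChunk is a hypothetical
   stand-in for the existing per-chunk work, assumed to advance the view's
   position):

   ```
   import java.nio.ByteBuffer;

   class DuplicateSketch {
     // Hypothetical per-chunk step; the real code would decrypt here.
     static void decryptChunk(ByteBuffer view) {
       view.position(Math.min(view.limit(), view.position() + 512));
     }

     // Decrypt n bytes starting at buf.position() without ever touching
     // buf's own position/limit, so nothing needs saving and restoring.
     static void decrypt(ByteBuffer buf, int n) {
       ByteBuffer view = buf.duplicate(); // shared bytes, independent cursor
       view.limit(view.position() + n);   // bound the region once
       while (view.remaining() > 0) {     // the simpler loop bound noted above
         decryptChunk(view);
       }
     }
   }
   ```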





[GitHub] [hadoop] toddlipcon commented on a change in pull request #597: HDFS-3246: pRead equivalent for direct read path

2019-03-26 Thread GitBox
toddlipcon commented on a change in pull request #597: HDFS-3246: pRead 
equivalent for direct read path
URL: https://github.com/apache/hadoop/pull/597#discussion_r269396612
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java
 ##
 @@ -741,6 +830,7 @@ public boolean hasCapability(String capability) {
 case StreamCapabilities.DROPBEHIND:
 case StreamCapabilities.UNBUFFER:
 case StreamCapabilities.READBYTEBUFFER:
+case StreamCapabilities.PREADBYTEBUFFER:
 
 Review comment:
   it seems like preadByteBuffer requires the underlying stream to have this 
capability, so this should probably delegate to 
((StreamCapabiltiies)in).hasCapability(PREADBYTEBUFFER), right?
   
   (Interestingly, the same goes for a few of the other capabilities, like 
dropbehind, I think. Curious what @steveloughran has to say.)
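
   A sketch of that delegation (not the committed code; a drop-in for the
   quoted switch, so surrounding method context is omitted):

   ```
   case StreamCapabilities.PREADBYTEBUFFER:
     // Only advertise the capability when the wrapped stream provides it.
     return in instanceof StreamCapabilities
         && ((StreamCapabilities) in).hasCapability(capability);
   ```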





[GitHub] [hadoop] toddlipcon commented on a change in pull request #597: HDFS-3246: pRead equivalent for direct read path

2019-03-26 Thread GitBox
toddlipcon commented on a change in pull request #597: HDFS-3246: pRead 
equivalent for direct read path
URL: https://github.com/apache/hadoop/pull/597#discussion_r269397015
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java
 ##
 @@ -341,6 +343,26 @@ public int read(long position, byte[] buffer, int offset, int length)
   "positioned read.");
 }
   }
+
+   /** Positioned read using ByteBuffers. It is thread-safe */
+  @Override
+  public int read(long position, final ByteBuffer buf)
+  throws IOException {
+checkStream();
+try {
+  int pos = buf.position();
+  final int n = ((ByteBufferPositionedReadable) in).read(position, buf);
+  if (n > 0) {
+// This operation does not change the current offset of the file
+decrypt(position, buf, n, pos);
+  }
+
+  return n;
+} catch (ClassCastException e) {
 
 Review comment:
   hrm. I think we should probably do a follow-up JIRA to fix this, not for 
performance reasons, but because the try{...} block encompasses a lot of code. 
Let's say we accidentally screw up something in our encryption config and we 
get a ClassCastException somewhere inside decrypt. We'll swallow the real 
exception and claim that positioned read isn't supported, which isn't quite 
right.
   
   So, I agree an instanceof check up front is probably the clearest from a 
code perspective and also avoids the above issue.
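
   A hedged sketch of that restructuring (names follow the quoted hunk; the
   surrounding class context is omitted):

   ```
   public int read(long position, final ByteBuffer buf) throws IOException {
     checkStream();
     if (!(in instanceof ByteBufferPositionedReadable)) {
       // Fail fast: a ClassCastException thrown later inside decrypt() can
       // no longer be misreported as "positioned read not supported".
       throw new UnsupportedOperationException(
           "This stream does not support positioned reads with byte buffers.");
     }
     int bufPos = buf.position();
     final int n = ((ByteBufferPositionedReadable) in).read(position, buf);
     if (n > 0) {
       // This operation does not change the current offset of the file.
       decrypt(position, buf, n, bufPos);
     }
     return n;
   }
   ```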





[GitHub] [hadoop] toddlipcon commented on a change in pull request #597: HDFS-3246: pRead equivalent for direct read path

2019-03-26 Thread GitBox
toddlipcon commented on a change in pull request #597: HDFS-3246: pRead 
equivalent for direct read path
URL: https://github.com/apache/hadoop/pull/597#discussion_r269397240
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java
 ##
 @@ -341,17 +343,39 @@ public int read(long position, byte[] buffer, int offset, int length)
   "positioned read.");
 }
   }
+
+   /** Positioned read using ByteBuffers. It is thread-safe */
+  @Override
+  public int read(long position, final ByteBuffer buf)
+  throws IOException {
+checkStream();
+try {
+  int pos = buf.position();
+  final int n = ((ByteBufferPositionedReadable) in).read(position, buf);
+  if (n > 0) {
+// This operation does not change the current offset of the file
+decrypt(position, buf, n, pos);
+  }
+
+  return n;
+} catch (ClassCastException e) {
+  throw new UnsupportedOperationException("This stream does not support " +
+  "positioned read.");
 
 Review comment:
   probably should specifically say "with byte buffers" or something





[GitHub] [hadoop] toddlipcon commented on a change in pull request #597: HDFS-3246: pRead equivalent for direct read path

2019-03-26 Thread GitBox
toddlipcon commented on a change in pull request #597: HDFS-3246: pRead 
equivalent for direct read path
URL: https://github.com/apache/hadoop/pull/597#discussion_r269401352
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ByteBufferPositionedReadable.java
 ##
 @@ -0,0 +1,64 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+
+/**
+ * Implementers of this interface provide a positioned read API that writes to a
+ * {@link ByteBuffer} rather than a {@code byte[]}.
+ *
+ * @see PositionedReadable
+ * @see ByteBufferReadable
+ */
+@InterfaceAudience.Public
+@InterfaceStability.Evolving
+public interface ByteBufferPositionedReadable {
+  /**
+   * Reads up to {@code buf.remaining()} bytes into buf from a given position
+   * in the file and returns the number of bytes read. Callers should use
+   * {@code buf.limit(...)} to control the size of the desired read and
+   * {@code buf.position(...)} to control the offset into the buffer the data
+   * should be written to.
+   * 
+   * After a successful call, {@code buf.position()} will be advanced by the
+   * number of bytes read and {@code buf.limit()} should be unchanged.
+   * 
+   * In the case of an exception, the values of {@code buf.position()} and
+   * {@code buf.limit()} are undefined, and callers should be prepared to
+   * recover from this eventuality.
 
 Review comment:
   Worth noting that the way it's implemented, it seems like on exception, the 
contents of the buffer are also undefined, right? ie we could have partially 
overwritten the buffer and then thrown?





[GitHub] [hadoop] toddlipcon commented on a change in pull request #597: HDFS-3246: pRead equivalent for direct read path

2019-03-26 Thread GitBox
toddlipcon commented on a change in pull request #597: HDFS-3246: pRead 
equivalent for direct read path
URL: https://github.com/apache/hadoop/pull/597#discussion_r269399800
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java
 ##
 @@ -375,7 +399,72 @@ private void decrypt(long position, byte[] buffer, int offset, int length)
   returnDecryptor(decryptor);
 }
   }
-  
+
+  /**
+   * Decrypt n bytes in buf starting at start. Output is also put into buf
+   * starting at current position. buf.position() and buf.limit() should be
+   * unchanged after decryption. It is thread-safe.
+   *
+   * 
+   *   This method decrypts the input buf chunk-by-chunk and writes the
+   *   decrypted output back into the input buf. It uses two local buffers
+   *   taken from the {@link #bufferPool} to assist in this process: one is
+   *   designated as the input buffer and it stores a single chunk of the
+   *   given buf, the other is designated as the output buffer, which stores
+   *   the output of decrypting the input buffer. Both buffers are of size
+   *   {@link #bufferSize}.
+   * 
+   *
+   * 
+   *   Decryption is done by using a {@link Decryptor} and the
+   *   {@link #decrypt(Decryptor, ByteBuffer, ByteBuffer, byte)} method. Once
+   *   the decrypted data is written into the output buffer, it is copied back
+   *   into buf. Both buffers are returned back into the pool once the entire
+   *   buf is decrypted.
+   * 
+   */
+  private void decrypt(long position, ByteBuffer buf, int n, int start)
+  throws IOException {
+ByteBuffer localInBuffer = null;
+ByteBuffer localOutBuffer = null;
+final int pos = buf.position();
+final int limit = buf.limit();
+int len = 0;
+Decryptor localDecryptor = null;
+try {
+  localInBuffer = getBuffer();
 
 Review comment:
   Can you add a TODO here that we can likely avoid one of these copies, at 
least when the byte buffer passed by the user is a direct buffer? It looks like 
the patch is currently doing:
   
   ```
   pread -> user buffer
   for each chunk:
 copy from user buffer to tmp input
 decrypt tmp input to tmp output
 copy from tmp output to user buffer
   ```
   
   but we could likely decrypt back to the user buffer directly, or decrypt 
_from_ the user buffer to a tmp, and then write back. (this all assumes that 
the crypto codecs don't support in-place decryption, which they might)
   





[GitHub] [hadoop] toddlipcon commented on a change in pull request #597: HDFS-3246: pRead equivalent for direct read path

2019-03-26 Thread GitBox
toddlipcon commented on a change in pull request #597: HDFS-3246: pRead 
equivalent for direct read path
URL: https://github.com/apache/hadoop/pull/597#discussion_r269398067
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java
 ##
 @@ -375,7 +399,72 @@ private void decrypt(long position, byte[] buffer, int offset, int length)
   returnDecryptor(decryptor);
 }
   }
-  
+
+  /**
+   * Decrypt n bytes in buf starting at start. Output is also put into buf
+   * starting at current position. buf.position() and buf.limit() should be
+   * unchanged after decryption. It is thread-safe.
+   *
+   * 
+   *   This method decrypts the input buf chunk-by-chunk and writes the
+   *   decrypted output back into the input buf. It uses two local buffers
+   *   taken from the {@link #bufferPool} to assist in this process: one is
+   *   designated as the input buffer and it stores a single chunk of the
+   *   given buf, the other is designated as the output buffer, which stores
+   *   the output of decrypting the input buffer. Both buffers are of size
+   *   {@link #bufferSize}.
+   * 
+   *
+   * 
+   *   Decryption is done by using a {@link Decryptor} and the
+   *   {@link #decrypt(Decryptor, ByteBuffer, ByteBuffer, byte)} method. Once
+   *   the decrypted data is written into the output buffer, is is copied back
+   *   into buf. Both buffers are returned back into the pool once the entire
+   *   buf is decrypted.
+   * 
+   */
+  private void decrypt(long position, ByteBuffer buf, int n, int start)
 
 Review comment:
   can you clarify in the javadoc (and maybe through some different variable 
name) what 'position' is? I think 'position' is the offset in the file, so that 
you can use it to position the decryptor into the keystream. Maybe 
'filePosition' or 'fileOffset' or something would be clearer so it doesn't get 
confused with a position in the buf.
   





[GitHub] [hadoop] avijayanhwx commented on a change in pull request #648: HDDS-1340. Add List Containers API for Recon

2019-03-26 Thread GitBox
avijayanhwx commented on a change in pull request #648: HDDS-1340. Add List 
Containers API for Recon
URL: https://github.com/apache/hadoop/pull/648#discussion_r269382496
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestContainerKeyService.java
 ##
 @@ -198,6 +198,38 @@ public void testGetKeysForContainer() throws Exception {
 assertTrue(keyMetadataList.isEmpty());
   }
 
+  @Test
+  public void testGetContainerIDList() throws Exception {
+//Take snapshot of OM DB and copy over to Recon OM DB.
+DBCheckpoint checkpoint = omMetadataManager.getStore()
 
 Review comment:
   Why are we taking a DB snapshot if we are not writing anything new to the OM DB?





[GitHub] [hadoop] avijayanhwx commented on a change in pull request #648: HDDS-1340. Add List Containers API for Recon

2019-03-26 Thread GitBox
avijayanhwx commented on a change in pull request #648: HDDS-1340. Add List 
Containers API for Recon
URL: https://github.com/apache/hadoop/pull/648#discussion_r269382718
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/spi/ContainerDBServiceProvider.java
 ##
 @@ -66,4 +67,12 @@ Integer getCountForForContainerKeyPrefix(
*/
   Map getKeyPrefixesForContainer(long containerId)
   throws IOException;
+
+  /**
+   * Get a list of all Container IDs.
+   *
+   * @return List of Container IDs.
+   * @throws IOException
+   */
+  List getContainerIDList() throws IOException;
 
 Review comment:
   API can return Set instead of List.
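
   A sketch of the alternative signature (the Long element type is an
   assumption; the quoted interface lost its generics in transit):

   ```
   // Hypothetical variant: a Set makes the no-duplicates contract explicit.
   Set<Long> getContainerIDList() throws IOException;
   ```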





[GitHub] [hadoop] avijayanhwx commented on a change in pull request #648: HDDS-1340. Add List Containers API for Recon

2019-03-26 Thread GitBox
avijayanhwx commented on a change in pull request #648: HDDS-1340. Add List 
Containers API for Recon
URL: https://github.com/apache/hadoop/pull/648#discussion_r269382028
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/api/ContainerKeyService.java
 ##
 @@ -51,6 +51,23 @@
   @Inject
   private ReconOMMetadataManager omMetadataManager;
 
+  /**
+   * Return list of container IDs for all the containers
+   *
+   * @return {@link Response}
+   */
+  @GET
+  public Response getContainerIDList() {
+List containerIDs;
 
 Review comment:
   (Minor) Initialize to empty list.





[GitHub] [hadoop] hadoop-yetus commented on issue #595: HDFS-14304: High lock contention on hdfsHashMutex in libhdfs

2019-03-26 Thread GitBox
hadoop-yetus commented on issue #595: HDFS-14304: High lock contention on 
hdfsHashMutex in libhdfs
URL: https://github.com/apache/hadoop/pull/595#issuecomment-476962637
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 29 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests.  Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1098 | trunk passed |
   | +1 | compile | 99 | trunk passed |
   | +1 | mvnsite | 23 | trunk passed |
   | +1 | shadedclient | 1878 | branch has no errors when building and testing our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 18 | the patch passed |
   | +1 | compile | 112 | the patch passed |
   | +1 | cc | 112 | the patch passed |
   | +1 | javac | 112 | the patch passed |
   | +1 | mvnsite | 17 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | -1 | shadedclient | 150 | patch has errors when building and testing our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | unit | 351 | hadoop-hdfs-native-client in the patch passed. |
   | +1 | asflicense | 24 | The patch does not generate ASF License warnings. |
   | | | 2723 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-595/7/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/595 |
   | JIRA Issue | HDFS-14304 |
   | Optional Tests |  dupname  asflicense  compile  cc  mvnsite  javac  unit  |
   | uname | Linux 77c238b5cbc6 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f426b7c |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-595/7/testReport/ |
   | Max. process+thread count | 412 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-595/7/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] asfgit closed pull request #595: HDFS-14304: High lock contention on hdfsHashMutex in libhdfs

2019-03-26 Thread GitBox
asfgit closed pull request #595: HDFS-14304: High lock contention on 
hdfsHashMutex in libhdfs
URL: https://github.com/apache/hadoop/pull/595
 
 
   





[GitHub] [hadoop] sahilTakiar commented on issue #595: HDFS-14304: High lock contention on hdfsHashMutex in libhdfs

2019-03-26 Thread GitBox
sahilTakiar commented on issue #595: HDFS-14304: High lock contention on 
hdfsHashMutex in libhdfs
URL: https://github.com/apache/hadoop/pull/595#issuecomment-476950505
 
 
   Fixing compilation issues. `thread_local_storage.c` compilation was failing 
because it was using the old version of `invokeMethod`. Changed it to 
`findClassAndInvokeMethod`, since the code is not on the hot path and it is 
possible this code is called before the cached Java classes are created.





[GitHub] [hadoop] ajayydv commented on a change in pull request #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar.

2019-03-26 Thread GitBox
ajayydv commented on a change in pull request #632: HDDS-1255. Refactor ozone 
acceptance test to allow run in secure mode. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/632#discussion_r269384452
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/commonlib.robot
 ##
 @@ -35,3 +41,51 @@ Compare files
 ${checksumbefore} =    Execute    md5sum ${file1} | awk '{print $1}'
 ${checksumafter} =     Execute    md5sum ${file2} | awk '{print $1}'
 Should Be Equal    ${checksumbefore}    ${checksumafter}
+Execute AWSS3APICli
+    [Arguments]    ${command}
+    ${output} =    Execute    aws s3api --endpoint-url ${ENDPOINT_URL} ${command}
+    [return]    ${output}
+
+Execute AWSS3APICli and checkrc
+    [Arguments]    ${command}    ${expected_error_code}
+    ${output} =    Execute and checkrc    aws s3api --endpoint-url ${ENDPOINT_URL} ${command}    ${expected_error_code}
+    [return]    ${output}
+
+Execute AWSS3Cli
+    [Arguments]    ${command}
+    ${output} =    Execute    aws s3 --endpoint-url ${ENDPOINT_URL} ${command}
+    [return]    ${output}
+
+Install aws cli s3 centos
+    Execute    sudo yum install -y awscli
+
+Install aws cli s3 debian
+    Execute    sudo apt-get install -y awscli
+
+Install aws cli
+    ${rc}    ${output} =    Run And Return Rc And Output    which apt-get
+    Run Keyword if    '${rc}' == '0'    Install aws cli s3 debian
+    ${rc}    ${output} =    Run And Return Rc And Output    yum --help
+    Run Keyword if    '${rc}' == '0'    Install aws cli s3 centos
+
+Kinit test user
+    ${hostname} =    Execute    hostname
+    Set Suite Variable    ${TEST_USER}    testuser/${hostname}@EXAMPLE.COM
+    Execute    kinit -k ${TEST_USER} -t /etc/security/keytabs/testuser.keytab
+
+Setup secure credentials
+    Run Keyword    Install aws cli
 
 Review comment:
   you are right, moved s3 part to s3 commonlib.





[GitHub] [hadoop] ajayydv commented on a change in pull request #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar.

2019-03-26 Thread GitBox
ajayydv commented on a change in pull request #632: HDDS-1255. Refactor ozone 
acceptance test to allow run in secure mode. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/632#discussion_r269384368
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/commonlib.robot
 ##
 @@ -35,3 +41,51 @@ Compare files
 ${checksumbefore} =    Execute    md5sum ${file1} | awk '{print $1}'
 ${checksumafter} =     Execute    md5sum ${file2} | awk '{print $1}'
 Should Be Equal    ${checksumbefore}    ${checksumafter}
+Execute AWSS3APICli
+    [Arguments]    ${command}
+    ${output} =    Execute    aws s3api --endpoint-url ${ENDPOINT_URL} ${command}
+    [return]    ${output}
+
+Execute AWSS3APICli and checkrc
+    [Arguments]    ${command}    ${expected_error_code}
+    ${output} =    Execute and checkrc    aws s3api --endpoint-url ${ENDPOINT_URL} ${command}    ${expected_error_code}
+    [return]    ${output}
+
+Execute AWSS3Cli
+    [Arguments]    ${command}
+    ${output} =    Execute    aws s3 --endpoint-url ${ENDPOINT_URL} ${command}
+    [return]    ${output}
+
+Install aws cli s3 centos
+    Execute    sudo yum install -y awscli
+
+Install aws cli s3 debian
+    Execute    sudo apt-get install -y awscli
+
+Install aws cli
+    ${rc}    ${output} =    Run And Return Rc And Output    which apt-get
+    Run Keyword if    '${rc}' == '0'    Install aws cli s3 debian
+    ${rc}    ${output} =    Run And Return Rc And Output    yum --help
+    Run Keyword if    '${rc}' == '0'    Install aws cli s3 centos
+
+Kinit test user
+    ${hostname} =    Execute    hostname
+    Set Suite Variable    ${TEST_USER}    testuser/${hostname}@EXAMPLE.COM
+    Execute    kinit -k ${TEST_USER} -t /etc/security/keytabs/testuser.keytab
+
+Setup secure credentials
+    Run Keyword    Install aws cli
+    Run Keyword    Kinit test user
+    ${result} =    Execute    ozone s3 getsecret
+    ${accessKey} =    Get Regexp Matches    ${result}    (?<=awsAccessKey=).*
+    ${secret} =    Get Regexp Matches    ${result}    (?<=awsSecret=).*
+    Execute    aws configure set default.s3.signature_version s3v4
+    Execute    aws configure set aws_access_key_id ${accessKey[0]}
+    Execute    aws configure set aws_secret_access_key ${secret[0]}
+    Execute    aws configure set region us-west-1
+
+Setup incorrect credentials for S3
 
 Review comment:
   done





[GitHub] [hadoop] hadoop-yetus commented on issue #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar.

2019-03-26 Thread GitBox
hadoop-yetus commented on issue #632: HDDS-1255. Refactor ozone acceptance test 
to allow run in secure mode. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/632#issuecomment-476930753
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 5 | https://github.com/apache/hadoop/pull/632 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/hadoop/pull/632 |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-632/3/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #649: HDDS-1332. Add some logging for flaky test testStartStopDatanodeState…

2019-03-26 Thread GitBox
hadoop-yetus commented on issue #649: HDDS-1332. Add some logging for flaky 
test testStartStopDatanodeState…
URL: https://github.com/apache/hadoop/pull/649#issuecomment-476924855
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1076 | trunk passed |
   | +1 | compile | 40 | trunk passed |
   | +1 | checkstyle | 18 | trunk passed |
   | +1 | mvnsite | 34 | trunk passed |
   | +1 | shadedclient | 784 | branch has no errors when building and testing our client artifacts. |
   | +1 | findbugs | 51 | trunk passed |
   | +1 | javadoc | 28 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 36 | the patch passed |
   | +1 | compile | 26 | the patch passed |
   | +1 | javac | 26 | the patch passed |
   | -0 | checkstyle | 14 | hadoop-hdds/container-service: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 29 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 808 | patch has no errors when building and testing our client artifacts. |
   | +1 | findbugs | 54 | the patch passed |
   | +1 | javadoc | 25 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 81 | container-service in the patch failed. |
   | +1 | asflicense | 25 | The patch does not generate ASF License warnings. |
   | | | 3257 | |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.ozone.container.common.TestDatanodeStateMachine |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-649/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/649 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 5e9db77ed771 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / fe29b39 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-649/1/artifact/out/diff-checkstyle-hadoop-hdds_container-service.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-649/1/artifact/out/patch-unit-hadoop-hdds_container-service.txt |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-649/1/testReport/ |
   | Max. process+thread count | 403 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service U: hadoop-hdds/container-service |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-649/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16183) Use latest Yetus to support ozone specific build process

2019-03-26 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802331#comment-16802331
 ] 

Sean Busbey commented on HADOOP-16183:
--

We should ask for a Yetus release. We're a downstream user of that project, 
and we know that as an ASF project Yetus isn't supposed to let downstream 
users consume unreleased code.

> Use latest Yetus to support ozone specific build process
> 
>
> Key: HADOOP-16183
> URL: https://issues.apache.org/jira/browse/HADOOP-16183
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> In YETUS-816 the hadoop personality was improved to better support 
> ozone-specific changes.
> Unfortunately the hadoop personality is part of the Yetus project and not the 
> Hadoop project: we need either a new Yetus release or to switch to an 
> unreleased version.
> In this patch I propose to use the latest commit from Yetus (but pin that 
> fixed commit instead of updating all the time). 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #595: HDFS-14304: High lock contention on hdfsHashMutex in libhdfs

2019-03-26 Thread GitBox
hadoop-yetus commented on issue #595: HDFS-14304: High lock contention on 
hdfsHashMutex in libhdfs
URL: https://github.com/apache/hadoop/pull/595#issuecomment-476922972
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 29 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 994 | trunk passed |
   | +1 | compile | 101 | trunk passed |
   | +1 | mvnsite | 16 | trunk passed |
   | +1 | shadedclient | 1717 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 14 | the patch passed |
   | -1 | compile | 34 | hadoop-hdfs-native-client in the patch failed. |
   | -1 | cc | 34 | hadoop-hdfs-native-client in the patch failed. |
   | -1 | javac | 34 | hadoop-hdfs-native-client in the patch failed. |
   | +1 | mvnsite | 16 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 678 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | -1 | unit | 35 | hadoop-hdfs-native-client in the patch failed. |
   | +1 | asflicense | 24 | The patch does not generate ASF License warnings. |
   | | | 2671 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-595/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/595 |
   | JIRA Issue | HDFS-14304 |
   | Optional Tests |  dupname  asflicense  compile  cc  mvnsite  javac  unit  |
   | uname | Linux fc6e44fb6578 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / fe29b39 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-595/6/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
 |
   | cc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-595/6/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-595/6/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-595/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-595/6/testReport/ |
   | Max. process+thread count | 411 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-595/6/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #648: HDDS-1340. Add List Containers API for Recon

2019-03-26 Thread GitBox
hadoop-yetus commented on issue #648: HDDS-1340. Add List Containers API for 
Recon
URL: https://github.com/apache/hadoop/pull/648#issuecomment-476918265
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 531 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 993 | trunk passed |
   | +1 | compile | 50 | trunk passed |
   | +1 | checkstyle | 19 | trunk passed |
   | +1 | mvnsite | 29 | trunk passed |
   | +1 | shadedclient | 731 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 34 | trunk passed |
   | +1 | javadoc | 22 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 27 | the patch passed |
   | +1 | compile | 20 | the patch passed |
   | +1 | javac | 20 | the patch passed |
   | -0 | checkstyle | 10 | hadoop-ozone/ozone-recon: The patch generated 2 new 
+ 0 unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 19 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 691 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 39 | the patch passed |
   | +1 | javadoc | 16 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 31 | ozone-recon in the patch passed. |
   | +1 | asflicense | 22 | The patch does not generate ASF License warnings. |
   | | | 3367 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-648/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/648 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux b1271d539ce3 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / fe29b39 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-648/1/artifact/out/diff-checkstyle-hadoop-ozone_ozone-recon.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-648/1/testReport/ |
   | Max. process+thread count | 440 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-recon U: hadoop-ozone/ozone-recon |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-648/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sahilTakiar commented on issue #595: HDFS-14304: High lock contention on hdfsHashMutex in libhdfs

2019-03-26 Thread GitBox
sahilTakiar commented on issue #595: HDFS-14304: High lock contention on 
hdfsHashMutex in libhdfs
URL: https://github.com/apache/hadoop/pull/595#issuecomment-476912733
 
 
   Rebased on trunk. The only conflict was:
   
   ```
   } else {
   jclass clazz = (*env)->FindClass(env, READ_OPTION);
   if (!clazz) {
   -   jthr = newRuntimeError(env, "failed "
   -   "to find class for %s", READ_OPTION);
   +   jthr = getPendingExceptionAndClear(env);
   goto done;
   }
   jthr = invokeMethod(env, &jVal, STATIC, NULL,
   ```
   
   This patch changed `READ_OPTION` to `HADOOP_RO`, which caused the conflict.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 opened a new pull request #649: HDDS-1332. Add some logging for flaky test testStartStopDatanodeState…

2019-03-26 Thread GitBox
arp7 opened a new pull request #649: HDDS-1332. Add some logging for flaky test 
testStartStopDatanodeState…
URL: https://github.com/apache/hadoop/pull/649
 
 
   …Machine. Contributed by Arpit Agarwal.
   
   Change-Id: I4f9dc6aeff7f4502956d160e35f2c4caadccb246


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #626: HDDS-1262. In OM HA OpenKey and initiateMultipartUpload call Should happen only leader OM.

2019-03-26 Thread GitBox
bharatviswa504 commented on issue #626: HDDS-1262. In OM HA OpenKey and 
initiateMultipartUpload call Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/626#issuecomment-476904363
 
 
   Not sure why Yetus is throwing mvn install errors for this PR; I am able to 
compile locally on my dev machine.
   Posted a patch to the JIRA.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] vivekratnavel opened a new pull request #648: HDDS-1340. Add List Containers API for Recon

2019-03-26 Thread GitBox
vivekratnavel opened a new pull request #648: HDDS-1340. Add List Containers 
API for Recon
URL: https://github.com/apache/hadoop/pull/648
 
 
   The Recon server should support a "/containers" API that lists all the containers.
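   
   As a rough illustration only: a minimal sketch of what such an endpoint
   could look like, written JAX-RS style in Java. `ContainerStore` and
   `ContainersEndpoint` are invented names for this sketch, not the classes
   in the actual patch.
   
   ```
   import java.util.List;
   import javax.ws.rs.GET;
   import javax.ws.rs.Path;
   import javax.ws.rs.Produces;
   import javax.ws.rs.core.MediaType;
   import javax.ws.rs.core.Response;
   
   /** Illustrative source of container metadata; not the real Recon class. */
   interface ContainerStore {
     List<Long> getContainerIds();
   }
   
   /** Hypothetical "/containers" resource serving the container list. */
   @Path("/containers")
   class ContainersEndpoint {
     private final ContainerStore store;
   
     ContainersEndpoint(ContainerStore store) {
       this.store = store;
     }
   
     /** GET /containers -> JSON array of all container IDs known to Recon. */
     @GET
     @Produces(MediaType.APPLICATION_JSON)
     public Response listContainers() {
       return Response.ok(store.getContainerIds()).build();
     }
   }
   ```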


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #626: HDDS-1262. In OM HA OpenKey and initiateMultipartUpload call Should happen only leader OM.

2019-03-26 Thread GitBox
hadoop-yetus commented on issue #626: HDDS-1262. In OM HA OpenKey and 
initiateMultipartUpload call Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/626#issuecomment-476903348
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 25 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 11 | Maven dependency ordering for branch |
   | +1 | mvninstall | 979 | trunk passed |
   | +1 | compile | 91 | trunk passed |
   | +1 | checkstyle | 25 | trunk passed |
   | -1 | mvnsite | 24 | ozone-manager in trunk failed. |
   | -1 | mvnsite | 23 | integration-test in trunk failed. |
   | +1 | shadedclient | 703 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | -1 | findbugs | 22 | ozone-manager in trunk failed. |
   | +1 | javadoc | 66 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 11 | Maven dependency ordering for patch |
   | -1 | mvninstall | 19 | integration-test in the patch failed. |
   | +1 | compile | 89 | the patch passed |
   | +1 | cc | 89 | the patch passed |
   | +1 | javac | 89 | the patch passed |
   | +1 | checkstyle | 20 | the patch passed |
   | -1 | mvnsite | 20 | integration-test in the patch failed. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 670 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 109 | the patch passed |
   | +1 | javadoc | 103 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 37 | common in the patch passed. |
   | +1 | unit | 39 | ozone-manager in the patch passed. |
   | -1 | unit | 25 | integration-test in the patch failed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 3387 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/12/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/626 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
   | uname | Linux f90876374467 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ce4bafd |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/12/artifact/out/branch-mvnsite-hadoop-ozone_ozone-manager.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/12/artifact/out/branch-mvnsite-hadoop-ozone_integration-test.txt
 |
   | findbugs | v3.1.0-RC1 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/12/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/12/artifact/out/patch-mvninstall-hadoop-ozone_integration-test.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/12/artifact/out/patch-mvnsite-hadoop-ozone_integration-test.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/12/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/12/testReport/ |
   | Max. process+thread count | 446 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/12/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #626: HDDS-1262. In OM HA OpenKey and initiateMultipartUpload call Should happen only leader OM.

2019-03-26 Thread GitBox
hadoop-yetus commented on issue #626: HDDS-1262. In OM HA OpenKey and 
initiateMultipartUpload call Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/626#issuecomment-476901803
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for branch |
   | +1 | mvninstall | 983 | trunk passed |
   | +1 | compile | 89 | trunk passed |
   | +1 | checkstyle | 25 | trunk passed |
   | -1 | mvnsite | 23 | ozone-manager in trunk failed. |
   | -1 | mvnsite | 23 | integration-test in trunk failed. |
   | +1 | shadedclient | 712 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | -1 | findbugs | 21 | ozone-manager in trunk failed. |
   | +1 | javadoc | 68 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 12 | Maven dependency ordering for patch |
   | -1 | mvninstall | 19 | integration-test in the patch failed. |
   | +1 | compile | 93 | the patch passed |
   | +1 | cc | 93 | the patch passed |
   | +1 | javac | 93 | the patch passed |
   | +1 | checkstyle | 22 | the patch passed |
   | -1 | mvnsite | 20 | integration-test in the patch failed. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 665 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 102 | the patch passed |
   | -1 | javadoc | 28 | hadoop-ozone_common generated 1 new + 1 unchanged - 0 
fixed = 2 total (was 1) |
   ||| _ Other Tests _ |
   | +1 | unit | 31 | common in the patch passed. |
   | +1 | unit | 38 | ozone-manager in the patch passed. |
   | -1 | unit | 22 | integration-test in the patch failed. |
   | +1 | asflicense | 22 | The patch does not generate ASF License warnings. |
   | | | 3329 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/11/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/626 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
   | uname | Linux db25df154125 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ce4bafd |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/11/artifact/out/branch-mvnsite-hadoop-ozone_ozone-manager.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/11/artifact/out/branch-mvnsite-hadoop-ozone_integration-test.txt
 |
   | findbugs | v3.1.0-RC1 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/11/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/11/artifact/out/patch-mvninstall-hadoop-ozone_integration-test.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/11/artifact/out/patch-mvnsite-hadoop-ozone_integration-test.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/11/artifact/out/diff-javadoc-javadoc-hadoop-ozone_common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/11/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/11/testReport/ |
   | Max. process+thread count | 446 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/11/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

--

[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #647: HADOOP-16118. S3Guard to support on-demand DDB tables.

2019-03-26 Thread GitBox
hadoop-yetus commented on a change in pull request #647: HADOOP-16118. S3Guard 
to support on-demand DDB tables.
URL: https://github.com/apache/hadoop/pull/647#discussion_r269358538
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3guard.md
 ##
 @@ -906,22 +909,102 @@ If operations, especially directory operations, are 
slow, check the AWS
 console. It is also possible to set up AWS alerts for capacity limits
 being exceeded.
 
+###  On-Demand Dynamo Capacity
+
+[Amazon DynamoDB 
On-Demand](https://aws.amazon.com/blogs/aws/amazon-dynamodb-on-demand-no-capacity-planning-and-pay-per-request-pricing/)
+removes the need to pre-allocate I/O capacity for S3Guard tables.
+Instead, the caller is _only_ charged per I/O operation.
+
+* There are no SLA capacity guarantees. This is generally not an issue
+for S3Guard applications.
+* There's no explicit limit on I/O capacity, so operations which make
+heavy use of S3Guard tables (for example: SQL query planning) do not
+get throttled.
+There's no way to put a limit on the I/O; you may unintentionally run up
+large bills through sustained heavy load.
+* The `s3guard set-capacity` command fails: it does not make sense any more.
+
+When idle, S3Guard tables are only billed for the data stored, not for
+any unused capacity. For this reason, there is no benefit from sharing
+a single S3Guard table across multiple buckets.
+
+*Enabling DynamoDB On-Demand for a S3Guard table*
+
+You cannot currently enable DynamoDB on-demand from the `s3guard` command
+when creating or updating a bucket.
+
+Instead it must be done through the AWS console or [the 
CLI](https://docs.aws.amazon.com/cli/latest/reference/dynamodb/update-table.html).
+From the Web console or the command line, switch the billing to 
pay-per-request.
+
+Once enabled, the read and write capacities of the table listed in the
+`hadoop s3guard bucket-info` command become "0", and the "billing-mode"
+attribute changes to "per-request":
+
+```
+> hadoop s3guard bucket-info s3a://example-bucket/
+
+Filesystem s3a://example-bucket
+Location: eu-west-1
+Filesystem s3a://example-bucket is using S3Guard with store
+  DynamoDBMetadataStore{region=eu-west-1, tableName=example-bucket,
+  tableArn=arn:aws:dynamodb:eu-west-1:11:table/example-bucket}
+Authoritative S3Guard: fs.s3a.metadatastore.authoritative=false
+Metadata Store Diagnostics:
+  ARN=arn:aws:dynamodb:eu-west-1:11:table/example-bucket
+  billing-mode=per-request
+  description=S3Guard metadata store in DynamoDB
+  name=example-bucket
+  persist.authoritative.bit=true
+  read-capacity=0
+  region=eu-west-1
+  retryPolicy=ExponentialBackoffRetry(maxRetries=9, sleepTime=250 MILLISECONDS)
+  size=66797
+  status=ACTIVE
+  table={AttributeDefinitions:
+[{AttributeName: child,AttributeType: S},
+ {AttributeName: parent,AttributeType: S}],
+ TableName: example-bucket,
+ KeySchema: [{
+   AttributeName: parent,KeyType: HASH},
+   {AttributeName: child,KeyType: RANGE}],
+ TableStatus: ACTIVE,
+ CreationDateTime: Thu Oct 11 18:51:14 BST 2018,
+ ProvisionedThroughput: {
+   LastIncreaseDateTime: Tue Oct 30 16:48:45 GMT 2018,
+   LastDecreaseDateTime: Tue Oct 30 18:00:03 GMT 2018,
+   NumberOfDecreasesToday: 0,
+   ReadCapacityUnits: 0,
+   WriteCapacityUnits: 0},
+ TableSizeBytes: 66797,
+ ItemCount: 415,
+ TableArn: arn:aws:dynamodb:eu-west-1:11:table/example-bucket,
+ TableId: a7b0728a-f008-4260-b2a0-ab,}
+  write-capacity=0
+The "magic" committer is supported
+```
+
+###  Autoscaling S3Guard tables.
+
 [DynamoDB Auto 
Scaling](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html)
 can automatically increase and decrease the allocated capacity.
-This is good for keeping capacity high when needed, but avoiding large
-bills when it is not.
+
+Before DynamoDB On-Demand was introduced, autoscaling was the sole form
+of dynamic scaling. 
 
 Review comment:
   whitespace:end of line
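   
   As a side note on the on-demand switch the quoted documentation describes:
   a hedged sketch of doing it programmatically with the AWS Java SDK. This
   assumes an SDK version recent enough to expose `BillingMode` (newer than
   the SDK bundled with Hadoop at the time, per the PR description), and it
   reuses the region and table name from the bucket-info example.
   
   ```
   import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
   import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
   import com.amazonaws.services.dynamodbv2.model.BillingMode;
   import com.amazonaws.services.dynamodbv2.model.UpdateTableRequest;
   
   public class SwitchToOnDemand {
     public static void main(String[] args) {
       AmazonDynamoDB ddb = AmazonDynamoDBClientBuilder.standard()
           .withRegion("eu-west-1")  // region from the bucket-info example
           .build();
       // Flip the table from provisioned capacity to pay-per-request billing.
       ddb.updateTable(new UpdateTableRequest()
           .withTableName("example-bucket")
           .withBillingMode(BillingMode.PAY_PER_REQUEST));
     }
   }
   ```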
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #647: HADOOP-16118. S3Guard to support on-demand DDB tables.

2019-03-26 Thread GitBox
hadoop-yetus commented on issue #647: HADOOP-16118. S3Guard to support 
on-demand DDB tables.
URL: https://github.com/apache/hadoop/pull/647#issuecomment-476898348
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 996 | trunk passed |
   | +1 | compile | 33 | trunk passed |
   | +1 | checkstyle | 24 | trunk passed |
   | +1 | mvnsite | 37 | trunk passed |
   | +1 | shadedclient | 727 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 44 | trunk passed |
   | +1 | javadoc | 25 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 30 | the patch passed |
   | +1 | compile | 28 | the patch passed |
   | +1 | javac | 28 | the patch passed |
   | -0 | checkstyle | 18 | hadoop-tools/hadoop-aws: The patch generated 2 new 
+ 17 unchanged - 0 fixed = 19 total (was 17) |
   | +1 | mvnsite | 31 | the patch passed |
   | -1 | whitespace | 0 | The patch has 1 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 744 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 47 | the patch passed |
   | +1 | javadoc | 23 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 273 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 3215 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-647/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/647 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 67573c9c00b7 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ce4bafd |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-647/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-647/1/artifact/out/whitespace-eol.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-647/1/testReport/ |
   | Max. process+thread count | 411 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-647/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] asfgit closed pull request #600: HDFS-14348: Fix JNI exception handling issues in libhdfs

2019-03-26 Thread GitBox
asfgit closed pull request #600: HDFS-14348: Fix JNI exception handling issues 
in libhdfs
URL: https://github.com/apache/hadoop/pull/600
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16085) S3Guard: use object version or etags to protect against inconsistent read after replace/overwrite

2019-03-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802249#comment-16802249
 ] 

Hadoop QA commented on HADOOP-16085:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 18 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 28s{color} 
| {color:red} hadoop-tools_hadoop-aws generated 1 new + 15 unchanged - 0 fixed 
= 16 total (was 15) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 19s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 43 
new + 57 unchanged - 2 fixed = 100 total (was 59) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
52s{color} | {color:red} hadoop-tools/hadoop-aws generated 1 new + 0 unchanged 
- 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
42s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
25s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-tools/hadoop-aws |
|  |  org.apache.hadoop.fs.s3a.S3LocatedFileStatus doesn't override 
org.apache.hadoop.fs.LocatedFileStatus.equals(Object)  At 
S3LocatedFileStatus.java:At S3LocatedFileStatus.java:[line 1] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-646/1/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/646 |
| JIRA Issue | HADOOP-16085 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 63e0c6f06812 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / ce4bafd |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/hadoop-multib

[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #646: HADOOP-16085: use object version or etags to protect against inconsistent read after replace/overwrite

2019-03-26 Thread GitBox
hadoop-yetus commented on a change in pull request #646: HADOOP-16085: use 
object version or etags to protect against inconsistent read after 
replace/overwrite
URL: https://github.com/apache/hadoop/pull/646#discussion_r269348104
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/ChangeDetectionPolicy.java
 ##
 @@ -51,6 +54,10 @@
 
   private final Mode mode;
   private final boolean requireVersion;
+  
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #646: HADOOP-16085: use object version or etags to protect against inconsistent read after replace/overwrite

2019-03-26 Thread GitBox
hadoop-yetus commented on a change in pull request #646: HADOOP-16085: use 
object version or etags to protect against inconsistent read after 
replace/overwrite
URL: https://github.com/apache/hadoop/pull/646#discussion_r269348100
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/ChangeDetectionPolicy.java
 ##
 @@ -322,12 +360,33 @@ public String getRevisionId(ObjectMetadata 
objectMetadata, String uri) {
   }
   return versionId;
 }
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #646: HADOOP-16085: use object version or etags to protect against inconsistent read after replace/overwrite

2019-03-26 Thread GitBox
hadoop-yetus commented on issue #646: HADOOP-16085: use object version or etags 
to protect against inconsistent read after replace/overwrite
URL: https://github.com/apache/hadoop/pull/646#issuecomment-476884777
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 21 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 18 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1091 | trunk passed |
   | +1 | compile | 30 | trunk passed |
   | +1 | checkstyle | 23 | trunk passed |
   | +1 | mvnsite | 35 | trunk passed |
   | +1 | shadedclient | 764 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 49 | trunk passed |
   | +1 | javadoc | 22 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 29 | the patch passed |
   | +1 | compile | 28 | the patch passed |
   | -1 | javac | 28 | hadoop-tools_hadoop-aws generated 1 new + 15 unchanged - 
0 fixed = 16 total (was 15) |
   | -0 | checkstyle | 19 | hadoop-tools/hadoop-aws: The patch generated 43 new 
+ 57 unchanged - 2 fixed = 100 total (was 59) |
   | +1 | mvnsite | 32 | the patch passed |
   | -1 | whitespace | 0 | The patch has 2 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 770 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | findbugs | 52 | hadoop-tools/hadoop-aws generated 1 new + 0 unchanged 
- 0 fixed = 1 total (was 0) |
   | +1 | javadoc | 22 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 282 | hadoop-aws in the patch passed. |
   | -1 | asflicense | 25 | The patch generated 1 ASF License warnings. |
   | | | 3377 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-aws |
   |  |  org.apache.hadoop.fs.s3a.S3LocatedFileStatus doesn't override 
org.apache.hadoop.fs.LocatedFileStatus.equals(Object)  At 
S3LocatedFileStatus.java:At S3LocatedFileStatus.java:[line 1] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-646/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/646 |
   | JIRA Issue | HADOOP-16085 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 63e0c6f06812 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri 
Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ce4bafd |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-646/1/artifact/out/diff-compile-javac-hadoop-tools_hadoop-aws.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-646/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-646/1/artifact/out/whitespace-eol.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-646/1/artifact/out/new-findbugs-hadoop-tools_hadoop-aws.html
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-646/1/testReport/ |
   | asflicense | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-646/1/artifact/out/patch-asflicense-problems.txt
 |
   | Max. process+thread count | 339 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-646/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #626: HDDS-1262. In OM HA OpenKey and initiateMultipartUpload call Should happen only leader OM.

2019-03-26 Thread GitBox
hadoop-yetus commented on issue #626: HDDS-1262. In OM HA OpenKey and 
initiateMultipartUpload call Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/626#issuecomment-476884407
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 25 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for branch |
   | +1 | mvninstall | 991 | trunk passed |
   | +1 | compile | 97 | trunk passed |
   | +1 | checkstyle | 30 | trunk passed |
   | -1 | mvnsite | 29 | integration-test in trunk failed. |
   | +1 | shadedclient | 768 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 96 | trunk passed |
   | +1 | javadoc | 79 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 14 | Maven dependency ordering for patch |
   | -1 | mvninstall | 22 | integration-test in the patch failed. |
   | +1 | compile | 94 | the patch passed |
   | +1 | cc | 94 | the patch passed |
   | +1 | javac | 94 | the patch passed |
   | +1 | checkstyle | 24 | the patch passed |
   | -1 | mvnsite | 23 | integration-test in the patch failed. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 722 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 112 | the patch passed |
   | -1 | javadoc | 34 | hadoop-ozone_common generated 1 new + 1 unchanged - 0 
fixed = 2 total (was 1) |
   ||| _ Other Tests _ |
   | +1 | unit | 35 | common in the patch passed. |
   | +1 | unit | 42 | ozone-manager in the patch passed. |
   | -1 | unit | 26 | integration-test in the patch failed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 3572 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/626 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
   | uname | Linux 7032c47b14f8 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ce4bafd |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/10/artifact/out/branch-mvnsite-hadoop-ozone_integration-test.txt
 |
   | findbugs | v3.1.0-RC1 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/10/artifact/out/patch-mvninstall-hadoop-ozone_integration-test.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/10/artifact/out/patch-mvnsite-hadoop-ozone_integration-test.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/10/artifact/out/diff-javadoc-javadoc-hadoop-ozone_common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/10/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/10/testReport/ |
   | Max. process+thread count | 446 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/10/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #630: HADOOP-15999 S3Guard OOB: improve test resilience and probes

2019-03-26 Thread GitBox
steveloughran commented on issue #630: HADOOP-15999 S3Guard OOB: improve test 
resilience and probes
URL: https://github.com/apache/hadoop/pull/630#issuecomment-476883734
 
 
   Checkstyle
   ```
   
./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractGetFileStatusTest.java:400:
  + "-" + UUID.randomUUID());: '+' has incorrect indentation level 6, 
expected level should be 8. [Indentation]
   
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardOutOfBandOperations.java:401:
  assertArraySize("Added one file to the new dir and modified the same 
file, ": Line is longer than 80 characters (found 82). [LineLength]
   
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardOutOfBandOperations.java:509:
  /**: First sentence should end with a period. [JavadocStyle]
   ```
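   
   For anyone unfamiliar with the [Indentation] rule flagged above: wrapped
   operands need a continuation indent of 8, not 6, relative to the enclosing
   block. A hypothetical before/after (the surrounding code is invented, not
   the test's actual contents):
   
   ```
   import java.util.UUID;
   
   class IndentExample {
     void example() {
       // '+' at indentation level 6: checkstyle [Indentation] error.
       String bad = "dir"
         + "-" + UUID.randomUUID();
       // '+' at indentation level 8: passes.
       String good = "dir"
           + "-" + UUID.randomUUID();
     }
   }
   ```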


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran opened a new pull request #647: HADOOP-16118. S3Guard to support on-demand DDB tables.

2019-03-26 Thread GitBox
steveloughran opened a new pull request #647: HADOOP-16118. S3Guard to support 
on-demand DDB tables.
URL: https://github.com/apache/hadoop/pull/647
 
 
   This patch adds awareness of on-demand tables; it does not support
   creating them, as a new SDK upgrade is needed for that. This makes the
   patch one which can be backported without the consequences of such an
   update.
   
   * The diagnostics map includes the billing mode, as inferred from I/O 
     capacities (see the sketch after this list).
   * `set-capacity` fails fast (and always).
   * The documentation discusses the mode and argues for it over autoscaling.
   * The example output of `bucket-info` is updated.
   * A test verifies that if the table is on-demand, `set-capacity` fails.
   * If the table is on-demand, the DynamoDB scale tests are disabled: there's 
     nothing to prove.
   
   Change-Id: I77b7a6b593a2cd805376ca24d68b06bde75589c5
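   
   For the first bullet, a minimal, hypothetical sketch of inferring the
   billing mode from the provisioned I/O capacities, as the bucket-info
   output elsewhere in this thread suggests (on-demand tables report zero
   read/write units). `BillingModeProbe` is an invented name, not the
   patch's actual code.
   
   ```
   import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputDescription;
   import com.amazonaws.services.dynamodbv2.model.TableDescription;
   
   final class BillingModeProbe {
     private BillingModeProbe() {}
   
     /** Returns "per-request" when no capacity is provisioned, else "provisioned". */
     static String inferBillingMode(TableDescription table) {
       ProvisionedThroughputDescription pt = table.getProvisionedThroughput();
       Long read = pt == null ? null : pt.getReadCapacityUnits();
       Long write = pt == null ? null : pt.getWriteCapacityUnits();
       // On-demand (pay-per-request) tables expose zero provisioned units.
       boolean onDemand = read != null && read == 0
           && write != null && write == 0;
       return onDemand ? "per-request" : "provisioned";
     }
   }
   ```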


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #647: HADOOP-16118. S3Guard to support on-demand DDB tables.

2019-03-26 Thread GitBox
steveloughran commented on issue #647: HADOOP-16118. S3Guard to support 
on-demand DDB tables.
URL: https://github.com/apache/hadoop/pull/647#issuecomment-476883514
 
 
   Tested: S3A Ireland with S3Guard + DynamoDB. One failure, which I am not 
sure if/how it is related to on-demand tables.
   {code}
   guard.ITestS3GuardToolDynamoDB
   [ERROR] 
testBucketInfoUnguarded(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)
  Time elapsed: 1.318 s  <<< ERROR!
   java.io.FileNotFoundException: DynamoDB table 
'testBucketInfoUnguarded-b48b3cd0-f21c-4973-9f9e-1e0861f44478' does not exist 
in region eu-west-1; auto-creation is turned off
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initTable(DynamoDBMetadataStore.java:1243)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:374)
at 
org.apache.hadoop.fs.s3a.s3guard.S3Guard.getMetadataStore(S3Guard.java:99)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:394)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3324)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:136)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3373)
at org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:3347)
at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:544)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$BucketInfo.run(S3GuardTool.java:1140)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardToolTestHelper.exec(S3GuardToolTestHelper.java:79)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardToolTestHelper.exec(S3GuardToolTestHelper.java:51)
at 
org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.testBucketInfoUnguarded(AbstractS3GuardToolTestBase.java:341)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:745)
   Caused by: 
com.amazonaws.services.dynamodbv2.model.ResourceNotFoundException: Requested 
resource not found: Table: 
testBucketInfoUnguarded-b48b3cd0-f21c-4973-9f9e-1e0861f44478 not found 
(Service: AmazonDynamoDBv2; Status Code: 400; Error Code: 
ResourceNotFoundException; Request ID: 
8A42F39OH2IASKST147L6U36Q7VV4KQNSO5AEMVJF66Q9ASUAAJG)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1640)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1058)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:3443)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:3419)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.executeDescribeTable(AmazonDynamoDBClient.java:1660)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.describeTable(AmazonDynamoDBClient.java:1635)
at 
com.amazonaws.services.dynamodbv2.document.Table.describe(Table.java:137)
at 
org.apache.ha
{code}
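
   For context, a minimal sketch of the auto-creation switch involved here, 
assuming the standard fs.s3a.s3guard.ddb.table.create option (illustrative 
only, not taken from the patch):
{code}
import org.apache.hadoop.conf.Configuration;

public class TableAutoCreateToggle {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // With auto-creation off, a missing table makes initTable() surface the
    // DynamoDB ResourceNotFoundException above as a FileNotFoundException.
    conf.setBoolean("fs.s3a.s3guard.ddb.table.create", false);
    System.out.println(conf.get("fs.s3a.s3guard.ddb.table.create"));
  }
}
{code}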

[jira] [Commented] (HADOOP-16208) Do Not Log InterruptedException in Client

2019-03-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802247#comment-16802247
 ] 

Hadoop QA commented on HADOOP-16208:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
11s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 95m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16208 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12963788/HADOOP-16208.1.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 645978dbfe9e 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ce4bafd |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16077/testReport/ |
| Max. process+thread count | 1347 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16077/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Do Not Log InterruptedException in Client

[jira] [Commented] (HADOOP-16118) S3Guard to support on-demand DDB tables

2019-03-26 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802233#comment-16802233
 ] 

Steve Loughran commented on HADOOP-16118:
-

And the bucket info lists the billing mode, again using the terminology from 
the AWS SDK:
{code}
bin/hadoop s3guard bucket-info s3a://hwdev-steve-ireland-new/
Filesystem s3a://hwdev-steve-ireland-new
Location: eu-west-1
Filesystem s3a://hwdev-steve-ireland-new is using S3Guard with store 
DynamoDBMetadataStore{region=eu-west-1, tableName=hwdev-steve-ireland-new, 
tableArn=arn:aws:dynamodb:eu-west-1::table/hwdev-steve-ireland-new}
Authoritative S3Guard: fs.s3a.metadatastore.authoritative=false
Metadata Store Diagnostics:
ARN=arn:aws:dynamodb:eu-west-1::table/hwdev-steve-ireland-new
billing-mode=per-request
description=S3Guard metadata store in DynamoDB
name=hwdev-steve-ireland-new
persist.authoritative.bit=true
read-capacity=0
region=eu-west-1
retryPolicy=ExponentialBackoffRetry(maxRetries=9, sleepTime=250 
MILLISECONDS)
size=66797
status=ACTIVE
table={AttributeDefinitions: [{AttributeName: child,AttributeType: S}, 
{AttributeName: parent,AttributeType: S}],TableName: 
hwdev-steve-ireland-new,KeySchema: [{AttributeName: parent,KeyType: HASH}, 
{AttributeName: child,KeyType: RANGE}],TableStatus: ACTIVE,CreationDateTime: 
Thu Oct 11 18:51:14 BST 2018,ProvisionedThroughput: {LastIncreaseDateTime: Tue 
Oct 30 16:48:45 GMT 2018,LastDecreaseDateTime: Tue Oct 30 18:00:03 GMT 
2018,NumberOfDecreasesToday: 0,ReadCapacityUnits: 0,WriteCapacityUnits: 
0},TableSizeBytes: 66797,ItemCount: 415,TableArn: 
arn:aws:dynamodb:eu-west-1::table/hwdev-steve-ireland-new,TableId: 
a7b0728a-f008-4260-b2a0-ff3dd03367d1,}
write-capacity=0
The "magic" committer is supported
{code}

> S3Guard to support on-demand DDB tables
> ---
>
> Key: HADOOP-16118
> URL: https://issues.apache.org/jira/browse/HADOOP-16118
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Major
>
> AWS now supports [on demand DDB 
> capacity|https://aws.amazon.com/blogs/aws/amazon-dynamodb-on-demand-no-capacity-planning-and-pay-per-request-pricing/]
>  
> This has lowest cost and best scalability, so could be the default capacity. 
> + add a new option to set-capacity.
> Will depend on an SDK update: created HADOOP-16117.






[GitHub] [hadoop] hanishakoneru commented on issue #626: HDDS-1262. In OM HA OpenKey and initiateMultipartUpload call Should happen only leader OM.

2019-03-26 Thread GitBox
hanishakoneru commented on issue #626: HDDS-1262. In OM HA OpenKey and 
initiateMultipartUpload call Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/626#issuecomment-476874883
 
 
   LGTM. +1 pending Jenkins/CI.





[jira] [Commented] (HADOOP-16085) S3Guard: use object version or etags to protect against inconsistent read after replace/overwrite

2019-03-26 Thread Ben Roling (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1680#comment-1680
 ] 

Ben Roling commented on HADOOP-16085:
-

I posted a PR with my latest progress:
https://github.com/apache/hadoop/pull/646

I'll continue there rather than doing patch uploads to the JIRA.  Hopefully 
that transitions smoothly.  It's my first experience with a PR for a Hadoop 
Common JIRA.

> S3Guard: use object version or etags to protect against inconsistent read 
> after replace/overwrite
> -
>
> Key: HADOOP-16085
> URL: https://issues.apache.org/jira/browse/HADOOP-16085
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Ben Roling
>Priority: Major
> Attachments: HADOOP-16085-003.patch, HADOOP-16085_002.patch, 
> HADOOP-16085_3.2.0_001.patch
>
>
> Currently S3Guard doesn't track S3 object versions.  If a file is written in 
> S3A with S3Guard and then subsequently overwritten, there is no protection 
> against the next reader seeing the old version of the file instead of the new 
> one.
> It seems like the S3Guard metadata could track the S3 object version.  When a 
> file is created or updated, the object version could be written to the 
> S3Guard metadata.  When a file is read, the read out of S3 could be performed 
> by object version, ensuring the correct version is retrieved.
> I don't have a lot of direct experience with this yet, but this is my 
> impression from looking through the code.  My organization is looking to 
> shift some datasets stored in HDFS over to S3 and is concerned about this 
> potential issue as there are some cases in our codebase that would do an 
> overwrite.
> I imagine this idea may have been considered before but I couldn't quite 
> track down any JIRAs discussing it.  If there is one, feel free to close this 
> with a reference to it.
> Am I understanding things correctly?  Is this idea feasible?  Any feedback 
> that could be provided would be appreciated.  We may consider crafting a 
> patch.
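
A minimal sketch of the versioned-read idea described above, assuming the AWS 
SDK v1 GetObjectRequest API (the bucket, key, and versionId here are 
placeholders, not taken from the patch):
{code:java}
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;

public class VersionedReadSketch {
  public static void main(String[] args) {
    AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
    // The versionId would come from the S3Guard metadata record for the key.
    String versionId = args[0];
    S3Object object = s3.getObject(
        new GetObjectRequest("example-bucket", "path/to/file")
            .withVersionId(versionId));
    System.out.println(object.getObjectMetadata().getVersionId());
  }
}
{code}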






[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #626: HDDS-1262. In OM HA OpenKey and initiateMultipartUpload call Should happen only leader OM.

2019-03-26 Thread GitBox
bharatviswa504 commented on a change in pull request #626: HDDS-1262. In OM HA 
OpenKey and initiateMultipartUpload call Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/626#discussion_r269330512
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerStateMachine.java
 ##
 @@ -181,6 +297,17 @@ private TransactionContext handleAllocateBlock(
 
   }
 
+  /**
+   * Construct IOException message for failed requests in StartTransaction.
+   * @param omResponse
+   * @return
+   */
+  private IOException constructExceptionForFailedRequest(
+  OMResponse omResponse) {
+return new IOException(omResponse.getMessage() + " " +
+STATUS_CODE + omResponse.getStatus());
+  }
 
 Review comment:
   I tried that approach; since this gets converted to an IOException 
somewhere on the Ratis end, I am not able to do that. I initially tried the 
way you suggested and found that it does not work, because from Ratis we get 
a StateMachineException.
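
   For illustration, a hedged sketch of recovering the status on the client 
side when it has to travel inside the exception message (the STATUS_CODE 
marker format here is an assumption, not taken from the patch):
{code}
import java.io.IOException;

final class FailedRequestStatus {
  // Assumed marker format; the real constant lives in OzoneManagerStateMachine.
  private static final String STATUS_CODE = "STATUS_CODE=";

  // Pull the OM status token back out of the server-built message string.
  static String parseStatus(IOException e) {
    String msg = e.getMessage();
    int idx = (msg == null) ? -1 : msg.lastIndexOf(STATUS_CODE);
    return (idx < 0) ? null : msg.substring(idx + STATUS_CODE.length()).trim();
  }
}
{code}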





[GitHub] [hadoop] ben-roling opened a new pull request #646: HADOOP-16085: use object version or etags to protect against inconsistent read after replace/overwrite

2019-03-26 Thread GitBox
ben-roling opened a new pull request #646: HADOOP-16085: use object version or 
etags to protect against inconsistent read after replace/overwrite
URL: https://github.com/apache/hadoop/pull/646
 
 
   This started with 
[HADOOP-16085-003.patch](https://issues.apache.org/jira/secure/attachment/12962649/HADOOP-16085-003.patch)
 from [the JIRA](https://issues.apache.org/jira/browse/HADOOP-16085).
   
   I'm switching over to a PR instead of using patch files attached to the 
JIRA.  I expect that will make review easier.
   
   I've addressed a few things since that patch:
   * copy exception handling - handling 412 error on the response
   * addressed [Gabor's 
comments](https://issues.apache.org/jira/browse/HADOOP-16085?focusedCommentId=16797173&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16797173)
 on 
   TestPathMetadataDynamoDBTranslation, TestDirListingMetadata
   * fixed a problem I introduced around inconsistency between 
PathMetadata.isEmptyDir and the underlying S3AFileStatus.isEmptyDir that was 
manifesting as failures to clean up files after tests
   * increased the default LocalMetadataStore cache timeout: the low 
10-second default made debugging some failing tests confusing, since the 
outcome depended on how quickly I stepped through breakpoints
   * fixed S3 Select test in ITestS3ARemoteFileChanged and added test for 
copy/rename
   * improved documentation
   
   I haven't actually run all the tests again since these changes.  Also, I 
think there might be a couple more tests to add or alter.  For example, I don't 
have an explicit integration test yet to read a file that has no ETag or 
versionId in S3Guard.
   
   I'll make another pass through but figured it is worthwhile to post the 
progress.





[jira] [Commented] (HADOOP-16118) S3Guard to support on-demand DDB tables

2019-03-26 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802215#comment-16802215
 ] 

Steve Loughran commented on HADOOP-16118:
-

With the forthcoming patch, you will see the error earlier. The error message 
is the same one you get from the AWS service itself, to keep things consistent 
with other apps.
{code}
bin/hadoop s3guard set-capacity s3a://hwdev-steve-ireland-new/
2019-03-26 21:53:20,725 [main] INFO  s3guard.S3GuardTool 
(S3GuardTool.java:initMetadataStore(318)) - Metadata store 
DynamoDBMetadataStore{region=eu-west-1, tableName=hwdev-steve-ireland-new, 
tableArn=arn:aws:dynamodb:eu-west-1:980678866538:table/hwdev-steve-ireland-new} 
is initialized.
java.io.IOException: Neither ReadCapacityUnits nor WriteCapacityUnits can be 
specified when BillingMode is PAY_PER_REQUEST
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.updateParameters(DynamoDBMetadataStore.java:1546)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$SetCapacity.run(S3GuardTool.java:587)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:398)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:1628)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.main(S3GuardTool.java:1637)
2019-03-26 21:53:20,825 [main] INFO  util.ExitUtil 
(ExitUtil.java:terminate(210)) - Exiting with status -1: java.io.IOException: 
Neither ReadCapacityUnits nor WriteCapacityUnits can be specified when 
BillingMode is PAY_PER_REQUEST
{code}
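
A hedged sketch of the kind of early check that produces this message (method 
and parameter names are illustrative, not the actual DynamoDBMetadataStore 
code):
{code:java}
import java.io.IOException;

final class CapacityArgsCheck {
  static void checkCapacityArgs(boolean payPerRequest, long readCap,
      long writeCap) throws IOException {
    // On-demand tables have no provisioned capacity to update.
    if (payPerRequest && (readCap > 0 || writeCap > 0)) {
      throw new IOException("Neither ReadCapacityUnits nor WriteCapacityUnits"
          + " can be specified when BillingMode is PAY_PER_REQUEST");
    }
  }
}
{code}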

> S3Guard to support on-demand DDB tables
> ---
>
> Key: HADOOP-16118
> URL: https://issues.apache.org/jira/browse/HADOOP-16118
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Major
>
> AWS now supports [on demand DDB 
> capacity|https://aws.amazon.com/blogs/aws/amazon-dynamodb-on-demand-no-capacity-planning-and-pay-per-request-pricing/]
>  
> This has lowest cost and best scalability, so could be the default capacity. 
> + add a new option to set-capacity.
> Will depend on an SDK update: created HADOOP-16117.






[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #626: HDDS-1262. In OM HA OpenKey and initiateMultipartUpload call Should happen only leader OM.

2019-03-26 Thread GitBox
bharatviswa504 commented on a change in pull request #626: HDDS-1262. In OM HA 
OpenKey and initiateMultipartUpload call Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/626#discussion_r269330512
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerStateMachine.java
 ##
 @@ -181,6 +297,17 @@ private TransactionContext handleAllocateBlock(
 
   }
 
+  /**
+   * Construct IOException message for failed requests in StartTransaction.
+   * @param omResponse
+   * @return
+   */
+  private IOException constructExceptionForFailedRequest(
+  OMResponse omResponse) {
+return new IOException(omResponse.getMessage() + " " +
+STATUS_CODE + omResponse.getStatus());
+  }
 
 Review comment:
   I tried that approach; since this gets converted to an IOException 
somewhere on the Ratis end, I am not able to do that. I initially tried the 
way you suggested and found that it does not work.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #626: HDDS-1262. In OM HA OpenKey and initiateMultipartUpload call Should happen only leader OM.

2019-03-26 Thread GitBox
bharatviswa504 commented on a change in pull request #626: HDDS-1262. In OM HA 
OpenKey and initiateMultipartUpload call Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/626#discussion_r269330114
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -2474,6 +2524,28 @@ public String getOzoneBucketMapping(String s3BucketName)
 }
   }
 
+
+  @Override
+  public OmMultipartInfo applyInitiateMultipartUpload(OmKeyArgs keyArgs,
+  String multipartUploadID) throws IOException {
+OmMultipartInfo multipartInfo;
+metrics.incNumInitiateMultipartUploads();
+try {
+  multipartInfo = keyManager.applyInitiateMultipartUpload(keyArgs,
+  multipartUploadID);
+  AUDIT.logWriteSuccess(buildAuditMessageForSuccess(
+  OMAction.INITIATE_MULTIPART_UPLOAD, (keyArgs == null) ? null :
+  keyArgs.toAuditMap()));
+} catch (IOException ex) {
+  AUDIT.logWriteFailure(buildAuditMessageForFailure(
+  OMAction.INITIATE_MULTIPART_UPLOAD,
+  (keyArgs == null) ? null : keyArgs.toAuditMap(), ex));
+  metrics.incNumInitiateMultipartUploadFails();
 
 Review comment:
   In the HA case initiateMultipartUpload will not be called, so it will not 
be updated twice.
   In startTransaction we are not calling initiateMultipartUpload; it 
generates a random id as the multipartUploadID. This is the reason for not 
having a new type here.
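
   (For illustration, a minimal sketch of the random-id generation described 
here; the use of UUID is an assumption about how the id is produced:)
{code}
import java.util.UUID;

public class MultipartIdSketch {
  public static void main(String[] args) {
    // The leader OM picks the id in startTransaction; followers only apply it.
    String multipartUploadID = UUID.randomUUID().toString();
    System.out.println(multipartUploadID);
  }
}
{code}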





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #626: HDDS-1262. In OM HA OpenKey and initiateMultipartUpload call Should happen only leader OM.

2019-03-26 Thread GitBox
bharatviswa504 commented on a change in pull request #626: HDDS-1262. In OM HA 
OpenKey and initiateMultipartUpload call Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/626#discussion_r269330114
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -2474,6 +2524,28 @@ public String getOzoneBucketMapping(String s3BucketName)
 }
   }
 
+
+  @Override
+  public OmMultipartInfo applyInitiateMultipartUpload(OmKeyArgs keyArgs,
+  String multipartUploadID) throws IOException {
+OmMultipartInfo multipartInfo;
+metrics.incNumInitiateMultipartUploads();
+try {
+  multipartInfo = keyManager.applyInitiateMultipartUpload(keyArgs,
+  multipartUploadID);
+  AUDIT.logWriteSuccess(buildAuditMessageForSuccess(
+  OMAction.INITIATE_MULTIPART_UPLOAD, (keyArgs == null) ? null :
+  keyArgs.toAuditMap()));
+} catch (IOException ex) {
+  AUDIT.logWriteFailure(buildAuditMessageForFailure(
+  OMAction.INITIATE_MULTIPART_UPLOAD,
+  (keyArgs == null) ? null : keyArgs.toAuditMap(), ex));
+  metrics.incNumInitiateMultipartUploadFails();
 
 Review comment:
   In the HA case initiateMultipartUpload will not be called, so it will not 
be updated twice.
   In startTransaction we are not calling initiateMultipartUpload; it 
generates a random id as the multipartUploadID.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #626: HDDS-1262. In OM HA OpenKey and initiateMultipartUpload call Should happen only leader OM.

2019-03-26 Thread GitBox
bharatviswa504 commented on a change in pull request #626: HDDS-1262. In OM HA 
OpenKey and initiateMultipartUpload call Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/626#discussion_r269331803
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -1985,6 +1990,51 @@ public OpenKeySession openKey(OmKeyArgs args) throws 
IOException {
 }
   }
 
+  @Override
+  public void applyOpenKey(KeyArgs omKeyArgs, KeyInfo keyInfo, long clientID)
+  throws IOException {
+// Do we need to check again Acl's for apply OpenKey call?
+if(isAclEnabled) {
+  checkAcls(ResourceType.KEY, StoreType.OZONE, ACLType.READ,
+  omKeyArgs.getVolumeName(), omKeyArgs.getBucketName(),
+  omKeyArgs.getKeyName());
+}
+boolean auditSuccess = true;
+try {
+  keyManager.applyOpenKey(omKeyArgs, keyInfo, clientID);
+} catch (Exception ex) {
+  metrics.incNumKeyAllocateFails();
+  auditSuccess = false;
+  AUDIT.logWriteFailure(buildAuditMessageForFailure(
+  OMAction.APPLY_ALLOCATE_KEY,
+  (omKeyArgs == null) ? null : toAuditMap(omKeyArgs), ex));
+  throw ex;
+} finally {
+  if(auditSuccess){
+AUDIT.logWriteSuccess(buildAuditMessageForSuccess(
+OMAction.ALLOCATE_KEY, (omKeyArgs == null) ? null :
 
 Review comment:
   Done





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #626: HDDS-1262. In OM HA OpenKey and initiateMultipartUpload call Should happen only leader OM.

2019-03-26 Thread GitBox
bharatviswa504 commented on a change in pull request #626: HDDS-1262. In OM HA 
OpenKey and initiateMultipartUpload call Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/626#discussion_r269330512
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerStateMachine.java
 ##
 @@ -181,6 +297,17 @@ private TransactionContext handleAllocateBlock(
 
   }
 
+  /**
+   * Construct IOException message for failed requests in StartTransaction.
+   * @param omResponse
+   * @return
+   */
+  private IOException constructExceptionForFailedRequest(
+  OMResponse omResponse) {
+return new IOException(omResponse.getMessage() + " " +
+STATUS_CODE + omResponse.getStatus());
+  }
 
 Review comment:
   I tried that approach; since this gets converted to an IOException on the 
Ratis end, I am not able to do that. I initially tried the way you suggested 
and found that it does not work.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #626: HDDS-1262. In OM HA OpenKey and initiateMultipartUpload call Should happen only leader OM.

2019-03-26 Thread GitBox
bharatviswa504 commented on a change in pull request #626: HDDS-1262. In OM HA 
OpenKey and initiateMultipartUpload call Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/626#discussion_r269330114
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -2474,6 +2524,28 @@ public String getOzoneBucketMapping(String s3BucketName)
 }
   }
 
+
+  @Override
+  public OmMultipartInfo applyInitiateMultipartUpload(OmKeyArgs keyArgs,
+  String multipartUploadID) throws IOException {
+OmMultipartInfo multipartInfo;
+metrics.incNumInitiateMultipartUploads();
+try {
+  multipartInfo = keyManager.applyInitiateMultipartUpload(keyArgs,
+  multipartUploadID);
+  AUDIT.logWriteSuccess(buildAuditMessageForSuccess(
+  OMAction.INITIATE_MULTIPART_UPLOAD, (keyArgs == null) ? null :
+  keyArgs.toAuditMap()));
+} catch (IOException ex) {
+  AUDIT.logWriteFailure(buildAuditMessageForFailure(
+  OMAction.INITIATE_MULTIPART_UPLOAD,
+  (keyArgs == null) ? null : keyArgs.toAuditMap(), ex));
+  metrics.incNumInitiateMultipartUploadFails();
 
 Review comment:
   In the HA case initiateMultipartUpload will not be called, so it will not 
be updated twice.
   In startTransaction we are not calling initiateMultipartUpload; it 
generates a random id as the multipartUploadID.





[GitHub] [hadoop] hanishakoneru commented on a change in pull request #626: HDDS-1262. In OM HA OpenKey and initiateMultipartUpload call Should happen only leader OM.

2019-03-26 Thread GitBox
hanishakoneru commented on a change in pull request #626: HDDS-1262. In OM HA 
OpenKey and initiateMultipartUpload call Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/626#discussion_r268886537
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -1985,6 +1990,51 @@ public OpenKeySession openKey(OmKeyArgs args) throws 
IOException {
 }
   }
 
+  @Override
+  public void applyOpenKey(KeyArgs omKeyArgs, KeyInfo keyInfo, long clientID)
+  throws IOException {
+// Do we need to check again Acl's for apply OpenKey call?
+if(isAclEnabled) {
+  checkAcls(ResourceType.KEY, StoreType.OZONE, ACLType.READ,
+  omKeyArgs.getVolumeName(), omKeyArgs.getBucketName(),
+  omKeyArgs.getKeyName());
+}
+boolean auditSuccess = true;
+try {
+  keyManager.applyOpenKey(omKeyArgs, keyInfo, clientID);
+} catch (Exception ex) {
+  metrics.incNumKeyAllocateFails();
+  auditSuccess = false;
+  AUDIT.logWriteFailure(buildAuditMessageForFailure(
+  OMAction.APPLY_ALLOCATE_KEY,
+  (omKeyArgs == null) ? null : toAuditMap(omKeyArgs), ex));
+  throw ex;
+} finally {
+  if(auditSuccess){
+AUDIT.logWriteSuccess(buildAuditMessageForSuccess(
+OMAction.ALLOCATE_KEY, (omKeyArgs == null) ? null :
 
 Review comment:
   OMAction should be APPLY_ALLOCATE_KEY. 





[GitHub] [hadoop] hanishakoneru commented on a change in pull request #626: HDDS-1262. In OM HA OpenKey and initiateMultipartUpload call Should happen only leader OM.

2019-03-26 Thread GitBox
hanishakoneru commented on a change in pull request #626: HDDS-1262. In OM HA 
OpenKey and initiateMultipartUpload call Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/626#discussion_r268887028
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -2474,6 +2524,28 @@ public String getOzoneBucketMapping(String s3BucketName)
 }
   }
 
+
+  @Override
+  public OmMultipartInfo applyInitiateMultipartUpload(OmKeyArgs keyArgs,
+  String multipartUploadID) throws IOException {
+OmMultipartInfo multipartInfo;
+metrics.incNumInitiateMultipartUploads();
+try {
+  multipartInfo = keyManager.applyInitiateMultipartUpload(keyArgs,
+  multipartUploadID);
+  AUDIT.logWriteSuccess(buildAuditMessageForSuccess(
+  OMAction.INITIATE_MULTIPART_UPLOAD, (keyArgs == null) ? null :
+  keyArgs.toAuditMap()));
+} catch (IOException ex) {
+  AUDIT.logWriteFailure(buildAuditMessageForFailure(
+  OMAction.INITIATE_MULTIPART_UPLOAD,
+  (keyArgs == null) ? null : keyArgs.toAuditMap(), ex));
+  metrics.incNumInitiateMultipartUploadFails();
 
 Review comment:
   The metrics and audit log would be updated twice with the same OMAction 
(INITIATE_MULTIPART_UPLOAD). Can we create a new OMAction for this method as 
well?





[GitHub] [hadoop] hanishakoneru commented on a change in pull request #626: HDDS-1262. In OM HA OpenKey and initiateMultipartUpload call Should happen only leader OM.

2019-03-26 Thread GitBox
hanishakoneru commented on a change in pull request #626: HDDS-1262. In OM HA 
OpenKey and initiateMultipartUpload call Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/626#discussion_r268885709
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerStateMachine.java
 ##
 @@ -181,6 +297,17 @@ private TransactionContext handleAllocateBlock(
 
   }
 
+  /**
+   * Construct IOException message for failed requests in StartTransaction.
+   * @param omResponse
+   * @return
+   */
+  private IOException constructExceptionForFailedRequest(
+  OMResponse omResponse) {
+return new IOException(omResponse.getMessage() + " " +
+STATUS_CODE + omResponse.getStatus());
+  }
 
 Review comment:
   Instead of creating an IOException and then parsing the status code back at 
the client, can we use OMException instead? We can add the Status parameter to 
OMException.
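
   A minimal sketch of that suggestion, with a Status enum standing in for the 
OM protobuf status (illustrative only, not the actual OMException class):
{code}
import java.io.IOException;

class OMStatusException extends IOException {
  enum Status { OK, KEY_NOT_FOUND, INTERNAL_ERROR } // illustrative subset

  private final Status status;

  OMStatusException(String message, Status status) {
    super(message);
    this.status = status;
  }

  // The client reads the status directly instead of parsing message text.
  Status getStatus() { return status; }
}
{code}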





[GitHub] [hadoop] hadoop-yetus commented on issue #643: HDDS-1260. Create Recon Server lifecycle integration with Ozone.

2019-03-26 Thread GitBox
hadoop-yetus commented on issue #643: HDDS-1260. Create Recon Server lifecycle 
integration with Ozone.
URL: https://github.com/apache/hadoop/pull/643#issuecomment-476864906
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 7 | https://github.com/apache/hadoop/pull/643 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/643 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-643/7/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar.

2019-03-26 Thread GitBox
xiaoyuyao commented on a change in pull request #632: HDDS-1255. Refactor ozone 
acceptance test to allow run in secure mode. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/632#discussion_r269327752
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/commonlib.robot
 ##
 @@ -35,3 +41,51 @@ Compare files
 ${checksumbefore} = Executemd5sum ${file1} | 
awk '{print $1}'
 ${checksumafter} =  Executemd5sum ${file2} | 
awk '{print $1}'
 Should Be Equal${checksumbefore}   
 ${checksumafter}
+Execute AWSS3APICli
+[Arguments]   ${command}
+${output} =   Executeaws s3api --endpoint-url 
${ENDPOINT_URL} ${command}
+[return]  ${output}
+
+Execute AWSS3APICli and checkrc
+[Arguments]   ${command} ${expected_error_code}
+${output} =   Execute and checkrcaws s3api --endpoint-url 
${ENDPOINT_URL} ${command}  ${expected_error_code}
+[return]  ${output}
+
+Execute AWSS3Cli
+[Arguments]   ${command}
+${output} =   Execute aws s3 --endpoint-url 
${ENDPOINT_URL} ${command}
+[return]  ${output}
+
+Install aws cli s3 centos
+Executesudo yum install -y awscli
+
+Install aws cli s3 debian
+Executesudo apt-get install -y awscli
+
+Install aws cli
+${rc}  ${output} = Run And Return Rc And 
Output   which apt-get
+Run Keyword if '${rc}' == '0'  Install aws cli s3 debian
+${rc}  ${output} = Run And Return Rc And 
Output   yum --help
+Run Keyword if '${rc}' == '0'  Install aws cli s3 centos
+
+Kinit test user
+${hostname} =   Executehostname
+Set Suite Variable  ${TEST_USER}   
testuser/${hostname}@EXAMPLE.COM
+Execute kinit -k ${TEST_USER} -t 
/etc/security/keytabs/testuser.keytab
+
+Setup secure credentials
+Run Keyword Install aws cli
+Run Keyword Kinit test user
+${result} = Executeozone s3 getsecret
+${accessKey} =  Get Regexp Matches ${result} 
(?<=awsAccessKey=).*
+${secret} = Get Regexp Matches ${result} 
(?<=awsSecret=).*
+Executeaws configure set 
default.s3.signature_version s3v4
+Executeaws configure set 
aws_access_key_id ${accessKey[0]}
+Executeaws configure set 
aws_secret_access_key ${secret[0]}
+Executeaws configure set region 
us-west-1
+
+Setup incorrect credentials for S3
 
 Review comment:
   shall we move this to commonawslib.robot?





[jira] [Commented] (HADOOP-16199) KMSLoadBlanceClientProvider does not select token correctly

2019-03-26 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802189#comment-16802189
 ] 

Wei-Chiu Chuang commented on HADOOP-16199:
--

The added test is almost the same as testTokenServiceCreationWithUriFormat, 
added in HADOOP-15997, except that it configures the key provider explicitly.
{code:java}
String providerUriString = "kms://http@host1;host2;host3:9600/kms/foo";
conf.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_KEY_PROVIDER_PATH,
providerUriString);
{code}
After HADOOP-14445, if the KMS provider path is configured explicitly for the 
client, the expected behavior is that the client gets a KMS delegation token 
whose credential alias is one of the KMS URIs (randomly selected).

After HADOOP-14445, if the client instead gets the KMS URI in FsServerDefaults 
from the NameNode, it gets a delegation token whose credential alias is the 
concatenated string of KMS URIs.

Looking at the application log, my question is: why does the client have a KMS 
dt in the newer form rather than the old form ("host1:9600")? Is it expected?
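
(For illustration, the two alias forms being contrasted, using the strings 
from this comment; treat the exact formats as an assumption:)
{code:java}
public class KmsTokenAliasForms {
  public static void main(String[] args) {
    // Newer URI form (use.uri.format=true / explicit provider path):
    String uriFormAlias = "kms://http@host1;host2;host3:9600/kms/foo";
    // Older form: the host:port of a single KMS instance:
    String legacyAlias = "host1:9600";
    System.out.println(uriFormAlias + " vs " + legacyAlias);
  }
}
{code}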

> KMSLoadBlanceClientProvider does not select token correctly
> ---
>
> Key: HADOOP-16199
> URL: https://issues.apache.org/jira/browse/HADOOP-16199
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.2
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: kms
>
> After HADOOP-14445 and HADOOP-15997, there are still cases where 
> KMSLoadBlanceClientProvider does not select token correctly. 
> Here is the use case:
> The new configuration key 
> hadoop.security.kms.client.token.use.uri.format=true is set cross all the 
> cluster, including both Submitter and Yarn RM(renewer), which is not covered 
> in the test matrix in this [HADOOP-14445 
> comment|https://issues.apache.org/jira/browse/HADOOP-14445?focusedCommentId=16505761&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16505761].
> I will post the debug log and the proposed fix shortly, cc: [~xiaochen] and 
> [~jojochuang].






[jira] [Updated] (HADOOP-16208) Do Not Log InterruptedException in Client

2019-03-26 Thread David Mollitor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HADOOP-16208:

Status: Patch Available  (was: Open)

> Do Not Log InterruptedException in Client
> -
>
> Key: HADOOP-16208
> URL: https://issues.apache.org/jira/browse/HADOOP-16208
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HADOOP-16208.1.patch
>
>
> {code:java}
>    } catch (InterruptedException e) {
> Thread.currentThread().interrupt();
> LOG.warn("interrupted waiting to send rpc request to server", e);
> throw new IOException(e);
>   }
> {code:java}
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L1450
> I'm working on a project that uses an {{ExecutorService}} to launch a bunch 
> of threads.  Each thread spins up an HDFS client connection.  At any point in 
> time, the program can terminate and call {{ExecutorService#shutdownNow()}} to 
> forcibly close vis-à-vis {{Thread#interrupt()}}.  At that point, I get a 
> cascade of logging from the above code and there's no easy way to turn it 
> off.
> "Log and throw" is generally frowned upon, just throw the {{Exception}} and 
> move on.
> https://community.oracle.com/docs/DOC-983543#logAndThrow
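
A minimal sketch of the change being asked for here (the surrounding method is 
illustrative, not the actual Client.java code):
{code:java}
import java.io.IOException;

class SendRpcSketch {
  void waitToSendRpcRequest() throws IOException {
    try {
      Thread.sleep(1000L); // stand-in for waiting to send the rpc request
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt(); // preserve the interrupt status
      throw new IOException(e);           // propagate without LOG.warn
    }
  }
}
{code}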






[jira] [Updated] (HADOOP-16208) Do Not Log InterruptedException in Client

2019-03-26 Thread David Mollitor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HADOOP-16208:

Attachment: HADOOP-16208.1.patch

> Do Not Log InterruptedException in Client
> -
>
> Key: HADOOP-16208
> URL: https://issues.apache.org/jira/browse/HADOOP-16208
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HADOOP-16208.1.patch
>
>
> {code:java}
>    } catch (InterruptedException e) {
> Thread.currentThread().interrupt();
> LOG.warn("interrupted waiting to send rpc request to server", e);
> throw new IOException(e);
>   }
> {code:java}
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L1450
> I'm working on a project that uses an {{ExecutorService}} to launch a bunch 
> of threads.  Each thread spins up an HDFS client connection.  At any point in 
> time, the program can terminate and call {{ExecutorService#shutdownNow()}} to 
> forcibly close vis-à-vis {{Thread#interrupt()}}.  At that point, I get a 
> cascade of logging from the above code and there's no easy way to turn it 
> off.
> "Log and throw" is generally frowned upon, just throw the {{Exception}} and 
> move on.
> https://community.oracle.com/docs/DOC-983543#logAndThrow






[jira] [Updated] (HADOOP-16208) Do Not Log InterruptedException in Client

2019-03-26 Thread David Mollitor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HADOOP-16208:

Description: 
{code:java}
   } catch (InterruptedException e) {
Thread.currentThread().interrupt();
LOG.warn("interrupted waiting to send rpc request to server", e);
throw new IOException(e);
  }
{code}

https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L1450

I'm working on a project that uses an {{ExecutorService}} to launch a bunch of 
threads.  Each thread spins up an HDFS client connection.  At any point in 
time, the program can terminate and call {{ExecutorService#shutdownNow()}} to 
forcibly close vis-à-vis {{Thread#interrupt()}}.  At that point, I get a 
cascade of logging from the above code and there's no easy way to turn it 
off.

"Log and throw" is generally frowned upon, just throw the {{Exception}} and 
move on.

https://community.oracle.com/docs/DOC-983543#logAndThrow



  was:
{code:java}
   } catch (InterruptedException e) {
Thread.currentThread().interrupt();
LOG.warn("interrupted waiting to send rpc request to server", e);
throw new IOException(e);
  }
{code:java}

https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L1450

I'm working on a project that uses an {{ExecutorService}} to launch a bunch of 
threads.  Each thread spins up an HDFS client connection.  At any point in 
time, the program can terminate and call {{ExecutorService#shutdownNow()}} to 
forcibly close vis-à-vis {{Thread#interrupt()}}.  At that point, I get a 
cascade of logging from the above code and there's no easy way to turn it 
off.

"Log and throw" is generally frowned upon, just throw the {{Exception}} and 
move on.

https://community.oracle.com/docs/DOC-983543#logAndThrow




> Do Not Log InterruptedException in Client
> -
>
> Key: HADOOP-16208
> URL: https://issues.apache.org/jira/browse/HADOOP-16208
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HADOOP-16208.1.patch
>
>
> {code:java}
>    } catch (InterruptedException e) {
> Thread.currentThread().interrupt();
> LOG.warn("interrupted waiting to send rpc request to server", e);
> throw new IOException(e);
>   }
> {code}
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L1450
> I'm working on a project that uses an {{ExecutorService}} to launch a bunch 
> of threads.  Each thread spins up an HDFS client connection.  At any point in 
> time, the program can terminate and call {{ExecutorService#shutdownNow()}} to 
> forcibly close vis-à-vis {{Thread#interrupt()}}.  At that point, I get a 
> cascade of logging from the above code and there's no easy way to turn it 
> off.
> "Log and throw" is generally frowned upon, just throw the {{Exception}} and 
> move on.
> https://community.oracle.com/docs/DOC-983543#logAndThrow






[jira] [Created] (HADOOP-16208) Do Not Log InterruptedException in Client

2019-03-26 Thread David Mollitor (JIRA)
David Mollitor created HADOOP-16208:
---

 Summary: Do Not Log InterruptedException in Client
 Key: HADOOP-16208
 URL: https://issues.apache.org/jira/browse/HADOOP-16208
 Project: Hadoop Common
  Issue Type: Improvement
  Components: common
Affects Versions: 3.2.0
Reporter: David Mollitor
Assignee: David Mollitor


{code:java}
   } catch (InterruptedException e) {
Thread.currentThread().interrupt();
LOG.warn("interrupted waiting to send rpc request to server", e);
throw new IOException(e);
  }
{code:java}

https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L1450

I'm working on a project that uses an {{ExecutorService}} to launch a bunch of 
threads.  Each thread spins up an HDFS client connection.  At any point in 
time, the program can terminate and call {{ExecutorService#shutdownNow()}} to 
forcibly close vis-à-vis {{Thread#interrupt()}}.  At that point, I get a 
cascade of logging from the above code and there's no easy way to turn it 
off.

"Log and throw" is generally frowned upon, just throw the {{Exception}} and 
move on.

https://community.oracle.com/docs/DOC-983543#logAndThrow








[GitHub] [hadoop] hadoop-yetus commented on issue #645: HADOOP-16132 Support multipart download in S3AFileSystem

2019-03-26 Thread GitBox
hadoop-yetus commented on issue #645: HADOOP-16132 Support multipart download 
in S3AFileSystem
URL: https://github.com/apache/hadoop/pull/645#issuecomment-476849744
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 23 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1005 | trunk passed |
   | +1 | compile | 29 | trunk passed |
   | +1 | checkstyle | 19 | trunk passed |
   | +1 | mvnsite | 33 | trunk passed |
   | +1 | shadedclient | 706 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 41 | trunk passed |
   | +1 | javadoc | 25 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 29 | the patch passed |
   | +1 | compile | 28 | the patch passed |
   | +1 | javac | 28 | the patch passed |
   | +1 | checkstyle | 17 | the patch passed |
   | +1 | mvnsite | 32 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 713 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 48 | the patch passed |
   | +1 | javadoc | 19 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 275 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 24 | The patch does not generate ASF License warnings. |
   | | | 3154 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-645/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/645 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 61df0233a9b8 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ce4bafd |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-645/2/testReport/ |
   | Max. process+thread count | 468 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-645/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #641: HDDS-1318. Fix MalformedTracerStateStringException on DN logs. Contributed by Xiaoyu Yao.

2019-03-26 Thread GitBox
xiaoyuyao commented on a change in pull request #641: HDDS-1318. Fix 
MalformedTracerStateStringException on DN logs. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/641#discussion_r269311007
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java
 ##
 @@ -217,7 +220,21 @@ public XceiverClientReply sendCommand(
   ContainerCommandRequestProto request, List excludeDns)
   throws IOException {
 Preconditions.checkState(HddsUtils.isReadOnly(request));
-return sendCommandWithRetry(request, excludeDns);
+return sendCommandWithTraceIDAndRetry(request, excludeDns);
 
 Review comment:
   Unfortunately, I'm not aware of a switch to turn tracing off globally. That 
would be a much bigger change than the scope of this ticket.
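
   For illustration, a hedged sketch of attaching a valid span around a single 
send using the OpenTracing API (the class and operation names are made up for 
the example):
{code}
import io.opentracing.Span;
import io.opentracing.util.GlobalTracer;

class TracedSendSketch {
  void sendWithTrace(Runnable send) {
    // With a real span active, the serialized trace id is well-formed instead
    // of the empty/garbled string the datanode side was rejecting.
    Span span = GlobalTracer.get()
        .buildSpan("XceiverClientGrpc.sendCommand").start();
    try {
      send.run();
    } finally {
      span.finish();
    }
  }
}
{code}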





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #641: HDDS-1318. Fix MalformedTracerStateStringException on DN logs. Contributed by Xiaoyu Yao.

2019-03-26 Thread GitBox
xiaoyuyao commented on a change in pull request #641: HDDS-1318. Fix 
MalformedTracerStateStringException on DN logs. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/641#discussion_r269311177
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/tracing/StringCodec.java
 ##
 @@ -25,12 +25,15 @@
 import io.jaegertracing.internal.exceptions.TraceIdOutOfBoundException;
 import io.jaegertracing.spi.Codec;
 import io.opentracing.propagation.Format;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * A jaeger codec to save the current tracing context t a string.
 
 Review comment:
   sure, will fix it in next commit.





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #641: HDDS-1318. Fix MalformedTracerStateStringException on DN logs. Contributed by Xiaoyu Yao.

2019-03-26 Thread GitBox
xiaoyuyao commented on a change in pull request #641: HDDS-1318. Fix 
MalformedTracerStateStringException on DN logs. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/641#discussion_r269310591
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
 ##
 @@ -919,13 +926,19 @@ public void testGetKey() throws Exception {
 bucket.createKey(keyName, dataStr.length());
 keyOutputStream.write(dataStr.getBytes());
 keyOutputStream.close();
+assertFalse("put key without malformed tracing",
+logs.getOutput().contains("MalformedTracerStateString"));
+logs.clearOutput();
 
 String tmpPath = baseDir.getAbsolutePath() + "/testfile-"
 + UUID.randomUUID().toString();
 String[] args = new String[] {"key", "get",
 url + "/" + volumeName + "/" + bucketName + "/" + keyName,
 tmpPath};
 execute(shell, args);
+assertFalse("get key without malformed tracing",
 
 Review comment:
   The malformed trace can easily be reproduced without the production-code fix 
when getKey is called (e.g., in this test). 
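   
   The log-scraping pattern the quoted test relies on can be shown standalone. 
This sketch assumes Hadoop's GenericTestUtils.LogCapturer with the 
getOutput()/clearOutput() calls used in the diff above; the logger and the 
logged message are placeholders:
   
   ```java
   import static org.junit.Assert.assertFalse;
   
   import org.apache.hadoop.test.GenericTestUtils;
   import org.junit.Test;
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;
   
   public class TestNoMalformedTrace {
     private static final Logger LOG =
         LoggerFactory.getLogger(TestNoMalformedTrace.class);
   
     @Test
     public void commandEmitsNoMalformedTracerWarning() {
       GenericTestUtils.LogCapturer logs =
           GenericTestUtils.LogCapturer.captureLogs(LOG);
       LOG.info("put key completed");   // stand-in for running the shell command
       // Mirror the quoted test: scan captured output for the exception name
       // rather than inspecting tracer internals.
       assertFalse("unexpected malformed tracer state in logs",
           logs.getOutput().contains("MalformedTracerStateString"));
       logs.clearOutput();              // reset before exercising the next command
     }
   }
   ```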


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16115) [JDK 11] TestHttpServer#testJersey fails

2019-03-26 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16115:
---
Summary: [JDK 11] TestHttpServer#testJersey fails  (was: [JDK 11] 
TestJersey fails)

> [JDK 11] TestHttpServer#testJersey fails
> 
>
> Key: HADOOP-16115
> URL: https://issues.apache.org/jira/browse/HADOOP-16115
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Priority: Major
>
> {noformat}
> [INFO] Running org.apache.hadoop.http.TestHttpServer
> [ERROR] Tests run: 26, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 5.954 s <<< FAILURE! - in org.apache.hadoop.http.TestHttpServer
> [ERROR] testJersey(org.apache.hadoop.http.TestHttpServer)  Time elapsed: 
> 0.128 s  <<< ERROR!
> java.io.IOException: Server returned HTTP response code: 500 for URL: 
> http://localhost:40339/jersey/foo?op=bar
>   at 
> java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1913)
>   at 
> java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1509)
>   at 
> org.apache.hadoop.http.HttpServerFunctionalTest.readOutput(HttpServerFunctionalTest.java:260)
>   at 
> org.apache.hadoop.http.TestHttpServer.testJersey(TestHttpServer.java:526)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16132) Support multipart download in S3AFileSystem

2019-03-26 Thread Justin Uang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802119#comment-16802119
 ] 

Justin Uang commented on HADOOP-16132:
--

[~gabor.bota], I just rebased it and pushed the new change here: 
[https://github.com/apache/hadoop/pull/645]. I would really appreciate your 
comments!

> Support multipart download in S3AFileSystem
> ---
>
> Key: HADOOP-16132
> URL: https://issues.apache.org/jira/browse/HADOOP-16132
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Justin Uang
>Priority: Major
> Attachments: HADOOP-16132.001.patch, HADOOP-16132.002.patch, 
> HADOOP-16132.003.patch, HADOOP-16132.004.patch, HADOOP-16132.005.patch, 
> seek-logs-parquet.txt
>
>
> I noticed that I get 150MB/s when I use the AWS CLI
> {code:java}
> aws s3 cp s3:/// - > /dev/null{code}
> vs 50MB/s when I use the S3AFileSystem
> {code:java}
> hadoop fs -cat s3:/// > /dev/null{code}
> Looking into the AWS CLI code, it looks like the 
> [download|https://github.com/boto/s3transfer/blob/ca0b708ea8a6a1213c6e21ca5a856e184f824334/s3transfer/download.py]
>  logic is quite clever. It downloads the next couple parts in parallel using 
> range requests, and then buffers them in memory in order to reorder them and 
> expose a single contiguous stream. I translated the logic to Java and 
> modified the S3AFileSystem to do similar things, and am able to achieve 
> 150MB/s download speeds as well. It is mostly done but I have some things to 
> clean up first. The PR is here: 
> https://github.com/palantir/hadoop/pull/47/files
> It would be great to get some other eyes on it to see what we need to do to 
> get it merged.
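
For readers skimming, a minimal sketch of the reorder-buffer idea described 
above, with a stand-in for the ranged GET (this illustrates the technique; it 
is not the PR's code): ranges are fetched in parallel, but consuming the 
futures in submission order re-exposes one contiguous stream.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelRangeDownload {
  // Stand-in for an S3 ranged GET of [offset, offset + len).
  static byte[] fetchRange(long offset, int len) {
    byte[] part = new byte[len];
    for (int i = 0; i < len; i++) {
      part[i] = (byte) ((offset + i) % 127);
    }
    return part;
  }

  public static void main(String[] args) throws Exception {
    final long objectSize = 1L << 20;        // pretend 1 MiB object
    final int partSize = 128 * 1024;         // bytes per range request
    ExecutorService pool = Executors.newFixedThreadPool(4);

    // Submit all ranges up front; the futures list preserves object order,
    // so draining it sequentially yields a single contiguous stream even
    // though parts complete out of order.
    List<Future<byte[]>> parts = new ArrayList<>();
    for (long off = 0; off < objectSize; off += partSize) {
      final long offset = off;
      final int len = (int) Math.min(partSize, objectSize - offset);
      parts.add(pool.submit(() -> fetchRange(offset, len)));
    }

    long total = 0;
    for (Future<byte[]> part : parts) {
      total += part.get().length;            // blocks until the next in-order part
    }
    pool.shutdown();
    System.out.println("reassembled " + total + " bytes in order");
  }
}
{code}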



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ajayydv commented on a change in pull request #641: HDDS-1318. Fix MalformedTracerStateStringException on DN logs. Contributed by Xiaoyu Yao.

2019-03-26 Thread GitBox
ajayydv commented on a change in pull request #641: HDDS-1318. Fix 
MalformedTracerStateStringException on DN logs. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/641#discussion_r269293268
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java
 ##
 @@ -217,7 +220,21 @@ public XceiverClientReply sendCommand(
   ContainerCommandRequestProto request, List excludeDns)
   throws IOException {
 Preconditions.checkState(HddsUtils.isReadOnly(request));
-return sendCommandWithRetry(request, excludeDns);
+return sendCommandWithTraceIDAndRetry(request, excludeDns);
 
 Review comment:
   Shall we do this only when tracing is enabled?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ajayydv commented on a change in pull request #641: HDDS-1318. Fix MalformedTracerStateStringException on DN logs. Contributed by Xiaoyu Yao.

2019-03-26 Thread GitBox
ajayydv commented on a change in pull request #641: HDDS-1318. Fix 
MalformedTracerStateStringException on DN logs. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/641#discussion_r269292509
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
 ##
 @@ -919,13 +926,19 @@ public void testGetKey() throws Exception {
 bucket.createKey(keyName, dataStr.length());
 keyOutputStream.write(dataStr.getBytes());
 keyOutputStream.close();
+assertFalse("put key without malformed tracing",
+logs.getOutput().contains("MalformedTracerStateString"));
+logs.clearOutput();
 
 String tmpPath = baseDir.getAbsolutePath() + "/testfile-"
 + UUID.randomUUID().toString();
 String[] args = new String[] {"key", "get",
 url + "/" + volumeName + "/" + bucketName + "/" + keyName,
 tmpPath};
 execute(shell, args);
+assertFalse("get key without malformed tracing",
 
 Review comment:
   Shall we check the case when it is malformed?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16132) Support multipart download in S3AFileSystem

2019-03-26 Thread Justin Uang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Uang updated HADOOP-16132:
-
Status: Open  (was: Patch Available)

> Support multipart download in S3AFileSystem
> ---
>
> Key: HADOOP-16132
> URL: https://issues.apache.org/jira/browse/HADOOP-16132
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Justin Uang
>Priority: Major
> Attachments: HADOOP-16132.001.patch, HADOOP-16132.002.patch, 
> HADOOP-16132.003.patch, HADOOP-16132.004.patch, HADOOP-16132.005.patch, 
> seek-logs-parquet.txt
>
>
> I noticed that I get 150MB/s when I use the AWS CLI
> {code:java}
> aws s3 cp s3:/// - > /dev/null{code}
> vs 50MB/s when I use the S3AFileSystem
> {code:java}
> hadoop fs -cat s3:/// > /dev/null{code}
> Looking into the AWS CLI code, it looks like the 
> [download|https://github.com/boto/s3transfer/blob/ca0b708ea8a6a1213c6e21ca5a856e184f824334/s3transfer/download.py]
>  logic is quite clever. It downloads the next couple parts in parallel using 
> range requests, and then buffers them in memory in order to reorder them and 
> expose a single contiguous stream. I translated the logic to Java and 
> modified the S3AFileSystem to do similar things, and am able to achieve 
> 150MB/s download speeds as well. It is mostly done but I have some things to 
> clean up first. The PR is here: 
> https://github.com/palantir/hadoop/pull/47/files
> It would be great to get some other eyes on it to see what we need to do to 
> get it merged.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ajayydv commented on a change in pull request #641: HDDS-1318. Fix MalformedTracerStateStringException on DN logs. Contributed by Xiaoyu Yao.

2019-03-26 Thread GitBox
ajayydv commented on a change in pull request #641: HDDS-1318. Fix 
MalformedTracerStateStringException on DN logs. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/641#discussion_r269291995
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/tracing/StringCodec.java
 ##
 @@ -25,12 +25,15 @@
 import io.jaegertracing.internal.exceptions.TraceIdOutOfBoundException;
 import io.jaegertracing.spi.Codec;
 import io.opentracing.propagation.Format;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * A jaeger codec to save the current tracing context t a string.
 
 Review comment:
   can we fix this typo as well?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #645: HADOOP-16132 Support multipart download in S3AFileSystem

2019-03-26 Thread GitBox
hadoop-yetus commented on issue #645: HADOOP-16132 Support multipart download 
in S3AFileSystem
URL: https://github.com/apache/hadoop/pull/645#issuecomment-476818930
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 987 | trunk passed |
   | +1 | compile | 31 | trunk passed |
   | +1 | checkstyle | 23 | trunk passed |
   | +1 | mvnsite | 34 | trunk passed |
   | +1 | shadedclient | 713 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 45 | trunk passed |
   | +1 | javadoc | 25 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 30 | the patch passed |
   | +1 | compile | 28 | the patch passed |
   | +1 | javac | 28 | the patch passed |
   | -0 | checkstyle | 18 | hadoop-tools/hadoop-aws: The patch generated 8 new 
+ 5 unchanged - 0 fixed = 13 total (was 5) |
   | +1 | mvnsite | 32 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 735 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 50 | the patch passed |
   | +1 | javadoc | 22 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 272 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 3186 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-645/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/645 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 215678f1cf61 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ce4bafd |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-645/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-645/1/testReport/ |
   | Max. process+thread count | 410 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-645/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #612: HDDS-1285. Implement actions need to be taken after chill mode exit w…

2019-03-26 Thread GitBox
hadoop-yetus commented on issue #612: HDDS-1285. Implement actions need to be 
taken after chill mode exit w…
URL: https://github.com/apache/hadoop/pull/612#issuecomment-476818806
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 12 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 56 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1068 | trunk passed |
   | +1 | compile | 947 | trunk passed |
   | +1 | checkstyle | 213 | trunk passed |
   | +1 | mvnsite | 75 | trunk passed |
   | +1 | shadedclient | 1029 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 51 | trunk passed |
   | +1 | javadoc | 54 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 64 | the patch passed |
   | +1 | compile | 878 | the patch passed |
   | +1 | javac | 878 | the patch passed |
   | +1 | checkstyle | 207 | the patch passed |
   | +1 | mvnsite | 76 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 718 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 52 | the patch passed |
   | +1 | javadoc | 53 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 97 | server-scm in the patch passed. |
   | +1 | unit | 604 | integration-test in the patch passed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 6296 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-612/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/612 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 4bbc9fb63d7d 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 82d4772 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-612/4/testReport/ |
   | Max. process+thread count | 4709 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-612/4/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16037) DistCp: Document usage of Sync (-diff option) in detail

2019-03-26 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802092#comment-16802092
 ] 

Siyao Meng commented on HADOOP-16037:
-

Thanks for committing, [~ste...@apache.org]!

> DistCp: Document usage of Sync (-diff option) in detail
> ---
>
> Key: HADOOP-16037
> URL: https://issues.apache.org/jira/browse/HADOOP-16037
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation, tools/distcp
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.2.1
>
> Attachments: HADOOP-16037.001.patch
>
>
> Create a new doc section similar to "Update and Overwrite" for -diff option. 
> Provide step by step guidance.
> Current doc link: 
> https://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16037) DistCp: Document usage of Sync (-diff option) in detail

2019-03-26 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802078#comment-16802078
 ] 

Hudson commented on HADOOP-16037:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16287 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16287/])
HADOOP-16037. DistCp: Document usage of Sync (-diff option) in detail. (stevel: 
rev ce4bafdf442c004b6deb25eaa2fa7e947b8ad269)
* (edit) hadoop-tools/hadoop-distcp/src/site/markdown/DistCp.md.vm


> DistCp: Document usage of Sync (-diff option) in detail
> ---
>
> Key: HADOOP-16037
> URL: https://issues.apache.org/jira/browse/HADOOP-16037
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation, tools/distcp
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.2.1
>
> Attachments: HADOOP-16037.001.patch
>
>
> Create a new doc section similar to "Update and Overwrite" for -diff option. 
> Provide step by step guidance.
> Current doc link: 
> https://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] justinuang opened a new pull request #645: HADOOP-16132 Support multipart download in S3AFileSystem

2019-03-26 Thread GitBox
justinuang opened a new pull request #645: HADOOP-16132 Support multipart 
download in S3AFileSystem
URL: https://github.com/apache/hadoop/pull/645
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16037) DistCp: Document usage of Sync (-diff option) in detail

2019-03-26 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16037:

   Resolution: Fixed
Fix Version/s: 3.2.1
   Status: Resolved  (was: Patch Available)

+1, committed to branch-3.2 and trunk

> DistCp: Document usage of Sync (-diff option) in detail
> ---
>
> Key: HADOOP-16037
> URL: https://issues.apache.org/jira/browse/HADOOP-16037
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation, tools/distcp
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.2.1
>
> Attachments: HADOOP-16037.001.patch
>
>
> Create a new doc section similar to "Update and Overwrite" for -diff option. 
> Provide step by step guidance.
> Current doc link: 
> https://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #630: HADOOP-15999 S3Guard OOB: improve test resilience and probes

2019-03-26 Thread GitBox
hadoop-yetus commented on issue #630: HADOOP-15999 S3Guard OOB: improve test 
resilience and probes
URL: https://github.com/apache/hadoop/pull/630#issuecomment-476775614
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1181 | trunk passed |
   | +1 | compile | 954 | trunk passed |
   | +1 | checkstyle | 204 | trunk passed |
   | +1 | mvnsite | 118 | trunk passed |
   | +1 | shadedclient | 1079 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 156 | trunk passed |
   | +1 | javadoc | 91 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 20 | Maven dependency ordering for patch |
   | +1 | mvninstall | 77 | the patch passed |
   | +1 | compile | 893 | the patch passed |
   | +1 | javac | 893 | the patch passed |
   | -0 | checkstyle | 212 | root: The patch generated 3 new + 6 unchanged - 0 
fixed = 9 total (was 6) |
   | +1 | mvnsite | 117 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 722 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 170 | the patch passed |
   | +1 | javadoc | 88 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 525 | hadoop-common in the patch passed. |
   | +1 | unit | 279 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 6912 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-630/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/630 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux eab766e33607 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 82d4772 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-630/3/artifact/out/diff-checkstyle-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-630/3/testReport/ |
   | Max. process+thread count | 1344 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-630/3/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] avijayanhwx commented on issue #643: HDDS-1260. Create Recon Server lifecycle integration with Ozone.

2019-03-26 Thread GitBox
avijayanhwx commented on issue #643: HDDS-1260. Create Recon Server lifecycle 
integration with Ozone.
URL: https://github.com/apache/hadoop/pull/643#issuecomment-476765966
 
 
   LGTM +1. We can build a proper UI to serve up container-key mapping in 
[HDDS-1335](https://issues.apache.org/jira/browse/HDDS-1335)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15960) Update guava to 27.0-jre in hadoop-common

2019-03-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16801974#comment-16801974
 ] 

Hadoop QA commented on HADOOP-15960:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  5m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
42s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
42s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
46s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
28s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
7s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
32s{color} | {color:green} branch-3.2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 15m 46s{color} 
| {color:red} root generated 13 new + 1326 unchanged - 1 fixed = 1339 total 
(was 1327) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 50s{color} | {color:orange} root: The patch generated 1 new + 61 unchanged - 
1 fixed = 62 total (was 62) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
42s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
13s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
35s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 45s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 
33s{color} | {color:green} ha

[GitHub] [hadoop] hadoop-yetus commented on issue #612: HDDS-1285. Implement actions need to be taken after chill mode exit w…

2019-03-26 Thread GitBox
hadoop-yetus commented on issue #612: HDDS-1285. Implement actions need to be 
taken after chill mode exit w…
URL: https://github.com/apache/hadoop/pull/612#issuecomment-476750287
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 6 | https://github.com/apache/hadoop/pull/612 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/612 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-612/3/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15960) Update guava to 27.0-jre in hadoop-common

2019-03-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16801924#comment-16801924
 ] 

Hadoop QA commented on HADOOP-15960:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m  
4s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.1 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
51s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
19s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
19s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
52s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
33s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
50s{color} | {color:green} branch-3.1 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
31s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 12m 31s{color} 
| {color:red} root generated 13 new + 1275 unchanged - 1 fixed = 1288 total 
(was 1276) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 16s{color} | {color:orange} root: The patch generated 1 new + 60 unchanged - 
1 fixed = 61 total (was 61) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
44s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
21s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
28s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 17s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m  
5s{color} | {color:green} ha

[jira] [Commented] (HADOOP-16037) DistCp: Document usage of Sync (-diff option) in detail

2019-03-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16801913#comment-16801913
 ] 

Hadoop QA commented on HADOOP-16037:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
28m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16037 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12963752/HADOOP-16037.001.patch
 |
| Optional Tests |  dupname  asflicense  mvnsite  |
| uname | Linux 55802bd24553 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5c0a81a |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 454 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16076/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> DistCp: Document usage of Sync (-diff option) in detail
> ---
>
> Key: HADOOP-16037
> URL: https://issues.apache.org/jira/browse/HADOOP-16037
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation, tools/distcp
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HADOOP-16037.001.patch
>
>
> Create a new doc section similar to "Update and Overwrite" for -diff option. 
> Provide step by step guidance.
> Current doc link: 
> https://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15960) Update guava to 27.0-jre in hadoop-common

2019-03-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16801903#comment-16801903
 ] 

Hadoop QA commented on HADOOP-15960:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
45s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.0 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
26s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
25s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 3s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
29s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
30s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
46s{color} | {color:green} branch-3.0 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 13m 19s{color} 
| {color:red} root generated 10 new + 1253 unchanged - 1 fixed = 1263 total 
(was 1254) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m  9s{color} | {color:orange} root: The patch generated 1 new + 63 unchanged - 
1 fixed = 64 total (was 64) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
51s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
45s{color} | {color:red} hadoop-common-project/hadoop-kms generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
21s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
25s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} 

[jira] [Commented] (HADOOP-16206) Migrate from Log4j1 to Log4j2

2019-03-26 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16801894#comment-16801894
 ] 

Akira Ajisaka commented on HADOOP-16206:


Maybe the target is 3.3 or 3.4. I think this issue will take a few months or more.

> Migrate from Log4j1 to Log4j2
> -
>
> Key: HADOOP-16206
> URL: https://issues.apache.org/jira/browse/HADOOP-16206
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Priority: Major
>
> This sub-task is to remove log4j1 dependency and add log4j2 dependency.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #624: HADOOP-15999. S3Guard: Better support for out-of-band operations

2019-03-26 Thread GitBox
steveloughran commented on issue #624: HADOOP-15999. S3Guard: Better support 
for out-of-band operations
URL: https://github.com/apache/hadoop/pull/624#issuecomment-476719352
 
 
   OK, updated the test to use `eventually()`; please check my branch for that 
patch.
   
   If you can run it and the tests work, all is well. If they fail, then we may 
have more insight into what is wrong.
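   
   For readers unfamiliar with the pattern, a generic retry-until-deadline 
stand-in for `eventually()` (a sketch of the idea, not Hadoop's LambdaTestUtils 
API):
   
   ```java
   import java.util.concurrent.Callable;
   
   public final class Eventually {
     /** Re-run the probe until it stops failing or the deadline passes. */
     public static <T> T eventually(long timeoutMs, long intervalMs,
         Callable<T> probe) throws Exception {
       long deadline = System.currentTimeMillis() + timeoutMs;
       while (true) {
         try {
           return probe.call();               // success: hand back the result
         } catch (Exception | AssertionError e) {
           if (System.currentTimeMillis() >= deadline) {
             throw e;                         // out of time: rethrow last failure
           }
           Thread.sleep(intervalMs);          // back off, then reprobe
         }
       }
     }
   
     public static void main(String[] args) throws Exception {
       long start = System.currentTimeMillis();
       // Simulates an eventually-consistent listing that becomes visible
       // roughly 300 ms after the write.
       String v = eventually(5_000, 100, () -> {
         if (System.currentTimeMillis() - start < 300) {
           throw new AssertionError("listing not yet consistent");
         }
         return "visible";
       });
       System.out.println(v);
     }
   }
   ```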


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #630: HADOOP-15999 S3Guard OOB: improve test resilience and probes

2019-03-26 Thread GitBox
steveloughran commented on issue #630: HADOOP-15999 S3Guard OOB: improve test 
resilience and probes
URL: https://github.com/apache/hadoop/pull/630#issuecomment-476718824
 
 
   Updated the test with eventually() called around operations where eventual 
consistency is possible, including some of the assertions. The only thing that 
isn't checked is the S3Guard list operations.
   
   Tested: S3 Ireland with DDB, both standalone and in parallel runs.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


