steveloughran commented on a change in pull request #2455:
URL: https://github.com/apache/hadoop/pull/2455#discussion_r522082082



##########
File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java
##########
@@ -578,25 +583,31 @@ private void closeStream(String reason, long length, 
boolean forceAbort) {
           streamStatistics.streamClose(false, drained);
         } catch (Exception e) {
           // exception escalates to an abort
-          LOG.debug("When closing {} stream for {}", uri, reason, e);
+          LOG.warn("When closing {} stream for {}, will abort the stream", 
uri, reason, e);
           shouldAbort = true;
         }
       }
       if (shouldAbort) {
         // Abort, rather than just close, the underlying stream.  Otherwise, 
the
         // remaining object payload is read from S3 while closing the stream.
-        LOG.debug("Aborting stream");
-        wrappedStream.abort();
+        LOG.warn("Aborting stream {}", uri);

Review comment:
       revert to debug. We abort() a lot as we seek around files if the read 
policy != random, and we don't want application logs overloaded with messages
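
   Something like the original lines, i.e.:
   ```java
   // Abort, rather than just close, the underlying stream.  Otherwise, the
   // remaining object payload is read from S3 while closing the stream.
   LOG.debug("Aborting stream {}", uri);
   wrappedStream.abort();
   ```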

##########
File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java
##########
@@ -87,6 +87,7 @@
    * set
    */
   private volatile boolean closed;
+  private S3Object object;

Review comment:
       can you add a javadoc comment explaining why this is needed? e.g. "we 
have to keep a reference to this to stop garbage collection breaking the input 
stream"
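
   For example (exact wording is just a suggestion):
   ```java
   /**
    * Reference to the S3 object being read.
    * We have to keep a reference to this to stop garbage collection
    * of the object from breaking the wrapped input stream.
    */
   private S3Object object;
   ```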

##########
File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java
##########
@@ -431,8 +432,9 @@ public synchronized int read() throws IOException {
   private void onReadFailure(IOException ioe, int length, boolean forceAbort)
           throws IOException {
 
-    LOG.info("Got exception while trying to read from stream {}" +
-        " trying to recover: " + ioe, uri);
+    LOG.info("Got exception while trying to read from stream " + uri
+        + ", client: " + client + " object: " + object

Review comment:
       why the change? The previous one used SLF4J expansion, and deliberately 
only printed the IOE's message, not the stack.
   If you want the stack, add a separate log at debug with the full details, 
and use SLF4J expansion of {} elements in the log string
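
   A sketch of what that split could look like (the debug line is an 
addition, not existing code):
   ```java
   // info: SLF4J {} expansion, message only, no stack trace
   LOG.info("Got exception while trying to read from stream {}, trying to recover: {}",
       uri, ioe.toString());
   // debug: full exception with stack trace for troubleshooting
   LOG.debug("While reading {}", uri, ioe);
   ```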

##########
File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java
##########
@@ -550,14 +552,17 @@ public synchronized void close() throws IOException {
    */
   @Retries.OnceRaw
   private void closeStream(String reason, long length, boolean forceAbort) {
-    if (isObjectStreamOpen()) {
+    if (!isObjectStreamOpen())

Review comment:
       nit: wrap in { }. Also add a comment "stream is already closed"
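
   i.e. something like (assuming the early return from the rest of the diff):
   ```java
   if (!isObjectStreamOpen()) {
     // stream is already closed
     return;
   }
   ```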




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]