[ https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15575361#comment-15575361 ]

ASF GitHub Bot commented on HADOOP-13560:
-----------------------------------------

Github user steveloughran commented on a diff in the pull request:

    https://github.com/apache/hadoop/pull/130#discussion_r83423063
  
    --- Diff: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java ---
    @@ -118,21 +126,37 @@
       private long partSize;
       private boolean enableMultiObjectsDelete;
       private TransferManager transfers;
    -  private ExecutorService threadPoolExecutor;
    +  private ListeningExecutorService threadPoolExecutor;
       private long multiPartThreshold;
       public static final Logger LOG = LoggerFactory.getLogger(S3AFileSystem.class);
    +  private static final Logger PROGRESS =
    +      LoggerFactory.getLogger("org.apache.hadoop.fs.s3a.S3AFileSystem.Progress");
    +  private LocalDirAllocator directoryAllocator;
       private CannedAccessControlList cannedACL;
       private String serverSideEncryptionAlgorithm;
       private S3AInstrumentation instrumentation;
       private S3AStorageStatistics storageStatistics;
       private long readAhead;
       private S3AInputPolicy inputPolicy;
    -  private static final AtomicBoolean warnedOfCoreThreadDeprecation =
    -      new AtomicBoolean(false);
       private final AtomicBoolean closed = new AtomicBoolean(false);
     
       // The maximum number of entries that can be deleted in any call to s3
       private static final int MAX_ENTRIES_TO_DELETE = 1000;
    +  private boolean blockUploadEnabled;
    +  private String blockOutputBuffer;
    +  private S3ADataBlocks.BlockFactory blockFactory;
    +  private int blockOutputActiveBlocks;
    +
    +  /*
    +   * Register Deprecated options.
    +   */
    +  static {
    +    Configuration.addDeprecations(new Configuration.DeprecationDelta[]{
    +        new Configuration.DeprecationDelta("fs.s3a.threads.core",
    +            null,
    --- End diff ---
    
    I've just cut that section entirely. That's harsh, but, well, the fast
    output stream was always marked as experimental ... we've learned from the
    experiment and are now changing behaviour here, which is something we can
    look at covering in the release notes. I'll add that to the JIRA.


> S3ABlockOutputStream to support huge (many GB) file writes
> ----------------------------------------------------------
>
>                 Key: HADOOP-13560
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13560
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 2.9.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>         Attachments: HADOOP-13560-branch-2-001.patch, 
> HADOOP-13560-branch-2-002.patch, HADOOP-13560-branch-2-003.patch, 
> HADOOP-13560-branch-2-004.patch
>
>
> An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights
> that metadata isn't copied on large copies.
> 1. Add a test to do that large copy/rename and verify that the copy really
> works.
> 2. Verify that the metadata makes it over.
> Verifying large file rename is important on its own, as it is needed for very
> large commit operations by committers that use rename.
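A hypothetical shape for the test in points 1 and 2 above, written against the
generic Hadoop FileSystem API; the bucket name, paths, and 5 GB size are
illustrative, and checking the raw S3 user metadata would need the AWS SDK
client directly, which this sketch leaves out:

    import java.io.OutputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    /** Sketch: write a multi-GB object, rename it, verify it survived. */
    public class HugeRenameSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path src = new Path("s3a://example-bucket/tests/huge-src.bin");
        Path dst = new Path("s3a://example-bucket/tests/huge-dst.bin");
        FileSystem fs = src.getFileSystem(conf);

        // Write well past the multipart threshold so the rename's server-side
        // copy must be a multipart copy -- the case SDK issue 367 describes.
        final long size = 5L * 1024 * 1024 * 1024;
        byte[] block = new byte[1024 * 1024];
        try (OutputStream out = fs.create(src, true)) {
          for (long written = 0; written < size; written += block.length) {
            out.write(block);
          }
        }
        long srcLen = fs.getFileStatus(src).getLen();

        // Rename on S3A is copy-then-delete.
        if (!fs.rename(src, dst)) {
          throw new AssertionError("rename returned false");
        }

        // (1) The copy really worked: full length at dest, source gone.
        FileStatus dstStatus = fs.getFileStatus(dst);
        if (dstStatus.getLen() != srcLen) {
          throw new AssertionError("length mismatch after rename");
        }
        if (fs.exists(src)) {
          throw new AssertionError("source still present after rename");
        }
        // (2) Metadata checks would go here, via a direct S3 client.
      }
    }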


