[ https://issues.apache.org/jira/browse/NIFI-1107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15125348#comment-15125348 ]

ASF GitHub Bot commented on NIFI-1107:
--------------------------------------

Github user trkurc commented on a diff in the pull request:

    https://github.com/apache/nifi/pull/192#discussion_r51361382
  
    --- Diff: nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/PutS3Object.java ---
    @@ -89,9 +134,51 @@
             .defaultValue(StorageClass.Standard.name())
             .build();
     
    +    public static final PropertyDescriptor MULTIPART_THRESHOLD = new PropertyDescriptor.Builder()
    +            .name("Multipart Threshold")
    +            .description("Specifies the file size threshold for switching from the PutS3Object API to the " +
    +                    "PutS3MultipartUpload API.  Flow files bigger than this limit will be sent using the stateful " +
    +                    "multipart process.\n" +
    +                    "The valid range is 50MB to 5GB.")
    +            .required(true)
    +            .defaultValue("5 GB")
    +            .addValidator(StandardValidators.createDataSizeBoundsValidator(MIN_S3_PART_SIZE, MAX_S3_PUTOBJECT_SIZE))
    +            .build();
    +
    +    public static final PropertyDescriptor MULTIPART_PART_SIZE = new PropertyDescriptor.Builder()
    +            .name("Multipart Part Size")
    +            .description("Specifies the part size for use when the PutS3Multipart Upload API is used.\n" +
    +                    "Flow files will be broken into chunks of this size for the upload process, but the last part " +
    +                    "sent can be smaller since it is not padded.\n" +
    +                    "The valid range is 50MB to 5GB.")
    +            .required(true)
    +            .defaultValue("5 GB")
    +            .addValidator(StandardValidators.createDataSizeBoundsValidator(MIN_S3_PART_SIZE, MAX_S3_PUTOBJECT_SIZE))
    +            .build();
    +
    +    public static final PropertyDescriptor MULTIPART_S3_AGEOFF_INTERVAL = new PropertyDescriptor.Builder()
    +            .name("Multipart Upload AgeOff Interval")
    +            .description("Specifies the interval at which existing multipart uploads in AWS S3 will be evaluated " +
    +                    "for ageoff.  Calls to onTrigger() will initiate the ageoff evaluation if this interval has been " +
    --- End diff --
    
    I think that this should be "When processor is triggered" rather than "calls to onTrigger()" to prevent too much java'ism leaking out.
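    
    For context on the part-size semantics being discussed (every part is Part Size bytes except the last, which may be smaller since it is not padded), here is a minimal, self-contained sketch. The class and method names are hypothetical illustrations, not code from the PR:
    
    ```java
    import java.util.ArrayList;
    import java.util.List;

    public class PartSizing {
        // Hypothetical helper: computes the byte length of each multipart chunk
        // for an object of objectSize bytes split into partSize-byte parts.
        // Every part is partSize bytes except possibly the last one.
        static List<Long> partLengths(long objectSize, long partSize) {
            List<Long> parts = new ArrayList<>();
            long remaining = objectSize;
            while (remaining > 0) {
                long len = Math.min(partSize, remaining);
                parts.add(len);
                remaining -= len;
            }
            return parts;
        }

        public static void main(String[] args) {
            // A 12 GB object with 5 GB parts yields parts of 5 GB, 5 GB, and 2 GB.
            System.out.println(partLengths(12L << 30, 5L << 30));
        }
    }
    ```
    
    This also illustrates why the last part needs no special handling on the sender side: it simply carries whatever bytes remain.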


> Create new PutS3ObjectMultipart processor
> -----------------------------------------
>
>                 Key: NIFI-1107
>                 URL: https://issues.apache.org/jira/browse/NIFI-1107
>             Project: Apache NiFi
>          Issue Type: New Feature
>          Components: Extensions
>            Reporter: Joe Skora
>            Assignee: Joe Skora
>              Labels: s3
>             Fix For: 0.5.0
>
>
> A new `PutS3ObjectMultipart` processor using the AWS S3 API to upload files 
> larger than those supported by `PutS3Object`, which has a [5GB 
> limit|http://docs.aws.amazon.com/AmazonS3/latest/dev/UploadingObjects.html].
> To support S3-compatible endpoints, this will also add an `Endpoint Override 
> URL` property to `AbstractAWSProcessor` to set the service 
> [endpoint|http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/AmazonWebServiceClient.html#setEndpoint(java.lang.String)], 
> overriding the endpoint URL normally selected based on the Amazon 
> region.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
