This is an automated email from the ASF dual-hosted git repository.
acosentino pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/camel.git
The following commit(s) were added to refs/heads/main by this push:
new dd4552bddda Camel-AWS2-S3: Add more examples in docs (#19296)
dd4552bddda is described below
commit dd4552bddda22a3c7c09db3cda33695e90b16e42
Author: Andrea Cosentino <[email protected]>
AuthorDate: Tue Sep 23 15:36:10 2025 +0200
Camel-AWS2-S3: Add more examples in docs (#19296)
Signed-off-by: Andrea Cosentino <[email protected]>
---
.../src/main/docs/aws2-s3-component.adoc | 78 ++++++++++++++++++++++
1 file changed, 78 insertions(+)
diff --git
a/components/camel-aws/camel-aws2-s3/src/main/docs/aws2-s3-component.adoc
b/components/camel-aws/camel-aws2-s3/src/main/docs/aws2-s3-component.adoc
index f11e5e6fdd2..8c95b466030 100644
--- a/components/camel-aws/camel-aws2-s3/src/main/docs/aws2-s3-component.adoc
+++ b/components/camel-aws/camel-aws2-s3/src/main/docs/aws2-s3-component.adoc
@@ -307,6 +307,34 @@ Parameters (`accessKey`, `secretKey` and `region`) are mandatory for this operation
NOTE: If checksum validations are enabled, the URL will no longer be browser compatible because it adds a signed header that must be included in the HTTP request.
+- HeadBucket: this operation checks if a bucket exists and you have permission to access it
+
+[source,java]
+--------------------------------------------------------------------------------
+ from("direct:start")
+     .to("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&operation=headBucket")
+ .to("mock:result");
+--------------------------------------------------------------------------------
+
+This operation will check if the bucket _mycamelbucket_ exists and is accessible.
+
+- HeadObject: this operation retrieves metadata from an object without returning the object itself
+
+[source,java]
+--------------------------------------------------------------------------------
+ from("direct:start").process(new Processor() {
+     @Override
+     public void process(Exchange exchange) throws Exception {
+         exchange.getIn().setHeader(AWS2S3Constants.KEY, "camelKey");
+     }
+ })
+     .to("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&operation=headObject")
+ .to("mock:result");
+--------------------------------------------------------------------------------
+
+This operation will return metadata about the object _camelKey_ in the bucket _mycamelbucket_.
+
=== AWS S3 Producer minimum permissions
To make the producer work, you need at least the PutObject and ListBuckets permissions. The following policy is sufficient:
@@ -488,6 +516,56 @@ In this case, the objects consumed will be moved to _myothercamelbucket_ bucket
So if the file name is test, in the _myothercamelbucket_ you should see a file called pre-test-suff.
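Assuming the moved key is composed by simple concatenation of the destination prefix, the original key, and the destination suffix, the naming above can be sketched in plain Java (an illustration only, not the component's own code; `movedKey` is a hypothetical helper):

```java
public class MoveKeyExample {
    // Sketch: destinationBucketPrefix + original key + destinationBucketSuffix
    static String movedKey(String prefix, String key, String suffix) {
        return prefix + key + suffix;
    }

    public static void main(String[] args) {
        // The doc's example: prefix "pre-", key "test", suffix "-suff"
        System.out.println(movedKey("pre-", "test", "-suff"));
    }
}
```

This matches the name shown above: a consumed object named test appears as pre-test-suff in the destination bucket.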
+=== Additional Consumer Examples
+
+==== Consumer with prefix filtering
+
+You can configure the consumer to only process objects with a specific prefix:
+
+[source,java]
+--------------------------------------------------------------------------------
+ from("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&prefix=processed/&delay=30000")
+ .to("mock:result");
+--------------------------------------------------------------------------------
+
+This will only consume objects whose keys start with the "processed/" prefix from the _mycamelbucket_ bucket, with a 30-second polling delay.
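The effect of prefix filtering can be illustrated with plain Java (a hedged sketch of server-side key filtering, not the component's code; the key names are made up):

```java
import java.util.List;
import java.util.stream.Collectors;

public class PrefixFilterExample {
    public static void main(String[] args) {
        // Hypothetical bucket listing; only keys under "processed/" are consumed
        List<String> keys = List.of("processed/a.csv", "incoming/b.csv", "processed/c.csv");
        List<String> matched = keys.stream()
                .filter(k -> k.startsWith("processed/"))
                .collect(Collectors.toList());
        System.out.println(matched);
    }
}
```

Only the two keys under `processed/` would reach the route; `incoming/b.csv` is never listed to the consumer.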
+
+==== Consumer with custom polling and batch settings
+
+Configure custom polling intervals and batch sizes:
+
+[source,java]
+--------------------------------------------------------------------------------
+ from("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&delay=60000&maxMessagesPerPoll=5&includeBody=false")
+ .to("mock:result");
+--------------------------------------------------------------------------------
+
+This consumer polls every 60 seconds, processes up to 5 objects per poll, and does not include the object body in the message (only metadata).
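The batching effect of `maxMessagesPerPoll=5` can be sketched in plain Java (an illustration of chunking a listing into polls, not the component's implementation; `polls` and the key names are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchPollExample {
    // Sketch: split the pending keys into chunks of at most `max` per poll
    static List<List<String>> polls(List<String> keys, int max) {
        List<List<String>> out = new ArrayList<>();
        for (int i = 0; i < keys.size(); i += max) {
            out.add(keys.subList(i, Math.min(i + max, keys.size())));
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> keys = new ArrayList<>();
        for (int i = 1; i <= 12; i++) keys.add("obj-" + i);
        List<List<String>> p = polls(keys, 5);
        System.out.println(p.size());   // number of polls needed for 12 objects
        System.out.println(p.get(2));   // the last, partial poll
    }
}
```

Twelve pending objects with `maxMessagesPerPoll=5` take three polls: two full batches of five and a final batch of two.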
+
+==== Consumer with file filtering and no deletion
+
+Configure the consumer not to delete files after reading and to match a specific file pattern:
+
+[source,java]
+--------------------------------------------------------------------------------
+ from("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&deleteAfterRead=false&fileName=*.pdf")
+     .to("mock:result");
+--------------------------------------------------------------------------------
+
+This consumer will read PDF files but won't delete them after processing.
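The `*.pdf` pattern above is glob-style matching; the selection it implies can be illustrated with the JDK's own glob matcher (an illustration of the pattern semantics only, not how the component filters internally):

```java
import java.nio.file.FileSystems;
import java.nio.file.PathMatcher;
import java.nio.file.Paths;

public class PdfFilterExample {
    public static void main(String[] args) {
        // "*.pdf" matches any single name ending in .pdf
        PathMatcher matcher = FileSystems.getDefault().getPathMatcher("glob:*.pdf");
        System.out.println(matcher.matches(Paths.get("report.pdf")));
        System.out.println(matcher.matches(Paths.get("notes.txt")));
    }
}
```

An object named report.pdf matches and is consumed; notes.txt does not.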
+
+==== Consumer with done file pattern
+
+Use a done file pattern to ensure files are completely uploaded before processing:
+
+[source,java]
+--------------------------------------------------------------------------------
+ from("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&doneFileName=*.done")
+     .to("mock:result");
+--------------------------------------------------------------------------------
+
+This consumer will only process files when a corresponding .done file exists in the bucket.
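The gating behaviour can be sketched in plain Java, assuming the common marker convention of the data key plus a `.done` extension (an assumption for illustration; the actual marker name is governed by the `doneFileName` pattern, and `ready` is a hypothetical helper):

```java
import java.util.Set;

public class DoneFileExample {
    // Sketch: a data object is only "ready" once its marker exists in the bucket
    static boolean ready(String key, Set<String> bucketKeys) {
        return bucketKeys.contains(key + ".done");   // assumed <key>.done convention
    }

    public static void main(String[] args) {
        Set<String> bucketKeys = Set.of("data.csv", "data.csv.done", "pending.csv");
        System.out.println(ready("data.csv", bucketKeys));
        System.out.println(ready("pending.csv", bucketKeys));
    }
}
```

data.csv would be consumed because data.csv.done is present; pending.csv is skipped until its marker appears.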
+
=== Using the customer key as encryption
We also introduced customer key support (an alternative to using KMS). The following code shows an example.