anna842 opened a new issue #1196:
URL: https://github.com/apache/camel-kafka-connector/issues/1196
Hi,
I would like to use the AWS2 S3 sink and source connectors to archive data that is not currently needed to S3 and read it back later if needed.
Since Kafka topics can be big and S3 read/write operations cost money, I am
using an aggregation strategy when writing data to S3:
```
{
  "connector.class": "org.apache.camel.kafkaconnector.aws2s3.CamelAws2s3SinkConnector",
  "topics": "some_topic",
  "key.converter": "org.apache.kafka.connect.storage.StringConverter",
  "camel.sink.path.bucketNameOrArn": "my-bucket",
  "camel.sink.endpoint.region": "eu-west-1",
  "camel.sink.endpoint.keyName": "${date:now:yyyy-MM-dd}/${date:now:yyyy-MM-dd-HH:mm:ss:SSS}.json",
  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "camel.sink.marshal": "org.apache.camel.component.jackson.JacksonDataFormat",
  "value.converter.schemas.enable": "false",
  "camel.beans.aggregate": "#class:org.apache.camel.kafkaconnector.aggregator.StringAggregator",
  "camel.aggregation.size": "500",
  "camel.aggregation.timeout": "2000"
}
```
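To illustrate what I mean by aggregation, this is roughly how I understand the records end up batched into S3 objects (a sketch only; the newline delimiter and the batching logic are my assumptions about `StringAggregator`, not taken from its source code):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: mimics how the sink-side aggregator batches records
// before each batch becomes a single S3 object. The newline delimiter is
// an assumption about StringAggregator, not confirmed from its source.
public class AggregationSketch {
    public static List<String> batch(List<String> records, int size, String delimiter) {
        List<String> objects = new ArrayList<>();
        for (int i = 0; i < records.size(); i += size) {
            objects.add(String.join(delimiter,
                    records.subList(i, Math.min(i + size, records.size()))));
        }
        return objects;
    }

    public static void main(String[] args) {
        // Three records with camel.aggregation.size = 2 -> two S3 objects
        List<String> out = batch(List.of("{\"a\":1}", "{\"a\":2}", "{\"a\":3}"), 2, "\n");
        System.out.println(out.size()); // prints 2
    }
}
```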
But when I read the files back into Kafka with the S3 source connector, every
file becomes one large message.
My current source connector configuration:
```
{
  "connector.class": "org.apache.camel.kafkaconnector.aws2s3.CamelAws2s3SourceConnector",
  "topics": "some_topic_from_s3",
  "tasks.max": "1",
  "key.converter": "org.apache.kafka.connect.storage.StringConverter",
  "camel.source.path.bucketNameOrArn": "my-bucket",
  "camel.source.endpoint.region": "eu-west-1",
  "camel.source.endpoint.prefix": "2021-06-07",
  "value.converter": "org.apache.kafka.connect.converters.ByteArrayConverter"
}
```
Is it possible to split the aggregated messages back into the original records
on the source side?
If not, is this feature planned? What workaround would you recommend in the
meantime?
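The only workaround I have come up with so far is a small consumer/producer in between that splits each aggregated object before re-producing individual records (as far as I know a Connect SMT cannot do this, since SMTs map one record to at most one record). A rough sketch of the splitting step, assuming the aggregator joined records with newlines (the `AggregateSplitter` class and the delimiter are my assumptions):

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Hypothetical workaround: turns one aggregated S3 object (as read by the
// source connector with ByteArrayConverter) back into individual records,
// assuming the sink-side aggregator joined them with '\n'.
public class AggregateSplitter {
    public static List<String> split(byte[] s3ObjectValue) {
        List<String> records = new ArrayList<>();
        for (String line : new String(s3ObjectValue, StandardCharsets.UTF_8).split("\n")) {
            if (!line.isBlank()) {
                records.add(line); // each element would be re-produced as its own record
            }
        }
        return records;
    }

    public static void main(String[] args) {
        byte[] value = "{\"a\":1}\n{\"a\":2}\n".getBytes(StandardCharsets.UTF_8);
        System.out.println(split(value).size()); // prints 2
    }
}
```

In a streaming setup the same split could be done with Kafka Streams' `flatMapValues`, but that still means an extra application between the source connector's output topic and the final topic.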
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]