Hi sir,

Which property do we need to set in the application.properties file? So far we have only set up authentication to S3 for pushing files to the bucket. When a change happens in the database, a new event is generated; the event creates a record, which results in a new Parquet file that is pushed to S3. So I think we only provided credentials to S3 for authentication, and I don't see any other S3 configuration that we did.
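
From what I can tell from the debezium-server-iceberg examples, the S3 side only needs the S3FileIO properties; here is a minimal sketch of just the S3-related settings (the endpoint, keys, and bucket below are placeholders, not our real values):

debezium.sink.iceberg.io-impl=org.apache.iceberg.aws.s3.S3FileIO
debezium.sink.iceberg.s3.endpoint=https://s3.us-east-2.amazonaws.com
debezium.sink.iceberg.s3.path-style-access=true
debezium.sink.iceberg.s3.access-key-id=<access-key>
debezium.sink.iceberg.s3.secret-access-key=<secret-key>
debezium.sink.iceberg.warehouse=s3a://<bucket>/iceberg_warehouse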

Please check our application.properties file, which I have included below.
debezium.source.connector.class=io.debezium.connector.postgresql.PostgresConnector

debezium.source.offset.storage.file.filename=data/offsets.dat

debezium.source.offset.flush.interval.ms=120000

debezium.source.database.hostname=localhost

debezium.source.database.port=5432

debezium.source.database.user=postgres

debezium.source.database.password=root@123

debezium.source.database.dbname=template1

debezium.source.topic.prefix=tutorial

debezium.source.schema.include.list=public

debezium.source.include.schema.changes=true

#debezium.source.table.whitelist=public

#schema.include.list=public

ENABLE_DEBEZIUM_SCRIPTING=true

# iceberg sink

debezium.sink.type=iceberg

# Iceberg sink config

debezium.sink.iceberg.table-prefix=debeziumcdc_

debezium.sink.iceberg.upsert=true

debezium.sink.iceberg.upsert-keep-deletes=false
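
# (my understanding) upsert mode deduplicates events by key; with
# upsert-keep-deletes=false, a delete event removes the row from the
# destination table instead of keeping it as a marked-deleted row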

debezium.sink.iceberg.write.format.default=parquet

debezium.sink.iceberg.catalog-name=iceberg

# Hadoop catalog; you can use other catalogs supported by Iceberg as well
# enable event schemas - mandatory

debezium.format.value.schemas.enable=true

debezium.format.key.schemas.enable=true

debezium.format.value=json

debezium.format.key=json

# do event flattening. unwrap message!

debezium.transforms=unwrap

debezium.transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState

debezium.transforms.unwrap.add.fields=op,table,source.ts_ms,db

debezium.transforms.unwrap.delete.handling.mode=rewrite

debezium.transforms.unwrap.drop.tombstones=false

############ SET LOG LEVELS

quarkus.log.level=INFO

quarkus.log.console.json=false

# hadoop, parquet

quarkus.log.category."org.apache.hadoop".level=WARN

quarkus.log.category."org.apache.parquet".level=WARN

# Ignore messages below warning level from Jetty, because it's a bit verbose

quarkus.log.category."org.eclipse.jetty".level=WARN

debezium.source.offset.storage=io.debezium.server.iceberg.offset.IcebergOffsetBackingStore

debezium.source.offset.storage.iceberg.table-name=debezium_offset_storage_custom_table

# see https://debezium.io/documentation/reference/stable/development/engine.html#database-history-properties

debezium.source.schema.history.internal=io.debezium.server.iceberg.history.IcebergSchemaHistory

debezium.source.schema.history.internal.iceberg.table-name=debezium_database_history_storage_test

# enable event schemas

debezium.format.value.schemas.enable=true

debezium.format.value=json

# complex nested data types are not supported, do event flattening. unwrap message!

debezium.transforms=unwrap

debezium.transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState

debezium.transforms.unwrap.add.fields=op,table,source.ts_ms,db

debezium.transforms.unwrap.delete.handling.mode=rewrite

debezium.transforms.unwrap.drop.tombstones=true

######################

debezium.sink.batch.batch-size-wait=MaxBatchSizeWait

debezium.sink.batch.metrics.snapshot-mbean=debezium.postgres:type=connector-metrics,context=snapshot,server=testc

debezium.sink.batch.metrics.streaming-mbean=debezium.postgres:type=connector-metrics,context=streaming,server=testc

debezium.source.connector.class=io.debezium.connector.postgresql.PostgresConnector

debezium.sink.batch.batch-size-wait.max-wait-ms=240000

debezium.sink.batch.batch-size-wait.wait-interval-ms=120000
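
# (as I understand it) MaxBatchSizeWait reads the connector metrics from the
# MBeans above and delays each batch until the queue reaches the max batch
# size or max-wait-ms elapses, whichever comes first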

# Use S3FileIO

debezium.sink.iceberg.io-impl=org.apache.iceberg.aws.s3.S3FileIO

debezium.sink.iceberg.s3.endpoint=https://****.s3.us-east-2.amazonaws.com

debezium.sink.iceberg.s3.path-style-access=true

debezium.sink.iceberg.s3.access-key-id=****

debezium.sink.iceberg.s3.secret-access-key=****
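
# (my assumption) if the two static keys above are removed, S3FileIO should
# fall back to the standard AWS credential chain, e.g. the AWS_ACCESS_KEY_ID
# and AWS_SECRET_ACCESS_KEY environment variables or an instance profile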

# alternative: S3 config without Hadoop catalog, using InMemoryCatalog and S3FileIO with MinIO as S3

#debezium.sink.iceberg.s3.endpoint=https://*.s3.us-east-2.amazonaws.com/

#debezium.sink.iceberg.s3.path-style-access=true

#debezium.sink.iceberg.s3.access-key-id=****

#debezium.sink.iceberg.s3.secret-access-key=***

debezium.sink.iceberg.warehouse=s3a:///*/

debezium.sink.iceberg.catalog-impl=org.apache.iceberg.aws.glue.GlueCatalog

AWS_REGION=us-east-2
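
If it helps, this is how I understand the pieces above fit together when Glue is the catalog (the bucket name is a placeholder; the property names are the same ones already used in our file):

# catalog metadata in AWS Glue, data files written to S3 via S3FileIO
debezium.sink.iceberg.catalog-impl=org.apache.iceberg.aws.glue.GlueCatalog
debezium.sink.iceberg.io-impl=org.apache.iceberg.aws.s3.S3FileIO
debezium.sink.iceberg.warehouse=s3a://<bucket>/iceberg_warehouse
# plus AWS_REGION set as an environment variable for the Glue and S3 clients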
