afuyo commented on issue #7596:
URL: https://github.com/apache/hudi/issues/7596#issuecomment-1371344959
Hi @yihua thank you for your prompt answer. Running the exact same job on the
Docker Demo is not easy because the Docker Demo has no schema registry. But
I ran the demo and it works as expected. I also ran the exact same docker-demo
job in my local, non-Docker setup and it also works. However,
JsonKafkaSource will never be used in my environment; it needs to be Avro,
either AvroKafkaSource or another Avro-based source class.
Is there any info on how this should be configured? Would really
appreciate any help with this.
The job below works on my local cluster:
```
spark-submit \
  --class org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer \
  /opt/spark/hudi-utilities-bundle_2.12-0.11.1.jar \
  --table-type COPY_ON_WRITE \
  --source-class org.apache.hudi.utilities.sources.JsonKafkaSource \
  --source-ordering-field ts \
  --target-base-path /opt/spark/stock_ticks_cow \
  --target-table stock_ticks_cow \
  --props /opt/spark/kafka-source-docker.properties \
  --schemaprovider-class org.apache.hudi.utilities.schema.FilebasedSchemaProvider
```
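For comparison, here is a sketch of the Avro variant I would expect to need, assuming a Confluent-style schema registry. The topic name, bootstrap servers, registry URL, and properties path are placeholders, not values from my setup:

```
# Hypothetical kafka-avro-source.properties (paths/URLs are placeholders):
#   hoodie.deltastreamer.source.kafka.topic=<your-topic>
#   bootstrap.servers=<broker-host>:9092
#   auto.offset.reset=earliest
#   schema.registry.url=http://<registry-host>:8081
#   hoodie.deltastreamer.schemaprovider.registry.url=http://<registry-host>:8081/subjects/<your-topic>-value/versions/latest

spark-submit \
  --class org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer \
  /opt/spark/hudi-utilities-bundle_2.12-0.11.1.jar \
  --table-type COPY_ON_WRITE \
  --source-class org.apache.hudi.utilities.sources.AvroKafkaSource \
  --source-ordering-field ts \
  --target-base-path /opt/spark/stock_ticks_cow \
  --target-table stock_ticks_cow \
  --props /opt/spark/kafka-avro-source.properties \
  --schemaprovider-class org.apache.hudi.utilities.schema.SchemaRegistryProvider
```

The key differences from the JSON job are swapping JsonKafkaSource for AvroKafkaSource and FilebasedSchemaProvider for SchemaRegistryProvider, plus pointing `hoodie.deltastreamer.schemaprovider.registry.url` at the registry subject. Is this the intended configuration, or are additional deserializer settings required?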
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]