This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a commit to branch oc-aws2-sqs
in repository https://gitbox.apache.org/repos/asf/camel-kafka-connector-examples.git
commit 9333c0cad86aeeebbc7a5f002f9b4e2028af6c5f
Author: Andrea Cosentino <[email protected]>
AuthorDate: Wed Sep 16 11:36:56 2020 +0200

    AWS2-SQS Source example: Adding Openshift instructions
---
 aws2-sqs/aws2-sqs-source/README.adoc | 135 +++++++++++++++++++++++++++++++++--
 1 file changed, 130 insertions(+), 5 deletions(-)

diff --git a/aws2-sqs/aws2-sqs-source/README.adoc b/aws2-sqs/aws2-sqs-source/README.adoc
index c2512fa..37d7c05 100644
--- a/aws2-sqs/aws2-sqs-source/README.adoc
+++ b/aws2-sqs/aws2-sqs-source/README.adoc
@@ -1,14 +1,14 @@
 # Camel-Kafka-connector AWS2 SQS Source
 
-## Introduction
-
 This is an example for Camel-Kafka-connector AWS2-SQS
 
-## What is needed
+## Standalone
+
+### What is needed
 
 - An AWS SQS queue
 
-## Running Kafka
+### Running Kafka
 
 ```
 $KAFKA_HOME/bin/zookeeper-server-start.sh config/zookeeper.properties
@@ -16,7 +16,7 @@ $KAFKA_HOME/bin/kafka-server-start.sh config/server.properties
 $KAFKA_HOME/bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic mytopic
 ```
 
-## Setting up the needed bits and running the example
+### Setting up the needed bits and running the example
 
 You'll need to setup the plugin.path property in your kafka
 
@@ -71,3 +71,128 @@ SQS to Kafka through Camel
 SQS to Kafka through Camel
 ```
 
+## Openshift
+
+### What is needed
+
+- An AWS SQS queue
+- An Openshift instance
+
+### Running Kafka using Strimzi Operator
+
+First we install the Strimzi operator and use it to deploy the Kafka broker and Kafka Connect into our OpenShift project.
+We need to create security objects as part of the installation, so it is necessary to switch to an admin user.
+If you use Minishift, you can do it with the following command:
+
+[source,bash,options="nowrap"]
+----
+oc login -u system:admin
+----
+
+We will use the OpenShift project `myproject`.
+If it doesn't exist yet, you can create it using the following command:
+
+[source,bash,options="nowrap"]
+----
+oc new-project myproject
+----
+
+If the project already exists, you can switch to it with:
+
+[source,bash,options="nowrap"]
+----
+oc project myproject
+----
+
+We can now install the Strimzi operator into this project:
+
+[source,bash,options="nowrap",subs="attributes"]
+----
+oc apply -f https://github.com/strimzi/strimzi-kafka-operator/releases/download/0.19.0/strimzi-cluster-operator-0.19.0.yaml
+----
+
+Next we will deploy a Kafka broker cluster and a Kafka Connect cluster, and then build a Kafka Connect image with the Camel Kafka connectors installed:
+
+[source,bash,options="nowrap",subs="attributes"]
+----
+# Deploy a single node Kafka broker
+oc apply -f https://github.com/strimzi/strimzi-kafka-operator/raw/0.19.0/examples/kafka/kafka-persistent-single.yaml
+
+# Deploy a single instance of Kafka Connect with no plug-in installed
+oc apply -f https://github.com/strimzi/strimzi-kafka-operator/raw/0.19.0/examples/connect/kafka-connect-s2i-single-node-kafka.yaml
+----
+
+Optionally, enable the possibility to instantiate Kafka connectors through a specific custom resource:
+
+[source,bash,options="nowrap"]
+----
+oc annotate kafkaconnects2is my-connect-cluster strimzi.io/use-connector-resources=true
+----
+
+### Add Camel Kafka connector binaries
+
+Strimzi uses `Source2Image` (S2I) builds to allow users to add their own connectors to the existing Strimzi Docker images.
+We now need to build the connectors and add them to the image.
+If you have built the whole project (`mvn clean package`), decompress the connectors you need into a folder (e.g. `my-connectors/`)
+so that each one is in its own subfolder
+(alternatively, you can download the latest officially released and packaged connectors from Maven):
+
+[source,bash,options="nowrap"]
+----
+oc start-build my-connect-cluster-connect --from-dir=./my-connectors/ --follow
+----
+
+We should now wait for the rollout of the new image to finish and for the replica set with the new connector to become ready.
+Once it is done, we can check that the connectors are available in our Kafka Connect cluster.
+Strimzi runs Kafka Connect in distributed mode.
+
+### Create connector instance
+
+Now we can create an instance of the AWS2-SQS source connector.
+
+If you have enabled `use-connector-resources`, you can create the connector instance by creating a specific custom resource:
+
+[source,bash,options="nowrap"]
+----
+oc apply -f - << EOF
+apiVersion: kafka.strimzi.io/v1alpha1
+kind: KafkaConnector
+metadata:
+  name: sqs-source-connector
+  namespace: myproject
+  labels:
+    strimzi.io/cluster: my-connect-cluster
+spec:
+  class: org.apache.camel.kafkaconnector.aws2sqs.CamelAws2sqsSourceConnector
+  tasksMax: 1
+  config:
+    key.converter: org.apache.kafka.connect.storage.StringConverter
+    value.converter: org.apache.kafka.connect.storage.StringConverter
+    topics: sqs-topic
+    camel.source.path.queueNameOrArn: camel-connector-test
+    camel.source.maxPollDuration: 10000
+    camel.component.aws2-sqs.accessKey: xxxx
+    camel.component.aws2-sqs.secretKey: yyyy
+    camel.component.aws2-sqs.region: region
+EOF
+----
+
+You can check the status of the connector using:
+
+[source,bash,options="nowrap"]
+----
+oc exec -i -c kafka my-cluster-kafka-0 -- curl -s http://my-connect-cluster-connect-api:8083/connectors/sqs-source-connector/status
+----
+
+Now connect to the AWS Console and send a message to the `camel-connector-test` queue.
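As an alternative to clicking through the AWS Console, a test message can be sent from the command line with the AWS CLI. This is only a sketch: it assumes the AWS CLI is installed and configured with the same credentials and region as the connector, and it uses the queue name `camel-connector-test` from the connector configuration above.

```shell
# Look up the queue URL for the queue name used in the connector config
QUEUE_URL=$(aws sqs get-queue-url --queue-name camel-connector-test --query QueueUrl --output text)

# Send a test message; it should show up on the sqs-topic Kafka topic
aws sqs send-message --queue-url "$QUEUE_URL" --message-body "SQS to Kafka through Camel"
```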
+
+### Check received messages
+
+You can also run the Kafka console consumer to see the messages received from the topic:
+
+[source,bash,options="nowrap"]
+----
+oc exec -i -c kafka my-cluster-kafka-0 -- bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic sqs-topic --from-beginning
+SQS to Kafka through Camel
+SQS to Kafka through Camel
+----
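If the connector was created as a `KafkaConnector` custom resource (i.e. with `use-connector-resources` enabled), its state can also be inspected through the resource status instead of the Connect REST API. A sketch, assuming the resource name used above:

```shell
# The status stanza reports the connector and task state as seen by Kafka Connect
oc get kafkaconnector sqs-source-connector -o yaml
```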
