TheNeuralBit commented on a change in pull request #13112:
URL: https://github.com/apache/beam/pull/13112#discussion_r538655465



##########
File path: examples/kafka-to-pubsub/README.md
##########
@@ -0,0 +1,163 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+-->
+
+# Apache Beam pipeline example to ingest data from Apache Kafka to Google Cloud Pub/Sub
+
+This directory contains an [Apache Beam](https://beam.apache.org/) pipeline
+example that reads data from one or more topics in
+[Apache Kafka](https://kafka.apache.org/) and writes it into a single topic
+in [Google Cloud Pub/Sub](https://cloud.google.com/pubsub).
+
+Supported data formats:
+- Serializable plaintext formats, such as JSON
+- [PubSubMessage](https://cloud.google.com/pubsub/docs/reference/rest/v1/PubsubMessage).
+
+Supported input source configurations:
+- Single or multiple Apache Kafka bootstrap servers
+- Apache Kafka SASL/SCRAM authentication over plaintext or SSL connection
+- Secrets vault service [HashiCorp Vault](https://www.vaultproject.io/).
+
+Supported destination configuration:
+- Single Google Cloud Pub/Sub topic.
+
+In a simple scenario, the example creates an Apache Beam pipeline that reads messages from a source Kafka server and topic and streams the text messages into the specified Pub/Sub destination topic. Other scenarios may need Kafka SASL/SCRAM authentication, which can be performed over a plaintext or SSL-encrypted connection. The example supports using a single Kafka user account to authenticate against the provided source Kafka servers and topics. To support SASL authentication over SSL, the example needs an SSL certificate location and access to a secrets vault service holding the Kafka username and password; HashiCorp Vault is currently supported.
+
+## Requirements
+
+- Java 8
+- Kafka Bootstrap Server(s) up and running
+- Existing source Kafka topic(s)
+- An existing Pub/Sub destination output topic
+- (Optional) An existing HashiCorp Vault
+- (Optional) A configured secure SSL connection for Kafka
+
+## Getting Started
+
+This section describes what is needed to get the example up and running.
+- Assembling the Uber-JAR
+- Local execution
+- Google Dataflow Template
+  - Set up the environment
+  - Creating the Dataflow Flex Template
+  - Create a Dataflow job to ingest data using the template
+- Avro format transferring.
+- E2E tests (TBD)
+
+## Assembling the Uber-JAR
+
+To run this example, the Java project should be built into
+an Uber JAR file.
+
+Navigate to the Beam folder:
+
+```
+cd /path/to/beam
+```
+
+To create the Uber JAR with Gradle, the [Shadow plugin](https://github.com/johnrengelman/shadow)
+is used. It provides the `shadowJar` task that builds the Uber JAR:
+
+```
+./gradlew -p examples/kafka-to-pubsub clean shadowJar
+```
+
+ℹ️ An **Uber JAR** - also known as **fat JAR** - is a single JAR file that contains
+both the target package *and* all of its dependencies.
+
+The result of the `shadowJar` task is a `.jar` file that is generated
+under the `build/libs/` folder in the kafka-to-pubsub directory.
+
+## Local execution

Review comment:
       Let's call this just "Running the pipeline", since it also describes how 
to run on other runners, not just locally.
   ```suggestion
   ## Running the pipeline
   ```
   
   To be clear, users would normally use this approach to run on Dataflow just 
like any other runner, but the way this is written it looks like you have to 
use the approach in the next section. In fact the next section is for running 
the pipeline as a Dataflow template. For that reason please rename the "Google 
Dataflow Execution" section to "Running as a Dataflow Template". 
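   
   For reference, with the Uber JAR built, running on Dataflow (or any runner) with this approach would look roughly like the sketch below - not tested, and the jar name, project, and parameter values are placeholders (Dataflow would also need its usual options such as `--region` and `--tempLocation`):
   ```
   # Hypothetical invocation - adjust the jar name and values for your environment
   java -cp build/libs/<kafka-to-pubsub-uber-jar>.jar \
     org.apache.beam.examples.KafkaToPubsub \
     --runner=DataflowRunner \
     --project=<your-project> \
     --bootstrapServers=broker_1:9091,broker_2:9092 \
     --inputTopics=topic1,topic2 \
     --outputTopic=projects/<your-project>/topics/your-topic-name
   ```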

##########
File path: examples/kafka-to-pubsub/src/main/java/org/apache/beam/examples/KafkaToPubsub.java
##########
@@ -0,0 +1,229 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.beam.examples;
+
+import static org.apache.beam.examples.kafka.consumer.Utils.configureKafka;
+import static org.apache.beam.examples.kafka.consumer.Utils.configureSsl;
+import static org.apache.beam.examples.kafka.consumer.Utils.getKafkaCredentialsFromVault;
+import static org.apache.beam.examples.kafka.consumer.Utils.isSslSpecified;
+import static org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkArgument;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import org.apache.beam.examples.avro.AvroDataClass;
+import org.apache.beam.examples.avro.AvroDataClassKafkaAvroDeserializer;
+import org.apache.beam.examples.options.KafkaToPubsubOptions;
+import org.apache.beam.examples.transforms.FormatTransform;
+import org.apache.beam.sdk.Pipeline;
+import org.apache.beam.sdk.PipelineResult;
+import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
+import org.apache.beam.sdk.options.PipelineOptionsFactory;
+import org.apache.beam.sdk.transforms.Values;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * The {@link KafkaToPubsub} pipeline is a streaming pipeline which ingests data in JSON format from
+ * Kafka and outputs the resulting records to PubSub. Input topics, the output topic, and bootstrap
+ * servers are specified by the user as template parameters. <br>
+ * Kafka may be configured with the SASL/SCRAM security mechanism; in this case, a Vault secret
+ * storage with credentials should be provided. The URL to the credentials and the Vault token are
+ * specified by the user as template parameters.
+ *
+ * <p><b>Pipeline Requirements</b>
+ *
+ * <ul>
+ *   <li>Kafka Bootstrap Server(s).
+ *   <li>Kafka Topic(s) exists.
+ *   <li>The PubSub output topic exists.
+ *   <li>(Optional) An existing HashiCorp Vault secret storage
+ * </ul>
+ *
+ * <p><b>Example Usage</b>
+ *
+ * <pre>
+ * # Set the pipeline vars
+ * PROJECT=id-of-my-project
+ * BUCKET_NAME=my-bucket
+ *
+ * # Set containerization vars
+ * IMAGE_NAME=my-image-name
+ * TARGET_GCR_IMAGE=gcr.io/${PROJECT}/${IMAGE_NAME}
+ * BASE_CONTAINER_IMAGE=my-base-container-image
+ * TEMPLATE_PATH="gs://${BUCKET_NAME}/templates/kafka-pubsub.json"
+ *
+ * # Create bucket in the cloud storage
+ * gsutil mb gs://${BUCKET_NAME}
+ *
+ * # Go to the beam folder
+ * cd /path/to/beam
+ *
+ * <b>FLEX TEMPLATE</b>
+ * # Assemble uber-jar
+ * ./gradlew -p templates/kafka-to-pubsub clean shadowJar
+ *
+ * # Go to the template folder
+ * cd /path/to/beam/templates/kafka-to-pubsub
+ *
+ * # Build the flex template
+ * gcloud dataflow flex-template build ${TEMPLATE_PATH} \

Review comment:
       Please remove the Dataflow template specific parts from this javadoc

##########
File path: examples/kafka-to-pubsub/src/main/java/org/apache/beam/examples/KafkaToPubsub.java
##########
@@ -0,0 +1,229 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.beam.examples;
+
+import static org.apache.beam.examples.kafka.consumer.Utils.configureKafka;
+import static org.apache.beam.examples.kafka.consumer.Utils.configureSsl;
+import static org.apache.beam.examples.kafka.consumer.Utils.getKafkaCredentialsFromVault;
+import static org.apache.beam.examples.kafka.consumer.Utils.isSslSpecified;
+import static org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkArgument;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import org.apache.beam.examples.avro.AvroDataClass;
+import org.apache.beam.examples.avro.AvroDataClassKafkaAvroDeserializer;
+import org.apache.beam.examples.options.KafkaToPubsubOptions;
+import org.apache.beam.examples.transforms.FormatTransform;
+import org.apache.beam.sdk.Pipeline;
+import org.apache.beam.sdk.PipelineResult;
+import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
+import org.apache.beam.sdk.options.PipelineOptionsFactory;
+import org.apache.beam.sdk.transforms.Values;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * The {@link KafkaToPubsub} pipeline is a streaming pipeline which ingests data in JSON format from
+ * Kafka and outputs the resulting records to PubSub. Input topics, the output topic, and bootstrap
+ * servers are specified by the user as template parameters. <br>
+ * Kafka may be configured with the SASL/SCRAM security mechanism; in this case, a Vault secret
+ * storage with credentials should be provided. The URL to the credentials and the Vault token are
+ * specified by the user as template parameters.
+ *
+ * <p><b>Pipeline Requirements</b>
+ *
+ * <ul>
+ *   <li>Kafka Bootstrap Server(s).
+ *   <li>Kafka Topic(s) exists.
+ *   <li>The PubSub output topic exists.
+ *   <li>(Optional) An existing HashiCorp Vault secret storage
+ * </ul>
+ *
+ * <p><b>Example Usage</b>
+ *
+ * <pre>
+ * # Set the pipeline vars
+ * PROJECT=id-of-my-project
+ * BUCKET_NAME=my-bucket
+ *
+ * # Set containerization vars
+ * IMAGE_NAME=my-image-name
+ * TARGET_GCR_IMAGE=gcr.io/${PROJECT}/${IMAGE_NAME}
+ * BASE_CONTAINER_IMAGE=my-base-container-image
+ * TEMPLATE_PATH="gs://${BUCKET_NAME}/templates/kafka-pubsub.json"
+ *
+ * # Create bucket in the cloud storage
+ * gsutil mb gs://${BUCKET_NAME}
+ *
+ * # Go to the beam folder
+ * cd /path/to/beam
+ *
+ * <b>FLEX TEMPLATE</b>
+ * # Assemble uber-jar
+ * ./gradlew -p templates/kafka-to-pubsub clean shadowJar
+ *
+ * # Go to the template folder
+ * cd /path/to/beam/templates/kafka-to-pubsub
+ *
+ * # Build the flex template
+ * gcloud dataflow flex-template build ${TEMPLATE_PATH} \
+ *       --image-gcr-path "${TARGET_GCR_IMAGE}" \
+ *       --sdk-language "JAVA" \
+ *       --flex-template-base-image ${BASE_CONTAINER_IMAGE} \
+ *       --metadata-file "src/main/resources/kafka_to_pubsub_metadata.json" \
+ *       --jar "build/libs/beam-templates-kafka-to-pubsub-<version>-all.jar" \
+ *       --env FLEX_TEMPLATE_JAVA_MAIN_CLASS="org.apache.beam.templates.KafkaToPubsub"
+ *
+ * # Execute template:
+ *    API_ROOT_URL="https://dataflow.googleapis.com";
+ *    TEMPLATES_LAUNCH_API="${API_ROOT_URL}/v1b3/projects/${PROJECT}/locations/${REGION}/flexTemplates:launch"
+ *    JOB_NAME="kafka-to-pubsub-`date +%Y%m%d-%H%M%S-%N`"
+ *
+ *    time curl -X POST -H "Content-Type: application/json" \
+ *            -H "Authorization: Bearer $(gcloud auth print-access-token)" \
+ *            -d '
+ *             {
+ *                 "launch_parameter": {
+ *                     "jobName": "'$JOB_NAME'",
+ *                     "containerSpecGcsPath": "'$TEMPLATE_PATH'",
+ *                     "parameters": {
+ *                         "bootstrapServers": "broker_1:9091, broker_2:9092",
+ *                         "inputTopics": "topic1, topic2",
+ *                         "outputTopic": "projects/'$PROJECT'/topics/your-topic-name",
+ *                         "secretStoreUrl": "http(s)://host:port/path/to/credentials",
+ *                         "vaultToken": "your-token"
+ *                     }
+ *                 }
+ *             }
+ *            '
+ *            "${TEMPLATES_LAUNCH_API}"
+ * </pre>
+ *
+ * <p><b>Example Avro usage</b>
+ *
+ * <pre>
+ * This template contains an example class to deserialize AVRO from Kafka and serialize it to AVRO in Pub/Sub.
+ *
+ * To use this example for your specific case, follow these steps:
+ * <ul>
+ * <li> Create your own class to describe the AVRO schema. Use {@link AvroDataClass} as an example and define the necessary fields.
+ * <li> Create your own Avro deserializer class. Use {@link AvroDataClassKafkaAvroDeserializer} as an example; rename it and use your own schema class for the type parameters.
+ * <li> Modify the {@link FormatTransform}. Pass your schema class and deserializer as the corresponding parameters.
+ * <li> Modify the write step in {@link KafkaToPubsub} by passing your schema class to the "writeAvrosToPubSub" step.
+ * </ul>
+ * </pre>
+ */
+public class KafkaToPubsub {
+
+  /* Logger for class.*/
+  private static final Logger LOG = LoggerFactory.getLogger(KafkaToPubsub.class);
+
+  /**
+   * Main entry point for pipeline execution.
+   *
+   * @param args Command line arguments to the pipeline.
+   */
+  public static void main(String[] args) {
+    KafkaToPubsubOptions options =
+        PipelineOptionsFactory.fromArgs(args).withValidation().as(KafkaToPubsubOptions.class);
+
+    run(options);
+  }
+
+  /**
+   * Runs a pipeline which reads messages from Kafka and writes them to Pub/Sub.
+   *
+   * @param options arguments to the pipeline
+   */
+  public static PipelineResult run(KafkaToPubsubOptions options) {
+    // Configure Kafka consumer properties
+    Map<String, Object> kafkaConfig = new HashMap<>();
+    Map<String, String> sslConfig = new HashMap<>();
+    if (options.getSecretStoreUrl() != null && options.getVaultToken() != null) {
+      Map<String, Map<String, String>> credentials =
+          getKafkaCredentialsFromVault(options.getSecretStoreUrl(), options.getVaultToken());
+      kafkaConfig = configureKafka(credentials.get(KafkaPubsubConstants.KAFKA_CREDENTIALS));
+    } else {
+      LOG.warn(
+          "No information to retrieve Kafka credentials was provided. "
+              + "Trying to initiate an unauthorized connection.");
+    }
+
+    if (isSslSpecified(options)) {
+      sslConfig.putAll(configureSsl(options));
+    } else {
+      LOG.info(
+          "No information to retrieve SSL certificate was provided by parameters. "
+              + "Trying to initiate a plain text connection.");
+    }
+
+    List<String> topicsList = new ArrayList<>(Arrays.asList(options.getInputTopics().split(",")));
+
+    checkArgument(
+        topicsList.size() > 0 && topicsList.get(0).length() > 0,
+        "inputTopics cannot be an empty string.");
+
+    List<String> bootstrapServersList =
+        new ArrayList<>(Arrays.asList(options.getBootstrapServers().split(",")));
+
+    checkArgument(
+        bootstrapServersList.size() > 0 && bootstrapServersList.get(0).length() > 0,
+        "bootstrapServers cannot be an empty string.");
+
+    // Create the pipeline
+    Pipeline pipeline = Pipeline.create(options);
+    LOG.info(
+        "Starting Kafka-To-PubSub pipeline with parameters bootstrap servers:"
+            + options.getBootstrapServers()
+            + " input topics: "
+            + options.getInputTopics()
+            + " output pubsub topic: "
+            + options.getOutputTopic());
+
+    /*
+     * Steps:
+     *  1) Read messages in from Kafka
+     *  2) Extract values only
+     *  3) Write successful records to PubSub
+     */
+
+    if (options.getOutputFormat() == FormatTransform.FORMAT.AVRO) {
+      pipeline
+          .apply(
+              "readAvrosFromKafka",
+              FormatTransform.readAvrosFromKafka(
+                  options.getBootstrapServers(), topicsList, kafkaConfig, sslConfig))
+          .apply("createValues", Values.create())
+          .apply("writeAvrosToPubSub", PubsubIO.writeAvros(AvroDataClass.class));
+
+    } else {

Review comment:
       Is it worth having this PUBSUB path? The README and javadoc only discuss 
the AVRO path. I think we should just have that one and remove the enum
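   
   If we drop the PUBSUB path and the FORMAT enum, the write section would collapse to just the Avro branch - roughly something like this (sketch only, not tested):
   ```java
   // Rough sketch with the FORMAT enum and the PUBSUB branch removed
   pipeline
       .apply(
           "readAvrosFromKafka",
           FormatTransform.readAvrosFromKafka(
               options.getBootstrapServers(), topicsList, kafkaConfig, sslConfig))
       .apply("createValues", Values.create())
       .apply("writeAvrosToPubSub", PubsubIO.writeAvros(AvroDataClass.class));
   ```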




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

