[ https://issues.apache.org/jira/browse/AIRFLOW-6786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17287873#comment-17287873 ]
ASF GitHub Bot commented on AIRFLOW-6786:
-----------------------------------------
luup2k commented on a change in pull request #12388:
URL: https://github.com/apache/airflow/pull/12388#discussion_r579772714
##########
File path: airflow/providers/apache/kafka/hooks/kafka_consumer.py
##########
@@ -0,0 +1,88 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+
+from kafka import KafkaConsumer
+
+from airflow.hooks.base_hook import BaseHook
+
+
+class KafkaConsumerHook(BaseHook):
+ """KafkaConsumerHook Class."""
+
+ DEFAULT_HOST = 'kafka1'
+ DEFAULT_PORT = 9092
+
+ def __init__(self, topic, host=DEFAULT_HOST, port=DEFAULT_PORT,
kafka_conn_id='kafka_default'):
+ super().__init__(None)
+ self.conn_id = kafka_conn_id
+ self._conn = None
+ self.server = None
+ self.consumer = None
+ self.extra_dejson = {}
+ self.topic = topic
+ self.host = host
+ self.port = port
+
+ def get_conn(self) -> KafkaConsumer:
+ """
+ A Kafka Consumer object.
+
+ :return:
+ A Kafka Consumer object.
+ """
+ if not self.consumer:
+ conn = self.get_connection(self.conn_id)
+ service_options = conn.extra_dejson
+ host = conn.host or self.DEFAULT_HOST
+ port = conn.port or self.DEFAULT_PORT
+
+ self.server = f"""{host}:{port}"""
+ self.consumer = KafkaConsumer(self.topic,
bootstrap_servers=self.server, **service_options)
+ return self.consumer
+
+ def get_messages(self, timeout_ms=5000) -> dict:
+ """
+ Get all the messages haven't been consumed, it doesn't
Review comment:
> "Get all the messages haven't been consumed,"
If we use poll() without max_records, the behavior is returns at most
"max_poll_records" #records. "max_poll_records" is setted to 500 by default at
Consumer Init config.
So, we're not going to consume "all" message except we put a very high
number as max_poll_records(could be a memory bomb) or we have a low number of
message in the topic.
https://kafka-python.readthedocs.io/en/master/apidoc/KafkaConsumer.html#kafka.KafkaConsumer.poll
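For illustration, a minimal kafka-python sketch of that behavior (the topic, broker,
and group names below are made up, not taken from the PR): each poll() call returns
at most max_poll_records records, so draining whatever is currently in the topic
means looping until a poll comes back empty.

    from kafka import KafkaConsumer

    # Hypothetical topic, broker, and group names -- for illustration only,
    # not taken from the PR under review.
    consumer = KafkaConsumer(
        'example_topic',
        bootstrap_servers='kafka1:9092',
        group_id='example-group',
        max_poll_records=500,       # kafka-python default: poll() returns at most this many records
        enable_auto_commit=False,
    )

    messages = []
    while True:
        # poll() returns {TopicPartition: [ConsumerRecord, ...]} for whatever is
        # currently available, capped at max_poll_records -- never "all" messages.
        batch = consumer.poll(timeout_ms=5000)
        if not batch:
            break
        for records in batch.values():
            messages.extend(records)

    consumer.commit()  # commit offsets only after the records have been handled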
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
> Adding KafkaConsumerHook, KafkaProducerHook, and KafkaSensor
> ------------------------------------------------------------
>
> Key: AIRFLOW-6786
> URL: https://issues.apache.org/jira/browse/AIRFLOW-6786
> Project: Apache Airflow
> Issue Type: New Feature
> Components: contrib, hooks
> Affects Versions: 1.10.9
> Reporter: Daniel Ferguson
> Assignee: Daniel Ferguson
> Priority: Minor
>
> Add the KafkaProducerHook.
> Add the KafkaConsumerHook.
> Add the KafkaSensor, which listens for messages on a specific topic.
> Related Issue:
> #1311 (Pre-dates Jira Migration)
> Reminder to contributors:
> You must add an Apache License header to all new files
> Please squash your commits when possible and follow the 7 rules of good Git
> commits
> I am new to the community, so I am not sure whether the files are in the right
> place or whether anything is missing.
> The sensor could be used as the first node of a DAG, where the second node can
> be a TriggerDagRunOperator. The messages are polled in a batch and the DAG runs
> are dynamically generated (a sketch of this pattern follows the quoted
> description below).
> Thanks!
> Note: as per the rejected PR [#1415|https://github.com/apache/airflow/pull/1415],
> it is important to mention that these integrations are not suitable for
> low-latency/high-throughput/streaming use cases. For reference, see [#1415
> (comment)|https://github.com/apache/airflow/pull/1415#issuecomment-484429806].
> Co-authored-by: Dan Ferguson
> [[email protected]|mailto:[email protected]]
> Co-authored-by: YuanfΞi Zhu
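As a rough sketch of the usage pattern described in the quoted issue (sensor as the
first task, TriggerDagRunOperator as the second), assuming Airflow 2-style imports:
the KafkaSensor import path and its parameters are assumptions here, as are the DAG
ids, topic name, and schedule.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.trigger_dagrun import TriggerDagRunOperator

    # Hypothetical import path and signature; the sensor added by the PR may differ.
    from airflow.providers.apache.kafka.sensors.kafka_sensor import KafkaSensor

    with DAG(
        dag_id='kafka_listener',            # illustrative DAG id
        start_date=datetime(2021, 1, 1),
        schedule_interval=None,             # triggered manually or by an external process
        catchup=False,
    ) as dag:
        # Wait for a batch of messages on the topic (assumed parameters).
        wait_for_messages = KafkaSensor(
            task_id='wait_for_kafka_messages',
            topic='example_topic',
            kafka_conn_id='kafka_default',
            poke_interval=60,
        )

        # Kick off the downstream processing DAG once messages have arrived.
        process_batch = TriggerDagRunOperator(
            task_id='trigger_processing_dag',
            trigger_dag_id='process_kafka_batch',  # illustrative target DAG id
        )

        wait_for_messages >> process_batch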
--
This message was sent by Atlassian Jira
(v8.3.4#803005)