Re: Anyone else mis-interpret the "KafkaConsumer" and "KafkaProducer" all the time?

2018-03-22 Thread vino yang
Hi guys,

I also agree with renaming the connectors. When I wrote the RabbitMQ
connector, I studied the Kafka connector's implementation. There are two
pairs of names: KafkaPublisher / KafkaSubscriber and KafkaConsumer /
KafkaProducer, and I found them confusing.

I think this scheme would be better:


   - inside API: to interact with the Kafka server we use the Kafka client
   API, so from Kafka's point of view the names KafkaProducer /
   KafkaConsumer make sense;
   - outer API: from Edgent's point of view we should use a unified naming,
   e.g. source / sink or some other Edgent-specific names, and all
   connectors should follow that convention (a rough sketch of the idea
   follows below).
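
For illustration only, here is a rough sketch of the "outer API" idea (the
interface names below are hypothetical, not the current Edgent API; only
TStream and Topology are real Edgent types):

    // Hypothetical sketch of a unified "outer API" naming, not the current Edgent API.
    // Internally a connector may still use the broker's own client classes
    // (e.g. Kafka's KafkaConsumer / KafkaProducer), but the Edgent-facing
    // names would always follow the Source / Sink convention.
    import org.apache.edgent.topology.TStream;
    import org.apache.edgent.topology.Topology;

    interface KafkaSource {
        // Kafka records become Edgent tuples (data injection).
        TStream<String> subscribe(Topology t, String topic);
    }

    interface KafkaSink {
        // Edgent tuples become Kafka records (data export).
        void publish(TStream<String> stream, String topic);
    }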


Vino yang
Thanks.

2018-03-23 0:56 GMT+08:00 Christofer Dutz :

> Hi Dale,
>
> Happy to read from you :-)
>
> It was just something I had to explain every time I showed the code for
> the currently by far most interesting use-case for my plc4x pocs at the
> moment (pumping data from a PLC to a Kafka topic) . So I thought, that if I
> have to explain it every time, cause people are confused, then probably we
> should talk about making things more clear.
>
> Chris
>
> Get Outlook for Android
>
> 
> From: Dale LaBossiere 
> Sent: Thursday, March 22, 2018 5:44:42 PM
> To: dev@edgent.apache.org
> Subject: Re: Anyone else mis-interpret the "KafkaConsumer" and
> "KafkaProducer" all the time?
>
> A bit of background…
>
> The Kafka connector is two classes instead of a single KafkaStreams
> connector (with publish(),subscribe()) because at least a while ago, don’t
> know if this is still the case, Kafka had two completely separate classes
> for a “consumer” and a “producer" each with very different config setup
> params. By comparison MQTT has a single MqttClient class (with
> publish()/subscribe()).
>
> At the time, the decision was to name the Edgent Kafka classes similar to
> the underlying Kafka API classes.  Hence KafkaConsumer (~wrapping Kafka’s
> ConsumerConnector) and KafkaProducer (~wrapping Kafka’s KafkaProducer).
> While not exposed today, it’s conceivable that some day one could create an
> Edgent Kafka connector instance by providing a Kafka API class directly
> instead of just a config map - e.g., supplying a Kafka KafkaProducer as an
> arg to the Edgent KafkaProducer connector's constructor.  So having the
> names align seems like goodness.
>
> I don’t think the Edgent connectors should be trying to make it
> unnecessary for a user to understand or to mask the underlying system’s
> API… just make it usable, easily usable for a simple/common cases, in an
> Edgent topology context (worrying about when to make an actually external
> connection, recovering from broken connections / reconnecting, handling
> common tuple types).
>
> As for the specific suggestions, I think simply switching the names of
> Edgent’s KafkaConsumer and KafkaProducer is a bad idea :-)
>
> Offering KafkaSource and KafkaSink is OK I guess (though probably
> retaining the current names for a release or three).  Though I’ll note the
> Edgent API uses “source” and “sink” as verbs, which take a Supplier and a
> Consumer fn as args respectively.  Note Consumer used in the context with
> sink.
>
> Alternatively there’s KafkaSubscriber and KafkaPublisher.  While clearer
> than Consumer/Producer, I don’t know if they’re any better than Source/Sink.
>
> In the end I guess I don’t feel strongly about it all… though wonder if
> it’s really worth the effort in changing.  At least the Edgent connector’s
> javadoc is pretty good / clear for the classes and their use... I think :-)
>
> — Dale
>
>
> > On Mar 20, 2018, at 9:59 PM, vino yang  wrote:
> >
> > Hi Chris,
> >
> > Any data processing framework can be thought of as a *pipeline*. From
> > Edgent's point of view, there could be two endpoints:
> >
> >
> >   - source : means data injection;
> >   - sink : means data export;
> >
> > Many frameworks use this conventional naming, such as
> > Apache Flume, Apache Flink, and Apache Spark (structured streaming).
> >
> > I think "KafkaConsumer" could be replaced with "KafkaSource" and
> > "KafkaProducer" could be named "KafkaSink".
> >
> > The middle of the pipeline is the transformation of the data; there are
> > many operators to transform data, such as map, flatMap, filter, reduce,
> > and so on.
> >
> > Vino yang.
> > Thanks.
> >
> > 2018-03-20 20:51 GMT+08:00 Christofer Dutz :
> >
> >> Hi,
> >>
> >> have been using the Kafka integration quite often in the past and one
> >> thing I always have to explain when demonstrating code and which seems
> to
> >> confuse everyone seeing the code:
> >>
> >> I would expect a KafkaConsumer to consume Edgent messages and publish
> them
> >> to Kafka and would expect a KafkaProducer to produce Edgent events.
> >>
> >> Unfortunately it seems to be the other way around. 

Re: Anyone else mis-interpret the "KafkaConsumer" and "KafkaProducer" all the time?

2018-03-22 Thread vino yang
Hi Dale,

When I wrote the RabbitMQ connector I followed the Kafka connector's style
(and I also looked at the MQTT connector), and I chose the Kafka connector
as the implementation template. The reason is that the two classes
(RabbitmqProducer and RabbitmqConsumer) should not share one RabbitMQ
connection and channel (implemented in RabbitmqConnector). The two classes
may be used in one topology (as both consumer and producer), so keeping
their connections and channels separate seemed better, roughly as sketched
below.
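
For illustration, a rough sketch of that "one connection/channel per
connector" idea using the plain RabbitMQ Java client (the
RabbitmqConnectorSketch class is hypothetical; only the com.rabbitmq.client
types are real):

    // Hypothetical sketch: each producer/consumer wrapper owns its own connector,
    // and therefore its own connection and channel; nothing is shared between them.
    // Not the actual Edgent connector code.
    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    class RabbitmqConnectorSketch implements AutoCloseable {
        private final ConnectionFactory factory = new ConnectionFactory();
        private Connection connection;
        private Channel channel;

        RabbitmqConnectorSketch(String host) {
            factory.setHost(host);
        }

        // Lazily open this connector's private connection and channel.
        synchronized Channel channel() throws Exception {
            if (connection == null) {
                connection = factory.newConnection();
                channel = connection.createChannel();
            }
            return channel;
        }

        @Override
        public synchronized void close() throws Exception {
            if (connection != null) connection.close();
        }
    }

A RabbitmqProducer and a RabbitmqConsumer used in the same topology would
then each hold their own connector instance instead of sharing one.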

2018-03-23 2:28 GMT+08:00 Dale LaBossiere :

> I see the new RabbitMQ connector followed the same API scheme as the Kafka
> connector.  i.e., adding Rabbitmq{Consumer,Producer} for the source/sink
> respectively.  It looks like it could have followed the MqttStreams
> approach instead.
>
> @yanghua, is there a reason you chose to offer 
> o.a.e.connectors.rabbitmq.Rabbitmq{Consumer,Producer}
> instead of just RabbitmqStreams?
>
> — Dale
>
> > On Mar 22, 2018, at 1:11 PM, Dale LaBossiere 
> wrote:
> >
> > Hi Chris.  Hopefully the background provided some useful context.  But
> like I said, I don’t feel strongly about some renaming if folks agree
> that’s the right think to do.
> >
> > — Dale
> >
> >> On Mar 22, 2018, at 12:56 PM, Christofer Dutz <
> christofer.d...@c-ware.de> wrote:
> >> It was just something I had to explain every time I showed the code for
> the currently by far most interesting use-case for my plc4x pocs at the
> moment (pumping data from a PLC to a Kafka topic) . So I thought, that if I
> have to explain it every time, cause people are confused, then probably we
> should talk about making things more clear.
> >
>
>


Re: Anyone else mis-interpret the "KafkaConsumer" and "KafkaProducer" all the time?

2018-03-22 Thread Dale LaBossiere
I see the new RabbitMQ connector followed the same API scheme as the Kafka 
connector.  i.e., adding Rabbitmq{Consumer,Producer} for the source/sink 
respectively.  It looks like it could have followed the MqttStreams approach 
instead.

@yanghua, is there a reason you chose to offer 
o.a.e.connectors.rabbitmq.Rabbitmq{Consumer,Producer} instead of just 
RabbitmqStreams?

— Dale

> On Mar 22, 2018, at 1:11 PM, Dale LaBossiere  wrote:
> 
> Hi Chris.  Hopefully the background provided some useful context.  But like I 
> said, I don’t feel strongly about some renaming if folks agree that’s the 
> right think to do.
> 
> — Dale
> 
>> On Mar 22, 2018, at 12:56 PM, Christofer Dutz  
>> wrote:
>> It was just something I had to explain every time I showed the code for the 
>> currently by far most interesting use-case for my plc4x pocs at the moment 
>> (pumping data from a PLC to a Kafka topic) . So I thought, that if I have to 
>> explain it every time, cause people are confused, then probably we should 
>> talk about making things more clear.
> 



Re: Rename "iotp" to "iot-ibm" or similar?

2018-03-22 Thread Dale LaBossiere
Yeah, “iotp” isn’t very clear/helpful.  I’ve often used “wiotp” when referring 
to “Watson IoT Platform”.
FWIW, wiotp's API’s base package name is com.ibm.iotf (IoT Framework) and 
Edgent’s connector was originally named “iotf”.  It was then renamed when they 
changed the product name, but not the package name, to “IoT Platform”.

+1 on a good naming scheme :-)

Adding “ibm” and/or “watson” to the [package] name seems like a reasonable idea.

Don’t forget about the “iotp" samples, website doc, recipes, and backward 
compatibility :-(  Probably have to deprecate but retain the old names.

— Dale

> On Mar 20, 2018, at 5:10 AM, Christofer Dutz  
> wrote:
> 
> Hi,
> 
> Well, at our company's in-house conference last Thursday through Sunday I had a 
> session on "implementing IoT Platform adapters for Apache Edgent", got some 
> people together, and we started to work on an AWS IoT connector. It seems 
> the concepts of the iot module can be implemented 1-to-1 with AWS. 
> 
> So this will be the thing I'll be working on actively. I think supporting as 
> many IoT platforms as possible will help Edgent's adoption. MindSphere, which 
> would be very interesting for PLC4X+Edgent solutions, will probably stay a 
> fantasy as I'm not willing to pay the 10k€/developer/year just to be 
> able to access their libs (not to mention that this totally disqualifies 
> it from being added to an Apache project).
> 
> So probably Google's platform will follow 
> 
> But I still want to rename the "iotp" module to something like "iot-watson" 
> or "iot-ibm-watson"
> 
> 
> Chris
> 
> 
> 
> On 12.03.18, 03:37, "vino yang"  wrote:
> 
>    +1, I think we could also support EdgeX Foundry in the future.
> 
>2018-03-11 20:00 GMT+08:00 Christofer Dutz :
> 
>> Hi all,
>> 
>> I am currently thinking of adding modules to support other IoT Platforms:
>> 
>>  *   Google IoT
>>  *   AWS IoT
>>  *   Siemens MindSphere
>> 
>> For that the “iotp” sort of doesn’t quite fit as it’s not just “one” IoT
>> platform.
>> So would you be ok with renaming that to something like: “iot-ibm” or
>> similar?
>> 
>> Then we’d have:
>> 
>>  *   Iot-ibm
>>  *   Iot-aws
>>  *   Iot-google
>>  *   Iot-mindsphere/siemens
>> 
>> Chris
>> 
> 
> 



Re: [disscuss] make TStream support groupBy operator?

2018-03-22 Thread Dale LaBossiere
Also see the SensorsAggregates sample [1].

If this info addresses your question wrt groupBy maybe that’s an indicator that 
more doc is needed to note this.  e.g., something in one or more of the TStream 
class javadoc, TStream.last(), TWindow, FAQ [2] and/or The Power of Edgent [3]. 
 Your thoughts?  Maybe you could contribute some additional clarifying info?

— Dale

[1] 
https://github.com/apache/incubator-edgent-samples/blob/develop/topology/src/main/java/org/apache/edgent/samples/topology/SensorsAggregates.java
 

[2] http://edgent.apache.org/docs/faq 
[3] http://edgent.apache.org/docs/power-of-edgent.html 


> On Mar 22, 2018, at 1:06 PM, Dale LaBossiere  wrote:
> 
> Hi,  (I’ve been in my first month of retirement, yeah it would be helpful if 
> other original Edgent developers chimed in)
> 
> Edgent provides count or time based Windows of tuples.  A window inherently 
> supports multiple independently managed partitions by a key.   Continuous or 
> batch aggregations can be performed on each partition.
> 
> See TStream.last() and TWindow.  Additionally, see edgent.analytics.math3.* 
> in particular the javadoc for edgent.analytics.math3.Aggregations.
> 
> Hope that helps.
> 
> — Dale
> 
>> On Mar 20, 2018, at 5:28 AM, vino yang  wrote:
>> 
>> Hi Chris,
>> 
>> My background is big data (MapReduce, Spark, Flink, Kafka Streams); those data
>> processing frameworks all provide a groupBy / keyBy operation and
>> aggregation operators. The concept comes from traditional RDBMSs. Edgent is like
>> a single-JVM Flink / Kafka Streams that works on the edge (gateway or IoT device).
>> They are similar to each other in some ways.
>> 
>> Vino yang.
>> Thanks!
>> 
>> 2018-03-20 17:05 GMT+08:00 Christofer Dutz :
>> 
>>> Hi Vino,
>>> 
>>> unfortunately I can't contribute any opinion on this as I don't yet
>>> understand the consequences.
>>> I know that in an asynchronous event processing system some operations
>>> that might be useful have to be sacrificed for the sake of asynchronicity.
>>> 
>>> To me Kafka Streams sort of feels like the cloud brother of Edgent, and it
>>> does seem to support groupBy.
>>> 
>>> Would be really cool if some of the formerly active people could at least
>>> leave some comments on questions like this. You don't have to actually work
>>> on things, but giving us new guys some guidance would be awesome.
>>> 
>>> I don't want to ruin things you built over years just because I'm not that
>>> into the topic ... yet.
>>> 
>>> Chris
>>> 
>>> 
>>> 
>>> 
>>>   On 16.03.18, 13:02, "Christofer Dutz"  wrote:
>>> 
>>>   I'm currently at a conference, so I can't be as responsive as I used
>>> to be ... All will be back to normal next Tuesday ;-)
>>> 
>>>   Chris
>>> 
>>>   Get Outlook for Android
>>> 
>>>   
>>>   From: vino yang 
>>>   Sent: Friday, March 16, 2018 2:26:10 AM
>>>   To: dev@edgent.apache.org
>>>   Subject: Re: [disscuss] make TStream support groupBy operator?
>>> 
>>>   Hi all,
>>> 
>>>   Can anyone give an opinion? Chris? I think we should support some
>>>   reduce operations (aggregation functions such as max / avg / min / sum)
>>>   for both streams and windowed streams; these features build on a keyBy
>>>   or groupBy operation.
>>> 
>>>   Vino yang
>>>   Thanks!
>>> 
>>>   2018-03-13 12:52 GMT+08:00 vino yang :
>>> 
>>>> Hi guys,
>>>> 
>>>> Does Edgent currently support a groupBy operator?
>>>> 
>>>> Vino yang
>>>> Thanks.
>>>> 
>>> 
>>> 
>>> 
> 



Re: Anyone else mis-interpret the "KafkaConsumer" and "KafkaProducer" all the time?

2018-03-22 Thread Dale LaBossiere
Hi Chris.  Hopefully the background provided some useful context.  But like I 
said, I don’t feel strongly about some renaming if folks agree that’s the right 
think to do.

— Dale

> On Mar 22, 2018, at 12:56 PM, Christofer Dutz  
> wrote:
> It was just something I had to explain every time I showed the code for the 
> currently by far most interesting use-case for my plc4x pocs at the moment 
> (pumping data from a PLC to a Kafka topic) . So I thought, that if I have to 
> explain it every time, cause people are confused, then probably we should 
> talk about making things more clear.



Re: [disscuss] make TStream support groupBy operator?

2018-03-22 Thread Dale LaBossiere
Hi,  (I’ve been in my first month of retirement, yeah it would be helpful if 
other original Edgent developers chimed in)

Edgent provides count or time based Windows of tuples.  A window inherently 
supports multiple independently managed partitions by a key.   Continuous or 
batch aggregations can be performed on each partition.

See TStream.last() and TWindow.  Additionally, see edgent.analytics.math3.* in 
particular the javadoc for edgent.analytics.math3.Aggregations.
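
For anyone following along, a minimal sketch of a keyed window with a
continuous aggregation (the SensorReading class is made up and the method
shapes are approximate; please check the TStream.last()/TWindow javadoc for
the exact signatures):

    // Rough sketch of partitioned windows plus aggregation in Edgent.
    // Method shapes are approximate; see the TStream.last()/TWindow javadoc.
    import org.apache.edgent.topology.TStream;
    import org.apache.edgent.topology.TWindow;

    class SensorReading {
        String sensorId;
        double value;
    }

    class WindowSketch {
        static TStream<Double> lastTenAverage(TStream<SensorReading> readings) {
            // Keep the last 10 tuples per sensorId, i.e. one window partition per key.
            TWindow<SensorReading, String> window = readings.last(10, r -> r.sensorId);

            // Continuously aggregate each partition independently.
            return window.aggregate((tuples, sensorId) ->
                    tuples.stream().mapToDouble(r -> r.value).average().orElse(0.0));
        }
    }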

Hope that helps.

— Dale

> On Mar 20, 2018, at 5:28 AM, vino yang  wrote:
> 
> Hi Chris,
> 
> My background is big data (MapReduce, Spark, Flink, Kafka Streams); those data
> processing frameworks all provide a groupBy / keyBy operation and
> aggregation operators. The concept comes from traditional RDBMSs. Edgent is like
> a single-JVM Flink / Kafka Streams that works on the edge (gateway or IoT device).
> They are similar to each other in some ways.
> 
> Vino yang.
> Thanks!
> 
> 2018-03-20 17:05 GMT+08:00 Christofer Dutz :
> 
>> Hi Vino,
>> 
>> unfortunately I can't contribute any opinion on this as I don't yet
>> understand the consequences.
>> I know that in an asynchronous event processing system some operations
>> that might be useful have to be sacrificed for the sake of asynchronicity.
>> 
>> To me Kafka Streams sort of feels like the cloud brother of Edgent, and it
>> does seem to support groupBy.
>> 
>> Would be really cool if some of the formerly active people could at least
>> leave some comments on questions like this. You don't have to actually work
>> on things, but giving us new guys some guidance would be awesome.
>> 
>> I don't want to ruin things you built over years just because I'm not that
>> into the topic ... yet.
>> 
>> Chris
>> 
>> 
>> 
>> 
>> On 16.03.18, 13:02, "Christofer Dutz"  wrote:
>> 
>>I'm currently at a conference, so I can't be as responsive as I used
>> to be ... All will be back to normal next Tuesday ;-)
>> 
>>Chris
>> 
>>Get Outlook for Android
>> 
>>
>>From: vino yang 
>>Sent: Friday, March 16, 2018 2:26:10 AM
>>To: dev@edgent.apache.org
>>Subject: Re: [disscuss] make TStream support groupBy operator?
>> 
>>Hi all,
>> 
>>Can anyone give an opinion? Chris? I think we should support some reduce
>>operations (aggregation functions such as max / avg / min / sum) for both
>>streams and windowed streams; these features build on a keyBy or groupBy
>>operation.
>> 
>>Vino yang
>>Thanks!
>> 
>>2018-03-13 12:52 GMT+08:00 vino yang :
>> 
>>> Hi guys,
>>> 
>>> Does Edgent currently support a groupBy operator?
>>> 
>>> Vino yang
>>> Thanks.
>>> 
>> 
>> 
>> 



Re: Anyone else mis-interpret the "KafkaConsumer" and "KafkaProducer" all the time?

2018-03-22 Thread Christofer Dutz
Hi Dale,

Happy to read from you :-)

It was just something I had to explain every time I showed the code for the 
currently by far most interesting use-case for my plc4x pocs at the moment 
(pumping data from a PLC to a Kafka topic) . So I thought, that if I have to 
explain it every time, cause people are confused, then probably we should talk 
about making things more clear.

Chris

Get Outlook for Android


From: Dale LaBossiere 
Sent: Thursday, March 22, 2018 5:44:42 PM
To: dev@edgent.apache.org
Subject: Re: Anyone else mis-interpret the "KafkaConsumer" and "KafkaProducer" 
all the time?

A bit of background…

The Kafka connector is two classes instead of a single KafkaStreams connector 
(with publish(),subscribe()) because at least a while ago, don’t know if this 
is still the case, Kafka had two completely separate classes for a “consumer” 
and a “producer" each with very different config setup params. By comparison 
MQTT has a single MqttClient class (with publish()/subscribe()).

At the time, the decision was to name the Edgent Kafka classes similar to the 
underlying Kafka API classes.  Hence KafkaConsumer (~wrapping Kafka’s 
ConsumerConnector) and KafkaProducer (~wrapping Kafka’s KafkaProducer).  While 
not exposed today, it’s conceivable that some day one could create an Edgent 
Kafka connector instance by providing a Kafka API class directly instead of 
just a config map - e.g., supplying a Kafka KafkaProducer as an arg to the 
Edgent KafkaProducer connector's constructor.  So having the names align seems 
like goodness.

I don’t think the Edgent connectors should be trying to make it unnecessary for 
a user to understand or to mask the underlying system’s API… just make it 
usable, easily usable for a simple/common cases, in an Edgent topology context 
(worrying about when to make an actually external connection, recovering from 
broken connections / reconnecting, handling common tuple types).

As for the specific suggestions, I think simply switching the names of Edgent’s 
KafkaConsumer and KafkaProducer is a bad idea :-)

Offering KafkaSource and KafkaSink is OK I guess (though probably retaining the 
current names for a release or three).  Though I’ll note the Edgent API uses 
“source” and “sink” as verbs, which take a Supplier and a Consumer fn as args 
respectively.  Note Consumer used in the context with sink.

Alternatively there’s KafkaSubscriber and KafkaPublisher.  While clearer than 
Consumer/Producer, I don’t know if they’re any better than Source/Sink.

In the end I guess I don’t feel strongly about it all… though wonder if it’s 
really worth the effort in changing.  At least the Edgent connector’s javadoc 
is pretty good / clear for the classes and their use... I think :-)

— Dale


> On Mar 20, 2018, at 9:59 PM, vino yang  wrote:
>
> Hi Chris,
>
> Any data processing framework can be thought of as a *pipeline*. From Edgent's
> point of view, there could be two endpoints:
>
>
>   - source : means data injection;
>   - sink : means data export;
>
> Many frameworks use this conventional naming, such as Apache
> Flume, Apache Flink, and Apache Spark (structured streaming).
>
> I think "KafkaConsumer" could be replaced with "KafkaSource" and
> "KafkaProducer" could be named "KafkaSink".
>
> The middle of the pipeline is the transformation of the data; there are
> many operators to transform data, such as map, flatMap, filter, reduce,
> and so on.
>
> Vino yang.
> Thanks.
>
> 2018-03-20 20:51 GMT+08:00 Christofer Dutz :
>
>> Hi,
>>
>> have been using the Kafka integration quite often in the past and one
>> thing I always have to explain when demonstrating code and which seems to
>> confuse everyone seeing the code:
>>
>> I would expect a KafkaConsumer to consume Edgent messages and publish them
>> to Kafka and would expect a KafkaProducer to produce Edgent events.
>>
>> Unfortunately it seems to be the other way around. This seems a little
>> unintuitive. Judging from the continued confusion when demonstrating code
>> eventually it’s worth considering to rename these (swap their names).
>> Eventually even rename them to “KafkaSource” (Edgent Source that consumes
>> Kafka messages and produces Edgent events) and “KafkaConsumer” (Consumes
>> Edgent Events and produces Kafka messages). After all the Classes are in
>> the Edgent namespace and come from the Edgent libs, so the fixed point when
>> inspecting these should be clear. Also I bet no one would be confused if we
>> called something that produces Kafka messages a consumer as there should
>> never be code that handles this from a Kafka point of view AND uses Edgent
>> at the same time.
>>
>> Chris
>>
>>
>>



Re: Anyone else mis-interpret the "KafkaConsumer" and "KafkaProducer" all the time?

2018-03-22 Thread Dale LaBossiere
A bit of background…

The Kafka connector is two classes instead of a single KafkaStreams connector 
(with publish(),subscribe()) because at least a while ago, don’t know if this 
is still the case, Kafka had two completely separate classes for a “consumer” 
and a “producer" each with very different config setup params. By comparison 
MQTT has a single MqttClient class (with publish()/subscribe()).

At the time, the decision was to name the Edgent Kafka classes similar to the 
underlying Kafka API classes.  Hence KafkaConsumer (~wrapping Kafka’s 
ConsumerConnector) and KafkaProducer (~wrapping Kafka’s KafkaProducer).  While 
not exposed today, it’s conceivable that some day one could create an Edgent 
Kafka connector instance by providing a Kafka API class directly instead of 
just a config map - e.g., supplying a Kafka KafkaProducer as an arg to the 
Edgent KafkaProducer connector's constructor.  So having the names align seems 
like goodness.
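
For context, here is roughly what wiring the current classes with just a
config map looks like (the call shapes are from memory of the samples and
approximate; treat this as a sketch rather than copy/paste code):

    // Rough usage sketch of the current Edgent Kafka connector classes,
    // configured via a config map. Call shapes are approximate.
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.edgent.connectors.kafka.KafkaConsumer;
    import org.apache.edgent.connectors.kafka.KafkaProducer;
    import org.apache.edgent.topology.TStream;
    import org.apache.edgent.topology.Topology;

    class KafkaUsageSketch {
        static void wire(Topology t, TStream<String> readings) {
            Map<String, Object> config = new HashMap<>();
            config.put("bootstrap.servers", "localhost:9092");
            // A real consumer config needs more entries (e.g. a group id).

            // Edgent KafkaProducer: consumes Edgent tuples, publishes Kafka messages.
            KafkaProducer producer = new KafkaProducer(t, () -> config);
            producer.publish(readings, "sensor-topic");

            // Edgent KafkaConsumer: subscribes to Kafka, produces Edgent tuples.
            KafkaConsumer consumer = new KafkaConsumer(t, () -> config);
            TStream<String> fromKafka = consumer.subscribe(rec -> rec.value(), "sensor-topic");
            fromKafka.print();
        }
    }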

I don’t think the Edgent connectors should be trying to make it unnecessary for 
a user to understand or to mask the underlying system’s API… just make it 
usable, easily usable for a simple/common cases, in an Edgent topology context 
(worrying about when to make an actually external connection, recovering from 
broken connections / reconnecting, handling common tuple types).

As for the specific suggestions, I think simply switching the names of Edgent’s 
KafkaConsumer and KafkaProducer is a bad idea :-)

Offering KafkaSource and KafkaSink is OK I guess (though probably retaining the 
current names for a release or three).  Though I’ll note the Edgent API uses 
“source” and “sink” as verbs, which take a Supplier and a Consumer fn as args 
respectively.  Note Consumer used in the context with sink.
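
To make that concrete, a small sketch of the existing verbs (the functional
types are Edgent's own Supplier/Consumer from org.apache.edgent.function,
not java.util.function; the shapes below should match the current API but
double-check the javadoc):

    // Sketch of Edgent's existing use of "source"/"sink" as verbs on a topology.
    import java.util.concurrent.TimeUnit;
    import org.apache.edgent.topology.TStream;
    import org.apache.edgent.topology.Topology;

    class SourceSinkVerbsSketch {
        static void wire(Topology t) {
            // Source side: a Supplier provides the tuples (polled once per second here).
            TStream<Double> readings = t.poll(() -> Math.random(), 1, TimeUnit.SECONDS);

            // Pipeline middle: ordinary transformations.
            TStream<Double> hot = readings.filter(v -> v > 0.5).map(v -> v * 100);

            // Sink side: a Consumer receives each tuple at the end of the pipeline.
            hot.sink(v -> System.out.println("value=" + v));
        }
    }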

Alternatively there’s KafkaSubscriber and KafkaPublisher.  While clearer than 
Consumer/Producer, I don’t know if they’re any better than Source/Sink.

In the end I guess I don’t feel strongly about it all… though wonder if it’s 
really worth the effort in changing.  At least the Edgent connector’s javadoc 
is pretty good / clear for the classes and their use... I think :-)

— Dale


> On Mar 20, 2018, at 9:59 PM, vino yang  wrote:
> 
> Hi Chris,
> 
> Any data processing framework can be thought of as a *pipeline*. From Edgent's
> point of view, there could be two endpoints:
> 
> 
>   - source : means data injection;
>   - sink : means data export;
> 
> Many frameworks use this conventional naming, such as Apache
> Flume, Apache Flink, and Apache Spark (structured streaming).
> 
> I think "KafkaConsumer" could be replaced with "KafkaSource" and
> "KafkaProducer" could be named "KafkaSink".
> 
> The middle of the pipeline is the transformation of the data; there are
> many operators to transform data, such as map, flatMap, filter, reduce,
> and so on.
> 
> Vino yang.
> Thanks.
> 
> 2018-03-20 20:51 GMT+08:00 Christofer Dutz :
> 
>> Hi,
>> 
>> have been using the Kafka integration quite often in the past and one
>> thing I always have to explain when demonstrating code and which seems to
>> confuse everyone seeing the code:
>> 
>> I would expect a KafkaConsumer to consume Edgent messages and publish them
>> to Kafka and would expect a KafkaProducer to produce Edgent events.
>> 
>> Unfortunately it seems to be the other way around. This seems a little
>> unintuitive. Judging from the continued confusion when demonstrating code
>> eventually it’s worth considering to rename these (swap their names).
>> Eventually even rename them to “KafkaSource” (Edgent Source that consumes
>> Kafka messages and produces Edgent events) and “KafkaConsumer” (Consumes
>> Edgent Events and produces Kafka messages). After all the Classes are in
>> the Edgent namespace and come from the Edgent libs, so the fixed point when
>> inspecting these should be clear. Also I bet no one would be confused if we
>> called something that produces Kafka messages a consumer as there should
>> never be code that handles this from a Kafka point of view AND uses Edgent
>> at the same time.
>> 
>> Chris
>> 
>> 
>>