Re: Re: [discuss] What about splitting the kafka Connector into kafka 0.8 and 0.9?

2018-04-04 Thread Vino yang
Hi Chris,

Take it easy; I don't think the compile failure did much harm. I am glad to
hear that the newer Kafka client is backward compatible.

Vino yang
Thanks.

On 2018-04-04 23:21, Christofer Dutz wrote:

> Hi Vino,
>
> Yeah ... but I did it without an ASF header ... that's why the build was
> failing for 23 days (I am really ashamed about that). I tried updating the
> two Kafka dependencies to the 1.1.0 version (and to Scala 2.12) and that
> worked without any noticeable problems.
>
> Chris
>
> Am 04.04.18, 13:38 schrieb "vino yang" <yanghua1...@gmail.com>:
>
>     Hi Chris,
>
>     I rechecked the old mails between you and me and misunderstood your
>     message: I thought you were going to create the annotation when in
>     fact you had already created it. I will do this work soon, hold on.
>
>     Vino yang.
>     Thanks.

Re: [discuss] What about splitting the kafka Connector into kafka 0.8 and 0.9?

2018-04-04 Thread vino yang
Hi Chris,

I rechecked the old mails between you and me and misunderstood your
message: I thought you were going to create the annotation when in fact
you had already created it.

I will do this work soon, hold on.

Vino yang.
Thanks.

2018-04-04 19:32 GMT+08:00 vino yang <yanghua1...@gmail.com>:

> Hi Chris,
>
> I have not done this. And I would upgrade it soon.
>
> Vino yang
> Thanks!
>
> 2018-04-04 19:23 GMT+08:00 Christofer Dutz <christofer.d...@c-ware.de>:
>
>> Hi,
>>
>> so I updated the libs locally, built and re-ran the example with this
>> version and it now worked without any problems.
>>
>> Chris
>>
>>
>>
>> Am 04.04.18, 12:58 schrieb "Christofer Dutz" <christofer.d...@c-ware.de
>> >:
>>
>> Hi all,
>>
>> reporting back from my easter holidays :-)
>>
>> Today I had to help a customer with getting a POC working that uses
>> PLC4X and Edgent. Unfortunately it seems that in order to use the kafka
>> connector I can only use 0.x versions of Kafka. When connecting to 1.x
>> versions I get stack-overflows and OutOfMemory errors. A quick test
>> updating the Kafka libs from the ancient 0.8.2.2 to 1.1.0 seemed to
>> not break anything ... I'll do some local tests with an updated Kafka
>> client.
>>
>> @vino yang ... have you been working on adding the Annotations to the
>> client?
>>
>> @all others ... does anyone have objections to updating the kafka
>> client libs to 1.1.0? It shouldn't break anything as it should be backward
>> compatible. As we are currently not using anything above the API level of
>> 0.8.2 there should also not be any Exceptions (I don't know of any removed
>> things, which could be a problem).
>>
>> Chris
>>
>>
>>
>> Am 20.03.18, 10:33 schrieb "Christofer Dutz" <
>> christofer.d...@c-ware.de>:
>>
>> Ok,
>>
>> So I just added a new Annotation type to the Kafka module.
>>
>> org.apache.edgent.connectors.kafka.annotations.KafkaVersion
>>
>> It has a fromVersion and a toVersion attribute. Both should be
>> optional so just adding the annotation would have no effect (besides a few
>> additional CPU operations). The annotation can be applied to methods or
>> classes (every method then inherits this). I hope that's ok, because
>> implementing this on a parameter Level would make things extremely
>> difficult.
>>
>> @vino yang With this you should be able to provide Kafka version
>> constraints to your code changes. Just tell me if something's missing or
>> needs to be done differently
>>
>> For now this annotation will have no effect as I haven't
>> implemented the Aspect for doing the checks, but I'll start working on that
>> as soon as you have annotated something.
>>
>> Chris
>>
>> Am 20.03.18, 10:11 schrieb "Christofer Dutz" <
>> christofer.d...@c-ware.de>:
>>
>> Ok ... maybe I should add the Annotation prior to continuing
>> my work on the AWS connector ...
>>
>>
>> Chris
>>
>> Am 04.03.18, 08:10 schrieb "vino yang" <yanghua1...@gmail.com
>> >:
>>
>> The reason is that Kafka 0.9+ provided a new consumer API
>> which has more
>> features and better performance.
>>
>> Just like Flink's implementation :
>> https://github.com/apache/flink/tree/master/flink-connectors.
>>
>> vinoyang
>> Thanks.
>>
>>
>>
>>
>>
>>
>>
>>
>>
>


Re: [discuss] What about splitting the kafka Connector into kafka 0.8 and 0.9?

2018-04-04 Thread vino yang
Hi Chris,

I have not done this yet, but I will upgrade it soon.

Vino yang
Thanks!

2018-04-04 19:23 GMT+08:00 Christofer Dutz <christofer.d...@c-ware.de>:

> Hi,
>
> so I updated the libs locally, built and re-ran the example with this
> version and it now worked without any problems.
>
> Chris
>
>
>
> Am 04.04.18, 12:58 schrieb "Christofer Dutz" <christofer.d...@c-ware.de>:
>
> Hi all,
>
> reporting back from my easter holidays :-)
>
> Today I had to help a customer with getting a POC working that uses
> PLC4X and Edgent. Unfortunately it seems that in order to use the kafka
> connector I can only use 0.x versions of Kafka. When connecting to 1.x
> versions I get stack-overflows and OutOfMemory errors. A quick test
> updating the Kafka libs from the ancient 0.8.2.2 to 1.1.0 seemed to
> not break anything ... I'll do some local tests with an updated Kafka
> client.
>
> @vino yang ... have you been working on adding the Annotations to the
> client?
>
> @all others ... does anyone have objections to updating the kafka
> client libs to 1.1.0? It shouldn't break anything as it should be backward
> compatible. As we are currently not using anything above the API level of
> 0.8.2 there should also not be any Exceptions (I don't know of any removed
> things, which could be a problem).
>
> Chris
>
>
>
> Am 20.03.18, 10:33 schrieb "Christofer Dutz" <
> christofer.d...@c-ware.de>:
>
> Ok,
>
> So I just added a new Annotation type to the Kafka module.
>
> org.apache.edgent.connectors.kafka.annotations.KafkaVersion
>
> It has a fromVersion and a toVersion attribute. Both should be
> optional so just adding the annotation would have no effect (besides a few
> additional CPU operations). The annotation can be applied to methods or
> classes (every method then inherits this). I hope that's ok, because
> implementing this on a parameter Level would make things extremely
> difficult.
>
> @vino yang With this you should be able to provide Kafka version
> constraints to your code changes. Just tell me if something's missing or
> needs to be done differently
>
> For now this annotation will have no effect as I haven't
> implemented the Aspect for doing the checks, but I'll start working on that
> as soon as you have annotated something.
>
> Chris
>
> Am 20.03.18, 10:11 schrieb "Christofer Dutz" <
> christofer.d...@c-ware.de>:
>
> Ok ... maybe I should add the Annotation prior to continuing
> my work on the AWS connector ...
>
>
> Chris
>
> Am 04.03.18, 08:10 schrieb "vino yang" <yanghua1...@gmail.com
> >:
>
> The reason is that Kafka 0.9+ provided a new consumer API
> which has more
> features and better performance.
>
> Just like Flink's implementation :
> https://github.com/apache/flink/tree/master/flink-connectors.
>
> vinoyang
> Thanks.
>
>
>
>
>
>
>
>
>


Re: Anyone else mis-interpret the "KafkaConsumer" and "KafkaProducer" all the time?

2018-03-26 Thread vino yang
Hi Dale,

The producer and consumer do not share one connection simply to avoid
trouble when both are used in one topology (if they shared a connection,
whichever of them closed it would break the other). Actually, the case I
just described rarely occurs (in most scenarios there is only one
connection), so I think we need not worry much about the case you
described.

Vino yang.
Thanks.

2018-03-27 4:27 GMT+08:00 Dale LaBossiere <dml.apa...@gmail.com>:

> Hi Vino, thanks for the clarification.
>
> One last question :-). Is there ever a situation when it’s
> desirable/possible for a Producer and Consumer to share a single RabbitMQ
> connection?  e.g., a low throughput device wanting to minimize
> connections?  If so, the separate Producer and Consumer split doesn't
> support that case.
>
> > On Mar 22, 2018, at 9:39 PM, vino yang <yanghua1...@gmail.com> wrote:
> >
> > Hi Dale,
> >
> > When I wrote the RabbitMQ connector I followed the Kafka Connector's
> style
> > (and I also looked the MQTT connectors). And I chose the Kafka connector
> as
> > the implementation template. The reason is the two classes
> > (RabbitmqProducer and RabbitmqConsumer) should not share one rabbitmq's
> > connection and channel (implemented in RabbitmqConnector). The two
> classes
> > maybe use in one topology (as consumer and producer) and split the inner
> > connection and channel would be better.
> >
> > 2018-03-23 2:28 GMT+08:00 Dale LaBossiere <dlab...@apache.org>:
> >
> >> I see the new RabbitMQ connector followed the same API scheme as the
> Kafka
> >> connector.  i.e., adding Rabbitmq{Consumer,Producer} for the source/sink
> >> respectively.  It looks like it could have followed the MqttStreams
> >> approach instead.
> >>
> >> @yanghua, is there a reason you chose to offer
> o.a.e.connectors.rabbitmq.Rabbitmq{Consumer,Producer}
> >> instead of just RabbitmqStreams?
> >>
> >> — Dale
> >>
> >>> On Mar 22, 2018, at 1:11 PM, Dale LaBossiere <dml.apa...@gmail.com>
> >> wrote:
> >>>
> >>> Hi Chris.  Hopefully the background provided some useful context.  But
> >> like I said, I don’t feel strongly about some renaming if folks agree
> >> that’s the right think to do.
> >>>
> >>> — Dale
> >>>
> >>>> On Mar 22, 2018, at 12:56 PM, Christofer Dutz <
> >> christofer.d...@c-ware.de> wrote:
> >>>> It was just something I had to explain every time I showed the code
> for
> >> the currently by far most interesting use-case for my plc4x pocs at the
> >> moment (pumping data from a PLC to a Kafka topic) . So I thought, that
> if I
> >> have to explain it every time, cause people are confused, then probably
> we
> >> should talk about making things more clear.
> >>>
> >>
> >>
>
>


Re: My Edgent and PLC4X Article on the Cover-Page

2018-03-23 Thread vino yang
Hi Chris,

You did a great job! It's good for expanding Edgent's influence!

Vino yang
Thanks.

2018-03-23 17:57 GMT+08:00 Christofer Dutz <christofer.d...@c-ware.de>:

> Hi,
>
> today I bought the magazine with my Article and was totally amazed that it
> has become one of the cover-page articles :-)
>
> Here some images (not the article itself though)
>
> https://drive.google.com/open?id=1PsDY1T-6G2VShUtGqSkkBsBxmiXAOvEs
>
> https://drive.google.com/open?id=1x8oSM7iYs-FG_fURo7uHcoTeXKbMlGrO
>
> https://drive.google.com/open?id=13d-vo_oJfgNFy15gmb2U8BoBxzCrcf0Y
>
> So hopefully this will bring the one or the other new subscription here ;-)
>
> Chris
>


Re: Anyone else mis-interpret the "KafkaConsumer" and "KafkaProducer" all the time?

2018-03-22 Thread vino yang
Hi guys,

I also agree with renaming the connectors. When I wrote the RabbitMQ
connector, I studied the Kafka connector's implementation. There are two
communicating pairs: KafkaPublisher / KafkaSubscriber and KafkaConsumer /
KafkaProducer, and I found them confusing.

I think this model would be better:


   - inner API : we use the Kafka client API to interact with the Kafka
   server; this is Kafka's point of view, so we can use KafkaProducer /
   KafkaConsumer;
   - outer API : this is Edgent's point of view, so we should use unified
   naming; the names could be source / sink or some other Edgent-specific
   names, and all connectors should follow this specification.


Vino yang
Thanks.

2018-03-23 0:56 GMT+08:00 Christofer Dutz <christofer.d...@c-ware.de>:

> Hi Dale,
>
> Happy to read from you :-)
>
> It was just something I had to explain every time I showed the code for
> the currently by far most interesting use-case for my plc4x pocs at the
> moment (pumping data from a PLC to a Kafka topic) . So I thought, that if I
> have to explain it every time, cause people are confused, then probably we
> should talk about making things more clear.
>
> Chris
>
> Outlook for Android<https://aka.ms/ghei36> herunterladen
>
> 
> From: Dale LaBossiere <dml.apa...@gmail.com>
> Sent: Thursday, March 22, 2018 5:44:42 PM
> To: dev@edgent.apache.org
> Subject: Re: Anyone else mis-interpret the "KafkaConsumer" and
> "KafkaProducer" all the time?
>
> A bit of background…
>
> The Kafka connector is two classes instead of a single KafkaStreams
> connector (with publish(),subscribe()) because at least a while ago, don’t
> know if this is still the case, Kafka had two completely separate classes
> for a “consumer” and a “producer" each with very different config setup
> params. By comparison MQTT has a single MqttClient class (with
> publish()/subscribe()).
>
> At the time, the decision was to name the Edgent Kafka classes similar to
> the underlying Kafka API classes.  Hence KafkaConsumer (~wrapping Kafka’s
> ConsumerConnector) and KafkaProducer (~wrapping Kafka’s KafkaProducer).
> While not exposed today, it’s conceivable that some day one could create an
> Edgent Kafka connector instance by providing a Kafka API class directly
> instead of just a config map - e.g., supplying a Kafka KafkaProducer as an
> arg to the Edgent KafkaProducer connector's constructor.  So having the
> names align seems like goodness.
>
> I don’t think the Edgent connectors should be trying to make it
> unnecessary for a user to understand or to mask the underlying system’s
> API… just make it usable, easily usable for a simple/common cases, in an
> Edgent topology context (worrying about when to make an actually external
> connection, recovering from broken connections / reconnecting, handling
> common tuple types).
>
> As for the specific suggestions, I think simply switching the names of
> Edgent’s KafkaConsumer and KafkaProducer is a bad idea :-)
>
> Offering KafkaSource and KafkaSink is OK I guess (though probably
> retaining the current names for a release or three).  Though I’ll note the
> Edgent API uses “source” and “sink” as verbs, which take a Supplier and a
> Consumer fn as args respectively.  Note Consumer used in the context with
> sink.
>
> Alternatively there’s KafkaSubscriber and KafkaPublisher.  While clearer
> than Consumer/Producer, I don’t know if they’re any better than Source/Sink.
>
> In the end I guess I don’t feel strongly about it all… though wonder if
> it’s really worth the effort in changing.  At least the Edgent connector’s
> javadoc is pretty good / clear for the classes and their use... I think :-)
>
> — Dale
>
>
> > On Mar 20, 2018, at 9:59 PM, vino yang <yanghua1...@gmail.com> wrote:
> >
> > Hi Chris,
> >
> > All data processing framework could think it as a *pipeline . *The
> Edgent's
> > point of view, there could be two endpoints :
> >
> >
> >   - source : means data injection;
> >   - sink : means data export;
> >
> > There are many frameworks use this conventional naming rule, such as
> Apache
> > Flume, Apache Flink, Apache Spark(structured streaming) .
> >
> > I think "KafkaConsumer" could be replaced with "KafkaSource" and
> > "KafkaProducer" could be named "KafkaSink".
> >
> > And middle of the pipeline is the transformation of the data, there are
> > many operators to transform data ,such as map, flatmap, filter, reduce...
> > and so on.
> >
> > Vino yang.
> > Thanks.
> >
> > 2018-03-20 20:51 GMT+08:00 Christofer Dutz <christofer.d...@c

Re: Anyone else mis-interpret the "KafkaConsumer" and "KafkaProducer" all the time?

2018-03-22 Thread vino yang
Hi Dale,

When I wrote the RabbitMQ connector I followed the Kafka connector's style
(and I also looked at the MQTT connectors), choosing the Kafka connector as
the implementation template. The reason is that the two classes
(RabbitmqProducer and RabbitmqConsumer) should not share one RabbitMQ
connection and channel (implemented in RabbitmqConnector). The two classes
may be used in one topology (as consumer and producer), so splitting the
inner connection and channel is better.

2018-03-23 2:28 GMT+08:00 Dale LaBossiere :

> I see the new RabbitMQ connector followed the same API scheme as the Kafka
> connector.  i.e., adding Rabbitmq{Consumer,Producer} for the source/sink
> respectively.  It looks like it could have followed the MqttStreams
> approach instead.
>
> @yanghua, is there a reason you chose to offer 
> o.a.e.connectors.rabbitmq.Rabbitmq{Consumer,Producer}
> instead of just RabbitmqStreams?
>
> — Dale
>
> > On Mar 22, 2018, at 1:11 PM, Dale LaBossiere 
> wrote:
> >
> > Hi Chris.  Hopefully the background provided some useful context.  But
> like I said, I don’t feel strongly about some renaming if folks agree
> that’s the right think to do.
> >
> > — Dale
> >
> >> On Mar 22, 2018, at 12:56 PM, Christofer Dutz <
> christofer.d...@c-ware.de> wrote:
> >> It was just something I had to explain every time I showed the code for
> the currently by far most interesting use-case for my plc4x pocs at the
> moment (pumping data from a PLC to a Kafka topic) . So I thought, that if I
> have to explain it every time, cause people are confused, then probably we
> should talk about making things more clear.
> >
>
>


Re: Anyone else mis-interpret the "KafkaConsumer" and "KafkaProducer" all the time?

2018-03-20 Thread vino yang
Hi Chris,

Any data processing framework can be thought of as a *pipeline*. From
Edgent's point of view, there are two endpoints:


   - source : where data is injected;
   - sink : where data is exported;

Many frameworks use this conventional naming, such as Apache Flume, Apache
Flink, and Apache Spark (Structured Streaming).

I think "KafkaConsumer" could be replaced with "KafkaSource" and
"KafkaProducer" could be renamed "KafkaSink".

The middle of the pipeline is the transformation of the data; there are
many operators to transform data, such as map, flatMap, filter, reduce,
and so on.

Vino yang.
Thanks.

2018-03-20 20:51 GMT+08:00 Christofer Dutz <christofer.d...@c-ware.de>:

> Hi,
>
> have been using the Kafka integration quite often in the past and one
> thing I always have to explain when demonstrating code and which seems to
> confuse everyone seeing the code:
>
> I would expect a KafkaConsumer to consume Edgent messages and publish them
> to Kafka and would expect a KafkaProducer to produce Edgent events.
>
> Unfortunately it seems to be the other way around. This seems a little
> unintuitive. Judging from the continued confusion when demonstrating code
> eventually it’s worth considering to rename these (swap their names).
> Eventually even rename them to “KafkaSource” (Edgent Source that consumes
> Kafka messages and produces Edgent events) and “KafkaConsumer” (Consumes
> Edgent Events and produces Kafka messages). After all the Classes are in
> the Edgent namespace and come from the Edgent libs, so the fixed point when
> inspecting these should be clear. Also I bet no one would be confused if we
> called something that produces Kafka messages a consumer as there should
> never be code that handles this from a Kafka point of view AND uses Edgent
> at the same time.
>
> Chris
>
>
>


Re: [discuss] What about splitting the kafka Connector into kafka 0.8 and 0.9?

2018-03-20 Thread vino yang
Hi Chris,

I will try to do this; if I have any questions, I will keep in touch!

Vino yang
Thanks.

2018-03-20 17:32 GMT+08:00 Christofer Dutz <christofer.d...@c-ware.de>:

> Ok,
>
> So I just added a new Annotation type to the Kafka module.
>
> org.apache.edgent.connectors.kafka.annotations.KafkaVersion
>
> It has a fromVersion and a toVersion attribute. Both should be optional so
> just adding the annotation would have no effect (besides a few additional
> CPU operations). The annotation can be applied to methods or classes (every
> method then inherits this). I hope that's ok, because implementing this on
> a parameter Level would make things extremely difficult.
>
> @vino yang With this you should be able to provide Kafka version
> constraints to your code changes. Just tell me if something's missing or
> needs to be done differently
>
> For now this annotation will have no effect as I haven't implemented the
> Aspect for doing the checks, but I'll start working on that as soon as you
> have annotated something.
>
> Chris
>
> Am 20.03.18, 10:11 schrieb "Christofer Dutz" <christofer.d...@c-ware.de>:
>
> Ok ... maybe I should add the Annotation prior to continuing my work
> on the AWS connector ...
>
>
> Chris
>
> Am 04.03.18, 08:10 schrieb "vino yang" <yanghua1...@gmail.com>:
>
> The reason is that Kafka 0.9+ provided a new consumer API which
> has more
> features and better performance.
>
> Just like Flink's implementation :
> https://github.com/apache/flink/tree/master/flink-connectors.
>
> vinoyang
> Thanks.
>
>
>
>
>
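The annotation Chris describes above could look roughly like the following minimal sketch (flattened into one file for brevity). The attribute names come from the email; the retention/target choices and empty-string defaults are assumptions, since the actual source isn't shown in this thread:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class KafkaVersionSketch {

    // Sketch of org.apache.edgent.connectors.kafka.annotations.KafkaVersion:
    // both attributes optional, applicable to methods or classes
    // (a class-level annotation applies to every method of the class).
    @Retention(RetentionPolicy.RUNTIME)
    @Target({ElementType.METHOD, ElementType.TYPE})
    @interface KafkaVersion {
        String fromVersion() default "";
        String toVersion() default "";
    }

    // Hypothetical usage: a feature only available from Kafka 0.10.0 on.
    @KafkaVersion(fromVersion = "0.10.0")
    static void setMaxPollRecords() { }

    public static void main(String[] args) throws Exception {
        // An Aspect (or any caller) can read the constraint via reflection.
        Method m = KafkaVersionSketch.class.getDeclaredMethod("setMaxPollRecords");
        KafkaVersion v = m.getAnnotation(KafkaVersion.class);
        System.out.println("fromVersion=" + v.fromVersion() + " toVersion=" + v.toVersion());
    }
}
```

With defaults on both attributes, annotating a method without arguments is a no-op until the checking Aspect exists, matching what the email describes.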


Re: [disscuss] make TStream support groupBy operator?

2018-03-20 Thread vino yang
Hi Chris,

I think the Edgent community seems not very active right now. Industrial
IoT and edge computing (fog) are becoming more and more popular, so this is
Edgent's opportunity. In China, many big IT companies are focusing on IoT
and edge computing.

Do the old committers no longer pay close attention to the project?

Vino yang
Thanks

2018-03-20 17:28 GMT+08:00 vino yang <yanghua1...@gmail.com>:

> Hi Chris,
>
> My background is BigData (MapReduce, Spark, Flink, Kafka Stream) those
> data processing frameworks all provide the groupBy / keyBy operation and
> aggregation operator. It comes from traditional RDBMS. Edgent like a single
> JVM's Flink / Kafka Stream works on edges(gateway or IoT).
> They are similar with each other in some ways.
>
> Vino yang.
> Thanks!
>
> 2018-03-20 17:05 GMT+08:00 Christofer Dutz <christofer.d...@c-ware.de>:
>
>> Hi Vino,
>>
>> unfortunately I can't contribute any opinion on this as I don't yet
>> understand the consequences.
>> I know that in an asynchronous event processing system some operations
>> that might be useful have to be sacrificed for the sake of asynchronicity.
>>
>> For me Kafka Stream sort of feeling like the cloud-brother of Edgent, it
>> does seem to support groupBy.
>>
>> Would be really cool if some of the formerly active people could at least
>> leave some comments on questions like this. You don't have to actually work
>> on things, but giving us new guys some guidance would be awesome.
>>
>> I don't want to ruin thing you built over years, just because I'm not
>> that into the topic ... yet.
>>
>> Chris
>>
>>
>>
>>
>> Am 16.03.18, 13:02 schrieb "Christofer Dutz" <christofer.d...@c-ware.de
>> >:
>>
>> I'm currently at a conference, so I can't be as responsive as I used
>> to be ... All will be back to normal next Tuesday ;-)
>>
>> Chris
>>
>> Outlook for Android<https://aka.ms/ghei36> herunterladen
>>
>> 
>> From: vino yang <yanghua1...@gmail.com>
>> Sent: Friday, March 16, 2018 2:26:10 AM
>> To: dev@edgent.apache.org
>> Subject: Re: [disscuss] make TStream support groupBy operator?
>>
>> Hi all,
>>
>> Anyone can give some opinion? Chris ? I think we should support some
>> reduce
>> operation(aggregation function, such as max / avg / min sum) for both
>> stream and windowed stream, these features based on the keyBy or
>> groupBy
>> operation.
>>
>> Vino yang
>> Thanks!
>>
>> 2018-03-13 12:52 GMT+08:00 vino yang <yanghua1...@gmail.com>:
>>
>> > Hi guys,
>> >
>> > Does Edgent current support groupBy operator?
>> >
>> > Vino yang
>> > Thanks.
>> >
>>
>>
>>
>


Re: [disscuss] make TStream support groupBy operator?

2018-03-20 Thread vino yang
Hi Chris,

My background is big data (MapReduce, Spark, Flink, Kafka Streams); those
data processing frameworks all provide the groupBy / keyBy operation and
aggregation operators, a concept that comes from the traditional RDBMS.
Edgent is like a single-JVM Flink / Kafka Streams that works on edges
(gateways or IoT devices). They are similar to each other in some ways.

Vino yang.
Thanks!

2018-03-20 17:05 GMT+08:00 Christofer Dutz <christofer.d...@c-ware.de>:

> Hi Vino,
>
> unfortunately I can't contribute any opinion on this as I don't yet
> understand the consequences.
> I know that in an asynchronous event processing system some operations
> that might be useful have to be sacrificed for the sake of asynchronicity.
>
> For me Kafka Stream sort of feeling like the cloud-brother of Edgent, it
> does seem to support groupBy.
>
> Would be really cool if some of the formerly active people could at least
> leave some comments on questions like this. You don't have to actually work
> on things, but giving us new guys some guidance would be awesome.
>
> I don't want to ruin thing you built over years, just because I'm not that
> into the topic ... yet.
>
> Chris
>
>
>
>
> Am 16.03.18, 13:02 schrieb "Christofer Dutz" <christofer.d...@c-ware.de>:
>
> I'm currently at a conference, so I can't be as responsive as I used
> to be ... All will be back to normal next Tuesday ;-)
>
> Chris
>
>     Outlook for Android<https://aka.ms/ghei36> herunterladen
>
> 
> From: vino yang <yanghua1...@gmail.com>
> Sent: Friday, March 16, 2018 2:26:10 AM
> To: dev@edgent.apache.org
> Subject: Re: [disscuss] make TStream support groupBy operator?
>
> Hi all,
>
> Anyone can give some opinion? Chris ? I think we should support some
> reduce
> operation(aggregation function, such as max / avg / min sum) for both
> stream and windowed stream, these features based on the keyBy or
> groupBy
> operation.
>
> Vino yang
>     Thanks!
>
> 2018-03-13 12:52 GMT+08:00 vino yang <yanghua1...@gmail.com>:
>
> > Hi guys,
> >
> > Does Edgent current support groupBy operator?
> >
> > Vino yang
> > Thanks.
> >
>
>
>


Re: [disscuss] make TStream support groupBy operator?

2018-03-15 Thread vino yang
Hi all,

Can anyone give an opinion? Chris? I think we should support some reduce
operations (aggregation functions such as max / avg / min / sum) for both
streams and windowed streams; these features build on a keyBy or groupBy
operation.

Vino yang
Thanks!

2018-03-13 12:52 GMT+08:00 vino yang <yanghua1...@gmail.com>:

> Hi guys,
>
> Does Edgent current support groupBy operator?
>
> Vino yang
> Thanks.
>


Re: [discuss] What about splitting the kafka Connector into kafka 0.8 and 0.9?

2018-03-14 Thread vino yang
Hi Chris,

the version upgrades and the main features affecting the produce/consume
API are listed below:


   - 0.8.2.x -> 0.9+ : offsets/positions can be stored by the Kafka server
   itself; Kerberos and TLS authentication; no need to specify the ZooKeeper
   servers
   - 0.9.x -> 0.10.x : lets the consumer cap the records returned per poll,
   via the new "max.poll.records" config item
   - 0.10.x -> 0.11.x : supports an exactly-once producer (with the
   *beginTransaction*/*commitTransaction* API)
   - 0.11.x -> 1.x : seems to add no specific feature which affects the API

Vino yang
Thanks.
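The upgrade mapping above can be captured as a small lookup table, like the one a version check might consult. The feature labels below are illustrative descriptions, not real Kafka API identifiers:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class KafkaFeatureVersions {
    // Minimum Kafka version for each feature from the list above.
    static final Map<String, String> MIN_VERSION = new LinkedHashMap<>();
    static {
        MIN_VERSION.put("server-side offset storage / Kerberos / TLS", "0.9.0");
        MIN_VERSION.put("max.poll.records consumer config", "0.10.0");
        MIN_VERSION.put("transactional (exactly-once) producer", "0.11.0");
    }

    // Compare dotted version strings numerically, segment by segment.
    static int compare(String a, String b) {
        String[] x = a.split("\\."), y = b.split("\\.");
        for (int i = 0; i < Math.max(x.length, y.length); i++) {
            int xi = i < x.length ? Integer.parseInt(x[i]) : 0;
            int yi = i < y.length ? Integer.parseInt(y[i]) : 0;
            if (xi != yi) return Integer.compare(xi, yi);
        }
        return 0;
    }

    static boolean supports(String serverVersion, String feature) {
        return compare(serverVersion, MIN_VERSION.get(feature)) >= 0;
    }

    public static void main(String[] args) {
        System.out.println(supports("0.8.2", "max.poll.records consumer config"));      // false
        System.out.println(supports("1.1.0", "transactional (exactly-once) producer")); // true
    }
}
```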

2018-03-14 9:27 GMT+08:00 vino yang <yanghua1...@gmail.com>:

> Hi Chris,
>
> No objections about this approach. Good division of the work. I will
> provide the mapping of Kafka version and the specified feature later.
>
> Vino yang
> Thanks.
>
> 2018-03-13 20:11 GMT+08:00 Christofer Dutz <christofer.d...@c-ware.de>:
>
>> Well I have implemented something like the Version checking before, so I
>> would opt to take care of that.
>>
>> I would define an Annotation with an optional "from" and "to" version ...
>> you could use that
>> I would need something that provides the version of the server from your
>> side.
>>
>> With this I would then implement an Aspect that intercepts these calls,
>> does the check and eventually throws Exceptions with a message what the
>> minimum or maximum version for a feature would be.
>>
>> I would use a compile-time weaver as this does not add any more
>> dependencies or setup complexity to the construct.
>>
>> Any objections to this approach?
>>
>> Chris
>>
>>
>> Am 13.03.18, 03:06 schrieb "vino yang" <yanghua1...@gmail.com>:
>>
>> Hi Chris,
>>
>> It looks like a good idea. I think to finish this job, we can split
>> it into
>> three sub tasks:
>>
>>- upgrade kafka version to 1.x and test it to match the 0.8.x
>>    connector's function and behavior;
>>    - Sorting out and defining the annotation which contains different
>> kafka
>>version and features
>>- expose the new feature's API to user and check with annotation
>>
>> What's your opinion?
>>
>>
>> 2018-03-12 21:00 GMT+08:00 Christofer Dutz <christofer.d...@c-ware.de
>> >:
>>
>> > Don't know if this would be an option:
>> >
>> > If we defined and used a Java annotation which defines what
>> Kafka-Version
>> > a feature is available from (or up to which version it is
>> supported) and
>> > then we could do quick checks that compare the current version with
>> the
>> > annotations on the methods we call. I think this type of check
>> should be
>> > quite easy to understand and we wouldn't have to build, maintain,
>> test,
>> > document etc. loads of separate modules.
>> >
>> > Chris
>> >
>> >
>> >
>> > Am 12.03.18, 13:30 schrieb "vino yang" <yanghua1...@gmail.com>:
>> >
>> > Hi Chris,
>> >
>> > OK, Hope for listening someone's opinion.
>> >
>> > Vino yang.
>> >
>> > 2018-03-12 20:23 GMT+08:00 Christofer Dutz <
>> christofer.d...@c-ware.de
>> > >:
>> >
>> > > Hi Vino,
>> > >
>> > > please don't interpret my opinion as some official project
>> decision.
>> > > For discussions like this I would definitely prefer to hear
>> the
>> > opinions
>> > > of others in the project.
>> > > Perhaps having a new client API and having compatibility
>> layers
>> > inside the
>> > > connector would be another option.
>> > > So per default the compatibility level of the Kafka client
>> lib would
>> > be
>> > > used but a developer could explicitly choose
>> > > older compatibility levels, where we have taken care of the
>> work to
>> > decide
>> > > what works and what doesn't.
>> > >
>> > > Chris
>> > >
>> > >
>> > >
>> > > Am 12.03.18, 13:07 schrieb "vino yang" <yanghua1...@gmail.com
>> >:
>> > >
>> > > Hi Chris,
>> >   

Re: [discuss] What about splitting the kafka Connector into kafka 0.8 and 0.9?

2018-03-13 Thread vino yang
Hi Chris,

No objections to this approach. Good division of the work. I will
provide the mapping between Kafka versions and their features later.

Vino yang
Thanks.

2018-03-13 20:11 GMT+08:00 Christofer Dutz <christofer.d...@c-ware.de>:

> Well I have implemented something like the Version checking before, so I
> would opt to take care of that.
>
> I would define an Annotation with an optional "from" and "to" version ...
> you could use that
> I would need something that provides the version of the server from your
> side.
>
> With this I would then implement an Aspect that intercepts these calls,
> does the check and eventually throws Exceptions with a message what the
> minimum or maximum version for a feature would be.
>
> I would use a compile-time weaver as this does not add any more
> dependencies or setup complexity to the construct.
>
> Any objections to this approach?
>
> Chris
>
>
> Am 13.03.18, 03:06 schrieb "vino yang" <yanghua1...@gmail.com>:
>
> Hi Chris,
>
> It looks like a good idea. I think to finish this job, we can split it
> into
> three sub tasks:
>
>- upgrade kafka version to 1.x and test it to match the 0.8.x
>connector's function and behaivor;
>- Carding and defining the annotation which contains different kafka
>version and features
>- expose the new feature's API to user and check with annotation
>
> What's your opinion?
>
>
> 2018-03-12 21:00 GMT+08:00 Christofer Dutz <christofer.d...@c-ware.de
> >:
>
> > Don't know if this would be an option:
> >
> > If we defined and used a Java annotation which defines what
> Kafka-Version
> > a feature is available from (or up to which version it is supported)
> and
> > then we could do quick checks that compare the current version with
> the
> > annotations on the methods we call. I think this type of check
> should be
>     > quite easy to understand and we wouldn't have to build, maintain,
> test,
> > document etc. loads of separate modules.
> >
> > Chris
> >
> >
> >
> > Am 12.03.18, 13:30 schrieb "vino yang" <yanghua1...@gmail.com>:
> >
> > Hi Chris,
> >
> > OK, Hope for listening someone's opinion.
> >
> > Vino yang.
> >
> > 2018-03-12 20:23 GMT+08:00 Christofer Dutz <
> christofer.d...@c-ware.de
> > >:
> >
> > > Hi Vino,
> > >
> > > please don't interpret my opinion as some official project
> decision.
> > > For discussions like this I would definitely prefer to hear the
> > opinions
> > > of others in the project.
> > > Perhaps having a new client API and having compatibility layers
> > inside the
> > > connector would be another option.
> > > So per default the compatibility level of the Kafka client lib
> would
> > be
> > > used but a developer could explicitly choose
> > > older compatibility levels, where we have taken care of the
> work to
> > decide
> > > what works and what doesn't.
> > >
> > > Chris
> > >
> > >
> > >
> > > Am 12.03.18, 13:07 schrieb "vino yang" <yanghua1...@gmail.com
> >:
> > >
> > > Hi Chris,
> > >
> > > In some ways, I argee with you. Though kafka API has the
> > > compatibility. But
> > >
> > >
> > >- old API + higher server version : this mode would
> miss some
> > key
> > > new
> > >feature.
> > >- new API + older server version : this mode, users are
> in a
> > puzzle
> > >about which feature they could use and which could not.
> Also,
> > new
> > > API will
> > >do more logic judgement and something else (which cause
> > performance
> > > cost)
> > >for backward compatibility.
> > >
> > > I think it's the main reason that other framework split
> > different kafka
> > > connector with versions.
> > >
> > > Anyway, I will respect your decision. Can I claim 
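The annotation-plus-aspect proposal quoted above can be sketched in plain Java. Everything here is hypothetical (the annotation name, the version bounds, the Consumer interface); the thread suggests AspectJ compile-time weaving, but a JDK dynamic proxy stands in for the aspect so the sketch stays self-contained:

```java
import java.lang.annotation.*;
import java.lang.reflect.*;

public class VersionGate {

    // Hypothetical annotation with optional "from"/"to" Kafka versions,
    // as proposed in the thread.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface KafkaVersion {
        String from() default "0.0.0";
        String to() default "999.999.999";
    }

    // Compare dotted version strings numerically, e.g. "0.10.2" < "1.1.0".
    static int compare(String a, String b) {
        String[] x = a.split("\\."), y = b.split("\\.");
        for (int i = 0; i < Math.max(x.length, y.length); i++) {
            int xi = i < x.length ? Integer.parseInt(x[i]) : 0;
            int yi = i < y.length ? Integer.parseInt(y[i]) : 0;
            if (xi != yi) return Integer.compare(xi, yi);
        }
        return 0;
    }

    // Hypothetical connector-facing interface with one gated feature.
    public interface Consumer {
        @KafkaVersion(from = "0.9.0")
        void subscribe(String topic);
    }

    // Wrap an implementation so every annotated call is version-checked;
    // an AspectJ aspect would do the same interception at compile time.
    public static Consumer gate(Consumer target, String brokerVersion) {
        return (Consumer) Proxy.newProxyInstance(
            Consumer.class.getClassLoader(), new Class<?>[]{Consumer.class},
            (proxy, method, args) -> {
                KafkaVersion v = method.getAnnotation(KafkaVersion.class);
                if (v != null && (compare(brokerVersion, v.from()) < 0
                               || compare(brokerVersion, v.to()) > 0)) {
                    throw new UnsupportedOperationException(method.getName()
                        + " needs Kafka " + v.from() + ".." + v.to()
                        + ", broker is " + brokerVersion);
                }
                return method.invoke(target, args);
            });
    }
}
```

As in the proposal, callers get an exception naming the minimum or maximum version for the feature, and the version-to-feature mapping lives only in the annotations.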

[discuss] make TStream support groupBy operator?

2018-03-12 Thread vino yang
Hi guys,

Does Edgent currently support a groupBy operator?

Vino yang
Thanks.
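For discussion, the semantics such an operator would provide — partitioning a stream's tuples by a key function — can be sketched with plain java.util.stream. This is an illustration of the intended behavior only, not Edgent's TStream API:

```java
import java.util.*;
import java.util.function.Function;
import java.util.stream.Collectors;

public class GroupBySketch {
    // What a hypothetical TStream<T>.groupBy(keyFn) could mean:
    // partition the tuples seen so far by the key the function extracts.
    public static <T, K> Map<K, List<T>> groupBy(List<T> tuples,
                                                 Function<T, K> keyFn) {
        return tuples.stream().collect(Collectors.groupingBy(keyFn));
    }
}
```

On an unbounded stream the same idea would need a window (group within the last N tuples or the last N seconds) rather than grouping the whole stream.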


Re: [discuss] What about splitting the kafka Connector into kafka 0.8 and 0.9?

2018-03-12 Thread vino yang
Hi Chris,

It looks like a good idea. I think to finish this job, we can split it into
three subtasks:

   - upgrade the Kafka version to 1.x and test that it matches the 0.8.x
   connector's function and behavior;
   - sort out and define the annotation that maps Kafka versions to their
   features;
   - expose the new features' API to users and check it against the annotation.

What's your opinion?


2018-03-12 21:00 GMT+08:00 Christofer Dutz <christofer.d...@c-ware.de>:

> Don't know if this would be an option:
>
> If we defined and used a Java annotation which defines what Kafka-Version
> a feature is available from (or up to which version it is supported) and
> then we could do quick checks that compare the current version with the
> annotations on the methods we call. I think this type of check should be
> quite easy to understand and we wouldn't have to build, maintain, test,
> document etc. loads of separate modules.
>
> Chris
>
>
>
> Am 12.03.18, 13:30 schrieb "vino yang" <yanghua1...@gmail.com>:
>
>     Hi Chris,
>
> OK, Hope for listening someone's opinion.
>
> Vino yang.
>
> 2018-03-12 20:23 GMT+08:00 Christofer Dutz <christofer.d...@c-ware.de
> >:
>
> > Hi Vino,
> >
> > please don't interpret my opinion as some official project decision.
> > For discussions like this I would definitely prefer to hear the
> opinions
> > of others in the project.
> > Perhaps having a new client API and having compatibility layers
> inside the
> > connector would be another option.
> > So per default the compatibility level of the Kafka client lib would
> be
> > used but a developer could explicitly choose
> > older compatibility levels, where we have taken care of the work to
> decide
> > what works and what doesn't.
> >
> > Chris
> >
> >
> >
> > Am 12.03.18, 13:07 schrieb "vino yang" <yanghua1...@gmail.com>:
> >
> > Hi Chris,
> >
> > In some ways, I argee with you. Though kafka API has the
> > compatibility. But
> >
> >
> >- old API + higher server version : this mode would miss some
> key
> > new
> >feature.
> >- new API + older server version : this mode, users are in a
> puzzle
> >about which feature they could use and which could not. Also,
> new
> > API will
> >do more logic judgement and something else (which cause
> performance
> > cost)
> >for backward compatibility.
> >
> > I think it's the main reason that other framework split
> different kafka
> > connector with versions.
> >
> > Anyway, I will respect your decision. Can I claim this task about
> > upgrading
> > the kafka client's version to 1.x?
> >
> >
> > 2018-03-12 16:30 GMT+08:00 Christofer Dutz <
> christofer.d...@c-ware.de
> > >:
> >
> > > Hi Vino,
> > >
> > > I would rather go a different path. I talked to some Kafka
> pros and
> > they
> > > sort of confirmed my gut-feeling.
> > > The greatest changes to Kafka have been in the layers behind
> the API
> > > itself. The API seems to have been designed with backward
> > compatibility in
> > > mind.
> > > That means you can generally use a newer API with an older
> broker as
> > well
> > > as use a new broker with an older API (This is probably even
> the
> > safer way
> > > around). As soon as you try to do something with the API which
> your
> > broker
> > > doesn't support, you get error messages.
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/
> > Compatibility+Matrix
> > >
> > > I would rather update the existing connector to a newer Kafka
> > version ...
> > > 0.8.2.2 is quite old and we should update to a version of at
> least
> > 0.10.0
> > > (I would prefer a 1.x) and stick with that. I doubt many will
> be
> > using an
> > > ancient 0.8.2 version (09.09.2015). And everything starting
> with
> > 0.10.x
> > > should be interchangeable.
> >     >
> > > I wouldn't like to have yet another project maintaining a Zoo
> of
> > a

Re: [discuss] What about splitting the kafka Connector into kafka 0.8 and 0.9?

2018-03-12 Thread vino yang
Hi Chris,

OK, I hope to hear others' opinions.

Vino yang.

2018-03-12 20:23 GMT+08:00 Christofer Dutz <christofer.d...@c-ware.de>:

> Hi Vino,
>
> please don't interpret my opinion as some official project decision.
> For discussions like this I would definitely prefer to hear the opinions
> of others in the project.
> Perhaps having a new client API and having compatibility layers inside the
> connector would be another option.
> So per default the compatibility level of the Kafka client lib would be
> used but a developer could explicitly choose
> older compatibility levels, where we have taken care of the work to decide
> what works and what doesn't.
>
> Chris
>
>
>
> Am 12.03.18, 13:07 schrieb "vino yang" <yanghua1...@gmail.com>:
>
> Hi Chris,
>
> In some ways, I argee with you. Though kafka API has the
> compatibility. But
>
>
>- old API + higher server version : this mode would miss some key
> new
>feature.
>- new API + older server version : this mode, users are in a puzzle
>about which feature they could use and which could not. Also, new
> API will
>do more logic judgement and something else (which cause performance
> cost)
>for backward compatibility.
>
> I think it's the main reason that other framework split different kafka
> connector with versions.
>
> Anyway, I will respect your decision. Can I claim this task about
> upgrading
> the kafka client's version to 1.x?
>
>
> 2018-03-12 16:30 GMT+08:00 Christofer Dutz <christofer.d...@c-ware.de
> >:
>
> > Hi Vino,
> >
> > I would rather go a different path. I talked to some Kafka pros and
> they
> > sort of confirmed my gut-feeling.
> > The greatest changes to Kafka have been in the layers behind the API
> > itself. The API seems to have been designed with backward
> compatibility in
> > mind.
> > That means you can generally use a newer API with an older broker as
> well
> > as use a new broker with an older API (This is probably even the
> safer way
> > around). As soon as you try to do something with the API which your
> broker
> > doesn't support, you get error messages.
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/
> Compatibility+Matrix
> >
> > I would rather update the existing connector to a newer Kafka
> version ...
> > 0.8.2.2 is quite old and we should update to a version of at least
> 0.10.0
> > (I would prefer a 1.x) and stick with that. I doubt many will be
> using an
> > ancient 0.8.2 version (09.09.2015). And everything starting with
> 0.10.x
> > should be interchangeable.
> >
> > I wouldn't like to have yet another project maintaining a Zoo of
> adapters
> > for Kafka.
> >
> > Eventually a Kafka-Streams client would make sense though ... to
> sort of
> > extend the Edgent streams from the edge to the Kafka cluster.
> >
> > Chris
> >
> >
> >
> > Am 12.03.18, 03:41 schrieb "vino yang" <yanghua1...@gmail.com>:
> >
> > Hi guys,
> >
> > How about this idea, I think we should support kafka's new
> client API.
> >
> > 2018-03-04 15:10 GMT+08:00 vino yang <yanghua1...@gmail.com>:
> >
> > > The reason is that Kafka 0.9+ provided a new consumer API
> which has
> > more
> > > features and better performance.
> > >
> > > Just like Flink's implementation : https://github.com/apache/
> > > flink/tree/master/flink-connectors.
> > >
> > > vinoyang
> > > Thanks.
> > >
> > >
> >
> >
> >
>
>
>


Re: [discuss] What about splitting the kafka Connector into kafka 0.8 and 0.9?

2018-03-12 Thread vino yang
Hi Chris,

In some ways, I agree with you. The Kafka API is backward compatible, but:


   - old API + newer server version: this mode would miss some key new
   features.
   - new API + older server version: in this mode, users are puzzled about
   which features they can use and which they cannot. Also, the new API does
   extra logic and checks (which costs performance) for backward
   compatibility.

I think that's the main reason other frameworks split the Kafka connector
by version.

Anyway, I will respect your decision. Can I claim the task of upgrading
the Kafka client's version to 1.x?


2018-03-12 16:30 GMT+08:00 Christofer Dutz <christofer.d...@c-ware.de>:

> Hi Vino,
>
> I would rather go a different path. I talked to some Kafka pros and they
> sort of confirmed my gut-feeling.
> The greatest changes to Kafka have been in the layers behind the API
> itself. The API seems to have been designed with backward compatibility in
> mind.
> That means you can generally use a newer API with an older broker as well
> as use a new broker with an older API (This is probably even the safer way
> around). As soon as you try to do something with the API which your broker
> doesn't support, you get error messages.
>
> https://cwiki.apache.org/confluence/display/KAFKA/Compatibility+Matrix
>
> I would rather update the existing connector to a newer Kafka version ...
> 0.8.2.2 is quite old and we should update to a version of at least 0.10.0
> (I would prefer a 1.x) and stick with that. I doubt many will be using an
> ancient 0.8.2 version (09.09.2015). And everything starting with 0.10.x
> should be interchangeable.
>
> I wouldn't like to have yet another project maintaining a Zoo of adapters
> for Kafka.
>
> Eventually a Kafka-Streams client would make sense though ... to sort of
> extend the Edgent streams from the edge to the Kafka cluster.
>
> Chris
>
>
>
> Am 12.03.18, 03:41 schrieb "vino yang" <yanghua1...@gmail.com>:
>
> Hi guys,
>
> How about this idea, I think we should support kafka's new client API.
>
> 2018-03-04 15:10 GMT+08:00 vino yang <yanghua1...@gmail.com>:
>
> > The reason is that Kafka 0.9+ provided a new consumer API which has
> more
> > features and better performance.
> >
> > Just like Flink's implementation : https://github.com/apache/
> > flink/tree/master/flink-connectors.
> >
> > vinoyang
> > Thanks.
> >
> >
>
>
>


Re: [discuss] What about splitting the kafka Connector into kafka 0.8 and 0.9?

2018-03-11 Thread vino yang
Hi guys,

What do you think of this idea? I believe we should support Kafka's new client API.

2018-03-04 15:10 GMT+08:00 vino yang <yanghua1...@gmail.com>:

> The reason is that Kafka 0.9+ provided a new consumer API which has more
> features and better performance.
>
> Just like Flink's implementation : https://github.com/apache/
> flink/tree/master/flink-connectors.
>
> vinoyang
> Thanks.
>
>


Re: Rename "iotp" to "iot-ibm" or similar?

2018-03-11 Thread vino yang
+1. I think we could also support EdgeX Foundry in the future.

2018-03-11 20:00 GMT+08:00 Christofer Dutz :

> Hi all,
>
> I am currently thinking of adding modules to support other IoT Platforms:
>
>   *   Google IoT
>   *   AWS IoT
>   *   Siemens MindSphere
>
> For that the “iotp” sort of doesn’t quite fit as it’s not just “one” IoT
> platform.
> So would you be ok with renaming that to something like: “iot-ibm” or
> similar?
>
> Then we’d have:
>
>   *   Iot-ibm
>   *   Iot-aws
>   *   Iot-google
>   *   Iot-mindsphere/siemens
>
> Chris
>


Re: Can I Contribute a RabbitMQ connector?

2018-03-10 Thread vino yang
Hi Christofer Dutz,

Is there any progress on this issue?

vino yang
Thanks.


2018-03-05 9:33 GMT+08:00 vino yang <yanghua1...@gmail.com>:

> Hi Christofer Dutz,
>
> That's all right. I think the RabbitMQ's licenses would not be a problem.
> There are many open source systems and libraries which have integrated with
> it.
>
> RabbitMQ can support the MQTT with adapter (plugin), it would be good for
> the IoT devices. More detail : http://www.rabbitmq.com/mqtt.html
>
> I hope the connector would be added to edgent to enhance it's
> connectivity.
>
> vinoyang
> Thanks.
>
> 2018-03-04 20:35 GMT+08:00 Christofer Dutz <christofer.d...@c-ware.de>:
>
>> Hi Vinoyang,
>>
>> I had a look at your pull-requests (well the gitignore one is quite
>> trivial and hence very easy to review ;))
>>
>> There was one thing that I wanted to double-check, but the used RabitMQ
>> client library is triple licensed, one of them being Apache 2.0, so that's
>> perfect.
>>
>> Unfortunately I'm not yet that deep into the details of Edgent, so I
>> would like to ask Dale or someone who knows the inner workings a little
>> more than me to re-check and do the merge.
>>
>> Chris
>>
>>
>>
>> Am 26.02.18, 07:22 schrieb "vino yang" <yanghua1...@gmail.com>:
>>
>> Hi christofer.dutz:
>>
>> Thanks for your reply! I will do this task.
>>
>> vinoyang
>>
>> 2018-02-26 14:16 GMT+08:00 Christofer Dutz <christofer.d...@c-ware.de
>> >:
>>
>> > Sorry,
>> >
>> > I was traveling yesterday and sick the days before.
>> > Well I did have a look and I do agree that a rabbitmq connector
>> could be a
>> > valuable addition. As long as the needed libraries have Apache
>> compatible
>> > licensees.
>> >
>> > Chris
>> >
>> > Outlook for Android<https://aka.ms/ghei36> herunterladen
>> >
>> > 
>> > From: vino yang <yanghua1...@gmail.com>
>> > Sent: Monday, February 26, 2018 4:36:23 AM
>> > To: dev@edgent.apache.org
>> > Subject: Re: Can I Contribute a RabbitMQ connector?
>> >
>> > No one can give some comment?
>> >
>> > 2018-02-25 15:39 GMT+08:00 vino yang <yanghua1...@gmail.com>:
>> >
>> > > Hi :
>> > > I find there is no one RabbitMQ connector for edgent. I want to
>> > contribute
>> > > a RabbitMQ connector. How about this idea? Is there someone
>> working for
>> > > this?
>> > >
>> > > Thanks!
>> > > vinoyang
>> > >
>> >
>>
>>
>>
>


Re: Can I Contribute a RabbitMQ connector?

2018-03-04 Thread vino yang
Hi Christofer Dutz,

That's all right. I think RabbitMQ's licenses would not be a problem.
There are many open source systems and libraries that have integrated with
it.

RabbitMQ can support MQTT via an adapter (plugin), which would be good for
IoT devices. More detail: http://www.rabbitmq.com/mqtt.html

I hope the connector will be added to Edgent to enhance its connectivity.

vinoyang
Thanks.

2018-03-04 20:35 GMT+08:00 Christofer Dutz <christofer.d...@c-ware.de>:

> Hi Vinoyang,
>
> I had a look at your pull-requests (well the gitignore one is quite
> trivial and hence very easy to review ;))
>
> There was one thing that I wanted to double-check, but the used RabitMQ
> client library is triple licensed, one of them being Apache 2.0, so that's
> perfect.
>
> Unfortunately I'm not yet that deep into the details of Edgent, so I would
> like to ask Dale or someone who knows the inner workings a little more than
> me to re-check and do the merge.
>
> Chris
>
>
>
> Am 26.02.18, 07:22 schrieb "vino yang" <yanghua1...@gmail.com>:
>
> Hi christofer.dutz:
>
> Thanks for your reply! I will do this task.
>
> vinoyang
>
> 2018-02-26 14:16 GMT+08:00 Christofer Dutz <christofer.d...@c-ware.de
> >:
>
> > Sorry,
> >
> > I was traveling yesterday and sick the days before.
> > Well I did have a look and I do agree that a rabbitmq connector
> could be a
> > valuable addition. As long as the needed libraries have Apache
> compatible
> > licensees.
> >
> > Chris
> >
> > Outlook for Android<https://aka.ms/ghei36> herunterladen
> >
> > 
> > From: vino yang <yanghua1...@gmail.com>
> > Sent: Monday, February 26, 2018 4:36:23 AM
> > To: dev@edgent.apache.org
> > Subject: Re: Can I Contribute a RabbitMQ connector?
> >
> > No one can give some comment?
> >
> > 2018-02-25 15:39 GMT+08:00 vino yang <yanghua1...@gmail.com>:
> >
> > > Hi :
> > > I find there is no one RabbitMQ connector for edgent. I want to
> > contribute
> > > a RabbitMQ connector. How about this idea? Is there someone
> working for
> > > this?
> > >
> > > Thanks!
> > > vinoyang
> > >
> >
>
>
>


[discuss] What about splitting the kafka Connector into kafka 0.8 and 0.9?

2018-03-03 Thread vino yang
The reason is that Kafka 0.9+ provided a new consumer API which has more
features and better performance.

Just like Flink's implementation:
https://github.com/apache/flink/tree/master/flink-connectors.

vinoyang
Thanks.
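For context, the "new consumer API" introduced in 0.9 replaces the old high-level consumer and SimpleConsumer with a single KafkaConsumer class. A minimal sketch, assuming the org.apache.kafka:kafka-clients dependency on the classpath and a 1.x client (where poll still takes a timeout in milliseconds); the broker address, group id, and topic name are placeholders:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class NewConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "edgent-demo");             // placeholder
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer =
                 new KafkaConsumer<>(props)) {
            // Broker-side group management via subscribe() is one of the
            // new-API features; the 0.8 high-level consumer balanced
            // partitions through ZooKeeper instead.
            consumer.subscribe(Collections.singletonList("sensor-events"));
            ConsumerRecords<String, String> records = consumer.poll(1000L);
            for (ConsumerRecord<String, String> r : records) {
                System.out.println(r.offset() + ": " + r.value());
            }
        }
    }
}
```

Running it requires a reachable broker, so it is a sketch of the API shape rather than a self-contained demo.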


Re: Can I Contribute a RabbitMQ connector?

2018-02-25 Thread vino yang
Can anyone give some comments?

2018-02-25 15:39 GMT+08:00 vino yang <yanghua1...@gmail.com>:

> Hi :
> I find there is no one RabbitMQ connector for edgent. I want to contribute
> a RabbitMQ connector. How about this idea? Is there someone working for
> this?
>
> Thanks!
> vinoyang
>


Can I Contribute a RabbitMQ connector?

2018-02-24 Thread vino yang
Hi:
I find there is no RabbitMQ connector for Edgent. I want to contribute
one. What do you think of this idea? Is anyone already working on
this?

Thanks!
vinoyang
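The client side such a connector would wrap can be sketched with the RabbitMQ Java client (assuming the com.rabbitmq:amqp-client 5.x dependency; the host and queue name are placeholders):

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.nio.charset.StandardCharsets;

public class RabbitSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // placeholder broker address
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            // Declare a non-durable queue and publish one message to it,
            // roughly what a sink-side connector would do per tuple.
            channel.queueDeclare("edgent-demo", false, false, false, null);
            channel.basicPublish("", "edgent-demo", null,
                "hello".getBytes(StandardCharsets.UTF_8));
            // A source-side connector would instead call
            // channel.basicConsume(...) with a DeliverCallback and feed
            // each delivery into the stream as a tuple.
        }
    }
}
```

Running it requires a reachable RabbitMQ broker, so this shows only the API shape a connector would build on.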


Re: request contribute permission

2018-02-24 Thread vino yang
Hi Justin Mclean:

OK, I know.

Thanks!

2018-02-25 13:10 GMT+08:00 Justin Mclean :

> Hi,
>
> > You mean I do not need jira's "Assign" permission?
>
> No but if you need that we can give you that without being committer.
>
> > Just sending discussions
> > to mailing list, if the community accepts my idea, I could contribute my
> > code then send my PR to apache/edgent repository?
>
> Yes that’s the best way to go about it. If you need a hand with anything
> just ask and someone on this list should be able to help.
>
> Thanks,
> Justin


Re: request contribute permission

2018-02-24 Thread vino yang
Hi Justin Mclean:

You mean I do not need JIRA's "Assign" permission? I just send discussions
to the mailing list, and if the community accepts my idea, I contribute my
code and send my PR to the apache/edgent repository?




2018-02-25 12:12 GMT+08:00 Justin Mclean :

> Hi,
>
>
> > Hi, I am an engineer working in Tencent bigdata department. I am
> interested
> > in Apache edgent. I want to request contribute permission, Thanks.
>
> Thanks for your interest in the project.
>
> In order to become a committer you need to contribute to the project with
> a few pull requests or some other contribution. [1] A good start is by
> involved in discussions on the mailing list.
>
> Thanks,
> Justin
>
> 1. https://cwiki.apache.org/confluence/display/EDGENT/Committers
>
>