Hi All

I am trying to code a solution for the following scenarios:

1. I have a stream of Tuples with multiple numeric fields (e.g. A, B, C,
D, E, etc.)
2. I want the ability to do different Windowing and Aggregation on each
field or a group of fields in the Tuple, e.g. Sum A over a Period of 2
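A sketch of what one such per-field aggregation could look like with Beam's Java SDK, assuming a fixed 2-minute window; `Tuple` and its `getA()` accessor are hypothetical stand-ins for the record type described above:

```java
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.SimpleFunction;
import org.apache.beam.sdk.transforms.Sum;
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.PCollection;
import org.joda.time.Duration;

// Hypothetical: Tuple is a stand-in for the multi-field record type.
PCollection<Tuple> tuples = ...;

PCollection<Double> sumOfA = tuples
    // Assign each Tuple to a fixed 2-minute window.
    .apply(Window.<Tuple>into(FixedWindows.of(Duration.standardMinutes(2))))
    // Project out field A (getA() is an assumed accessor).
    .apply("ExtractA", MapElements.via(new SimpleFunction<Tuple, Double>() {
      @Override
      public Double apply(Tuple t) { return t.getA(); }
    }))
    // Sum per window; withoutDefaults() avoids emitting a default
    // value for windows that receive no elements.
    .apply(Sum.doublesGlobally().withoutDefaults());
```

A second aggregation over field B with a different window could be built the same way as a separate branch off the same `tuples` collection.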
>
> If you could try out the latest version, that would be great. If not,
> we can probably merge your PR and think about a minor release.
>
> Best,
> Max
>
> On Fri, Sep 16, 2016 at 6:10 AM, Chawla,Sumit <sumitkcha...@gmail.com>
> wrote:
> > Hi Max
> >
ime.taskmanager.Task.run(Task.java:584)
> > at java.lang.Thread.run(Thread.java:745)
> >
> >
> > Regards
> > Sumit Chawla
> >
> >
> > On Tue, Sep 13, 2016 at 2:20 PM, Chawla,Sumit <sumitkcha...@gmail.com>
> > wrote:
> >
> >> Hi A
Hi All

The release-0.2.0-incubating supports Flink 1.0.3. With Flink 1.1.0 out, is
there a plan to support it with a 0.2.0 patch? I tried compiling 0.2.0
with Flink 1.1.0 and got a couple of compilation errors in
FlinkGroupAlsoByWindowWrapper.java. Going back to master, I see lots of
changes in
> > ... capable of
> > handling Unbounded Pipelines. Is it possible for you to upgrade?
> >
> > On Wed, Aug 31, 2016 at 5:17 PM, Chawla,Sumit <sumitkcha...@gmail.com>
> > wrote:
> >
> > > @Aljoscha, my assumption here is that at least one trigger should fire.
> > >
id> wrote:
>
> > KafkaIO is implemented using the UnboundedRead API, which is supported by
> > the DirectRunner. You should be able to run without the
> withMaxNumRecords;
> > if you can't, I'd be very interested to see the stack trace that you get
> > when you try to
t with Flink:
> .withMaxNumRecords(500)
>
> Cheers,
> Aljoscha
>
> On Wed, 31 Aug 2016 at 08:14 Chawla,Sumit <sumitkcha...@gmail.com> wrote:
>
> > Hi Aljoscha
> >
> > The code is no different when running on Flink. I have removed the
> > business
lement.getKey(), count));
}
}));
pipeline.run();
Regards
Sumit Chawla
On Fri, Aug 26, 2016 at 9:46 AM, Thomas Groh <tg...@google.com.invalid>
wrote:
> If you use the DirectRunner, do you observe the same behavior?
>
> On Thu, Aug 25, 2016 at 4:32 PM
> reading from an unbounded input using the DirectRunner and I get the
> expected output panes.
>
> Just to clarify, the second half of the trigger ('when the first element
> has been there for at least 30+ seconds') simply never fires?
>
> On Thu, Aug 25, 2016 at 2:38 PM, Chawla,Sumit
acking in
> the absence of new records.
>
> On Thu, Aug 25, 2016 at 10:29 AM, Chawla,Sumit <sumitkcha...@gmail.com>
> wrote:
>
> > Thanks Raghu.
> >
> > I don't have much control over changing KafkaIO properties. I added
> > KafkaIO code for comple
> time if there aren't any records to read from Kafka. Could you file a jira?
>
> thanks,
> Raghu.
>
> On Wed, Aug 24, 2016 at 2:10 PM, Chawla,Sumit <sumitkcha...@gmail.com>
> wrote:
>
> > Hi All
> >
> >
> > I am trying to do some simple bat
Hi All

I am trying to do some simple batch processing on KafkaIO records. My Beam
pipeline looks like the following:

pipeline.apply(KafkaIO.read()
    .withTopics(ImmutableList.of("mytopic"))
    .withBootstrapServers("localhost:9200")
    .apply("ExtractMessage", ParDo.of(new
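For reference, a more complete sketch of a pipeline along these lines (method names follow the 0.2.0-era KafkaIO API as I understand it; the DoFn body is an assumption, and the 500-record bound echoes the `.withMaxNumRecords(500)` suggestion elsewhere in this thread):

```java
pipeline
    .apply(KafkaIO.read()
        .withBootstrapServers("localhost:9200")
        .withTopics(ImmutableList.of("mytopic"))
        // Bound the otherwise-unbounded source so a batch-style run terminates.
        .withMaxNumRecords(500))
    .apply("ExtractMessage",
        ParDo.of(new DoFn<KafkaRecord<byte[], byte[]>, String>() {
          @ProcessElement
          public void processElement(ProcessContext c) {
            // Assumed payload extraction: decode the raw value bytes.
            c.output(new String(c.element().getKV().getValue()));
          }
        }));

pipeline.run();
```

With the bound in place, runners that cannot yet translate the unbounded Kafka source may be able to treat it as a bounded read.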
wrote:
> Explicit +Raghu
>
> On Fri, Aug 19, 2016 at 4:24 PM, Chawla,Sumit <sumitkcha...@gmail.com>
> wrote:
>
> > Hi All
> >
> > I am trying to use KafkaIO as unbounded source, but the translation is
> > failing. I am
Hi All

I am trying to use KafkaIO as an unbounded source, but the translation is
failing. I am using the FlinkRunner for the pipeline. It complains about
org.apache.kafka.common.TopicPartition not being serializable.

pipeline.apply(
    KafkaIO.read()
+1
Regards
Sumit Chawla
On Fri, Aug 12, 2016 at 9:29 AM, Aparup Banerjee (apbanerj) <
apban...@cisco.com> wrote:
> + 1, me2
>
>
>
>
> On 8/12/16, 9:27 AM, "Amit Sela" wrote:
>
> >+1 as in I'll join ;-)
> >
> >On Fri, Aug 12, 2016, 19:14 Eugene Kirpichov
on via ParDo and your own
> logic. If you want some logic similar to Write (but with different
> windowing and triggering) then it is a pretty simple composite to derive
> something from.
>
> On Thu, Jul 28, 2016 at 12:37 PM, Chawla,Sumit <sumitkcha...@gmail.com>
> wrote:
>
> Regards
> JB
>
>
> On 07/28/2016 09:00 PM, Chawla,Sumit wrote:
>
>> Hi Kenneth
>>
>> Thanks for looking into it. I am currently trying to implement Sinks for
>> writing data into Cassandra/Titan DB. My immediate goal is to run it on
>> Flink Runner.
question is, indeed, specific to the Flink
> > runner, and I'll then defer to somebody familiar with it.
> >
> > On Wed, Jul 27, 2016 at 5:25 PM Chawla,Sumit <sumitkcha...@gmail.com>
> > wrote:
> >
> > > Thanks a lot Eugene.
> > >
> > > >>>M
e/beam/sdk/io/gcp/bigquery/BigQueryIO.java#L1797
> Most other non-file-based connectors, indeed, don't (KafkaIO, DatastoreIO,
> BigtableIO etc.)
>
> I'm not familiar with the Flink API, however I'm a bit confused by your
> last paragraph: the Beam programming model is intentionally
>
Hi

Please suggest the best way to write a Sink in Beam. I see that there is a
Sink abstract class which is in an experimental state. What is the expected
outcome of this one? Is the API frozen, or could it still change? Most of
the existing Sink implementations like
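Following the suggestion earlier in this thread to build custom sinks from ParDo and your own logic, a minimal sketch of a Cassandra-style sink might look like the following. `CassandraClient`, its `connect`/`write`/`close` methods, and `MyRecord` are hypothetical placeholders, not a real client API:

```java
import org.apache.beam.sdk.transforms.DoFn;

// A ParDo-based sink sketch: open a connection per DoFn instance,
// write each element, and close on teardown.
class CassandraWriteFn extends DoFn<MyRecord, Void> {
  private transient CassandraClient client;

  @Setup
  public void setup() {
    // Assumed helper that opens a connection to the cluster.
    client = CassandraClient.connect("cassandra-host");
  }

  @ProcessElement
  public void processElement(ProcessContext c) {
    // Assumed write call for one record.
    client.write(c.element());
  }

  @Teardown
  public void teardown() {
    client.close();
  }
}
```

The pipeline would then apply `ParDo.of(new CassandraWriteFn())` as its final step, sidestepping the experimental Sink class entirely.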