to do with the
late elements in the window evaluation function
This is what I have done, since I cannot afford to lose any financial data.
Regards,
Vinay Patil
On Mon, Nov 7, 2016 at 10:58 PM, Till Rohrmann [via Apache Flink Mailing
List archive.] <ml-node+s1008284n14424...@n3.nabble.com>
for this, but since I read the above part
in the documentation, we are going for Cassandra now (to store records and
query them for a special case).
Regards,
Vinay Patil
On Wed, Aug 31, 2016 at 4:51 AM, Stephan Ewen <se...@apache.org> wrote:
> In streaming, memory is mainly needed for state (key/va
/flink-user/201601.mbox/%3CCANC1h_
v0pqfvfto478ft5cbgm-bf-do742gouz528bw7vrj...@mail.gmail.com%3E
Regards,
Vinay Patil
I was not aware that there is a separate mailing list for discussions.
From next time I will use that mailing list for my queries.
Thanks Till for that information.
Regards,
Vinay Patil
On Fri, Jul 8, 2016 at 5:38 PM, Till Rohrmann <trohrm...@apache.org> wrote:
> Hi Vinay,
>
Dataset API for historical
re-processing)
Regards,
Vinay Patil
?
For the other sub-task, I am seeing bytes received for all 56 entries
(this may be because of applying rebalance after the source).
P.S.: I am reading over a million records from Kafka, so I need to utilize
enough resources [performance is the key here].
Regards,
Vinay Patil
On Mon, Jul 4, 2016 at 8
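For context, rebalance() redistributes records round-robin to all downstream subtasks, which is why every subtask reports bytes received. A minimal sketch of the pattern; the socket source stands in for the actual Kafka source, and the parallelism value is illustrative:

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RebalanceSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(56); // illustrative: one subtask per available slot

        // stand-in source; in the thread this would be the Kafka consumer
        DataStream<String> source = env.socketTextStream("localhost", 9999);

        source
            .rebalance() // round-robin: every downstream subtask receives data
            .print();

        env.execute("rebalance sketch");
    }
}
```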
This is what I was looking for.
Thank you Ufuk
Regards,
Vinay Patil
On Tue, Jul 5, 2016 at 5:39 PM, Ufuk Celebi <u...@apache.org> wrote:
> There is also this:
> https://flink.apache.org/contribute-code.html#snapshots-nightly-builds
>
> The Hadoop 2 version is built for Had
Yes, I had already done that yesterday but got a dependency error while
doing it (since it was not able to download one jar from Nexus), so I
thought there might be another way.
Anyway, I will try to do that.
Thanks
Regards,
Vinay Patil
On Tue, Jul 5, 2016 at 3:11 PM, Aljoscha Krettek <al
Correct, so it means I cannot use it for running on a cluster?
In my code I have updated my dependency to 1.1-SNAPSHOT, so I wanted to
test it on a cluster with version 1.1.
Regards,
Vinay Patil
On Tue, Jul 5, 2016 at 2:56 PM, Aljoscha Krettek <aljos...@apache.org>
wrote:
> Flink 1.1
Thanks,
so is operator chaining useful in terms of utilizing the resources, or
should we keep chaining to minimal use, say 3-4 operators, and disable
it otherwise?
I am worried because I am seeing all the operators in one box on the Flink UI.
Regards,
Vinay Patil
On Mon, Jul 4, 2016 at 7:13 PM
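For reference, chaining can be tuned per operator rather than only disabled globally; a minimal sketch of the relevant DataStream API calls (the map bodies and socket source are placeholders):

```java
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ChainingSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // env.disableOperatorChaining(); // global switch: every operator gets its own task

        DataStream<String> lines = env.socketTextStream("localhost", 9999);

        lines
            .map(new Upper())                   // chained with its neighbors by default
            .map(new Upper()).startNewChain()   // a new chain begins at this operator
            .map(new Upper()).disableChaining() // this operator is never chained
            .print();

        env.execute("chaining sketch");
    }

    private static class Upper implements MapFunction<String, String> {
        @Override
        public String map(String value) {
            return value.toUpperCase();
        }
    }
}
```

Chained operators share one task (one "box" in the UI), which saves serialization and thread-handover cost, so chaining is usually good for throughput; splitting chains mainly helps when you want to isolate a heavy operator.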
single box achieve maximum parallelism?
We are processing a huge volume of data (60,000 records per second), so I
wanted to be sure what we can correct to achieve better results.
Regards,
Vinay Patil
On Fri, Jul 1, 2016 at 9:23 PM, Aljoscha Krettek <aljos...@apache.org>
wro
of our project when we deploy on the cluster and check the Job
Graph, everything is shown in one box. Why does this happen? Is it because
of chaining of streams?
So the box here represents the function flow, right?
Regards,
Vinay Patil
On Thu, Jun 30, 2016 at 7:29 PM, Vinay Patil <vinay18
be at the start of the
window or before we apply the window?
Regards,
Vinay Patil
On Thu, Jun 30, 2016 at 2:11 PM, Aljoscha Krettek <aljos...@apache.org>
wrote:
> Hi,
> I think the problem is that the DeltaFunction needs to have this signature:
>
> DeltaFunction<CoGroup
lateness. In Flink 1.1 we will introduce a setting for windows that allows
> to specify an allowed lateness. With this, late elements will be dropped
> automatically. This feature is already available in the master, by the way.
>
> Cheers,
> Aljoscha
>
> On Wed, 29 Jun 2016 at 14:1
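The setting Aljoscha mentions looks roughly like this in the Flink 1.1 windowing API; a sketch only, where the key, window size, lateness value, and the Event/MyWindowFunction types are illustrative placeholders:

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

// assuming a DataStream<Event> `input` with timestamps and watermarks assigned
DataStream<Event> result = input
        .keyBy("id")
        .window(TumblingEventTimeWindows.of(Time.minutes(1)))
        // elements arriving later than watermark + 30s are dropped
        .allowedLateness(Time.seconds(30))
        .apply(new MyWindowFunction());
```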
say the
timestamps received are out of order.
Late data: is there a threshold after which late data is no longer
accepted?
Regards,
Vinay Patil
On Wed, Jun 29, 2016 at 5:15 PM, Aljoscha Krettek <aljos...@apache.org>
wrote:
> Hi,
> the element will be kept around indefinite
face this issue?
I am actually not able to understand how it will differ in real-time
streams.
Regards,
Vinay Patil
On Tue, Jun 28, 2016 at 5:07 PM, Aljoscha Krettek <aljos...@apache.org>
wrote:
> Hi,
> ingestion time can only be used if you don't care about the timestamp in
> t
?
Also, assuming that the records are always going to be in order, which is
the best approach: Ingestion Time or Event Time?
Regards,
Vinay Patil
On Tue, Jun 28, 2016 at 2:41 PM, Aljoscha Krettek <aljos...@apache.org>
wrote:
> Hi,
> first regarding tumbling windows: even if you h
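When the records really are in order, event time with an ascending-timestamp extractor is the usual pattern; a sketch, where Event and getTimestamp() are assumed placeholder types and the package path is the Flink 1.1 one:

```java
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.timestamps.AscendingTimestampExtractor;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

// assuming a DataStream<Event> `events` from some source
DataStream<Event> withTimestamps = events
        .assignTimestampsAndWatermarks(new AscendingTimestampExtractor<Event>() {
            @Override
            public long extractAscendingTimestamp(Event e) {
                return e.getTimestamp(); // epoch millis carried in the record
            }
        });
```

With ingestion time the record's own timestamp is ignored entirely, so if the downstream logic ever depends on when an event actually happened, event time is the safer choice.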
Regards,
Vinay Patil
On Mon, Jun 27, 2016 at 11:31 PM, Vinay Patil <vinay18.pa...@gmail.com>
wrote:
> Just an update, when I keep IngestionTime and remove the timestamp I am
> generating, I am getting all the records, but for Event Time I am getting
> one less record, I c
.
Even when I try assigning a timestamp for IngestionTime, I get one record
less. So can I safely use Ingestion Time, or is it always advisable to
use Event Time?
Regards,
Vinay Patil
On Mon, Jun 27, 2016 at 8:16 PM, Vinay Patil <vinay18.pa...@gmail.com>
wrote:
> Hi ,
>
> Act
which is
guaranteed to be in order.
Will using triggers help here?
Regards,
Vinay Patil
*+91-800-728-4749*
On Mon, Jun 27, 2016 at 7:05 PM, Aljoscha Krettek <aljos...@apache.org>
wrote:
> Hi,
> what timestamps are you assigning? Is it guaranteed that all of them would
> fall in
a sysout, I am getting 20
sysouts instead of 10 (10 for the source stream and 10 for the dest stream).
I am unable to understand why one record fewer is reaching the co-group.
Regards,
Vinay Patil
On Wed, Jun 15, 2016 at 1:39 PM, Fabian Hueske <fhue...@gmail.com> wrote:
> Can you add a flag to eac
Hi All,
Just an update on this:
Setting the codec using the DataFileWriter setCodec method:
writer.setCodec(CodecFactory.snappyCodec());
Please help me with this issue.
Regards,
Vinay
On Jun 22, 2016 10:47 PM, "Vinay Patil" <vinay18.pa...@gmail.com> wrote:
> Hi ,
>
> I
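For reference, a self-contained sketch of the setCodec call above; note that Avro's snappy codec needs snappy-java on the classpath at runtime, and the schema and file name here are made up for illustration:

```java
import java.io.File;

import org.apache.avro.Schema;
import org.apache.avro.file.CodecFactory;
import org.apache.avro.file.DataFileWriter;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;

public class SnappyAvroSketch {
    public static void main(String[] args) throws Exception {
        // minimal record schema, purely for demonstration
        Schema schema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"Rec\","
                + "\"fields\":[{\"name\":\"f\",\"type\":\"string\"}]}");

        try (DataFileWriter<GenericRecord> writer =
                     new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(schema))) {
            writer.setCodec(CodecFactory.snappyCodec()); // must be set before create()
            writer.create(schema, new File("out.avro"));

            GenericRecord rec = new GenericData.Record(schema);
            rec.put("f", "hello");
            writer.append(rec);
        }
    }
}
```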
);
The error still occurs even after removing the snappy jar dependency from
the pom.
Do we have to add any other dependency?
Regards,
Vinay Patil
ollect(paramIterable.iterator().next());
}
}
}).print();
Regards,
Vinay Patil
On Tue, Jun 14, 2016 at 1:37 PM, Vinay Patil <vinay18.pa...@gmail.com>
wrote:
> You are right, debugged it for all elements , I can do that now.
> Thanks a lot.
>
> Regards,
> Vinay Patil
>
>
You are right, debugged it for all elements , I can do that now.
Thanks a lot.
Regards,
Vinay Patil
On Tue, Jun 14, 2016 at 11:56 AM, Jark Wu <wuchong...@alibaba-inc.com>
wrote:
> In `coGroup(Iterable iter1, Iterable iter2,
> Collector out)` , when both iter1 and iter2 a
, only the matched elements from both streams will come into
the coGroup function.
What I want is to check for unmatched elements from both streams and
write them to the sink.
Regards,
Vinay Patil
On Tue, Jun 14, 2016 at 2:07 AM, Matthias J. Sax <mj...@apache.org> wrote:
>
ements and also the unmatched
elements to the sink (S3).
How do I get the unmatched elements from each stream?
Regards,
Vinay Patil
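One way to surface unmatched elements is inside the CoGroupFunction itself: a side whose iterable is empty had no match in the other stream for that key and window. A sketch; the Event type, key selectors, window, and sink are placeholders, not from the thread:

```java
import org.apache.flink.api.common.functions.CoGroupFunction;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.Collector;

// assuming DataStream<Event> stream1 and stream2 with key selectors defined
stream1.coGroup(stream2)
    .where(keySelector1)
    .equalTo(keySelector2)
    .window(TumblingEventTimeWindows.of(Time.minutes(1)))
    .apply(new CoGroupFunction<Event, Event, Event>() {
        @Override
        public void coGroup(Iterable<Event> first, Iterable<Event> second,
                            Collector<Event> out) {
            boolean leftEmpty = !first.iterator().hasNext();
            boolean rightEmpty = !second.iterator().hasNext();
            if (rightEmpty) {
                for (Event e : first) out.collect(e);  // unmatched on the left
            } else if (leftEmpty) {
                for (Event e : second) out.collect(e); // unmatched on the right
            }
            // the remaining case is a match; handle matched pairs there
        }
    })
    .addSink(s3Sink); // e.g. a rolling file sink pointed at S3
```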
to the pom we are using
Regards,
Vinay Patil
On Sat, Jun 11, 2016 at 1:36 AM, Robert Metzger <rmetz...@apache.org> wrote:
> Are you using Maven for building your job jar?
> If yes, can you post your pom file on the mailing list?
>
> On Fri, Jun 10, 2016 at 7:16 PM,
Hi Guys,
I have deployed my application on a cluster; however, when I try to run it,
it throws a *NoClassDefFoundError for KeyedDeserializationSchema*.
All the dependencies are provided correctly, since I have run it on a
different standalone node.
Please help.
Regards,
Vinay Patil
argument to the above command
Thanks and Regards,
Vinay Patil
Hi Stephan,
Yes, using the DeserializationSchema solution will definitely work.
I am not able to find the dependency for SuccessException.
Any help on this?
Regards,
Vinay Patil
On Thu, May 26, 2016 at 3:32 PM, Stephan Ewen <se...@apache.org> wrote:
> Hi!
>
verifying that the configuration is correct and that we
are able to read from Kafka.
Am I doing it right, or is there a better approach?
Regards,
Vinay Patil
On Thu, May 26, 2016 at 1:01 PM, Aljoscha Krettek <aljos...@apache.org>
wrote:
> Hi,
> what we are doing in most inter
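A common smoke test for the consumer configuration is a job that simply prints what it reads; a minimal sketch with the 0.8 connector of that era (topic name, broker, and ZooKeeper addresses are placeholders):

```java
import java.util.Properties;

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer08;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class KafkaSmokeTest {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("zookeeper.connect", "localhost:2181"); // required by the 0.8 consumer
        props.setProperty("group.id", "smoke-test");

        DataStream<String> stream = env.addSource(
                new FlinkKafkaConsumer08<>("my-topic", new SimpleStringSchema(), props));

        stream.print(); // if records show up, the consumer configuration works
        env.execute("kafka smoke test");
    }
}
```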
)
Please help with these issues.
Regards,
Vinay Patil