I find that the code never calls the spout's fail or ack methods.
--
View this message in context:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Storm-topology-running-in-flink-tp13495.html
Sent from the Apache Flink User Mailing List archive at Nabble.com.
Hello,
I'm having trouble editing the default Flink memory settings. I'm attempting to
run a Flink job at scale on a cluster. The log shows that my edited config
settings were read into the program, but they have no effect. Here's the
trace:
17/06/05 23:45:41 INFO flink.FlinkRunner:
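For reference, the memory-related keys in flink-conf.yaml in that era looked roughly like this (a sketch; the exact keys depend on the Flink version, and the values shown here are placeholders, not recommendations):

```yaml
# Heap sizes for the JobManager and each TaskManager, in megabytes.
jobmanager.heap.mb: 1024
taskmanager.heap.mb: 4096

# Fraction of the TaskManager heap reserved for managed memory.
taskmanager.memory.fraction: 0.7

# Whether managed memory is allocated off-heap.
taskmanager.memory.off-heap: false
```

Note that when the job is deployed on YARN, the `-jm`/`-tm` flags passed to `yarn-session.sh` (or `flink run -m yarn-cluster`) override the heap sizes in flink-conf.yaml, which is a common reason edited settings appear to be read but have no effect.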
After the upgrade I started getting this exception; is this a bug?
2017-06-05 23:45:03,423 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - UTCStream -> Sink: UTC sink (2/12) (f78ef7f7368d27f414ebb9d0db7a26c8) switched from RUNNING to FAILED.
java.lang.Exception: Could not
With 1.3, what should we use for ProcessFunction?
org.apache.flink.streaming.api.functions.RichProcessFunction or
org.apache.flink.streaming.api.functions.ProcessFunction.{OnTimerContext,
Context}?
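For what it's worth, in Flink 1.3 ProcessFunction was turned into an abstract class that is itself a rich function, so RichProcessFunction was removed and you extend ProcessFunction directly. A minimal sketch (assuming the Flink 1.3 streaming API is on the classpath; the class and logic here are made up for illustration):

```java
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;

// Since 1.3, ProcessFunction already provides the rich-function
// lifecycle (open/close, getRuntimeContext), so there is no separate
// RichProcessFunction to extend.
public class MyProcessFunction extends ProcessFunction<String, String> {
    @Override
    public void processElement(String value, Context ctx, Collector<String> out)
            throws Exception {
        // Pass elements through unchanged; timers would be registered
        // via ctx.timerService() and fire in onTimer(..., OnTimerContext, ...).
        out.collect(value);
    }
}
```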
Hello,
I have just been reading about this because I am developing my final degree
project comparing the performance of Spark and Flink.
I've developed a machine learning algorithm, and I want to trigger the
execution in Flink.
When I do it with my code it takes around 5 minutes (all this time just in the
Hi Gordon,
The yarn session gets created when I try to run the following command:
yarn-session.sh -n 4 -s 2 -jm 1024 -tm 3000 -d --ship deploy-keys/
However, when I try to access the JobManager UI, it gives me this exception:
javax.net.ssl.SSLHandshakeException:
Hi Gordon,
Thank you for your response.
I have done the necessary configuration by adding all the node IPs from the
Resource Manager; is this correct?
Also, I will check whether a wildcard works, since all our hostnames begin with
the same pattern.
For example: SAN=dns:ip-192-168.* should work, right?
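As a sketch of that approach (the alias, paths, hostnames, and passwords below are made up for illustration), a keystore whose certificate carries a wildcard SAN can be generated with the JDK's keytool and then shipped with the session via `--ship deploy-keys/`:

```shell
# Generate a key pair whose certificate lists a wildcard DNS SAN.
# Note: a DNS wildcard conventionally matches only a single label,
# so a pattern like ip-192-168.* may not behave as hoped during
# hostname verification; test it against your actual hostnames.
keytool -genkeypair -alias flink \
  -keyalg RSA -validity 365 \
  -dname "CN=flink" \
  -ext "SAN=dns:ip-192-168.example.internal" \
  -keystore deploy-keys/ca.keystore \
  -storepass changeit -keypass changeit
```

The keystore and truststore would then be referenced from the `security.ssl.*` keys in flink-conf.yaml so that the JobManager UI presents the certificate.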
Hello everyone,
I am a complete newcomer to streaming engines and big data platforms, but I am
considering using Flink for my master's thesis, and before starting to use it I
am trying to do some kind of evaluation of the system. In particular I am
interested in observing how the system reacts in
Hi Vinay!
1. Will the existing functionality provided by Amazon to configure
in-transit data encryption work for Flink as well? This is explained here:
http://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-encryption-enable-security-configuration.html
Thank you, Till.
Gordon, can you please help?
Regards,
Vinay Patil
On Fri, Jun 2, 2017 at 9:10 PM, Till Rohrmann [via Apache Flink User
Mailing List archive.] wrote:
> Hi Vinay,
>
> I've pulled my colleague Gordon into the conversation who can probably
> tell
Hi everybody,
in my job I have a groupReduce operator with parallelism 4, and one of the
sub-tasks takes a huge amount of time compared with the others.
My guess is that the objects assigned to that slot have much more data to
reduce (and thus are somehow computationally heavy within the groupReduce
Hi Chesnay, thank you very much for your help.
When naming data streams, are the “Records sent” counters still correct, i.e.,
map.filter(condition1).name.filter(condition2).name?
thanks,
best regards
Luis.
Hi Robert,
I have a few more questions to clarify.
1) Why do you say printing the values to standard out would display
duplicates even if exactly-once works? What is the reason for this? Could
you brief me on this?
2) I observed duplicates (by writing to a file) starting from the
I think Till answered all your questions, but just to rephrase a bit:
1. within and TimeCharacteristic work on different levels. The
TimeCharacteristic tells how events are assigned a timestamp. The within
operator specifies the maximal time between the first and last event of a
matched