Hi Joey,
Good question!
I will copy this to Till and Chesnay, who know this part of the implementation.
Thanks, vino.
2018-08-03 11:09 GMT+08:00 Joey Echeverria :
I don’t have logs available yet, but I do have some information from ZK.
The culprit appears to be the /flink/default/leader/dispatcher_lock znode.
I took a look at the dispatcher code here:
Hi Mich,
I have reviewed your code in the GitHub repository you provided.
I copied your code into org.apache.flink.table.examples.scala under
flink-examples-table. It compiled and did not throw the
exception you reported, although there are other exceptions (they relate to
HDFS; this is because of
See the section on Operators here
https://docs.google.com/document/d/1b5d-hTdJQsPH3YD0zTB4ZqodinZVHFomKvt41FfUPMc/edit?usp=sharing
On Thu, Aug 2, 2018 at 3:42 PM Harshvardhan Agrawal <
harshvardhan.ag...@gmail.com> wrote:
Fabian,
https://github.com/apache/flink/pull/6481
I added a page, but did not remove or edit any existing page. Let me know
what you'd like to see trimmed.
On Thu, Aug 2, 2018 at 8:44 AM Fabian Hueske wrote:
Hello,
I have recently started reading Stream Processing with Apache Flink by
Fabian and Vasiliki. In Chapter 3 of the book there is a statement that
says: "None of the functions expose an API to set timestamps of emitted
records, manipulate the event-time clock of a task, or emit watermarks."
Thanks for the tips, Gary and Vino. I’ll try to reproduce it with test data and
see if I can post some logs.
I’ll also watch the leader znode to see if the election isn’t happening or if
it’s not being retrieved.
Thanks!
-Joey
On Aug 1, 2018, at 11:19 PM, Gary Yao
Hi Elias,
Thanks for the update!
I think the document can be added to the docs now.
It has some overlap with the Event Time Overview page.
IMO, it should not replace the overview page but rather be a new page.
Maybe, we can make the overview a bit slimmer and point to the more
detailed
Hi,
I'm not aware that multiple Flink operators can share transport
connections. They usually perform independent communication with the
target system. If the pressure on Elasticsearch is too high, have you
thought about reducing the parallelism of the sink? The buffering
options could
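The parallelism suggestion above can be sanity-checked with simple arithmetic (my own sketch, not Flink code): if each sink subtask opens its own transport connection, the total connection count scales with sinks times parallelism:

```scala
// hypothetical sizing helper: assume each sink subtask opens its own
// transport connection to Elasticsearch
def transportConnections(numSinks: Int, parallelism: Int): Int =
  numSinks * parallelism

// 6 streams sunk at parallelism 8 vs. lowered to parallelism 2
val high = transportConnections(6, 8)
val low = transportConnections(6, 2)
```

Reducing sink parallelism is therefore the most direct lever on the connection count when the streams themselves cannot be merged.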
Hi Juan,
currently, there is no way to handle late events in SQL. This feature
has been requested multiple times, so it is likely that some contributor
will pick it up soon. I filed FLINK-10031 [1] for it. There is also [2],
which aims to improve the situation with time windows.
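To make the notion of "late" concrete, here is a plain-Scala model (my own illustration, not Flink's implementation): an event counts as late when its timestamp is at or behind the current watermark, which is the classification a future SQL feature would have to expose:

```scala
// minimal event-time model: an event is "late" when its timestamp
// is at or behind the current watermark (illustrative logic only)
case class Event(key: String, timestamp: Long)

def splitLate(events: Seq[Event], watermark: Long): (Seq[Event], Seq[Event]) =
  events.partition(_.timestamp > watermark)

// with a watermark of 90, the event at timestamp 50 is late
val (onTime, late) =
  splitLate(Seq(Event("a", 100L), Event("b", 50L), Event("c", 120L)), 90L)
```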
Regards,
Timo
Use RichCoFlatMapFunction if you require access to the runtime context.
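For the thread-name part of the question specifically, no Flink API is needed; the JVM exposes it directly, and the same call works inside flatMap1/flatMap2 (plain Scala):

```scala
// the JVM exposes the name of the thread currently executing this code;
// inside a Flink task this would return the task's thread name
val threadName: String = Thread.currentThread().getName
```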
On 02.08.2018 14:26, Puneet Kinra wrote:
Hi
Is there any way to get the thread name in a CoFlatMapFunction,
or to get the runtime context?
--
Cheers,
Puneet Kinra
Mobile: +918800167808 | Skype :
Hello,
We are using the SQL api and we were wondering if it’s possible to capture and
log late events. We could not find a way, since the time window is managed
inside the SQL.
Is there a way to do this?
Thank you,
Juan
I would appreciate it if anyone has had a chance to look at the Scala code
on GitHub and advise:
https://github.com/michTalebzadeh/Flink
Regards,
Dr Mich Talebzadeh
LinkedIn:
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
Hi
Is there any way to get the thread name in a CoFlatMapFunction,
or to get the runtime context?
--
Cheers,
Puneet Kinra
Mobile: +918800167808 | Skype : puneet.ki...@customercentria.com
e-mail : puneet.ki...@customercentria.com
Hello,
I am using the Elasticsearch 5 connector. Can I use the same connection when
sinking multiple streams to Elasticsearch? Currently, I think it creates a
separate transport connection for each sink, which creates a lot of
connections to the cluster, since I am sinking 5-6 streams.
Hi,
Paul is right.
Which and how much data is stored in state for a window depends on the type
of function that is applied to the window:
- ReduceFunction: Only the reduced value is stored
- AggregateFunction: Only the accumulator value is stored
- WindowFunction or ProcessWindowFunction: All elements assigned to the window are stored
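The difference in state size can be illustrated with a plain-Scala sketch (my own illustration, not Flink's state backend): an incremental, ReduceFunction-style aggregation keeps a single running value per window, while a ProcessWindowFunction-style window must buffer every element until the window fires:

```scala
val elements = Seq(3, 1, 4, 1, 5)

// ReduceFunction-style: fold elements into one running value;
// state is a single Int regardless of how many elements arrive
val reducedState = elements.reduce(_ + _)

// ProcessWindowFunction-style: keep every element until the window
// fires; state grows linearly with the input
val bufferedState = elements.toList
```

This is why incremental aggregation is preferred when the per-window logic allows it.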
Hi Pedro,
It sounds like a bug in Flink itself.
You can create an issue in JIRA with enough information, such as logs,
complete exception stack traces, the Flink version, and your environment.
Thanks, vino.
2018-08-02 16:45 GMT+08:00 PedroMrChaves :
Hello,
It happens whether the web UI is open or not, and it no longer works.
When this happens, I have to restart the job managers.
regards,
Pedro.
--
Best Regards,
Pedro Chaves
--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
Hi Paul,
Yes, I am talking about the normal case: Flink must store the data in the
window as state so it can recover from failures.
In some scenarios your understanding is also correct, and Flink uses
window panes to optimize window calculations.
So, if your scenario uses the optimized mode, ignore this.
Thanks Timo,
Did as suggested getting this compilation error
[info] Compiling 1 Scala source to
/home/hduser/dba/bin/flink/md_streaming/target/scala-2.11/classes...
[error]
/home/hduser/dba/bin/flink/md_streaming/src/main/scala/myPackage/md_streaming.scala:136:
could not find implicit value for
Whenever you use Scala and there is a Scala-specific class, use it.
remove: import
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment
add: import org.apache.flink.streaming.api.scala._
This will use
org.apache.flink.streaming.api.scala.StreamExecutionEnvironment.
Timo
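The mechanism behind this fix can be shown with a minimal plain-Scala illustration (all names below are hypothetical stand-ins, not Flink's actual API): a wildcard import satisfies a "could not find implicit value" error by bringing the required implicit into scope, just as org.apache.flink.streaming.api.scala._ provides the implicit TypeInformation instances the Scala API needs:

```scala
object FlinkScalaImplicits {
  // stand-in for the implicit TypeInformation values that
  // org.apache.flink.streaming.api.scala._ would provide
  implicit val intTypeInfo: String = "TypeInformation[Int]"
}

// stand-in for an API method with an implicit parameter, like addSource/map
def createStream(implicit typeInfo: String): String = typeInfo

// without this import, the call below fails to compile with:
// "could not find implicit value for parameter typeInfo"
import FlinkScalaImplicits._
val resolved = createStream
```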
Am
Tremendous. Many thanks.
Put the sbt build file and the Scala code here
https://github.com/michTalebzadeh/Flink
Regards,
Dr Mich Talebzadeh
LinkedIn:
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
Thanks, everyone, for the advice.
This worked and fixed the compilation error:
import org.apache.flink.table.api.TableEnvironment
import org.apache.flink.table.api.scala._
import org.apache.flink.api.scala._
….
val dataStream = streamExecEnv
.addSource(new
Hi Joey,
If the other components (e.g., Dispatcher, ResourceManager) are able to
finish the leader election in a timely manner, I currently do not see a
reason why it should take the REST server 20 - 45 minutes.
You can check the contents of znode /flink/.../leader/rest_server_lock to
see if