Hi Jose,
In some scenarios I think it would make sense to optionally allow the job
to keep going even if there are some exceptions.
But IMHO that scenario is more likely when debugging something than in
production. And in my own limited experience, most users could actually do
this themselves, i.e.
Ok, so I loaded the dump into Eclipse Mat and followed:
https://cwiki.apache.org/confluence/display/FLINK/Debugging+ClassLoader+leaks
- On the Histogram, I got over 30 entries for: ChildFirstClassLoader
- Then I clicked on one of them, chose "Merge Shortest Path..." and picked
"Exclude all
Hi Harshit,
I guess you are using the reference code from the master branch instead of
from Flink 1.14, which is the version you are running.
TumblingEventTimeWindows was introduced in Flink 1.16, which has not been
released yet. However, it could be seen as a utility class, so I think you
could just copy it into
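If copying the class over is inconvenient, a guarded import keeps the failure explicit instead of crashing with a bare ImportError. A minimal sketch; the module path pyflink.datastream.window is an assumption based on newer PyFlink releases, not taken from this thread:

```python
# Hedged sketch: TumblingEventTimeWindows is not shipped in PyFlink 1.14,
# so guard the import and record whether the assigner is available.
# The module path below is an assumption based on newer PyFlink releases.
try:
    from pyflink.datastream.window import TumblingEventTimeWindows
    HAS_EVENT_TIME_WINDOWS = True
except ImportError:
    # Fall back, e.g. to a locally copied version of the utility class.
    TumblingEventTimeWindows = None
    HAS_EVENT_TIME_WINDOWS = False
```

The job can then check HAS_EVENT_TIME_WINDOWS and fail fast with a clear message on the older runtime.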
I can look into RocksDB metrics; I need to configure Prometheus at some point
anyway. However, going back to the original question, is there no way to gain
more insight into this with the state processor API? You've mentioned potential
issues (too many states, missing compaction) but, with my
Hi Martin,
Thanks for your answer. Regarding my contribution, I will for sure check the
contributing guide and get familiar with Flink source code. I hope it will end
up well and I will be able to write that functionality.
Best regards,
Jasmin
On 19.04.2022, at 09:39, Martijn Visser
I am deploying as a Docker container on our servers; due to some restrictions
I can only pass keystore URLs.
One option is yarn.ship-files! Can you help by pointing me to sample code,
and explaining how the job manager can ship this file?
Or should I download it as part of the job's main function and send it to all task managers?
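For reference, yarn.ship-files takes a semicolon-separated list of local files or directories that Flink ships to the YARN application, making them available in each container's working directory. A minimal flink-conf.yaml sketch; the path is a placeholder:

```yaml
# Hedged sketch: ship a local keystore to all YARN containers.
# /path/to/keystore.jks is a placeholder path.
yarn.ship-files: /path/to/keystore.jks
```

Once shipped, the job can reference the file by its plain file name relative to the container working directory.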
Hello Community,
We are trying to adopt the flink stop command instead of orchestrating the
stop manually. While using the command, it failed with the error "Operation
not found under key:
org.apache.flink.runtime.rest.handler.job.AsynchronousJobOperationKey@9fbfc15a".
The job eventually stopped, but I wanted to
Also https://shopify.engineering/optimizing-apache-flink-applications-tips
might be helpful (has a section on profiling, as well as classloading).
On Tue, Apr 19, 2022 at 4:35 AM Chesnay Schepler wrote:
> We have a very rough "guide" in the wiki (it's just the specific steps I
> took to debug
> I assume that when you say "new states", that is related to new descriptors
> with different names? Because, in the case of windowing for example, each
> window "instance" has its own scoped (non-global and keyed) state, but that's
> not regarded as a separate column family, is it?
Yes,
Dear Team,
I am new to PyFlink and request your support with an issue I am facing with
PyFlink. I am using PyFlink version 1.14.4 and reference code from the
PyFlink getting-started pages.
I am getting the following error:
ImportError: cannot import name 'TumblingEventTimeWindows' from
Hi Flink Users,
Does anyone know what happened to the /status endpoint of a job?
Calling /jobs/0c39e6ce662379449e7f7f965ff1eee0/status gives me a 404.
Thanks & best, Peter
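In the meantime, the job details endpoint can serve as a substitute: /jobs/&lt;jobid&gt; returns the job details, including a top-level "state" field. A minimal sketch; the base URL and job id are placeholders:

```python
import json
from urllib.request import urlopen


def extract_state(job_details: dict) -> str:
    # The /jobs/<jobid> response carries the job status in its "state" field.
    return job_details["state"]


def job_state(base_url: str, job_id: str) -> str:
    # e.g. job_state("http://localhost:8081", "0c39e6ce662379449e7f7f965ff1eee0")
    with urlopen(f"{base_url}/jobs/{job_id}") as resp:
        return extract_state(json.load(resp))
```

This reads one field out of the richer job-details payload, so it keeps working regardless of whether a dedicated /status endpoint exists in the running version.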
Hi,
I have a question regarding Flink checkpointing configuration.
I am obtaining my knowledge from the official docs here:
https://nightlies.apache.org/flink/flink-docs-stable/docs/deployment/config/
and running Flink 1.14.4
I would like to be able to do a checkpoint every 10 minutes which at
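For the interval itself, the relevant option is execution.checkpointing.interval. A minimal flink-conf.yaml sketch; the pause and timeout values are illustrative, not recommendations:

```yaml
# Hedged sketch: trigger a checkpoint every 10 minutes.
execution.checkpointing.interval: 10min
# Optional guards; the values below are illustrative only.
execution.checkpointing.min-pause: 1min
execution.checkpointing.timeout: 10min
```

min-pause bounds how soon the next checkpoint may start after the previous one completes, which matters if checkpoints occasionally run long.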
Hello,
We are actually working on a similar problem against S3. The checkpointing
discussion got me wondering whether the checkpoint would indeed succeed with
a large backlog of files. I always imagined that the SplitEnumerator lists all
available files and SourceReader is responsible for reading those files
Hi,
After the previous JobManager fails, K8s starts the new JobManager, but the
leader saved in HA is still the old JobManager address. After the Dispatcher
gets the old JobManager leader, it will try to connect to it.
This error can be ignored; things will return to normal after waiting for a
Hi Roman,
I assume that when you say "new states", that is related to new descriptors
with different names? Because, in the case of windowing for example, each
window "instance" has its own scoped (non-global and keyed) state, but that's
not regarded as a separate column family, is it?
For
We have a very rough "guide" in the wiki (it's just the specific steps I
took to debug another leak):
https://cwiki.apache.org/confluence/display/FLINK/Debugging+ClassLoader+leaks
On 19/04/2022 12:01, huweihua wrote:
Hi, John
Sorry for the late reply. You can use MAT[1] to analyze the dump
Hi, John
Sorry for the late reply. You can use MAT [1] to analyze the dump file. Check
whether it has too many loaded classes.
[1] https://www.eclipse.org/mat/
> On Apr 18, 2022, at 9:55 PM, John Smith wrote:
>
> Hi, can anyone help with this? I never looked at a dump file before.
>
> On Thu, Apr 14, 2022
Hi Alexis,
Thanks a lot for the information,
MANIFEST files list RocksDB column families (among other info); an
ever-growing size of these files might indicate that some new states are
constantly being created.
Could you please confirm that the number of state names is constant?
> Could you
https://www.yuque.com/docs/share/8625d14b-d465-48a3-8dc1-0be32b138f34?#lUX6
"TPC-DS: elapsed time per engine"
The link is valid until 2022-04-22 10:31:05.
LuNing Wong wrote on Mon, Apr 18, 2022 at 09:44:
> To add: the data source was Hive 3.1.2 on Hadoop 3.1.0.
>
> LuNing Wong wrote on Mon, Apr 18, 2022 at 09:42:
>
> > The Flink version is 1.14.4,
Hi Jasmin,
Vertica is not implemented as a JDBC dialect, which is a requirement for
Flink to support this. You can find an overview of the currently supported
JDBC dialects in the documentation [1]. It would be interesting if you
could contribute support for Vertica towards Flink.
Best regards,
Hi,
I'm using Flink 1.14.4 and failed to load, in a WindowReaderFunction, the
state of a stateful trigger attached to a session window.
I found that the following data become available in WindowReaderFunction:
- the state defined in the ProcessWindowFunction
- the registered timers of the stateful
This file: myReplace.py
On Tue, Apr 19, 2022 at 2:38 PM 799590...@qq.com <799590...@qq.com> wrote:
>
> Is it the class below? There is no import of
> org.apache.flink.table.functions.ScalarFunction
>
> When creating the function with Flink SQL there was no requirement to import ScalarFunction
>
> Below is the logic for uploading the .py file
>
> String originalFilename = file.getOriginalFilename();
Is it the class below? There is no import of
org.apache.flink.table.functions.ScalarFunction
When creating the function with Flink SQL there was no requirement to import ScalarFunction
Below is the logic for uploading the .py file:
String originalFilename = file.getOriginalFilename();
destFile = new File(System.getProperty("user.dir") + uploadTempPath,
originalFilename);
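The Java-side import is unrelated here: for a Python UDF, the class must be imported inside the .py file itself. A minimal sketch of what such a file could contain, assuming the standard PyFlink import path pyflink.table.udf; the class name and replacement logic are placeholders, and the import is guarded only so the sketch also loads where PyFlink is not installed:

```python
# Hedged sketch: the .py file defining the UDF must import ScalarFunction
# itself, otherwise evaluating the class definition raises
# "NameError: name 'ScalarFunction' is not defined".
try:
    from pyflink.table.udf import ScalarFunction, udf
    PYFLINK_AVAILABLE = True
except ImportError:
    # Guard only so this sketch loads without PyFlink; real jobs need PyFlink.
    ScalarFunction, udf, PYFLINK_AVAILABLE = object, None, False


class MyReplace(ScalarFunction):
    # Placeholder logic: the actual replacement is an assumption,
    # not taken from the thread.
    def eval(self, s):
        return s.replace("old", "new")
```

With PyFlink installed, the class would then be wrapped with udf(MyReplace(), result_type=...) before being registered with the table environment.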
NameError: name 'ScalarFunction' is not defined
Did you import ScalarFunction?
On Tue, Apr 19, 2022 at 2:08 PM 799590...@qq.com.INVALID
<799590...@qq.com.invalid> wrote:
>
> Below is the error log from just now; it no longer contains "Python callback server start"
>
Below is the error log from just now; it no longer contains "Python callback
server start failed". The reason was that the highlighted part had picked up
the Python path C:\Python310 from the environment variables (Python installed
earlier; python36 was installed later and the PATH variable was set to
python36). After restarting the machine it was fine.
2022-04-19 13:58:11 |INFO |http-nio-9532-exec-3 |HiveCatalog.java:257
|org.apache.flink.table.catalog.hive.HiveCatalog |Setting hive conf dir as