I have a PyFlink job that starts with the DataStream API and converts to
the Table API. In the DataStream portion, there is a MapFunction. I am
getting the following error:
flink run -py sample.py
java.lang.IllegalArgumentException: The configured managed memory fraction
for Python worker proces
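A hedged sketch of one possible fix, assuming the truncated error is the usual "managed memory fraction for Python worker process must be within (0, 1]" message, which typically means no managed memory has been reserved for the Python workers. Key names and defaults vary by Flink version, so check the memory configuration docs for your release:

```yaml
# flink-conf.yaml (sketch only; key names differ across versions)
taskmanager.memory.managed.fraction: 0.4
# Reserve a share of managed memory for Python workers; the consumer names
# changed between releases (e.g. DATAPROC in 1.12 vs OPERATOR/STATE_BACKEND later)
taskmanager.memory.managed.consumer-weights: OPERATOR:70,STATE_BACKEND:70,PYTHON:30
```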
Is it possible to rename execution stages from the Table API? Right now the
entire select transformation appears in plaintext in the task name, so the
log entries from ExecutionGraph are over 10,000 characters long and the log
files are incredibly difficult to read.
for example a simple selected fie
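One possible mitigation, assuming an upgrade is an option: later Flink versions (around 1.15) introduced a configuration option that replaces the verbose generated Table/SQL operator names with short ones. A sketch, to be verified against the docs for your version:

```yaml
# flink-conf.yaml: shortens verbose Table/SQL operator names in logs and the UI
# (only available in newer Flink versions; no effect on earlier releases)
table.exec.simplify-operator-name-enabled: true
```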
Hi Timo,
thanks for the help here; wrapping the MapView in a case class indeed
solved the problem.
It was not immediately apparent from the documentation that using a MapView
as a top-level accumulator would cause an issue. It seemed a straightforward,
intuitive way to use it :)
Cheers
Clemens
On We
Could you please ensure that you are using the native Kubernetes mode[1]?
For standalone on K8s[2], you need to manually set the annotation in the
jobmanager yaml file.
[1].
https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/resource-providers/native_kubernetes/
[2].
https://c
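As an illustration of the native Kubernetes path, the annotations can be passed as configuration options when deploying in application mode. The cluster id, image, and jar path below are placeholders:

```shell
# Hedged sketch: JobManager annotations in native Kubernetes application mode.
# In this mode Flink creates the Deployment itself, so it applies the annotations.
flink run-application \
    --target kubernetes-application \
    -Dkubernetes.cluster-id=my-flink-cluster \
    -Dkubernetes.container.image=flink:1.13 \
    -Dkubernetes.jobmanager.annotations=prometheus.io/scrape:true,team:streaming \
    local:///opt/flink/usrlib/my-job.jar
```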
Hi,
You need to make sure that PyFlink is available in the cluster nodes. There are
a few ways to achieve this, e.g.
- Install PyFlink on all the cluster nodes
- Install PyFlink in a virtual environment and specify it via python archive [1]
Regards,
Dian
[1]
https://ci.apache.org/projects/flin
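The virtual-environment route can be sketched roughly as follows, based on the pattern in the PyFlink docs. Names and the Flink version are placeholders, and the docs recommend building a relocatable environment (e.g. `virtualenv --always-copy` or conda) rather than a plain venv:

```shell
# Build an environment containing PyFlink and ship it with the job.
python -m venv venv
source venv/bin/activate
pip install apache-flink==1.13.1
deactivate
zip -r venv.zip venv

# Submit the job with the archive; workers use the Python inside the archive.
flink run \
    -py sample.py \
    -pyarch venv.zip \
    -pyexec venv.zip/venv/bin/python
```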
+ user mailing list
I don't have permission to assign to you, but here is the JIRA ticket:
https://issues.apache.org/jira/browse/FLINK-23519
Thanks!
On Tue, Jul 27, 2021 at 4:40 AM Yun Tang wrote:
> Hi Mason,
>
> I think this request is reasonable and you could create a JIRA ticket so
> that w
This thread is duplicated on the dev mailing list [1].
[1]
https://lists.apache.org/x/thread.html/r87fa8153137a4968f6a4f6b47c97c4d892664d864c51a79574821165@%3Cdev.flink.apache.org%3E
Best,
D.
On Tue, Jul 27, 2021 at 5:38 PM Kathula, Sandeep
wrote:
> Hi,
>
> We have a simple Beam applicati
Hi,
We have a simple Beam application like a work count running with Flink
runner (Beam 2.26 and Flink 1.9). We are using Beam’s value state. I am trying
to read the state from a savepoint using Flink's State Processor API but am
getting a NullPointerException. Converted the whole code into Pur
Hi Till,
Having been unaware of this mail thread I've created a Jira Bug
https://issues.apache.org/jira/browse/FLINK-23509 which proposes also a simple
solution.
Regards
Matthias
Hi Mason,
I think this request is reasonable and you could create a JIRA ticket so that
we could resolve it later.
Best,
Yun Tang
From: Mason Chen
Sent: Tuesday, July 27, 2021 15:15
To: Yun Tang
Cc: Mason Chen ; user@flink.apache.org
Subject: Re: as-variable
Hi!
Try this:
sql.zipWithIndex.foreach { case (sql, idx) =>
  val result = tableEnv.executeSql(sql)
  if (idx == 7) {
    result.print()
  }
}
On Tue, Jul 27, 2021 at 4:38 PM, igyu wrote:
> tableEnv.executeSql(sql(0))
> tableEnv.executeSql(sql(1))
> tableEnv.executeSql(sql(2))
> tableEnv.ex
tableEnv.executeSql(sql(0))
tableEnv.executeSql(sql(1))
tableEnv.executeSql(sql(2))
tableEnv.executeSql(sql(3))
tableEnv.executeSql(sql(4))
tableEnv.executeSql(sql(5))
tableEnv.executeSql(sql(6))
tableEnv.executeSql(sql(7)).print()
that is OK
but I hope
Hi Team, I have set "kubernetes.jobmanager.annotations", but I can't find
these annotations on the K8s Deployment, although they can be found on the
job manager pod.
Is this by design, or were they just missed?
This feels like the simplest error, but I'm struggling to get past it. I
can run PyFlink jobs locally just fine by submitting them either via
`python sample.py` or `flink run --target local -py sample.py`. But when I
try to execute on a remote worker node, it always fails with this error:
table_e
Yup, your understanding is correct; that was the analogy I was trying to make!
> On Jul 26, 2021, at 7:57 PM, Yun Tang wrote:
>
> Hi Mason,
>
> In RocksDB, one state corresponds to a column family, and we could
> aggregate all RocksDB native metrics per column family. If my understanding
> is