First of all, are you sure the input data is correct? From the stack trace
it seems to me the issue might be that the input data is invalid.
Looking at the code of AvroToRowDataConverters, it looks like STRING
should work with Avro enums. Can you provide a minimal reproducer (without
Confluent schema registry)?
Hi Oran, can you check your ES logs / metrics?
Most issues we see with the ES sink are around incorrect batching and/or
overloaded clusters. Could it be your ES write queue is building up?
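In case batching is the culprit: the Elasticsearch table connector exposes bulk-flush options worth tuning. A sketch of a DDL with throttled batching (table name, columns, hosts, and the values are made up):

```sql
CREATE TABLE es_sink (
  doc_id STRING,
  payload STRING,
  PRIMARY KEY (doc_id) NOT ENFORCED
) WITH (
  'connector' = 'elasticsearch-7',
  'hosts' = 'http://es-host:9200',
  'index' = 'my-index',
  -- smaller, time-bounded batches keep pressure off the ES write queue
  'sink.bulk-flush.max-actions' = '500',
  'sink.bulk-flush.max-size' = '2mb',
  'sink.bulk-flush.interval' = '1s',
  -- back off instead of retrying immediately against an overloaded cluster
  'sink.bulk-flush.backoff.strategy' = 'EXPONENTIAL',
  'sink.bulk-flush.backoff.max-retries' = '3'
);
```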
On Wed, Oct 13, 2021 at 1:06 AM Oran Shuster wrote:
> Flink version 1.13.1
> ES Version 7.12.0
> Flink
Hi,
Assume I have a source, a stateful operator, and a sink operator:
Source -> Batch data and upload -> Push message to sink -> Sink runs insert
/ merge into a data warehouse.
I am wondering what would happen if the data has been uploaded by the
stateful operator but has not yet reached the
Hi Yuval,
Whether your pipeline can achieve an exactly-once delivery guarantee
depends on its operators. Usually Flink's fault tolerance mechanism is built
around periodic snapshots of intermediate state, called checkpoints. As
long as checkpointing is enabled and all the operators you are
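For reference, checkpointing can be enabled either in code or via flink-conf.yaml; a minimal sketch (the interval is an arbitrary example):

```yaml
execution.checkpointing.interval: 1min
execution.checkpointing.mode: EXACTLY_ONCE
```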
Hi Yuval,
If the pipeline fails before the next checkpoint, all the records in the buffer
should be replayed starting from the last completed checkpoint. The
replay usually starts from the source, which reads the records again from the
external system.
The assumption is always that after a successful
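As a stand-alone illustration of this replay model (plain Java, no Flink APIs; the records, the checkpoint interval, and the injected failure are all made up): a checkpoint remembers the source offset and commits the pending output, and recovery rolls back to the last completed checkpoint and re-reads from the external system.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of checkpoint-based replay; not Flink API.
public class CheckpointReplayDemo {

    static List<String> run() {
        List<String> externalSystem = List.of("a", "b", "c", "d", "e");

        List<String> committed = new ArrayList<>(); // output committed at checkpoints
        List<String> pending = new ArrayList<>();   // output since the last checkpoint
        int checkpointedOffset = 0;                 // source offset stored in the last checkpoint
        int offset = 0;
        boolean injectedFailure = false;

        while (offset < externalSystem.size()) {
            // "process" the next record
            pending.add(externalSystem.get(offset).toUpperCase());
            offset++;

            // simulate a crash before the next checkpoint completes
            if (offset == 3 && !injectedFailure) {
                injectedFailure = true;
                pending.clear();             // uncommitted results are lost
                offset = checkpointedOffset; // restore state, re-read from the source
                continue;
            }

            // checkpoint every 2 records: commit pending output, remember the offset
            if (offset % 2 == 0) {
                committed.addAll(pending);
                pending.clear();
                checkpointedOffset = offset;
            }
        }
        committed.addAll(pending); // drain at end of input
        return committed;
    }

    public static void main(String[] args) {
        System.out.println(run()); // each record exactly once despite the crash
    }
}
```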
Hi,
Could you elaborate on why you would like to replace the S3 client?
Best regards,
Martijn
On Wed, 13 Oct 2021 at 17:18, Tamir Sagi
wrote:
> I found the dependency
>
>     <dependency>
>       <groupId>org.apache.hadoop</groupId>
>       <artifactId>hadoop-aws</artifactId>
>       <version>3.3.1</version>
>     </dependency>
>
> Apparently it's possible; there is a method
>
The KafkaSource and KafkaSourceBuilder appear to prevent users from
providing their own KafkaSubscriber. Am I overlooking something?
In my case I have an external system that controls which topics we should
be ingesting, and that set can change over time. I need to add and remove topics
as we refresh
>
> Can you provide a minimal reproducer (without Confluent schema registry)
> with a valid input?
>
Please download and unzip the attached file.
- src/main/avro/MyProtocol.avdl
- MyRecord, MyEntry, and MyEnumType are defined there
- "mvn generate-sources" will auto-generate the Java classes
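For readers who cannot open the attachment, a hypothetical MyProtocol.avdl along these lines (the actual fields and enum symbols in the attachment may differ):

```
@namespace("com.example")
protocol MyProtocol {
  enum MyEnumType { A, B, C }

  record MyEntry {
    MyEnumType type;
  }

  record MyRecord {
    array<MyEntry> entries;
  }
}
```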
Hello,
I just started testing Flink 1.14.0 and noticed some odd behavior. This is
for a Flink cluster with ZooKeeper for HA and two JobManagers (one leader, one
standby). The UI on the leader works fine. The UI on the other JobManager does
not load any job-specific data. The same applies to the
Hi Matthias,
Do you have any update here?
Thank you,
Doug
From: Gusick, Doug S [Engineering]
Sent: Thursday, October 7, 2021 9:03 AM
To: Hailu, Andreas [Engineering] ; Matthias Pohl
Cc: user@flink.apache.org; Erai, Rahul [Engineering]
Subject: RE: FlinkJobNotFoundException
Hi Matthias,
I
To test a Flink Table UDF I wrote a while ago, I created this code:
(Full link:
https://github.com/nielsbasjes/yauaa/blob/v6.0/udfs/flink-table/src/test/java/nl/basjes/parse/useragent/flink/table/TestTableFunction.java#L80
)
// The base execution environment
StreamExecutionEnvironment
I found the dependency

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-aws</artifactId>
      <version>3.3.1</version>
    </dependency>

Apparently it's possible; there is a method
setAmazonS3Client
I think I found the solution.
Thanks.
Tamir.
From: Tamir Sagi
Sent: Wednesday, October 13, 2021 5:44 PM
To:
Hey community.
I would like to know if there is any way to replace the S3 client in the Hadoop
plugin [1] with a custom client (AmazonS3).
I did notice that the Hadoop plugin supports replacing the implementation of
S3AFileSystem using
"fs.s3a.impl" (in flink-conf.yaml it will be "s3.impl"), but not the
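For reference, the filesystem-implementation switch mentioned above looks like this in flink-conf.yaml (the class name is a made-up placeholder):

```yaml
# Flink's s3 plugin forwards this to Hadoop as fs.s3a.impl
s3.impl: com.example.CustomS3AFileSystem
```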
Hi!
I suppose you want to read from different topics every now and then? Does
the topic-pattern option [1] in the Table API Kafka connector meet your needs?
[1]
https://ci.apache.org/projects/flink/flink-docs-master/docs/connectors/table/kafka/#topic-pattern
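As a plain-Java sketch of how such a pattern selects topics (topic names are made up; the connector applies the same regex idea when subscribing):

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class TopicPatternDemo {

    // Keep only the topic names that fully match the pattern.
    public static List<String> matching(Pattern pattern, List<String> topics) {
        return topics.stream()
                .filter(t -> pattern.matcher(t).matches())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Pattern pattern = Pattern.compile("orders-.*");
        List<String> topics = List.of("orders-eu", "orders-us", "payments");
        System.out.println(matching(pattern, topics)); // [orders-eu, orders-us]
    }
}
```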
Preston Price wrote on Thu, Oct 14, 2021 at 1:34 AM:
Hi!
To implement the renaming of fields with the new API, try this:
tableEnv.createTemporaryView(
    "AgentStream",
    inputStream,
    Schema.newBuilder()
        .columnByExpression("useragent", "f0")
        .columnByExpression("expectedDeviceClass", "f1")
        .build());
Hi guys,
I'm still running into this problem. I checked the logs, and there is no
evidence that the python process crashed. I checked the process IDs and
they are still active after the error. No `killed process` messages in
/var/log/messages.
I don't think it's necessarily related to
@Roc Marshal, hello:
I skimmed the MySQL catalog test code on your FLINK-15352 branch, and I have a question:
In the current MySQL implementation, the TableEnvironment loads MySQL metadata through
the JDBC driver. Going the other way: if I execute a DDL statement such as CREATE TABLE
catalog.database.table through the Flink Java API or the SQL client, the metadata is
written into MySQL, so the next time I reference that table I would not need to create
it again, since MySQL already holds its metadata. Can this be supported?
The problem I am facing right now is that the logs of different jobs cannot be
told apart within one session cluster.
I have read the article written by JD.com:
https://www.infoq.cn/article/1nvlduu82ihmusxxqruq
Does the community have any plans in this area?
https://issues.apache.org/jira/browse/FLINK-17969
The PR for this ticket has been closed as well.
My SELECT produces several sums, and each sum value comes with a type; in the end I
insert them into a MySQL table whose structure is
(id, type, value).
The SQL is:
CREATE TABLE kafka_table (
vin STRING,
speed DOUBLE,
brake DOUBLE,
hard_to
Could it be that you set 'lookup.async' = 'true', and this problem shows up when the
rowkey is null?
https://issues.apache.org/jira/browse/FLINK-24528
Michael Ran wrote on Fri, Jul 23, 2021 at 10:44 AM:
> java.util.concurrent.ExecutionException: java.lang.NoClassDefFoundError:
> org/apache/flink/hbase/shaded/org/apache/commons/io/IOUtils -- a jar is missing
> On
So the reason this is not planned is that we currently assume the data processed by
Flink SQL is clean and already cleansed, right?
Ada Luna wrote on Sun, Sep 19, 2021 at 7:43 PM:
>
> It is mainly about dirty data: dirty data produced by the source, the sink, or other
> operators. I want to side-output such data into an external database for storage.
>
> Caizhi Weng wrote on Thu, Sep 16, 2021 at 1:52 PM:
> >
> > Hi!
> >
> > As far as I know there are currently no plans to support side outputs. Could you
> > describe your requirements and scenario?
> >
> > Ada Luna wrote on Wed, Sep 15, 2021 at 8:38 PM:
> >
> > >
Xuchen, hello.
Regarding the problem you described: the current MySQLCatalog implementation does not
support this. If you need the feature, the corresponding methods have to be overridden.
In https://github.com/apache/flink/pull/16962, twalthr and jark mentioned that this
part will be refactored later on. You can leave a comment directly on the corresponding
JIRA issue or PR for further discussion.
The XXXCatalog implementations currently based on AbstractJdbcCatalog, including
PostgresCatalog and the in-progress MySQLCatalog, support neither creating nor altering
tables. GenericInMemoryCatalog and HiveCatalog do support them.
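For context, such a JDBC-based catalog is typically registered like this in SQL (all values are placeholders); tables are then read through it rather than created in it:

```sql
CREATE CATALOG my_pg WITH (
  'type' = 'jdbc',
  'default-database' = 'mydb',
  'username' = 'user',
  'password' = 'secret',
  'base-url' = 'jdbc:postgresql://db-host:5432'
);
```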
Hi!
Judging from the error message, the client is trying to connect to a cluster at
http://127.0.0.1:8081/. Your yarn session is presumably not local, so most likely the
Flink configuration used by the sql client is wrong; please check the corresponding
Flink configuration file.
maker_d...@foxmail.com wrote on Thu, Oct 14, 2021 at 9:18 AM:
> Hello everyone:
> Urgent help needed!
> I have been submitting SQL jobs with sql-client all along, but today submission
> suddenly fails with the following error:
>
> Exception in thread "main"
>
Hi!
This looks like a bug; I have filed an issue [1] where you can follow its progress.
As described in the issue, it currently appears that making the constant strings the
same length, or casting them all to VARCHAR, avoids the problem, so you can work
around it that way for now.
[1] https://issues.apache.org/jira/browse/FLINK-24537
kcz <573693...@qq.com.invalid> wrote on Wed, Oct 13, 2021 at 5:29 PM:
> Because the SELECT produces several sum values, each sum value comes with a type, and
> finally I insert them into a MySQL table whose structure is
>
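A sketch of the VARCHAR-cast workaround (the column and the string literals are made up): cast each constant branch explicitly so both sides get the same type:

```sql
SELECT
  CASE WHEN speed > 100 THEN CAST('fast' AS VARCHAR)
       ELSE CAST('slow' AS VARCHAR)
  END AS label
FROM kafka_table;
```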
You can use the inverse condition to write the dirty data into another table. The
source will be reused, so the effect is the same as a side output.
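A Flink SQL sketch of such inverse-condition routing (the tables and the predicate are made up); a statement set lets both INSERTs share one scan of the source:

```sql
BEGIN STATEMENT SET;

-- records that pass the cleansing predicate
INSERT INTO clean_sink
SELECT * FROM src WHERE amount IS NOT NULL;

-- the inverse condition captures the dirty records
INSERT INTO dirty_sink
SELECT * FROM src WHERE amount IS NULL;

END;
```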
On Oct 13, 2021 at 16:28:57, Ada Luna wrote:
> So the reason this is not planned is that we currently assume the data processed by
> Flink SQL is clean and already cleansed, right?
>
> Ada Luna wrote on Sun, Sep 19, 2021 at 7:43 PM:
>
>
> It is mainly about dirty data: dirty data produced by the source, the sink, or other
> operators. I want to side-output such data into an external database for storage.
>
>
> Caizhi Weng wrote on Thu, Sep 16, 2021 at 1:52 PM:
>
> >
Hello everyone:
Urgent help needed!
I have been submitting SQL jobs with sql-client all along, but today submission
suddenly fails with the following error:
Exception in thread "main" org.apache.flink.table.client.SqlClientException:
Unexpected exception. This is a bug. Please consider filing an issue.
at org.apache.flink.table.client.SqlClient.main(SqlClient.java:215)
Caused by: