Could you check whether this class is present in the corresponding jar? For example (assuming your Blink distribution is unpacked at /tmp/blink):
cd /tmp/blink/opt/connectors/kafka011
jar -tf flink-connector-kafka-0.11_2.11-*.jar | grep Crc32C
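If the JDK `jar` tool is not on the PATH, note that a jar is just a zip archive, so the same check can be done with a small Python sketch (the jar path in the usage comment is a placeholder):

```python
import zipfile

def find_entries(jar_path, needle):
    """Return all entries in a jar (a jar is a plain zip) whose path contains `needle`."""
    with zipfile.ZipFile(jar_path) as jf:
        return [name for name in jf.namelist() if needle in name]

# Usage (placeholder path):
# find_entries("/tmp/blink/opt/connectors/kafka011/flink-connector-kafka-0.11_2.11-<ver>.jar", "Crc32C")
```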
On Fri, Feb 22, 2019 at 2:56 PM 张洪涛 wrote:
>
>
> Hi all!
>
>
> I am testing the Blink sql client kafka sink connector, but writes fail. Here are my steps
>
>
> Environment setup
> blink standalone
/flink/kafka011/shaded/org/apache/kafka/common/utils/Crc32C$PureJavaChecksumFactory.class
> >>
> org/apache/flink/kafka011/shaded/org/apache/kafka/common/utils/Crc32C.class
> >>
> >>
> org/apache/flink/kafka011/shaded/org/apache/kafka/common/utils/PureJav
>
> >> > >On Fri, Feb 22, 2019 at 6:54 PM 张洪涛 wrote:
> >> > >
> >> > >>
> >> > >>
> >> > >> Yes, it does contain this class
> >> > >>
> >> > >>
> >> > >> jar -tf flink-connector-kafka-0.11_2.11-*.jar | grep Crc32C
> >> > >>
>
1. Flink SQL and Hive syntax are not fully compatible.
2&3. I'll bring in someone better placed to answer: @Bowen Li
On Wed, Feb 27, 2019 at 2:43 PM bigdatayunzhongyan
wrote:
> Sorry!
>
> 1. There is a problem in SQL parsing: formatted is not recognized, while desc xxx works
> Flink SQL> desc formatted customer;
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.flink.table.api.TableException
It looks like the tableSink became null for some reason; this needs debugging.
Could you share your conf/sql-client-defaults.yaml?
On Thu, Mar 7, 2019 at 1:31 PM yuess_coder <642969...@qq.com> wrote:
> The error log is:
>
>
>
>
> java.lang.NullPointerException
> at
> org.apache.flink.table.api.TableEnvironment.insertInto(TableEnvironment.scala:1300)
> a
> # (optional) port from cluster to gateway
> gateway-port: 0
>
>
>
>
>
> #==============================================================================
> # Catalog properties
> #==============================================================================
>
Try:
xx.orderBy('id.desc, 'value1.asc)
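Purely to illustrate the intended ordering (plain Python, not the Table API), sorting by id descending and then value1 ascending behaves like:

```python
# Rows of (id, value1); goal: id descending, then value1 ascending.
rows = [(1, "b"), (2, "b"), (1, "a"), (2, "a")]
ordered = sorted(rows, key=lambda r: (-r[0], r[1]))
print(ordered)  # [(2, 'a'), (2, 'b'), (1, 'a'), (1, 'b')]
```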
*Best Regards,*
*Zhenghua Gao*
On Sat, Mar 16, 2019 at 10:28 AM 刘 文
wrote:
>
> In the output, rows are only sorted by id descending, not by value1 ascending.
>
>
>
>
>
>
>
> package
> com.opensourceteams.module.bigdata.flink.example.tableapi.operation.or
The yaml format is wrong; it looks like an indentation problem.
You can verify it with an online yaml editor, e.g. [1].
For more on yaml syntax, see [2][3].
[1] http://nodeca.github.io/js-yaml/
[2] http://www.ruanyifeng.com/blog/2016/07/yaml.html
[3] https://en.wikipedia.org/wiki/YAML
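For illustration only (the table name and fields below are placeholders, not from the original mail), a correctly indented entry under `tables:` keeps every key of one list item aligned at the same column:

```yaml
tables:
  - name: MyTable        # all keys of this "- " list item
    type: source-table   # must line up at the same column;
    update-mode: append  # shifting any of them changes the nesting
    schema:
      - name: id         # nested lists indent two more spaces
        type: INT
```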
*Best Regards,*
*Zhenghua Gao*
On Mon, Apr 1, 2019 at 11:51 AM 曾晓勇 <469663...@qq.com>
format and schema should be at the same level.
See the TableNumber1 configuration in the flink-sql-client tests: test-sql-client-defaults.yaml
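A sketch of what "same level" means here (connector and format details are placeholders; see the real test-sql-client-defaults.yaml for the exact TableNumber1 entry):

```yaml
tables:
  - name: TableNumber1
    type: source-table
    connector:
      type: filesystem
      path: /tmp/data.csv   # placeholder path
    format:                 # format ...
      type: csv
    schema:                 # ... and schema are siblings of connector,
      - name: id            # not nested inside one another
        type: INT
```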
*Best Regards,*
*Zhenghua Gao*
On Mon, Apr 1, 2019 at 4:09 PM 曾晓勇 <469663...@qq.com> wrote:
> @1543332...@qq.com
>
> Thanks. I checked the formatting problem and it is fixed now. But the latest version downloaded straight from the Flink website fails at startup; the exact error is below, and I would like to debug it
st row.
[1]
https://github.com/apache/flink/blob/master/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/physical/stream/StreamExecDeduplicateRule.scala
*Best Regards,*
*Zhenghua Gao*
On Tue, Aug 6, 2019 at 2:28 PM huang wrote:
> Hi all,
>
/java/org/codehaus/janino/Scanner.java#L71
*Best Regards,*
*Zhenghua Gao*
On Wed, Aug 7, 2019 at 12:02 AM Vincent Cai wrote:
> Hi all,
> In Spark, you can get the code generated by Catalyst by calling Dataset's queryExecution.debug.codegen() method.
> Is there a similar way in Flink to obtain the generated code?
>
>
> Reference link:
> https://medium
JM is restarted by YARN on failure [1].
[1]
https://ci.apache.org/projects/flink/flink-docs-stable/ops/jobmanager_high_availability.html#yarn-cluster-high-availability
*Best Regards,*
*Zhenghua Gao*
On Tue, Aug 13, 2019 at 4:51 PM 陈帅 wrote:
> A question: with flink on yarn submitted in per-job mode, how can high availability be guaranteed?
>
+1 for dropping.
*Best Regards,*
*Zhenghua Gao*
On Thu, Dec 5, 2019 at 11:08 AM Dian Fu wrote:
> +1 for dropping them.
>
> Just FYI: there was a similar discussion a few months ago [1].
>
> [1]
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/DISCUSS-Drop
1) Screenshots sent directly to the ML are not displayed; upload to a third-party image host and link to it.
2) Please confirm whether time1/time2 are of type TIMESTAMP.
3) TIMESTAMP '2003-01-02 10:00:00' in the docs is a standard SQL timestamp literal; your
TIMESTAMP time1 cannot be treated as a timestamp literal.
*Best Regards,*
*Zhenghua Gao*
On Mon, Dec 16, 2019 at 3:51 PM 1530130567 <1530130...@qq.com> wrote:
> Hi all,
DataStream users can set a TTL directly on the StateDescriptor [1].
See the code in the documentation for details.
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/stream/state/state.html#state-time-to-live-ttl
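Flink's real API is a StateTtlConfig applied to the StateDescriptor (see [1] above). Purely to illustrate the expiry semantics (expired values are never returned), here is a plain-Python sketch; the class and method names are made up, not Flink's:

```python
import time

class TtlValueState:
    """Toy value state with a TTL; expired values behave as if cleared."""
    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock            # injectable clock, handy for testing
        self._value = None
        self._written_at = None

    def update(self, value):
        self._value = value
        self._written_at = self.clock()   # TTL timer resets on each write

    def value(self):
        if self._written_at is None:
            return None
        if self.clock() - self._written_at > self.ttl:
            return None                   # expired: never returned
        return self._value
```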
*Best Regards,*
*Zhenghua Gao*
On Thu, Mar 12, 2020 at 9:25 PM 潘明文 wrote:
> Hello,
> checkpoint based on RocksDBStateBacke
[1]
https://github.com/apache/flink/blob/master/flink-formats/flink-json/src/main/java/org/apache/flink/formats/json/TimeFormats.java#L38
*Best Regards,*
*Zhenghua Gao*
On Mon, Mar 23, 2020 at 4:27 PM 吴志勇 <1154365...@qq.com> wrote:
> As the subject says:
> I output json-formatted data to kafka
> {"id"
CURRENT_TIMESTAMP returns TIMESTAMP (WITHOUT TIME ZONE);
its semantics match java.time.LocalDateTime.
Its string representation does not change with the time zone (as you observed, it matches UTC+0).
Your requirement can be met with CONVERT_TZ(timestamp_string, time_zone_from_string,
time_zone_to_string).
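The semantics of such a conversion can be sketched in plain Python (an illustration of the idea only, not Flink's implementation):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def convert_tz(ts, tz_from, tz_to):
    """Interpret a naive 'YYYY-MM-DD HH:MM:SS' string in tz_from and render it in tz_to."""
    naive = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
    converted = naive.replace(tzinfo=ZoneInfo(tz_from)).astimezone(ZoneInfo(tz_to))
    return converted.strftime("%Y-%m-%d %H:%M:%S")

print(convert_tz("2020-03-23 12:00:00", "UTC", "Asia/Shanghai"))  # 2020-03-23 20:00:00
```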
*Best Regards,*
*Zhenghua Gao*
On Mon, Mar 23, 2020 at 10:12 PM DONG, Weike
wrote
Please confirm that the kafka connector jar is under flink/lib.
The current error looks like the kafka connector jar cannot be found.
*Best Regards,*
*Zhenghua Gao*
On Wed, Mar 25, 2020 at 4:18 PM 赵峰 wrote:
> It is not a syntax problem; creating the table also works for me. The error occurs when querying. Have you tried querying the data, or writing data into a file table?
>
>
>
>
> See this document:
>
> https://ci.apache.org/projects/flink/flink-doc
Hi Chief,
Currently the Hive connector reads data via InputFormatSourceFunction.
InputFormatSourceFunction does not pre-assign splits; instead, each source task requests a split from the master.
If some source tasks are scheduled early and consume all the splits, the source tasks scheduled later have nothing left to read.
You can check the JM/TM logs to confirm whether the first ten scheduled source tasks read all of the data.
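Purely as an illustration of this pull-based assignment (the class and method names below are made up, not Flink's), a plain-Python sketch:

```python
from collections import deque

class SplitAssigner:
    """Master-side queue: each source task pulls the next unassigned split."""
    def __init__(self, splits):
        self._pending = deque(splits)

    def request_split(self, task_id):
        # No per-task pre-partitioning: whichever task asks first wins.
        return self._pending.popleft() if self._pending else None

assigner = SplitAssigner([f"split-{i}" for i in range(3)])
eager = [assigner.request_split("task-0") for _ in range(3)]  # early task drains everything
late = assigner.request_split("task-1")                       # None: nothing left to read
```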
*Best Regards,*
*Zhenghua Gao*
On Wed, Mar 25
Hi Jark,
There is indeed a problem here.
The current issue is that Calcite itself does not support TIMESTAMP WITH TIME ZONE.
*Best Regards,*
*Zhenghua Gao*
On Tue, Mar 24, 2020 at 11:00 PM Jark Wu wrote:
> Thanks for reporting this Weike.
>
> First, I think Flink currently returning TIMESTAMP WITHOUT TIME ZONE is problematic,
> because the SQL standard (SQL:2011 Part 2 Section 6.32) defines
Hi 张宇
It looks like a bug in TypeMappingUtils when validating fields' physical types against their logical types.
I've opened an issue: https://issues.apache.org/jira/browse/FLINK-16800
*Best Regards,*
*Zhenghua Gao*
On Fri, Mar 20, 2020 at 5:28 PM 宇张 wrote:
> hi,
> Got it, let me restate:
> streamTableEnv
> .connect(
>
not enabled as default.
'connector.lookup.max-retries' = '3', -- optional, max retry times if lookup database failed
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/connect.html#jdbc-connector
*Best Regards,*
*Zhenghua Gao*
On Wed, Apr 15, 2020 at
*Best Regards,*
*Zhenghua Gao*
On Wed, Apr 15, 2020 at 7:48 PM guaishushu1...@163.com <
guaishushu1...@163.com> wrote:
> Hi all,
> Does flink-1.10 SQL support DDL for dimension (lookup) tables? The community docs suggest mysql and hbase are supported, but which field must be explicitly declared to mark a created table as a dimension table?
>
>
>
> guaishushu1...@163.com
>