Re:Re: Problem with RichMapFunction

2020-05-24 Post guanyq



>> 1. The parallelism of this RichMapFunction cannot be raised; it only works up to 4. Once the parallelism goes higher, all kinds of problems appear, even though the host memory and the memory allocated to the taskmanager are sufficient.

-- Could you paste the code?
-- And the command you used to submit the job?
>> 2. All slots of this RichMapFunction are allocated to the same taskmanager, i.e. the same host. I have not found an interface to spread them across different taskmanagers.

-- In which mode was the job submitted (yarn session, yarn per-job, or standalone)?


On 2020-05-25 11:47:48, "tison" wrote:
>Regarding the first question, it would help to spell out what exactly the "all kinds of problems" are.
>
>Regarding the second question, as far as I know Flink currently does not support pinning the scheduling location at the per-parallelism (SubTask) level. A workaround is to configure each TM to hold only one
>Slot. I am cc'ing Xintong on this; his work may be able to help you.
>
>Best,
>tison.
>
>
>xue...@outlook.com wrote on Mon, May 25, 2020 at 11:29 AM:
>
>> I ran into two problems:
>>   Background: a Flink v1.10 cluster, dozens of hosts, each with 16 CPUs and 50 GB of memory; the overall job parallelism is 200.
>>   One of my RichMapFunctions loads existing (dimension) data in open().
>>   Because the dimension data and the main data are very discrete, all of the dimension data has to be loaded into memory.
>>
>> 1. The parallelism of this RichMapFunction cannot be raised; it only works up to 4. Once the parallelism goes higher, all kinds of problems appear, even though the host memory and the memory allocated to the taskmanager are sufficient.
>>
>>
>> 2. All slots of this RichMapFunction are allocated to the same taskmanager, i.e. the same host. I have not found an interface to spread them across different taskmanagers.
>>
>> Put simply:
>>
>> 1. When a RichMapFunction needs to load a large amount of dimension data in open(), what limits how far its parallelism can be raised?
>>
>> 2. How can a single operator be made to spread across different taskmanagers?
>>
>>
>>
>>
>> Sent from Mail for Windows 10
>>
>>
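For context, a minimal sketch of the kind of RichMapFunction described above, loading dimension data into memory in open(). The record types and the loadDimensionTable helper are hypothetical stand-ins, not the poster's actual code:

import org.apache.flink.api.common.functions.RichMapFunction
import org.apache.flink.configuration.Configuration

// Hypothetical record types, for illustration only.
case class MainRecord(dimKey: String, value: Long)
case class EnrichedRecord(dimKey: String, value: Long, dimValue: String)

class DimensionEnrichMap extends RichMapFunction[MainRecord, EnrichedRecord] {

  // Each parallel subtask holds its own full copy of the dimension table.
  @transient private var dimTable: Map[String, String] = _

  override def open(parameters: Configuration): Unit = {
    // Stand-in for the real bulk load (e.g. from a database or file).
    dimTable = loadDimensionTable()
  }

  override def map(in: MainRecord): EnrichedRecord =
    EnrichedRecord(in.dimKey, in.value, dimTable.getOrElse(in.dimKey, "unknown"))

  private def loadDimensionTable(): Map[String, String] =
    Map("k1" -> "v1")
}

Because every parallel instance loads its own copy in open(), raising the parallelism multiplies the memory needed on whichever TaskManagers host the subtasks, which fits the symptoms described above.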


Reply: Problem with RichMapFunction

2020-05-24 Post (Jiacheng Jiang)
Flink 1.10 can spread slots evenly across TMs: set cluster.evenly-spread-out-slots: true






---------------- Original message ----------------
From: "xue...@outlook.com" (Sent from Mail for Windows 10: https://go.microsoft.com/fwlink/?LinkId=550986)

Re: Problem with RichMapFunction

2020-05-24 Post tison
Regarding the first question, it would help to spell out what exactly the "all kinds of problems" are.

Regarding the second question, as far as I know Flink currently does not support pinning the scheduling location at the per-parallelism (SubTask) level. A workaround is to configure each TM to hold only one
Slot. I am cc'ing Xintong on this; his work may be able to help you.

Best,
tison.


xue...@outlook.com wrote on Mon, May 25, 2020 at 11:29 AM:

> I ran into two problems:
>   Background: a Flink v1.10 cluster, dozens of hosts, each with 16 CPUs and 50 GB of memory; the overall job parallelism is 200.
>   One of my RichMapFunctions loads existing (dimension) data in open().
>   Because the dimension data and the main data are very discrete, all of the dimension data has to be loaded into memory.
>
> 1. The parallelism of this RichMapFunction cannot be raised; it only works up to 4. Once the parallelism goes higher, all kinds of problems appear, even though the host memory and the memory allocated to the taskmanager are sufficient.
>
>
> 2. All slots of this RichMapFunction are allocated to the same taskmanager, i.e. the same host. I have not found an interface to spread them across different taskmanagers.
>
> Put simply:
>
> 1. When a RichMapFunction needs to load a large amount of dimension data in open(), what limits how far its parallelism can be raised?
>
> 2. How can a single operator be made to spread across different taskmanagers?
>
>
>
>
> Sent from Mail for Windows 10
>
>


Problem with RichMapFunction

2020-05-24 Post xue...@outlook.com
I ran into two problems:
  Background: a Flink v1.10 cluster, dozens of hosts, each with 16 CPUs and 50 GB of memory; the overall job parallelism is 200.
  One of my RichMapFunctions loads existing (dimension) data in open().
  Because the dimension data and the main data are very discrete, all of the dimension data has to be loaded into memory.

1. The parallelism of this RichMapFunction cannot be raised; it only works up to 4. Once the parallelism goes higher, all kinds of problems appear, even though the host memory and the memory allocated to the taskmanager are sufficient.

2. All slots of this RichMapFunction are allocated to the same taskmanager, i.e. the same host. I have not found an interface to spread them across different taskmanagers.

Put simply:

1. When a RichMapFunction needs to load a large amount of dimension data in open(), what limits how far its parallelism can be raised?

2. How can a single operator be made to spread across different taskmanagers?




Sent from Mail for Windows 10



Re: Could not find a suitable table factory for 'TableSourceFactory'

2020-05-24 Post Leonard Xu
Hi,
Indeed: there are too many connector jars, the DataStream and Table APIs have two separate sets of them, and the format jars also have to be imported by the user. This really is confusing for users.

The community is also discussing a Flink packaging proposal [1] to lower the entry cost for users.


Best,
Leonard Xu
[1]http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Releasing-quot-fat-quot-and-quot-slim-quot-Flink-distributions-tc40237.html#none
 


> On May 25, 2020 at 11:12, macia kk wrote:
> 
> Thanks a lot, much appreciated.
> 
> It turns out flink-connector-kafka_2.11 left in build.sbt was causing the conflict.
> 
> What puzzles me is that, as a newcomer, I could not find in the docs that `flink-sql-connector-kafka` is the one to use; it took a lot of searching to figure this out.
> 
> Leonard Xu wrote on Mon, May 25, 2020 at 10:44 AM:
> 
>> Hi,
>> 
>> 
>>> One more question: following the docs I used `flink-connector-kafka_2.11` and it never ran; later I saw others hit the same problem and switched to
>>> `flink-sql-connector-kafka-0.11`,
>>> which then worked. What is the difference between the two? If they are not interchangeable, the docs should state that the latter is the one to use for the Table API.
>> 
>> flink-connector-kafka_2.11 is for DataStream API programs.
>> flink-sql-connector-kafka-0.11_2.11 is for Table API & SQL
>> programs, where 0.11 is the Kafka version and 2.11 is the Scala version.
>> A Table API & SQL program does not need the flink-connector-kafka_2.11
>> dependency; in your case, remove the DataStream connector dependency,
>> change the SQL connector dependency to flink-sql-connector-kafka-0.11_2.11, and try again.
>> 
>> 
>> Best,
>> Leonard Xu
>> 
>> 
>> 
>> 
>> 
>> 
>>> 
>>> macia kk wrote on Mon, May 25, 2020 at 10:05 AM:
>>> 
 built.sbt
 
 val flinkVersion = "1.10.0"
 libraryDependencies ++= Seq(
 "org.apache.flink" %% "flink-streaming-scala" % flinkVersion ,
 "org.apache.flink" %% "flink-scala" % flinkVersion,
 "org.apache.flink" %% "flink-statebackend-rocksdb" % flinkVersion,
 
 "org.apache.flink" % "flink-table-common" % flinkVersion,
 "org.apache.flink" %% "flink-table-api-scala" % flinkVersion,
 "org.apache.flink" %% "flink-table-api-scala-bridge" % flinkVersion,
 "org.apache.flink" %% "flink-table-planner-blink" % flinkVersion %
>> "provided",
 
 "org.apache.flink" %% "flink-connector-kafka" % flinkVersion,
 "org.apache.flink" %% "flink-sql-connector-kafka-0.11" %
>> flinkVersion,// < Kafka 0.11
 "org.apache.flink" % "flink-json" % flinkVersion
 )
 
 
 Leonard Xu wrote on Mon, May 25, 2020 at 9:33 AM:
 
> Hi,
> Is the kafka connector you are using version 0.11? The error looks like a version mismatch.
> 
> Best,
> Leonard Xu
> 
>> On May 25, 2020 at 02:44, macia kk wrote:
>> 
>> Thanks, I found the answer in the earlier mailing-list archive. Now I have hit a new problem and have been stuck for quite a while:
>> 
>> Table API, sink to Kafka
>> 
>>  val result = bsTableEnv.sqlQuery("SELECT * FROM " + "")
>> 
>>  bsTableEnv
>>.connect(
>>  new Kafka()
>>.version("0.11") // required: valid connector versions are
>>.topic("aaa") // required: topic name from which the table is
> read
>>.property("zookeeper.connect", "xxx")
>>.property("bootstrap.servers", "yyy")
>>  )
>>.withFormat(new Json())
>>.withSchema(new Schema()
>>  .field("ts", INT())
>>  .field("table", STRING())
>>  .field("database", STRING())
>>)
>>.createTemporaryTable("z")
>> 
>>  result.insertInto("m")
>> 
>> Error:
>> 
>> java.lang.NoSuchMethodError:
>> 
> 
>> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.(Ljava/lang/String;Lorg/apache/flink/streaming/util/serialization/KeyedSerializationSchema;Ljava/util/Properties;Ljava/util/Optional;)V
>>  at
> 
>> org.apache.flink.streaming.connectors.kafka.Kafka011TableSink.createKafkaProducer(Kafka011TableSink.java:58)
>>  at
> 
>> org.apache.flink.streaming.connectors.kafka.KafkaTableSinkBase.consumeDataStream(KafkaTableSinkBase.java:95)
>>  at
> 
>> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.scala:140)
>>  at
> 
>> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.scala:48)
>>  at
> 
>> org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:58)
>>  at
> 
>> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlan(StreamExecSink.scala:48)
>>  at
> 
>> org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:60)
>>  at
> 
>> org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:59)
>>  at
> 
>> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>>  at
> 
>> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>>  at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>>  at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>>  at
>> scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>>  at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>>  at
> 

Re: Could not find a suitable table factory for 'TableSourceFactory'

2020-05-24 Post Leonard Xu
Hi,


> One more question: following the docs I used `flink-connector-kafka_2.11` and it never ran; later I saw others hit the same problem and switched to
> `flink-sql-connector-kafka-0.11`,
> which then worked. What is the difference between the two? If they are not interchangeable, the docs should state that the latter is the one to use for the Table API.

 flink-connector-kafka_2.11 is for DataStream API programs.
 flink-sql-connector-kafka-0.11_2.11 is for Table API & SQL 
programs, where 0.11 is the Kafka version and 2.11 is the Scala version.
A Table API & SQL program does not need the flink-connector-kafka_2.11 
dependency; in your case, remove the DataStream connector dependency,
change the SQL connector dependency to flink-sql-connector-kafka-0.11_2.11, and try again.
 

Best,
Leonard Xu
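Following this advice, the dependency list from the built.sbt quoted below would become roughly the following (a sketch; only the Kafka connector lines change, and the DataStream connector is dropped):

val flinkVersion = "1.10.0"
libraryDependencies ++= Seq(
  "org.apache.flink" %% "flink-streaming-scala" % flinkVersion,
  "org.apache.flink" %% "flink-scala" % flinkVersion,
  "org.apache.flink" %% "flink-statebackend-rocksdb" % flinkVersion,

  "org.apache.flink" % "flink-table-common" % flinkVersion,
  "org.apache.flink" %% "flink-table-api-scala" % flinkVersion,
  "org.apache.flink" %% "flink-table-api-scala-bridge" % flinkVersion,
  "org.apache.flink" %% "flink-table-planner-blink" % flinkVersion % "provided",

  // Table API & SQL Kafka connector only; "flink-connector-kafka" (the
  // DataStream connector) is removed to avoid the class conflict.
  "org.apache.flink" %% "flink-sql-connector-kafka-0.11" % flinkVersion,
  "org.apache.flink" % "flink-json" % flinkVersion
)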






> 
> macia kk wrote on Mon, May 25, 2020 at 10:05 AM:
> 
>> built.sbt
>> 
>> val flinkVersion = "1.10.0"
>> libraryDependencies ++= Seq(
>>  "org.apache.flink" %% "flink-streaming-scala" % flinkVersion ,
>>  "org.apache.flink" %% "flink-scala" % flinkVersion,
>>  "org.apache.flink" %% "flink-statebackend-rocksdb" % flinkVersion,
>> 
>>  "org.apache.flink" % "flink-table-common" % flinkVersion,
>>  "org.apache.flink" %% "flink-table-api-scala" % flinkVersion,
>>  "org.apache.flink" %% "flink-table-api-scala-bridge" % flinkVersion,
>>  "org.apache.flink" %% "flink-table-planner-blink" % flinkVersion % 
>> "provided",
>> 
>>  "org.apache.flink" %% "flink-connector-kafka" % flinkVersion,
>>  "org.apache.flink" %% "flink-sql-connector-kafka-0.11" % flinkVersion,  
>>   // < Kafka 0.11
>>  "org.apache.flink" % "flink-json" % flinkVersion
>> )
>> 
>> 
>> Leonard Xu wrote on Mon, May 25, 2020 at 9:33 AM:
>> 
>>> Hi,
>>> Is the kafka connector you are using version 0.11? The error looks like a version mismatch.
>>> 
>>> Best,
>>> Leonard Xu
>>> 
 On May 25, 2020 at 02:44, macia kk wrote:
 
 Thanks, I found the answer in the earlier mailing-list archive. Now I have hit a new problem and have been stuck for quite a while:
 
 Table API, sink to Kafka
 
   val result = bsTableEnv.sqlQuery("SELECT * FROM " + "")
 
   bsTableEnv
 .connect(
   new Kafka()
 .version("0.11") // required: valid connector versions are
 .topic("aaa") // required: topic name from which the table is
>>> read
 .property("zookeeper.connect", "xxx")
 .property("bootstrap.servers", "yyy")
   )
 .withFormat(new Json())
 .withSchema(new Schema()
   .field("ts", INT())
   .field("table", STRING())
   .field("database", STRING())
 )
 .createTemporaryTable("z")
 
   result.insertInto("m")
 
 Error:
 
 java.lang.NoSuchMethodError:
 
>>> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.(Ljava/lang/String;Lorg/apache/flink/streaming/util/serialization/KeyedSerializationSchema;Ljava/util/Properties;Ljava/util/Optional;)V
   at
>>> org.apache.flink.streaming.connectors.kafka.Kafka011TableSink.createKafkaProducer(Kafka011TableSink.java:58)
   at
>>> org.apache.flink.streaming.connectors.kafka.KafkaTableSinkBase.consumeDataStream(KafkaTableSinkBase.java:95)
   at
>>> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.scala:140)
   at
>>> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.scala:48)
   at
>>> org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:58)
   at
>>> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlan(StreamExecSink.scala:48)
   at
>>> org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:60)
   at
>>> org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:59)
   at
>>> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
   at
>>> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
   at
>>> scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
   at
>>> org.apache.flink.table.planner.delegation.StreamPlanner.translateToPlan(StreamPlanner.scala:59)
   at
>>> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:153)
   at
>>> org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:682)
   at
>>> org.apache.flink.table.api.internal.TableEnvironmentImpl.insertIntoInternal(TableEnvironmentImpl.java:355)
   at
>>> org.apache.flink.table.api.internal.TableEnvironmentImpl.insertInto(TableEnvironmentImpl.java:334)
   at
>>> org.apache.flink.table.api.internal.TableImpl.insertInto(TableImpl.java:411)
   at
>>> com.shopee.data.ordermart.airpay_v3.AirpayV3Flink$.createPipeline(AirpayV3Flink.scala:74)
   at
>>> 

Re: Flink SQL beginner question: RowTime field should not be null, please convert it to a non-null long value

2020-05-24 Post Leonard Xu
Hi,


> One more small question, related to the one above: how do I write Flink SQL so that Kafka messages without a ts field are skipped?

 Whether to fail on a parse error or to skip the offending record: the json format has two parameters you can configure:
'format.fail-on-missing-field' = 'true',  -- optional: flag whether to fail if 
a field is missing or not,
-- 'false' by default
'format.ignore-parse-errors' = 'true',-- optional: skip fields and rows 
with parse errors instead of failing;
These two parameters cannot both be true at the same time.

Best,
Leonard Xu
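Applied to the source table from this thread, the second option would be added to the DDL roughly as follows (a sketch based on the DDL quoted further down; only the last WITH option is new):

CREATE TABLE user_conn_speed_log (
    uid BIGINT,
    device_id INT,
    ver_id INT,
    conn_speed_ms INT,
    client STRING,
    http_ver STRING,
    status_code INT,
    ts TIMESTAMP(3),
    proctime as PROCTIME(),
    WATERMARK FOR ts as ts - INTERVAL '5' SECOND
) WITH (
    'connector.type' = 'kafka',
    'connector.version' = 'universal',
    'connector.topic' = 'user_conn_speed_log',
    'connector.startup-mode' = 'earliest-offset',
    'connector.properties.zookeeper.connect' = 'localhost:2181',
    'connector.properties.bootstrap.servers' = 'localhost:9092',
    'format.type' = 'json',
    -- skip rows with parse errors instead of failing the job
    'format.ignore-parse-errors' = 'true'
);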


> Cheers,
> Enzo
> 
> On Mon, 25 May 2020 at 10:01, Leonard Xu  > wrote:
> 
>> Hi,
>> 
>> This error message should be fairly clear: the eventTime must not be null. Please check whether the ts field in the Kafka data has null values or is missing entirely; if so, you can use a simple UDF to assign a default value to ts when it is absent.
>> 
>> Best,
>> Leonard Xu
>> 
>>> On May 25, 2020 at 09:52, Enzo wang wrote:
>>> 
>>> Could someone help me take a look at what the problem is?
>>> 
>>> The data flow is as follows:
>>> Apache -> Logstash -> Kafka -> Flink ->ES -> Kibana
>>> 
>>> By the time the logs reach Kafka they are already JSON, in this format:
>>> {
>>>   "path":"/logs/user_conn_speed.log.1",
>>>   "bytes_received":"8597",
>>>   "ts":"2020-05-25T08:51:15Z",
>>>   "message":"20.228.255.68 183685 2 10701 3 [2020-05-25T08:51:15Z]
>> \"GET /speed.gif HTTP/1.1\" 200 8597",
>>>   "client":"20.228.255.68",
>>>   "uid":"183685",
>>>   "ver_id":"3",
>>>   "status_code":"200",
>>>   "type":"logs",
>>>   "conn_speed_ms":"10701",
>>>   "host":"81b034ef6c72",
>>>   "@timestamp":"2020-05-25T00:51:16.267Z",
>>>   "request":"/speed.gif",
>>>   "@version":"1",
>>>   "device_id":"2",
>>>   "http_ver":"1.1"
>>> }
>>> 
>>> Kafka source table DDL in Flink SQL:
>>> CREATE TABLE user_conn_speed_log (
>>>uid BIGINT,
>>>device_id INT,
>>>ver_id INT,
>>>conn_speed_ms INT,
>>>client STRING,
>>>http_ver STRING,
>>>status_code INT,
>>>ts TIMESTAMP(3),
>>>proctime as PROCTIME(),
>>>WATERMARK FOR ts as ts - INTERVAL '5' SECOND
>>> ) WITH (
>>>'connector.type' = 'kafka',
>>>'connector.version' = 'universal',
>>>'connector.topic' = 'user_conn_speed_log',
>>>'connector.startup-mode' = 'earliest-offset',
>>>'connector.properties.zookeeper.connect' = 'localhost:2181',
>>>'connector.properties.bootstrap.servers' = 'localhost:9092',
>>>'format.type' = 'json'
>>> );
>>> 
>>> ES table:
>>> CREATE TABLE log_per_sec (
>>>window_start VARCHAR,
>>>window_end VARCHAR,
>>>log_cnt BIGINT
>>> ) WITH (
>>>'connector.type' = 'elasticsearch',
>>>'connector.version' = '6',
>>>'connector.hosts' = 'http://localhost:9200',
>> 
>>>'connector.index' = 'user_conn_speed_log',
>>>'connector.document-type' = 'logs_per_sec',
>>>'connector.bulk-flush.max-actions' = '1',
>>>'format.type' = 'json',
>>>'update-mode' = 'append'
>>> );
>>> 
>>> Flink SQL statement:
>>> 
>>> Flink SQL> INSERT INTO log_per_sec
 SELECT
  CAST((TUMBLE_START(ts, INTERVAL '1' SECOND)) as VARCHAR) as
>> window_start,
  CAST((TUMBLE_END(ts, INTERVAL '1' SECOND)) as VARCHAR) as window_end,
  count(*) as log_cnt
 FROM user_conn_speed_log
 GROUP BY TUMBLE(ts, INTERVAL '1' SECOND);
>>> [INFO] Submitting SQL update statement to the cluster...
>>> [INFO] Table update statement has been successfully submitted to the
>> cluster:
>>> Job ID: 0f8d982d150c9fcb4ea5e78a8d7b2d85
>>> 
>>> Flink error:
>>> 
>>> 2020-05-25 08:52:53
>>> org.apache.flink.runtime.JobException: Recovery is suppressed by
>> NoRestartBackoffTimeStrategy
>>>at
>> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:110)
>>>at
>> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:76)
>>>at
>> org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:192)
>>>at
>> org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:186)
>>>at
>> org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:180)
>>>at
>> org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:496)
>>>at
>> org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:380)
>>>at jdk.internal.reflect.GeneratedMethodAccessor86.invoke(Unknown
>> Source)
>>>at
>> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>at java.base/java.lang.reflect.Method.invoke(Method.java:567)
>>>at
>> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:284)
>>>at
>> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:199)
>>>at
>> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
>>>at
>> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
>>>at akka.japi.pf 
>>> 

Re: Could not find a suitable table factory for 'TableSourceFactory'

2020-05-24 Post macia kk
One more question: following the docs I used `flink-connector-kafka_2.11` and it never ran; later I saw others hit the same problem and switched to
`flink-sql-connector-kafka-0.11`,
which then worked. What is the difference between the two? If they are not interchangeable, the docs should state that the latter is the one to use for the Table API.

macia kk wrote on Mon, May 25, 2020 at 10:05 AM:

> built.sbt
>
> val flinkVersion = "1.10.0"
> libraryDependencies ++= Seq(
>   "org.apache.flink" %% "flink-streaming-scala" % flinkVersion ,
>   "org.apache.flink" %% "flink-scala" % flinkVersion,
>   "org.apache.flink" %% "flink-statebackend-rocksdb" % flinkVersion,
>
>   "org.apache.flink" % "flink-table-common" % flinkVersion,
>   "org.apache.flink" %% "flink-table-api-scala" % flinkVersion,
>   "org.apache.flink" %% "flink-table-api-scala-bridge" % flinkVersion,
>   "org.apache.flink" %% "flink-table-planner-blink" % flinkVersion % 
> "provided",
>
>   "org.apache.flink" %% "flink-connector-kafka" % flinkVersion,
>   "org.apache.flink" %% "flink-sql-connector-kafka-0.11" % flinkVersion,  
>   // < Kafka 0.11
>   "org.apache.flink" % "flink-json" % flinkVersion
> )
>
>
> Leonard Xu wrote on Mon, May 25, 2020 at 9:33 AM:
>
>> Hi,
>> Is the kafka connector you are using version 0.11? The error looks like a version mismatch.
>>
>> Best,
>> Leonard Xu
>>
>> > On May 25, 2020 at 02:44, macia kk wrote:
>> >
>> > Thanks, I found the answer in the earlier mailing-list archive. Now I have hit a new problem and have been stuck for quite a while:
>> >
>> >  Table API, sink to Kafka
>> >
>> >val result = bsTableEnv.sqlQuery("SELECT * FROM " + "")
>> >
>> >bsTableEnv
>> >  .connect(
>> >new Kafka()
>> >  .version("0.11") // required: valid connector versions are
>> >  .topic("aaa") // required: topic name from which the table is
>> read
>> >  .property("zookeeper.connect", "xxx")
>> >  .property("bootstrap.servers", "yyy")
>> >)
>> >  .withFormat(new Json())
>> >  .withSchema(new Schema()
>> >.field("ts", INT())
>> >.field("table", STRING())
>> >.field("database", STRING())
>> >  )
>> >  .createTemporaryTable("z")
>> >
>> >result.insertInto("m")
>> >
>> > Error:
>> >
>> > java.lang.NoSuchMethodError:
>> >
>> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.(Ljava/lang/String;Lorg/apache/flink/streaming/util/serialization/KeyedSerializationSchema;Ljava/util/Properties;Ljava/util/Optional;)V
>> >at
>> org.apache.flink.streaming.connectors.kafka.Kafka011TableSink.createKafkaProducer(Kafka011TableSink.java:58)
>> >at
>> org.apache.flink.streaming.connectors.kafka.KafkaTableSinkBase.consumeDataStream(KafkaTableSinkBase.java:95)
>> >at
>> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.scala:140)
>> >at
>> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.scala:48)
>> >at
>> org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:58)
>> >at
>> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlan(StreamExecSink.scala:48)
>> >at
>> org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:60)
>> >at
>> org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:59)
>> >at
>> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>> >at
>> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>> >at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>> >at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>> >at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>> >at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>> >at
>> scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>> >at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>> >at
>> org.apache.flink.table.planner.delegation.StreamPlanner.translateToPlan(StreamPlanner.scala:59)
>> >at
>> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:153)
>> >at
>> org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:682)
>> >at
>> org.apache.flink.table.api.internal.TableEnvironmentImpl.insertIntoInternal(TableEnvironmentImpl.java:355)
>> >at
>> org.apache.flink.table.api.internal.TableEnvironmentImpl.insertInto(TableEnvironmentImpl.java:334)
>> >at
>> org.apache.flink.table.api.internal.TableImpl.insertInto(TableImpl.java:411)
>> >at
>> com.shopee.data.ordermart.airpay_v3.AirpayV3Flink$.createPipeline(AirpayV3Flink.scala:74)
>> >at
>> com.shopee.data.ordermart.airpay_v3.AirpayV3Flink$.main(AirpayV3Flink.scala:30)
>> >at
>> com.shopee.data.ordermart.airpay_v3.AirpayV3Flink.main(AirpayV3Flink.scala)
>> >at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>> >at
>> 

Re: Flink SQL beginner question: RowTime field should not be null, please convert it to a non-null long value

2020-05-24 Post Enzo wang
Hi Leonard,

Thanks, you are right. There was some dirty data in Kafka earlier that had no ts field, which caused the problem. Changing
'connector.startup-mode' = 'earliest-offset',
to
'connector.startup-mode' = 'latest-offset',
made it work.

One more small question, related to the one above: how do I write Flink SQL so that Kafka messages without a ts field are skipped?

Cheers,
Enzo

On Mon, 25 May 2020 at 10:01, Leonard Xu  wrote:

> Hi,
>
> This error message should be fairly clear: the eventTime must not be null. Please check whether the ts field in the Kafka data has null values or is missing entirely; if so, you can use a simple UDF to assign a default value to ts when it is absent.
>
> Best,
> Leonard Xu
>
> > On May 25, 2020 at 09:52, Enzo wang wrote:
> >
> > Could someone help me take a look at what the problem is?
> >
> > The data flow is as follows:
> > Apache -> Logstash -> Kafka -> Flink ->ES -> Kibana
> >
> > By the time the logs reach Kafka they are already JSON, in this format:
> > {
> >"path":"/logs/user_conn_speed.log.1",
> >"bytes_received":"8597",
> >"ts":"2020-05-25T08:51:15Z",
> >"message":"20.228.255.68 183685 2 10701 3 [2020-05-25T08:51:15Z]
> \"GET /speed.gif HTTP/1.1\" 200 8597",
> >"client":"20.228.255.68",
> >"uid":"183685",
> >"ver_id":"3",
> >"status_code":"200",
> >"type":"logs",
> >"conn_speed_ms":"10701",
> >"host":"81b034ef6c72",
> >"@timestamp":"2020-05-25T00:51:16.267Z",
> >"request":"/speed.gif",
> >"@version":"1",
> >"device_id":"2",
> >"http_ver":"1.1"
> > }
> >
> > Kafka source table DDL in Flink SQL:
> > CREATE TABLE user_conn_speed_log (
> > uid BIGINT,
> > device_id INT,
> > ver_id INT,
> > conn_speed_ms INT,
> > client STRING,
> > http_ver STRING,
> > status_code INT,
> > ts TIMESTAMP(3),
> > proctime as PROCTIME(),
> > WATERMARK FOR ts as ts - INTERVAL '5' SECOND
> > ) WITH (
> > 'connector.type' = 'kafka',
> > 'connector.version' = 'universal',
> > 'connector.topic' = 'user_conn_speed_log',
> > 'connector.startup-mode' = 'earliest-offset',
> > 'connector.properties.zookeeper.connect' = 'localhost:2181',
> > 'connector.properties.bootstrap.servers' = 'localhost:9092',
> > 'format.type' = 'json'
> > );
> >
> > ES table:
> > CREATE TABLE log_per_sec (
> > window_start VARCHAR,
> > window_end VARCHAR,
> > log_cnt BIGINT
> > ) WITH (
> > 'connector.type' = 'elasticsearch',
> > 'connector.version' = '6',
> > 'connector.hosts' = 'http://localhost:9200 ',
>
> > 'connector.index' = 'user_conn_speed_log',
> > 'connector.document-type' = 'logs_per_sec',
> > 'connector.bulk-flush.max-actions' = '1',
> > 'format.type' = 'json',
> > 'update-mode' = 'append'
> > );
> >
> > Flink SQL statement:
> >
> > Flink SQL> INSERT INTO log_per_sec
> > > SELECT
> > >   CAST((TUMBLE_START(ts, INTERVAL '1' SECOND)) as VARCHAR) as
> window_start,
> > >   CAST((TUMBLE_END(ts, INTERVAL '1' SECOND)) as VARCHAR) as window_end,
> > >   count(*) as log_cnt
> > > FROM user_conn_speed_log
> > > GROUP BY TUMBLE(ts, INTERVAL '1' SECOND);
> > [INFO] Submitting SQL update statement to the cluster...
> > [INFO] Table update statement has been successfully submitted to the
> cluster:
> > Job ID: 0f8d982d150c9fcb4ea5e78a8d7b2d85
> >
> > Flink error:
> >
> > 2020-05-25 08:52:53
> > org.apache.flink.runtime.JobException: Recovery is suppressed by
> NoRestartBackoffTimeStrategy
> > at
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:110)
> > at
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:76)
> > at
> org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:192)
> > at
> org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:186)
> > at
> org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:180)
> > at
> org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:496)
> > at
> org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:380)
> > at jdk.internal.reflect.GeneratedMethodAccessor86.invoke(Unknown
> Source)
> > at
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > at java.base/java.lang.reflect.Method.invoke(Method.java:567)
> > at
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:284)
> > at
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:199)
> > at
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
> > at
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
> > at akka.japi.pf  >.UnitCaseStatement.apply(CaseStatements.scala:26)
> > at akka.japi.pf  >.UnitCaseStatement.apply(CaseStatements.scala:21)
> > at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
> > at akka.japi.pf  >.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
> > at
> 

Re: Could not find a suitable table factory for 'TableSourceFactory'

2020-05-24 Post macia kk
built.sbt

val flinkVersion = "1.10.0"
libraryDependencies ++= Seq(
  "org.apache.flink" %% "flink-streaming-scala" % flinkVersion ,
  "org.apache.flink" %% "flink-scala" % flinkVersion,
  "org.apache.flink" %% "flink-statebackend-rocksdb" % flinkVersion,

  "org.apache.flink" % "flink-table-common" % flinkVersion,
  "org.apache.flink" %% "flink-table-api-scala" % flinkVersion,
  "org.apache.flink" %% "flink-table-api-scala-bridge" % flinkVersion,
  "org.apache.flink" %% "flink-table-planner-blink" % flinkVersion % "provided",

  "org.apache.flink" %% "flink-connector-kafka" % flinkVersion,
  "org.apache.flink" %% "flink-sql-connector-kafka-0.11" %
flinkVersion,// < Kafka 0.11
  "org.apache.flink" % "flink-json" % flinkVersion
)


Leonard Xu wrote on Mon, May 25, 2020 at 9:33 AM:

> Hi,
> Is the kafka connector you are using version 0.11? The error looks like a version mismatch.
>
> Best,
> Leonard Xu
>
> > On May 25, 2020 at 02:44, macia kk wrote:
> >
> > Thanks, I found the answer in the earlier mailing-list archive. Now I have hit a new problem and have been stuck for quite a while:
> >
> >  Table API, sink to Kafka
> >
> >val result = bsTableEnv.sqlQuery("SELECT * FROM " + "")
> >
> >bsTableEnv
> >  .connect(
> >new Kafka()
> >  .version("0.11") // required: valid connector versions are
> >  .topic("aaa") // required: topic name from which the table is
> read
> >  .property("zookeeper.connect", "xxx")
> >  .property("bootstrap.servers", "yyy")
> >)
> >  .withFormat(new Json())
> >  .withSchema(new Schema()
> >.field("ts", INT())
> >.field("table", STRING())
> >.field("database", STRING())
> >  )
> >  .createTemporaryTable("z")
> >
> >result.insertInto("m")
> >
> > Error:
> >
> > java.lang.NoSuchMethodError:
> >
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.(Ljava/lang/String;Lorg/apache/flink/streaming/util/serialization/KeyedSerializationSchema;Ljava/util/Properties;Ljava/util/Optional;)V
> >at
> org.apache.flink.streaming.connectors.kafka.Kafka011TableSink.createKafkaProducer(Kafka011TableSink.java:58)
> >at
> org.apache.flink.streaming.connectors.kafka.KafkaTableSinkBase.consumeDataStream(KafkaTableSinkBase.java:95)
> >at
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.scala:140)
> >at
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.scala:48)
> >at
> org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:58)
> >at
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlan(StreamExecSink.scala:48)
> >at
> org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:60)
> >at
> org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:59)
> >at
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> >at
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> >at scala.collection.Iterator$class.foreach(Iterator.scala:891)
> >at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
> >at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> >at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> >at
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> >at scala.collection.AbstractTraversable.map(Traversable.scala:104)
> >at
> org.apache.flink.table.planner.delegation.StreamPlanner.translateToPlan(StreamPlanner.scala:59)
> >at
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:153)
> >at
> org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:682)
> >at
> org.apache.flink.table.api.internal.TableEnvironmentImpl.insertIntoInternal(TableEnvironmentImpl.java:355)
> >at
> org.apache.flink.table.api.internal.TableEnvironmentImpl.insertInto(TableEnvironmentImpl.java:334)
> >at
> org.apache.flink.table.api.internal.TableImpl.insertInto(TableImpl.java:411)
> >at
> com.shopee.data.ordermart.airpay_v3.AirpayV3Flink$.createPipeline(AirpayV3Flink.scala:74)
> >at
> com.shopee.data.ordermart.airpay_v3.AirpayV3Flink$.main(AirpayV3Flink.scala:30)
> >at
> com.shopee.data.ordermart.airpay_v3.AirpayV3Flink.main(AirpayV3Flink.scala)
> >at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >at java.lang.reflect.Method.invoke(Method.java:498)
> >at
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:321)
> >at
> 

Re: Flink SQL beginner question: RowTime field should not be null, please convert it to a non-null long value

2020-05-24 Post Leonard Xu
Hi,
This error message should be fairly clear: the eventTime must not be null. Please check whether the ts field in the Kafka data has null values or is missing entirely; if so, you can use a simple UDF to assign a default value to ts when it is absent.

Best,
Leonard Xu
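A minimal sketch of such a UDF (hypothetical names; the fallback value and how the function is wired into the table definition are left to the user):

import java.sql.Timestamp
import org.apache.flink.table.functions.ScalarFunction

// Returns a fixed fallback timestamp when ts is null, so the rowtime
// column never carries a null value.
class EnsureTs extends ScalarFunction {
  def eval(ts: Timestamp): Timestamp =
    if (ts != null) ts else Timestamp.valueOf("1970-01-01 00:00:00")
}

// Registration in a Table API program (Flink 1.10):
// tableEnv.registerFunction("ensure_ts", new EnsureTs)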

> On May 25, 2020 at 09:52, Enzo wang wrote:
> 
> Could someone help me take a look at what the problem is?
> 
> The data flow is as follows:
> Apache -> Logstash -> Kafka -> Flink ->ES -> Kibana
> 
> By the time the logs reach Kafka they are already JSON, in this format:
> {
>"path":"/logs/user_conn_speed.log.1",
>"bytes_received":"8597",
>"ts":"2020-05-25T08:51:15Z",
>"message":"20.228.255.68 183685 2 10701 3 [2020-05-25T08:51:15Z] \"GET 
> /speed.gif HTTP/1.1\" 200 8597",
>"client":"20.228.255.68",
>"uid":"183685",
>"ver_id":"3",
>"status_code":"200",
>"type":"logs",
>"conn_speed_ms":"10701",
>"host":"81b034ef6c72",
>"@timestamp":"2020-05-25T00:51:16.267Z",
>"request":"/speed.gif",
>"@version":"1",
>"device_id":"2",
>"http_ver":"1.1"
> }
> 
> Kafka source table DDL in Flink SQL:
> CREATE TABLE user_conn_speed_log (
> uid BIGINT,
> device_id INT,
> ver_id INT,
> conn_speed_ms INT,
> client STRING,
> http_ver STRING,
> status_code INT,
> ts TIMESTAMP(3),
> proctime as PROCTIME(),  
> WATERMARK FOR ts as ts - INTERVAL '5' SECOND 
> ) WITH (
> 'connector.type' = 'kafka',  
> 'connector.version' = 'universal', 
> 'connector.topic' = 'user_conn_speed_log', 
> 'connector.startup-mode' = 'earliest-offset', 
> 'connector.properties.zookeeper.connect' = 'localhost:2181',  
> 'connector.properties.bootstrap.servers' = 'localhost:9092', 
> 'format.type' = 'json'  
> );
> 
> ES table:
> CREATE TABLE log_per_sec (
> window_start VARCHAR,
> window_end VARCHAR,
> log_cnt BIGINT
> ) WITH (
> 'connector.type' = 'elasticsearch', 
> 'connector.version' = '6', 
> 'connector.hosts' = 'http://localhost:9200 ',  
> 'connector.index' = 'user_conn_speed_log',  
> 'connector.document-type' = 'logs_per_sec', 
> 'connector.bulk-flush.max-actions' = '1', 
> 'format.type' = 'json', 
> 'update-mode' = 'append'
> );
> 
> Flink SQL statement:
> 
> Flink SQL> INSERT INTO log_per_sec
> > SELECT
> >   CAST((TUMBLE_START(ts, INTERVAL '1' SECOND)) as VARCHAR) as window_start,
> >   CAST((TUMBLE_END(ts, INTERVAL '1' SECOND)) as VARCHAR) as window_end,
> >   count(*) as log_cnt
> > FROM user_conn_speed_log
> > GROUP BY TUMBLE(ts, INTERVAL '1' SECOND);
> [INFO] Submitting SQL update statement to the cluster...
> [INFO] Table update statement has been successfully submitted to the cluster:
> Job ID: 0f8d982d150c9fcb4ea5e78a8d7b2d85
> 
> Flink error:
> 
> 2020-05-25 08:52:53
> org.apache.flink.runtime.JobException: Recovery is suppressed by 
> NoRestartBackoffTimeStrategy
> at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:110)
> at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:76)
> at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:192)
> at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:186)
> at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:180)
> at 
> org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:496)
> at 
> org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:380)
> at jdk.internal.reflect.GeneratedMethodAccessor86.invoke(Unknown Source)
> at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.base/java.lang.reflect.Method.invoke(Method.java:567)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:284)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:199)
> at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
> at akka.japi.pf 
> .UnitCaseStatement.apply(CaseStatements.scala:26)
> at akka.japi.pf 
> .UnitCaseStatement.apply(CaseStatements.scala:21)
> at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
> at akka.japi.pf 
> .UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
> at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
> at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
> at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
> at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
> at 

Re: Could not find a suitable table factory for 'TableSourceFactory'

2020-05-24 Post Leonard Xu
Hi,
Is the kafka connector you are using version 0.11? The error looks like a version mismatch.

Best,
Leonard Xu

> On May 25, 2020 at 02:44, macia kk wrote:
> 
> Thanks, I found the answer in the earlier mailing-list archive. Now I have hit a new problem and have been stuck for quite a while:
> 
>  Table API, sink to Kafka
> 
>val result = bsTableEnv.sqlQuery("SELECT * FROM " + "")
> 
>bsTableEnv
>  .connect(
>new Kafka()
>  .version("0.11") // required: valid connector versions are
>  .topic("aaa") // required: topic name from which the table is read
>  .property("zookeeper.connect", "xxx")
>  .property("bootstrap.servers", "yyy")
>)
>  .withFormat(new Json())
>  .withSchema(new Schema()
>.field("ts", INT())
>.field("table", STRING())
>.field("database", STRING())
>  )
>  .createTemporaryTable("z")
> 
>result.insertInto("m")
> 
> Error:
> 
> java.lang.NoSuchMethodError:
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.(Ljava/lang/String;Lorg/apache/flink/streaming/util/serialization/KeyedSerializationSchema;Ljava/util/Properties;Ljava/util/Optional;)V
>at 
> org.apache.flink.streaming.connectors.kafka.Kafka011TableSink.createKafkaProducer(Kafka011TableSink.java:58)
>at 
> org.apache.flink.streaming.connectors.kafka.KafkaTableSinkBase.consumeDataStream(KafkaTableSinkBase.java:95)
>at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.scala:140)
>at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.scala:48)
>at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:58)
>at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlan(StreamExecSink.scala:48)
>at 
> org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:60)
>at 
> org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:59)
>at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>at 
> org.apache.flink.table.planner.delegation.StreamPlanner.translateToPlan(StreamPlanner.scala:59)
>at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:153)
>at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:682)
>at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.insertIntoInternal(TableEnvironmentImpl.java:355)
>at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.insertInto(TableEnvironmentImpl.java:334)
>at 
> org.apache.flink.table.api.internal.TableImpl.insertInto(TableImpl.java:411)
>at 
> com.shopee.data.ordermart.airpay_v3.AirpayV3Flink$.createPipeline(AirpayV3Flink.scala:74)
>at 
> com.shopee.data.ordermart.airpay_v3.AirpayV3Flink$.main(AirpayV3Flink.scala:30)
>at 
> com.shopee.data.ordermart.airpay_v3.AirpayV3Flink.main(AirpayV3Flink.scala)
>at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>at java.lang.reflect.Method.invoke(Method.java:498)
>at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:321)
>at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
>at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:138)
>at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:664)
>at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
>at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:895)
>at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:968)
>at java.security.AccessController.doPrivileged(Native Method)
>at javax.security.auth.Subject.doAs(Subject.java:422)
>at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1982)
>at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:968)
> 
> 
> Please take a look, thanks.
> 
> Lijie Wang wrote on Mon, May 25, 2020 at 12:34 AM:
> 
>> 

Re: Could not find a suitable table factory for 'TableSourceFactory'

2020-05-24 Post macia kk
Thanks, I found the answer in the earlier mailing-list archive. Now I have hit a new problem and have been stuck for quite a while:

  Table API, sink to Kafka

val result = bsTableEnv.sqlQuery("SELECT * FROM " + "")

bsTableEnv
  .connect(
new Kafka()
  .version("0.11") // required: valid connector versions are
  .topic("aaa") // required: topic name from which the table is read
  .property("zookeeper.connect", "xxx")
  .property("bootstrap.servers", "yyy")
)
  .withFormat(new Json())
  .withSchema(new Schema()
.field("ts", INT())
.field("table", STRING())
.field("database", STRING())
  )
  .createTemporaryTable("z")

result.insertInto("m")

Error:

java.lang.NoSuchMethodError:
org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.(Ljava/lang/String;Lorg/apache/flink/streaming/util/serialization/KeyedSerializationSchema;Ljava/util/Properties;Ljava/util/Optional;)V
at 
org.apache.flink.streaming.connectors.kafka.Kafka011TableSink.createKafkaProducer(Kafka011TableSink.java:58)
at 
org.apache.flink.streaming.connectors.kafka.KafkaTableSinkBase.consumeDataStream(KafkaTableSinkBase.java:95)
at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.scala:140)
at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.scala:48)
at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:58)
at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlan(StreamExecSink.scala:48)
at 
org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:60)
at 
org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:59)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at 
org.apache.flink.table.planner.delegation.StreamPlanner.translateToPlan(StreamPlanner.scala:59)
at 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:153)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:682)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.insertIntoInternal(TableEnvironmentImpl.java:355)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.insertInto(TableEnvironmentImpl.java:334)
at 
org.apache.flink.table.api.internal.TableImpl.insertInto(TableImpl.java:411)
at 
com.shopee.data.ordermart.airpay_v3.AirpayV3Flink$.createPipeline(AirpayV3Flink.scala:74)
at 
com.shopee.data.ordermart.airpay_v3.AirpayV3Flink$.main(AirpayV3Flink.scala:30)
at 
com.shopee.data.ordermart.airpay_v3.AirpayV3Flink.main(AirpayV3Flink.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:321)
at 
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:138)
at 
org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:664)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
at 
org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:895)
at 
org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:968)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1982)
at 
org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:968)


Please take a look, thanks.

Lijie Wang wrote on Mon, May 25, 2020 at 12:34 AM:

> Hi, I cannot load the images in your email. From the error below it looks like no matching connector could be found. Please check whether the with properties in the DDL are correct.
>
>
>
> On 2020-05-25 00:11:16, "macia kk" wrote:
>
> Could someone help me with this problem? Thanks.
>
>
>
>
>
>
> org.apache.flink.client.program.ProgramInvocationException: The main
> method caused an error: 

Re:Could not find a suitable table factory for 'TableSourceFactory'

2020-05-24 Post Lijie Wang
Hi, I cannot load the images in your email. From the error below it looks like no matching connector could be found. Please check whether the with properties in the DDL are correct.



On 2020-05-25 00:11:16, "macia kk" wrote:

Could someone help me with this problem? Thanks.






org.apache.flink.client.program.ProgramInvocationException: The main method 
caused an error: findAndCreateTableSource failed.
Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: Could 
not find a suitable table factory for 
'org.apache.flink.table.factories.TableSourceFactory' in
the classpath.
Reason: Required context properties mismatch.

The matching candidates:
org.apache.flink.table.sources.CsvAppendTableSourceFactory
Mismatched properties:
'connector.type' expects 'filesystem', but is 'kafka'
'format.type' expects 'csv', but is 'json'

The following properties are requested:
connector.properties.bootstrap.servers=ip-10-128-145-1.idata-server.shopee.io:9092connector.properties.group.id=keystats_aripay
connector.property-version=1
connector.startup-mode=latest-offset
connector.topic=ebisu_wallet_id_db_mirror_v1
connector.type=kafka
format.property-version=1
format.type=json
schema.0.data-type=INT
schema.0.name=ts
schema.1.data-type=VARCHAR(2147483647)
schema.1.name=table
schema.2.data-type=VARCHAR(2147483647)
schema.2.name=database
update-mode=append

The following factories have been considered:
org.apache.flink.table.sources.CsvBatchTableSourceFactory
org.apache.flink.table.sources.CsvAppendTableSourceFactory
org.apache.flink.streaming.connectors.kafka.KafkaTableSourceSinkFactory
at 
org.apache.flink.table.factories.TableFactoryService.filterByContext(TableFactoryService.java:322)
at 
org.apache.flink.table.factories.TableFactoryService.filter(TableFactoryService.java:190)
at 
org.apache.flink.table.factories.TableFactoryService.findSingleInternal(TableFactoryService.java:143)
at 
org.apache.flink.table.factories.TableFactoryService.find(TableFactoryService.java:96)
at 
org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:52)
... 39 more


Could not find a suitable table factory for 'TableSourceFactory'

2020-05-24 Post macia kk
Could someone help me with this problem? Thanks.

[image: image.png]
[image: image.png]

org.apache.flink.client.program.ProgramInvocationException: The main
method caused an error: findAndCreateTableSource failed.
Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException:
Could not find a suitable table factory for
'org.apache.flink.table.factories.TableSourceFactory' in
the classpath.
Reason: Required context properties mismatch.

The matching candidates:
org.apache.flink.table.sources.CsvAppendTableSourceFactory
Mismatched properties:
'connector.type' expects 'filesystem', but is 'kafka'
'format.type' expects 'csv', but is 'json'

The following properties are requested:
connector.properties.bootstrap.servers=ip-10-128-145-1.idata-server.shopee.io:9092connector.properties.group.id=keystats_aripay
connector.property-version=1
connector.startup-mode=latest-offset
connector.topic=ebisu_wallet_id_db_mirror_v1
connector.type=kafka
format.property-version=1
format.type=json
schema.0.data-type=INTschema.0.name=ts
schema.1.data-type=VARCHAR(2147483647)schema.1.name=table
schema.2.data-type=VARCHAR(2147483647)schema.2.name=database
update-mode=append

The following factories have been considered:
org.apache.flink.table.sources.CsvBatchTableSourceFactory
org.apache.flink.table.sources.CsvAppendTableSourceFactory
org.apache.flink.streaming.connectors.kafka.KafkaTableSourceSinkFactory
at 
org.apache.flink.table.factories.TableFactoryService.filterByContext(TableFactoryService.java:322)
at 
org.apache.flink.table.factories.TableFactoryService.filter(TableFactoryService.java:190)
at 
org.apache.flink.table.factories.TableFactoryService.findSingleInternal(TableFactoryService.java:143)
at 
org.apache.flink.table.factories.TableFactoryService.find(TableFactoryService.java:96)
at 
org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:52)
... 39 more


Reply: How to ensure the broadcast stream arrives before the data stream?

2020-05-24 Post kris wu



Best,
kris.


---------------- Original message ----------------
From: "tison"

Reply: Understanding watermarks

2020-05-24 Post smq
I see. I have only started working with Flink recently, so a lot of things are still unclear to me. Thanks for the pointers.


---Original message---
From: tison
[1]
https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/operators/windows.html#allowed-lateness
[2]
https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/WindowOperator.java



Benchao Li 

Reply: Understanding watermarks

2020-05-24 Post smq
Thanks!


---Original message---
From: Benchao Li

Re: How to ensure the broadcast stream arrives before the data stream?

2020-05-24 Post tison
Gao's approach should make the most sense. Trying to constrain which stream arrives first at the network level is very awkward, and even if it were possible it would involve very low-level parts of Flink's
network stack. Usually what you really want by "arrives first" is "is processed first", so it is enough to buffer whatever physically arrives first and process it afterwards.

Best,
tison.


1048262223 <1048262...@qq.com> wrote on Sun, May 24, 2020 at 2:08 PM:

> Hello, my understanding is as follows.
> Broadcast streams are generally used to cut down on lookups of external configuration data and improve performance, so if that is your scenario, here is an approach I have used in production.
>
> You can first load the configuration once in the open() method of the normal processing stream, and later use the data from the broadcast stream to update that configuration when it changes. If certain records absolutely must not be processed before a given broadcast configuration update has arrived, you can use the state-based buffering approach suggested above.
>
>
> -- Original message --
> From: Yun Gao  Sent: May 24, 2020 13:56
> To: 462329521 <462329...@qq.com, user-zh  
> Subject: Reply: How to ensure the broadcast stream arrives before the data stream?
>
>
>
> Hello, as far as I know there is currently no way to make one stream arrive first.
>
> One workaround is to buffer the received records in state until the broadcast stream has fully arrived, and only then process them.
>
>
> --
> Sender:462329521<462329...@qq.com
> Date:2020/05/24 11:32:17
> Recipient:user-zh Theme: How to ensure the broadcast stream arrives before the data stream?
>
> Hi, I would like to ask: in our business system we need the broadcast stream to arrive before the data stream. How can this ordering be guaranteed?
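The state-based buffering described above could look roughly like this sketch (hypothetical Event and Config types; a KeyedBroadcastProcessFunction that parks main-stream records until the first broadcast configuration has been seen):

import org.apache.flink.api.common.state.{ListState, ListStateDescriptor, MapStateDescriptor}
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.functions.co.KeyedBroadcastProcessFunction
import org.apache.flink.util.Collector

// Hypothetical types, for illustration only.
case class Event(key: String, payload: String)
case class Config(rule: String)

class BufferUntilConfigSeen(configDesc: MapStateDescriptor[String, Config])
    extends KeyedBroadcastProcessFunction[String, Event, Config, String] {

  // Per-key buffer for events that arrive before any broadcast config.
  @transient private var buffer: ListState[Event] = _

  override def open(parameters: Configuration): Unit = {
    buffer = getRuntimeContext.getListState(
      new ListStateDescriptor[Event]("buffered-events", classOf[Event]))
  }

  override def processElement(
      event: Event,
      ctx: KeyedBroadcastProcessFunction[String, Event, Config, String]#ReadOnlyContext,
      out: Collector[String]): Unit = {
    val config = ctx.getBroadcastState(configDesc).get("config")
    if (config == null) {
      // Broadcast side has not arrived yet: park the event in state.
      buffer.add(event)
    } else {
      // Drain anything buffered for this key first, then emit the current event.
      val it = buffer.get().iterator()
      while (it.hasNext) out.collect(s"${it.next().payload} / ${config.rule}")
      buffer.clear()
      out.collect(s"${event.payload} / ${config.rule}")
    }
  }

  override def processBroadcastElement(
      config: Config,
      ctx: KeyedBroadcastProcessFunction[String, Event, Config, String]#Context,
      out: Collector[String]): Unit = {
    // Store or refresh the configuration in broadcast state.
    ctx.getBroadcastState(configDesc).put("config", config)
  }
}

It would be wired up as mainStream.keyBy(_.key).connect(configStream.broadcast(configDesc)).process(new BufferUntilConfigSeen(configDesc)).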


Re: Understanding watermarks

2020-05-24 Post tison
Overall that is about right, but you say "if the event time of the first record is exactly 12:00, then the watermark at that point should be 11:59". That watermark has
nothing to do with allowedLateness; they are independent pieces of logic.

For the documentation you can look at [1]; for the source code you can look at [2] and search for allowedLateness.

Best,
tison.

[1]
https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/operators/windows.html#allowed-lateness
[2]
https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/WindowOperator.java



Benchao Li wrote on Sun, May 24, 2020 at 9:56 PM:

> Hi,
> Your understanding is correct: which window a record enters is determined purely by its event time; when a window triggers is determined by the watermark.
>
> smq <374060...@qq.com> wrote on Sun, May 24, 2020 at 9:46 PM:
>
> >
> >
> When using event-time windows to handle late data, suppose the allowed lateness is 1 min and the window size is 10 min. Then for the 12:00-12:10 window, if a record with event time 12:09:50 arrives at 12:10:50 and the watermark is exactly at 12:09:50 at that moment, that late record can still be processed. That part I understand.
> >
> >
> But suppose the event time of the first record is exactly 12:00, so the watermark would be at 11:59. Can that record enter the 12:00-12:10 window and be processed? By rights it should be processed correctly. If so, then entering a window goes by event time while triggering goes by the watermark. I am not sure whether this understanding is right; I have been puzzling over it for a long time.
>
>
>
> --
>
> Best,
> Benchao Li
>


Re: Understanding watermarks

2020-05-24 Post Benchao Li
Hi,
Your understanding is correct: which window a record enters is determined purely by its event time; when a window triggers is determined by the watermark.

smq <374060...@qq.com> wrote on Sun, May 24, 2020 at 9:46 PM:

>
> When using event-time windows to handle late data, suppose the allowed lateness is 1 min and the window size is 10 min. Then for the 12:00-12:10 window, if a record with event time 12:09:50 arrives at 12:10:50 and the watermark is exactly at 12:09:50 at that moment, that late record can still be processed. That part I understand.
>
> But suppose the event time of the first record is exactly 12:00, so the watermark would be at 11:59. Can that record enter the 12:00-12:10 window and be processed? By rights it should be processed correctly. If so, then entering a window goes by event time while triggering goes by the watermark. I am not sure whether this understanding is right; I have been puzzling over it for a long time.



-- 

Best,
Benchao Li
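To make the two notions concrete, a small sketch with an event-time tumbling window and allowed lateness (hypothetical Click type and inline test data): window membership follows the record's event time, while the watermark decides when the window fires and for how long late records are still accepted.

import org.apache.flink.streaming.api.TimeCharacteristic
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows
import org.apache.flink.streaming.api.windowing.time.Time

// Hypothetical record type: a user click with an event-time timestamp in ms.
case class Click(user: String, ts: Long)

object WindowLatenessExample {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)

    val clicks: DataStream[Click] = env
      .fromElements(Click("a", 1000L), Click("a", 2000L), Click("b", 4000L))
      .assignAscendingTimestamps(_.ts) // watermark follows the highest event time seen

    clicks
      .keyBy(_.user)
      // window membership is decided by the record's event time
      .window(TumblingEventTimeWindows.of(Time.minutes(10)))
      // after the watermark passes the window end, late records are still
      // accepted (and re-fire the window) for another minute
      .allowedLateness(Time.minutes(1))
      .reduce((a, b) => Click(a.user, math.max(a.ts, b.ts)))
      .print()

    env.execute("window lateness example")
  }
}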


Re: MiniCluster configuration when using LocalExecutionEnvironment

2020-05-24 Post Jeff Zhang
Zeppelin also integrates Flink's local mode; you can set local.number-taskmanager and
flink.tm.slot to configure the number of TMs and slots.
For details see this video: https://www.bilibili.com/video/BV1Te411W73b?p=3

tison wrote on Sun, May 24, 2020 at 9:46 PM:

> Yes, that is the case.
>
> For the configuration you can look at the two classes in [1][2]; the exact code path taken when you start it via Maven also involves [3][4].
>
> The documentation here may indeed be lacking. You can trace how the configuration is passed along; the number of TMs, the RPC sharing mode and other settings are all configurable, at least through the programming interface.
>
> Best,
> tison.
>
> [1]
>
> https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/minicluster/MiniCluster.java
> [2]
>
> https://github.com/apache/flink/blob/ab947386ed93b16019f36c50e9a3475dd6ad3c4a/flink-runtime/src/main/java/org/apache/flink/runtime/minicluster/MiniClusterConfiguration.java
> [3]
>
> https://github.com/apache/flink/blob/master/flink-clients/src/main/java/org/apache/flink/client/deployment/executors/LocalExecutor.java
> [4]
>
> https://github.com/apache/flink/blob/master/flink-clients/src/main/java/org/apache/flink/client/program/PerJobMiniClusterFactory.java
>
>
>
>
> 月月 wrote on Sun, May 24, 2020 at 9:11 PM:
>
> > Hello,
> > When running the project with maven in single-machine mode, a MiniCluster is started automatically.
> > May I ask whether, in this situation, the default configuration is one JobManager and one TaskManager?
> >
> > I looked through the documentation and could not find anything about this.
> >
> > Thanks!
> >
>


-- 
Best Regards

Jeff Zhang
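For reference, when building the MiniCluster programmatically (see the classes tison links above), the number of TaskManagers and slots per TaskManager can be set roughly like this (a sketch, not the exact path the Maven-launched local environment takes):

import org.apache.flink.configuration.{Configuration, TaskManagerOptions}
import org.apache.flink.runtime.minicluster.{MiniCluster, MiniClusterConfiguration}

object LocalMiniCluster {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    conf.setInteger(TaskManagerOptions.NUM_TASK_SLOTS, 4)

    // Two local TaskManagers with four slots each instead of the defaults.
    val miniClusterConf = new MiniClusterConfiguration.Builder()
      .setConfiguration(conf)
      .setNumTaskManagers(2)
      .setNumSlotsPerTaskManager(4)
      .build()

    val cluster = new MiniCluster(miniClusterConf)
    cluster.start()
    // ... submit a JobGraph or keep the cluster running for tests ...
    cluster.close()
  }
}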


Understanding watermarks

2020-05-24 Post smq
When using event-time windows to handle late data, suppose the allowed lateness is 1 min and the window size is 10 min. Then for the 12:00-12:10 window, if a record with event time 12:09:50 arrives at 12:10:50 and the watermark is exactly at 12:09:50 at that moment, that late record can still be processed. That part I understand.
But suppose the event time of the first record is exactly 12:00, so the watermark would be at 11:59. Can that record enter the 12:00-12:10 window and be processed? By rights it should be processed correctly. If so, then entering a window goes by event time while triggering goes by the watermark. I am not sure whether this understanding is right; I have been puzzling over it for a long time.

Re: MiniCluster configuration when using LocalExecutionEnvironment

2020-05-24 Post tison
Yes, that is the case.

For the configuration you can look at the two classes in [1][2]; the exact code path taken when you start it via Maven also involves [3][4].

The documentation here may indeed be lacking. You can trace how the configuration is passed along; the number of TMs, the RPC sharing mode and other settings are all configurable, at least through the programming interface.

Best,
tison.

[1]
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/minicluster/MiniCluster.java
[2]
https://github.com/apache/flink/blob/ab947386ed93b16019f36c50e9a3475dd6ad3c4a/flink-runtime/src/main/java/org/apache/flink/runtime/minicluster/MiniClusterConfiguration.java
[3]
https://github.com/apache/flink/blob/master/flink-clients/src/main/java/org/apache/flink/client/deployment/executors/LocalExecutor.java
[4]
https://github.com/apache/flink/blob/master/flink-clients/src/main/java/org/apache/flink/client/program/PerJobMiniClusterFactory.java




月月 wrote on Sun, May 24, 2020 at 9:11 PM:

> Hello,
> When running the project with maven in single-machine mode, a MiniCluster is started automatically.
> May I ask whether, in this situation, the default configuration is one JobManager and one TaskManager?
>
> I looked through the documentation and could not find anything about this.
>
> Thanks!
>


MiniCluster configuration when using LocalExecutionEnvironment

2020-05-24 Post 月月
Hello,
When running the project with maven in single-machine mode, a MiniCluster is started automatically.
May I ask whether, in this situation, the default configuration is one JobManager and one TaskManager?

I looked through the documentation and could not find anything about this.

Thanks!


Reply: How to ensure the broadcast stream arrives before the data stream?

2020-05-24 Post 1048262223
Hello, my understanding is as follows.
Broadcast streams are generally used to cut down on lookups of external configuration data and improve performance, so if that is your scenario, here is an approach I have used in production.
You can first load the configuration once in the open() method of the normal processing stream, and later use the data from the broadcast stream to update that configuration when it changes. If certain records absolutely must not be processed before a given broadcast configuration update has arrived, you can use the state-based buffering approach suggested above.


-- Original message --
From: Yun Gao