The following is the environment I use:
1. flink.version: 1.9.0
2. java version "1.8.0_212"
3. scala version: 2.11.12
When I wrote the following code in the scala programming language, I found
the following error:
// set up the batch execution environment
val bbSettings =
You have pushed Flink's checkpointing system far past its upper limits. Try to
distribute events evenly over time by adding some form of controlled back
pressure after receiving data from the Kinesis streams. Otherwise the spike
arriving within the 5-second window will always create problems. Tomorrow
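A minimal sketch of the controlled back pressure suggested above: a token-bucket-style rate limiter placed between the Kinesis consumer and downstream processing. The `RateLimiter` class and its parameters are illustrative assumptions, not Flink or Kinesis APIs.

```scala
// Sketch of controlled back pressure: a rate limiter that smooths a burst of
// events over time instead of forwarding the spike downstream at once.
// All names here are illustrative, not Flink APIs.
final class RateLimiter(permitsPerSecond: Double) {
  private val intervalNanos: Long = (1e9 / permitsPerSecond).toLong
  private var nextFreeAt: Long = System.nanoTime()

  // Blocks just long enough to keep the average rate at permitsPerSecond.
  def acquire(): Unit = synchronized {
    val now = System.nanoTime()
    if (nextFreeAt > now) {
      val sleepNanos = nextFreeAt - now
      Thread.sleep(sleepNanos / 1000000, (sleepNanos % 1000000).toInt)
    }
    nextFreeAt = math.max(nextFreeAt, now) + intervalNanos
  }
}

object RateLimiterDemo {
  def main(args: Array[String]): Unit = {
    val limiter = new RateLimiter(permitsPerSecond = 100.0)
    val start = System.nanoTime()
    (1 to 20).foreach(_ => limiter.acquire()) // a "spike" of 20 events
    val elapsedMs = (System.nanoTime() - start) / 1000000
    // 20 permits at 100/s should take at least ~190 ms after the first one.
    println(s"spread over >= 150ms: ${elapsedMs >= 150}")
  }
}
```

In a real job the `acquire()` call would sit in (or right after) the source, so the downstream operators see an even rate instead of the 5-second spike.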
I'm trying to set up a 3 node Flink cluster (version 1.9) on the following
machines:
Node 1 (Master): 4 GB (3.8 GB), Core2 Duo 2.80GHz, Ubuntu 16.04 LTS
Node 2 (Slave): 16 GB, Core i7 3.40GHz, Ubuntu 16.04 LTS
Node 3 (Slave): 16 GB, Core i7 3.40GHz, Ubuntu 16.04 LTS
I have followed the
I want to implement grouping sets in streaming. I am new to Flink SQL. I
want to find an example that teaches me how to define my own rule and
implement the corresponding operator. Can anyone give me any suggestions?
Still getting the same error message using your command. Which Maven
version are you using?
On Tue, Sep 10, 2019 at 6:39 PM Debasish Ghosh wrote:
> I could build using the following command ..
>
> mvn clean install -Dcheckstyle.skip -DskipTests -Dscala-2.12
> -Drat.skip=true
>
> regards.
>
> On
Hi,
that would be regular SQL cast syntax:
SELECT a, b, c, CAST(eventTime AS TIMESTAMP) FROM ...
On Tue, Sep 10, 2019 at 6:07 PM Niels Basjes wrote:
> Hi.
>
> Can you give me an example of the actual syntax of such a cast?
>
> On Tue, 10 Sep 2019, 16:30 Fabian Hueske wrote:
>
>> Hi
Hi there, I have the following use case: I get transaction logs from multiple servers. Each server puts its logs into its own Kafka partition, so that within each partition the elements are monotonically ordered by time. Within the stream of transactions, we have some special events. Let's call
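The per-partition ordering invariant described above can be illustrated with a small standalone check; the `Tx` type and the sample data below are made up for illustration.

```scala
// Sketch of the invariant: within each Kafka partition, timestamps are
// monotonically non-decreasing, while the merged stream across partitions
// is not globally ordered.
object PartitionOrdering {
  final case class Tx(partition: Int, timestampMs: Long)

  // True if every partition's records, in arrival order, never go back in time.
  def monotonicPerPartition(log: Seq[Tx]): Boolean =
    log.groupBy(_.partition).values.forall { txs =>
      txs.map(_.timestampMs).sliding(2).forall {
        case Seq(a, b) => a <= b
        case _         => true // single-element partitions are trivially ordered
      }
    }

  def main(args: Array[String]): Unit = {
    // Interleaved arrival from two partitions; each partition is ordered,
    // even though the merged sequence is not globally sorted.
    val interleaved = Seq(Tx(0, 100), Tx(1, 50), Tx(0, 120), Tx(1, 90), Tx(0, 120))
    println(monotonicPerPartition(interleaved)) // true
    println(monotonicPerPartition(Seq(Tx(0, 100), Tx(0, 80)))) // false
  }
}
```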
Hi Niels,
I think (not 100% sure) you could also cast the event time attribute to
TIMESTAMP before you emit the table.
This should remove the event time property (and thereby the
TimeIndicatorTypeInfo), and you wouldn't need to fiddle with the output
types.
Best, Fabian
On Wed, Aug 21, 2019 at
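As a plain-JVM illustration (outside Flink) of what casting the event-time attribute to TIMESTAMP does at the value level: an epoch-millisecond long maps to a `java.sql.Timestamp`. This is a standalone sketch, not Flink API code, and the sample value is made up.

```scala
// Standalone analogy for CAST(eventTime AS TIMESTAMP): the event-time long
// (epoch milliseconds) becomes a plain SQL TIMESTAMP value, with no special
// time-attribute type attached any more.
import java.sql.Timestamp

object CastSketch {
  def main(args: Array[String]): Unit = {
    val eventTimeMs = 1568131200000L // hypothetical event time in epoch millis
    val ts: Timestamp = new Timestamp(eventTimeMs)
    println(ts.getTime == eventTimeMs) // true: the instant round-trips
  }
}
```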
The task count seems to be huge for the mentioned resource count. To rule
out the possibility of an issue with the state backend, can you start
writing the sink data to a discarding sink, i.e., a sink that ignores the
data? Then try whether you can run it for a longer duration without any
issue. You can start decreasing the
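The "ignore" sink suggested above could be sketched as below. Flink itself ships something similar (`DiscardingSink`); this is a standalone illustration with a made-up `Sink` trait, useful only to show the idea of ruling out the real sink as the bottleneck.

```scala
// Sketch of a sink that drops every record. Swapping the real sink for this
// isolates the rest of the pipeline (and the state backend) for debugging.
object DiscardDemo {
  trait Sink[T] { def invoke(value: T): Unit }

  // Counts invocations but otherwise discards the data.
  final class DiscardingSink[T] extends Sink[T] {
    var received: Long = 0L
    override def invoke(value: T): Unit = { received += 1 } // drop the record
  }

  def main(args: Array[String]): Unit = {
    val sink = new DiscardingSink[String]
    (1 to 1000).foreach(i => sink.invoke(s"record-$i"))
    println(sink.received) // 1000 records seen, none written anywhere
  }
}
```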
I could build using the following command ..
mvn clean install -Dcheckstyle.skip -DskipTests -Dscala-2.12 -Drat.skip=true
regards.
On Tue, Sep 10, 2019 at 9:06 PM Yuval Itzchakov wrote:
> Hi,
> I'm trying to build Flink from source. I'm using Maven 3.6.1 and executing
> the following command:
@Rohan - I am streaming data to kafka sink after applying business logic.
For checkpoint, I am using s3 as a distributed file system. For local
recovery, I am using Optimized iops ebs volume.
@Vijay - I forgot to mention that the incoming data volume is ~10 to 21 GB per
minute of compressed (lz4) Avro
Hi Hanan,
BroadcastState and CoMap (or CoProcessFunction) have both advantages and
disadvantages.
Broadcast state is better if the broadcast side is small (only a low data
rate).
Its records are replicated to each instance but the other (larger) stream
does not need to be partitioned and stays
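The trade-off above can be sketched in plain Scala: the small broadcast side is copied to every parallel instance, while the large stream keeps its existing partitioning. All names here (`Rule`, `Instance`) are illustrative, not Flink APIs.

```scala
// Sketch of the broadcast-state trade-off: rules are replicated to every
// parallel instance; the high-rate main stream is never reshuffled.
object BroadcastSketch {
  final case class Rule(name: String)

  // Each parallel instance keeps its own full copy of the broadcast rules.
  final class Instance {
    var rules: Vector[Rule] = Vector.empty
    def onBroadcast(r: Rule): Unit = rules :+= r
    def process(record: String): Int = rules.length // e.g. match record against rules
  }

  def main(args: Array[String]): Unit = {
    val parallelism = 4
    val instances = Vector.fill(parallelism)(new Instance)

    // Low-rate broadcast side: every rule goes to every instance.
    val rules = Seq(Rule("fraud"), Rule("audit"))
    for (r <- rules; i <- instances) i.onBroadcast(r)

    // High-rate main side stays on its original partition (no reshuffle).
    val processed = (1 to 8).map(i => instances(i % parallelism).process(s"rec-$i"))
    println(instances.forall(_.rules.size == rules.size)) // true: fully replicated
    println(processed.sum) // 16: 8 records, each matched against 2 rules
  }
}
```

The replication cost grows with parallelism times broadcast-state size, which is why a CoProcessFunction on a partitioned stream can be preferable when the "rules" side is large.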
Hi.
Can you give me an example of the actual syntax of such a cast?
On Tue, 10 Sep 2019, 16:30 Fabian Hueske wrote:
> Hi Niels,
>
> I think (not 100% sure) you could also cast the event time attribute to
> TIMESTAMP before you emit the table.
> This should remove the event time property (and
Hi,
I'm trying to build Flink from source. I'm using Maven 3.6.1 and executing
the following command:
mvn clean install -DskipTests -Dfast -Dscala-2.12
Running both on the master branch and the release-1.9.0 tag, I get the
following error:
[ERROR] Failed to execute goal
Never mind, it turns out it was an error on my part. Somehow I managed to add
an "S" to an attribute by mistake :)
On Tue, Sep 10, 2019 at 7:29 PM Yuval Itzchakov wrote:
> Still getting the same error message using your command. Which Maven
> version are you using?
>
> On Tue, Sep 10, 2019 at 6:39
I managed to find what was going wrong. I will write it here just for the
record.
First, the master machine could not log in automatically to itself, so I had
to fix the permissions:
chmod og-wx ~/.ssh/authorized_keys
chmod 750 $HOME
Then I set the value of "mesos.resourcemanager.tasks.cpus" to
Version: Flink 1.7.1
OS: CentOS 7
start-cluster.sh was executed before submitting the job
Job submission hostname: test04
When I submit a Flink job with the following command, I always get the error below. Could anyone help me find the cause? Thanks.
$ bin/flink run -m test04:8081 -c
org.apache.flink.quickstart.factory.FlinkConsumerKafkaSinkToKuduMainClass
flink-scala-project1.jar
Starting execution of program
maqy
From: "Jimmy Wong" https://zhuanlan.zhihu.com/p/77677075
maqy
Yes, that was the problem. I found that the jar was bundled inside the fat jar but still could not be found; putting the jar in the Flink lib directory fixed it, which is strange.
> On Sep 11, 2019, at 9:35 AM, Dian Fu wrote:
>
> From your error, Kafka010TableSourceSinkFactory is not on the classpath. You need to add the Kafka
> connector jar (for 0.10, flink-connector-kafka-0.10_2.11 or flink-connector-kafka-0.10_2.12) to your dependencies.
>
>
>> On Sep 10, 2019, at 12:31 PM, 越张 wrote:
>>
>> Code:
>>
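For reference, a hypothetical POM fragment for the connector jar mentioned above; the version must match your Flink distribution (1.9.0 and the Scala 2.11 build are assumed here):

```xml
<!-- Hypothetical dependency for the Kafka 0.10 connector; adjust the
     Flink version and Scala suffix to match your cluster. -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-kafka-0.10_2.11</artifactId>
  <version>1.9.0</version>
</dependency>
```

Note that if the jar is shaded into a fat jar, the connector's service-loader metadata can be lost, which is consistent with the fix of placing the jar directly in the Flink lib directory.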
The Blink documentation describes an EMIT strategy; syntax like WITH DELAY '1' MINUTE BEFORE WATERMARK or EMIT
WITHOUT DELAY AFTER WATERMARK can be used to control window triggering.
But when I use this syntax, the job fails with a SQL parse error. Is there any way to control window triggering in SQL?
Table result = tEnv.sqlQuery("select " +
"count(*) " +
"from dept group by tumble(crt_time,
on 2019/9/10 13:47, 蒋涛涛 wrote:
Attempts so far:
1. Manually migrating high-IO tasks to other machines, but YARN task placement is fairly random, so this only works occasionally.
2. No SSDs are available at the moment, only ordinary SATA disks. Two more disks were added to raise disk IO capacity, but the per-disk IO bottleneck for a single task remains.
Are there any other strategies that could solve or mitigate this?
It seems the tricks to improve RocksDB's throughput might be helpful.
With writes and reads accessing mostly the recent data, our goal