(No subject)

2021-06-19 Post by 田磊
I'm running a Flink job that reads HBase data. The Flink UI shows the job as finished, with 0 tasks running, but the YARN UI still shows the application as RUNNING and it never ends; I have to kill it manually. What could be going on?


totorobabyfans
Email: totorobabyf...@163.com

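When the Flink UI says finished but the YARN application lingers, the standard YARN CLI can confirm and clean up the stuck application by hand. The commands below are standard `yarn` client invocations; the application id is a placeholder, not a value from this mail, and the guard keeps the sketch runnable on machines without a YARN client:

```shell
# Inspect YARN for the lingering application, then kill it manually.
# Guarded so the snippet is a no-op where no yarn client is installed.
if command -v yarn >/dev/null 2>&1; then
  yarn application -list -appStates RUNNING      # find the stuck application id
  # yarn application -kill application_1624000000000_0001   # placeholder id
fi
status="yarn check done"
echo "$status"
```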

flink-1.13.1 sql error

2021-06-19 Post by kcz
The SQL:
CREATE TABLE user_behavior (
  user_id BIGINT,
  item_id BIGINT,
  category_id BIGINT,
  behavior STRING,
  ts STRING
) WITH (
  'connector' = 'kafka',
  'topic' = 'user_behavior',
  'scan.startup.mode' = 'latest-offset',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'json'
);


select * from user_behavior;



pom.xml:
flink.version=1.13.1
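For reference, a minimal dependency sketch matching the DDL above (Scala 2.11 is an assumption; a missing planner, connector, or format dependency is a common cause of SQL errors on 1.13.x — in 1.13 the planner artifact is still named `flink-table-planner-blink`):

```xml
<properties>
  <flink.version>1.13.1</flink.version>
  <scala.binary.version>2.11</scala.binary.version>
</properties>
<dependencies>
  <!-- Table API planner (Blink planner artifact name in 1.13.x) -->
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-planner-blink_${scala.binary.version}</artifactId>
    <version>${flink.version}</version>
  </dependency>
  <!-- Kafka connector and JSON format used by the WITH clause above -->
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka_${scala.binary.version}</artifactId>
    <version>${flink.version}</version>
  </dependency>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-json</artifactId>
    <version>${flink.version}</version>
  </dependency>
</dependencies>
```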


Re: flink sql job submitted to YARN reports an error

2021-06-19 Post by JasonLee
hi

Run export HADOOP_CLASSPATH=`hadoop classpath` first and it should work.
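In shell terms the tip expands to something like the sketch below: populate HADOOP_CLASSPATH from `hadoop classpath` before invoking the Flink client so it can find the YARN and HDFS jars. The submit command is a hypothetical example, and the guard keeps the sketch runnable where no hadoop client is installed:

```shell
# Export the Hadoop jars onto Flink's classpath before submitting to YARN.
if command -v hadoop >/dev/null 2>&1; then
  export HADOOP_CLASSPATH=$(hadoop classpath)
fi
# ./bin/flink run -t yarn-per-job job.jar   # hypothetical submit command
result="classpath step done"
echo "$result"
```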



-
Best Wishes
JasonLee
--
Sent from: http://apache-flink.147419.n8.nabble.com/


Re: Unsubscribe

2021-06-19 Post by JasonLee
hi

To unsubscribe, send an email to user-zh-unsubscr...@flink.apache.org.




-
Best Wishes
JasonLee
--
Sent from: http://apache-flink.147419.n8.nabble.com/


Flink v1.12.2 Kubernetes Session Mode cannot mount log4j.properties from a ConfigMap

2021-06-19 Post by Chenyu Zheng
Hi developers,
I have recently been trying to start Flink in Kubernetes Session Mode, but found that the log4j.properties from my ConfigMap is never mounted. Is this a bug? Is there a way to work around it and mount log4j.properties dynamically?
My YAML:
apiVersion: v1
data:
  flink-conf.yaml: |-
    taskmanager.numberOfTaskSlots: 1
    blob.server.port: 6124
    kubernetes.rest-service.exposed.type: ClusterIP
    kubernetes.jobmanager.cpu: 1.00
    high-availability.storageDir: s3p://hulu-caposv2-flink-s3-bucket/session-cluster-test/ha-backup/
    queryable-state.proxy.ports: 6125
    kubernetes.service-account: stream-app
    high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
    jobmanager.memory.process.size: 1024m
    taskmanager.memory.process.size: 1024m
    kubernetes.taskmanager.annotations: cluster-autoscaler.kubernetes.io/safe-to-evict:false
    kubernetes.namespace: test123
    restart-strategy: fixed-delay
    restart-strategy.fixed-delay.attempts: 5
    kubernetes.taskmanager.cpu: 1.00
    state.backend: filesystem
    parallelism.default: 4
    kubernetes.container.image: cubox.prod.hulu.com/proxy/flink:1.12.2-scala_2.11-java8-stdout7
    kubernetes.taskmanager.labels: capos_id:session-cluster-test,stream-component:jobmanager
    state.checkpoints.dir: s3p://hulu-caposv2-flink-s3-bucket/session-cluster-test/checkpoints/
    kubernetes.cluster-id: session-cluster-test
    kubernetes.jobmanager.annotations: cluster-autoscaler.kubernetes.io/safe-to-evict:false
    state.savepoints.dir: s3p://hulu-caposv2-flink-s3-bucket/session-cluster-test/savepoints/
    restart-strategy.fixed-delay.delay: 15s
    taskmanager.rpc.port: 6122
    jobmanager.rpc.address: session-cluster-test-flink-jobmanager
    kubernetes.jobmanager.labels: capos_id:session-cluster-test,stream-component:jobmanager
    jobmanager.rpc.port: 6123
  log4j.properties: |-
    logger.kafka.name = org.apache.kafka
    logger.hadoop.level = INFO
    appender.rolling.type = RollingFile
    appender.rolling.filePattern = ${sys:log.file}.%i
    appender.rolling.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
    logger.netty.name = org.apache.flink.shaded.akka.org.jboss.netty.channel.DefaultChannelPipeline
    rootLogger = INFO, rolling
    logger.akka.name = akka
    appender.rolling.strategy.type = DefaultRolloverStrategy
    logger.akka.level = INFO
    appender.rolling.append = false
    logger.hadoop.name = org.apache.hadoop
    appender.rolling.fileName = ${sys:log.file}
    appender.rolling.policies.type = Policies
    rootLogger.appenderRef.rolling.ref = RollingFileAppender
    logger.kafka.level = INFO
    appender.rolling.name = RollingFileAppender
    appender.rolling.layout.type = PatternLayout
    appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
    appender.rolling.policies.size.size = 100MB
    appender.rolling.strategy.max = 10
    logger.netty.level = OFF
    logger.zookeeper.name = org.apache.zookeeper
    logger.zookeeper.level = INFO
kind: ConfigMap
metadata:
  labels:
    app: session-cluster-test
    capos_id: session-cluster-test
  name: session-cluster-test-flink-config
  namespace: test123

---

apiVersion: batch/v1
kind: Job
metadata:
  labels:
    capos_id: session-cluster-test
  name: session-cluster-test-flink-startup
  namespace: test123
spec:
  backoffLimit: 6
  completions: 1
  parallelism: 1
  template:
    metadata:
      annotations:
        caposv2.prod.hulu.com/streamAppSavepointId: "0"
        cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
      creationTimestamp: null
      labels:
        capos_id: session-cluster-test
        stream-component: start-up
    spec:
      containers:
      - command:
        - ./bin/kubernetes-session.sh
        - -Dkubernetes.cluster-id=session-cluster-test
        image: cubox.prod.hulu.com/proxy/flink:1.12.2-scala_2.11-java8-stdout7
        imagePullPolicy: IfNotPresent
        name: flink-startup
        resources: {}
        securityContext:
          runAsUser: 
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /opt/flink/conf
          name: flink-config-volume
      dnsPolicy: ClusterFirst
      restartPolicy: Never
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: stream-app
      serviceAccountName: stream-app
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: flink-conf.yaml
            path: flink-conf.yaml
          - key: log4j.properties
            path: log4j.properties
          name: session-cluster-test-flink-config
        name: flink-config-volume
  ttlSecondsAfterFinished: 86400

The jobmanager container that gets started does not have log4j.properties in its volume mount:
volumes:
- configMap:
    defaultMode: 420
    items:
    - key: flink-conf.yaml
      path: flink-conf.yaml
    name: flink-config-session-cluster-test
  name: flink-config-volume

And the conf directory is indeed missing the log configuration.
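A quick way to confirm which keys actually reached the generated ConfigMap and the jobmanager pod. The kubectl invocations are standard; the namespace, ConfigMap name, and deployment name are taken from the YAML above and are assumptions outside that cluster, so the sketch is guarded to be a no-op without access to it:

```shell
# Guarded: only runs against a cluster where the posted namespace exists.
if command -v kubectl >/dev/null 2>&1 && kubectl get ns test123 >/dev/null 2>&1; then
  # Which data keys ended up in the ConfigMap Flink generated?
  kubectl -n test123 get configmap flink-config-session-cluster-test -o yaml \
    | grep -E '^  (flink-conf\.yaml|log4j\.properties):'
  # What is actually on disk in the jobmanager pod?
  kubectl -n test123 exec deploy/session-cluster-test -- ls /opt/flink/conf
fi
checked="configmap check done"
echo "$checked"
```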

Unsubscribe

2021-06-19 Post by Gauler Tan