[jira] [Created] (FLINK-14468) Update Kubernetes docs
Maximilian Bode created FLINK-14468:
---------------------------------------

Summary: Update Kubernetes docs
Key: FLINK-14468
URL: https://issues.apache.org/jira/browse/FLINK-14468
Project: Flink
Issue Type: Task
Components: Documentation
Reporter: Maximilian Bode

Two minor improvements to the documented Kubernetes resource definitions:
* avoid referencing the deprecated extensions/v1beta1 Deployment API
* run unprivileged

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
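A minimal sketch of what both fixes look like in a manifest; the resource name, image, and UID below are placeholders, not taken from the Flink docs:

```yaml
apiVersion: apps/v1            # not the deprecated extensions/v1beta1
kind: Deployment
metadata:
  name: flink-jobmanager       # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flink
  template:
    metadata:
      labels:
        app: flink
    spec:
      securityContext:
        runAsNonRoot: true     # run unprivileged
        runAsUser: 9999        # placeholder non-root UID
      containers:
        - name: jobmanager
          image: flink:latest  # placeholder image
          securityContext:
            allowPrivilegeEscalation: false
```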
[jira] [Created] (FLINK-7502) PrometheusReporter improvements
Maximilian Bode created FLINK-7502:
-----------------------------------

Summary: PrometheusReporter improvements
Key: FLINK-7502
URL: https://issues.apache.org/jira/browse/FLINK-7502
Project: Flink
Issue Type: Improvement
Components: Metrics
Affects Versions: 1.4.0
Reporter: Maximilian Bode
Assignee: Maximilian Bode
Priority: Minor

* do not throw exceptions when a metric is registered a second time
* allow port ranges for setups where multiple reporters run on the same host (e.g. one TaskManager and one JobManager)
* drop the nanohttpd dependency; the [Prometheus JVM client|https://github.com/prometheus/client_java] now includes a minimal HTTP server
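The port-range idea can be sketched as follows. This is an illustrative helper using only java.net, not the reporter's actual code; a real reporter would hand the chosen port to the Prometheus client's HTTP server rather than keep the probe socket:

```java
import java.io.IOException;
import java.net.ServerSocket;

public class PortRangePicker {
    // Try each port in [from, to] and return the first one that can be bound.
    // Illustrative sketch: the reporter itself would start its HTTP server on
    // the returned port instead of discarding the probe socket.
    public static int pickPort(int from, int to) throws IOException {
        for (int port = from; port <= to; port++) {
            try (ServerSocket socket = new ServerSocket(port)) {
                return port; // bind succeeded; socket closed by try-with-resources
            } catch (IOException ignored) {
                // port already in use (e.g. by another reporter), try the next one
            }
        }
        throw new IOException("No free port in range " + from + "-" + to);
    }
}
```

With one TaskManager and one JobManager on the same host, both can be configured with the same range and will simply end up on adjacent ports.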
[jira] [Created] (FLINK-6781) Make fetch size configurable in JDBCInputFormat
Maximilian Bode created FLINK-6781:
-----------------------------------

Summary: Make fetch size configurable in JDBCInputFormat
Key: FLINK-6781
URL: https://issues.apache.org/jira/browse/FLINK-6781
Project: Flink
Issue Type: New Feature
Components: Batch Connectors and Input/Output Formats
Affects Versions: 1.2.1
Reporter: Maximilian Bode
Assignee: Maximilian Bode
Priority: Minor

For batch jobs that read from large tables, it is useful to be able to configure the SQL statement's fetch size. In particular, for Oracle's JDBC driver the default fetch size is 10.
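The JDBC mechanism involved is Statement.setFetchSize, a standard hint telling the driver how many rows to transfer per round trip. A hedged sketch of how the input format could apply a configured value (the method and class names here are hypothetical, not the JDBCInputFormat API):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class FetchSizeExample {
    // Hypothetical helper: open a query with an explicit fetch size.
    // With Oracle's default of 10, a scan of millions of rows incurs a
    // network round trip for every 10 rows; a larger hint amortizes that.
    public static ResultSet openCursor(Connection conn, String query, int fetchSize)
            throws SQLException {
        if (fetchSize <= 0) {
            throw new IllegalArgumentException("fetchSize must be positive, got " + fetchSize);
        }
        PreparedStatement stmt = conn.prepareStatement(query);
        stmt.setFetchSize(fetchSize); // standard JDBC hint, honored by most drivers
        return stmt.executeQuery();
    }
}
```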
[jira] [Created] (FLINK-3905) Add KafkaOutputFormat (DataSet API)
Maximilian Bode created FLINK-3905:
-----------------------------------

Summary: Add KafkaOutputFormat (DataSet API)
Key: FLINK-3905
URL: https://issues.apache.org/jira/browse/FLINK-3905
Project: Flink
Issue Type: New Feature
Components: Kafka Connector
Affects Versions: 1.0.3
Reporter: Maximilian Bode
Assignee: Maximilian Bode

Right now, Flink can ingest records from and write records to Kafka in the DataStream API, via the {{FlinkKafkaConsumer08}} and {{FlinkKafkaProducer08}} and the corresponding classes for Kafka 0.9. In Flink batch jobs, interaction with Kafka is currently not supported.

If there is an easy way to create an inverse to the OutputFormatSinkFunction, something like a SinkFunctionOutputFormat, this might be the way to go?
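The proposed adapter direction can be sketched as below. The interfaces here are trimmed local stand-ins, not Flink's real ones (org.apache.flink.api.common.io.OutputFormat and the streaming SinkFunction carry more lifecycle methods such as configure and richer open signatures), and the SinkFunctionOutputFormat name is the issue's own suggestion, not an existing class:

```java
import java.util.ArrayList;
import java.util.List;

public class SinkFunctionOutputFormatSketch {
    // Trimmed stand-ins for the Flink interfaces, just enough to show the bridge.
    interface SinkFunction<T> {
        void invoke(T value) throws Exception;
    }

    interface OutputFormat<T> {
        void open(int taskNumber, int numTasks) throws Exception;
        void writeRecord(T record) throws Exception;
        void close() throws Exception;
    }

    // The adapter: forwards each record of a batch output to the wrapped sink,
    // mirroring how OutputFormatSinkFunction bridges in the other direction.
    static class SinkFunctionOutputFormat<T> implements OutputFormat<T> {
        private final SinkFunction<T> sink;

        SinkFunctionOutputFormat(SinkFunction<T> sink) {
            this.sink = sink;
        }

        public void open(int taskNumber, int numTasks) {
            // sink setup (e.g. creating the Kafka producer) would go here
        }

        public void writeRecord(T record) throws Exception {
            sink.invoke(record);
        }

        public void close() {
            // flush and close the sink (e.g. the Kafka producer)
        }
    }
}
```

Wrapped around a Kafka producer sink, this would let a DataSet program write to Kafka through the ordinary output-format path.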
[jira] [Created] (FLINK-3836) Change Histogram to enable Long counters
Maximilian Bode created FLINK-3836:
-----------------------------------

Summary: Change Histogram to enable Long counters
Key: FLINK-3836
URL: https://issues.apache.org/jira/browse/FLINK-3836
Project: Flink
Issue Type: Improvement
Components: Core
Affects Versions: 1.0.2
Reporter: Maximilian Bode
Priority: Minor

Change flink/flink-core/src/main/java/org/apache/flink/api/common/accumulators/Histogram.java to enable Long counts instead of Integer. In particular, change the TreeMap to map values to Long counts.
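A minimal sketch of the proposed change, assuming the accumulator's backing map becomes TreeMap<Integer, Long>; Flink's actual Histogram also implements the Accumulator interface (merge, resetLocal, etc.), which is omitted here:

```java
import java.util.Map;
import java.util.TreeMap;

public class LongHistogram {
    // Long-valued counts so a frequently seen value can exceed
    // Integer.MAX_VALUE occurrences without overflowing.
    private final TreeMap<Integer, Long> counts = new TreeMap<>();

    public void add(int value) {
        counts.merge(value, 1L, Long::sum);
    }

    public long getCount(int value) {
        return counts.getOrDefault(value, 0L);
    }

    public Map<Integer, Long> getLocalValue() {
        return counts;
    }
}
```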