[GitHub] flink pull request: [FLINK-3399] CountWithTimeoutTrigger
GitHub user shikhar opened a pull request: https://github.com/apache/flink/pull/1636

[FLINK-3399] CountWithTimeoutTrigger: a trigger that fires once the number of elements in a pane reaches the given count or the timeout expires, whichever happens first.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/shikhar/flink count-with-timeout-trigger

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/flink/pull/1636.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1636

commit 72d8c902d42d0709cdf27897ea75dab826fe1aa4
Author: shikhar
Date: 2016-02-15T04:38:50Z

    [FLINK-3399] CountWithTimeoutTrigger

---
If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA.
---
[jira] [Commented] (FLINK-3399) Count with timeout trigger
[ https://issues.apache.org/jira/browse/FLINK-3399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15146900#comment-15146900 ]

ASF GitHub Bot commented on FLINK-3399:
---

GitHub user shikhar opened a pull request: https://github.com/apache/flink/pull/1636

[FLINK-3399] CountWithTimeoutTrigger: a trigger that fires once the number of elements in a pane reaches the given count or the timeout expires, whichever happens first.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/shikhar/flink count-with-timeout-trigger

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/flink/pull/1636.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1636

commit 72d8c902d42d0709cdf27897ea75dab826fe1aa4
Author: shikhar
Date: 2016-02-15T04:38:50Z

    [FLINK-3399] CountWithTimeoutTrigger

> Count with timeout trigger
> --
>
> Key: FLINK-3399
> URL: https://issues.apache.org/jira/browse/FLINK-3399
> Project: Flink
> Issue Type: Improvement
> Reporter: Shikhar Bhushan
> Priority: Minor
>
> I created an implementation of a trigger that I'd like to contribute,
> https://gist.github.com/shikhar/2cb9f1b792be31b7c16e
> An example application - if a sink function operates more efficiently if it
> is writing in a batched fashion, then the windowing mechanism + this trigger
> can be used. Count to have an upper bound on batch size & better control on
> memory usage, and timeout to ensure timeliness of the outputs.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (FLINK-3399) Count with timeout trigger
Shikhar Bhushan created FLINK-3399:
--

Summary: Count with timeout trigger
Key: FLINK-3399
URL: https://issues.apache.org/jira/browse/FLINK-3399
Project: Flink
Issue Type: Improvement
Reporter: Shikhar Bhushan
Priority: Minor

I created an implementation of a trigger that I'd like to contribute, https://gist.github.com/shikhar/2cb9f1b792be31b7c16e

An example application - if a sink function operates more efficiently if it is writing in a batched fashion, then the windowing mechanism + this trigger can be used. Count to have an upper bound on batch size & better control on memory usage, and timeout to ensure timeliness of the outputs.
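The count-or-timeout semantics described in this issue can be sketched independently of Flink's Trigger API. The following is a minimal Python sketch (class and method names are illustrative, not Flink's): buffer elements and flush a batch when either a maximum count is reached or a timeout has elapsed since the first buffered element, whichever happens first.

```python
import time

class CountWithTimeoutBatcher:
    """Flushes a batch when `max_count` elements arrive or `timeout_s`
    seconds pass since the first buffered element, whichever comes first."""

    def __init__(self, max_count, timeout_s, clock=time.monotonic):
        self.max_count = max_count
        self.timeout_s = timeout_s
        self.clock = clock          # injectable for testing
        self.buffer = []
        self.first_seen = None      # arrival time of oldest buffered element

    def add(self, element):
        """Returns a flushed batch if the count bound was hit, else None."""
        if not self.buffer:
            self.first_seen = self.clock()
        self.buffer.append(element)
        if len(self.buffer) >= self.max_count:
            return self._flush()
        return None

    def poll(self):
        """Called periodically; a real Flink trigger would instead register
        a processing-time timer and flush in the timer callback."""
        if self.buffer and self.clock() - self.first_seen >= self.timeout_s:
            return self._flush()
        return None

    def _flush(self):
        batch, self.buffer, self.first_seen = self.buffer, [], None
        return batch
```

In the actual trigger contributed here, the element count would live in window state and the timeout would be a timer registered through the trigger context, so this sketch only illustrates the flush condition, not the integration.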
[jira] [Updated] (FLINK-3398) Flink Kafka consumer should support auto-commit opt-outs
[ https://issues.apache.org/jira/browse/FLINK-3398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shikhar Bhushan updated FLINK-3398:
---

Description:
Currently the Kafka source will commit consumer offsets to Zookeeper, either upon a checkpoint if checkpointing is enabled, otherwise periodically based on {{auto.commit.interval.ms}}

It should be possible to opt-out of committing consumer offsets to Zookeeper. Kafka has this config as {{auto.commit.enable}} (0.8) and {{enable.auto.commit}} (0.9).

was:
Currently the Kafka source will commit consumer offsets to Zookeeper, either upon a checkpoint if checkpointing is enabled, otherwise periodically based on {{auto.commit.interval.ms}}

It should be possible to opt-out of committing consumer offsets to Zookeeper. Kafka has this config as 'auto.commit.enable' (0.8) and 'enable.auto.commit' (0.9).

> Flink Kafka consumer should support auto-commit opt-outs
> --
>
> Key: FLINK-3398
> URL: https://issues.apache.org/jira/browse/FLINK-3398
> Project: Flink
> Issue Type: Bug
> Reporter: Shikhar Bhushan
>
> Currently the Kafka source will commit consumer offsets to Zookeeper, either
> upon a checkpoint if checkpointing is enabled, otherwise periodically based
> on {{auto.commit.interval.ms}}
> It should be possible to opt-out of committing consumer offsets to Zookeeper.
> Kafka has this config as {{auto.commit.enable}} (0.8) and
> {{enable.auto.commit}} (0.9).
[jira] [Created] (FLINK-3398) Flink Kafka consumer should support auto-commit opt-outs
Shikhar Bhushan created FLINK-3398:
--

Summary: Flink Kafka consumer should support auto-commit opt-outs
Key: FLINK-3398
URL: https://issues.apache.org/jira/browse/FLINK-3398
Project: Flink
Issue Type: Bug
Reporter: Shikhar Bhushan

Currently the Kafka source will commit consumer offsets to Zookeeper, either upon a checkpoint if checkpointing is enabled, otherwise periodically based on {{auto.commit.interval.ms}}

It should be possible to opt-out of committing consumer offsets to Zookeeper. Kafka has this config as 'auto.commit.enable' (0.8) and 'enable.auto.commit' (0.9).
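The requested opt-out amounts to reading Kafka's auto-commit flag (which was renamed between 0.8 and 0.9, as noted in the issue) before committing any offsets. A minimal sketch of that gating logic follows; the property keys are Kafka's real config names, but the two functions themselves are hypothetical illustrations, not Flink or Kafka API:

```python
def auto_commit_enabled(props, kafka_version):
    """Reads Kafka's auto-commit flag from a consumer properties dict.
    The key is 'auto.commit.enable' in 0.8 and 'enable.auto.commit' in 0.9;
    Kafka treats the flag as true when it is absent."""
    key = "auto.commit.enable" if kafka_version.startswith("0.8") else "enable.auto.commit"
    return props.get(key, "true").lower() == "true"

def maybe_commit(props, kafka_version, commit_fn, offsets):
    """Commits offsets via commit_fn only if the user did not opt out.
    Returns True when a commit was issued."""
    if auto_commit_enabled(props, kafka_version):
        commit_fn(offsets)
        return True
    return False
```

In the consumer described here, the same check would guard both the checkpoint-driven commit path and the periodic commit thread.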
[jira] [Commented] (FLINK-3367) Annotate all user-facing API classes with @Public or @PublicEvolving
[ https://issues.apache.org/jira/browse/FLINK-3367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15146743#comment-15146743 ]

ASF GitHub Bot commented on FLINK-3367:
---

Github user asfgit closed the pull request at: https://github.com/apache/flink/pull/1606

> Annotate all user-facing API classes with @Public or @PublicEvolving
> --
>
> Key: FLINK-3367
> URL: https://issues.apache.org/jira/browse/FLINK-3367
> Project: Flink
> Issue Type: Task
> Affects Versions: 1.0.0
> Reporter: Fabian Hueske
> Assignee: Fabian Hueske
> Fix For: 1.0.0
>
> At the moment, only stable public classes are annotated with @Public. It is
> not possible to identify whether a non-annotated class is supposed to be
> API-facing or not.
> This issue proposes to annotate all API classes either with @Public or
> @PublicEvolving. Classes which are not annotated belong to Flink's internals.
[jira] [Created] (FLINK-3397) Failed streaming jobs should fall back to the most recent checkpoint/savepoint
Gyula Fora created FLINK-3397:
-

Summary: Failed streaming jobs should fall back to the most recent checkpoint/savepoint
Key: FLINK-3397
URL: https://issues.apache.org/jira/browse/FLINK-3397
Project: Flink
Issue Type: Improvement
Components: Streaming
Affects Versions: 1.0.0
Reporter: Gyula Fora
Priority: Minor

The current fallback behaviour in case of a streaming job failure is slightly counterintuitive: if a job fails, it falls back to the most recent checkpoint (if any) even if a more recent savepoint was taken. This means that savepoints are not regarded as checkpoints by the system, only as points from which a job can be manually restarted.

I suggest changing this so that savepoints are also regarded as checkpoints in case of a failure, and are then also used to automatically restore the streaming job.
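On failure, the proposed behaviour reduces to selecting the most recent completed snapshot regardless of whether it was an automatic checkpoint or a user-triggered savepoint. A small sketch of that selection (the dict shapes and field names are illustrative, not Flink's internal types):

```python
def latest_restore_point(checkpoints, savepoints):
    """Returns the most recent completed snapshot, treating savepoints as
    restore candidates alongside checkpoints; None if nothing completed."""
    candidates = [s for s in list(checkpoints) + list(savepoints) if s["completed"]]
    if not candidates:
        return None
    return max(candidates, key=lambda s: s["timestamp"])
```

Under the behaviour this issue describes as current, only the `checkpoints` list would be consulted, which is exactly the counterintuitive part: a newer savepoint would be silently ignored during automatic recovery.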
[jira] [Closed] (FLINK-3367) Annotate all user-facing API classes with @Public or @PublicEvolving
[ https://issues.apache.org/jira/browse/FLINK-3367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Fabian Hueske closed FLINK-3367.

Resolution: Done

Done with c9db63b935725368a5750e8d4dc94e6b015a1688, 53f8d773be5e1fd36e8675c2ca2520b6153febf3, e5f33b6d12d8f3d3d2d4bc216ce074d2605fc9ae

> Annotate all user-facing API classes with @Public or @PublicEvolving
> --
>
> Key: FLINK-3367
> URL: https://issues.apache.org/jira/browse/FLINK-3367
> Project: Flink
> Issue Type: Task
> Affects Versions: 1.0.0
> Reporter: Fabian Hueske
> Assignee: Fabian Hueske
> Fix For: 1.0.0
>
> At the moment, only stable public classes are annotated with @Public. It is
> not possible to identify whether a non-annotated class is supposed to be
> API-facing or not.
> This issue proposes to annotate all API classes either with @Public or
> @PublicEvolving. Classes which are not annotated belong to Flink's internals.
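The scheme adopted here, explicit stability tags on exported classes with everything unmarked treated as internal, can be illustrated outside Java. A Python analog using class decorators (the tag names mirror Flink's annotations; the decorators, helper, and example classes are all hypothetical, not Flink code):

```python
def public(cls):
    """Marks a class as stable public API (analog of Flink's @Public)."""
    cls._api_stability = "Public"
    return cls

def public_evolving(cls):
    """Marks a class as public but still evolving (analog of @PublicEvolving)."""
    cls._api_stability = "PublicEvolving"
    return cls

def stability(cls):
    """Unannotated classes belong to the internals by convention."""
    return getattr(cls, "_api_stability", "Internal")

@public
class DataSet: ...          # stable, user-facing

@public_evolving
class StateBackend: ...     # user-facing, API may still change

class RecordSerializer: ... # no tag: internal
```

The value of the convention is exactly what the issue states: the absence of a tag becomes meaningful, so tooling can flag accidental use of internals.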
[GitHub] flink pull request: [FLINK-3367] Add PublicEvolving and Internal a...
Github user fhueske commented on the pull request: https://github.com/apache/flink/pull/1606#issuecomment-183988942

Merging this
[jira] [Commented] (FLINK-3367) Annotate all user-facing API classes with @Public or @PublicEvolving
[ https://issues.apache.org/jira/browse/FLINK-3367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15146742#comment-15146742 ]

ASF GitHub Bot commented on FLINK-3367:
---

Github user fhueske commented on the pull request: https://github.com/apache/flink/pull/1606#issuecomment-183988942

Merging this

> Annotate all user-facing API classes with @Public or @PublicEvolving
> --
>
> Key: FLINK-3367
> URL: https://issues.apache.org/jira/browse/FLINK-3367
> Project: Flink
> Issue Type: Task
> Affects Versions: 1.0.0
> Reporter: Fabian Hueske
> Assignee: Fabian Hueske
> Fix For: 1.0.0
>
> At the moment, only stable public classes are annotated with @Public. It is
> not possible to identify whether a non-annotated class is supposed to be
> API-facing or not.
> This issue proposes to annotate all API classes either with @Public or
> @PublicEvolving. Classes which are not annotated belong to Flink's internals.
[GitHub] flink pull request: [FLINK-3367] Add PublicEvolving and Internal a...
Github user asfgit closed the pull request at: https://github.com/apache/flink/pull/1606
[jira] [Commented] (FLINK-3226) Translate optimized logical Table API plans into physical plans representing DataSet programs
[ https://issues.apache.org/jira/browse/FLINK-3226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15146694#comment-15146694 ]

ASF GitHub Bot commented on FLINK-3226:
---

Github user vasia commented on the pull request: https://github.com/apache/flink/pull/1632#issuecomment-183957839

Thanks for the review @twalthr! I've addressed your comments.

> Translate optimized logical Table API plans into physical plans representing
> DataSet programs
> -
>
> Key: FLINK-3226
> URL: https://issues.apache.org/jira/browse/FLINK-3226
> Project: Flink
> Issue Type: Sub-task
> Components: Table API
> Reporter: Fabian Hueske
> Assignee: Chengxiang Li
>
> This issue is about translating an (optimized) logical Table API (see
> FLINK-3225) query plan into a physical plan. The physical plan is a 1-to-1
> representation of the DataSet program that will be executed. This means:
> - Each Flink RelNode refers to exactly one Flink DataSet or DataStream operator.
> - All (join and grouping) keys of Flink operators are correctly specified.
> - The expressions which are to be executed in user-code are identified.
> - All fields are referenced with their physical execution-time index.
> - Flink type information is available.
> - Optional: Add physical execution hints for joins
>
> The translation should be the final part of Calcite's optimization process.
> For this task we need to:
> - implement a set of Flink DataSet RelNodes. Each RelNode corresponds to one
> Flink DataSet operator (Map, Reduce, Join, ...). The RelNodes must hold all
> relevant operator information (keys, user-code expression, strategy hints,
> parallelism).
> - implement rules to translate optimized Calcite RelNodes into Flink
> RelNodes. We start with a straight-forward mapping and later add rules that
> merge several relational operators into a single Flink operator, e.g., merge
> a join followed by a filter. Timo implemented some rules for the first SQL
> implementation which can be used as a starting point.
> - Integrate the translation rules into the Calcite optimization process
[GitHub] flink pull request: [FLINK-3226] Translate logical joins to physic...
Github user vasia commented on the pull request: https://github.com/apache/flink/pull/1632#issuecomment-183957839

Thanks for the review @twalthr! I've addressed your comments.
[jira] [Commented] (FLINK-2237) Add hash-based Aggregation
[ https://issues.apache.org/jira/browse/FLINK-2237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15146581#comment-15146581 ]

ASF GitHub Bot commented on FLINK-2237:
---

Github user ggevay commented on the pull request: https://github.com/apache/flink/pull/1517#issuecomment-183896027

I've added some more documentation.

> Add hash-based Aggregation
> --
>
> Key: FLINK-2237
> URL: https://issues.apache.org/jira/browse/FLINK-2237
> Project: Flink
> Issue Type: New Feature
> Reporter: Rafiullah Momand
> Assignee: Gabor Gevay
> Priority: Minor
>
> Aggregation functions at the moment are implemented in a sort-based way.
> How can we implement hash based Aggregation for Flink?
[GitHub] flink pull request: [FLINK-2237] [runtime] Add hash-based combiner...
Github user ggevay commented on the pull request: https://github.com/apache/flink/pull/1517#issuecomment-183896027

I've added some more documentation.
[jira] [Commented] (FLINK-3304) AvroOutputFormat.setSchema() doesn't work in yarn-cluster mode
[ https://issues.apache.org/jira/browse/FLINK-3304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15146670#comment-15146670 ]

ASF GitHub Bot commented on FLINK-3304:
---

Github user kl0u commented on the pull request: https://github.com/apache/flink/pull/1635#issuecomment-183938932

Thanks a lot @rmetzger

> AvroOutputFormat.setSchema() doesn't work in yarn-cluster mode
> --
>
> Key: FLINK-3304
> URL: https://issues.apache.org/jira/browse/FLINK-3304
> Project: Flink
> Issue Type: Bug
> Affects Versions: 1.0.0, 0.10.1
> Reporter: Sebastian Klemke
> Assignee: Klou
>
> Quoting flink cli (schema and names modified):
> "The program finished with the following exception:
> User-defined object org.apache.flink.api.java.io.AvroOutputFormat@5f253dfb
> (org.apache.flink.api.java.io.AvroOutputFormat) contains non-serializable
> field userDefinedSchema =
> {"type":"record","name":"Pojo","namespace":"com.example","fields":[{"name":"id","type":["null","string"],"default":null,"subtype":"objectid"}],"EntityVersion":"0.1.0"}
> org.apache.flink.api.common.operators.util.UserCodeObjectWrapper.<init>(UserCodeObjectWrapper.java:84)
> org.apache.flink.api.common.operators.GenericDataSinkBase.<init>(GenericDataSinkBase.java:68)
> org.apache.flink.api.java.operators.DataSink.translateToDataFlow(DataSink.java:258)
> org.apache.flink.api.java.operators.OperatorTranslation.translate(OperatorTranslation.java:64)
> org.apache.flink.api.java.operators.OperatorTranslation.translateToPlan(OperatorTranslation.java:49)
> org.apache.flink.api.java.ExecutionEnvironment.createProgramPlan(ExecutionEnvironment.java:939)
> org.apache.flink.api.java.ExecutionEnvironment.createProgramPlan(ExecutionEnvironment.java:907)
> org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:57)
> com.example.Tool.main(Tool.java:86)
> Shutting down YARN cluster"
[GitHub] flink pull request: FLINK-3304: Making the Avro Schema serializabl...
Github user kl0u commented on the pull request: https://github.com/apache/flink/pull/1635#issuecomment-183938932

Thanks a lot @rmetzger
[jira] [Commented] (FLINK-2719) ProcessFailureStreamingRecoveryITCase>AbstractProcessFailureRecoveryTest.testTaskManagerProcessFailure failed on Travis
[ https://issues.apache.org/jira/browse/FLINK-2719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15146641#comment-15146641 ]

Robert Metzger commented on FLINK-2719:
---

Again: https://s3.amazonaws.com/archive.travis-ci.org/jobs/109141895/log.txt

> ProcessFailureStreamingRecoveryITCase>AbstractProcessFailureRecoveryTest.testTaskManagerProcessFailure
> failed on Travis
> ---
>
> Key: FLINK-2719
> URL: https://issues.apache.org/jira/browse/FLINK-2719
> Project: Flink
> Issue Type: Bug
> Affects Versions: 0.10.0
> Reporter: Till Rohrmann
> Priority: Critical
> Labels: test-stability
> Fix For: 1.0.0
>
> The test case
> {{ProcessFailureStreamingRecoveryITCase>AbstractProcessFailureRecoveryTest.testTaskManagerProcessFailure}}
> failed on travis with the following exception
> {code}
> Failed tests:
> ProcessFailureStreamingRecoveryITCase>AbstractProcessFailureRecoveryTest.testTaskManagerProcessFailure:211
> The program encountered a FileNotFoundException : File does not exist:
> /tmp/cbe4a9aa-3b9a-455d-b7b4-a9abf7c2d9d5/03801d139e79e850249e386ffd89c13ca727bcd8
> {code}
> Most likely, this is a problem of the Travis infrastructure that we could not
> create the temp file. Maybe we should harden this.
> https://s3.amazonaws.com/archive.travis-ci.org/jobs/81028955/log.txt
[GitHub] flink pull request: FLINK-3304: Making the Avro Schema serializabl...
Github user rmetzger commented on the pull request: https://github.com/apache/flink/pull/1635#issuecomment-183918146

+1 to merge the change. Thank you.
[jira] [Commented] (FLINK-3304) AvroOutputFormat.setSchema() doesn't work in yarn-cluster mode
[ https://issues.apache.org/jira/browse/FLINK-3304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15146640#comment-15146640 ]

ASF GitHub Bot commented on FLINK-3304:
---

Github user rmetzger commented on the pull request: https://github.com/apache/flink/pull/1635#issuecomment-183918146

+1 to merge the change. Thank you.

> AvroOutputFormat.setSchema() doesn't work in yarn-cluster mode
> --
>
> Key: FLINK-3304
> URL: https://issues.apache.org/jira/browse/FLINK-3304
> Project: Flink
> Issue Type: Bug
> Affects Versions: 1.0.0, 0.10.1
> Reporter: Sebastian Klemke
> Assignee: Klou
>
> Quoting flink cli (schema and names modified):
> "The program finished with the following exception:
> User-defined object org.apache.flink.api.java.io.AvroOutputFormat@5f253dfb
> (org.apache.flink.api.java.io.AvroOutputFormat) contains non-serializable
> field userDefinedSchema =
> {"type":"record","name":"Pojo","namespace":"com.example","fields":[{"name":"id","type":["null","string"],"default":null,"subtype":"objectid"}],"EntityVersion":"0.1.0"}
> org.apache.flink.api.common.operators.util.UserCodeObjectWrapper.<init>(UserCodeObjectWrapper.java:84)
> org.apache.flink.api.common.operators.GenericDataSinkBase.<init>(GenericDataSinkBase.java:68)
> org.apache.flink.api.java.operators.DataSink.translateToDataFlow(DataSink.java:258)
> org.apache.flink.api.java.operators.OperatorTranslation.translate(OperatorTranslation.java:64)
> org.apache.flink.api.java.operators.OperatorTranslation.translateToPlan(OperatorTranslation.java:49)
> org.apache.flink.api.java.ExecutionEnvironment.createProgramPlan(ExecutionEnvironment.java:939)
> org.apache.flink.api.java.ExecutionEnvironment.createProgramPlan(ExecutionEnvironment.java:907)
> org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:57)
> com.example.Tool.main(Tool.java:86)
> Shutting down YARN cluster"
[GitHub] flink pull request: FLINK-3304: Making the Avro Schema serializabl...
GitHub user kl0u opened a pull request: https://github.com/apache/flink/pull/1635

FLINK-3304: Making the Avro Schema serializable.

This solves the issue FLINK-3304 by making the Avro Schema serializable. This is done by having a custom serializer which transforms the Schema into a JSON string, and the deserializer de-serializes the JSON to re-create the original schema.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/kl0u/flink avro

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/flink/pull/1635.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1635

commit 173bf6a013f78fab2352f23cb7dae9399aa0ba5a
Author: Kostas Kloudas
Date: 2016-02-11T17:24:29Z

    FLINK-3304: Making the Avro Schema serializable.
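The pattern this pull request describes, serializing a non-serializable object through its JSON text form and parsing it back on read, translates directly to other languages. A Python sketch of the same idea using pickle's state hooks (the `Schema` class below is an illustrative stand-in, not Avro's; Avro's real Java `Schema` exposes its JSON via `toString()` and is rebuilt with a schema parser):

```python
import json
import pickle

class Schema:
    """Stand-in for a schema object that is not directly serializable."""
    def __init__(self, fields):
        self.fields = fields

    def to_json(self):
        return json.dumps({"fields": self.fields})

    @classmethod
    def parse(cls, text):
        return cls(json.loads(text)["fields"])

class SerializableSchemaHolder:
    """Custom (de)serialization: on write, store only the schema's JSON
    string; on read, re-create the schema by parsing that string."""
    def __init__(self, schema):
        self.schema = schema

    def __getstate__(self):
        return {"schema_json": self.schema.to_json()}

    def __setstate__(self, state):
        self.schema = Schema.parse(state["schema_json"])
```

The design choice is the same as in the Java fix: the JSON string is the one representation guaranteed to round-trip, so it becomes the wire format and the live object is rebuilt lazily on the other side.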
[jira] [Commented] (FLINK-3304) AvroOutputFormat.setSchema() doesn't work in yarn-cluster mode
[ https://issues.apache.org/jira/browse/FLINK-3304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15146496#comment-15146496 ]

ASF GitHub Bot commented on FLINK-3304:
---

GitHub user kl0u opened a pull request: https://github.com/apache/flink/pull/1635

FLINK-3304: Making the Avro Schema serializable.

This solves the issue FLINK-3304 by making the Avro Schema serializable. This is done by having a custom serializer which transforms the Schema into a JSON string, and the deserializer de-serializes the JSON to re-create the original schema.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/kl0u/flink avro

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/flink/pull/1635.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1635

commit 173bf6a013f78fab2352f23cb7dae9399aa0ba5a
Author: Kostas Kloudas
Date: 2016-02-11T17:24:29Z

    FLINK-3304: Making the Avro Schema serializable.

> AvroOutputFormat.setSchema() doesn't work in yarn-cluster mode
> --
>
> Key: FLINK-3304
> URL: https://issues.apache.org/jira/browse/FLINK-3304
> Project: Flink
> Issue Type: Bug
> Affects Versions: 1.0.0, 0.10.1
> Reporter: Sebastian Klemke
> Assignee: Klou
>
> Quoting flink cli (schema and names modified):
> "The program finished with the following exception:
> User-defined object org.apache.flink.api.java.io.AvroOutputFormat@5f253dfb
> (org.apache.flink.api.java.io.AvroOutputFormat) contains non-serializable
> field userDefinedSchema =
> {"type":"record","name":"Pojo","namespace":"com.example","fields":[{"name":"id","type":["null","string"],"default":null,"subtype":"objectid"}],"EntityVersion":"0.1.0"}
> org.apache.flink.api.common.operators.util.UserCodeObjectWrapper.<init>(UserCodeObjectWrapper.java:84)
> org.apache.flink.api.common.operators.GenericDataSinkBase.<init>(GenericDataSinkBase.java:68)
> org.apache.flink.api.java.operators.DataSink.translateToDataFlow(DataSink.java:258)
> org.apache.flink.api.java.operators.OperatorTranslation.translate(OperatorTranslation.java:64)
> org.apache.flink.api.java.operators.OperatorTranslation.translateToPlan(OperatorTranslation.java:49)
> org.apache.flink.api.java.ExecutionEnvironment.createProgramPlan(ExecutionEnvironment.java:939)
> org.apache.flink.api.java.ExecutionEnvironment.createProgramPlan(ExecutionEnvironment.java:907)
> org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:57)
> com.example.Tool.main(Tool.java:86)
> Shutting down YARN cluster"