[ https://issues.apache.org/jira/browse/FLINK-6996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108974#comment-16108974 ]

ASF GitHub Bot commented on FLINK-6996:
---------------------------------------

GitHub user pnowojski opened a pull request:

    https://github.com/apache/flink/pull/4456

    [FLINK-6996][kafka] Increase Xmx for tests

    As reported by @NicoK, 1000m was sometimes not enough memory to run the 
at-least-once tests with broker failures on Travis. I remember hitting the same 
issue in #4239, where I set this value to `2048`. Hopefully this will solve 
the problems.
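
    For context (not part of the original message): a test-heap bump like this is 
typically applied through the Surefire plugin's `argLine` in the module's 
`pom.xml`. A hedged sketch follows; the exact plugin configuration and property 
names in Flink's actual build may differ.

    ```xml
    <!-- Sketch only: Flink's real pom.xml may wire the heap size through a
         property rather than a literal value. -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <configuration>
        <!-- Raise the forked test JVM heap from 1000m to 2048m -->
        <argLine>-Xmx2048m</argLine>
      </configuration>
    </plugin>
    ```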

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/pnowojski/flink kafka010

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/flink/pull/4456.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #4456
    
----
commit 8c211b8fd975af441c5a762ee45a62a7dd44f173
Author: Piotr Nowojski <[email protected]>
Date:   2017-08-01T13:02:56Z

    [hotfix][docs] Add section in docs about writing unit and integration tests

commit dd7060497454c2450be3f33a4cf7bdf8cc854f14
Author: Piotr Nowojski <[email protected]>
Date:   2017-08-01T14:05:49Z

    [FLINK-6996][kafka] Increase Xmx for tests
    
    Sometimes 1000m was not enough memory to run at-least-once tests with 
broker failures on Travis

----


> FlinkKafkaProducer010 doesn't guarantee at-least-once semantic
> --------------------------------------------------------------
>
>                 Key: FLINK-6996
>                 URL: https://issues.apache.org/jira/browse/FLINK-6996
>             Project: Flink
>          Issue Type: Bug
>          Components: Kafka Connector
>    Affects Versions: 1.2.0, 1.3.0, 1.2.1, 1.3.1
>            Reporter: Piotr Nowojski
>            Assignee: Piotr Nowojski
>            Priority: Blocker
>             Fix For: 1.4.0, 1.3.2
>
>
> FlinkKafkaProducer010 doesn't implement the CheckpointedFunction interface. 
> This means that when it is used as a "regular sink function" (option (a) from 
> [the java doc|https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaProducer010.html]),
>  it will not flush pending data on "snapshotState" as it is supposed to.
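
The at-least-once requirement described above can be illustrated with a minimal, 
self-contained sketch: a sink must not let a checkpoint complete while records 
are still in flight, so it flushes them in its snapshot hook. The class below is 
a hypothetical stand-in, not Flink's API; the actual fix is for 
FlinkKafkaProducer010 to implement 
org.apache.flink.streaming.api.checkpoint.CheckpointedFunction.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sink sketch. invoke() models handing a record to an async
// producer (it only buffers here); snapshotState() models the checkpoint
// hook that must flush all buffered records before the checkpoint completes,
// which is what at-least-once delivery requires.
public class FlushingSink {
    private final List<String> pending = new ArrayList<>();
    private final List<String> delivered = new ArrayList<>();

    // Records are accepted asynchronously and buffered as "in flight".
    public void invoke(String record) {
        pending.add(record);
    }

    // Called when a checkpoint snapshot is taken: flush everything that is
    // still in flight so no record can be lost if we restore from this
    // checkpoint. Skipping this step is exactly the bug described above.
    public void snapshotState() {
        delivered.addAll(pending);
        pending.clear();
    }

    public int pendingCount() { return pending.size(); }
    public int deliveredCount() { return delivered.size(); }
}
```

Without the flush in snapshotState(), records buffered between checkpoints 
could be dropped on failure even though the checkpoint succeeded.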



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
